TCP vs UDP for video streaming

I just got back from my network programming exam, and one of the questions they asked was: "If you are going to stream video, would you use TCP or UDP? Explain your answer for both stored video and live video streams." They simply expected a short answer of TCP for stored video and UDP for live video, but I thought about it on the way home, and is it really better to use UDP for streaming live video? I mean, if you have the bandwidth for it, and say you are streaming a football match or a concert for that matter, do you really need to use UDP?

Let's say that while you are streaming this concert or whatever over TCP, you start losing packets (something bad happened in some network between you and the sender), and for a whole minute you don't get any packets. The video stream will pause, and after the minute is over the packets start getting through again (IP found a new route for you). What will happen then is that TCP retransmits the minute you lost and then keeps sending you the live stream. Assuming the bandwidth is higher than the stream's bitrate and the ping isn't too high, in a short amount of time the minute you lost will act as a buffer for the stream, so if packet loss happens again you won't notice.

Now I can think of some cases where this wouldn't be a good idea, like video conferencing, where you need to always be at the end of the stream, because lag during a video chat is just awful. But during a football match or a concert, what does it matter if you are a minute behind the stream? Plus, you are guaranteed to receive all the data, and it would be better to save it for later viewing since it would be saved without errors.

So that brings me to my question: are there any drawbacks to using TCP for live streaming that I don't know about? Or is it really the case that, if you have the bandwidth for it, you should go for TCP, given that it is "nicer" to the network (flow control)?
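
To make the idea concrete, here is a rough sketch of what I have in mind on the client side (purely illustrative; the function name, chunk size and timeout are made-up values, not from any real player): keep reading the TCP stream into a playout buffer, so that once the buffer holds more video than a stall lasts, playback never pauses.

    import collections
    import socket

    def buffered_playback(host, port, chunk=64 * 1024):
        """Keep reading a TCP video stream into a playout buffer (sketch only)."""
        buf = collections.deque()      # encoded video waiting to be decoded
        with socket.create_connection((host, port)) as s:
            s.settimeout(1.0)
            while True:
                try:
                    data = s.recv(chunk)       # TCP delivers everything, possibly late
                    if not data:
                        break
                    buf.append(data)           # anything received early becomes slack
                except socket.timeout:
                    # Network stall: nothing arrives, but the decoder keeps draining
                    # `buf`, so playback continues for as long as the slack lasts.
                    pass
                # A real player would decode/render from `buf` here at the video
                # bitrate; after a stall is caught up, the slack is rebuilt.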

+57
udp networking video-streaming video tcp
May 31 '11 at 12:17
13 answers

Disadvantages of using TCP for real-time video:

  • Typically, live video streaming appliances are not designed with TCP streaming in mind. If you use TCP, the OS must buffer the unacknowledged segments for every client. This is undesirable, particularly in the case of live events; presumably your list of simultaneous clients is long because of the nature of the event. Pre-recorded video casts typically don't have this problem because viewers stagger their replay activity; therefore TCP is better suited to replaying video on demand.
  • IP multicast significantly reduces video bandwidth requirements for a large audience; TCP prevents the use of IP multicast, but UDP is well suited for it (see the sketch after this list).
  • Live video is typically a fixed-bandwidth stream recorded off a camera; pre-recorded video streams come off a disk. The loss-and-backoff dynamics of TCP make it harder to serve live video when the source streams have a fixed bandwidth (as they would for a live event). If you are buffering to disk off a camera, make sure you have enough buffer for unpredictable network events and variable TCP send/backoff rates. Note: if TCP loses too many packets, the connection dies; thus, UDP gives you much more control for this application, since UDP doesn't care about network transport layer drops.
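
To illustrate the multicast point, here is a minimal sketch of a UDP multicast sender in Python (the group address, port and TTL are arbitrary example values, not anything standardized):

    import socket
    import struct

    MCAST_GROUP = "239.1.2.3"     # administratively scoped example group
    MCAST_PORT = 5004             # example port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Limit how many router hops the multicast packets may travel.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 8))

    def send_frame(payload: bytes) -> None:
        # One send reaches every subscribed viewer; the routers replicate the packet.
        # With TCP the server would need one connection, and one send, per viewer.
        sock.sendto(payload, (MCAST_GROUP, MCAST_PORT))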

FYI, please do not use the word "packages" when describing networks. Networks send packets.

+57
May 31 '11 at 12:25

but during a football match or concert, what does it matter if you are a minute behind the stream?

For some football fans, quite a bit. It has been remarked that even the few seconds of delay introduced into digital video streams by encoding (or whatever else) can be very annoying during high-profile events such as World Cup matches, when you can hear the cheers and groans from the neighbours (who are watching the undelayed analog broadcast) before you get to see the play that caused them.

I think that for someone who deeply cares about sports (and that is the largest group of paying customers for digital TV, at least here in Germany), being a minute behind in a live video stream would be completely unacceptable (as in, they would switch to your competitor where this doesn't happen).

+18
May 31 '11 at 12:37

Usually a video stream is somewhat fault tolerant. So if some packets get lost (because, for example, some router along the path is overloaded), the content can still be displayed, just at reduced quality.

If your live stream is using TCP/IP, it will be forced to wait for those dropped packets to be retransmitted before it can continue processing newer data.

This is doubly bad:

  • old data will be retransmitted (likely for a frame that has already been displayed and is therefore useless), and
  • new data cannot arrive until the old data has been retransmitted.

If your goal is to display the most recent information possible (and for a live feed you usually want to be current, even if your frames look a little worse), then TCP will work against you.
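
As a rough sketch of that "stay current" policy (assuming a made-up packet format of a 4-byte sequence number followed by the encoded frame), a UDP receiver can simply skip anything that arrives late instead of waiting for it:

    import socket
    import struct

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5004))             # example port

    last_seq = -1
    while True:
        packet, _addr = sock.recvfrom(65535)
        (seq,) = struct.unpack("!I", packet[:4])
        if seq <= last_seq:
            continue        # late or duplicate: that moment was already shown, skip it
        # If seq > last_seq + 1, packets were lost: render anyway, accept an artifact.
        last_seq = seq
        # render(packet[4:])  # always shows the freshest frame, never waits for a resend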

For a pre-recorded stream the situation is slightly different: you will probably buffer a lot more (maybe a few minutes!) and would rather have data retransmitted than see artifacts due to lost packets. In this case TCP is a good match (it could, of course, still be done over UDP, but TCP doesn't have the same drawbacks here as it does in the live-stream case).

+13
May 31 '11 at 12:23

There are some use cases suitable for UDP transport, and others suitable for TCP transport.

The use case also dictates the encoding settings for the video: a broadcast football match focuses on quality, while video conferencing focuses on latency.

When using multicast, UDP is used to deliver the video to your clients.

Multicast requires capable (and expensive) network equipment between the broadcast server and the clients. In practice this means that if your company owns the network infrastructure, you can use UDP and multicast for live video streaming. Even then, quality of service is also implemented to mark the video packets and prioritize them, so that packet loss doesn't happen.

Multicast simplifies the broadcast software, since the network equipment handles distributing the packets to the clients. Clients subscribe to multicast channels, and the network reconfigures packet routing towards the new subscriber. By default all channels are available to all clients and can be routed optimally.
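
A minimal sketch of that subscribe step (the group address and port are example values): the client joins the multicast group, and the network starts forwarding the channel to it.

    import socket
    import struct

    MCAST_GROUP = "239.1.2.3"     # example group address
    MCAST_PORT = 5004             # example port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    # The join: the OS tells the nearest router (via IGMP) to forward this group here.
    mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, _addr = sock.recvfrom(65535)
        # Hand `data` to the decoder; decryption would happen here for protected content.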

This workflow moves the difficulty to the authorization process. The network equipment doesn't distinguish subscribers from other users. The authorization solution is to encrypt the video content and enable decryption in the player software once the subscription is valid.

The unicast (TCP) workflow allows the server to check the client's credentials and allow only valid subscriptions, and even to enforce a limit on the number of simultaneous connections.

Multicast is not enabled over the Internet.

To deliver video over the Internet, TCP must be used. When UDP is used, developers end up re-implementing packet retransmission, as the BitTorrent p2p live protocol does, for example.

"If you use TCP, the OS should buffer unconfirmed segments for each client. This is undesirable, especially in the case of live events."

That buffer has to exist in some form. The same is true for the jitter buffer on the player side. It is called the "socket buffer", and the server software can know when this buffer is full and discard the proper video frames for live streams. It is better to use the unicast/TCP method, because the server software can implement proper frame-dropping logic. Randomly missing packets in the UDP case will just create a bad user experience, like in this video: http://tinypic.com/r/2qn89xz/9
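
For what it's worth, a rough sketch of such frame-dropping logic on a non-blocking TCP socket might look like this (purely illustrative names and numbers; a real server would also have to drop up to the next keyframe rather than arbitrary frames):

    import collections
    import select
    import socket

    class LiveClient:
        """Per-client send queue with drop-when-full behaviour (illustrative only)."""

        MAX_QUEUED_FRAMES = 30                 # roughly one second of video at 30 fps

        def __init__(self, conn: socket.socket):
            conn.setblocking(False)
            self.conn = conn
            self.queue = collections.deque()   # encoded frames waiting to be sent

        def push_frame(self, frame: bytes):
            if len(self.queue) >= self.MAX_QUEUED_FRAMES:
                self.queue.popleft()           # client is too slow: drop the oldest frame
            self.queue.append(frame)

        def pump(self):
            # Only write while the kernel socket buffer has room, so we never block.
            _, writable, _ = select.select([], [self.conn], [], 0)
            while writable and self.queue:
                try:
                    sent = self.conn.send(self.queue[0])
                except BlockingIOError:
                    break                      # the socket buffer filled up again
                if sent < len(self.queue[0]):
                    self.queue[0] = self.queue[0][sent:]   # keep the unsent tail
                    break
                self.queue.popleft()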

“IP multicast dramatically reduces video bandwidth requirements for large audiences.”

This is true for private networks; multicast is not enabled over the Internet.

"Note: if TCP loses too many packets, the connection dies, so UDP gives you much more control for this application, since UDP does not care about lowering the level of network transport."

UDP also doesn't care about dropping whole frames or groups of frames, so it doesn't actually give you any more control over the user experience.

"Typically, a video stream is somewhat fault tolerant."

Encoded video is not fault tolerant. When it is sent over an unreliable transport, forward error correction is added to the video container. A good example is the MPEG-TS container used in satellite video broadcasting, which carries multiple audio, video, EPG, etc. streams. This is necessary because the satellite link is not duplex, i.e. the receiver cannot request retransmission of lost packets.

When duplex communication is available, it is always better to retransmit data only to the clients that experienced packet loss than to include the overhead of forward error correction in the stream sent to all clients.
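
As a toy illustration of what that forward-error-correction overhead means: a single XOR parity packet per group lets a receiver rebuild any one lost packet of the group, at the cost of one extra packet sent to every client (real systems use stronger codes such as Reed-Solomon; this is only to make the trade-off concrete).

    def xor_parity(packets):
        """Build one parity packet for a group of packets (zero-padded to equal size)."""
        size = max(len(p) for p in packets)
        padded = [p.ljust(size, b"\x00") for p in packets]
        parity = bytearray(size)
        for p in padded:
            for i, byte in enumerate(p):
                parity[i] ^= byte
        return bytes(parity)

    def recover_one(received, parity):
        """Rebuild the single missing packet in `received` (a list with one None)."""
        missing = [i for i, p in enumerate(received) if p is None]
        if len(missing) == 1:
            present = [p for p in received if p is not None]
            received[missing[0]] = xor_parity(present + [parity])
        return received

    # Example: send 4 data packets + 1 parity packet; packet 2 is lost in transit.
    group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
    parity = xor_parity(group)
    print(recover_one([b"pkt0", b"pkt1", None, b"pkt3"], parity))   # pkt2 comes back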

So in any case, lost packets are not acceptable. Dropped frames are OK, in exceptional cases, when bandwidth is constrained.

The result of missing packets is artifacts like this: artifacts

Some decoders can even break on streams that are missing packets in critical places.

+5
Nov 20 '16 at 14:50

It depends. How critical is the content you are streaming? If it's critical, use TCP. This may cause issues with bandwidth, video quality (you might have to use a lower quality to deal with the latency) and latency itself. But if you need the content to be guaranteed to get there, use it.

Otherwise UDP should be fine if the stream is not critical, and it would be preferred because UDP tends to have less overhead.

+3
Jun 01

I recommend you take a look at the new p2p BitTorrent Live protocol.

As for streaming, it's better to use UDP, first because it lowers the load on the servers, but mostly because you can send packets via multicast, which is simpler than sending them to each connected client.

+3
Feb 23 '12 at 10:16

One of the biggest problems with delivering live events over the Internet is scale, and TCP doesn't scale well. For example, when you are broadcasting a live football match, as opposed to on-demand movie playback, the number of people watching can easily be 1000 times greater. In such a scenario, using TCP is a death sentence for CDNs (content delivery networks).

There are several main reasons why TCP does not scale well:

  • One of the biggest trade-offs of TCP is the variability of the throughput achievable between the sender and the receiver. When streaming video over the Internet, the video packets must traverse multiple routers, each of which is connected by links of different speeds. The TCP algorithm starts with a small TCP window, then grows it until packet loss is detected; packet loss is considered a sign of congestion, and TCP responds by drastically reducing the window size to avoid it, which in turn immediately reduces the effective throughput. Now imagine a network where the TCP transmission uses 6-7 router hops to reach the client (a very common scenario); if any of the intermediate routers loses a packet, TCP on that link will reduce the transmission rate. In fact, the TCP flow between routers looks like an hourglass, always going up and down at one of the intermediate routers, rendering the effective throughput much lower than best-effort UDP.

  • As you probably know, TCP is an acknowledgement-based protocol. For example, let's say the sender is 50 ms away (i.e., the one-way latency between the two points). That means the time taken for a packet to reach the receiver and for the receiver to send an acknowledgement is 100 ms; thus the maximum achievable throughput, compared to a UDP-based transmission, is already halved (see the calculation after this list).

  • TCP does not support multicasting or the new AMT multicast standard. This means CDNs don't have the opportunity to reduce network traffic by replicating packets when many clients are watching the same content. That alone is a big enough reason for CDNs (like Akamai or Level3) not to switch to TCP for live streams.
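
A quick back-of-the-envelope version of the acknowledgement point above, assuming a classic 64 KB receive window with no window scaling: a single TCP connection cannot exceed the window size divided by the round-trip time, no matter how fast the link is.

    # TCP throughput per connection is bounded by window / RTT.
    window_bytes = 64 * 1024        # classic 64 KB window, no window scaling
    rtt_seconds = 0.100             # 50 ms each way, as in the example above
    max_bps = window_bytes * 8 / rtt_seconds
    print(f"{max_bps / 1e6:.1f} Mbit/s")   # ~5.2 Mbit/s, regardless of link speed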

+2
Dec 10 '13 at 11:21

For video streaming, bandwidth is likely the constraint on the system. Using multicast, you can greatly reduce the amount of upstream bandwidth used. With UDP you can easily multicast your packets to all connected terminals. You could also use a reliable multicast protocol, such as Pragmatic General Multicast (PGM); I don't know anything about it, and I guess its use isn't widespread.

+1
May 31 '11 at 12:25

Besides all the other reasons, UDP can use multicast. Supporting 1000 TCP users all transmitting the same data wastes bandwidth. However, there is another important reason for using TCP.

TCP can traverse firewalls and NATs much more easily. Depending on your NAT and operator, you may not even be able to receive a UDP stream at all due to problems with UDP hole punching.

+1
Dec 26 '15 at 21:38

All the "use UDP" answers assume an open network and a "stuff it in as fast as you can" approach. That is fine for old-school, closed-garden, dedicated audio/video networks, which are a vanishing kind.

In the real world, your transmission will go through firewalls (which will drop multicast and sometimes UDP), and the network is shared with other, more important ($$$) applications, so you want to punish abusers with window scaling.

+1
Mar 30 '16 at 9:27

While reading this TCP vs UDP debate I noticed a logical flaw. A TCP packet loss causing a one-minute delay, which is then turned into a one-minute buffer, cannot be compared to UDP dropping a full minute while experiencing the same loss. A fairer comparison is as follows.

TCP experiences a packet loss. The video stops while TCP resends packets in an attempt to deliver a mathematically perfect stream. The video is delayed by one minute and picks up where it left off once the missing packet reaches its destination. We all wait, but we know we won't miss a single pixel.

UDP experiences a packet loss. For a second during the video stream, a corner of the screen gets a little blurry. No one notices, and the show goes on without looking for the lost packets.

Anything that streams gains the most benefit from UDP. The packet loss that causes a one-minute delay in TCP would not cause a one-minute delay in UDP. Considering that most systems use multiple resolution streams that go blocky when starved of packets, it makes even more sense to use UDP.

UDP FTW when streaming.

0
Mar 19 '14 at 18:55

If the bandwidth is much higher than the bitrate, I would recommend TCP for unidirectional live video streaming.

Case 1: consecutive packets are lost for several seconds. => The live video will stop on the client side whatever the transport layer is (TCP or UDP). When the network recovers: if TCP is used, the client video player can choose to restart the stream at the first lost packet (timeshift) OR to drop all the late packets and restart the video stream with no timeshift; if UDP is used, there is no choice on the client side, the video restarts live with no timeshift. => TCP is equal or better.

Case 2: some packets are randomly and frequently lost on the network. If TCP is used, these packets will be retransmitted immediately, and with a correct jitter buffer there will be no effect on the video stream's quality or latency; if UDP is used, the video quality will be poor. => TCP is much better.
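
A minimal sketch of the jitter buffer idea from case 2 (the 200 ms hold time is an arbitrary example): packets are held for a short, fixed delay before decoding, so an occasional TCP retransmission arrives within that window and never shows up as a glitch.

    import heapq
    import time

    class JitterBuffer:
        """Hold packets for a fixed delay so late arrivals still play smoothly."""

        def __init__(self, delay_s=0.200):         # example: 200 ms of extra latency
            self.delay = delay_s
            self._heap = []                         # (release_time, seq, payload)

        def push(self, seq, payload):
            heapq.heappush(self._heap, (time.monotonic() + self.delay, seq, payload))

        def pop_ready(self):
            """Return packets whose hold time has expired, oldest first."""
            out = []
            now = time.monotonic()
            while self._heap and self._heap[0][0] <= now:
                _release, seq, payload = heapq.heappop(self._heap)
                out.append((seq, payload))
            return out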

0
Dec 04 '15 at 13:58

This is a matter of content more than a matter of timing. TCP requires that a packet which was not delivered be checked, verified and redelivered; UDP doesn't have this requirement. So if you send a file that consists of millions of packets using UDP, like a video, and some of the packets go missing during delivery, they will most likely go unmissed.

0
Jun 18 '16 at 23:55


