The "correct" way to send a sequence of UDP datagrams?

I was under the impression that UDP unreliability was a property of the physical layer, but apparently it is not:

I am trying to send a message over UDP, split into a sequence of packets. Message identification and fragment re-ordering are handled implicitly.

I tested this scheme with two applications running on the same computer and expected it to work smoothly. However, even though the data exchange was between two programs on the same machine, packets were lost, and quite frequently. The losses also seem fairly random: sometimes the whole message gets through, sometimes not.

The fact that losses occur even on a single machine makes me wonder whether I am doing this right.

Initially, I sent all the message fragments asynchronously in one go, without waiting for one send to complete before starting the next.

Then I tried sending the next fragment from the completion routine of the previous one. This improved the packet loss rate, but did not eliminate it entirely.

If I add a pause (Sleep(...)) between pieces, it works 100% of the time.

EDIT: As suggested below: the packets are simply being sent too fast, and the OS performs only minimal buffering. That makes sense.

So, if I want to avoid adding acknowledgement and retransmission to the system (I could just use TCP in that case), what should I do? What is the best way to reduce the packet loss rate without throttling the data rate below what the link could actually sustain?

EDIT 2: It occurred to me that the problem might not be a buffer overflow, but a lack of buffering. I use the asynchronous WSARecvFrom to receive, which takes a caller-supplied buffer that, as I understand it, replaces the default OS buffer. When a datagram arrives, it is placed in that buffer and the completion routine is called whether the buffer is full or not.

At that point, there is no buffer available for incoming data until WSARecvFrom is called again from the completion routine.

The question is: is there a way to create some kind of buffer pool, so that data can be buffered while another buffer is being processed?

+4
7 answers

In your case, you are simply sending packets faster than the receiving process can read them. The OS will only buffer a certain number of received packets before it starts dropping them.

The simplest mechanism to avoid this is for the receiving process to send back a minimal ACK packet, and for the sending process to carry on anyway if it has not received the ACK within a few milliseconds or so.
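A minimal sketch of that idea, assuming a one-byte ACK and a ~5 ms wait (both values are illustrative choices, not part of any standard):

```c
/* Hedged sketch: send a fragment, then wait briefly for an ACK, but
   carry on regardless so a lost ACK never stalls the sender.
   Plain Winsock calls; the 5 ms timeout is an illustrative guess. */
#include <winsock2.h>

static int SendWithSoftAck(SOCKET s, const char *frag, int len,
                           const struct sockaddr *to, int tolen)
{
    char ack;
    fd_set readable;
    struct timeval tv = {0, 5000};      /* wait at most ~5 ms for the ACK */

    if (sendto(s, frag, len, 0, to, tolen) == SOCKET_ERROR)
        return -1;

    FD_ZERO(&readable);
    FD_SET(s, &readable);
    /* If the ACK arrives in time, drain it; either way we move on. */
    if (select(0, &readable, NULL, NULL, &tv) == 1)
        recv(s, &ack, 1, 0);
    return 0;
}
```

The ACK here is really just back-pressure: waiting for it paces the sender to the receiver's speed, while the timeout keeps the sender from blocking forever.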

EDIT - essentially, UDP is "fire and forget". There is no feedback mechanism built into the protocol, as there is with TCP. The only way to adjust the transmission rate is for the receiver to let you know that it is not receiving the entire stream. See also RFC 2309.


Re: packet sequence - re-ordering does not, as a rule, happen at the physical layer; it happens because IP networks are "packet-switched" rather than "circuit-switched".

This means that each packet can take a different route through the network, and since those routes can have different delays, packets can arrive out of order.

In practice these days, very few packets are lost to physical-layer errors. Packets are lost because they are sent into a limited-bandwidth channel faster than that channel can carry them. Buffering can help by smoothing the packet rate, but once the buffer is full you are back to square one.

+7

To avoid overflowing the OS buffers, you need to implement some form of rate control. It can be closed-loop (the receiver sends ACKs and information about its buffers) or open-loop (the sender just slows down, which means you have to be conservative).
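For the open-loop case, the pacing can be as simple as the sketch below; the fragment size and the fixed 1 ms gap are illustrative assumptions (and Sleep()'s real granularity is coarser than 1 ms):

```c
/* Hedged sketch of open-loop rate control: pace sends with a fixed
   inter-packet gap instead of blasting the whole message at once. */
#include <winsock2.h>

static void SendPaced(SOCKET s, const char *msg, int len, int fragSize,
                      const struct sockaddr *to, int tolen)
{
    for (int off = 0; off < len; off += fragSize) {
        int n = (len - off < fragSize) ? (len - off) : fragSize;
        sendto(s, msg + off, n, 0, to, tolen);
        Sleep(1);            /* conservative fixed gap between fragments */
    }
}
```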

Semi-standard protocols exist on top of UDP that implement both. RBUDP (Reliable Blast UDP) comes to mind, and there are others.

+3

With UDP, the only way to detect packet loss, as far as I know, is some form of feedback. If you are on a network with fairly stable bandwidth, you could run a training period in which you send bursts of packets and have the receiver tell you how many arrived (i.e., have the receiver count packets and, after a timeout, reply with the count it received). Then you simply increase the amount of data per burst until you hit the limit, and back off a little to be safe.

This avoids ACKs after the initial calibration period, but it only works if the load on the network and on the receiving process does not change.
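A sketch of the sender side of that training period, assuming the receiver replies with a 4-byte count in network byte order and that a receive timeout (SO_RCVTIMEO) has been set on the socket; the doubling/halving policy is purely illustrative:

```c
/* Hedged sketch of burst-size calibration over UDP. */
#include <winsock2.h>

static int CalibrateBurstSize(SOCKET s, const struct sockaddr *to, int tolen)
{
    char payload[512] = {0};              /* dummy test payload */
    int  burst = 8;                       /* starting burst size */

    for (;;) {
        for (int i = 0; i < burst; i++)
            sendto(s, payload, sizeof(payload), 0, to, tolen);

        u_long count;                     /* receiver's reply */
        if (recv(s, (char *)&count, sizeof(count), 0) != sizeof(count))
            return burst;                 /* no reply: keep the last size */

        if ((int)ntohl(count) == burst)
            burst *= 2;                   /* no loss: probe a larger burst */
        else
            return burst / 2;             /* loss detected: back off a bit */
    }
}
```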

I have written UDP clients in Python before, and the only time I saw significant packet loss was when the receive buffer was too small. When the system was under heavy load, packets were lost because the buffer silently overflowed.

+2

If you pass the WSA_FLAG_OVERLAPPED flag to WSASocket() , you can call WSARecvFrom() several times to queue multiple receive I / O requests. So there is already another buffer available to receive the next packet, even before your completion routine calls another I / O request.

This does not guarantee that you will never drop packets. If your program does not supply buffers fast enough, or takes too long to process and re-order them, it will not be able to keep up, and that is where some kind of rate limiting becomes useful.
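A minimal sketch of that pattern, assuming a plain UDP receiver (the slot count, port, and the RECV_SLOT/PostRecv/OnRecv names are my own, not Winsock's; error handling is omitted):

```c
/* Hedged sketch of a pool of pending overlapped receives. */
#include <winsock2.h>
#include <string.h>
#pragma comment(lib, "ws2_32.lib")

#define NUM_SLOTS 8                      /* receives kept pending at once */
#define SLOT_SIZE 1500                   /* one MTU-sized buffer per slot */

typedef struct {
    WSAOVERLAPPED      ov;               /* first member, so the pointer
                                            passed to OnRecv casts back  */
    WSABUF             wsabuf;
    char               data[SLOT_SIZE];
    struct sockaddr_in from;
    INT                fromLen;
} RECV_SLOT;

static SOCKET    g_sock;
static RECV_SLOT g_slots[NUM_SLOTS];

static void PostRecv(RECV_SLOT *slot);

/* Completion routine: runs when the owning thread is in an alertable wait. */
static void CALLBACK OnRecv(DWORD err, DWORD bytes,
                            LPWSAOVERLAPPED ov, DWORD flags)
{
    RECV_SLOT *slot = (RECV_SLOT *)ov;
    if (err == 0) {
        /* ... process 'bytes' bytes of slot->data here ... */
    }
    PostRecv(slot);                      /* immediately re-arm this slot */
}

static void PostRecv(RECV_SLOT *slot)
{
    DWORD flags = 0;
    memset(&slot->ov, 0, sizeof(slot->ov));
    slot->wsabuf.buf = slot->data;
    slot->wsabuf.len = SLOT_SIZE;
    slot->fromLen    = sizeof(slot->from);
    WSARecvFrom(g_sock, &slot->wsabuf, 1, NULL, &flags,
                (struct sockaddr *)&slot->from, &slot->fromLen,
                &slot->ov, OnRecv);
}

int main(void)
{
    WSADATA wsa;
    struct sockaddr_in addr;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    g_sock = WSASocket(AF_INET, SOCK_DGRAM, IPPROTO_UDP,
                       NULL, 0, WSA_FLAG_OVERLAPPED);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(12345); /* illustrative port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(g_sock, (struct sockaddr *)&addr, sizeof(addr));

    for (int i = 0; i < NUM_SLOTS; i++)  /* queue every receive up front */
        PostRecv(&g_slots[i]);

    for (;;)
        SleepEx(INFINITE, TRUE);         /* alertable wait so OnRecv runs */
}
```

The key point is that all NUM_SLOTS receives are posted before any data arrives, so the stack always has somewhere to put the next datagram while one completion routine is still busy.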

+1

I suspect that the IP stack on your machine cannot transmit packets as fast as you are feeding them to it.

That is probably precisely because the protocol allows packets to be dropped: sending as fast as possible could not otherwise be achieved.

The varying results could be explained by other processes on your computer competing for CPU time. Did you watch the machine with top (Unix) or Process Explorer (Windows) during the tests?

0

You must be doing something wrong. The only ways you should be losing packets:

1. An unreliable network.
2. You are sending data faster than your receiving program can process it.
3. You are sending messages larger than the maximum UDP message size.
4. Some device on the network path has a maximum transmission unit (MTU) smaller than your packets.

In case #1, since you are sending to the same computer, the network is not even involved, so it should be 100% reliable. You did not say that you have two network cards, so I do not think this is the problem.

In case #2, you usually have to send a huge amount of data before packets start being dropped. From your description, this does not seem to be the case.

In case #3, make sure all your messages are below that limit.

In case #4, I am fairly sure that if you stay under the maximum UDP message size you should be fine, but there might be some older hardware or a custom device with a small MTU somewhere along the path your data takes. If so, those packets will be silently dropped.

I have used UDP for many applications and it has proved very reliable. Are you using MFC to receive the messages? If so, you need to read the documentation carefully, as it clearly states some gotchas you need to know about, but most people just gloss over them. I have had to fix quite a few such bugs for people who could not work out why their messaging did not work.

EDIT: You say your packets are implicitly re-ordered. I would start by checking that your implicit re-ordering actually works correctly. That seems like the most likely source of your problem.

EDIT #2: Have you tried using a network monitor? Microsoft has (or at least used to have) a free program called Network Monitor that would probably help.

0

It sounds like the OS buffering cannot keep up: sending many small messages forces more context switches per byte of payload. Check whether there is a way to increase the size of the low-level send buffer.
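If it helps, the standard knob for this is setsockopt() with SO_SNDBUF / SO_RCVBUF; the 1 MB value below is an illustrative guess, and the OS may clamp whatever you request:

```c
/* Hedged sketch: ask the OS for larger socket buffers. */
#include <winsock2.h>

static void GrowSocketBuffers(SOCKET s)
{
    int size = 1024 * 1024;   /* request 1 MB each way (illustrative) */
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, (const char *)&size, sizeof(size));
    setsockopt(s, SOL_SOCKET, SO_RCVBUF, (const char *)&size, sizeof(size));
}
```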

-1
