I was under the impression that UDP unreliability comes from the physical layer, but it seems that is not the case:
I am trying to send a message over UDP that is divided into a sequence of packets. Message identification and re-ordering are handled implicitly.
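For illustration, this is the kind of per-fragment header I have in mind (the field names are just for this example, not my actual format):

```
#include <cstdint>

#pragma pack(push, 1)
struct FragmentHeader {
    uint32_t messageId;      // identifies which message this fragment belongs to
    uint16_t fragmentIndex;  // position of this fragment within the message
    uint16_t fragmentCount;  // total number of fragments in the message
};
#pragma pack(pop)
```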
I tested this method with two applications running on the same computer and expected it to work smoothly. However, even though the data was exchanged between two programs on the same machine, there was packet loss, and it was quite frequent. The losses also seem fairly random: sometimes the whole message gets through, sometimes it does not.
The fact that losses occur even on a single machine makes me wonder whether I am doing this right.
Initially, I sent all fragments of the message asynchronously in a single pass, without waiting for the completion of one fragment before sending the next.
Then I tried sending the next fragment of the message from the completion routine of the previous one. This improved the packet loss rate, but did not eliminate it.
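Roughly, the chained-send approach looks like this (a minimal sketch only; the globals and the fragment container are assumptions for the example, not my actual code):

```
#include <winsock2.h>
#include <vector>
#pragma comment(lib, "ws2_32.lib")

static SOCKET g_sock;                                // set up elsewhere
static sockaddr_in g_dest;                           // destination address, filled in elsewhere
static std::vector<std::vector<char>> g_fragments;   // the pre-split message
static size_t g_next = 0;
static WSABUF g_buf;
static WSAOVERLAPPED g_ov;

static void SendNextFragment();

// Completion routine of fragment N starts the send of fragment N + 1.
static void CALLBACK OnSendComplete(DWORD error, DWORD /*bytes*/,
                                    LPWSAOVERLAPPED /*ov*/, DWORD /*flags*/)
{
    if (error == 0)
        SendNextFragment();
}

static void SendNextFragment()
{
    if (g_next >= g_fragments.size())
        return;                                      // whole message sent

    g_buf.buf = g_fragments[g_next].data();
    g_buf.len = static_cast<ULONG>(g_fragments[g_next].size());
    ++g_next;

    ZeroMemory(&g_ov, sizeof(g_ov));
    // Returns SOCKET_ERROR with WSA_IO_PENDING when the send is queued;
    // error handling is omitted in this sketch.
    WSASendTo(g_sock, &g_buf, 1, nullptr, 0,
              reinterpret_cast<sockaddr*>(&g_dest), sizeof(g_dest),
              &g_ov, OnSendComplete);
}

// The thread that starts the first send must wait alertably
// (e.g. SleepEx(INFINITE, TRUE) in a loop) so the completion routines can run.
```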
If I add a pause (Sleep(...)) between pieces, it works 100% of the time.
EDIT: As suggested: the packets are simply being sent too fast, and the OS does only minimal buffering. That makes sense.
So, if I want to avoid adding acknowledgements and retransmission to the system (at that point I might as well use TCP), what should I do? What is the best way to reduce packet loss without throttling the data rate below what it could otherwise be?
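One thing I mean by giving the OS more room to queue datagrams would be raising the receive buffer on the receiving socket, along these lines (a sketch, assuming setsockopt with SO_RCVBUF; the size is just an example, not a tuned value):

```
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

bool EnlargeReceiveBuffer(SOCKET sock)
{
    int bufferSize = 4 * 1024 * 1024;            // example size, not tuned
    return setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                      reinterpret_cast<const char*>(&bufferSize),
                      sizeof(bufferSize)) == 0;
}
```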
EDIT 2: It occurred to me that the problem might not just be a buffer overflow, but a lack of buffering altogether. I use an asynchronous WSARecvFrom to receive, which takes a buffer and, as I understand it, overrides the OS default buffer. When a datagram arrives, it is placed in that buffer and the completion routine is called whether the buffer is full or not.
From that point on, there is no buffer available for incoming data until WSARecvFrom is called again from the completion routine.
The question is: is there a way to create some kind of buffer pool so that data can be buffered while another buffer is being processed?
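Roughly what I have in mind is something like the following (a sketch only; it assumes several overlapped WSARecvFrom calls can be outstanding at the same time, and the names and pool size are made up for the example):

```
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

constexpr int kPoolSize   = 16;
constexpr int kBufferSize = 1500;                // example size; at least as big as the largest fragment

struct RecvContext {
    WSAOVERLAPPED ov;                            // first member, so the completion routine can cast back
    char          data[kBufferSize];
    WSABUF        wsaBuf;
    sockaddr_in   from;
    int           fromLen;
    DWORD         flags;
};

static SOCKET g_sock;                            // set up elsewhere
static RecvContext g_pool[kPoolSize];

static void PostRecv(RecvContext* ctx);

static void CALLBACK OnRecvComplete(DWORD error, DWORD bytes,
                                    LPWSAOVERLAPPED ov, DWORD /*flags*/)
{
    RecvContext* ctx = reinterpret_cast<RecvContext*>(ov);
    if (error == 0 && bytes > 0) {
        // ... process ctx->data / bytes here ...
    }
    PostRecv(ctx);                               // hand the buffer back to the OS
}

static void PostRecv(RecvContext* ctx)
{
    ZeroMemory(&ctx->ov, sizeof(ctx->ov));
    ctx->wsaBuf.buf = ctx->data;
    ctx->wsaBuf.len = kBufferSize;
    ctx->fromLen    = sizeof(ctx->from);
    ctx->flags      = 0;
    // Returns SOCKET_ERROR with WSA_IO_PENDING when the receive is queued;
    // error handling is omitted in this sketch.
    WSARecvFrom(g_sock, &ctx->wsaBuf, 1, nullptr, &ctx->flags,
                reinterpret_cast<sockaddr*>(&ctx->from), &ctx->fromLen,
                &ctx->ov, OnRecvComplete);
}

// At startup, post the whole pool, then keep a thread in an alertable wait:
//   for (auto& ctx : g_pool) PostRecv(&ctx);
//   while (true) SleepEx(INFINITE, TRUE);
```

The idea is that while one completion routine is processing a datagram, the other outstanding receives still give the stack somewhere to put arriving packets.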