The implicit question: if Linux blocks the send() call when the socket send buffer is full, why are packets being lost at all?
Details: I wrote a small utility in C that sends UDP packets to a unicast address and port as fast as possible. Each send is a UDP payload of 1450 bytes, and the first bytes hold a counter that increments by 1 for every packet. I run it on Fedora 20 inside VirtualBox on a desktop PC with a 1 Gb NIC (= pretty slow).
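The sender is roughly the following (a simplified sketch, not my exact code; the destination address and port below are placeholders):

```c
/* Sketch of the sender: a uint32_t counter in the first bytes of each
 * 1450-byte payload, sent in a tight loop with no delay. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(12345);                        /* placeholder port */
    inet_pton(AF_INET, "192.168.1.10", &dst.sin_addr);  /* placeholder address */

    unsigned char payload[1450] = {0};
    for (uint32_t counter = 0; ; counter++) {
        memcpy(payload, &counter, sizeof(counter));     /* counter in first bytes */
        if (sendto(fd, payload, sizeof(payload), 0,
                   (struct sockaddr *)&dst, sizeof(dst)) < 0) {
            perror("sendto");
            break;
        }
        /* no delay here: send as fast as the kernel allows */
    }
    close(fd);
    return 0;
}
```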
Then I wrote a small utility that reads UDP packets from a given port, compares each packet's counter against its own counter, and prints a message when they differ (i.e. one or more packets were lost). I run it on a Fedora 20 dual-core server with a 1 Gb Ethernet NIC (= super fast). It reports a lot of lost packets.
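The receiver is roughly this (again a simplified sketch; the port is a placeholder and only the leading counter is checked):

```c
/* Sketch of the receiver: bind to the port, read the leading uint32_t
 * counter and report any gap in the sequence. */
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);                       /* same placeholder port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    unsigned char payload[1450];
    uint32_t expected = 0;
    for (;;) {
        ssize_t n = recv(fd, payload, sizeof(payload), 0);
        if (n < (ssize_t)sizeof(uint32_t))
            continue;

        uint32_t counter;
        memcpy(&counter, payload, sizeof(counter));
        if (counter != expected)                        /* gap => lost (or reordered) packets */
            printf("gap: expected %u, got %u (%u packet(s) missing?)\n",
                   expected, counter, counter - expected);
        expected = counter + 1;
    }
}
```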
Both machines are on the local network. I do not know the exact number of hops between them, but I do not think there are more than two routers in the path.
Things I tried:
- Add a delay after each send(). With a 1 ms delay, no packets are lost; at 100 us, packets start getting lost again.
- Increase the receive socket buffer size to 4 MiB with setsockopt() (a sketch of this follows the list). It makes no difference.
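For completeness, the buffer tweak from the second bullet looks roughly like this (the 4 MiB value is the one I used; note that the kernel clamps the request to net.core.rmem_max, and getsockopt() reports about twice the usable size):

```c
/* Sketch of the SO_RCVBUF tweak: request 4 MiB on the receiving socket
 * and read back what the kernel actually granted. */
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int requested = 4 * 1024 * 1024;                    /* 4 MiB */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &requested, sizeof(requested)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    int granted = 0;
    socklen_t len = sizeof(granted);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len) == 0)
        printf("SO_RCVBUF is now %d bytes\n", granted);
    return 0;
}
```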
Please enlighten me!