I am writing a network application that uses ASIO over UDP to send and receive data between a local endpoint and a remote endpoint. I used udp::socket::receive to receive data, and my code worked correctly as far as the logic goes, but I was losing a huge number of packets. I found that any packet that arrived while I was not blocked inside the receive call was lost: it was never buffered. This was especially strange because I had set the receive buffer to 2 MB with the following calls:
sock_udp.connect( remote_endpoint );
sock_udp.set_option( boost::asio::socket_base::receive_buffer_size(2*1024*1024) );
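For reference, here is a minimal sketch of the blocking receive loop I am describing (process_packet is an illustrative placeholder for my handling code, not an actual function from my program):

#include <boost/asio.hpp>
#include <cstddef>

using boost::asio::ip::udp;

void process_packet( const char* data, std::size_t len );  // placeholder for my handling code

void receive_loop( udp::socket& sock_udp )
{
    char buf[1500];
    for ( ;; )
    {
        // Blocks until a single datagram arrives on the connected socket.
        std::size_t len = sock_udp.receive( boost::asio::buffer( buf ) );

        // Anything that arrives while this runs is apparently not buffered.
        process_packet( buf, len );
    }
}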
This is despite the fact that even if I sent only two packets of 100 bytes each, I would still lose the second one if I spent any time processing the first.
I thought this was probably a flaw in udp::socket::receive, so I rewrote my networking code to use udp::socket::async_receive, but I still have the same problem. That is, as soon as my handler is called, I lose any packets that arrive until I call async_receive again.
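Roughly, the asynchronous version looks like this (a simplified sketch; the class and names udp_receiver, handle_receive and process_packet are illustrative, not my exact code):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <cstddef>

using boost::asio::ip::udp;

class udp_receiver
{
public:
    explicit udp_receiver( udp::socket& sock )
        : sock_udp_( sock )
    {
        start_receive();
    }

private:
    void start_receive()
    {
        sock_udp_.async_receive(
            boost::asio::buffer( buf_ ),
            boost::bind( &udp_receiver::handle_receive, this,
                         boost::asio::placeholders::error,
                         boost::asio::placeholders::bytes_transferred ) );
    }

    void handle_receive( const boost::system::error_code& ec, std::size_t len )
    {
        if ( !ec )
            process_packet( buf_, len );   // any time spent here seems to cost packets

        // Re-arm the receive; packets arriving before this call appear to be dropped.
        start_receive();
    }

    void process_packet( const char* data, std::size_t len );  // placeholder

    udp::socket& sock_udp_;
    char buf_[1500];
};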
Am I fundamentally misunderstanding something? Is there any other approach I should use to increase the buffering of incoming packets?
If it helps, I have verified that this happens both on OS X (building in Xcode with its bundled gcc 4.2) and on Ubuntu 10.10 (gcc 4.5). I have not yet been able to try it on Windows.