I wrote a C++ application (running on Linux) that serves an RTP stream of around 400 kbps. To most destinations this works fine, but some destinations experience packet loss. The problematic destinations seem to have slower connections, but they should still be fast enough for the stream I'm sending.
Since these destinations are able to receive similar RTP streams from other applications without packet loss, I suspect my application is doing something wrong.
I have already checked a few things:

- In tcpdump, I can see all the RTP packets going out on the sending machine.
- A UDP send buffer is set explicitly; I tried sizes from 64 KB up to 300 KB (see the sketch below).
- RTP packets stay below 1400 bytes to avoid IP fragmentation.
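For reference, the send buffer is configured roughly like this (a simplified sketch, not the exact code from my application):

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    // Request a 300 KB send buffer (the largest size I tried).
    int requested = 300 * 1024;
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested)) < 0) {
        perror("setsockopt(SO_SNDBUF)");
    }

    // Read back the effective size; on Linux the kernel doubles the requested
    // value for bookkeeping and caps it at net.core.wmem_max.
    int actual = 0;
    socklen_t len = sizeof(actual);
    if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &actual, &len) == 0) {
        printf("effective SO_SNDBUF: %d bytes\n", actual);
    }

    close(sock);
    return 0;
}
```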
What can the sending application do to minimize the chance of packet loss, and what would be the best way to debug this situation?