If you want to make sure that no data (packets) gets lost, use TCP!
The advantage of UDP is its lower overhead, which is why it is used for high-traffic connections such as video or game streams. The price of that low overhead is the lack of any guarantee that the data will arrive at all.
From your question it seems that you do care about the missing data, so you need to build in measures to detect it. And presumably you want the data to be resent until it arrives correctly? That is exactly what TCP offers!
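To make that concrete: since UDP itself carries no sequencing, loss can only be detected at the application level. Here is a minimal sketch of a receiver, assuming (hypothetically) that the sender prefixes every datagram with a 4-byte sequence number and listens on port 9876; the port, the payload format, and the reaction to a gap are all placeholders for your own protocol, not anything UDP or Java gives you.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.ByteBuffer;

// Sketch: detect lost UDP datagrams via an application-level sequence number.
// Assumes the sender starts counting at 0 and prefixes each payload with
// a 4-byte big-endian sequence number.
public class LossDetectingReceiver {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(9876)) { // hypothetical port
            byte[] buf = new byte[1500];
            int expected = 0;
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                int seq = ByteBuffer.wrap(packet.getData(), 0, 4).getInt();
                if (seq != expected) {
                    // A gap means datagrams were lost (or reordered); this is
                    // where your own resend request would go.
                    System.err.println("Gap: " + expected + ".." + (seq - 1) + " missing");
                }
                expected = seq + 1;
            }
        }
    }
}
```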
If it is actually Java that drops the data, it is probably because its queues are full. UDP may be old, but Java knows that UDP exists, with all its consequences. Because UDP is designed for high throughput, the Java side is designed for the same requirement. Queueing everything would cause (massive) overhead that contradicts the UDP design, so that is unlikely. Besides, dropping data from a full queue is no different from losing data during transmission (IMHO), so it doesn't surprise me that Java drops data!
If you want to prevent this, you need larger queues (although these can fill up as well) and, more importantly, faster processing of the data in the queue (so the queues do not fill up in the first place).
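A sketch of both mitigations, under some assumptions: the port number, the buffer sizes, and the process() method below are placeholders. You can ask the OS for a larger socket receive buffer with setReceiveBufferSize (the OS may silently cap the value, so read it back), and keep the receive loop tight by handing each payload to a separate worker thread.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: enlarge the OS receive buffer and drain it quickly,
// pushing slow work onto a separate consumer thread.
public class FastDrainReceiver {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(9876); // hypothetical port
        socket.setReceiveBufferSize(4 * 1024 * 1024);     // a hint; the OS may cap it
        System.out.println("Effective buffer: " + socket.getReceiveBufferSize());

        BlockingQueue<byte[]> work = new ArrayBlockingQueue<>(10_000);

        // Consumer thread: all slow processing happens here, not in the receive loop.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    process(work.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        byte[] buf = new byte[1500];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // drain the OS buffer as fast as possible
            byte[] copy = Arrays.copyOf(packet.getData(), packet.getLength());
            if (!work.offer(copy)) {
                // The hand-off queue is full: this is your data loss,
                // made visible and countable instead of silent.
                System.err.println("Dropped a datagram: consumer too slow");
            }
        }
    }

    private static void process(byte[] payload) {
        // placeholder for your application logic
    }
}
```

Note that when the hand-off queue fills up, the drop still happens; the design just makes it explicit and measurable, which leads directly to the next point.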
But most importantly: accept the data loss! If your application/server cannot handle that, do not use UDP.