What are good UDP timeout and retry values?

I am working on a UDP server/client setup. The client sends one packet to the server; its size varies but is usually under 500 bytes. The server responds essentially instantly with one outgoing packet, usually smaller than the incoming request. A complete transaction is always a single packet exchange.

If the client does not see the reply within time T, it retries up to R times, increasing T by X before each attempt, before finally giving up and returning an error. Currently R never changes. A sketch of this loop is below.
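For concreteness, this is roughly what I mean (a minimal sketch; the server address, port, and default values are placeholders, not my real configuration):

```python
import socket

def request(payload, server=("198.51.100.1", 9000), T=1.0, R=5, X=0.5):
    """Send one datagram and wait for the single reply, retrying on timeout.

    T: initial wait in seconds; R: number of attempts; X: added to the
    wait before each subsequent attempt. All values are placeholders.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        timeout = T
        for attempt in range(R):
            sock.settimeout(timeout)
            sock.sendto(payload, server)
            try:
                reply, _ = sock.recvfrom(2048)  # replies are < 500 bytes
                return reply
            except socket.timeout:
                timeout += X  # increase the wait before the next attempt
        raise TimeoutError("no reply after %d attempts" % R)
    finally:
        sock.close()
```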

Is there any established logic for choosing optimal values for the initial T (wait time), R (retry count), and X (wait increase)? How persistent should the retries be (i.e. what minimum R should be used) to achieve some approximation of a "reliable" protocol?

2 answers

This is similar to question 5227520. Googling "tcp retries" and "tcp retransmission" turns up many approaches that have been tried and tuned over the years. Unfortunately, no single solution seems optimal.

I would choose an initial T of 2 or 3 seconds. I would make X half of T (doubling T seems popular, but you quickly end up with very long timeouts). I would adjust R on the fly so that it is at least 5, and higher if necessary so that my total timeout is at least a minute or two.
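As a sanity check on those numbers, the total wall-clock wait is just the sum of the per-attempt timeouts. A quick sketch (values from above, purely illustrative):

```python
def total_wait(T, X, R):
    # Attempt i (0-based) waits T + i*X seconds before retrying.
    return sum(T + i * X for i in range(R))

# T = 2 s, X = T/2 = 1 s: find the smallest R giving at least a minute.
T, X = 2.0, 1.0
R = next(r for r in range(1, 100) if total_wait(T, X, r) >= 60)
print(R, total_wait(T, X, R))  # -> 10 attempts, 65.0 seconds total
```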

I would be careful not to leave R and T pinned high if subsequent transactions are usually faster; you may want to lower R and T back down as your statistics allow, so that you retry quickly and get a fast answer rather than leaving R and T at their maximums (especially if your clients are humans and you want to be responsive).
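One hedged way to implement that "lower them as your statistics allow" idea is to track recent response times and re-derive the initial T from them, for example with an exponentially weighted moving average. The class name and constants here are illustrative, not prescriptive:

```python
class AdaptiveTimeout:
    """Track observed response times and derive the next initial timeout."""

    def __init__(self, initial=2.0, alpha=0.125, floor=0.25, ceiling=10.0):
        self.srt = initial      # smoothed response time
        self.alpha = alpha      # weight given to each new sample
        self.floor = floor      # never go below this
        self.ceiling = ceiling  # never go above this

    def observe(self, elapsed):
        # Blend each successful transaction's elapsed time into the average.
        self.srt = (1 - self.alpha) * self.srt + self.alpha * elapsed

    def next_timeout(self):
        # Allow generous headroom over the smoothed value, but stay bounded.
        return min(self.ceiling, max(self.floor, 4 * self.srt))
```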

Keep in mind: you will never be as reliable as an algorithm that retries more than you do, if those extra retries succeed. On the other hand, if your server is always up and always "responds essentially instantly", then when the client fails to see a response it is a failure outside your control, and the only thing that can be done is for the client to retry (although a retry can be more than just re-sending: for example, closing and reopening the socket, trying a backup server at a different IP address, etc.).


The minimum timeout is bounded by the path latency, i.e. half the round-trip time (RTT).

http://www.faqs.org/rfcs/rfc908.html
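For deriving a timeout from measured RTTs, TCP's retransmission-timeout calculation (RFC 6298) is a reasonable template to borrow: keep a smoothed RTT and an RTT-variance estimate per peer. A sketch of that computation, adapted for a UDP request/reply client (constants are the RFC's; the minimum and pre-sample default are assumptions):

```python
# RFC 6298-style retransmission timeout, adapted for UDP request/reply.
ALPHA, BETA, K = 1 / 8, 1 / 4, 4

class RttEstimator:
    def __init__(self):
        self.srtt = None    # smoothed round-trip time
        self.rttvar = None  # round-trip time variance

    def update(self, rtt):
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2  # first sample
        else:
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - rtt)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * rtt

    def rto(self, minimum=0.2):
        if self.srtt is None:
            return 1.0  # conservative default before any sample arrives
        return max(minimum, self.srtt + K * self.rttvar)
```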

The big question is what happens after a timeout: do you reset to the base timeout or keep the doubled value? This is a complex decision based on the size and frequency of your traffic and how fairly you want to play with other protocols sharing the link.

If you find that packets are often lost and latency is a concern, then you want either a flat timeout or a slow ramp into exponential backoff, e.g. 1x, 1x, 1x, 1x, 2x, 4x, 8x, 16x, 32x.

If bandwidth is not a big concern but latency is, then follow UDT's lead and push the data through with small timeouts and redundant delivery. This is useful in WAN environments, especially over intercontinental distances, and is why UDT is often found in WAN accelerators. A sketch of the redundant-delivery idea follows.
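A crude sketch of redundant delivery: tag each datagram with a sequence number, send it more than once, and have the receiver discard duplicates. This is not UDT's actual wire protocol; the copy count and spacing below are made-up illustrative values:

```python
import socket, struct, time

def send_redundant(sock, server, seq, payload, copies=3, gap=0.01):
    # Prefix a sequence number so the receiver can drop duplicate copies.
    datagram = struct.pack("!I", seq) + payload
    for _ in range(copies):
        sock.sendto(datagram, server)
        time.sleep(gap)  # small spacing so a single loss burst costs less

seen = set()

def deduplicate(datagram):
    # Receiver side: process each sequence number only once.
    seq = struct.unpack("!I", datagram[:4])[0]
    if seq in seen:
        return None
    seen.add(seq)
    return datagram[4:]
```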

If instead latency is not that important and fairness toward other protocols is preferred, then use the standard exponential backoff pattern: 1x, 2x, 4x, 8x, 16x, 32x.
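The two schedules mentioned above differ only in when the multiplier starts to grow; a small sketch makes the comparison concrete:

```python
def slow_ramp(base, flat=4, attempts=9):
    # 1x, 1x, 1x, 1x, 2x, 4x, 8x, 16x, 32x: stay flat, then go exponential.
    return [base * (1 if i < flat else 2 ** (i - flat + 1))
            for i in range(attempts)]

def standard_backoff(base, attempts=6):
    # 1x, 2x, 4x, 8x, 16x, 32x: classic exponential backoff.
    return [base * 2 ** i for i in range(attempts)]

print(slow_ramp(1))         # [1, 1, 1, 1, 2, 4, 8, 16, 32]
print(standard_backoff(1))  # [1, 2, 4, 8, 16, 32]
```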

Ideally, the protocol implementation should be adaptive, deriving optimal timeouts and back-off periods automatically. When there is no data loss you do not need redundant delivery; when there is loss, you need to increase delivery. For timeouts, consider shrinking the timeout under good conditions, then backing off when congestion occurs to avoid synchronized retransmission storms.
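One way to act on that, sketched under the same caveats as above: shrink the timeout gently while transactions succeed, back off multiplicatively on loss, and add jitter so a fleet of clients does not retry in lockstep (all constants are illustrative):

```python
import random

class CongestionAwareTimeout:
    def __init__(self, base=1.0, floor=0.2, ceiling=30.0):
        self.timeout = base
        self.floor, self.ceiling = floor, ceiling

    def on_success(self):
        # Additive decrease: probe gently toward a tighter timeout.
        self.timeout = max(self.floor, self.timeout - 0.05)

    def on_loss(self):
        # Multiplicative increase: back off quickly under congestion.
        self.timeout = min(self.ceiling, self.timeout * 2)

    def wait_time(self):
        # Randomize +/-20% so many clients don't all retry at the same moment.
        return self.timeout * random.uniform(0.8, 1.2)
```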

