The minimum timeout should be the path latency, i.e., half the round-trip time (RTT).
See RFC 908, the Reliable Data Protocol: http://www.faqs.org/rfcs/rfc908.html
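As a minimal sketch of that floor, you could probe the path once and clamp your timeout to half the measured round trip. The function name, peer address, and echo behavior here are all illustrative assumptions, not part of any standard API:

```python
import socket
import time

def probe_rtt(sock: socket.socket, peer, payload: bytes = b"ping") -> float:
    """Measure one request/response round trip in seconds.

    Assumes the peer echoes something back; a real probe would also
    set a receive timeout and retry on loss.
    """
    start = time.monotonic()
    sock.sendto(payload, peer)
    sock.recvfrom(2048)
    return time.monotonic() - start

# Never time out faster than the path can answer:
# rtt = probe_rtt(sock, ("peer.example", 9000))   # hypothetical peer
# base_timeout = max(rtt / 2, 0.001)              # half-RTT floor, clamped to 1 ms
```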
The big question is what happens after a timeout: do you reset back to the base timeout, or keep the doubled value? This is a complex decision that depends on the size and frequency of your communication and on how fairly you want to play with other traffic.
If you find that packets are often lost and latency is a concern, then you want either a constant timeout or a slow ramp into exponential backoff, for example: 1x, 1x, 1x, 1x, 2x, 4x, 8x, 16x, 32x. A sketch of that schedule follows.
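One way to express that slow-growth schedule is a generator that holds the base timeout for a few retries and then starts doubling. The knobs `flat_retries` and `cap` are arbitrary names chosen for this sketch:

```python
def backoff_schedule(base: float, flat_retries: int = 4, cap: float = 32.0):
    """Yield timeouts: `base` repeated flat_retries times, then doubling,
    capped at cap * base."""
    timeout = base
    for attempt in range(64):
        yield timeout
        if attempt >= flat_retries - 1:
            timeout = min(timeout * 2, base * cap)

# import itertools
# list(itertools.islice(backoff_schedule(0.1), 9))
# -> [0.1, 0.1, 0.1, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2]
#    i.e. the 1x, 1x, 1x, 1x, 2x, 4x, 8x, 16x, 32x ladder above
```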
If bandwidth is not a big concern but latency is critical, follow UDT's lead and force data through with small timeouts and redundant delivery. This is useful in WAN environments, especially over intercontinental distances, and is why UDT is often found in WAN accelerators.
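The idea can be sketched like this (this is the general "trade bandwidth for latency" pattern, not UDT's actual wire protocol; `copies`, `max_rounds`, and the ACK convention are assumptions of the example):

```python
import socket

def send_redundant(sock: socket.socket, peer, payload: bytes,
                   timeout: float = 0.01, copies: int = 3,
                   max_rounds: int = 50) -> bool:
    """Resend in bursts on a short interval until any ACK datagram arrives."""
    sock.settimeout(timeout)
    for _ in range(max_rounds):
        for _ in range(copies):
            sock.sendto(payload, peer)   # redundant copies up front
        try:
            sock.recvfrom(2048)          # assumes the peer replies with an ACK
            return True
        except socket.timeout:
            continue                     # no ACK yet: burst again
    return False
```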
If instead latency is not so important and fairness toward other protocols is preferable, then use the standard exponential backoff pattern: 1x, 2x, 4x, 8x, 16x, 32x.
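That classic doubling ladder is the same generator as above with no flat prefix, or simply (the retry cap here is an arbitrary choice):

```python
def standard_backoff(base: float, retries: int = 6):
    """Yield base, 2*base, 4*base, ... : the 1x, 2x, 4x, 8x, 16x, 32x pattern."""
    for attempt in range(retries):
        yield base * (2 ** attempt)
```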
Ideally, the protocol handling should be adaptive, so that it converges on the optimal latency and backoff periods automatically. When there is no data loss you do not need redundant delivery; when there is data loss you should increase delivery. For timeouts, consider shrinking the timeout under good conditions and backing off when congestion occurs, to prevent synchronized broadcast storms.
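One possible shape for that adaptive behavior is sketched below. The decay factor, jitter range, and class name are illustrative assumptions, not a tuned algorithm:

```python
import random

class AdaptiveTimeout:
    def __init__(self, base: float, floor: float, ceiling: float):
        self.timeout = base
        self.floor = floor        # e.g. the half-RTT minimum from earlier
        self.ceiling = ceiling

    def on_success(self) -> None:
        # Conditions look good: probe downward gently toward the floor.
        self.timeout = max(self.floor, self.timeout * 0.95)

    def on_loss(self) -> None:
        # Congestion: back off fast by doubling, up to the ceiling.
        self.timeout = min(self.ceiling, self.timeout * 2)

    def next_wait(self) -> float:
        # +/-20% jitter keeps many senders from retrying in lockstep,
        # which is what prevents the synchronized storms mentioned above.
        return self.timeout * random.uniform(0.8, 1.2)
```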