Imagine you have a number of clustered servers spread across many hosts in a heterogeneous network environment, such that the links between servers can have very different latencies and bandwidths. You want to build a map of the connections between servers by transferring data between them.
Of course, this map may become stale over time as the network topology changes, but let's ignore those difficulties for now and assume the network is relatively static.
Given the latencies between nodes in this graph of hosts, calculating bandwidth is a relatively simple exercise. The latencies themselves give me more trouble. Round-trip time is easy to measure: just time a message on its way out to the remote host and back again - both timing events (start, stop) occur on the local host.
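For concreteness, here is a minimal sketch of that round-trip measurement, assuming the remote host runs a trivial UDP echo service (`ECHO_PORT` and `measure_rtt` are names I made up for illustration, not part of the question):

```python
import socket
import time

ECHO_PORT = 9999  # assumed port of a UDP echo service on the remote host


def measure_rtt(remote_host: str, timeout: float = 2.0) -> float:
    """Return one round-trip time to remote_host, in seconds.

    Both timestamps are taken on the local host with a monotonic clock,
    so no clock synchronization between hosts is required.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        start = time.monotonic()           # start event: local host
        sock.sendto(b"ping", (remote_host, ECHO_PORT))
        sock.recvfrom(1024)                # wait for the echoed datagram
        return time.monotonic() - start    # stop event: also local host
    finally:
        sock.close()
```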
What if I want the one-way time, under the assumption that latency is not necessarily equal in both directions? Given that the clocks on the different nodes are not exactly synchronized (their error is at least of the same magnitude as the latencies involved) - how can I calculate the one-way delay?
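To make the difficulty concrete, here is the standard timestamp bookkeeping (the notation $t_1 \ldots t_4$, $d_f$, $d_r$, $\theta$ is mine, not from the question):

```latex
% The local clock reads t1 when the probe is sent, the remote clock
% stamps its arrival at t2 and the reply's departure at t3, and the
% local clock reads t4 on the reply's arrival. Let d_f and d_r be the
% forward and reverse one-way delays, and \theta the unknown offset of
% the remote clock relative to the local one.
\begin{align*}
t_2 - t_1 &= d_f + \theta \\
t_4 - t_3 &= d_r - \theta \\
(t_2 - t_1) + (t_4 - t_3) &= d_f + d_r = \mathrm{RTT}
\end{align*}
```

The offset cancels in the sum, which is why RTT is easy to get; but these are only two independent equations in the three unknowns $d_f$, $d_r$, $\theta$, so the individual one-way delays are not identifiable from timestamps alone. NTP sidesteps this by assuming $d_f = d_r$, which is exactly the assumption being dropped here.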
As a related question: is asymmetric latency (where a link is faster in one direction than the other) common in practice? For what reasons / hardware configurations? I'm aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and cable, but I'm not so sure about latency.
Edit: after reading the comment below, the second part of the question is probably better suited for Server Fault.
latency udp networking cluster-computing tcp
BeeOnRope