Network elapsed time measurement

I developed a server and client application for streaming video frames from one end to the other using RTSP. Now, in order to collect statistics that will help me improve my applications, I need to measure the elapsed time between sending a frame and receiving a frame.

I am currently using the following formula:

Client_Receive_Timestamp - Server_Send_Timestamp = Elapsed_Time 

Problem

It seems to me that the measured elapsed time is too large, by about 100-200 ms. I think the reason is that the server clock and the client clock are not synchronized and differ by roughly 100-200 ms.

Question

How can I accurately measure elapsed time between two machines?

Note: An accurate measurement of elapsed time between machines usually involves measuring the round-trip delay. However, I cannot use this solution, since the client does not request frames; it just receives frames through RTSP.


Assuming the clocks of both machines are synchronized (for example, via NTP), you can simply subtract the "sent timestamp" from the "received timestamp" to get the duration of the delay. The observed error will be less than the sum of both clocks' errors. If the time spans involved are small enough (probably anything less than an hour), you can reasonably ignore clock-slew effects.

If ntpd is not already running on both machines, and if you have the necessary permissions, you can

 $ sudo ntpdate -v pool.ntp.org 

to force synchronization with a pool of public time servers.

Then you can use the C++11 high_resolution_clock to calculate the duration:

 /* hrc.cc */
 #include <chrono>
 #include <iostream>

 int main(int, char**) {
     using std::chrono::high_resolution_clock;

     // send something
     high_resolution_clock::time_point start = high_resolution_clock::now();
     std::cout << "time this" << std::endl;

     // receive something
     high_resolution_clock::time_point stop = high_resolution_clock::now();

     std::cout << "duration == "
               << std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count()
               << "ns" << std::endl;
     return 0;
 }

Here is what the previous example looks like on my system:

 $ make hrc && ./hrc
 c++ hrc.cc -o hrc
 time this
 duration == 32010ns

I need to measure the elapsed time between sending a frame and receiving a frame.

You do not need accurate synchronized timestamps for this. You can estimate the expected delay by averaging many measurements.

If A sends a packet (or frame) to B, and B responds immediately (*) :

A (sendTime) ---> B ---> A (receivedTime)

you can easily calculate the delay:

 latency = (receivedTime - sendTime) / 2 

This assumes, of course, that the latency is symmetrical. You can find more sophisticated algorithms if you search for the phrase "network latency estimation algorithm."

Once you have a latency estimate, you can of course also estimate the clock difference (though this does not seem necessary for your case):

A (sendTime) ---> B (receivedTimeB)

 timeDelta = sendTime + latency - receivedTimeB 

Note that even if you average many results, this algorithm is likely biased. It is presented only as a simple illustration of the general idea.


(*) The fact that the response does not actually happen immediately introduces an error, of course. How large it is depends on how busy machine B is.
