How to measure runtime with microsecond accuracy

I want to measure the performance of a function with microsecond accuracy on the Windows platform.

Now, Windows itself only has millisecond granularity, so how can I achieve this?

I tried the following sample but did not get the correct results.

    LARGE_INTEGER ticksPerSecond = {0};
    LARGE_INTEGER tick_1 = {0};
    LARGE_INTEGER tick_2 = {0};
    double uSec = 1000000;

    // Get the frequency
    QueryPerformanceFrequency(&ticksPerSecond);

    // Calculate ticks per microsecond
    double uFreq = ticksPerSecond.QuadPart / uSec;

    // Get the counter before the start of the operation
    QueryPerformanceCounter(&tick_1);

    // The operation itself
    Sleep(10);

    // Get the counter after the operation has finished
    QueryPerformanceCounter(&tick_2);

    // And now the operation time in microseconds
    double diff = (tick_2.QuadPart / uFreq) - (tick_1.QuadPart / uFreq);
+6
c++ performance c windows precision
8 answers

Run the operation in a loop a million times or so and divide the total elapsed time by that number. That way you get the average runtime over many executions. Timing a single run (or even a few hundred runs) of a very fast operation is very unreliable because of multitasking and other effects.
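
A minimal sketch of that idea (the loop count and the placeholder loop body are illustrative; substitute the operation you actually want to time):

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        const int iterations = 1000000;   // illustrative; pick a count that runs long enough

        LARGE_INTEGER freq, start, stop;
        QueryPerformanceFrequency(&freq);

        QueryPerformanceCounter(&start);
        for (int i = 0; i < iterations; ++i)
        {
            // placeholder work -- the operation you want to time goes here
            volatile int x = i * i;
            (void)x;
        }
        QueryPerformanceCounter(&stop);

        double totalUs = (stop.QuadPart - start.QuadPart) * 1000000.0 / freq.QuadPart;
        printf("average: %f microseconds per operation\n", totalUs / iterations);
        return 0;
    }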

+20
  • compile it
  • look at the assembler output
  • count how many of each instruction appear in your function
  • apply the per-instruction cycle counts for your target processor
  • end up with a total cycle count
  • divide by the clock frequency you are running at (i.e. multiply by the clock period)
  • apply arbitrary scaling factors to account for cache misses and branch mispredictions lol

(man, I am so going to get downvoted for this answer)

+7

No, you are probably getting accurate results; QueryPerformanceCounter() works well for timing intervals. What is wrong is that you expect Sleep() to be accurate. It has a resolution of 1 millisecond, but its accuracy is far worse: no better than about 15.625 milliseconds on most Windows machines.

To get it anywhere close to 1 millisecond, you first need to call timeBeginPeriod(1). That should improve the match, ignoring the jitter you inevitably get from Windows being a multitasking operating system.
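
A sketch of what that looks like wrapped around the original Sleep(10) test. It assumes you link against winmm.lib, which is where timeBeginPeriod/timeEndPeriod live:

    #include <windows.h>
    #include <stdio.h>
    #pragma comment(lib, "winmm.lib")   // MSVC; otherwise add winmm to the linker inputs

    int main()
    {
        LARGE_INTEGER freq, t1, t2;
        QueryPerformanceFrequency(&freq);

        timeBeginPeriod(1);               // request 1 ms timer resolution

        QueryPerformanceCounter(&t1);
        Sleep(10);                        // should now be much closer to a real 10 ms
        QueryPerformanceCounter(&t2);

        timeEndPeriod(1);                 // always pair with timeBeginPeriod

        double us = (t2.QuadPart - t1.QuadPart) * 1000000.0 / freq.QuadPart;
        printf("Sleep(10) took %.1f microseconds\n", us);
        return 0;
    }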

+3

If you are doing this for offline profiling, a very simple way is to run the function 1000 times, measure to the nearest millisecond, and divide by 1000.
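
A minimal sketch of that approach; functionUnderTest and the run count are illustrative. Even a millisecond-granularity clock such as GetTickCount is enough once you divide by the number of runs:

    #include <windows.h>
    #include <stdio.h>

    void functionUnderTest(void)      // hypothetical function being profiled
    {
        /* ... */
    }

    int main()
    {
        const int runs = 1000;

        DWORD start = GetTickCount();             // millisecond-granularity tick count
        for (int i = 0; i < runs; ++i)
            functionUnderTest();
        DWORD elapsedMs = GetTickCount() - start;

        // total milliseconds * 1000 / runs = microseconds per call
        printf("%f microseconds per call\n", (double)elapsedMs * 1000.0 / runs);
        return 0;
    }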

0

To get a resolution finer than 1 ms, you will have to consult your OS documentation. There may be APIs for reading the timer at microsecond resolution. If so, run the application many times and take the average.

0

I like Matti Virkkunen's answer. Check the time, call the function lots of times, check the time when you are done, and divide by the number of calls. He did mention that the result might be thrown off by OS interrupts. You could vary the number of calls and watch how the result changes. Can you raise the priority of the process? Can you get all the calls within a single OS time slice?

Since you don't know when the OS might swap you out, you can put all of this inside a larger loop that repeats the whole measurement many times, and keep the smallest number, since that is the run that suffered the fewest OS interrupts. Even that may still be longer than the true execution time of the function, because it may still contain some OS interrupts.
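
A sketch of that "repeat the whole measurement and keep the minimum" idea; functionUnderTest, the batch size, and the batch count are all illustrative:

    #include <windows.h>
    #include <stdio.h>
    #include <float.h>

    void functionUnderTest(void)      // hypothetical function being measured
    {
        /* ... */
    }

    int main()
    {
        const int callsPerBatch = 10000;   // calls averaged within one measurement
        const int batches       = 50;      // how many times the measurement is repeated

        LARGE_INTEGER freq;
        QueryPerformanceFrequency(&freq);

        double bestUsPerCall = DBL_MAX;

        for (int b = 0; b < batches; ++b)
        {
            LARGE_INTEGER t1, t2;
            QueryPerformanceCounter(&t1);
            for (int i = 0; i < callsPerBatch; ++i)
                functionUnderTest();
            QueryPerformanceCounter(&t2);

            double usPerCall = (t2.QuadPart - t1.QuadPart) * 1000000.0
                               / freq.QuadPart / callsPerBatch;
            if (usPerCall < bestUsPerCall)       // keep the batch with the fewest interruptions
                bestUsPerCall = usPerCall;
        }

        printf("best batch: %f microseconds per call\n", bestUsPerCall);
        return 0;
    }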

0

Sanjit

It looks (to me) like you are doing this just fine. QueryPerformanceCounter is a great way to measure short periods of time with a high degree of accuracy. If you are not seeing the result you expected, it is most likely because the Sleep() isn't sleeping for the amount of time you expected! However, it is most likely being measured correctly.

I would like to go back to the original question of how to measure time on Windows with microsecond accuracy. As you already know, the high-performance counter (i.e. QueryPerformanceCounter) ticks at the rate reported by QueryPerformanceFrequency. That means you can measure time with a precision equal to:

1 / frequency seconds

On my machine, QueryPerformanceFrequency reports 2337910 (counts/sec). That means my machine's QPC can measure with a precision of 4.277e-7 seconds, or 0.427732 microseconds; in other words, the smallest slice of time I can measure is 0.427732 microseconds. That, of course, gives you what you originally asked for :) Your machine's frequency may well differ, but you can always do the math and check it.
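
A small "do the math yourself" check for whatever frequency your own machine reports:

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        LARGE_INTEGER freq;
        QueryPerformanceFrequency(&freq);   // counts per second of the high-resolution counter

        // smallest measurable interval = 1 / frequency seconds
        double resolutionUs = 1000000.0 / (double)freq.QuadPart;

        printf("QPC frequency:  %lld counts/sec\n", (long long)freq.QuadPart);
        printf("QPC resolution: %f microseconds\n", resolutionUs);
        return 0;
    }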

0

Or you could use gettimeofday(), which gives you a timeval structure, i.e. a timestamp (with µs resolution).
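
For completeness, a sketch of that approach. Note that gettimeofday() is a POSIX call, not part of the native Win32 API, so this assumes a POSIX environment (e.g. Cygwin or MinGW providing sys/time.h):

    #include <stdio.h>
    #include <sys/time.h>   // gettimeofday(); POSIX, not native Win32

    int main(void)
    {
        struct timeval t1, t2;

        gettimeofday(&t1, NULL);
        /* the operation being timed goes here */
        gettimeofday(&t2, NULL);

        long us = (t2.tv_sec - t1.tv_sec) * 1000000L + (t2.tv_usec - t1.tv_usec);
        printf("elapsed: %ld microseconds\n", us);
        return 0;
    }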

-1
