A more accurate equivalent of the Linux time command for sys and user time?

I am in the following situation:

I want to measure the sys and user time of small pieces of code (PHP and C++). Obviously, I could use the "time" binary on Linux, but because these fragments run so fast, the normal (or even verbose) output of "time" is not enough for my purpose. "time" reports with millisecond accuracy, and I need microseconds, or better yet, nanoseconds.

Can someone point me to a piece of software that can do this for me? I found plenty of material on measuring wall time, but I'm interested in sys and user time.

Thanks in advance!

BTW: I'm running Ubuntu 10.10 64-bit.

+4
3 answers

There is no way to get an equivalent of the sys or user time reported by the time command that is any more accurate. Most of the apparent precision of the time command is false precision as it is.

The technique for dealing with this is to put these pieces of code in tight loops that call them thousands of times and work out the per-call cost from how long the whole loop takes. And even then you have to repeat the experiment several times and take the lowest time.

Here's an analogy for why the precision is false and what I mean by that. Suppose someone times a race with a stopwatch, manually pressing a button when the race starts and when the runner crosses the finish line. Say the stopwatch is accurate to a hundredth of a second. That accuracy is meaningless, because it is swamped by the error introduced by the reaction time of the person pressing the button.

This is pretty much why time gives you figures that are supposedly accurate to the microsecond but are actually much less accurate. A lot is happening in any system at any given moment, and all of it introduces error into the measurement: interrupts from network or disk I/O, timer interrupts that invoke the scheduler, what other processes are doing to the L1 or L2 CPU caches. It all adds up.

Using something like valgrind, which runs your program on a simulated CPU, you can get numbers that are apparently accurate down to the number of processor cycles. But that accuracy is not what you will experience in the real world. It's better to use the technique I described above and just accept that the timings will be fuzzy.
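
For what it's worth, here is a minimal sketch of that loop technique in C. The function fragment_under_test() is a hypothetical stand-in for the code you want to measure; getrusage() is used here because it reports user and system time separately.

    /* Minimal sketch of the loop technique.  fragment_under_test() is a
     * hypothetical placeholder for the code being measured.  getrusage()
     * reports the process's accumulated user and system CPU time, so the
     * difference divided by the iteration count gives a per-call estimate. */
    #include <stdio.h>
    #include <sys/resource.h>

    extern void fragment_under_test(void);   /* the code you want to time */

    static double tv_to_sec(struct timeval tv)
    {
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        const long iterations = 1000000;
        struct rusage before, after;
        long i;
        double user, sys;

        getrusage(RUSAGE_SELF, &before);
        for (i = 0; i < iterations; i++)
            fragment_under_test();
        getrusage(RUSAGE_SELF, &after);

        user = tv_to_sec(after.ru_utime) - tv_to_sec(before.ru_utime);
        sys  = tv_to_sec(after.ru_stime) - tv_to_sec(before.ru_stime);

        printf("per call: user %.9f s, sys %.9f s\n",
               user / iterations, sys / iterations);
        return 0;
    }

Run the whole thing several times and keep the lowest figures, as described above.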

+3

gettimeofday() will give you microsecond resolution.
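
A minimal sketch of how that might be used; note that gettimeofday() measures wall-clock time, not user or system time, so it answers a slightly different question than the one asked:

    /* Minimal sketch: gettimeofday() fills a struct timeval with seconds
     * and microseconds.  This is wall-clock time, not user/sys time. */
    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval start, end;
        long usec;

        gettimeofday(&start, NULL);
        /* ... code fragment to measure ... */
        gettimeofday(&end, NULL);

        usec = (end.tv_sec - start.tv_sec) * 1000000L
             + (end.tv_usec - start.tv_usec);
        printf("elapsed: %ld microseconds\n", usec);
        return 0;
    }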

0

clock_gettime() will give you nanosecond resolution.
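
A minimal sketch, assuming CLOCK_PROCESS_CPUTIME_ID is available: it returns the CPU time consumed by the calling process (user and system combined, not separated) at nanosecond resolution. Older glibc versions need linking with -lrt.

    /* Minimal sketch: CLOCK_PROCESS_CPUTIME_ID returns the CPU time used
     * by the calling process (user + system combined) in nanoseconds.
     * May require linking with -lrt on older glibc versions. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec start, end;
        long nsec;

        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
        /* ... code fragment to measure ... */
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

        nsec = (end.tv_sec - start.tv_sec) * 1000000000L
             + (end.tv_nsec - start.tv_nsec);
        printf("CPU time: %ld nanoseconds\n", nsec);
        return 0;
    }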

0
