Nothing there will give you an equivalent of the sys or user time reported by the time command, which would be more accurate anyway. And most of the apparent precision of the time command is false precision as it is.
The way to deal with this is to put the piece of code you care about in a tight loop that calls it thousands of times and work out the per-call time from how long the whole loop takes. Even then you have to repeat the experiment several times and take the lowest time.
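A minimal sketch of that technique, assuming a hypothetical work() function standing in for the code being measured and the POSIX clock_gettime call (neither of which comes from the question):

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for the code being measured. */
static void work(void)
{
    volatile double x = 0.0;
    for (int i = 0; i < 1000; i++)
        x += i * 0.5;
}

static double elapsed_seconds(struct timespec start, struct timespec end)
{
    return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
    const int iterations = 100000;  /* call the code thousands of times */
    const int runs = 5;             /* repeat the whole experiment */
    double best = -1.0;

    for (int r = 0; r < runs; r++) {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (int i = 0; i < iterations; i++)
            work();
        clock_gettime(CLOCK_MONOTONIC, &end);

        /* Keep the lowest per-call time across runs: it is the
           measurement least disturbed by everything else the
           system was doing at the time. */
        double per_call = elapsed_seconds(start, end) / iterations;
        if (best < 0.0 || per_call < best)
            best = per_call;
    }

    printf("best time per call: %.9f s\n", best);
    return 0;
}
```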
Here's an analogy for what I mean by false precision. Suppose you time a race with a stopwatch, manually pressing the button when the race starts and again when the runner crosses the finish line. The stopwatch may well be precise to a hundredth of a second, but that precision is false, because it is swamped by the error introduced by the reaction time of the person pressing the button.
This is pretty much why the time command reports numbers with microsecond precision that are actually far less accurate than that. On any given system a lot is happening at any moment, and all of it introduces error into the measurement: interrupts from network or disk I/O, timer interrupts that invoke the scheduler, whatever other processes are doing to the L1 and L2 caches. It all adds up.
Using something like valgrind, which runs your program on a simulated processor, you can get numbers that look exact in terms of processor cycles, but that precision is not what you will experience in the real world. It's better to use the technique I described above and just accept that the timings will be fuzzy.