How expensive is a call to time(NULL) in a server loop?

I am looking at a server implementation that calls time(NULL) for each request it processes. I am wondering what effect calling time(NULL) has on a typical Linux system, whether there are cheaper ways of determining the time of day, and, in general, how often one should call time().

Thank you for your thoughts on the topic.

+7
4 answers

It is a system call, as the other answers say, and the other answers give you a good way to measure its cost on your system. (Once in the kernel it doesn't have to do much work, so the cost is close to that of a pure syscall round trip, and Linux has done what it can to implement syscalls efficiently. So in that sense you may consider it pretty well optimized.)

Unlike the other answers, I wouldn't automatically conclude it is so cheap that you don't have to worry about it. If it's in an inner loop, it depends on what else you do in that inner loop. If this is a server processing requests, it probably makes many syscalls per request anyway, and one more won't change the cost of each request much. However, I have seen code where the syscall overhead of calling time() (or gettimeofday(), which is what it really comes down to) does have a detrimental impact.

If you are worried about the cost, the next thing to ask is whether cheaper ways of finding the time are available. In general there won't be one. If you're on x86, you can query the CPU with the rdtsc instruction (and there is most likely an analogue on other processor architectures); it is a single assembly instruction that is unprivileged, so you can drop it into your code anywhere. There are many pitfalls, though: rdtsc doesn't always increase at a predictable rate, especially if the CPU speed changes for power management, depending on the particular processor model you are using; the values may not be synchronized across multiple CPUs; and so on. The OS keeps track of all this and gives you a friendly, easy-to-use version of the information when you call gettimeofday().
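
For illustration, here is a minimal sketch of reading the counter directly, assuming GCC or Clang on x86, which expose it as __rdtsc() in <x86intrin.h>. Note this gives raw cycle counts, not wall-clock time, and all the pitfalls above still apply:

#include <stdio.h>
#include <x86intrin.h>

int main(void)
{
    /* Read the timestamp counter before and after the work being timed. */
    unsigned long long start = __rdtsc();

    /* ... code you want to measure ... */

    unsigned long long end = __rdtsc();
    printf("elapsed: %llu cycles\n", end - start);
    return 0;
}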

+7

Getting the current time involves a system call on Linux. As Vilx said, it's pretty easy to benchmark:

#include <time.h>

int main(void)
{
    int i;

    /* Call time(NULL) ten million times; divide the total run time
       by 10 million to get the per-call cost. */
    for (i = 0; i < 10000000; i++)
        time(NULL);
    return 0;
}

Running this program takes 6.26 s on my wimpy 1.6 GHz Atom with a 64-bit kernel, which works out to about 1002 CPU cycles per call (6.26 s × 1.6 G cycles/s ÷ 10 M calls ≈ 1002 cycles).

This is, of course, not much cause for concern, as others have noted.

+3

Is this REALLY your bottleneck? I would suggest profiling instead. Getting the current time is a very common operation, and I very much doubt that it is expensive (although you could easily write a program to measure it). For example, web servers do it on every request for their log files.
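
As a sketch of how easy that measurement is, here is one way to time it from within the program itself, assuming POSIX clock_gettime() (on older glibc you may need to link with -lrt):

#define _POSIX_C_SOURCE 199309L  /* for clock_gettime() */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    long i, n = 10000000;

    /* Time a batch of calls and report the average per-call cost. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < n; i++)
        time(NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f ns per time(NULL) call\n", elapsed / n * 1e9);
    return 0;
}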

+1

It's just one syscall, with very little processing in the kernel. time() doesn't matter if your server is, say, sending a file to a user, which takes a hundred read()/write() calls or something of that sort.
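
To illustrate the point, a hypothetical (not from any of the answers) sketch of a file-serving request handler, where the single time(NULL) call disappears among the read()/write() syscalls:

#include <time.h>
#include <unistd.h>

/* Toy sketch: send a file back to a client. The one time(NULL) call
 * for the log entry is dwarfed by the syscalls in the copy loop.
 * (Error handling omitted for brevity.) */
void serve_file(int client_fd, int file_fd)
{
    char buf[4096];
    ssize_t n;

    time_t when = time(NULL);   /* one syscall, for the access log */

    while ((n = read(file_fd, buf, sizeof buf)) > 0)  /* many syscalls */
        write(client_fd, buf, (size_t)n);

    (void)when;  /* would normally be written to the log */
}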

0
