Setitimer question

I have the following code running on my dual-core machine.

When I run one or two instances of the application on the same PC, I get the expected timer interval of 100 ms. However, when I run three instances of the same application on the same PC, the interval between timer ticks is more than 100 ms. Is it even possible to run three instances of the application with the same 100 ms resolution? Is this related to the number of cores on my machine?

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>

    void timer_handler(int signum)
    {
        struct timeval tv;
        double time;

        /* obtain the current time */
        gettimeofday(&tv, NULL);
        time = tv.tv_sec + tv.tv_usec / 1000000.0;
        printf("timer_handler at time = %lf\n", time);
    }

    int main()
    {
        struct sigaction sa;
        struct itimerval timer;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = &timer_handler;
        sigaction(SIGALRM, &sa, NULL);

        timer.it_value.tv_sec = 0;          /* first expiry after 100 ms */
        timer.it_value.tv_usec = 100000;
        timer.it_interval.tv_sec = 0;       /* then every 100 ms */
        timer.it_interval.tv_usec = 100000;

        setitimer(ITIMER_REAL, &timer, NULL);

        for (;;);
    }
1 answer

The setitimer(2) man page says the following:

Timers will never expire before the requested time, but may expire some (short) time afterward, which depends on the system timer resolution and on the system load; see time(7).

Obviously, when you launch more instances of your application, the system load is higher and the timers are less accurate.

If you replace the busy loop:

    for (;;);

with something less CPU-bound (such as an I/O-bound workload), the timing will be more accurate.
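
For example, here is a minimal sketch of that change, using the same handler and timer setup as in the question: the main loop blocks in pause() until the next SIGALRM is delivered, so the process consumes almost no CPU between ticks instead of spinning.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    static void timer_handler(int signum)
    {
        struct timeval tv;

        gettimeofday(&tv, NULL);
        printf("timer_handler at time = %lf\n",
               tv.tv_sec + tv.tv_usec / 1000000.0);
    }

    int main(void)
    {
        struct sigaction sa;
        struct itimerval timer;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = timer_handler;
        sigaction(SIGALRM, &sa, NULL);

        timer.it_value.tv_sec = 0;
        timer.it_value.tv_usec = 100000;    /* first expiry after 100 ms */
        timer.it_interval.tv_sec = 0;
        timer.it_interval.tv_usec = 100000; /* then every 100 ms */
        setitimer(ITIMER_REAL, &timer, NULL);

        /* Block until a signal is delivered instead of busy-waiting.
           pause() returns after each SIGALRM handler runs, and the
           loop simply blocks again until the next tick. */
        for (;;)
            pause();

        return 0;
    }

With several such instances running, each process sleeps between ticks rather than competing for a core, so the SIGALRM signals are typically delivered much closer to the requested 100 ms interval.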

