One potentially significant difference is the fairness of resource distribution. I don't know the implementation details of semget/semop, but I suspect it is typically implemented as a "traditional" semaphore as far as scheduling goes. My sense is that blocked waiters are released in FIFO order (the first process to wait on the semaphore is the first to be released). I don't think that happens with file locking, since I suspect (again, just guessing) that the handling is not processed at the kernel level in the same way.
I had existing semaphore test code lying around for IPC purposes, so I compared the two approaches (one using semop and one using lockf). It was a crude test: I simply ran two instances of the application, using a shared semaphore to synchronize their start. In the semop run, both processes finished their 3 million loop iterations almost in lock step. The lockf loop, on the other hand, was not nearly as fair: one process would typically finish while the other had completed only about half of its iterations.
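For context, creating and initializing such a shared System V semaphore looks roughly like the sketch below. This is not the original test code; the key value, permissions, and error handling are my own assumptions.

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    /* POSIX requires the caller to define this union for semctl(SETVAL). */
    union semun
    {
        int val;
        struct semid_ds *buf;
        unsigned short *array;
    };

    int create_sem( key_t key )
    {
        union semun arg;

        /* Create (or attach to) a set containing one semaphore. */
        int semid = semget( key, 1, IPC_CREAT | 0666 );
        if ( semid < 0 )
        {
            perror( "semget" );
            return -1;
        }

        /* Initialize its value to 1 so it acts as a binary semaphore. */
        arg.val = 1;
        if ( semctl( semid, 0, SETVAL, arg ) < 0 )
        {
            perror( "semctl" );
            return -1;
        }
        return semid;
    }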
The semop test loop looked like this (semwait and semsignal are just thin wrappers around semop calls):
    ct = myclock();
    for ( i = 0; i < loops; i++ )
    {
        ret = semwait( "test", semid, 0 );
        if ( ret < 0 )
        {
            perror( "semwait" );
            break;
        }

        if (( i & 0x7f ) == 0x7f )
            printf( "\r%d%%", (int)( i * 100.0 / loops ));

        ret = semsignal( semid, 0 );
        if ( ret < 0 )
        {
            perror( "semsignal" );
            break;
        }
    }
    printf( "\nsemop time: %d ms\n", myclock() - ct );
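The wrappers and myclock are not shown in the answer. Assuming they are minimal, they might look something like this; the unused name argument just matches the call site above, and myclock is presumably a millisecond timer:

    #include <time.h>
    #include <sys/sem.h>

    /* Decrement the semaphore (blocks until it is available). */
    static int semwait( const char *name, int semid, int semnum )
    {
        struct sembuf op = { (unsigned short)semnum, -1, 0 };
        (void)name;   /* unused in this sketch */
        return semop( semid, &op, 1 );
    }

    /* Increment the semaphore, releasing one waiter. */
    static int semsignal( int semid, int semnum )
    {
        struct sembuf op = { (unsigned short)semnum, 1, 0 };
        return semop( semid, &op, 1 );
    }

    /* Monotonic clock in milliseconds. */
    static int myclock( void )
    {
        struct timespec ts;
        clock_gettime( CLOCK_MONOTONIC, &ts );
        return (int)( ts.tv_sec * 1000 + ts.tv_nsec / 1000000 );
    }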
The overall runtime of the two methods was roughly the same, although the lockf version actually finished sooner overall because of the scheduling unfairness: once the first process finished, the second had uncontested access for its remaining ~1.5 million iterations and ran very quickly.
When run uncontested (a single process acquiring and releasing the lock), the semop version was faster: it took about 2 seconds for 1 million iterations, while the lockf version took about 3 seconds.
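For comparison, the lockf loop (not shown in the original) would presumably have the same shape, with the semaphore calls replaced by blocking lock/unlock calls on a shared file descriptor; the fd variable here is an assumption, and lockf requires <unistd.h>:

    ct = myclock();
    for ( i = 0; i < loops; i++ )
    {
        /* F_LOCK blocks until the lock on the file is acquired. */
        if ( lockf( fd, F_LOCK, 0 ) < 0 )
        {
            perror( "lockf lock" );
            break;
        }

        if (( i & 0x7f ) == 0x7f )
            printf( "\r%d%%", (int)( i * 100.0 / loops ));

        if ( lockf( fd, F_ULOCK, 0 ) < 0 )
        {
            perror( "lockf unlock" );
            break;
        }
    }
    printf( "\nlockf time: %d ms\n", myclock() - ct );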
This was done on the following kernel version:
    []$ uname -r
    2.6.11-1.1369_FC4smp
Mark Wilkins