Condition Variable Wait Overhead

When using boost::condition_variable , ACE_Condition , or pthread_cond_wait directly, is there any overhead for the wait itself? More specifically, these are the issues that could be unpleasant:

  • After the waiting thread is descheduled, will it be scheduled again before the timeout expires only to be descheduled once more, or does it stay descheduled until it is signaled?
  • Does the wait acquire the mutex periodically? If so, it spends CPU time on lock and unlock system calls on every iteration. Is that the same as constantly acquiring and releasing the mutex?
  • Also, how much time elapses between the signal and the return from wait ?

Afaik, when using semaphores, the latency of acquire calls depends on the size of the scheduler's time slice. How does this work with pthread_cond_wait ? I guess it is platform dependent. I'm mostly interested in Linux, but if someone knows how it works on other platforms, that would help too.

And one more question: does each condition variable consume additional system resources? I would never create 30,000 mutexes in my code, but should I worry about 30,000 condition variables that share the same mutex?

2 answers

Here is what the pthread_cond man page says:

pthread_cond_wait atomically unlocks the mutex and waits for the condition variable cond to be signaled. Thread execution is suspended and does not consume any CPU time until the condition variable is signaled.

So, I would answer the questions as follows:

  • The waiting thread will not be scheduled until the wait is signaled or cancelled.
  • There is no periodic acquisition of the mutex. The mutex is released once when the wait begins and re-acquired exactly once before the wait returns.
  • The time that elapses between the signal and the return from wait is roughly the time it takes to schedule the thread, plus the time to re-acquire the mutex.

Regarding resources on the same page:

In the LinuxThreads implementation, no resources are associated with condition variables, thus pthread_cond_destroy actually does nothing except checking that the condition has no waiting threads.

Update: I dug into the sources of the pthread_cond_* functions, and the behavior is as follows:

  • All Linux pthread condition variables are implemented on top of futex .
  • When a thread calls wait , it is suspended and descheduled. Its thread ID is appended to the tail of the list of waiting threads.
  • When a thread calls signal , the thread at the head of the list is woken. So the wakeup is as efficient as the scheduler allows, no extra OS resources are consumed, and the only memory overhead is the size of the wait list (see the futex_wake function).

You should only call pthread_cond_wait if the predicate is already in the "wrong" state. Since the call always blocks, there is always the overhead of descheduling the current thread and switching contexts.

When a thread is descheduled, it is descheduled: it should not consume any resources, although of course an OS could in theory implement this poorly. The wait is allowed to re-acquire the mutex and even return before the signal arrives (a spurious wakeup, which is why you must re-check the condition), but a reasonable OS implementation ensures this costs little if it happens at all. Spurious wakeups do not occur spontaneously, but rather in response to another, possibly unrelated, signal.

30,000 mutexes should not be a problem, but some operating systems may have trouble with 30,000 threads sleeping at the same time.

