Fair Critical Section (Linux)

In a multi-threaded Linux application, I use a mutex to protect critical sections. This works very well, except for the fairness issue: a thread that leaves a critical section and immediately re-enters it may never give any other thread a chance. For example,

 while(true) {
     critsect.enter();
     ... do calculations ...
     ... maybe call a blocking operation so we sleep ...
     critsect.leave();
 }

can probably prevent any other thread from entering the same critical section. Mutexes are not fair.

Is there a way to make a critical section fair? I was thinking of adding a queue so that critical sections are executed in the order of their "arrival". Alternatively, the unlocking code could at least call pthread_yield() after unlocking if other threads are waiting.

Is there any recommended practice for this kind of requirement?

linux pthreads mutex critical-section
4 answers

You can build a FIFO "ticket lock" on top of pthreads mutexes along the following lines:

 #include <pthread.h>

 typedef struct ticket_lock {
     pthread_cond_t cond;
     pthread_mutex_t mutex;
     unsigned long queue_head, queue_tail;
 } ticket_lock_t;

 #define TICKET_LOCK_INITIALIZER { PTHREAD_COND_INITIALIZER, PTHREAD_MUTEX_INITIALIZER }

 void ticket_lock(ticket_lock_t *ticket)
 {
     unsigned long queue_me;

     pthread_mutex_lock(&ticket->mutex);
     queue_me = ticket->queue_tail++;   /* take the next ticket number */
     while (queue_me != ticket->queue_head) {
         /* not our turn yet; wait until the head catches up to our ticket */
         pthread_cond_wait(&ticket->cond, &ticket->mutex);
     }
     pthread_mutex_unlock(&ticket->mutex);
 }

 void ticket_unlock(ticket_lock_t *ticket)
 {
     pthread_mutex_lock(&ticket->mutex);
     ticket->queue_head++;              /* admit the next ticket holder */
     pthread_cond_broadcast(&ticket->cond);
     pthread_mutex_unlock(&ticket->mutex);
 }

Under this scheme, no low-level pthreads mutex is held while a thread is inside the ticket-protected critical section, allowing other threads to join the queue.
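A minimal usage sketch, assuming the ticket-lock code above is in the same file; the worker function, thread count, and shared counter are illustrative, not part of the original answer:

 #include <stdio.h>

 ticket_lock_t lock = TICKET_LOCK_INITIALIZER;
 long counter = 0;

 static void *worker(void *arg)
 {
     for (int i = 0; i < 100000; i++) {
         ticket_lock(&lock);
         counter++;                     /* critical section, entered in FIFO order */
         ticket_unlock(&lock);
     }
     return NULL;
 }

 int main(void)
 {
     pthread_t t[4];
     for (int i = 0; i < 4; i++)
         pthread_create(&t[i], NULL, worker, NULL);
     for (int i = 0; i < 4; i++)
         pthread_join(t[i], NULL);
     printf("counter = %ld\n", counter);  /* 400000: no increments lost */
     return 0;
 }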


Even with a fair critical section, the code is likely to have terrible performance: if the critical section is held for long periods of time, threads will often be waiting for it.

So I suggest you restructure the code so that it does not need to hold critical sections for long periods of time. Either use a different approach altogether (passing objects through message queues is often recommended, because it is easy to get right), or at least do most of the computation on local variables without holding the lock, and take the lock only to store the results (see the sketch below). If the lock is held for shorter periods of time, threads will spend less time waiting for it, which will generally improve performance and make fairness a non-issue. You can also try increasing lock granularity (locking smaller objects separately), which will also reduce contention.
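A minimal sketch of the "compute on locals, lock only to store" restructuring; the names and the dummy computation are illustrative only:

 #include <pthread.h>
 #include <stdio.h>

 static pthread_mutex_t results_mutex = PTHREAD_MUTEX_INITIALIZER;
 static double results_sum = 0.0;

 static void *worker(void *arg)
 {
     /* Heavy computation happens on a local, with no lock held. */
     double local = 0.0;
     for (long i = 1; i <= 10000000; i++)
         local += 1.0 / (double)i;

     /* The lock is taken only for the brief store. */
     pthread_mutex_lock(&results_mutex);
     results_sum += local;
     pthread_mutex_unlock(&results_mutex);
     return NULL;
 }

 int main(void)
 {
     pthread_t t[4];
     for (int i = 0; i < 4; i++)
         pthread_create(&t[i], NULL, worker, NULL);
     for (int i = 0; i < 4; i++)
         pthread_join(t[i], NULL);
     printf("sum = %f\n", results_sum);
     return 0;
 }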

Edit: OK, thinking about this some more, I believe every critical section on Linux is approximately fair. Whenever there are sleepers, the unlock operation must enter the kernel to tell it to wake them up. On return from the kernel, the scheduler runs and picks the process with the highest priority. Sleepers gain priority while they wait, so at some point they will be high enough that the unlock will cause a task switch.


If your claim is true (I don't have time to read up on it, and it seems you researched this before posting the question), I suggest

  sleep(0); 

to explicitly yield between critical sections.

 while(true) {
     critsect.enter();
     ... do calculations ...
     ... maybe call a blocking operation so we sleep ...
     critsect.leave();
     sleep(0);
 }
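As a side note (an addition, not part of the original answer): pthread_yield(), mentioned in the question, is a non-portable extension; the standard POSIX call for giving up the processor is sched_yield() from <sched.h>, which could replace sleep(0) in the loop above:

 #include <sched.h>

 while(true) {
     critsect.enter();
     ... do calculations ...
     critsect.leave();
     sched_yield();  /* POSIX; explicitly offers the CPU to other runnable threads */
 }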

OK, how about this:

 while(true) {
     sema.wait;
     critsect.enter();
     sema.post;
     ... do calculations ...
     ... maybe call a blocking operation so we sleep ...
     critsect.leave();
 }

Initialize the semaphore count to 1. Have the other threads wait on the semaphore as well before trying to enter the CS, and signal it once they are in. If the "calculate" thread gets the semaphore, it can enter the CS and lock it. Once inside the lock, but before the long calculation, the semaphore is signaled, and one other thread can then reach the CS but not get inside it. When the "calculate" thread leaves the lock, it cannot loop around and relock it, because the semaphore count is zero, so the other thread gets the lock. The "calculate" thread has to wait on the semaphore until the other thread that got in has finished its access and signals the semaphore.

This way, another thread can "reserve" access to the data even though it cannot actually get at it yet.
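A minimal sketch of this gating pattern using POSIX primitives; the function names and loop body are illustrative, and the workers loop forever like the pseudocode above:

 #include <pthread.h>
 #include <semaphore.h>

 static pthread_mutex_t cs = PTHREAD_MUTEX_INITIALIZER;
 static sem_t gate;

 static void *worker(void *arg)
 {
     for (;;) {
         sem_wait(&gate);            /* reserve the right to take the lock */
         pthread_mutex_lock(&cs);
         sem_post(&gate);            /* let exactly one other thread queue up */
         /* ... long calculation while holding the lock ... */
         pthread_mutex_unlock(&cs);
     }
     return NULL;
 }

 int main(void)
 {
     pthread_t t[2];
     sem_init(&gate, 0, 1);          /* gate count starts at 1 */
     for (int i = 0; i < 2; i++)
         pthread_create(&t[i], NULL, worker, NULL);
     for (int i = 0; i < 2; i++)
         pthread_join(t[i], NULL);   /* workers run forever; stop with Ctrl-C */
     return 0;
 }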

Rgds, Martin

