Using a mutex to block execution from outside a critical section

I'm not sure I have the terminology right, but here goes - I have a function that is used by multiple threads to write data (the pseudo-code in the comments illustrates what I want):

    //these are initialized in the constructor
    int* data;
    std::atomic<size_t> size;

    void write(int value) {
        //wait here while "read_lock"
        //set "write_lock" to "write_lock" + 1
        auto slot = size.fetch_add(1, std::memory_order_acquire);
        data[slot] = value;
        //set "write_lock" to "write_lock" - 1
    }

The order of the writes is not important; all I need is for each write to go to a unique slot.

Every once in a while, a single thread needs to read the data using this function:

    int* read() {
        //set "read_lock" to true
        //wait here while "write_lock"
        int* ret = data;
        data = new int[capacity];
        size = 0;
        //set "read_lock" to false
        return ret;
    }

so it basically swaps the buffer and returns the old one (I removed the capacity logic to keep the snippet short).

Theoretically, this should lead to two operational scenarios:

1 - just a bunch of threads writing to the container

2 - when some thread calls the read function, all new writers have to wait, the reader waits until all in-progress writes have finished, then the read logic runs, and scenario 1 can continue (roughly as sketched below)
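Roughly, the intended usage looks something like this (just an illustration, treating write/read as free functions for brevity, assuming the functions above and ignoring the capacity handling; thread counts and timings are arbitrary):

    #include <chrono>
    #include <thread>
    #include <vector>

    void example_usage() {
        // scenario 1: a bunch of threads writing to the container
        std::vector<std::thread> writers;
        for (int t = 0; t < 4; ++t)
            writers.emplace_back([t] {
                for (int i = 0; i < 1000; ++i)
                    write(t * 1000 + i);
            });

        // scenario 2: one thread occasionally swapping the buffer out
        std::thread reader([] {
            for (int i = 0; i < 10; ++i) {
                std::this_thread::sleep_for(std::chrono::milliseconds(50));
                int* old = read();
                // ... process old ...
                delete[] old;
            }
        });

        for (auto& w : writers) w.join();
        reader.join();
    }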

The problem is that I don't know which locking primitive to use here:

Spin locks would be wasteful - there are many such containers, and they would all burn CPU cycles spinning.

I don't know how to apply std::mutex either, since I only want the write function to be a critical section while the read function is active. Wrapping the entire write function in a mutex would cause unnecessary slowdown in scenario 1.
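For reference, this is the naive version I'd like to avoid - it serializes every writer even when no read is in progress (sketch only, using the same data/size/capacity as above):

    #include <mutex>

    std::mutex m;

    void write(int value) {
        std::lock_guard<std::mutex> lock(m); // every writer serializes here,
        data[size++] = value;                // even when no read() is running
    }

    int* read() {
        std::lock_guard<std::mutex> lock(m);
        int* ret = data;
        data = new int[capacity];
        size = 0;
        return ret;
    }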

So what would be the optimal solution here?

+7
c++ multithreading
2 answers

If you have C++14 available, you can use std::shared_timed_mutex to separate readers and writers. In your case it seems you want to give shared access to the writer threads (allowing the other writers in at the same time), while your reader takes unique access (locking all the other threads out).

So, something like this might be what you need:

    #include <memory>
    #include <mutex>
    #include <shared_mutex>

    class MyClass {
    public:
        using mutex_type  = std::shared_timed_mutex;
        using shared_lock = std::shared_lock<mutex_type>;
        using unique_lock = std::unique_lock<mutex_type>;

    private:
        mutable mutex_type mtx;

    public:
        // All updater threads can operate at the same time
        auto lock_for_updates() const { return shared_lock(mtx); }

        // Reader threads need to kick all the updater threads out
        auto lock_for_reading() const { return unique_lock(mtx); }
    };

    // many threads can call this
    void do_writing_work(std::shared_ptr<MyClass> sptr) {
        auto lock = sptr->lock_for_updates();
        // update the data here
    }

    // access the data from one thread only
    void do_reading_work(std::shared_ptr<MyClass> sptr) {
        auto lock = sptr->lock_for_reading();
        // read the data here
    }

A shared_lock lets other threads acquire a shared_lock at the same time, but prevents a unique_lock from getting concurrent access. When the reader thread tries to take the unique_lock, all shared_locks must be released before the unique_lock takes exclusive control.
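Applied to your buffer it might look roughly like this (untested sketch; data, size and capacity are the members from your question, and the capacity logic is still omitted):

    #include <atomic>
    #include <shared_mutex>

    std::shared_timed_mutex mtx;
    int* data;                   // as in the question
    std::atomic<size_t> size{0};
    size_t capacity;             // as in the question

    void write(int value) {
        std::shared_lock<std::shared_timed_mutex> lock(mtx); // writers share the mutex
        auto slot = size.fetch_add(1, std::memory_order_acquire);
        data[slot] = value;
    }

    int* read() {
        std::unique_lock<std::shared_timed_mutex> lock(mtx); // the reader excludes all writers
        int* ret = data;
        data = new int[capacity];
        size = 0;
        return ret;
    }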

+2

You can also do this with regular mutexes and condition variables rather than shared ones. Presumably a shared_mutex has higher overhead, so I'm not sure whether this version would be faster. With Galik's solution (the shared mutex approach above) you presumably pay to lock the shared mutex on every write call; I got the impression that write is called much more often than read, so maybe that is undesirable.

    #include <atomic>
    #include <condition_variable>
    #include <mutex>
    #include <utility>

    int* data;                             // initialized somewhere
    size_t capacity;                       // buffer capacity, as in the question
    std::atomic<size_t> size{0};
    std::atomic<bool> reading{false};
    std::atomic<int> num_writers{0};
    std::mutex entering;
    std::mutex leaving;
    std::condition_variable cv;

    void write(int x) {
        ++num_writers;
        if (reading) {
            // a read is in progress: back out, wake the reader if we were the
            // last in-flight writer, then queue up on the entering mutex
            --num_writers;
            if (num_writers == 0) {
                std::lock_guard<std::mutex> l(leaving);
                cv.notify_one();
            }
            { std::lock_guard<std::mutex> l(entering); } // blocks until read() releases it
            ++num_writers;
        }
        auto slot = size.fetch_add(1, std::memory_order_acquire);
        data[slot] = x;
        --num_writers;
        if (reading && num_writers == 0) {
            std::lock_guard<std::mutex> l(leaving);
            cv.notify_one();                             // last writer out wakes the reader
        }
    }

    int* read() {
        int* other_data = new int[capacity];
        {
            std::unique_lock<std::mutex> enter_lock(entering);     // block new writers
            reading = true;
            std::unique_lock<std::mutex> leave_lock(leaving);
            cv.wait(leave_lock, [] { return num_writers == 0; });  // drain in-flight writers
            std::swap(data, other_data);
            size = 0;
            reading = false;
        }
        return other_data;
    }

It's a bit involved, and it took me a while to get it working, but I think it serves the purpose well.

In the normal case, when only writing is happening, reading is always false. So you do the usual thing and pay only for two extra atomic operations (the increment and decrement of num_writers) and two untaken branches. There is no need to lock any mutex on the common path, unlike the solution using a shared mutex, which is supposedly expensive: http://permalink.gmane.org/gmane.comp.lib.boost.devel/211180

Now suppose read is called. First the expensive, slow heap allocation happens, while writing continues uninterrupted. Then the entering mutex is locked, which has no immediate effect. Next, reading is set to true. From then on, any new call to write takes the first branch and eventually hits the entering mutex, which it cannot acquire (since it is already held), so those threads go to sleep.

Meanwhile, the reader thread is now waiting on the condition that the number of writers is 0. If we're lucky, it can proceed right away. If, however, some writers are in one of the two places between an increment and a decrement of num_writers, it will not. Every time a writer thread decrements num_writers, it checks whether the count has dropped to zero, and if so it signals the condition variable. Since num_writers is atomic, which prevents various reordering shenanigans, the last such thread is guaranteed to see num_writers == 0; the condition variable may also be notified more than once, but that's fine and can't lead to bad behaviour.

Once the condition variable has been signalled, all writers are either stuck in the first branch or have finished their modification. The reader thread can now safely swap the buffers, unlock everything, and return what it needs.

As mentioned earlier, in typical operation there are no locks at all, just the atomic increments/decrements and the branches. Even while a read is happening, the reader thread takes one lock and waits on one condition variable, while a typical writer thread does roughly one mutex lock/unlock and that's all (one or a small number of writer threads will also notify the condition variable).

-1
