Is a generic mutex more efficient than an atomic of a relatively large structure?

Suppose I have a class that looks like this (this is its actual size):

    class K {
    public:
        long long get_x() const;      // locks m_mutex in shared (read-only) mode
        void update( long long w );   // locks m_mutex with a unique_lock
    private:
        long long m_a;
        long long m_b;
        long long m_c;
        long long m_x;
        double m_flow_factor;
        mutable boost::shared_mutex m_mutex;
    };

As you can see, this should be thread-safe. The update function is only ever called by one thread at a time (that is guaranteed, although which thread calls it is not known in advance), but the accessor can be called by several threads at the same time.

The update function changes all the values and is called very often (hundreds of times per second). The current implementation, as you can guess, will lock a lot.

I considered using std::atomic to avoid the locking and potentially make this code more efficient. However, I really need the update function to update all the values together, atomically. So I am considering doing something like this:

    class K {
    public:
        long long get_x() const {
            return data.load().x;
        }
        void update( long long w ) {
            auto data_now = data.load();
            // ... work with data_now
            data.store( data_now );
        }
    private:
        struct Data {
            long long a;
            long long b;
            long long c;
            long long x;
            double flow_factor;
        };
        std::atomic<Data> data;
    };

My current understanding of std::atomic is that even though this code is more readable than the previous version (since it has no lock declarations everywhere), because the K::Data structure is "big", std::atomic will just be implemented with an ordinary lock under the hood (so it should be no faster than my original implementation).

Am I right?

+6
2 answers

Any specialization of std::atomic for a structure like that will involve internal locking, so you have gained nothing; worse, you now have a data race between the load and the store, which you did not have before, since the previous version (presumably?) held an exclusive lock around the entire read-modify-write block.
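One alternative the answers do not mention, sketched here only as an option to profile (not a drop-in recommendation): publish the whole struct as an immutable snapshot behind a std::shared_ptr. The single writer (as the question guarantees) copies, modifies, and atomically swaps the pointer; readers grab a consistent snapshot with one atomic pointer load. The update logic below is hypothetical — the real one touches all fields.

```cpp
#include <atomic>
#include <memory>

struct Data {
    long long a, b, c, x;
    double flow_factor;
};

class K {
public:
    K() : data_(std::make_shared<const Data>()) {}

    long long get_x() const {
        // One atomic pointer load; the snapshot stays valid even if the
        // writer publishes a new Data while we are still reading it.
        return std::atomic_load(&data_)->x;
    }

    void update(long long w) {
        // Single writer: copy the current snapshot, modify, publish.
        auto next = std::make_shared<Data>(*std::atomic_load(&data_));
        next->x = w;  // hypothetical update logic
        std::atomic_store(&data_, std::shared_ptr<const Data>(std::move(next)));
    }

private:
    std::shared_ptr<const Data> data_;
};
```

Note that the std::atomic_load / std::atomic_store free functions on shared_ptr are deprecated in C++20 in favor of std::atomic<std::shared_ptr<T>>, and they may themselves lock internally, so this still needs profiling against the mutex version.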

Also, with shared_mutex it may be prudent to profile against a regular mutex; you may find that the plain mutex performs better (it all depends on how long you hold the locks).

The advantage of shared_mutex only shows up when read locks are held for long periods and writes are rare; otherwise the extra overhead of shared_mutex will kill any gains you would have over a regular mutex.

+9

std::atomic is not necessarily slower than std::mutex. For example, in MSVC 14.0, the implementation of the std::atomic store for such a type looks like this:

    inline void _Atomic_copy(
        volatile _Atomic_flag_t *_Flag, size_t _Size,
        volatile void *_Tgt, volatile const void *_Src,
        memory_order _Order)
    {   /* atomically copy *_Src to *_Tgt with memory ordering */
        _Lock_spin_lock(_Flag);
        _CSTD memcpy((void *)_Tgt, (void *)_Src, _Size);
        _Unlock_spin_lock(_Flag);
    }

    inline void _Lock_spin_lock(
        volatile _Atomic_flag_t *_Flag)
    {   /* spin until _Flag successfully set */
        while (_ATOMIC_FLAG_TEST_AND_SET(_Flag, memory_order_acquire))
            _YIELD_PROCESSOR;
    }

There is no guarantee that this spin lock is faster than a proper std::mutex — it depends on what exactly you are doing. But std::atomic is certainly not always a suboptimal solution compared to std::mutex.

+1
