What C++11 <atomic> operations / memory orders guarantee freshness?

Possible duplicate:
Concurrency: Atomic and volatile in C++11 memory model

Is there any guarantee of freshness in the C++11 <atomic> specification? The descriptions of the different memory orders only talk about reordering (as far as I can see).

In particular, in this situation:

#include <atomic>

std::atomic<int> cancel_work(0);

// Thread 1 is executing this function
void thread1_func()
{
    ...
    while (cancel_work.load(<some memory order>) == 0) {
        ...do work...
    }
}

// Thread 2 executes this function
void thread2_func()
{
    ...
    cancel_work.store(1, <some memory order>);
    ...
}

If thread 1 and thread 2 share no data other than cancel_work, it seems to me that no ordering guarantees are needed, and std::memory_order_relaxed is enough for both the store and the load. But does this guarantee that thread 1 will ever see the update to cancel_work, rather than spinning forever on its locally cached copy without ever refreshing it from main memory? If not, what is the minimum required to get that guarantee?

+6
3 answers

Nothing guarantees this: everything is about ordering. Even memory_order_seq_cst only guarantees that everything happens in a single total order. In theory, the compiler/library/CPU could schedule every load of cancel_work at the very end of the program.

There is a general statement in 29.3p13 that

Implementations should make atomic stores visible to atomic loads within a reasonable amount of time.

But there is no specification of what constitutes a "reasonable amount of time."

So: memory_order_relaxed should be just fine, but memory_order_seq_cst may work better on some platforms, as the cache line may be reloaded earlier.
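
For what it is worth, here is a minimal, self-contained sketch of the pattern from the question with memory_order_relaxed on both sides, as suggested above; the 100 ms delay, the iteration counter and the printf are illustrative additions only, not part of the original code:

#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<int> cancel_work(0);

// Thread 1: keep working until the flag is observed as non-zero.
void thread1_func()
{
    long iterations = 0;
    while (cancel_work.load(std::memory_order_relaxed) == 0) {
        ++iterations;  // ...do work...
    }
    std::printf("worker stopped after %ld iterations\n", iterations);
}

// Thread 2: request cancellation after a short delay.
void thread2_func()
{
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    cancel_work.store(1, std::memory_order_relaxed);
}

int main()
{
    std::thread t1(thread1_func);
    std::thread t2(thread2_func);
    t1.join();
    t2.join();
}

Note that no other data is passed between the threads here; if thread 2 also published results that thread 1 read after seeing the flag flip, release/acquire ordering would be needed instead of relaxed.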

+4

It appears this answer also answers my question. Well, hopefully my question will help googlers find it more easily.

Thread 1 "SHOULD" see the updated cancel_work within a "reasonable amount of time"; however, what exactly counts as reasonable is (apparently) not specified.

+3

A function call [one that is not inlined by the compiler] automatically forces a reload of any registers holding non-local variables. So as long as the processor running thread1_func() sees the cache contents flushed or updated as a result of the store, it will work.

memory_order_relaxed should ensure that the data is (at some point in the future) propagated to the other processors' caches [this is automatic on x86, but not on all processor types; some ARM processors, for example, require code-driven flushing], but it is not guaranteed to happen before any other write [to regular or atomic variables].

And note that the memory order ONLY affects the current thread/processor. What another thread or processor sees around the store or load is entirely up to that thread/processor. What I mean is that thread1_func() in your case may still read the value 0 for some amount of time after the value 1 has been written by the other processor/thread. All the atomic operation guarantees is that it reads either the OLD value or the NEW value, never something in between [and, unless you use memory_order_relaxed, the load/store is also ordered with respect to the other operations within the thread]. However, whatever memory order you use, the atomic must guarantee [if correctly implemented] that the value is eventually updated. It is just harder to say exactly when in the general case.
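
To illustrate that "eventually updated" point [not part of the answer above, just a sketch of the contrast it describes]: with std::atomic, even a relaxed load must actually be performed on every iteration, whereas a plain non-atomic flag is a data race and the compiler is free to hoist the read out of the loop:

#include <atomic>

int cancel_plain = 0;               // non-atomic flag
std::atomic<int> cancel_atomic(0);  // atomic flag

// Data race: the compiler may assume cancel_plain cannot change inside the
// loop and hoist the load out of it, turning this into an infinite loop.
// Formally undefined behaviour if another thread writes cancel_plain.
void broken_wait()
{
    while (cancel_plain == 0) {
        // ...do work...
    }
}

// Even with memory_order_relaxed, every iteration performs a real load of
// the atomic, so a store made by another thread is eventually observed.
void correct_wait()
{
    while (cancel_atomic.load(std::memory_order_relaxed) == 0) {
        // ...do work...
    }
}

The chosen memory order then only decides how that load is ordered relative to the thread's other memory operations, which is the point made above.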

+2
