How does C# guarantee atomicity of read/write operations?

The C# specification states in section 5.5 that reads and writes of certain types (namely bool, char, byte, sbyte, short, ushort, uint, int, float, and reference types) are guaranteed to be atomic.

This piqued my interest. How can it do that? In my modest personal experience, the only ways I knew to make reads and writes appear atomic were locking or memory barriers; doing that for every single read/write would be a performance killer. Yet C# achieves something with a similar effect.

Perhaps other languages (like Java) do this too; I honestly don't know. My question is not really language-specific, but I know that C# does it.

I understand that this may involve certain processor-specific instructions and may not be usable from C/C++. However, I would still like to know how it works.

[EDIT] Honestly, I had assumed that reads and writes could be non-atomic under certain conditions, for example when one processor reads a memory location while another processor is writing to it. Does this only happen when the processor cannot handle the entire value in one operation, for example because it is too large or because the memory is not aligned on the appropriate boundary?

+7
c# atomic
5 answers

The reason these types are guaranteed to be atomic is that they are all 32 bits or smaller. Since .NET only runs on 32-bit and 64-bit operating systems, the processor architecture can read and write the entire value in a single operation. This is in contrast to, say, an Int64 on a 32-bit platform, which must be read and written in two 32-bit operations.

I am not a hardware guy, so I apologize if my terminology makes me sound like an amateur, but that is the basic idea.
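
To make the Int64 case concrete, here is a minimal sketch (not from the original answer; the field name and bit patterns are my own illustration) that can exhibit a torn read of a long when compiled and run as a 32-bit process:

    using System;
    using System.Threading;

    class TornReadDemo
    {
        // Unsynchronized 64-bit field: in a 32-bit process its two
        // halves may be written by two separate 32-bit stores.
        static long _value;

        static void Main()
        {
            var writer = new Thread(() =>
            {
                while (true)
                {
                    // Alternate between two values whose upper and
                    // lower 32-bit halves both differ.
                    _value = 0L;
                    _value = -1L; // 0xFFFFFFFFFFFFFFFF
                }
            }) { IsBackground = true };
            writer.Start();

            while (true)
            {
                long read = _value;
                if (read != 0L && read != -1L)
                {
                    // One half came from each write: a torn read.
                    Console.WriteLine($"Torn read: 0x{read:X16}");
                    return;
                }
            }
        }
    }

In a 64-bit process, aligned 64-bit loads and stores happen to be atomic, so the loop will typically spin forever; that is exactly the platform dependence described above.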

+13

It's cheap enough to implement this atomicity guarantee on x86 and x64 cores because the CLR only promises atomicity for variables that are 32-bit or smaller. All that is required is that the variable is properly aligned and does not straddle a cache line boundary. The JIT compiler ensures this by allocating local variables at 4-byte-aligned stack offsets. The GC heap manager does the same for heap allocations.
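
When a 64-bit variable does need atomic access on a 32-bit platform, the Interlocked class fills the gap that this guarantee leaves. A small sketch (the Counter64 type is my own illustration, not from the answer):

    using System.Threading;

    class Counter64
    {
        // 64-bit field: not covered by the CLR's atomicity
        // guarantee in a 32-bit process.
        private long _count;

        public void Increment() => Interlocked.Increment(ref _count);

        // Interlocked.Read performs an atomic 64-bit read even in a
        // 32-bit process, so no torn value can be observed.
        public long Value => Interlocked.Read(ref _count);
    }

On x64 a plain aligned read of the field would happen to be atomic anyway, but Interlocked.Read makes the intent explicit and keeps the code correct on both platforms.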

Notably, the CLR's guarantee is not a very strong one. The alignment guarantee is not good enough to write code that performs consistently on arrays of doubles, as nicely demonstrated in this thread. For the same reason it is also very difficult to interop with machine code that uses SIMD instructions.

+4

On x86, reads and writes are atomic anyway; it is supported at the hardware level. This does not mean, however, that operations such as addition and multiplication are atomic: they require a load, a computation, and then a store, which means they can interleave with other threads. That's where the LOCK prefix comes in.
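
To illustrate the difference between an atomic read/write and an atomic read-modify-write, here is a minimal sketch (the names and iteration count are my own, for illustration): the plain ++ races, while Interlocked.Increment, which compiles down to a LOCK-prefixed instruction on x86/x64, does not:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class IncrementRace
    {
        static int _plain;  // incremented with ++ (load, add, store)
        static int _atomic; // incremented with Interlocked

        static void Main()
        {
            // Two workers each increment both counters a million times.
            Parallel.For(0, 2, _ =>
            {
                for (int i = 0; i < 1000000; i++)
                {
                    _plain++;                           // updates can be lost
                    Interlocked.Increment(ref _atomic); // never loses an update
                }
            });

            // _plain typically prints less than 2000000;
            // _atomic always prints exactly 2000000.
            Console.WriteLine($"plain: {_plain}, atomic: {_atomic}");
        }
    }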

You mentioned locking and memory barriers; they have nothing to do with reads and writes being atomic. On x86, with or without memory barriers, there is no way for you to observe a half-written 32-bit value.

+3

Yes, C# and Java guarantee that loads and stores of certain primitive types are atomic, as you say. This is cheap to provide because the processors capable of running .NET or the JVM guarantee that loads and stores of suitably aligned primitive types are atomic.

Now, what neither C# nor Java nor the processors they run on guarantee by default, and what is expensive, is emitting memory barriers so that these variables can be used for synchronization in a multi-threaded program. However, in Java and C# you can mark a variable as "volatile", in which case the compiler takes care of emitting the appropriate memory barriers.
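
A minimal sketch of the volatile pattern this answer describes (the type, field, and method names are my own illustration): the volatile write publishes the result with release semantics, and the volatile read acquires it:

    class VolatileFlag
    {
        // Without volatile, the JIT could cache _done in a register
        // and the consumer loop might never see the producer's store.
        private volatile bool _done;
        private int _result;

        public void Producer()
        {
            _result = 42;  // ordinary write
            _done = true;  // volatile write: ordered after the write to _result
        }

        public int Consumer()
        {
            while (!_done) { } // volatile read: spins until the flag is visible
            return _result;    // guaranteed to observe 42
        }
    }

Note that volatile provides acquire/release ordering and visibility, not atomic read-modify-write: an increment of a volatile field still races.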

+2

You can't. Even going all the way down to assembly, you have to use special LOCK opcode prefixes to guarantee that another core, or even another process, won't come along and wipe out all your hard work.

-4
