When you change data in C#, what looks like a single operation can compile into several instructions. Take the following class:
    public class Number
    {
        private int a = 0;

        public void Add(int b)
        {
            a += b;
        }
    }
When you compile it, the body of Add produces the following IL code:
    IL_0000: nop
    IL_0001: ldarg.0
    IL_0002: dup
    IL_0003: ldfld int32 Number::a
    IL_0008: ldarg.1
    IL_0009: add
    IL_000a: stfld int32 Number::a
    IL_000f: ret
Now, let's say you have a Number object, and two threads call its Add method as follows:
    number.Add(2); // Thread 1
    number.Add(3); // Thread 2
If you want the result to be 5 (0 + 2 + 3), there is a problem: you do not know when these threads will execute their instructions. Both threads could execute IL_0003 (loading the value of a, which is still zero, onto the stack) before either executes IL_000a (actually storing the new value in the field), and you get the following:

    a = 0 + 2;
    a = 0 + 3;
The last thread to finish "wins," and at the end of the process, a is 2 or 3 instead of 5.
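To see the lost update in practice, here is a minimal sketch (not from the original: it loops many times so the race actually shows up, and it adds a Value getter to Number so the result can be read):

    using System;
    using System.Threading;

    public class Number
    {
        private int a = 0;
        public void Add(int b) { a += b; }  // not atomic: read, add, then write
        public int Value => a;              // getter added here only to inspect the result
    }

    public static class RaceDemo
    {
        public static void Main()
        {
            var number = new Number();

            // Many unsynchronized adds from each thread make a lost update likely.
            var t1 = new Thread(() => { for (int i = 0; i < 100000; i++) number.Add(1); });
            var t2 = new Thread(() => { for (int i = 0; i < 100000; i++) number.Add(1); });

            t1.Start(); t2.Start();
            t1.Join(); t2.Join();

            // With atomic adds this would always print 200000; as written, it is
            // usually less, because interleaved read-add-write cycles lose updates.
            Console.WriteLine(number.Value);
        }
    }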
So, you must make sure that one complete set of instructions finishes before the other begins. To do this, you can:
1) Lock access to the member while it is being written, using one of the many .NET synchronization primitives (for example, lock, Mutex, ReaderWriterLockSlim, etc.), so that only one thread can work on it at a time (see the lock sketch after this list).
2) Move write operations to a queue and process that queue on a single thread. As Torarin points out, you still have to synchronize access to the queue if it is not itself thread-safe, but for complex write operations it is worth it (see the queue sketch after this list).
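For option 1, a minimal sketch of Number guarded by lock (the _sync field name is just an illustration):

    public class Number
    {
        private readonly object _sync = new object();
        private int a = 0;

        public void Add(int b)
        {
            // Only one thread can hold the lock at a time, so the
            // read-add-write sequence below executes as a unit.
            lock (_sync)
            {
                a += b;
            }
        }
    }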
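For option 2, a sketch using BlockingCollection, which is itself thread-safe, so the queue needs no extra locking. The QueuedNumber name and the use of Task.Run for the consumer are my assumptions, not part of the original answer:

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    public class QueuedNumber
    {
        private readonly BlockingCollection<int> _writes = new BlockingCollection<int>();
        private int a = 0;

        public QueuedNumber()
        {
            // A single consumer applies every queued write, so `a` is only
            // ever touched by one thread.
            Task.Run(() =>
            {
                foreach (int b in _writes.GetConsumingEnumerable())
                    a += b;
            });
        }

        public void Add(int b) => _writes.Add(b);  // producers just enqueue
    }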
There are other methods as well. Some (for example, Interlocked) are limited to specific data types, and there are still more (for example, those discussed in Non-Blocking Synchronization and in Part 4 of Joseph Albahari's Threading in C#), but they are more complex: use them with caution.
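For instance, since a here is an int, a single Interlocked.Add call is enough; a sketch of the same Number class:

    using System.Threading;

    public class Number
    {
        private int a = 0;

        public void Add(int b)
        {
            // Performs the read, add, and write as one atomic operation.
            Interlocked.Add(ref a, b);
        }
    }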