The real problem here is that when multiple threads operate on the data structure, the threads will not necessarily march in lockstep.
One thread reads on behalf of user1. Another thread writes on behalf of user2. Neither thread can predict where the other is in its own execution, and we cannot predict the order in which the two users' operations will complete. If the write updates the data first, the read will see the updated state, even though user1 may have issued the read slightly earlier.
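A minimal sketch of this race, in Python. The record, field names, and thread roles here are illustrative assumptions, not from the original answer; the point is only that the read may observe either the old or the new state depending on scheduling:

```python
import threading

# Hypothetical shared record: user2's thread writes it, user1's reads it.
record = {"status": "old"}
seen = []

def reader():
    # user1's read: observes whatever state happens to be current.
    seen.append(record["status"])

def writer():
    # user2's write: updates the record.
    record["status"] = "new"

t1 = threading.Thread(target=reader)
t2 = threading.Thread(target=writer)
t1.start(); t2.start()
t1.join(); t2.join()

# Either outcome is a correct execution; which one you get is timing.
assert seen[0] in ("old", "new")
```

Running this repeatedly can yield either value; neither result is a bug.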
Reading or modifying during an iteration is the same situation, with the additional consideration that advancing to the next element (during iteration) is essentially a "read" operation on the map's structure, if not on the content of any specific entry in it.
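This is easy to see in Python, where mutating a dict while iterating it fails fast, precisely because advancing the iterator "reads" the dict's structure (Java's HashMap iterators behave analogously, throwing ConcurrentModificationException):

```python
d = {"user1": "reading", "user2": "writing"}

failed = False
try:
    for key in d:
        d["user3"] = "new"   # structural change mid-iteration
except RuntimeError:
    failed = True

# The iterator detected the concurrent structural change.
assert failed
```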
So, when you allow concurrency on these data structures, what you get is "close enough" semantics in terms of timing. (This is very much the same consideration as with databases, except that we are used to thinking about databases this way, and the timescales differ by several orders of magnitude.)
NOTE: a comment on the nice little timeline shown by @Matts in another answer...
The timeline shows two threads, with a start and an end for each thread. The starts of the two threads can occur in either order: (a, b) or (b, a). The ends can likewise occur in either order, because you cannot know how long each operation will take. That gives four ways two threads can start and end: a starts first and a ends first; a starts first and b ends first; b starts first and a ends first; b starts first and b ends first. Now... imagine 20 threads doing the same thing in response to, say, 20 end users sending requests for this and that. How many possible orderings are there?
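A quick sketch of that combinatorics, counting the way the answer does (any ordering of the starts times any ordering of the ends; the exact count of true event interleavings is computed differently, but the explosive growth is the same):

```python
from math import factorial

def start_end_orderings(n):
    # n! orderings of the starts, times n! orderings of the ends.
    return factorial(n) * factorial(n)

assert start_end_orderings(2) == 4   # the four two-thread cases above
print(start_end_orderings(20))       # astronomically many for 20 threads
```

For 20 threads the count is on the order of 10^36, which is why reasoning about any single interleaving is hopeless.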