I know this has already been answered, but perhaps this is still useful:
Think of it as a sequence of atomic operations. A processor performs one atomic operation at a time.
Here we have the following:
- Read x
- Write x
- Add 1
- Multiply by 2
The order within each of the following two sequences is guaranteed:
- Read x, add 1, write x
- Read x, multiply by 2, write x
However, if you run them in parallel, the timing of each atomic operation relative to the atomic operations in the other sequence is unpredictable, so the two sequences can interleave.
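Here is a minimal sketch in Java of what that interleaving looks like (the class, field, and thread names are mine, purely for illustration). Each logical update is really three separate steps, so the two threads can step on each other:

```java
public class InterleavingDemo {
    static int x = 1; // shared, unsynchronized field

    public static void main(String[] args) throws InterruptedException {
        Thread addOne = new Thread(() -> {
            int tmp = x;   // read x
            tmp = tmp + 1; // add 1
            x = tmp;       // write x
        });
        Thread timesTwo = new Thread(() -> {
            int tmp = x;   // read x
            tmp = tmp * 2; // multiply by 2
            x = tmp;       // write x
        });
        addOne.start();
        timesTwo.start();
        addOne.join();
        timesTwo.join();
        // Depending on how the six steps interleave, x may end up
        // as 4 ((1+1)*2), 3 ((1*2)+1), or 2 (both threads read 1 first).
        System.out.println("x = " + x);
    }
}
```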
One possible execution order is the one Paul Butcher gives in his answer.
Here is an illustration I found on the Internet:

Each blue/purple block is one atomic operation; you can see how different orderings of the blocks produce different results.
To solve this problem, you can use the synchronized keyword.
As I understand it, if you mark two blocks of code (for example, two methods) as synchronized on the same object, then each block holds that object's lock while it executes, so the other block cannot run until the first one has finished. However, two blocks synchronized on two different objects can run in parallel.
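A minimal sketch of that idea (class and method names are mine): both methods below are synchronized instance methods, so they both lock the same object's monitor and their read-modify-write sequences can no longer interleave with each other.

```java
public class SafeCounter {
    private int x = 1;

    public synchronized void addOne() {
        x = x + 1; // holds this object's lock for the whole read-add-write
    }

    public synchronized void timesTwo() {
        x = x * 2; // cannot run while addOne() holds the same lock
    }

    public synchronized int get() {
        return x;
    }
}
```

Note that a synchronized instance method locks `this`, so calling addOne() and timesTwo() on two different SafeCounter instances locks two different objects, and those calls can still run in parallel, which is exactly the caveat in the paragraph above.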