From Goetz, Peierls, Bloch et al., Java Concurrency in Practice (2006):
3.1.2. Non-atomic 64-bit operations
When a thread reads a variable without synchronization, it may see a stale value, but at least it sees a value that was actually put there by some thread, rather than some random value. This safety guarantee is called out-of-thin-air safety.
Out-of-thin-air safety applies to all variables, with one exception: 64-bit numeric variables (double and long) that are not declared volatile (see Section 3.1.4). The Java Memory Model requires fetch and store operations to be atomic, but for non-volatile long and double variables the JVM is permitted to treat a 64-bit read or write as two separate 32-bit operations. If the reads and writes occur in different threads, it is therefore possible to read a non-volatile long and get back the high 32 bits of one value and the low 32 bits of another. [3]
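To make the failure mode concrete, here is a minimal sketch (class and field names are my own, not from the book) of the situation described: one thread alternates a non-volatile long between two values while another thread reads it. On a JVM that actually splits 64-bit accesses into two 32-bit halves, the reader may observe a torn value that no thread ever wrote; on most modern 64-bit JVMs the loop will simply never find one.

    public class TornLongRead {
        // Non-volatile 64-bit field: the JMM permits its reads and writes to be non-atomic.
        static long shared = 0L;

        public static void main(String[] args) {
            Thread writer = new Thread(() -> {
                while (true) {
                    shared = 0L;   // all 64 bits zero
                    shared = -1L;  // all 64 bits one
                }
            });
            writer.setDaemon(true);
            writer.start();

            // A torn read would combine 32 bits of 0L with 32 bits of -1L,
            // producing a value that was never written by any thread.
            while (true) {
                long observed = shared;
                if (observed != 0L && observed != -1L) {
                    System.out.println("Torn read: 0x" + Long.toHexString(observed));
                    return;
                }
            }
        }
    }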
Thus, even if you do not care about stale values, it is not safe to use shared mutable long and double variables in multithreaded programs unless they are declared volatile or guarded by a lock.
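For reference, a minimal sketch of the two remedies the book names (the holder classes below are illustrative, not from the book): declaring the field volatile makes its 64-bit reads and writes atomic, and guarding every access with the same lock achieves the same atomicity along with mutual exclusion.

    public class SafeLongHolders {

        // Remedy 1: volatile guarantees atomic reads and writes of the 64-bit field.
        static class VolatileHolder {
            private volatile long value;

            void set(long v) { value = v; }
            long get() { return value; }
        }

        // Remedy 2: guard every read and write with the same intrinsic lock;
        // synchronization provides both atomicity and visibility.
        static class GuardedHolder {
            private long value;

            synchronized void set(long v) { value = v; }
            synchronized long get() { return value; }
        }
    }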
[3] When the Java Virtual Machine specification was written, many widely used processor architectures could not efficiently provide atomic 64-bit arithmetic operations.
This was written after the release of Java 5 in 2004, which introduced many changes aimed at simplifying multithreaded and concurrent programming. So why does this rule still apply, even ten years later?
If it is only because Java applications can be run on 32-bit hardware, why can't the JVM handle this where it is actually necessary?
Wouldn't it be useful to be able to write low-latency multithreaded code without having to worry about this?
Adam