What is the performance cost of sharing objects between threads?

I know that reading a single object from multiple threads is safe in Java as long as the object is not written to. But what are the performance implications of doing this, as opposed to copying the data for each thread?

Do threads have to wait for each other to finish reading the memory? Or is the data implicitly copied (the reason volatile exists)? And what does this mean for the memory use of the whole JVM? And how does all this differ when the object being read is older than the threads that read it, as opposed to being created during their lifetime?

+5
3 answers

If you know that the object will not change (for example, immutable objects such as String or Integer), and you therefore avoid using any of the synchronization constructs (synchronized, volatile), then reading this object from several threads has no performance impact. All threads access the memory where the object is stored in parallel.
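Here is a minimal sketch of what that looks like (class and variable names are just illustrative): several threads reading the same immutable String with no locking at all.

    import java.util.ArrayList;
    import java.util.List;

    public class SharedReadOnly {
        // Published before the reader threads start and never written afterwards.
        private static final String SHARED = "immutable shared data";

        public static void main(String[] args) throws InterruptedException {
            List<Thread> readers = new ArrayList<>();
            for (int i = 0; i < 4; i++) {
                Thread t = new Thread(() -> {
                    // Plain reads, no synchronization: every thread sees the same object.
                    int total = 0;
                    for (int j = 0; j < 1_000_000; j++) {
                        total += SHARED.length();
                    }
                    System.out.println(Thread.currentThread().getName() + ": " + total);
                });
                readers.add(t);
                t.start();
            }
            for (Thread t : readers) {
                t.join();
            }
        }
    }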

The JVM may, however, choose to cache some values locally in each thread for performance reasons. Using volatile prohibits exactly this behavior: the JVM has to explicitly and atomically access a volatile field every time.
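The usual illustration of this is a stop flag (a sketch, with made-up names): without volatile, the worker thread may keep reusing a cached value of the field and never notice the update; declaring it volatile forces every read to see the latest write.

    public class StopFlag {
        // volatile forces each read/write of this field to go to the shared value,
        // so the worker thread cannot keep reading a stale cached copy.
        private static volatile boolean running = true;

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                long iterations = 0;
                while (running) {   // re-reads the field on every loop iteration
                    iterations++;
                }
                System.out.println("stopped after " + iterations + " iterations");
            });
            worker.start();

            Thread.sleep(100);
            running = false;        // visible to the worker because the field is volatile
            worker.join();
        }
    }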

+3


To have shared state between several threads, you have to coordinate access to it using some kind of synchronization mechanism (mutexes, synchronized, CAS). I'm not sure what you expect to hear about the "performance impact": it will depend on the specific scenario and context. In general, you will pay some price for coordinating access to a shared object across multiple threads.
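As a rough sketch of that coordination (illustrative code, not a benchmark), the same counter can be guarded with synchronized or updated with CAS via AtomicInteger; both are correct under contention, while a plain unsynchronized int increment would not be.

    import java.util.concurrent.atomic.AtomicInteger;

    public class SharedCounters {
        private int lockedCount = 0;
        private final AtomicInteger casCount = new AtomicInteger();

        // Coordination via a monitor lock: threads take turns.
        public synchronized void incrementLocked() {
            lockedCount++;
        }

        // Coordination via compare-and-swap: no lock, but retries under contention.
        public void incrementCas() {
            casCount.incrementAndGet();
        }

        public static void main(String[] args) throws InterruptedException {
            SharedCounters counters = new SharedCounters();
            Thread[] threads = new Thread[4];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    for (int j = 0; j < 100_000; j++) {
                        counters.incrementLocked();
                        counters.incrementCas();
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) {
                t.join();
            }
            // join() establishes happens-before, so these reads are safe.
            System.out.println("locked: " + counters.lockedCount + ", cas: " + counters.casCount.get());
        }
    }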

+1
