The JVM runs in a single process, and threads in the JVM share the heap belonging to that process. How, then, does the JVM use multiple cores, which provide multiple OS threads, to achieve high concurrency?
Java uses the threads of the underlying OS to do the actual work of executing code on different processors when running on a multiprocessor machine. When a Java thread starts, it creates a corresponding OS thread, and the OS is responsible for scheduling it. The JVM itself does some bookkeeping and tracking, and Java language constructs such as volatile, synchronized, notify(), and wait() all affect the execution state of the OS thread.
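As a minimal sketch of that mapping (the class and thread names here are our own), each started java.lang.Thread is backed by an OS thread that the scheduler can place on any available core:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OsThreadDemo {
    // Starts n Java threads; each is backed by its own OS thread,
    // which the OS schedules onto available cores.
    static int runConcurrently(int n) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            workers[i] = new Thread(counter::incrementAndGet, "worker-" + i);
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join(); // wait for each OS-backed thread to finish
        }
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runConcurrently(4) + " threads ran");
    }
}
```

On Linux you could confirm the mapping by comparing the JVM's thread count against the entries under /proc/&lt;pid&gt;/task while the program runs.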
The JVM runs in the same process, and threads in the JVM share the heap belonging to this process.
JVMs do not have to "work in one process": even the garbage collector and other JVM internals run in separate threads, and the OS sometimes presents those threads as separate processes. On Linux, for example, the single process you see in the process list may actually hide a number of threads that some tools display as distinct processes. This is true even on a single-core machine.
However, you are correct that they all use the same heap. In fact, they share the same address space, which includes the code, interned strings, stack space, and so on.
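The shared heap is easy to demonstrate. In this sketch (class and field names are ours), two threads update the same heap object, and both updates are visible after the threads are joined:

```java
public class SharedHeapDemo {
    // A plain object allocated on the heap, reachable from both threads.
    static class Box { int value; }

    static int fillFromTwoThreads() throws InterruptedException {
        Box box = new Box(); // one heap object, shared by both threads
        Runnable add = () -> {
            synchronized (box) { box.value += 21; } // coordinate access
        };
        Thread a = new Thread(add);
        Thread b = new Thread(add);
        a.start(); b.start();
        a.join(); b.join();
        return box.value; // both increments landed in the same object
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(fillFromTwoThreads());
    }
}
```

The synchronized block matters: without it, the two read-modify-write updates could interleave and lose one increment.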
Then how does the JVM use multiple cores that provide multiple OS threads to provide high concurrency?
Threads improve performance for several reasons. Most obviously, running code in parallel can make a program faster: performing multiple tasks on multiple CPUs simultaneously can (though not always) improve application throughput. You can also isolate I/O operations in a single thread, which means other threads can keep executing while that thread waits on I/O (reads or writes to disk, network, etc.).
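The I/O-isolation point can be sketched as follows. Here a Thread.sleep stands in for a blocking I/O call (an assumption for the sake of a self-contained example); the main thread keeps doing CPU work while the "I/O" thread blocks:

```java
public class IoOverlapDemo {
    // sleep() simulates a blocking I/O call (disk or network wait).
    static long computeWhileIoWaits() throws InterruptedException {
        Thread io = new Thread(() -> {
            try {
                Thread.sleep(50); // simulated blocking I/O
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        io.start();
        long sum = 0;
        for (int i = 1; i <= 1000; i++) {
            sum += i; // CPU work proceeds while the I/O thread blocks
        }
        io.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(computeWhileIoWaits());
    }
}
```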
But in terms of memory, threads also gain performance from each processor's local cache. When a thread runs on a CPU, that CPU's high-speed cache lets the thread satisfy memory requests locally without spending time reading or writing main memory. This is why volatile and synchronized carry memory-synchronization semantics: the cache must be flushed to main memory or invalidated when threads need to coordinate or communicate with each other.
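A small sketch of the volatile part of this (names are ours): the volatile write below is guaranteed to become visible to the reader thread, so its spin loop terminates. Without volatile, the reader could in principle keep seeing a stale cached value of the flag.

```java
public class VisibilityDemo {
    // 'volatile' publishes the write to main memory and invalidates
    // the reader's cached copy, so the spin loop is guaranteed to end.
    static volatile boolean ready = false;

    static boolean signalAndObserve() throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) {
                Thread.onSpinWait(); // spin until the write becomes visible
            }
        });
        reader.start();
        ready = true;   // volatile write: not allowed to stay cache-local
        reader.join();  // returns only after the reader saw ready == true
        return ready;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(signalAndObserve());
    }
}
```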