How does Java use multiple cores?

The JVM runs in a single process, and threads in the JVM share the heap belonging to that process. Then how does the JVM make use of multiple cores, which provide multiple OS threads, for high concurrency?

+59
java multithreading parallel-processing
Dec 14 '10 at 6:15
4 answers

You can use multiple cores by using multiple threads. But using more threads than the number of cores present in the machine can be a waste of resources. You can use availableProcessors() to get the number of cores.
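As an illustration (a minimal sketch, not part of the original answer), a thread pool can be sized to the core count reported by availableProcessors() and given one trivial task per core:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class CoreSizedPool {
        public static void main(String[] args) throws InterruptedException {
            // Ask the JVM how many cores (logical processors) are available.
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            for (int i = 0; i < cores; i++) {
                final int id = i;
                // Each task prints the worker thread it landed on.
                pool.submit(() -> System.out.println(
                        "Task " + id + " on " + Thread.currentThread().getName()));
            }

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }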

Java 7 has a fork/join framework for making use of multiple cores.
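A hedged sketch of that framework (the class name and threshold are invented for the example): a RecursiveTask splits an array sum in half until the pieces are small, and a ForkJoinPool spreads the pieces across the available cores.

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class SumTask extends RecursiveTask<Long> {
        private static final int THRESHOLD = 10_000;
        private final long[] data;
        private final int from, to;

        SumTask(long[] data, int from, int to) {
            this.data = data;
            this.from = from;
            this.to = to;
        }

        @Override
        protected Long compute() {
            if (to - from <= THRESHOLD) {
                // Small enough: sum sequentially.
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            }
            int mid = (from + to) >>> 1;
            SumTask left = new SumTask(data, from, mid);
            SumTask right = new SumTask(data, mid, to);
            left.fork();                          // schedule the left half asynchronously
            return right.compute() + left.join(); // compute the right half, then wait for the left
        }

        public static void main(String[] args) {
            long[] data = new long[1_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i;
            long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
            System.out.println("Sum = " + sum);
        }
    }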

Related questions:

  • Is a multi-threaded algorithm required to use multi-core processors?
  • Threads per processor
  • Multithreaded quicksort or merge sort in Java?
+27
Dec 14 '10 at 6:52

Green threads were replaced with native threads in Java 1.2.

+18
Dec 14 '10 at 6:24

Java will benefit from multiple cores if the OS distributes its threads over the available processors. The JVM itself does nothing special to spread threads evenly across multiple cores. A few things to keep in mind:

  • When implementing parallel algorithms, it is usually best to create as many worker threads as there are cores (Runtime.getRuntime().availableProcessors()): no more, no less.
  • Use the facilities provided by the java.util.concurrent package (see the sketch after this list).
  • Make sure Java Concurrency in Practice is part of your personal library.
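As a small, hedged example of the java.util.concurrent point (the task bodies are placeholders): invokeAll submits a batch of Callables to a core-sized pool and blocks until all of the resulting Futures are done.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class InvokeAllDemo {
        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);

            List<Callable<Integer>> tasks = new ArrayList<>();
            for (int i = 1; i <= cores; i++) {
                final int n = i;
                tasks.add(() -> n * n);  // trivial stand-in for real work
            }

            int total = 0;
            // invokeAll blocks until every task has completed.
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                total += f.get();
            }
            System.out.println("Total = " + total);
            pool.shutdown();
        }
    }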
+16
Dec 14 '10 at 6:58

The JVM runs in a single process, and threads in the JVM share the heap belonging to that process. Then how does the JVM make use of multiple cores, which provide multiple OS threads, for high concurrency?

Java uses the threads of the underlying OS to do the actual work of executing code on different CPUs, if it is running on a multiprocessor machine. When each Java thread is started, it creates an associated native OS thread, and the OS is responsible for scheduling that thread, etc. The JVM does some thread management and bookkeeping, and Java language constructs such as volatile, synchronized, notify(), wait(), etc. all affect the run status of the underlying OS thread.
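To make the wait()/notify() part concrete, here is a minimal sketch (names invented for the example): one thread parks its OS thread in wait() until another thread flips a flag and calls notify() under the same lock.

    public class WaitNotifyDemo {
        private static final Object lock = new Object();
        private static boolean ready = false;  // guarded by lock

        public static void main(String[] args) throws InterruptedException {
            Thread waiter = new Thread(() -> {
                synchronized (lock) {
                    while (!ready) {            // guard against spurious wakeups
                        try {
                            lock.wait();        // releases the lock and parks the OS thread
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                }
                System.out.println("Woke up on " + Thread.currentThread().getName());
            });
            waiter.start();

            Thread.sleep(100);                  // give the waiter time to park
            synchronized (lock) {
                ready = true;
                lock.notify();                  // wakes the parked OS thread
            }
            waiter.join();
        }
    }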

The JVM runs in a single process, and threads in the JVM share the heap belonging to that process.

The JVM does not strictly "run in a single process", because even the garbage collector and other JVM code run in different threads, and the OS often represents these different threads as different processes. On Linux, for example, the single process that you see in the process list often masquerades as a bunch of different thread processes. This is true even if you are on a single-core machine.

However, you are correct that they all share the same heap space. In fact, they share the same entire memory space, which includes code, interned strings, stack space, etc.

Then how does the JVM make use of multiple cores, which provide multiple OS threads, for high concurrency?

Threads yield performance improvements for several reasons. Obviously, straight concurrency often makes a program run faster: being able to carry out multiple CPU tasks at the same time can (although not always) improve application throughput. You can also isolate I/O operations in their own thread, which means other threads can keep executing while one thread is waiting on I/O (reads/writes to disk or network, etc.).
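A quick sketch of the I/O point (the file path is just a placeholder): a background thread blocks on a read while the main thread keeps doing CPU work.

    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class IoInBackgroundDemo {
        public static void main(String[] args) throws Exception {
            Thread ioThread = new Thread(() -> {
                try {
                    // Blocking I/O: the OS parks this thread until the data arrives.
                    byte[] data = Files.readAllBytes(Paths.get("/tmp/some-large-file"));
                    System.out.println("Read " + data.length + " bytes");
                } catch (Exception e) {
                    System.out.println("I/O failed: " + e.getMessage());
                }
            });
            ioThread.start();

            long sum = 0;
            for (long i = 0; i < 100_000_000L; i++) {  // CPU-bound work runs concurrently
                sum += i;
            }
            System.out.println("Computed " + sum + " while the I/O was in flight");
            ioThread.join();
        }
    }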

But in terms of memory, threads gain many of their performance improvements from per-CPU cached memory. When a thread runs on a CPU, the local high-speed memory cache of that CPU lets the thread satisfy its storage requests locally, without spending time reading or writing to main memory. This is why volatile and synchronized involve memory synchronization constructs: the cache has to be flushed to main memory or invalidated when threads need to coordinate their work or communicate with each other.
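The visibility point can be illustrated with a volatile flag (a sketch with invented names): without volatile, the reader's core could keep serving a stale cached value of the flag and spin forever; the volatile write forces the update to become visible.

    public class VolatileFlagDemo {
        private static volatile boolean stop = false;

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                long i = 0;
                while (!stop) {   // volatile read: always observes the latest write
                    i++;
                }
                System.out.println("Stopped after " + i + " iterations");
            });
            worker.start();

            Thread.sleep(500);
            stop = true;          // volatile write: made visible to the other core
            worker.join();
        }
    }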

+15
Feb 23 '17 at 13:10


