Why does -Xrs reduce performance?

From IBM:

-Xrs

Disables signal handling in the JVM.

The -Xrs option prevents the Java™ runtime from handling any internally or externally generated signals, such as SIGSEGV and SIGABRT. Any signals that are raised are handled by the default operating-system handlers. Disabling signal handling in the JVM reduces performance by about 2-4%, depending on the application.

-Xrs:sync

On UNIX systems, this option disables JVM signal handling for the SIGSEGV, SIGFPE, SIGBUS, SIGILL, SIGTRAP, and SIGABRT signals. However, the JVM still handles the SIGQUIT and SIGTERM signals, among others. As with -Xrs, using -Xrs:sync reduces performance by about 2-4%, depending on the application.

Note: Setting this option prevents the JVM from producing dumps for signals such as SIGSEGV and SIGABRT, because the JVM no longer intercepts those signals.

https://www-01.ibm.com/support/knowledgecenter/SSYKE2_7.0.0/com.ibm.java.aix.70.doc/diag/appendixes/cmdline/Xrs.html

From my point of view, -Xrs is actually used to prevent the JVM from intercepting certain OS signals.

Since the JVM no longer has to intercept and process these signals, I would expect this to increase performance, not reduce it, yet IBM says the opposite.

Why does -Xrs reduce performance?

+7
java jvm
2 answers

Because of safepoints and VM operations, as well as other optimizations the JIT can perform when the JVM is allowed to handle signals.

The JVM sometimes has to perform operations that require it to pause execution entirely ("stop the world"), such as certain large-scale garbage collections, hot code replacement, internal recompilation of classes, and so on. To do this, it must ensure that all running threads promptly hit a barrier and pause, perform the operation, and then release the threads.

One technique that HotSpot uses (and probably other JVMs as well) to implement safepoints is a clever abuse of segfaults: it sets up a memory page that is not actually used for any data, and each thread periodically attempts to read from that page. When no VM operation is pending, the read succeeds with very low overhead, and the thread simply continues running.

When the JVM needs to perform a VM operation, it invalidates that memory page. The next time each thread hits its safepoint poll, it now triggers a segfault, which lets the JVM take control of that thread's execution; it holds the thread until the VM operation completes, restores the guard page, and resumes all threads.

When you turn off SIGSEGV handling, the JVM has to use other methods to synchronize safepoints, and those are less efficient than delegating the work to the processor's built-in memory protection.

In addition, the JVM does some serious profiling magic (conceptually similar to a CPU's branch predictor). One of the optimizations it uses: if it observes that a particular null check almost never sees null, it removes the check and relies on a segfault (expensive, but only in the rare case) to catch the null. This optimization also requires custom SIGSEGV handling.

+8

In addition to the safepoints mentioned by @chrylis, segfault handlers are also used for other clever optimization tricks, such as implicit null-pointer checks (at least they are in HotSpot). If profiling shows that the null branch of a null-checked piece of code rarely runs, the explicit check is optimized away and the unlikely event is instead covered by the signal handler.

Such optimizations cannot be performed without installing a custom signal handler.

+5
