What is the purpose of JMH @Fork?

IIUC, each fork spawns a separate virtual machine, the reason being that each VM instance may end up with slightly different JIT-compiled code?

I would also like to know what the time attribute does in the following annotations:

 @Warmup(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)
 @Measurement(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)

TIA, Ole

java benchmarking jmh
2 answers

JMH offers forking for several reasons. One of them is compilation-profile separation, as discussed by Rafael in the other answer. But that behaviour is not what the @Fork annotation controls (unless you choose 0 forks, which means no subprocesses are spawned to run the benchmarks). You can run all the benchmarks as part of your main test process (thus creating a mixed profile for the JIT to work with) using the warmup mode (-wm).

The reality is that many things can conspire to skew your results, and running a benchmark several times to establish the run-to-run variance is an important practice that JMH supports (and which most hand-rolled harnesses give you no help with). Reasons for run-to-run variance include (but I'm sure there are more):

  • The CPU starting out in a particular C-state and scaling its frequency up under load, then throttling back down as it heats up. You can control for this on certain OSes.

  • The memory alignment of your process, which can lead to run-to-run differences in cache behaviour.

  • Background application activity on the machine.
  • CPU allocation by the OS, which will vary, leading to a different set of CPUs being used for each run.
  • Page cache contents.
  • JIT compilation kicking in at slightly different points, which can lead to different results (this tends to happen when large bodies of code are benchmarked). Note that small single-threaded benchmarks will usually not have this problem.
  • GC activity kicking in with slightly different timing from run to run, leading to different results.

Running your benchmark with at least a few forks will help smooth out these differences and give you an idea of the run-to-run variance you can expect from your benchmark. I would recommend starting with the default of 10 forks and decreasing (or increasing) it experimentally depending on your benchmark.
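To illustrate what multiple forks buy you, here is a minimal sketch in plain Java (no JMH; the per-fork scores are made-up numbers, not real measurements) of the kind of across-fork statistics JMH reports, namely the mean score and the run-to-run spread:

```java
import java.util.Arrays;

public class ForkVariance {

    // Arithmetic mean of the per-fork scores.
    static double mean(double[] xs) {
        return Arrays.stream(xs).average().orElse(0.0);
    }

    // Sample standard deviation: the run-to-run spread between forks.
    static double stdDev(double[] xs) {
        double m = mean(xs);
        double ss = Arrays.stream(xs).map(x -> (x - m) * (x - m)).sum();
        return Math.sqrt(ss / (xs.length - 1));
    }

    public static void main(String[] args) {
        // Hypothetical scores (ops/ms) from five forks of the same benchmark.
        // A single-fork run would hide this spread entirely.
        double[] forkScores = {101.0, 98.5, 103.2, 97.8, 100.5};
        System.out.printf("mean   = %.2f ops/ms%n", mean(forkScores));
        System.out.printf("stddev = %.2f ops/ms%n", stdDev(forkScores));
    }
}
```

With only one fork you get a single number and no way to tell whether a 3% difference between two benchmarks is real or just run-to-run noise.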


The JVM optimizes an application by building a profile of that application's behavior. A fork is created in order to reset this profile. Otherwise, running:

 benchmarkFoo();
 benchmarkBar();

may lead to different measurements than running

 benchmarkBar();
 benchmarkFoo();

since the profile of the first benchmark affects the second.
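A minimal sketch of this effect in plain Java (no JMH; the names and workloads are illustrative, not from the question): both "benchmarks" go through the same hot call site, so the type profile the JIT gathers while timing the first workload also shapes how it compiles that call site for the second. Forking each benchmark into its own JVM resets that profile.

```java
// One public class per file; Work is a package-private helper interface.
interface Work {
    long apply(long x);
}

public class ProfilePollution {

    // Shared hot call site: the JIT profiles which implementations of
    // Work.apply() it sees here and optimizes accordingly.
    static long run(Work w, int iterations) {
        long acc = 0;
        for (int i = 0; i < iterations; i++) {
            acc += w.apply(i);
        }
        return acc;
    }

    public static void main(String[] args) {
        Work foo = x -> x * 2; // stand-in for benchmarkFoo's workload
        Work bar = x -> x + 1; // stand-in for benchmarkBar's workload

        // After run(foo, ...) the call site has seen one Work type; once
        // run(bar, ...) executes it has seen two, so bar may be measured
        // against a differently compiled run() than if it had gone first.
        System.out.println(run(foo, 1_000));
        System.out.println(run(bar, 1_000));
    }
}
```

This is why the measurement order matters inside a single JVM, and why a fresh fork per benchmark gives each one a clean profile.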

The time attribute determines how long JMH spends warming up or running the benchmark. If these values are chosen too short, your VM will not be warmed up enough, or your results may have too large a standard deviation.
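As a back-of-the-envelope sketch (plain Java; the values are taken from the annotations in the question, and each iteration runs for at least the configured time), the annotations above imply a minimum wall-clock budget per fork:

```java
public class BenchmarkBudget {

    // Lower bound on wall-clock time per fork: every warmup and
    // measurement iteration runs for at least timeMs milliseconds.
    static long minimumMillisPerFork(int warmupIters, int measureIters, long timeMs) {
        return (warmupIters + measureIters) * timeMs;
    }

    public static void main(String[] args) {
        // @Warmup(iterations = 10, time = 500, timeUnit = MILLISECONDS)
        // @Measurement(iterations = 10, time = 500, timeUnit = MILLISECONDS)
        long budget = minimumMillisPerFork(10, 10, 500);
        System.out.println("Minimum time per fork: " + budget + " ms");
    }
}
```

So each fork runs for at least about 10 seconds, and the total benchmark time scales with the number of forks on top of that.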

