How does JMH measure runtimes below the granularity of the system timer?

I'm experimenting with microbenchmarks; I chose JMH and read some articles. How does JMH measure the execution time of methods that run faster than the granularity of the system timer?

More detailed explanation:

These are the tests that I run (the method names speak for themselves):

    import org.openjdk.jmh.annotations.*;
    import org.openjdk.jmh.infra.Blackhole;

    import java.util.concurrent.TimeUnit;

    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @State(Scope.Thread)
    @Warmup(iterations = 10, time = 200, timeUnit = TimeUnit.NANOSECONDS)
    @Measurement(iterations = 20, time = 200, timeUnit = TimeUnit.NANOSECONDS)
    public class RandomBenchmark {

        public long lastValue;

        @Benchmark
        @Fork(1)
        public void blankMethod() {
        }

        @Benchmark
        @Fork(1)
        public void simpleMethod(Blackhole blackhole) {
            int i = 0;
            blackhole.consume(i++);
        }

        @Benchmark
        @Fork(1)
        public void granularityMethod(Blackhole blackhole) {
            long initialTime = System.nanoTime();
            long measuredTime;
            do {
                measuredTime = System.nanoTime();
            } while (measuredTime == initialTime);
            blackhole.consume(measuredTime);
        }
    }
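For completeness, this is the kind of main-method runner I use to launch the class (a standard JMH pattern; the class name BenchmarkRunner is my own, not part of JMH):

    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.RunnerException;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    public class BenchmarkRunner {
        public static void main(String[] args) throws RunnerException {
            // Run all benchmarks whose class name matches RandomBenchmark.
            Options opt = new OptionsBuilder()
                    .include(RandomBenchmark.class.getSimpleName())
                    .build();
            new Runner(opt).run();
        }
    }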

Here are the results:

    # Run complete. Total time: 00:00:02

    Benchmark                          Mode  Cnt    Score    Error  Units
    RandomBenchmark.blankMethod        avgt   20    0,887 ±  0,274  ns/op
    RandomBenchmark.granularityMethod  avgt   20  407,002 ± 26,297  ns/op
    RandomBenchmark.simpleMethod       avgt   20    6,979 ±  0,743  ns/op

I'm currently running on Windows 7, which, as described in various articles, has a particularly coarse timer granularity (407 ns here). Checking with the code below confirms that a new timer value really does appear only about every 400 ns:

    final int sampleSize = 100;
    long[] timeMarks = new long[sampleSize];
    for (int i = 0; i < sampleSize; i++) {
        timeMarks[i] = System.nanoTime();
    }
    for (long timeMark : timeMarks) {
        System.out.println(timeMark);
    }
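A small variant of the same check (my addition, not in the original test) prints the deltas between consecutive samples directly, which makes the granularity step easier to see:

    // Prints the gaps between consecutive nanoTime() samples; on a coarse
    // timer most gaps are 0 and the non-zero ones cluster around the
    // granularity (~400 ns on this Windows 7 machine).
    final int sampleSize = 100;
    long[] timeMarks = new long[sampleSize];
    for (int i = 0; i < sampleSize; i++) {
        timeMarks[i] = System.nanoTime();
    }
    for (int i = 1; i < sampleSize; i++) {
        System.out.println(timeMarks[i] - timeMarks[i - 1]);
    }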

It's hard to fully understand how the generated methods work exactly, but looking at the decompiled JMH code, it seems to simply call System.nanoTime() before and after execution and take the difference. How can it report execution times of a couple of nanoseconds per method when the timer granularity is about 400 ns?

1 answer

You are absolutely right: you cannot measure a single event that is shorter than the granularity of your system timer.

JMH does not measure every call to the benchmark method. Instead, it calls System.nanoTime() before the iteration starts, executes the benchmark method X times, and calls System.nanoTime() again when the iteration ends. The reported result is the time difference divided by the number of operations (you can declare that a single invocation performs more than one operation with @OperationsPerInvocation).
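To illustrate, here is a minimal, self-contained sketch of that scheme. It is illustrative only: the names and the timer thread are mine, and JMH's actual generated code adds blackholes, batching, and more infrastructure.

    import java.util.concurrent.TimeUnit;

    public class MeasurementSketch {

        // Flag flipped by a timer thread when the iteration time is up,
        // mimicking JMH's iteration timer.
        static volatile boolean iterationDone;

        // Stand-in for the @Benchmark method under test.
        static void benchmarkMethod() {
        }

        public static void main(String[] args) {
            // End the iteration after a fixed duration.
            new Thread(() -> {
                try {
                    TimeUnit.MILLISECONDS.sleep(200);
                } catch (InterruptedException ignored) {
                }
                iterationDone = true;
            }).start();

            // One timestamp before and one after the whole iteration,
            // not around each individual call.
            long start = System.nanoTime();
            long operations = 0;
            do {
                benchmarkMethod();
                operations++;
            } while (!iterationDone);
            long end = System.nanoTime();

            // Over millions of calls the ~400 ns timer granularity averages
            // out, so the per-operation score can be a few nanoseconds.
            System.out.printf("avg: %.3f ns/op (%d ops)%n",
                    (double) (end - start) / operations, operations);
        }
    }

With a 200 ms iteration covering millions of calls, dividing one coarse timestamp difference by the operation count yields far finer effective resolution than the timer itself provides.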

Aleksey Shipilëv discusses the problems of measuring with nanoTime in his article "Nanotrusting the Nanotime". Its section on latency contains sample code that shows how JMH measures one benchmark iteration.
