So, I'm experimenting a little with microbenchmarks; I chose JMH and read some articles. How does JMH measure the execution time of methods below the granularity of the system timer?
More detailed explanation:
These are the tests that I run (the method names speak for themselves):
import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
@Warmup(iterations = 10, time = 200, timeUnit = TimeUnit.NANOSECONDS)
@Measurement(iterations = 20, time = 200, timeUnit = TimeUnit.NANOSECONDS)
public class RandomBenchmark {

    public long lastValue;

    @Benchmark
    @Fork(1)
    public void blankMethod() {
    }

    @Benchmark
    @Fork(1)
    public void simpleMethod(Blackhole blackhole) {
        int i = 0;
        blackhole.consume(i++);
    }

    @Benchmark
    @Fork(1)
    public void granularityMethod(Blackhole blackhole) {
        long initialTime = System.nanoTime();
        long measuredTime;
        do {
            measuredTime = System.nanoTime();
        } while (measuredTime == initialTime);
        blackhole.consume(measuredTime);
    }
}
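(For completeness: I launch the benchmark through a small runner class; this is just a sketch of how I run it, with the class name and package placement being my own choices, not anything special to the question.)

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkRunner {
    public static void main(String[] args) throws RunnerException {
        // Run only the RandomBenchmark class shown above.
        Options opt = new OptionsBuilder()
                .include(RandomBenchmark.class.getSimpleName())
                .build();
        new Runner(opt).run();
    }
}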
Here are the results:
I'm currently running on Windows 7, where, as described in various articles, the System.nanoTime() granularity is coarse (about 407 ns). Checking with the code below, a fresh timer value indeed appears only about every ~400 ns:
final int sampleSize = 100;
long[] timeMarks = new long[sampleSize];
for (int i = 0; i < sampleSize; i++) {
    timeMarks[i] = System.nanoTime();
}
for (long timeMark : timeMarks) {
    System.out.println(timeMark);
}
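(As a quick variation on the snippet above, not part of my original test: printing the deltas between consecutive timestamps instead of the raw values shows the step size directly; on this machine the non-zero deltas should cluster around ~400 ns.)

public class NanoTimeGranularity {
    public static void main(String[] args) {
        final int sampleSize = 100;
        long[] timeMarks = new long[sampleSize];
        for (int i = 0; i < sampleSize; i++) {
            timeMarks[i] = System.nanoTime();
        }
        // Print only the non-zero steps between consecutive readings.
        for (int i = 1; i < sampleSize; i++) {
            long delta = timeMarks[i] - timeMarks[i - 1];
            if (delta != 0) {
                System.out.println("timer advanced by " + delta + " ns");
            }
        }
    }
}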
It's hard to fully understand exactly how the generated methods work, but looking at the decompiled JMH code, it seems to call the same System.nanoTime() before and after execution and measure the difference. How can it measure the execution of a method that takes only a couple of nanoseconds, when the timer granularity is 400 ns?
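(If the two timestamps bracket a whole batch of invocations rather than a single call, then sub-granularity averages become possible, because the 400 ns uncertainty is spread over millions of operations. The following is only a simplified sketch of that amortization idea, not the actual JMH-generated stub; real JMH also uses Blackhole and other machinery to keep the JIT from eliminating the loop.)

public class AmortizedTimingSketch {

    static long lastValue;

    // Stand-in for a benchmark method body.
    static void simpleMethod() {
        lastValue++;
    }

    public static void main(String[] args) {
        final long operations = 100_000_000L;
        long start = System.nanoTime();            // one timer read before the batch
        for (long i = 0; i < operations; i++) {
            simpleMethod();
        }
        long elapsed = System.nanoTime() - start;  // one timer read after the batch
        // The ~400 ns granularity is negligible compared to the total elapsed
        // time, so the per-operation average can resolve a few nanoseconds.
        System.out.printf("avg %.3f ns/op%n", (double) elapsed / operations);
    }
}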