In this question, I would like to discuss how to measure the performance of Java code. The usual approach looks something like this:
    long start = System.nanoTime();
    for (int i = 0; i < SOME_VERY_LARGE_NUMBER; i++) {
        // ...do something...
    }
    long duration = System.nanoTime() - start;
    System.out.println("Performance: "
            + new BigDecimal(duration).divide(
                    new BigDecimal(SOME_VERY_LARGE_NUMBER), 3, RoundingMode.HALF_UP));
The "optimized" versions transfer calls to System.nanoTime()
in a loop, increasing the error field, since System.nanoTime()
takes much more time (and is less predictable during execution) than i ++
and comparison.
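To illustrate the problem, here is a rough sketch of such a per-iteration timing loop (my own construction, not quoted from anywhere); the two System.nanoTime() calls per iteration are what inflate the error:

    // Per-iteration timing: two nanoTime() calls per iteration, each of which
    // costs far more (and varies more) than i++ and the loop comparison.
    long total = 0;
    for (int i = 0; i < SOME_VERY_LARGE_NUMBER; i++) {
        long t0 = System.nanoTime();
        // ...do something...
        total += System.nanoTime() - t0;
    }
    System.out.println("Average ns per iteration: "
            + total / (double) SOME_VERY_LARGE_NUMBER);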
My criticism is this:
This gives me an average runtime, but that value depends on factors I am really not interested in: the system load during the benchmark run, or the JIT and GC kicking in.
Wouldn't the following approach be (much) better in most cases?
- Run the measurement code often enough to force JIT compilation
- Run the code in a loop and measure the execution time. Remember the smallest value seen and stop the loop when that value stabilizes (see the sketch after this list).
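Here is a minimal sketch of the idea; MinTimeBenchmark, measureMinNanos, and the warm-up/stability parameters are my own illustrative names and numbers, not from any library:

    import java.util.concurrent.TimeUnit;

    public class MinTimeBenchmark {

        static volatile double sink; // keeps the JIT from eliminating the measured work

        // Runs the task repeatedly, keeps the smallest observed time, and stops
        // once the minimum has not improved for `stableRuns` consecutive runs.
        static long measureMinNanos(Runnable task, int warmupRuns, int stableRuns) {
            // Warm-up phase to trigger JIT compilation of the measured code.
            for (int i = 0; i < warmupRuns; i++) {
                task.run();
            }
            long best = Long.MAX_VALUE;
            int runsSinceImprovement = 0;
            while (runsSinceImprovement < stableRuns) {
                long t0 = System.nanoTime();
                task.run();
                long elapsed = System.nanoTime() - t0;
                if (elapsed < best) {
                    best = elapsed;
                    runsSinceImprovement = 0;
                } else {
                    runsSinceImprovement++;
                }
            }
            return best;
        }

        public static void main(String[] args) {
            long minNanos = measureMinNanos(
                    () -> sink = Math.sqrt(12345.678), // ...do something...
                    10_000,   // warm-up runs
                    1_000);   // stop after this many runs without a new minimum
            System.out.println("Best observed time: " + minNanos + " ns ("
                    + TimeUnit.NANOSECONDS.toMicros(minNanos) + " µs)");
        }
    }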
My reasoning is that I usually want to know how fast some code can be (a lower bound). Any code can be made arbitrarily slow by external events (mouse movements, interrupts from the video card because you have an analog clock on your desktop, swapping, network packets...), but most of the time I just want to know how fast my code can be under ideal conditions.
It would also make the measurement much faster, since I wouldn't need to run the code for seconds or minutes to average out unwanted effects.
Can someone confirm / debunk this?