I recently started reading about benchmarking and writing tests for Android (in Java). I know about issues like warm-up, the garbage collector, and compiler optimizations, but I don't know whether the problem I am facing can be caused by any of them.
In my test application I create an array of 10,000 float values and initialize it with random values. When I run this version of the code:
private void runMinorBenchmarkFloat(float[] array) {
    float sum = 0;
    long startTime;
    long endTime;

    /* Fast warm-up */
    startTime = System.nanoTime();
    for (int i = 0; i < SMALL_LOOP_ITERATION_COUNT; i++)
        for (int j = 0; j < TAB_SIZE; j++)
            sum += array[j];
    endTime = System.nanoTime() - startTime;
    postMessage("Warm-up for FLOAT finished in: " + endTime / 1000000 + "ms.\n");

    /* Main benchmark loop */
    startTime = System.nanoTime();
    for (int i = 0; i < BIG_LOOP_ITERATION_COUNT; i++) {
        sum = 0;
        for (int j = 0; j < TAB_SIZE; j++)
            sum += array[j];
    }
    endTime = System.nanoTime() - startTime;
    postMessage("Benchmark for FLOAT finished in: " + endTime / 1000000 + "ms.\n");
    postMessage("Final value: " + sum + "\n\n");
}
on my phone the warm-up takes about 2 seconds and the main benchmark loop about 20 seconds.
Now, when I add two more float variables (sum2 and sum3), which are never used anywhere in the method:
private void runMinorBenchmarkFloat(float[] array) {
    float sum = 0, sum2 = 0, sum3 = 0; // <------- the only code change here!!!
    long startTime;
    long endTime;

    /* Fast warm-up */
    startTime = System.nanoTime();
    for (int i = 0; i < SMALL_LOOP_ITERATION_COUNT; i++)
        for (int j = 0; j < TAB_SIZE; j++)
            sum += array[j];
    endTime = System.nanoTime() - startTime;
    postMessage("Warm-up for FLOAT finished in: " + endTime / 1000000 + "ms.\n");

    /* Main benchmark loop */
    startTime = System.nanoTime();
    for (int i = 0; i < BIG_LOOP_ITERATION_COUNT; i++) {
        sum = 0;
        for (int j = 0; j < TAB_SIZE; j++)
            sum += array[j];
    }
    endTime = System.nanoTime() - startTime;
    postMessage("Benchmark for FLOAT finished in: " + endTime / 1000000 + "ms.\n");
    postMessage("Final value: " + sum + "\n\n");
}
the warm-up time jumps from 2 seconds to 5 seconds, and the main loop from 20 seconds to 50 seconds.
Constants:
SMALL_LOOP_ITERATION_COUNT = 100,000
BIG_LOOP_ITERATION_COUNT = 1,000,000
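For reference, the setup looks roughly like this (a sketch; TAB_SIZE is the size of the 10,000-element array mentioned above, and the initialization just reflects what I described):

private static final int TAB_SIZE = 10000;
private static final int SMALL_LOOP_ITERATION_COUNT = 100000;
private static final int BIG_LOOP_ITERATION_COUNT = 1000000;

// Array of 10,000 floats filled with random values (java.util.Random).
float[] array = new float[TAB_SIZE];
java.util.Random random = new java.util.Random();
for (int i = 0; i < TAB_SIZE; i++)
    array[i] = random.nextFloat();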
Do you think such a difference could be caused by an alignment problem (I'm just throwing out ideas here)?
Thanks in advance for any answers.
EDIT:
It seems that this problem does not appear on every device; I can reproduce it on a Samsung Galaxy S5. The main goal of the program was to build a small benchmark. I originally wrote four almost identical functions (runMinorBenchmark____, where ____ was Int, Short, Float, Double) that differed only in the type of the sum variable, and called them from a main control function. After the problem described above appeared, I combined them into one big function. When I run that test now, I get these times:
1. 37640 ms (int)
2. 46728 ms (short)
3. 60589 ms (float)
4. 34467 ms (double)
I understand why short is slower: the extra type conversion on every addition. I could also imagine float being slower than double (maybe the FPU converts it every time?). But when I change the type of the float accumulator (sumFloat) from float to double, the float test time becomes identical to the double test time. I also ran this test on another device that does not seem to suffer from this strange behavior, and there the times for all four tests were almost the same, around 45000 ms (no visible differences at all).
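To be explicit, the change I mean in that last observation is only the accumulator declaration in the float method; everything else stays exactly as shown above:

// runMinorBenchmarkFloat, but accumulating into a double:
double sum = 0; // was: float sum = 0; with this change the measured time matches the double test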
Could this be a Dalvik VM bug?