The speedup peaks seem to be related to the CPU runtime. Looking at the GPU time, it appears to increase linearly with the number of agents. The CPU time also increases roughly linearly overall, but it has a dip approximately in the range [0.6, 1.6] and some peaks approximately in the range [2.6, 3.1].
Considering the above, your maximum speedup of about 55x drops approximately in the range [0.6, 1.1] because your CPU time is also decreasing there. Since speedup is calculated as CPU time / GPU time, it is normal that the result gets smaller. The same applies to the second case, in the range [2.6, 3.1].
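As a minimal sketch of why the ratio dips (the timing values below are made up for illustration, not taken from your measurements):

```python
# Hypothetical timings (seconds). If the CPU time drops while the GPU time
# keeps growing linearly, the ratio CPU time / GPU time shrinks, which
# shows up as a dip in the speedup curve.
cpu_time = 0.9   # assumed CPU runtime in the dip region
gpu_time = 0.05  # assumed GPU runtime for the same number of agents
speedup = cpu_time / gpu_time
print(f"speedup = {speedup:.1f}x")
```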
As for how to find out the reason for this speedup graph: I assume the CPU was interrupted by some external event (I/O, another program running on the CPU, the OS, ...).
To calculate the speedup more accurately, repeat the experiment 10 times as individual executions, i.e. do not create a loop inside your main function to run it 10 times. With 10, 20, 30 or even more individual runs, you can calculate the average time as well as the variance; a sketch of this is given below. Then, once the study is complete, one or two peaks can be treated as special cases and ignored. If you see a trend instead, a deeper study is needed.
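Here is one way to automate those repeated, independent executions and compute the statistics. It assumes a hypothetical benchmark binary called `./benchmark` that prints two numbers, the CPU and GPU times in milliseconds; adapt the command and the parsing to your own program.

```python
import statistics
import subprocess

N_RUNS = 10  # use 10, 20, 30 or more individual executions

cpu_times, gpu_times = [], []
for _ in range(N_RUNS):
    # Launch the benchmark as a separate process each time, rather than
    # looping inside main(). "./benchmark" and its "cpu_ms gpu_ms" output
    # format are assumptions for this sketch.
    result = subprocess.run(["./benchmark"], capture_output=True, text=True, check=True)
    cpu_ms, gpu_ms = map(float, result.stdout.split())
    cpu_times.append(cpu_ms)
    gpu_times.append(gpu_ms)

# Average and variance of the measured times.
print("CPU: mean =", statistics.mean(cpu_times), "var =", statistics.variance(cpu_times))
print("GPU: mean =", statistics.mean(gpu_times), "var =", statistics.variance(gpu_times))

# Speedup computed from averaged times is less sensitive to one-off
# interruptions (I/O, other processes, the OS, ...).
print("speedup =", statistics.mean(cpu_times) / statistics.mean(gpu_times))
```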