The processor, operating system, and interpreter can affect the speed of your program in ways that are difficult to predict. This is why big-O notation, rather than wall-clock time, is used to evaluate algorithms.
One reason the speed can differ is that for the smaller starting value, i can be represented using only 32 bits. The machine code generated by the interpreter can therefore apply optimizations and use 32-bit integer instructions.
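As a rough illustration (a minimal sketch, assuming CPython, where arithmetic on integers that fit in a machine word takes a faster internal path than arithmetic on larger integers), you can time an identical loop body with a small and a large starting value:

```python
import timeit

# Hypothetical benchmark: the loop body is identical; only the starting
# value of i changes. The large value exceeds what fits in a machine
# word, so each increment takes a slower big-integer path.
body = "for _ in range(10_000): i += 1"
small = timeit.timeit(body, setup="i = 1", number=1_000)
large = timeit.timeit(body, setup="i = 10**30", number=1_000)
print(f"small start: {small:.3f}s")
print(f"large start: {large:.3f}s")
```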
Also, in the last snippet the value of i is truncated to a 32-bit integer. That truncation changes the number of iterations, because the value wraps around, and that is why it runs faster even when the value of i cannot be expressed using only 32 bits.
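To see why truncation can change the iteration count, here is a sketch (the to_int32 helper is hypothetical, standing in for whatever 32-bit conversion the last snippet performs) of how a bound that overflows a signed 32-bit integer wraps around:

```python
# Hypothetical illustration: truncating a loop bound to a signed 32-bit
# integer can wrap it to a much smaller (here negative) value, so the
# loop exits almost immediately.
def to_int32(n):
    n &= 0xFFFFFFFF                       # keep only the low 32 bits
    return n - (1 << 32) if n >= (1 << 31) else n

bound = 2**31 + 5                         # too large for a signed 32-bit int
print(to_int32(bound))                    # -2147483643: wrapped around
# A loop like "for i in range(to_int32(bound))" would run 0 times
# instead of 2**31 + 5 times, which looks like a dramatic speedup.
```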