Why does this code behave differently for different values?

This code:

var i = 10000000000;
do { i--; } while (i !== 0);   // Result: 38 seconds

var i = 10000000000;
do {} while (i-- !== 0);       // Result: 27 seconds (same result with while (i--))

var i = 10000000000;
do {} while (i-- | 0);         // Result: 13.5 seconds

The question is: why do all the versions take the same time for a smaller value of i? If I cut one zero from i, every version takes about 2.2 seconds (checked after JIT optimization; V8 only).

It seems logical that the third version should always be faster, but it is only faster for very large values.

This is just curiosity ... it's not really important.

1 answer

The processor, operating system, and interpreter can all affect the speed of your program in ways that are difficult to predict. This is why big-O notation, rather than wall-clock time, is used to evaluate algorithms.

One reason the speed can differ is that with one fewer zero, the value of i fits in a 32-bit integer (1,000,000,000 < 2^31), while 10,000,000,000 does not. The machine code generated by the JIT compiler can then keep i as a 32-bit integer and use fast integer instructions; otherwise i has to be represented as a double.
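A rough way to see this effect is to run the same number of iterations in both numeric ranges. The sketch below is illustrative only: the function name and timer labels are made up for this example, and the timings will vary by machine and V8 version.

// Sketch: same iteration count, different numeric representations.
// Timings are illustrative; they vary by machine and engine version.
function countDown(start, steps) {
    var i = start;
    var stop = start - steps;
    do { i--; } while (i !== stop);
}

var STEPS = 100000000; // 1e8 iterations in both runs

// 2^30 fits in a 32-bit integer, so V8 can keep i as a small integer.
console.time('32-bit integer range');
countDown(Math.pow(2, 30), STEPS);
console.timeEnd('32-bit integer range');

// 2^33 does not fit in 32 bits, so i must be stored as a double.
console.time('double range');
countDown(Math.pow(2, 33), STEPS);
console.timeEnd('double range');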

Also, the condition in the last version converts the value of i to a 32-bit integer, which changes the number of iterations: the loop exits as soon as the low 32 bits of i reach zero. That is why it appears faster precisely when i cannot be expressed in 32 bits.
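This is easy to verify directly (a quick check that should work in Node.js or any browser console):

// ToInt32 keeps only the low 32 bits of the value, as a signed integer:
console.log(10000000000 | 0); // 1410065408, not 10000000000

// So `do {} while (i-- | 0)` starting at 1e10 exits as soon as i hits
// a multiple of 2^32, here 2 * 2^32 = 8589934592. The loop body runs
// only about 1.41e9 times instead of 1e10:
var start = 10000000000;
var stop = Math.floor(start / 4294967296) * 4294967296; // 4294967296 = 2^32
console.log(start - stop + 1); // 1410065409 iterations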

