Understanding JavaScript Performance Deviations

http://jsfiddle.net/6L2pJ/

var test = function () {
    var i, a, startTime;
    startTime = new Date().getTime();
    for (i = 0; i < 3000000000; i = i + 1) {
        a = i % 5;
    }
    console.log(a); // prevent dead code elimination
    return new Date().getTime() - startTime;
};

var results = [];
for (var i = 0; i < 5; i = i + 1) {
    results.push(test());
}
for (var i = 0; i < results.length; i = i + 1) {
    console.log('Time needed: ' + results[i] + 'ms');
}

Results in:

First run:

 Time needed: 13654ms
 Time needed: 32192ms
 Time needed: 33167ms
 Time needed: 33587ms
 Time needed: 33630ms

Second run:

 Time needed: 14004ms
 Time needed: 32965ms
 Time needed: 33705ms
 Time needed: 33923ms
 Time needed: 33727ms

Third run:

 Time needed: 13124ms
 Time needed: 30706ms
 Time needed: 31555ms
 Time needed: 32275ms
 Time needed: 32752ms

What is the reason for the jump from the first measurement to the second?

My setup:

  • Ubuntu 13.10

  • Google Chrome 36.0.1985.125 (Mozilla Firefox 30.0 gives the same results)

EDIT:

I changed the code, keeping it semantically the same but nesting everything. Interestingly, this not only speeds up execution significantly, but also largely eliminates the phenomenon described above. A small jump is still noticeable.

Modified Code:

http://jsfiddle.net/cay69/
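
The fiddle contents are not reproduced in the question; as a rough sketch, and assuming "nesting everything" means wrapping the whole benchmark in an outer function, the modified code might look something like this:

// Hypothetical reconstruction of the nested variant -- the actual fiddle may differ.
(function () {
    var test = function () {
        var i, a, startTime;
        startTime = new Date().getTime();
        for (i = 0; i < 3000000000; i = i + 1) {
            a = i % 5;
        }
        console.log(a); // prevent dead code elimination
        return new Date().getTime() - startTime;
    };

    var results = [];
    for (var i = 0; i < 5; i = i + 1) {
        results.push(test());
    }
    for (var i = 0; i < results.length; i = i + 1) {
        console.log('Time needed: ' + results[i] + 'ms');
    }
}());

In this form the outer loop counter i is function-local rather than a property of the global object, which alone can change how the engine treats it.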

Results:

First run:

 Time needed: 13786ms
 Time needed: 14402ms
 Time needed: 14261ms
 Time needed: 14355ms
 Time needed: 14444ms

Second run:

 Time needed: 13778ms
 Time needed: 14293ms
 Time needed: 14236ms
 Time needed: 14459ms
 Time needed: 14728ms

Third run:

 Time needed: 13639ms
 Time needed: 14375ms
 Time needed: 13824ms
 Time needed: 14125ms
 Time needed: 14081ms
3 answers

After a little testing, I think I have pinpointed what may be causing the difference. It seems to have something to do with the variable's type.

  var i, a = 0, startTime; 

var a = 0 gives me uniform timings with higher performance; on the other hand, var a = "0" gives me the same result as yours: the first run is somewhat faster than the rest.

I do not know why this is happening.
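
For comparison, a minimal sketch of the two variants described above, assuming the only change to the question's test function is the initializer of a:

// Variant A: numeric initializer -- reportedly uniform, faster timings.
var testNumeric = function () {
    var i, a = 0, startTime;
    startTime = new Date().getTime();
    for (i = 0; i < 3000000000; i = i + 1) {
        a = i % 5;
    }
    console.log(a); // prevent dead code elimination
    return new Date().getTime() - startTime;
};

// Variant B: string initializer -- reportedly reproduces the jump after the first run.
var testString = function () {
    var i, a = "0", startTime;
    startTime = new Date().getTime();
    for (i = 0; i < 3000000000; i = i + 1) {
        a = i % 5;
    }
    console.log(a); // prevent dead code elimination
    return new Date().getTime() - startTime;
};

In both variants a is overwritten with a number on the first loop iteration, so presumably only the type the engine initially infers for a differs.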


It looks like Google Chrome is breaking your script execution into pieces and giving processing time to other processes. This is not noticeable until your execution takes more than about 600 ms per function call. I tested with a smaller number of iterations (300000000, if I remember correctly).


Below is just a pseudo-answer, which I hope the community can update. It started as a comment, but it got too long too fast, and so it had to be posted as an answer.


Interesting Results

In running several tests, I could not find any correlation with console.log. Testing in OS X Safari, I found that the issue arose both with and without printing to the console.

What I noticed was a pattern. As the loop bound approached 2147483648 (2^31), the later runs began to deviate from the initial run's time. The exact point most likely depends on the user's environment, but I found an inflection point around 2147485000 (try numbers above and below it: 2147430000..2147490000). Somewhere near this number is where the timings become more uniform.
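
A minimal sketch of the kind of probe that could be used to search for the inflection point, assuming the loop bound of the original test is simply turned into a parameter (timedLoop and the listed bounds are illustrative, not the exact test that was run):

// Parameterize the loop bound so different limits can be probed.
var timedLoop = function (limit) {
    var i, a, startTime = new Date().getTime();
    for (i = 0; i < limit; i = i + 1) {
        a = i % 5;
    }
    console.log(a); // prevent dead code elimination
    return new Date().getTime() - startTime;
};

// Probe bounds just below and just above 2^31.
var bounds = [2147430000, 2147483648, 2147485000, 2147490000];
for (var b = 0; b < bounds.length; b = b + 1) {
    for (var run = 0; run < 5; run = run + 1) {
        console.log(bounds[b] + ': ' + timedLoop(bounds[b]) + 'ms');
    }
}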

I was really hoping it would be exactly 2^31, since that number is also significant in computing terms: it marks the upper bound of a signed 32-bit integer. However, my tests pointed to a number that was slightly larger (for reasons unknown at the moment). Beyond checking that the page file was not being used, I did no further memory analysis.


EDIT from the OP:

On my setup the transition happens exactly at 2^31. I tested it while playing with the following code:

http://jsfiddle.net/8w24v/


This information may support Derek's observation about the initialization.

This is just a thought and may be a stretch:
LLVM, or whatever engine component applies here, may perform some up-front optimizations. Perhaps the loop variable starts out as an int, and after a pass or two the optimizer notices that the variable grows into a long. When re-optimizing, it tries to make it a long up front, only in this case that is not a time-saving optimization, since working with a regular integer is cheaper than paying the cost of converting from int to long.
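
As a supporting note (not from the original answer): 2^31 is exactly where a value stops fitting into a signed 32-bit integer, which is easy to see with JavaScript's 32-bit bitwise operations:

// 2^31 - 1 is the largest value that fits in a signed 32-bit integer.
console.log((2147483647 | 0) === 2147483647); // true
// 2^31 itself overflows: forcing it through a 32-bit operation wraps to a negative number.
console.log(2147483648 | 0);                  // -2147483648
// Engines that use a compact small-integer representation have to switch to a
// heavier representation once a value crosses this range.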

I would not be surprised if the answer is buried somewhere in the ECMAScript specification :)

