Why is filling a new array so much faster with a while loop?

I was looking at ways to create an array filled with a default value using my own methods, and I ended up with

    function pushMap(length, fill) {
      var a = [], b = [];
      a.length = length;                           // sparse array of the desired length
      b.push.apply(b, a);                          // push the holes into b as real undefined values
      return b.map(function () { return fill; });  // map the now-dense array to the fill value
    }

Expecting it to be 2 or 3 times slower than the while loop, since the native methods have to loop over the array twice while the while loop only does it once, I compared it on jsperf against

    function whileLengthNew(len, val) {
      var rv = new Array(len);  // allocate the final size up front
      while (--len >= 0) {
        rv[len] = val;          // fill from the end down to index 0
      }
      return rv;
    }

and it turned out to be 18-27 times slower (tested with Google Chrome on Ubuntu; results from other browsers / OSes are welcome).
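
For reference, the same comparison can be reproduced roughly outside jsperf with console.time (a minimal sketch, not the actual jsperf test case; the length of 1e6 is an arbitrary choice and absolute timings will vary by engine and machine):

    // Assumes pushMap and whileLengthNew from above are already defined.
    var N = 1e6;

    console.time('pushMap');
    pushMap(N, 0);
    console.timeEnd('pushMap');

    console.time('whileLengthNew');
    whileLengthNew(N, 0);
    console.timeEnd('whileLengthNew');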

What happens that makes such a big difference?

+6
4 answers

I would expect this to be due to two main factors:

  • Memory allocation - whileLengthNew creates an array of the right size up front and then just fills it, while pushMap builds the final array up one element at a time with map. That can cause multiple allocations, especially if the source array is large. (How the initial arrays a and b are created basically doesn't matter, since map creates a new array to return anyway; it doesn't modify b.)

  • Function call overhead - when you use map, you invoke a function for every element of the array. That carries a lot of overhead: setting up activation records and scope chains, manipulating the stack, and passing the return value back, all just to read a value that is constant inside the function. On top of that you create a closure, so even accessing the fill variable is slower than in the whileLengthNew version. (A rough sketch isolating this cost follows below.)
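
A minimal sketch (not part of the original answer) that tries to isolate that per-element callback cost: the same densified array is filled once through map()'s callback and once with plain indexed stores. The element count of 1e6 is an arbitrary choice and the console.time numbers are only indicative.

    // Isolate the per-element callback cost: same densified source array,
    // filled once through map()'s callback and once with plain indexed stores.
    function denseUndefined(len) {
      var a = [];
      a.length = len;          // sparse array of the requested length
      var b = [];
      b.push.apply(b, a);      // the holes become real undefined values in b
      return b;
    }

    var N = 1e6, fill = 42;

    console.time('map + callback');
    denseUndefined(N).map(function () { return fill; }); // closure lookup + call per element
    console.timeEnd('map + callback');

    console.time('plain indexed loop');
    var arr = denseUndefined(N);
    for (var i = 0; i < arr.length; i++) {
      arr[i] = fill;           // simple store, no function call
    }
    console.timeEnd('plain indexed loop');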

+2

I am not a JavaScript expert and I have not contributed to any JavaScript engines, so the following is just a guess:

You are doing a lot of work in your pushMap function.
1. First you expand var a to the desired size, apparently in a fairly inefficient way. length is just a property of the array, so either the underlying implementation runs callbacks when that property changes, or it handles the length change lazily and only deals with it the next time something accesses the array.
2. Creating var b looks even more inefficient, because you go through a reflection-style call (push.apply) just to turn b into an array of a specific length.
3. Then you essentially run a forEach-style loop in a functional way, which is likely to be somewhat slower than the while loop because of the overhead of the callback function (a closure, I believe, since this is JS).

You might get a more even result if you created var b the same way you created var rv. Hope this helps. EDIT: that of course does not work in this situation, because map only visits initialized array values. Which points to another reason this approach is slower (and the OP mentions it in the question): you initialize the array twice, once with empty values and again with the values you actually want.
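
A small illustration (not from the original answer) of the behaviour the EDIT refers to: map skips holes in a sparse array, which is exactly why pushMap needs the push.apply step to turn the holes into real undefined values first.

    // Setting .length only makes the array longer and sparse; map() skips holes.
    var sparse = [];
    sparse.length = 3;
    var mapped = sparse.map(function () { return 7; });
    console.log(mapped.length); // 3, but still all holes
    console.log(0 in mapped);   // false - the callback never ran

    // After push.apply the holes are real undefined values, so map() visits them.
    var dense = [];
    dense.push.apply(dense, sparse);
    console.log(dense.map(function () { return 7; })); // [ 7, 7, 7 ]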

0

You make an unnecessary function call for every single value, so the JavaScript virtual machine has a hard time optimizing this pattern, and internally your array is treated not as a simple array but as a more complex data structure. In your while loop, on the other hand, you use a very simple array access pattern, and I suspect that a for loop with rv.length as the stop condition would be even faster.
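
For completeness, the for-loop variant hinted at here might look like the sketch below (an assumption about what the answer means, not benchmarked; whether it actually beats the while version depends on the engine).

    // For-loop variant with rv.length as the stop condition, same up-front allocation.
    function forLengthNew(len, val) {
      var rv = new Array(len);
      for (var i = 0; i < rv.length; i++) {
        rv[i] = val;
      }
      return rv;
    }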

0

I think the map() call does the same work as the while loop, plus it makes an additional function call on every iteration. Function calls are, in general, very slow.
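
In other words, map roughly amounts to the loop below plus one callback invocation per element (a simplified sketch, not the spec's algorithm: the real Array.prototype.map also skips holes and supports a thisArg).

    // Simplified picture of what map() does: the same kind of loop as the
    // while version, plus one function call per element.
    function naiveMap(arr, callback) {
      var out = new Array(arr.length);
      for (var i = 0; i < arr.length; i++) {
        out[i] = callback(arr[i], i, arr); // the extra call per iteration
      }
      return out;
    }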

0

Source: https://habr.com/ru/post/925604/

