JavaScript Task Duration

I noticed a question the other day (Decreasing the use of the JavaScript processor), and I was intrigued.

Essentially, the guy wanted to encrypt some files in the browser. Obviously, doing all of that in one go would lock the browser up.

His first idea was to process the string in chunks of about 1 kilobyte at a time, then pause for X ms so the user could keep interacting with the page between chunks. He also considered using Web Workers (the best idea), but obviously they are not cross-browser.
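
For reference, here is roughly what the Web Worker route looks like where it is supported. This is only a sketch: worker.js, heavyWork and input are placeholder names, and real code would feature-detect window.Worker first.

 // worker.js (hypothetical file) -- runs off the main thread
 self.onmessage = function(e){
     var result = heavyWork(e.data); // stand-in for the real processing
     self.postMessage(result);
 };

 // main page
 if (window.Worker){
     var worker = new Worker('worker.js');
     worker.onmessage = function(e){
         console.log('done:', e.data);
     };
     worker.postMessage(input); // the UI stays responsive meanwhile
 }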

Now, I really don't want to get into why this is probably not a good idea to do in JavaScript, but I wanted to see if I could come up with a solution.

I remembered watching a Douglas Crockford talk from JSConf. The video was about Node.js and the event loop, but I remembered him talking about breaking long-running functions into separate pieces, so that each newly queued function goes to the back of the event loop, instead of one lengthy task clogging the loop and blocking everything else.

This seemed like a solution worth investigating. As a front-end developer, I had never run into really big tasks in JS, and I really wanted to know how to break them up and how that works.

I decided to try a recursive function that calls itself inside a 0 ms setTimeout. I figured this would leave gaps in the event loop for anything else that needed to happen while it ran, and I also assumed that while nothing else was happening, you would get the maximum amount of computation.

Here is what I came up with.

(I'll apologize for the code in advance. I was experimenting in the console, so it's quick and dirty.)

 function test(i, ar, callback, start){
     // First call: create the array and record the start time
     if ( ar === undefined ){
         ar = [];
         start = new Date;
     }
     if ( ar.length < i ){
         // i - (i - ar.length) is just ar.length; push the counter value
         ar.push( i - ( i - ar.length ) );
         // Re-queue ourselves at the back of the event loop
         setTimeout(function(){
             test( i, ar, callback, start );
         }, 0);
     } else {
         // Done: hand the array and the start time back to the caller
         callback(ar, start);
     }
 }

(You can paste this code into the console and it will work)

Essentially, the function takes a number, creates an array, and keeps calling itself; while array.length < number, it pushes the counter into the array. The array created on the first call is passed along to all subsequent calls.

I tested it, and it seemed to work exactly as intended, only its performance was rather poor. I checked it with:

(again, this is not sexy code)

 test(5000, undefined, function(ar, start){
     var finish = new Date;
     console.log( ar.length, 'timeTaken: ', finish - start );
 });

Now, I obviously wanted to know how long it took to complete: the code above took about 20 seconds. It seems to me that it shouldn't take JS 20 seconds to count to 5000, even allowing for the small amount of arithmetic and the array pushes on each step. 20 s is still rather steep.
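
As a quick sanity check, the same pushes done in a plain synchronous loop finish almost instantly, so the arithmetic itself is clearly not where the time goes:

 var ar = [], i = 5000, start = new Date;
 while ( ar.length < i ){
     ar.push( i - ( i - ar.length ) );
 }
 console.log( ar.length, 'timeTaken: ', new Date - start ); // typically 0-2 ms

That points the finger squarely at the scheduling of 5000 setTimeout calls.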

So I decided to run several at the same time, to see how that affected browser performance and computation speed.

(the code doesn't get any sexier)

 function foo(){
     test(5000, undefined, function(ar, start){ var finish = new Date; console.log(ar.length, 'timeTaken: ', finish - start, 'run: 1'); });
     test(5000, undefined, function(ar, start){ var finish = new Date; console.log(ar.length, 'timeTaken: ', finish - start, 'run: 2'); });
     test(5000, undefined, function(ar, start){ var finish = new Date; console.log(ar.length, 'timeTaken: ', finish - start, 'run: 3'); });
     test(5000, undefined, function(ar, start){ var finish = new Date; console.log(ar.length, 'timeTaken: ', finish - start, 'run: 4'); });
     test(5000, undefined, function(ar, start){ var finish = new Date; console.log(ar.length, 'timeTaken: ', finish - start, 'run: 5'); });
 }

So, five in total running simultaneously, and no browser freezes.

After the process completed, all the results came back at almost the same time. They took about 21.5 s to finish, which is only 1.5 s slower than running one. But I was moving the mouse around the window over elements with :hover effects, to make sure the browser was still responding, so some of that 1.5 s may be that overhead.

Since these functions are seemingly running in parallel, there is clearly more computing juice left in the browser.

Can someone explain what is happening here, and give some detail on how functions like this could be improved?

Just to go crazy, I did this:

 function foo(){
     var count = 100000000000000000000000000000000000000;
     test(count, undefined, function(ar, start){ var finish = new Date; console.log(ar.length, 'timeTaken: ', finish - start, 'run: 1'); });
     test(count, undefined, function(ar, start){ var finish = new Date; console.log(ar.length, 'timeTaken: ', finish - start, 'run: 2'); });
     test(count, undefined, function(ar, start){ var finish = new Date; console.log(ar.length, 'timeTaken: ', finish - start, 'run: 3'); });
     test(count, undefined, function(ar, start){ var finish = new Date; console.log(ar.length, 'timeTaken: ', finish - start, 'run: 4'); });
     test(count, undefined, function(ar, start){ var finish = new Date; console.log(ar.length, 'timeTaken: ', finish - start, 'run: 5'); });
 }

It has been running the whole time I've been writing this post, and it's still going. The browser isn't complaining or hanging. I'll add the completion time as soon as it finishes.

+20
performance javascript recursion
Jul 28 '11 at 19:13
4 answers

setTimeout does not have a minimum delay of 0 ms. The minimum delay is anywhere in the range of 5 ms to 20 ms, depending on the browser.

My personal testing shows that setTimeout does not push you back onto the event stack immediately.

Live example

It has a hard minimum delay before it gets called again:

 var s = new Date(),
     count = 10000,
     cb = after(count, function() {
         console.log(new Date() - s);
     });

 doo(count, function() {
     test(10, undefined, cb);
 });
  • Running 10,000 of them in parallel, each counting to 10, takes 500 ms.
  • Running 100 of them, each counting to 10, takes 60 ms.
  • Running 1, counting to 10, takes 40 ms.
  • Running 1, counting to 100, takes 400 ms.

Clearly, it seems that each individual setTimeout has to wait at least 4 ms before it can fire again. And that is the bottleneck: the per-call delay on setTimeout.
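
(The snippets in this answer rely on two helpers, after and doo, which aren't defined here and aren't built-ins. A plausible minimal version of each, assuming after(n, fn) returns a callback that fires fn on its nth invocation and doo(n, fn) simply calls fn n times:)

 // Assumed helper: returns a callback that invokes fn once it
 // has itself been called n times.
 function after(n, fn) {
     return function() {
         if (--n === 0) fn();
     };
 }

 // Assumed helper: calls fn n times in a row.
 function doo(n, fn) {
     for (var k = 0; k < n; k++) fn();
 }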

If you plan to run 100 or more of them in parallel, though, this works out fine.

How do we optimize this?

 var s = new Date(),
     count = 100,
     cb = after(count, function() {
         console.log(new Date() - s);
     }),
     array = [];

 doo(count, function() {
     test(10, array, cb);
 });

Set 100 of them running in parallel on a single shared array. This avoids the main bottleneck, which is the setTimeout delay.

The above completes in 2 ms.

 var s = new Date(),
     count = 1000,
     cb = after(count, function() {
         console.log(new Date() - s);
     }),
     array = [];

 doo(count, function() {
     test(1000, array, cb);
 });

This finishes in 7 ms.

 var s = new Date(),
     count = 1000,
     cb = after(1, function() {
         console.log(new Date() - s);
     }),
     array = [];

 doo(count, function() {
     test(1000000, array, cb);
 });

Running 1,000 tasks in parallel is about optimal, but you will start hitting other bottlenecks: counting to 1 million still takes 4,500 ms.

+14
Jul 31 '11 at 12:19

Your problem is overhead versus unit of work. The setTimeout overhead is very high, while your ar.push unit of work is very low. This is an old optimization technique known as block processing: instead of processing one UoW per call, you process a block of UoWs. How big a block depends on how long each UoW takes and on the maximum time you can spend in each setTimeout call/iteration before the user interface stops responding.

 function test(i, ar, callback, start){
     if ( ar === undefined ){
         ar = [];
         start = new Date;
     }
     if ( ar.length < i ){
         // **** process a block ****
         // Do up to 50 units of work per setTimeout call instead of one
         for (var x = 0; x < 50 && ar.length < i; x++){
             ar.push( i - ( i - ar.length ) );
         }
         setTimeout(function(){
             test( i, ar, callback, start );
         }, 0);
     } else {
         callback(ar, start);
     }
 }

You should process the largest block you can without causing a UI or performance problem for the user. The version above runs ~50x faster (the block size).

It's for the same reason that we read a file through a buffer rather than one byte at a time.
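
A possible refinement (a sketch of my own rather than part of the answer; the 8 ms budget is an arbitrary pick): size the block by a time budget instead of a fixed count, so it adapts to however fast the machine is.

 function testTimed(i, ar, callback, start){
     if ( ar === undefined ){
         ar = [];
         start = new Date;
     }
     if ( ar.length < i ){
         // Work until ~8 ms of this slice is used up, then yield
         var sliceStart = new Date;
         while ( ar.length < i && (new Date - sliceStart) < 8 ){
             ar.push( i - ( i - ar.length ) );
         }
         setTimeout(function(){
             testTimed( i, ar, callback, start );
         }, 0);
     } else {
         callback(ar, start);
     }
 }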

+8
Aug 4 '11 at 12:13

Just a hypothesis... maybe the code is so slow because you are building a recursion stack 5,000 levels deep? Your call is not really recursive, since it goes through setTimeout, but the function you pass to it is a closure, so all the closure contexts have to be kept alive...

The performance problem may come from memory management costs, which might also explain why your last test seems to make things worse...

I haven't tried anything in an interpreter, but it would be interesting to know whether the computation time is linear in the number of recursions or not... say, with 100, 500, 1000, and 5000 recursions...

The first thing I would try as a workaround is not using a closure:

 setTimeout(test, 0, i, ar, callback, start); 
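
One caveat worth adding: older IE ignores the extra arguments to setTimeout, so a cross-browser fallback needs a small wrapper, which admittedly brings a closure back. A sketch:

 // Fallback for browsers that ignore setTimeout's extra arguments:
 // bind them up front with apply (this reintroduces one closure).
 function deferCall(fn, args){
     setTimeout(function(){
         fn.apply(null, args);
     }, 0);
 }

 deferCall(test, [i, ar, callback, start]);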
+3
Jul 28 '11 at 19:35

Actually, as we've discussed, what you are using is a recursive function, and JavaScript does not have "recursive tail calls" (tail call optimization) right now, which means the interpreter/engine must maintain a stack frame for EVERY call, and that gets heavy.

To optimize the solution, I would try turning it into a plain, directly executed function called in the global scope.
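
A sketch of one way to read that suggestion: a named function with the loop state lifted out to the enclosing scope, re-scheduled by name instead of through a fresh closure each time (the 50-per-slice block size is arbitrary):

 // Hypothetical flat version: no recursion into a new closure;
 // the same named function is re-scheduled until the work is done.
 var ar = [], target = 5000, startTime;

 function step(){
     for (var x = 0; x < 50 && ar.length < target; x++){
         ar.push(ar.length);
     }
     if ( ar.length < target ){
         setTimeout(step, 0);
     } else {
         console.log(ar.length, 'timeTaken: ', new Date - startTime);
     }
 }

 startTime = new Date;
 step();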

0
Aug 1 '11


