Building a promise chain recursively in JavaScript: memory considerations

In this answer, the promise chain is constructed recursively.

Simplified, we have:

    function foo() {
        function doo() {
            // always return a promise
            if (/* more to do */) {
                return doSomethingAsync().then(doo);
            } else {
                return Promise.resolve();
            }
        }
        return doo(); // returns a promise
    }

Presumably this will create both a call stack and a promise chain - that is, "deep" and "wide."

I would anticipate a memory spike larger than doing either just a recursion or building just a promise chain.

  • Is this true?
  • Has anyone considered memory issues when constructing a chain this way?
  • Will memory consumption vary between promise libraries?
+40
javascript promise recursion
Apr 28 '15 at 17:21
5 answers

call stack and promise chain - that is, deep and wide.

No, actually not. There is no promise chain here of the kind we know from doSomethingAsynchronous.then(doSomethingAsynchronous).then(doSomethingAsynchronous).… (which is what Promise.each or Promise.reduce would produce to execute the handlers sequentially, if it had been written that way).

What we are facing here is a resolve chain[1] - what happens at the end, when the base case of the recursion is reached, is something like Promise.resolve(Promise.resolve(Promise.resolve(…))). It is only "deep," not "wide," if you want to call it that.
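To make the two shapes concrete, here is a minimal sketch (the step function is an illustrative stand-in, not from the question):

    // Illustrative stand-in for doSomethingAsync().
    const step = () => new Promise(res => setTimeout(res, 0));

    // A then chain: all the links exist side by side up front ("wide").
    step().then(step).then(step).then(step);

    // A resolve chain: each promise is resolved with the next one, nesting
    // inward ("deep") - the shape the recursive construction produces.
    Promise.resolve(Promise.resolve(Promise.resolve(42)));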

I would anticipate a memory spike larger than doing either just a recursion or building just a promise chain.

Not really a spike, actually. You gradually build up, over time, a bulge of promises that resolve from the innermost outward, all representing the same result. When the condition is fulfilled at the end of your task and the innermost promise is resolved with the actual value, all of these promises are resolved with that same value. This costs O(n) for walking up the resolve chain (if implemented naively, it might even be done recursively and cause a stack overflow). After that, all promises except the outermost can be garbage collected.
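As a concrete sketch of that build-up (countdown and tick are hypothetical names; the shape mirrors the question's doo):

    // Hypothetical countdown that builds the recursive resolve chain described
    // above: each call wraps the promise returned by the next one.
    const tick = () => new Promise(res => setTimeout(res, 0));

    function countdown(n) {
        if (n === 0) return Promise.resolve("done"); // base case: innermost promise
        return tick().then(() => countdown(n - 1));  // outer promise resolves with inner one
    }

    // Once the innermost promise settles, all ~1000 promises resolve with the
    // same value and everything but the outermost becomes collectable.
    countdown(1000).then(v => console.log(v)); // "done"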

In contrast, a promise chain created by something like

    […].reduce(function(prev, val) {
        // successive execution of fn for all vals in array
        return prev.then(() => fn(val));
    }, Promise.resolve())

will show a spike, allocating n promise objects at the same time, and then slowly resolving them one by one, garbage-collecting the previous ones, until only the final promise is alive.

    memory
    ^        resolve       promise "then"       (tail)
    |         chain            chain           recursion
    |          /|               |\
    |         / |               | \
    |        /  |               |  \
    |    ___/   |___         ___|   \___       ___________
    |
    +----------------------------------------------> time
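For reference, a minimal runnable version of the reduce pattern above, with an illustrative delay-based fn (the name and the 100 ms delay are assumptions, not from the question):

    // Runnable sketch of the reduce pattern; fn is an illustrative stand-in
    // that resolves after a short delay.
    const fn = val =>
        new Promise(res => setTimeout(() => { console.log(val); res(val); }, 100));

    [1, 2, 3, 4].reduce(
        (prev, val) => prev.then(() => fn(val)),
        Promise.resolve()
    ); // logs 1, 2, 3, 4 in order, one every ~100 ms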

Is this true?

Not necessarily. As stated above, all the promises in that bulge are ultimately resolved with the same value[2], so all we need to keep alive at any one time are the outermost and the innermost promise. All intermediate promises may be garbage collected as soon as possible, and we would even want this recursion to run in constant space and time.

In fact, this recursive construct is absolutely necessary for asynchronous loops with a dynamic condition (no fixed number of steps); you cannot avoid it. In Haskell, where this is used all the time for the IO monad, an optimization for it is implemented precisely because of this case. It is very similar to tail call recursion, which is usually eliminated by compilers.
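To see the analogy, here is a purely illustrative synchronous sketch of a tail-recursive loop, the kind a compiler with tail call elimination can run in constant stack space; the recursive promise construct plays the same role for asynchronous steps:

    // Illustrative only: nothing remains to be done after the recursive call,
    // so an optimizing compiler could reuse the stack frame.
    function loop(n) {
        if (n === 0) return "done";   // base case
        return loop(n - 1);           // tail call
    }

    loop(1000); // "done" (JS engines without tail call elimination still grow the stack)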

Has anyone considered memory issues when constructing a chain this way?

Yes. This was discussed on promises/aplus, though without any outcome so far.

Many promise libraries do support iteration helpers to avoid building such then chains by hand, for example Bluebird's each and map methods.
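For example, a small sketch of what that looks like with Bluebird (assuming a reasonably recent Bluebird version; the data and handlers are illustrative):

    // Sketch using Bluebird's iteration helpers instead of a hand-built chain.
    const Promise = require("bluebird");

    // each: runs the handler for one item at a time, in order.
    Promise.each([1, 2, 3], item =>
        Promise.delay(100).then(() => console.log("processed", item))
    );

    // map: runs the handler for the items and collects the results.
    Promise.map([1, 2, 3], item =>
        Promise.delay(100).then(() => item * 2)
    ).then(results => console.log(results)); // [2, 4, 6]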

My own promise library[3][4] implements resolve chains without introducing memory or runtime overhead. When one promise adopts another (even one that is still pending), they become indistinguishable, and the intermediate promises are no longer referenced anywhere.

Will memory consumption vary between promise libraries?

Yes. While this case can be optimized, it rarely is. Specifically, the ES6 specification requires promises to inspect the value at every call to resolve, so collapsing the chain is not possible. The promises in a chain could even be resolved with different values (by constructing an example object that abuses getters - not something you would see in real life). The issue was raised on esdiscuss but remains unresolved.

So, if you are stuck with a leaking implementation but need asynchronous recursion, you are better off switching back to callbacks and using the deferred antipattern to propagate the innermost promise result to a single result promise.
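A rough sketch of that workaround (doSomethingAsync and the termination counter below are illustrative stand-ins): one outer promise is created up front, and the recursion is driven by plain callbacks, so no intermediate promises accumulate between iterations:

    // Illustrative sketch: a single result promise; the recursion itself does
    // not chain promise results, so nothing piles up.
    const doSomethingAsync = () => new Promise(res => setTimeout(res, 0));

    function foo() {
        let remaining = 1000; // stand-in for the "more to do" condition
        return new Promise((resolve, reject) => {
            (function doo() {
                if (remaining-- > 0) {
                    doSomethingAsync().then(doo, reject); // recurse; ignore the returned promise
                } else {
                    resolve("done"); // settle the single outer promise exactly once
                }
            })();
        });
    }

    foo().then(console.log); // "done"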

[1]: there is no official terminology for this
[2]: well, they are resolved with each other; but we want them to be resolved with the same value, and that is what we expect
[3]: an undocumented playground; passes the A+ test suite. Read the code at your own risk: https://github.com/bergus/F-Promise
[4]: also implemented for Creed in this pull request

+33
Apr 28 '15 at 23:23

Disclaimer: premature optimization is bad, the real way to find out about performance differences is to benchmark your code, and you should not worry about this (I have only had to once, and I have used promises in at least a hundred projects).

Is this true?

Yes, promises have to "remember" what they are following; if you do this for 10000 promises, you will have a 10000-long promise chain, and if you don't (for example, with recursion) then you won't - this is true for any queueing flow control.

If you have to keep track of 10,000 additional things (the operations), then you need memory for them, and it takes time; if that number is a million, it may not be viable. This varies between libraries.

Has anyone considered memory issues when constructing a chain this way?

Of course, this is a big concern, and it is the use case for something like Promise.each in libraries like Bluebird over plain then chaining.

I personally had to avoid this style in my code once, for a fast application that traverses all the files in a VM, but in the vast majority of cases it is a non-issue.

Will memory consumption vary between promise libraries?

Yes, very much. For example, Bluebird 3.0 will not allocate an additional queue if it detects that a promise operation is already asynchronous (for example, if it starts with Promise.delay) and will simply execute things synchronously (because the asynchronous guarantees are already preserved).
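To show what "starts with Promise.delay" means in practice (this sketch only illustrates the chain shape; whether and how the optimization kicks in depends on the Bluebird version and its internals):

    // Illustrative only: a chain whose first link is already asynchronous.
    const Promise = require("bluebird");

    Promise.delay(100)                     // already async, so the async guarantees hold
        .then(() => console.log("first handler"))
        .then(() => console.log("second handler"));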

This means that what I stated in my answer to the first question is not always true (though it is true in the normal use case) :) Native promises will never be able to do this unless access to internals is provided.

Then again, this is not surprising, since promise libraries differ from one another by orders of magnitude.

+15
Apr 28 '15 at 17:27

To complement the great existing answers, I would like to illustrate the expression that results from such an asynchronous recursion. For simplicity, I use a simple function that computes the power of a given base and exponent. The recursive and base case are equivalent to those of the OP's example:

    const powerp = (base, exp) => exp === 0
        ? Promise.resolve(1)
        : new Promise(res => setTimeout(res, 0, exp)).then(
            exp => powerp(base, exp - 1).then(x => x * base)
          );

    powerp(2, 8); // Promise {...[[PromiseValue]]: 256}

With a few substitution steps, the recursive part can be expanded inline. Note that this expression can be evaluated in your browser:

    // apply powerp with 2 and 8 and substitute the recursive case:
    8 === 0 ? Promise.resolve(1) : new Promise(res => setTimeout(res, 0, 8)).then(
      res => 7 === 0 ? Promise.resolve(1) : new Promise(res => setTimeout(res, 0, 7)).then(
        res => 6 === 0 ? Promise.resolve(1) : new Promise(res => setTimeout(res, 0, 6)).then(
          res => 5 === 0 ? Promise.resolve(1) : new Promise(res => setTimeout(res, 0, 5)).then(
            res => 4 === 0 ? Promise.resolve(1) : new Promise(res => setTimeout(res, 0, 4)).then(
              res => 3 === 0 ? Promise.resolve(1) : new Promise(res => setTimeout(res, 0, 3)).then(
                res => 2 === 0 ? Promise.resolve(1) : new Promise(res => setTimeout(res, 0, 2)).then(
                  res => 1 === 0 ? Promise.resolve(1) : new Promise(res => setTimeout(res, 0, 1)).then(
                    res => Promise.resolve(1)
                  ).then(x => x * 2)
                ).then(x => x * 2)
              ).then(x => x * 2)
            ).then(x => x * 2)
          ).then(x => x * 2)
        ).then(x => x * 2)
      ).then(x => x * 2)
    ).then(x => x * 2); // Promise {...[[PromiseValue]]: 256}

Interpretation:

  1. With new Promise(res => setTimeout(res, 0, 8)) the executor is invoked immediately and performs a non-blocking computation (mimicked here with setTimeout). An unsettled Promise is then returned. This is equivalent to doSomethingAsync() in the OP's example.
  2. A resolve callback is associated with this Promise via .then(... Note: the body of this callback has been substituted with the body of powerp.
  3. Point 2) is repeated, and a nested then handler structure builds up until the base case of the recursion is reached. The base case returns a Promise resolved with 1.
  4. The nested then handler structure is then "unwound" by invoking the corresponding resolve callbacks accordingly.

Why is the generated structure nested rather than chained? Because the recursive case in then handlers does not allow them to return a value until the base case is reached.

How can this work without a stack? The associated callbacks form a "chain" that bridges the consecutive microtasks of the main event loop.

+1
Jul 12 '16 at 14:34

I just figured out a hack that may help solve the problem: do not recurse in the last then; rather, do it in the last catch, since catch is outside of the resolve chain. Using your example, it would look like this:

    function foo() {
        function doo() {
            // always return a promise
            if (/* more to do */) {
                return doSomethingAsync().then(function() {
                    throw "next";
                }).catch(function(err) {
                    if (err == "next") doo();
                });
            } else {
                return Promise.resolve();
            }
        }
        return doo(); // returns a promise
    }
0
Apr 14 '16 at 6:34

This promise pattern will generate a recursive chain. Thus, each resolve() will create a new stack frame (with its own data), using some memory. This means that a large number of chained functions using this promise pattern can produce stack overflow errors.

To illustrate this, I will use a tiny promise library called Sequence, which I wrote. It relies on recursion to provide sequential execution for a chain of functions:

    var funcA = function() {
        setTimeout(function() { console.log("funcA"); }, 2000);
    };

    var funcB = function() {
        setTimeout(function() { console.log("funcB"); }, 1000);
    };

    sequence().chain(funcA).chain(funcB).execute();

Sequence works great for small and medium-sized chains, in the range of 0-500 functions. However, at about 600 chained functions Sequence begins to degrade and often produces errors.

Bottom line: currently, recursion-based promise libraries are more suitable for smaller/medium-sized chains, while reduce-based promise implementations are fine for all cases, including larger chains.

This, of course, does not mean that recursion-based promises are bad. We just need to use them with their limitations in mind. Besides, it is rare that you will need to chain that many calls (>= 500) with promises. I typically use them for asynchronous configurations that use ajax heavily, and even in the most complex cases I have not seen a situation with more than 15 chained calls.

On a side note ...

These statistics were obtained from tests performed with another of my libraries - provisnr - which captures the number of function calls achieved within a given time interval.

0
Jul 23 '17 at 17:17


