As the comments on your question suggest, you should step through the code in a debugger to get a good idea of what happens if you can't follow the explanation in the book. But here is a brief overview of what is going on:
What is being demonstrated is a "memoizer", which applies memoization, a general optimization technique used in functional programming. A function is called pure if its result depends only on the arguments passed to it. If a function is pure, you can cache its results keyed by those arguments; this technique is called memoization. You would do this when the function is expensive to compute and is called several times with the same arguments.
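As a minimal sketch of the general idea (not the book's code; the names `memoize` and `slowSquare` are just for illustration), a memoizing wrapper around a pure one-argument function might look like this:

    // Wrap a pure single-argument function so results are cached by argument.
    function memoize(fn) {
        var cache = {};
        return function (arg) {
            if (!(arg in cache)) {
                cache[arg] = fn(arg);   // compute once, then reuse
            }
            return cache[arg];
        };
    }

    // Example: an expensive pure function called repeatedly.
    var slowSquare = memoize(function (n) {
        // imagine heavy work here
        return n * n;
    });
    slowSquare(4); // computed
    slowSquare(4); // returned straight from the cache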
The classic example used to demonstrate this (as here) is generating Fibonacci numbers. I won't go into how they are defined, but the key point is that as you move to higher numbers, a naive recursive calculation repeats more and more work, since each number is calculated from the previous two. By memoizing each intermediate result, you only need to calculate it once, so the algorithm becomes much faster (dramatically so as you go further up the sequence). A naive version is sketched below.
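For comparison, here is what the naive recursion looks like (illustration only, not the book's code), showing where the repeated work comes from:

    // Naive recursive Fibonacci: the same subproblems are recomputed again
    // and again, so the number of calls grows exponentially with n.
    function fib(n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }
    // fib(5) calls fib(3) twice, fib(2) three times, and so on.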
Regarding this code, memoizer accepts two parameters. The first is memo, which is the cache; in this case it starts with the first two values already filled in as [0, 1], the first two Fibonacci numbers.
The second parameter is the function to which memoization will be applied, in this case the recursive Fibonacci step:
function (shell, n) { return shell(n - 1) + shell(n - 2); }
i.e. the result is the sum of the two previous numbers in the sequence.
The memoizer first checks whether it already has a cached result for n. If so, it returns it immediately. If not, it calculates the result and stores it in the cache. Without this, the recursion repeats the same work over and over and quickly becomes incredibly slow for higher numbers in the sequence.
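A sketch of a memoizer along the lines the answer describes (the details may differ slightly from the book's listing):

    var memoizer = function (memo, formula) {
        var recur = function (n) {
            var result = memo[n];
            if (typeof result !== 'number') {   // not cached yet?
                result = formula(recur, n);     // compute it...
                memo[n] = result;               // ...and store it in the cache
            }
            return result;
        };
        return recur;
    };

    // The Fibonacci function is built by seeding the cache with [0, 1]:
    var fibonacci = memoizer([0, 1], function (shell, n) {
        return shell(n - 1) + shell(n - 2);
    });
    fibonacci(10); // 55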