How do (co)recursive definitions work in Haskell?

I'm playing with the language to start learning it, and I'm puzzled about how recursive definitions work.

For example, take the sequence of triangular numbers (TN n = sum [1..n]).

Proposed Solution:

 triangularNumbers = scanl1 (+) [1..] 

So far so good.

But the solution I came up with was:

 triangularNumbers = zipWith (+) [1..] $ 0 : triangularNumbers 

This is also correct.
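
Both definitions agree on every prefix. As a quick check in GHCi (a minimal sketch; triScan and triZip are names I've introduced so the two definitions can live in one module):

 triScan, triZip :: [Integer]
 triScan = scanl1 (+) [1..]               -- running sums of 1, 2, 3, ...
 triZip  = zipWith (+) [1..] (0 : triZip) -- the corecursive version

 -- ghci> take 6 triScan
 -- [1,3,6,10,15,21]
 -- ghci> take 6 triZip
 -- [1,3,6,10,15,21]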

Now my question is: how does this translate to a lower-level implementation? What exactly happens behind the scenes when such a recursive definition is evaluated?

1 answer

Here is a simple recursive function that gives the nth triangular number:

 triag 0 = 0
 triag n = n + triag (n-1)
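
For instance, triag 3 unfolds by reducing to smaller inputs until the base case is hit (a hand-evaluation sketch):

 -- triag 3
 --   = 3 + triag 2
 --   = 3 + (2 + triag 1)
 --   = 3 + (2 + (1 + triag 0))
 --   = 3 + (2 + (1 + 0))
 --   = 6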

Your solution triag' = zipWith (+) [1..] $ 0 : triag' is something fancier: it is corecursive. Instead of computing the nth number by reducing it to values at smaller inputs, you define the entire infinite sequence of triangular numbers, with each next value determined recursively from the initial segment before it.
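
You can see this by unfolding the definition by hand, one zipWith step at a time (a sketch; each new element is the sum of the next counter and the previous element of the list itself):

 -- triag' = zipWith (+) [1..] (0 : triag')
 --        = (1+0) : zipWith (+) [2..] triag'
 --        = 1 : (2+1) : zipWith (+) [3..] (tail triag')
 --        = 1 : 3 : (3+3) : zipWith (+) [4..] (drop 2 triag')
 --        = 1 : 3 : 6 : 10 : ...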

How does Haskell deal with such a definition? When it encounters your definition, no computation is performed: it is delayed until the result is needed by some further computation. When you access a specific element of your triag' list, Haskell starts computing the elements of the list, based on the definition, up to the element being accessed. For more details, I found this article on lazy evaluation helpful. In general, lazy evaluation is great, as long as you don't need to predict memory usage.
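
To make that demand-driven order visible, you can instrument the definition with trace from Debug.Trace (a sketch; triagTraced and step are hypothetical helpers I've introduced, not part of the original solution):

 import Debug.Trace (trace)

 -- Each element announces when its value is actually forced.
 triagTraced :: [Integer]
 triagTraced = zipWith step [1..] (0 : triagTraced)
   where
     step n prev = trace ("forcing element " ++ show n) (n + prev)

 main :: IO ()
 main = print (triagTraced !! 3)
 -- Traces "forcing element 4", then 3, 2, 1: the demand for index 3
 -- propagates back through the initial segment, and nothing beyond
 -- index 3 is ever computed.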

Here's a similar SO question with a step-by-step explanation of evaluating fibs = 0 : 1 : zipWith (+) fibs (tail fibs), a corecursive definition of the Fibonacci sequence.
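
The same hand-unfolding technique applies there (a sketch):

 fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
 --   = 0 : 1 : (0+1) : zipWith (+) (tail fibs) (drop 2 fibs)
 --   = 0 : 1 : 1 : (1+1) : zipWith (+) (drop 2 fibs) (drop 3 fibs)
 --   = 0 : 1 : 1 : 2 : 3 : 5 : ...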
