Functional programming: why is so much attention paid to recursion?

I have recently become acquainted with functional programming (FP), using Scala. One thing that stands out from my initial exposure is that FP languages depend heavily on recursion. It also seems that in pure functional languages, the only way to do anything iterative is to write recursive functions.

And because of this heavy use of recursion, it seems the next thing FP languages had to worry about was stack overflows ( StackOverflowError on the JVM), usually caused by long chains of recursive calls. This has been addressed by introducing some optimizations (tail-call optimization that reuses stack frames, plus the @tailrec annotation available from Scala v2.8 onward).

Can someone please enlighten me on why recursion is so important to the functional programming paradigm? Is there something in the definition of functional programming languages that "breaks" if we iterate instead? If so, I would like to know that too.

PS: Please note that I am new to functional programming, so feel free to point me to existing resources if they explain/answer my question. I do realize that Scala, in particular, also provides support for iteration.

+60
scala functional-programming recursion tail-recursion
Sep 30
9 answers

The Church-Turing thesis highlights the equivalence between different models of computability.

With recursion we do not need mutable state while solving a problem, and this makes it possible to specify semantics in simpler terms. Thus, solutions can be simpler in a formal sense.

I believe that Prolog, better than functional languages, shows both the effectiveness of recursion (it has no iteration at all) and the practical limits we run into when using it.

+43
Sep 30 '12 at 8:07

Purely functional programming means programming without side effects, which means that if you write a loop, the body of your loop cannot produce side effects. Thus, if you want your loop to do something, it has to reuse the result of the previous iteration and produce something for the next one. So the body of your loop is a function, taking the result of the previous iteration as a parameter and calling itself for the next iteration with its own result. This does not have much of an advantage over directly writing a recursive function for the loop.

A program that does something non-trivial will have to iterate over something at some point. For functional programming, this means the program has to use recursive functions.
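To make that concrete, here is a small Scala sketch (my own illustration, not part of the original answer; the names are made up): a side-effect-free "loop" threads the previous result into the next step, which is exactly a recursive function, and exactly what library folds package up for you.

```scala
// A side-effect-free "loop": each step consumes the previous
// result and produces the next one, i.e. a recursive function.
def sumTo(n: Int, acc: Int = 0): Int =
  if (n == 0) acc
  else sumTo(n - 1, acc + n) // the "next iteration"

// The same shape, packaged by the standard library as a fold:
val viaFold = (1 to 10).foldLeft(0)(_ + _)

val viaRec = sumTo(10) // 55, same as viaFold
```

The loop body has become the function passed to foldLeft, taking the previous result (the accumulator) as its first parameter.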

+33
Sep 30 '12 at 8:03

The language feature that makes it necessary to do things recursively is immutable variables.

Consider a simple function to calculate the sum of a list (in pseudocode):

    fun calculateSum(list):
        sum = 0
        for each element in list:  # dubious
            sum = sum + element    # impossible!
        return sum

Now, element is different on each iteration over the list, but we can rewrite this to use a foreach function with a lambda argument to get rid of that problem:

    fun calculateSum(list):
        sum = 0
        foreach(list, lambda element:
            sum = sum + element    # impossible!
        )
        return sum

However, the value of the variable sum has to change on each run of the lambda. This is illegal in a language with immutable variables, so you have to rewrite it so that it does not mutate state:

    fun calculateSum([H|T]):
        return H + calculateSum(T)

    fun calculateSum([]):
        return 0

Now, this implementation would do a lot of pushing onto and popping off the call stack, and a program in which every small operation did that would not run very fast. So we rewrite it to be tail recursive, so the compiler can perform tail call optimization:

    fun calculateSum([H|T], partialSum):
        return calculateSum(T, H + partialSum)

    fun calculateSum([], partialSum):
        return partialSum

    fun calculateSum(list):
        return calculateSum(list, 0)

Of course, if you want to loop forever, you absolutely need a tail-recursive call, or else the stack would overflow.

Scala's @tailrec annotation is a tool to help you check which functions are tail recursive. You say "this function is tail recursive", and then the compiler tells you if you are mistaken. This is especially important in Scala compared to other functional languages, because the machine it runs on, the JVM, does not support tail call optimization, so it is not possible to get tail call optimization in Scala in all the circumstances in which you would get it in other functional languages.
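As a sketch of what that looks like in practice (my own example, not from the answer; the helper name go is made up):

```scala
import scala.annotation.tailrec

def calculateSum(list: List[Int]): Int = {
  // @tailrec makes the compiler reject this function
  // if the recursive call were not in tail position.
  @tailrec
  def go(rest: List[Int], partialSum: Int): Int = rest match {
    case Nil    => partialSum
    case h :: t => go(t, h + partialSum) // tail call: the frame is reused
  }
  go(list, 0)
}

val total = calculateSum(List(1, 2, 3, 4)) // 10
```

Because the compiler rewrites the tail call into a jump, this version handles long lists without growing the stack.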

+20
Sep 30 '12 at 8:50

TL;DR: recursion is used to process inductively defined data, which is ubiquitous.

Recursion is natural when you work at higher levels of abstraction. Functional programming is not just about coding with functions; it is about operating at higher levels of abstraction, where we naturally use functions. And using functions, it is natural to reuse the same function (to call it again) from whatever context it makes sense in.

The world is built by repetition of similar/same building blocks. If you cut a piece of fabric in two, you get two pieces of fabric. Mathematical induction is at the heart of mathematics. We humans count (as in 1, 2, 3...). Anything inductively defined (like {numbers from 1} are {1, and numbers from 2}) is natural to process/analyze with a recursive function, following the same cases by which that thing is defined/constructed.

Recursion is everywhere. Any iterative loop is recursion in disguise anyway, because when you re-enter that loop, you re-enter the very same loop again (just with maybe different loop variables). So it is not like inventing new concepts about computation; it is more like discovering the foundations and making them explicit.




So recursion is natural. We just write down some laws about our problem, some equations involving the function we are defining which preserve some invariant (under the assumption that the function is coherently defined), re-stating the problem in simplified terms, and voilà! We have the solution.

An example: a function to calculate the length of a list (an inductively defined, recursive data type). We just assume it is defined and, unsurprisingly, returns the length of the list. What laws must it obey? What invariant is preserved under which simplification of the problem?

The most immediate one is taking the list apart into its head element and the rest, a.k.a. the tail of the list (according to how a list is defined/constructed). The law:

 length (x:xs) = 1 + length xs 

D'uh! But what about the empty list? It must be that

 length [] = 0 

So how do we write such a function? ... Wait ... We've already written it! (It's Haskell, if you were wondering: function application is expressed by juxtaposition, parentheses are used only for grouping, and (x:xs) is a list with x its first element and xs the rest of them.)

All we need from a language to allow this style of programming is that it has TCO (and perhaps, a bit luxuriously, TRMCO ), so there is no stack blow-up, and we are set.
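Since the question is about Scala, the same pair of laws transcribes almost literally into Scala pattern matching (my sketch; the name listLength is made up to avoid clashing with the built-in length):

```scala
// The two laws from above, transcribed to Scala:
def listLength[A](list: List[A]): Int = list match {
  case Nil     => 0                  // length [] = 0
  case _ :: xs => 1 + listLength(xs) // length (x:xs) = 1 + length xs
}

val n = listLength(List("a", "b", "c")) // 3
```

Note this direct transcription is not tail recursive (the pending 1 + happens after the call returns), which is what the accumulator version below addresses.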




Another thing is purity - the immutability of code variables and/or data structures (record fields, etc.).

What this does, besides freeing our minds from having to keep track of what is changing when, is make time explicitly apparent in our code, instead of hiding it in our "changing" mutable variables/data. In imperative code we can only "change" the value of a variable from now on - we can't very well change its value in the past, can we?

And so we end up with lists recording the history of changes, with change explicitly apparent in the code: instead of x := x + 2 we write let x2 = x1 + 2 . This makes reasoning about code simpler.




To illustrate immutability in the context of tail recursion with TCO , consider this tail-recursive rewrite of the length function above, using the accumulator argument paradigm:

    length xs = length2 0 xs              -- the invariant:
    length2 a []     = a                  -- 1st arg plus
    length2 a (x:xs) = length2 (a+1) xs   -- length of 2nd arg

Here TCO means reusing the call frame, in addition to the direct jump, so the call chain for length [1,2,3] can be seen as actually mutating the call-stack frame entries corresponding to the function's parameters:

    length [1,2,3]
    length2 0 [1,2,3]   -- a=0 (x:xs)=[1,2,3]
    length2 1 [2,3]     -- a=1 (x:xs)=[2,3]
    length2 2 [3]       -- a=2 (x:xs)=[3]
    length2 3 []        -- a=3 (x:xs)=[]
    3

In a pure language, without any value-mutating primitives, the only way to express change is to pass updated values as arguments to a function, for further processing. If the further processing is the same as before, then naturally we have to call the same function for it, passing it the updated values as arguments. And that is recursion.




And the following makes the whole history of calculating the length of the argument list explicit (and available for reuse, if need be):

    length xs = last results
      where
        results = length3 0 xs
        length3 a []     = [a]
        length3 a (x:xs) = a : length3 (a+1) xs

In Haskell, this is known as guarded recursion, or corecursion (at least I think so).
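In Scala terms (my own sketch, not from the answer), this history-keeping version is what scanLeft gives you directly:

```scala
// Keep every intermediate accumulator value, like length3 above:
def lengthHistory[A](xs: List[A]): List[Int] =
  xs.scanLeft(0)((a, _) => a + 1)

val history = lengthHistory(List('a', 'b', 'c')) // List(0, 1, 2, 3)
val len     = history.last                       // 3, the length
```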

+7
Sep 30 '12 at 15:24

There are two properties that I consider essential for functional programming:

  • Functions are first-class citizens (relevant only because the second property is needed to make this useful)

  • Functions are pure, i.e. a function called with the same arguments returns the same value.

Now, if you are programming in an imperative style, you need to use assignment.

Consider a for loop. It has an index, and on each iteration the index has a different value. So you could define a function that returns the current index. If you call that function twice, you might get different results, violating principle number 2.

And if you violate principle number 2, passing functions around (principle number 1) becomes extremely dangerous, because now the result of a function may depend on when and how often it is called.
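A tiny Scala illustration of the two principles colliding (my own hypothetical example):

```scala
// Impure: a function closing over a mutable loop index.
// Called twice with the same (zero) arguments, it returns
// different values - a violation of principle number 2.
var i = 0
def currentIndex(): Int = { i += 1; i }
val first  = currentIndex() // 1
val second = currentIndex() // 2

// Pure alternative: make the index an explicit argument.
def next(index: Int): Int = index + 1
val a = next(0) // 1
val b = next(0) // 1, always
```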

+6
Sep 30 '12 at 8:33

There is nothing special about recursion. It is a widespread tool in programming and mathematics, nothing more. However, functional languages are usually minimalistic. They do introduce many fancy concepts such as pattern matching, type systems, list comprehensions and so on, but these are little more than syntactic sugar over very general, very powerful, yet simple and primitive tools. Those tools are function abstraction and function application. This is a deliberate choice, since keeping the core of the language simple greatly simplifies reasoning about it. It also makes compilers easier to write. The only way to describe a loop in terms of these tools is recursion, so imperative programmers may get the impression that functional programming is all about recursion. It is not; one merely has to emulate those fancy loops for the poor souls who cannot give up their syntactic sugar over goto, and so it is one of the first things they run into.

Another point where recursion is required (possibly indirectly) is the processing of recursively defined data structures. The most common example is the list ADT. In FP it is usually defined as data List a = Nil | Branch a (List a) . Since the definition of the ADT is recursive, the function processing it must be recursive too. Again, recursion is in no way special here: processing such an ADT recursively looks natural in both imperative and functional languages. Well, for list-like ADTs loops can still be adopted, but for the various tree-like structures they cannot.
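A Scala sketch of the same point (hypothetical names; Scala's own List is defined in much the same way, with Nil and ::):

```scala
// A recursively defined list ADT, mirroring the answer's definition:
sealed trait MyList[+A]
case object Empty extends MyList[Nothing]
case class Branch[A](head: A, tail: MyList[A]) extends MyList[A]

// Because the type is recursive, its processing function is too:
def size[A](l: MyList[A]): Int = l match {
  case Empty        => 0
  case Branch(_, t) => 1 + size(t)
}

val three = size(Branch(1, Branch(2, Branch(3, Empty)))) // 3
```

The case split in size follows exactly the two constructors of the type: recursion mirrors the data's own shape.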

So recursion is nothing special. It is just another kind of function application. However, due to limitations of modern computing systems (which stem from poorly considered design decisions in C, the de-facto standard cross-platform assembler), function calls cannot be nested infinitely, even if they are tail calls. Because of that, designers of functional programming languages must either restrict the allowed tail calls to tail recursion (Scala), or use sophisticated techniques like trampolining (old GHC codegen), or compile directly to assembly (modern GHC codegen).

TL;DR: there is nothing special about recursion in FP, no more than in imperative programming, but tail recursion is the only kind of tail call allowed in Scala due to JVM limitations.

+6
Sep 30

Avoiding side effects is one of the pillars of functional programming (the other being the use of higher-order functions).

Imagine how you might achieve imperative flow control without relying on mutation. Is it possible?

Of course, for (var i = 0; i < 10; i++) ... depends on mutation ( i++ ), as does any conditional loop construct. In while (something) ... , the value of something has to depend on some mutable state. Granted, while (true) ... does not use mutation, but if you ever want to get out of that loop, you will need an if (something) break . Really, try to think of a (non-infinite) looping mechanism, other than recursion, that does not rely on mutation.

What about for (var x in someCollection) ... ? Now we are getting close to functional programming. x can be seen as a parameter to the loop body, and re-using the name is not the same as reassigning a value. Perhaps in the body of the loop you yield return values as an expression in terms of x (no mutation).

Equivalently, you can move the body of the for loop into the body of a function foo (x) ... and then map it over someCollection using a higher-order function - replacing your loop construct with something like Map(foo, someCollection) .

But then how is the library function Map implemented without mutation? Well, using recursion, of course! It's done for you. It becomes rare that you have to implement recursive functions yourself once you start using the second pillar, higher-order functions, to replace your loop constructs.
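For instance, a library-style Map can be written with recursion alone (a Scala sketch with made-up names, not the actual standard-library implementation):

```scala
// A map written without any mutation: recursion does the looping.
def myMap[A, B](f: A => B, xs: List[A]): List[B] = xs match {
  case Nil    => Nil
  case h :: t => f(h) :: myMap(f, t)
}

val squares = myMap((x: Int) => x * x, List(1, 2, 3)) // List(1, 4, 9)
```

Once this exists, callers never write recursion themselves; they just pass a function for the loop body.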

As a bonus, given tail call optimization, a recursive definition is equivalent to an iterative process. You may also enjoy this blog post: http://blogs.msdn.com/b/ashleyf/archive/2010/02/06/recursion-is-the-new-iteration.aspx

+6
Sep 30

The last time I used a functional language (Clojure), I was never tempted to use recursion. Everything could be treated as a set of things to which a function was applied to obtain partial products, to which another function was applied, until the final result was reached.

Recursion is only one way, and not necessarily the clearest way, to handle the multiple items you usually need to handle to deal with any given problem.

+1
04 Oct '12 at 23:27

For new FP learners, I would like to add my 2 cents. As mentioned in some answers, recursion is the way to make do with immutable variables, but why do we want those? Because they make it easy to run a program on multiple cores in parallel. But why do we want that? Can't we run it on a single core and be happy, as we always have been? No, because the amount of content to process grows every day, while CPU clock speed cannot be raised nearly as easily as adding more cores. Over the last decade, clock speeds for consumer computers have sat around 2.7 to 3.0 GHz, and chip designers are running into problems fitting ever more transistors onto a die. FP has also been around for a very long time, but it did not catch on earlier because it relied on recursion, and memory was very expensive in those days; and since clock speeds kept growing year after year, the community chose to keep going with OOP.

+1
Mar 26 '16 at 16:40


