Two questions at the end of a functional programming course

Here are perhaps the two biggest things I took away from the How to Design Programs (simplified Racket) course I just finished, quoted straight from the course's lectures:

1) Tail call optimization, and its absence from non-functional languages:

Unfortunately, most other languages do not support TAIL CALL OPTIMIZATION. In other words, they grow the stack even for tail calls.

Tail call optimization was invented in the mid-70s, long after the core designs of most languages were settled. Because they lack tail call optimization, these languages instead provide a fixed set of LOOPING CONSTRUCTS that make it possible to traverse data of arbitrary size.

a) What are the equivalents of this kind of optimization in procedural languages that do not have it? b) Does using those equivalents mean that we avoid growing the stack in similar situations in languages that lack it?
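To make the lecture's point concrete, here is a minimal sketch in Python (one of the languages discussed below that lacks tail call optimization); the function names are mine, purely for illustration:

```python
import sys

def sum_to(n, acc=0):
    """Tail-recursive in shape, but Python still pushes a stack frame
    for every call, so a large n raises RecursionError."""
    if n == 0:
        return acc
    return sum_to(n - 1, acc + n)

def sum_to_loop(n):
    """The looping-construct equivalent: constant stack space."""
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

print(sum_to(10))                 # 55 — fine for small inputs
print(sum_to_loop(1_000_000))     # 500000500000 — no stack growth
# sum_to(1_000_000) would exceed sys.getrecursionlimit() and crash.
```

The loop computes exactly what the tail-recursive version computes, but the "place to continue from" is held in the mutated variables `n` and `acc` rather than on the call stack.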

2) Mutation and multi-core processors

This mechanism is fundamental to almost every other language you might program in. We have postponed it until now for several reasons:

  • despite being fundamental, it is surprisingly tricky.

  • excessive use of it leads to programs that are not amenable to parallelization (running on multiple processors). Since multi-core computers are now commonplace, the ability to use mutation only when needed is becoming more and more important.

  • excessive use of mutation can also make programs harder to understand and harder to test.

But mutable variables are important, and learning this mechanism will leave you better prepared to work with Java, Python, and many other languages. Even in those languages, you will want to use a style called "mostly functional programming."

I learned some Java, Python, and C++ before taking this course, so I had come to take mutation for granted. Now all of that has been thrown up in the air by the statement above. My questions:

a) Where can I find more detailed information about what the second bullet point refers to, and what to do about it? and b) What patterns would arise from a "mostly functional programming" style, as opposed to the more careless style I would probably have continued with in these other languages had I not taken this course?

+7
3 answers

As Leppy notes, looping constructs recover the space savings of proper tail calls for the specific kinds of loops they support. The only problem with looping constructs is that the ones you have are never enough, unless you toss the ball into the user's court and force them to simulate the stack explicitly.

To take an example, suppose you are traversing a binary tree using a loop. It works... but you need to explicitly keep track of the "places you still need to get back to." A recursive traversal in a tail-calling language lets you have your cake and eat it too: no space is wasted when it is not needed, and you are not forced to maintain the stack yourself.
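The answer's tree example can be sketched in Python; the `Node` class and function names here are hypothetical, just to show the explicit stack the loop version is forced to carry:

```python
class Node:
    """A minimal binary-tree node, for illustration only."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder_loop(root):
    """Loop-based in-order traversal: the 'places to get back to'
    must be tracked by hand, in an explicit stack."""
    stack, out = [], []
    node = root
    while stack or node:
        while node:              # dive left, remembering where to come back
            stack.append(node)
            node = node.left
        node = stack.pop()       # 'return' to the most recently saved node
        out.append(node.value)
        node = node.right
    return out

tree = Node(2, Node(1), Node(3))
print(inorder_loop(tree))  # [1, 2, 3]
```

The recursive version would simply call itself on `node.left` and `node.right`; the list named `stack` here is exactly the call stack we were forced to simulate.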

Your question about parallelism and concurrency is much more open-ended, and the best pointers are probably to areas of research rather than to existing solutions. I think most would agree that there is a crisis brewing in the computing world: how do we adapt our mutation-heavy programming skills to the new multi-core world?

Simply switching to a functional paradigm is not a silver bullet here either; we still do not know how to write high-level code and, at the same time, generate fast, mutation-free code to run. Lots of people are working on it, though!

+8

To expand on "mutation makes parallelism hard": when you have multiple cores, you need to use synchronization if you want to modify something from one core and have the change seen consistently by all the other cores.

Getting synchronization right is hard. If you over-synchronize, you get deadlocks and slow (serialized, not parallel) performance, etc. If you under-synchronize, you get partially observed changes (where another core sees only part of the changes you made from one core), leaving your objects observable in an invalid "halfway" state.
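A small Python sketch of the synchronization the answer describes (variable names are mine): several threads mutate one shared counter, and a lock makes the read-modify-write step atomic so no updates are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # without the lock, two threads can interleave
            counter += 1    # the read-modify-write and lose updates

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 — exact, because every update was synchronized
```

Removing the `with lock:` line turns this into exactly the "partially observed changes" problem: the final count can come out lower, and by how much is nondeterministic.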

It is for this reason that many functional programming languages encourage message passing over shared state. In that design, the only shared state is the message queue, and managing synchronization is the message queue's problem, not yours.
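The message-queue idea can be sketched with Python's standard `queue.Queue` (which handles its own locking); the worker function and sentinel convention here are illustrative, not prescribed by the answer:

```python
import queue
import threading

tasks = queue.Queue()    # the only shared state; its locking is its problem
results = queue.Queue()

def worker():
    """Workers share nothing but the two queues."""
    while True:
        n = tasks.get()
        if n is None:          # sentinel value: shut this worker down
            break
        results.put(n * n)

workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()
for n in range(10):
    tasks.put(n)
for _ in workers:
    tasks.put(None)            # one sentinel per worker
for w in workers:
    w.join()

squares = sorted(results.get() for _ in range(10))
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

No worker ever touches another worker's data; all coordination is funneled through the queues, which is the point the answer is making.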

+3

a) What are the equivalents of this kind of optimization in procedural languages that do not have it? b) Does using those equivalents mean that we avoid growing the stack in similar situations in languages that lack it?

Well, the whole point of a tail call is that it can evaluate another function without adding to the call stack, so anything that grows the stack cannot really be called equivalent.

A tail call behaves essentially like a jump to the new code, using the language's function-call syntax while handling all the bookkeeping details for you. So in languages without this optimization, you would use a jump within a single function instead: loops, conditional blocks, or even an arbitrary goto if nothing else works.
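One well-known way to get the "jump" behavior back in such languages is a trampoline; this is my own minimal Python sketch, not something from the answer: instead of making a tail call, each function returns a thunk (a zero-argument callable), and a driver loop "jumps" to it without growing the stack.

```python
def trampoline(thunk):
    """Keep 'jumping' until a non-callable (the real answer) comes back."""
    while callable(thunk):
        thunk = thunk()
    return thunk

# Mutually recursive in shape; each 'tail call' is a returned thunk.
def even(n):
    return True if n == 0 else (lambda: odd(n - 1))

def odd(n):
    return False if n == 0 else (lambda: even(n - 1))

print(trampoline(even(1_000_000)))  # True — no RecursionError, constant stack
```

A direct mutually recursive version would blow the stack long before a million calls; the trampoline turns the whole chain into a single loop.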

a) Where can I find more detailed information about what the second bullet point refers to, and what to do about it?

The second bullet point sounds like an oversimplification. There are many ways to make parallelization harder than it needs to be, and overuse of mutation is just one of them.

However, note that parallelization (splitting a task into pieces that can be executed at the same time) is not quite the same as concurrency (having multiple simultaneously executing tasks that may interact), although they certainly overlap. Avoiding mutation is incredibly helpful when writing parallel programs, since immutable data sidesteps many of the race conditions and resource-contention problems that are otherwise possible.
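To illustrate why immutability helps parallelization, here is a sketch using Python's standard `concurrent.futures`; the pure function is my own example, chosen because it touches no shared state at all:

```python
from concurrent.futures import ProcessPoolExecutor

def collatz_length(n):
    """A pure function: its result depends only on its argument and it
    mutates nothing shared, so running many copies in parallel is safe."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

if __name__ == "__main__":
    # Each input is processed independently; no locks are needed because
    # there is no shared mutable state to protect.
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(collatz_length, [6, 7, 27])))  # [8, 16, 111]
```

Contrast this with the locked-counter example above a few paragraphs: once the workers stop sharing mutable state, the synchronization machinery disappears entirely.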

b) What patterns would arise from a "mostly functional programming" style, as opposed to the more careless style I would probably have continued with in these other languages had I not taken this course?

Have you looked at Haskell or Clojure? Both lean strongly toward a very functional style that emphasizes controlled mutation. Haskell is more rigorous about it, but has plenty of tools for working with limited forms of mutability, while Clojure is a bit more permissive and may feel more familiar to you, since it is another Lisp dialect.
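Since the question asks what "mostly functional" looks like in the languages the asker already knows, here is an illustrative Python sketch (the data and names are invented): derive new values from old ones with expressions, rather than accumulating into mutated variables.

```python
from functools import reduce

# Immutable inputs: a tuple of records we never modify in place.
orders = (
    {"item": "tea", "qty": 2, "price": 3.0},
    {"item": "scone", "qty": 1, "price": 2.5},
)

# The careless/imperative habit would be a loop mutating a running total.
# Mostly functional: build each derived value as a new expression.
line_totals = tuple(o["qty"] * o["price"] for o in orders)
grand_total = reduce(lambda a, b: a + b, line_totals, 0.0)

print(line_totals)  # (6.0, 2.5)
print(grand_total)  # 8.5
```

Nothing here forbids mutation; the pattern is simply to confine it (as the course's phrase "only when needed" suggests), which is also roughly the discipline Clojure's persistent data structures and Haskell's type system enforce for you.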

0
