Are functional languages inherently more parallel than their OO or imperative cousins?

I've read and thought about this a bit. In a multi-core future, it seems functional languages will become more popular. I'm curious about functional programming, but my only exposure has been academic, and nothing was complicated enough to really put this class of languages through its paces.

So, as I understand it, pure functions can be parallelized easily and transparently. That's a great feature, since it means there's no trouble writing thread-safe code. However, it doesn't seem to help much with serial code.

Example: fooN(...(foo3(foo2(foo1(0)))))

Chains of calls like this seem common, and sometimes unavoidable. To me, this is why parallelization is so hard: some tasks are (or at least seem) inherently sequential. Does "thinking functionally" give you better ways to decompose seemingly sequential tasks? Do existing functional languages have transparent mechanisms for parallelizing highly serial code? And finally, are functional languages inherently more parallel than OO or imperative languages, and if so, why?
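One partial answer to the decomposition question: when the step being chained is *associative*, a seemingly serial fold can be regrouped into independent chunks. A minimal sketch in Python (the language, the `parallel_reduce` name, and the thread pool are my own illustration, not from the question; a chain of arbitrary `foo` functions has no such property and stays sequential):

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
import operator

def parallel_reduce(op, values, chunks=4):
    """Reduce a sequence with an *associative* op by folding each chunk
    independently, then folding the per-chunk results.  Associativity is
    what makes the regrouping legal; fooN(...(foo1(0))) with arbitrary
    foos cannot be split this way.  (Threads keep the sketch simple; a
    real functional runtime would schedule the chunks across cores.)"""
    values = list(values)
    if not values:
        raise ValueError("need at least one value")
    size = max(1, len(values) // chunks)
    parts = [values[i:i + size] for i in range(0, len(values), size)]
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda part: reduce(op, part), parts))
    return reduce(op, partials)

print(parallel_reduce(operator.add, range(1, 101)))  # 5050, same as a serial sum
```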

+6
parallel-processing functional-programming
2 answers

Functional languages are more parallelizable than imperative and OO languages because of pure functions. But you're absolutely right: if you have serial data dependencies, you can't parallelize them. The main value of functional programming is that whatever parallelism is present in your code is easier to find and reason about, because only data dependencies can get in the way, not shared mutable state.
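To make the two halves of this point concrete, here is a sketch in Python (the language and the `square` helper are my own stand-ins): a pure function can be mapped in parallel with no coordination, while a data-dependent chain cannot be split no matter how pure each step is.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Pure: the result depends only on the argument, so calls may run
    # in any order, on any core, with no locking or coordination.
    return x * x

data = list(range(16))
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(square, data))

# Purity guarantees the parallel result matches the sequential one.
assert parallel == [square(x) for x in data]

# A data dependency defeats this: each step consumes the previous
# result, so the loop below is irreducibly sequential.
acc = 0
for x in data:
    acc = square(acc) + x
```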

In fact, since most mere-mortal programmers find it hard to work in purely functional languages, and because a draconian policy of banning all mutable state can be inefficient, there has been some buzz around the idea of allowing individual function bodies to be written imperatively while banning side effects across functions. In other words, every function that is to be parallelized must be pure. You can then use mutable state for local variables, making the code easier to write and more efficient, while still getting safe, easy, automatic parallelization of calls to those pure functions. This is being explored, for example, in version 2.0 of the D language.
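The compromise described above can be sketched in Python (a stand-in for D's `pure`; the `histogram` function is my own example): the body mutates freely, but only local state, so the function is externally pure and calls to it parallelize safely.

```python
from concurrent.futures import ThreadPoolExecutor

def histogram(text):
    """Externally pure: the return value depends only on `text`, and
    nothing outside the function is touched.  Internally it mutates a
    local dict -- convenient and efficient, yet each call owns its own
    state, so concurrent calls cannot interfere."""
    counts = {}                       # local mutable state only
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    return counts

docs = ["abba", "banana", "cabbage"]
with ThreadPoolExecutor() as pool:
    # Safe to run in parallel: no shared mutable state between calls.
    results = list(pool.map(histogram, docs))
```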

+8

This mostly comes down to side effects. If the compiler knows that certain parts of the code are free of side effects, it can use the structure of the code to run some of them in parallel.

Consider LINQ in C#, which is semi-functional:

 var someValues = from c in someArray
                  where /* some comparison with no side effects */
                  select c;

You state the intent of what you want done; if the compiler knew that every part of the expression were side-effect-free, it could safely hand different slices of the array to different cores for processing. In fact there is .AsParallel(), which gives you parallel LINQ (PLINQ) and allows exactly that. The problem is that it cannot enforce the absence of side effects (being in a language/framework that doesn't support that), which can get really ugly if developers aren't aware of it. That's why they made it explicit, but you can see the problems this creates down the road.
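For readers without C#, here is the same idea sketched in Python (my own stand-in for PLINQ; the `is_even` predicate is invented for illustration): a pure predicate makes the parallel filter give exactly the sequential answer, and nothing in the language would stop an impure one.

```python
from concurrent.futures import ThreadPoolExecutor

def is_even(c):
    # Pure predicate: no side effects, result depends only on c --
    # the "some comparison with no side effects" from the LINQ query.
    return c % 2 == 0

some_array = list(range(20))

# Sequential "where ... select":
sequential = [c for c in some_array if is_even(c)]

# Parallel version in the spirit of .AsParallel(): evaluate the
# predicate on a thread pool, then keep the elements that passed,
# preserving the original order.
with ThreadPoolExecutor(max_workers=4) as pool:
    flags = list(pool.map(is_even, some_array))
parallel = [c for c, keep in zip(some_array, flags) if keep]

assert parallel == sequential
# Had is_even appended to a shared list (a side effect), the parallel
# run could interleave those writes unpredictably; neither Python nor
# C# can enforce purity here, which is why PLINQ is opt-in.
```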

+6
