What are the advantages and disadvantages of "point-free" style in functional programming?

I know that in some languages (Haskell?) there is a strong preference for a point-free style, i.e. never referring to function arguments by name. This is a very difficult concept for me to grasp, but it might help me to understand the advantages (and maybe even the disadvantages) of this style. Can someone explain?

+50
functional-programming f# pointfree
3 answers

I believe the point is to be concise and to express pipelined computations as a composition of functions, rather than thinking about threading arguments through. A simple example (in F#) is this:

 let sum = List.sum
 let sqr = List.map (fun x -> x * x)

Used as:

 > sum [3;4;5]
 12
 > sqr [3;4;5]
 [9;16;25]

We could express the function "sum of squares" as:

 let sumsqr x = sum (sqr x) 

And use like:

 > sumsqr [3;4;5]
 50

Or we could define it by piping x through:

 let sumsqr x = x |> sqr |> sum 

Written this way, it is obvious that x is mentioned only to be "threaded" through a sequence of functions. Direct composition looks much nicer:

 let sumsqr = sqr >> sum 

This is more terse, and it is a different way of thinking about what we are doing: composing functions rather than imagining the process of passing arguments around. We do not describe how sumsqr works. We describe what it is.

PS: An interesting way to get a feel for composition is to try programming in a concatenative language such as Forth, Joy, Factor, etc. These can be thought of as nothing but composition (in Forth: : sumsqr sqr sum ;), where the space between words is the composition operator.

PPS: Perhaps others can comment on the performance differences. It seems to me that composition may reduce GC pressure by making it more obvious to the compiler that there is no need to produce intermediate values, as there is in pipelining; helping to make the so-called "deforestation" problem more tractable.

+50

The point-free style is considered by some authors as the ultimate style of functional programming. To put it simply, a function of type t1 -> t2 describes a transformation from one element of type t1 into another element of type t2. The idea is that "pointful" functions (written using explicit variables) emphasize the elements (when you write \x -> ... x ..., you describe what happens to the element x), while "point-free" functions (expressed without using variables) emphasize the transformation itself, as a composition of simpler transformations. Advocates of the point-free style argue that transformations should indeed be the central concept, and that the pointful notation, while easy to use, distracts us from this noble ideal.
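
To make the distinction concrete, here is a minimal sketch of my own (in F#, since the question is tagged f#; the binding names are made up for illustration):

 // Pointful: the element has a name, x, and we say what happens to it
 let doubleAllPointful xs = List.map (fun x -> x * 2) xs

 // Point-free: only the transformation is named; no element variables
 let doubleAllPointfree = List.map ((*) 2)

Both have type int list -> int list; the second one reads as "the transformation that maps doubling over a list", with no mention of the elements themselves.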

Point-free functional programming has been around for a very long time. It was already known to the logicians who studied combinatory logic after the seminal work of Moses Schönfinkel in 1924, and it was the basis of the first study of what would become ML-style type inference, by Robert Feys and Haskell Curry in the 1950s.

The idea of building functions from an expressive set of basic combinators is very attractive and has been applied in various domains, such as the array-manipulation languages derived from APL, or the parser combinator libraries such as Haskell's Parsec. A notable proponent of point-free programming is John Backus. In his 1977 Turing Award lecture (published 1978), "Can Programming Be Liberated from the von Neumann Style?", he wrote:

The lambda expression (with its substitution rules) is capable of defining all possible computable functions of all possible types and of any number of arguments. This freedom and power has its disadvantages as well as its obvious advantages. It is analogous to the power of unrestricted control statements in conventional languages: with unrestricted freedom comes chaos. If one constantly invents new combining forms to suit the occasion, as one can in the lambda calculus, one will not become familiar with the style or useful properties of the few combining forms that are adequate for all purposes. Just as structured programming eschews many control statements to obtain programs with simpler structure, better properties, and uniform methods for understanding their behavior, so functional programming eschews the lambda expression, substitution, and multiple function types. It thereby achieves programs built with familiar functional forms with known useful properties. These programs are so structured that their behavior can often be understood and proven by mechanical use of algebraic techniques similar to those used in solving high school algebra problems.

So here it is: the main advantage of point-free programming is that it forces a structured combinator style that makes equational reasoning natural. Equational reasoning has been particularly touted by the proponents of the "Squiggol" movement (see [1] [2]), which indeed uses a fair share of point-free combinators and computation/rewriting/reasoning rules.
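
As a small illustration of what equational reasoning buys you, here is a sketch of my own in F# (the "map fusion" law itself is standard; the bindings are made up):

 // The map fusion law:  List.map f >> List.map g  =  List.map (f >> g)
 // lets us rewrite two list traversals into one, purely algebraically.
 let twoPasses = List.map ((+) 1) >> List.map ((*) 2)
 let onePass = List.map ((+) 1 >> (*) 2)

 // Both compute the same result:
 // > twoPasses [1;2;3];;   // [4; 6; 8]
 // > onePass [1;2;3];;     // [4; 6; 8]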

Finally, one of the reasons for the popularity of point-free programming among Haskellites is its relation to category theory. In category theory, morphisms (which can be seen as "transformations between objects") are the basic objects of study and computation. While partial results do allow reasoning in specific categories in a pointful style, the common way to build, examine, and manipulate arrows is still the point-free style, and other syntaxes such as string diagrams also exhibit this "pointfreeness". There are rather close links between the people advocating "algebra of programming" methods and the users of categories in programming (for example, the authors of the banana paper [2] are hardcore categorists).

You might be interested in the Pointfree page on the Haskell wiki.

The disadvantage of the point-free style is rather obvious: it can be a real pain to read. The reason we still love to use variables, despite the numerous horrors of shadowing, alpha-equivalence, etc., is that it is a notation that is just so natural to read and think about. The general idea is that a complex function (in a referentially transparent language) is like a complex plumbing system: the inputs are the parameters; they get into some pipes, are applied to inner functions, duplicated (\x -> (x,x)) or forgotten (\x -> (), a pipe leading nowhere), etc. And the variable notation nicely hides all this machinery: you give a name to the input and names to the outputs (or auxiliary computations), but you don't have to describe the whole plumbing plan, where the small pipes must run so as not to be a hindrance to the bigger ones, etc. The amount of plumbing inside something as short as \(f,x,y) -> ((x,y), f x y) is amazing. You may follow each variable individually, or read each intermediate plumbing node, but you never have to see the whole machinery together. When the point-free style is used, all the plumbing is explicit: you have to write everything down and look at it afterwards, and sometimes it is just plain ugly.
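
To see how explicit the plumbing gets, here is a sketch of my own in F#; the combinators fork, first, app and uncurry are hand-written for the example, they are not part of the F# library:

 // Hand-written plumbing combinators (assumed; not standard F#):
 let fork f g x = (f x, g x)       // duplicate the input into two pipes
 let first f (x, y) = (f x, y)     // transform only the first pipe
 let app (f, x) = f x              // apply a function carried in a pipe
 let uncurry f (x, y) = f x y      // merge two pipes into one call
 let p1 (a, _, _) = a              // projections out of a triple
 let p23 (_, b, c) = (b, c)

 // Pointful: the plumbing hides behind the names f, x and y
 let pointful (f, x, y) = ((x, y), f x y)

 // Point-free: every duplication and projection is spelled out.
 // (The annotation pins the type; F#'s value restriction would
 // otherwise reject this partially applied definition.)
 let pointfree : (int -> int -> int) * int * int -> (int * int) * int =
     fork p23 (fork p1 p23 >> first uncurry >> app)

 // > pointfree ((+), 3, 4);;   // ((3, 4), 7)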

PS: this plumbing vision is closely related to stack-based programming languages, which are probably the most point-free programming languages in (mild) use. I would recommend trying to do some programming in them, just to get a feel for it (as I would recommend doing some logic programming). See Factor, Cat, or the venerable Forth.

+59

While I am attracted to the point-free concept and use it for some things, and agree with all the positives said before, I have found these things to be negatives with it (some of them are detailed above):

  • The shorter notation reduces redundancy; in a heavily structured composition (ramda.js style, or point-free in Haskell, or any concatenative language) reading the code is more complex than linearly scanning through a bunch of const bindings and using a symbol highlighter to see which binding goes into which other downstream calculation. Apart from the tree-versus-linear structure, the loss of descriptive symbol names makes the function hard to understand intuitively. Of course both the tree structure and the loss of named bindings have many positives as well; for example, the functions will feel more general, not tied to some application domain through the chosen symbol names, and the tree structure is semantically present even if the bindings are laid out, and can be understood sequentially (lisp let/let* style).

  • Point-free style is simplest when just piping through or composing a series of functions, since that also results in a linear structure that we humans find easy to follow. However, threading some intermediate computation through to multiple consumers is tedious. There is all kinds of wrapping into tuples, lensing and other painstaking machinery just to make some value accessible that would otherwise be just a multiple use of some value binding (see the sketch after this list). Of course the repeated part can be extracted out as a separate function, and maybe that is a good idea anyway, but there are also arguments for some non-extracted short functions; and even if it is extracted, its arguments will have to be somehow piped through both applications, and then there may be a need to memoize the function so as not to actually repeat the computation. One will use a lot of converge, lens, memoize, useWith, etc.

  • JS specific: harder to debug. With a linear flow of let bindings, it is easy to add a breakpoint anywhere to examine the intermediate state. With point-free style, even if a breakpoint is somehow added, the flow of values is hard to read; for example, you cannot just query or hover over some variable in the dev console. Also, since point-free is not native in JS, library functions of ramda.js or similar will obscure the stack quite a bit, especially with the obligatory currying.

  • Code brittleness, especially on systems of non-trivial size and in production. If a new requirement comes in, the above shortcomings kick in (e.g. it is harder to read the code for the next maintainer, who may come along weeks later, and harder to trace the flow of data for inspection). But most importantly, even a seemingly small and innocent new requirement can necessitate a wholly different structuring of the code. It may be argued that this is a good thing, because the result will be a crystal-clear representation of the new requirement, but rewriting large swaths of point-free code is very time-consuming, and then we have not even mentioned testing. So it feels that the looser, less structured, lexical-binding-based coding can be repurposed more quickly. Especially if the coding is exploratory, in a domain of messy human data with weird conventions (time, etc.) that can rarely be captured 100% accurately, and where there may always be an upcoming request to handle something more accurately, or more to the customer's needs, whichever leads to faster turnaround.
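
As a sketch of the multiple-consumers point above (my own F# example; the converge helper is hand-written here in the spirit of ramda's converge, it is not a built-in):

 // Hypothetical helper in the spirit of ramda's converge: fan one input
 // out to two branch functions, then combine their results.
 let converge combine f g x = combine (f x) (g x)

 // Point-free-ish: the input list is consumed by two branches, so it
 // has to be fanned out explicitly. (Annotated because of F#'s value
 // restriction on partially applied definitions.)
 let meanAndMax : float list -> float * float =
     converge (fun m mx -> (m, mx)) List.average List.max

 // Pointful equivalent: just name the input and use it twice.
 let meanAndMax' (xs : float list) = (List.average xs, List.max xs)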

0


