What are the clever uses of strict evaluation?

There seem to be many examples of clever things that can be done in a lazily evaluated language but not in a strictly evaluated one. For example, infinite lists in Haskell, or replacing each element of a tree with the tree's minimum value in a single pass.

Are there examples of clever things done in a strictly evaluated language that are not easy to do in a lazily evaluated language?
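(For context, the one-pass tree example mentioned above is the classic "repmin" problem. A minimal sketch of it in Haskell, relying on lazy evaluation to "tie the knot" between the minimum being computed and the tree being rebuilt:)

```haskell
-- repmin: replace every leaf with the tree's minimum, in one traversal.
-- The trick: m is used to build the new tree *before* it is computed;
-- laziness lets the circular definition work.
data Tree = Leaf Int | Node Tree Tree deriving Show

repmin :: Tree -> Tree
repmin t = t'
  where
    (m, t') = go t
    go (Leaf n)   = (n, Leaf m)            -- m: the overall minimum, demanded later
    go (Node l r) = (min ml mr, Node l' r')
      where
        (ml, l') = go l
        (mr, r') = go r
```

In a strict language the circular binding `(m, t') = go t` would diverge, so this genuinely needs non-strict evaluation (or an explicit second pass).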

+6
haskell lazy-evaluation
9 answers

The main things that you can easily do in an eager (strict) language, but not in a lazy one:

  • Predict the time and space of your programs from source code

  • Allow side effects, including destructive updates of mutable arrays, which makes some algorithms easier to implement efficiently

In my opinion, the main advantage of an eager language is that it is much easier to get your code to behave the way you want, and there are far fewer performance traps where a small change in the code leads to a huge change in performance.

Having said that, in general I still prefer to write complex things in Haskell.

+8

No; there are some things you can do* with lazy evaluation (AKA normal-order reduction, leftmost-outermost reduction) that you cannot do with strict evaluation, but not the other way around.

The reason is that lazy evaluation is in a sense the "most general" evaluation strategy, which is captured by the:

Computational adequacy theorem: if any evaluation order terminates and produces a particular result, then lazy evaluation also terminates and produces the same result.

* (note that this is not about Turing equivalence here)
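(A minimal illustration of the theorem, as a hedged Haskell sketch: a term that lazy evaluation finishes but a strict evaluator would not.)

```haskell
loop :: Int
loop = loop            -- a computation that never terminates

-- Lazy (normal-order) evaluation yields 42, because the diverging
-- second component of the pair is never demanded.  A strict evaluator,
-- which evaluates both components before applying fst, would loop forever.
example :: Int
example = fst (42, loop)
```

No evaluation order can produce a result here that lazy evaluation does not also produce; that is the asymmetry the theorem states.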

+5

Well, no, more or less by definition. In a lazily evaluated language you must, by definition, get the same results as with eager (do people really say "strict" now?) evaluation, except that evaluation is delayed until it is needed, with the consequences that has for storage and so on. So if you could get some behavior beyond that, it would be a bug.

+4

The most obvious use of laziness in everyday languages is the if statement, where only one branch of the conditional expression is ever evaluated.
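(In a fully non-strict language, conditionals need no special status at all: an ordinary function works, because the branch that is not taken is never evaluated. A minimal Haskell sketch, with `if'` and `safeDiv` as illustrative names:)

```haskell
-- A conditional as a plain function; the unused branch is never forced.
if' :: Bool -> a -> a -> a
if' True  t _ = t
if' False _ e = e

-- Safe even though one branch would raise a division-by-zero error:
safeDiv :: Int -> Int -> Int
safeDiv x y = if' (y == 0) 0 (x `div` y)
```

In a strict language, both branches of `if'` would be evaluated before the call, which is exactly why `if` must be built in there.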

The opposite of a purely non-strict (lazy) language would be a purely strict language.

There is at least one case where "purely strict" is useful, namely branch prediction.

A rough paraphrase of the linked article:

Once upon a time in the CPU world, the instructions to execute were loaded only after the branch condition had been checked. At some point, pipelines were added to reduce load times. The downside was that the CPU did not know which branch it would need, so by default it loaded one of them. If the branch went the other way, the pipeline stalled while the code for the other branch was loaded.

One solution is to load and execute both branches; the result of the conditional then determines which branch's result to keep and which to discard. That way the pipeline never stalls.

This is my favorite (only?) example of the benefits of strictness.

+2

"Hello world" comes to mind, or basically anything involving side effects.

With strict evaluation, evaluating an expression can easily have side effects, since you have a clear view of the evaluation order and therefore of the order of the side effects — and order usually matters for side effects. This is the main advantage of strict evaluation, and why most languages have it, and why even performance-oriented languages like C use an eager model.

Both can do the same things, just with different amounts of effort for the human: you can perfectly well simulate infinite lists in a strict language, and you can simulate all the effects of side effects in a non-strict language.
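(Simulating an infinite list in a strict language is usually done by hiding each tail behind a function — a thunk. A sketch of that standard encoding, written here in Haskell syntax but using only functions, so it carries over directly to any strict language; `Stream`, `nats`, and `takeS` are illustrative names:)

```haskell
-- A stream whose tail is delayed behind a unit-function (a thunk),
-- the way one would encode it in a strict language.
data Stream a = Cons a (() -> Stream a)

-- The infinite stream n, n+1, n+2, ...
nats :: Int -> Stream Int
nats n = Cons n (\() -> nats (n + 1))

-- Force only as many elements as we actually want.
takeS :: Int -> Stream a -> [a]
takeS 0 _             = []
takeS k (Cons x rest) = x : takeS (k - 1) (rest ())
```

`takeS 3 (nats 0)` forces only three thunks; the rest of the stream is never built. This is exactly what a lazy language does for you implicitly.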

+1

In a strictly evaluated language such as C#, you can achieve lazy evaluation by returning a thunk (a Func) instead of the value itself. As an example, when building a Y combinator in C#, you can do it as follows:

public Func<int, int> Y(Func<Func<int, int>, Func<int, int>> f) { return x => f(Y(f))(x); } 

This expression would be more concise in a lazy environment:

 y f = f (y f) 
0

As Charlie Martin wrote, the results of a strict and a lazy program should be equivalent. The difference lies in time and space bounds and/or in the expressiveness of the language. Besides the performance benefit of laziness for values that are never needed, in a lazy language it is easy to introduce new control constructs without an extra language mechanism (such as macros in Lisp). On the other hand, laziness can bite you — see "How does Haskell tail recursion work?" — and the same thing can be harder to reason about than in a strict language. (Shouldn't the Haskell compiler recognize that computing x + 1 is cheaper than building a thunk for ( x + 1 )?)
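(The way laziness "bites" tail recursion can be shown with the classic foldl pitfall; a hedged sketch:)

```haskell
import Data.List (foldl')

-- foldl is tail recursive, yet it builds a chain of unevaluated (+)
-- thunks -- (((0 + 1) + 2) + 3) + ... -- and forcing that chain at the
-- end can overflow the stack on a long list.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at every step, so it runs in
-- constant space, behaving like a tail-recursive loop in a strict language.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0
```

Both compute the same sum; only the space behavior differs, which is exactly the kind of performance trap the strict-language answers mention.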

0

I program a bit in Erlang, and I find the lack of the lazy evaluation I learned at university very frustrating.

I briefly looked at some of the Project Euler problems, especially those concerning prime numbers.

With lazy evaluation, you can have a function that returns the list of all primes but only actually computes the ones you demand. So it is very easy to say "give me the first n primes".

Without lazy evaluation, you tend to be restricted to the clumsier "give me a list of all primes between 1 and n".
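(For comparison, the lazy version in Haskell; a minimal sketch using the simple, inefficient trial-division sieve:)

```haskell
-- The infinite list of all primes.  Laziness means it is only
-- computed as far as a consumer demands.
primes :: [Int]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- "Give me the first n primes" is just a take:
firstPrimes :: Int -> [Int]
firstPrimes n = take n primes
```

`firstPrimes 5` demands only the first five elements, so no upper bound ever has to be chosen in advance.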

0

The answer that was rated best unfortunately suffers from a logical error: it does not follow from the theorem Porges cites that more can be done in a lazy language.

The proof of the opposite is that all programs in lazy languages are compiled to equivalents in strict languages (which are compiled further to assembly programs), or are executed by an interpreter written in a strict language (and yes, the interpreter is ultimately an assembly program).

-5
