A drawback of pure functions in functional programming

A pure function in functional programming is one that has no side effects. Among other things, this means that it cannot change the values of its input parameters. Can this be considered a drawback in terms of memory usage?
For example, let's say I have a function that simply takes a list and adds another element to it. In C++, it can be as simple as:

void addElem(std::vector& vec, int a) { vec.insert(a); } 

This function obviously does not use much more memory than what is already occupied by the passed object.
But the same thing in Haskell would come to something like this:

addElem :: [Int] -> Int -> [Int]
addElem xs a = xs ++ [a]

Now, since xs does not change its value in this computation, am I right in saying that at some point the function will consume double the memory of xs, one copy for xs and another for the return value? Or does lazy evaluation somehow ensure that xs is in fact returned with just one element appended at the end?
I know that monads are a way to work with something that can have a side effect. But is it possible to use a monad to simply mutate the input and return it?
Also, would changing xs ++ [a] to a : xs consume less memory?

+4
6 answers

Purity implies that a function cannot perform destructive updates, so if

 xs ++ [a] 

is fully evaluated, a copy of xs must be made. That can happen in constant space if xs is generated lazily and not referenced anywhere else, so it can be garbage collected as it is created. Or a copy may be needed alongside xs, but purity allows the cells of both lists to point to the same data, so the data itself does not need to be copied, only the spine (modulo the final cons cell). But if a copy is made, the old value of xs is still available.
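
For a concrete picture, this is roughly what forcing xs ++ [a] involves; appendOne is a made-up name for illustration, not GHC's actual definition of (++):

-- Only the spine of xs is rebuilt: one new (:) cell per element.
-- The elements themselves (and anything they point to) are shared.
appendOne :: [a] -> a -> [a]
appendOne []     a = [a]
appendOne (x:xs) a = x : appendOne xs a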

Also, would changing xs ++ [a] to a : xs consume less memory?

Yes, that simply reuses xs; all that needs to be done is to create a new list cell and point its tail at xs.
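
A minimal illustration (the name addElemFront is mine, not from the question):

-- One new cons cell is allocated; its tail is the original xs,
-- which is shared unchanged and remains usable afterwards.
addElemFront :: Int -> [Int] -> [Int]
addElemFront a xs = a : xs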

From the comment:

I do not want the C++ function to be pure. I want it to change the state of the input. And that is exactly what I wanted to ask about in this question.

In that case, you need an impure function/action. That can be implemented for some data structures, but for cons lists, you need to copy/rebuild. Note that appending an element to a std::vector<T> may also require a reallocation.
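
If all you want is to hide the rebuilding behind an updatable reference, a sketch along these lines works; addElemIO is a hypothetical name, and note that (++) still rebuilds the spine underneath:

import Data.IORef

-- The IORef is updated in place, so callers observe mutation,
-- but the cons-list spine is still copied by (++) internally.
addElemIO :: IORef [Int] -> Int -> IO ()
addElemIO ref a = modifyIORef' ref (++ [a])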

+10

Yes, pure FP can be more memory-intensive than imperative programming, although it depends on how you write your programs and how smart your compiler is. Haskell compilers in particular have very powerful optimizers and can turn pure FP code into fairly efficient machine code that reuses allocated memory. This does require writing good FP code, though: even the smartest compiler will not apply heroic optimizations to programs that merely mimic imperative code with superficially FP-looking constructs.
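
As one example of what the optimizer can do, GHC's list fusion can often eliminate the intermediate lists in a pipeline like the one below, compiling it to a tight accumulating loop; whether fusion actually fires depends on the exact code, the flags (-O2), and the compiler version, so treat this as a sketch rather than a guarantee:

-- With fusion, neither the enumeration nor the mapped list needs to
-- be allocated; the whole expression becomes a loop over Int.
sumDoubled :: Int -> Int
sumDoubled n = sum (map (* 2) [1 .. n])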

Note that your C++ example is not valid as written. If you meant

 v[0] = a; // assuming v.size() > 0 

then it performs no allocation. If you meant

 v.push_back(a); 

then it may or may not allocate, depending on the capacity of v.

Or does lazy evaluation somehow guarantee that xs is in fact returned with just one element appended at the end?

That depends on the implementation and on how the expression is used in its context. When xs ++ [a] is fully evaluated, the entire spine of xs is copied. If it is only partially evaluated, it can perform any number of allocations between none and length xs.
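
For instance, if only the head of the result is ever demanded, (++) only gets far enough to build one new cell (firstOfAppended is just an illustrative name):

-- head forces only the first cons cell of xs ++ [a], so roughly one
-- allocation happens here rather than length xs of them.
firstOfAppended :: [Int] -> Int -> Int
firstOfAppended xs a = head (xs ++ [a])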

Also, would changing xs ++ [a] to a : xs consume less memory?

Yes, it changes the worst case from O(n) to O(1) in allocations/extra memory use. The same goes for time complexity. When processing lists in Haskell, never append to the end. If you really need to append, use a difference list.
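
A minimal difference-list sketch (the names are mine; the dlist package provides a polished version of the same idea):

-- A list is represented as a function that prepends it, so appending
-- at the end becomes function composition, O(1) per element.
type DList a = [a] -> [a]

emptyD :: DList a
emptyD = id

snocD :: DList a -> a -> DList a
snocD dl a = dl . (a :)

toList :: DList a -> [a]
toList dl = dl []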

+8

I think the difference between your two examples comes down to the choice of data structure, not to pure versus impure in itself.

You can have singly linked lists in both approaches, and you can have arrays with constant-time updates in both. In the pure case, updating the data structure semantically makes a copy, but that does not necessarily mean a copy is made physically.

Ropes, for example, are a variant of immutable arrays that allows constant-time concatenation. You can also have an immutable array abstraction with constant-time functional update (where a.set(i, x) semantically returns a copy) using an internal copy-on-write scheme that only performs a physical copy if you actually access the old version after creating the new one (i.e. if you use the persistence of the pure data structure, which is not available in the impure case).
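
The copy-on-write array described above is one way to get this; the same semantics can be sketched with the tree-based IntMap from the containers package instead of an array, as below, where set returns a "copy" that shares almost all of its structure with the old version:

import qualified Data.IntMap.Strict as IM

type Arr a = IM.IntMap a

-- Semantically a copy with index i replaced; physically the old and
-- new versions share every untouched branch of the tree.
set :: Int -> a -> Arr a -> Arr a
set i x arr = IM.insert i x arr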

+4

ab·strac·tion [ab-strak-shuhn]

the act of considering something as a general quality or characteristic, apart from concrete realities, specific objects, or actual instances.

trade-off

a balancing of factors, not all of which are attainable at the same time

leaky abstractions

All non-trivial abstractions, to some extent, are leaky.

Functional programming can be seen as an abstraction, and since all abstractions leak, there are some trade-offs that have to be made.

+3

Now, since xs does not change its value in this computation, am I right in saying that at some point the function will consume double the memory of xs, one copy for xs and another for the return value?

Not necessarily. The advantage of immutable data is sharing, so compilers can optimize by sharing xs in that case. The size can then stay the same as in the C++ case.

Or does lazy evaluation somehow guarantee that xs is in fact returned with just one element appended at the end?

Laziness simply means evaluating only when it is actually necessary; it does not guarantee less memory use. On the contrary, the thunks created by laziness can use more memory.
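
The classic example is a lazy left fold over a long list, which piles up unevaluated thunks, while its strict counterpart keeps the accumulator evaluated:

import Data.List (foldl')

-- foldl builds a chain of O(n) suspended (+) applications before any
-- addition runs; foldl' forces the accumulator at each step and runs
-- in constant extra space.
sumLazy, sumStrict :: [Int] -> Int
sumLazy   = foldl  (+) 0
sumStrict = foldl' (+) 0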

I know that monads are a way to work with something that can have a side effect. But is it possible to use a monad to simply mutate the input and return it?

You are partially right. Monads are used to write code that has side effects. You could very well use a mutable vector and write code very similar to your C++ example in the IO monad.
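
A sketch of that, assuming the external vector package (addElemM is my own name): growing a mutable vector allocates a larger one and copies the old elements, much like std::vector growing past its capacity, and the write afterwards is a genuine in-place update.

import qualified Data.Vector.Unboxed.Mutable as MV

addElemM :: MV.IOVector Int -> Int -> IO (MV.IOVector Int)
addElemM v a = do
  let n = MV.length v
  v' <- MV.grow v 1   -- allocate a vector one slot longer, copying v into it
  MV.write v' n a     -- destructive write into the new slot
  return v'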

Also, would changing xs ++ [a] to a : xs consume less memory?

It depends on the compiler, but I think that xs ++ [a] copies the whole list and then adds the element at the end. (I am not an expert on this.)

You can always profile your code and check its memory consumption, which is the best way to find out.

+3

Different languages have different idioms in common use, and implementers work to make those idioms as efficient as possible. I would not assume that because something entails additional overhead in C++, the same is true in another language, just as I would not assume that the most efficient idioms in another language are the most efficient ones in C++.

Having said that: I am currently working on a large, high-performance codebase where we often return std::vector by value, and we have not found it to be a problem. Things like NRVO and move semantics ultimately mean that this can be very efficient in C++ as well.

+1
