Yes, pure FP can be more memory-intensive than imperative programming, although it depends on how you write your programs and how smart your compiler is. Haskell compilers in particular have very powerful optimizers and can compile pure FP code into fairly efficient machine code that reuses allocated memory. This does require writing good FP code, though: even the smartest compiler will not include optimizations to rescue programs that merely mimic imperative code with a thin layer of FP syntax.
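As a small illustration of the "good FP code" point (the function below is a made-up example, not from the question): a strict left fold over a generated range is exactly the kind of pure code GHC's optimizer handles well; with -O it typically fuses the intermediate list away and runs in constant space.

import Data.List (foldl')

-- Sums the squares of 1..n. Pure code, but with -O GHC usually fuses
-- the [1..n] enumeration into the fold, so no list is ever built in
-- memory and the accumulator stays strict.
sumSquares :: Int -> Int
sumSquares n = foldl' (\acc x -> acc + x * x) 0 [1..n]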
Note that your C++ example is not valid. If you meant
v[0] = a; // assuming v.size() > 0
then it does not allocate. If you meant
v.push_back(a);
then it may or may not allocate, depending on the capacity of v.
Or does lazy evaluation somehow guarantee that xs is reused, with only one element added to the end?
That depends on the implementation and on how the expression is used in its context. When xs ++ [a] is fully evaluated, the entire spine of xs is copied. If it is only partially evaluated, it can perform anywhere between zero and length xs allocations.
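For intuition, here is a sketch of why full evaluation copies xs. The definition below mirrors the standard Prelude (++); the name append is used only to avoid clashing with it.

-- Mirrors the Prelude definition of (++): every element of the first
-- list gets a fresh cons cell in the result.
append :: [t] -> [t] -> [t]
append []     ys = ys
append (x:xs) ys = x : append xs ys

-- Fully evaluating append xs [a] therefore allocates length xs new
-- cons cells, while e.g. take 2 (append xs [a]) only allocates two.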
Could I also change xs ++ [a] to a : xs so it consumes less memory?
Yes, that changes the worst-case allocation / extra memory use from O(n) to O(1). The same goes for time complexity. When processing lists in Haskell, never append to the end; if you really need to, use a difference list.
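A minimal sketch of the difference-list idea (the dlist package on Hackage provides a full implementation; the names below are illustrative):

-- A list represented as a function that prepends it to whatever tail
-- you supply. Appending on the right is then just function composition.
newtype DList a = DList ([a] -> [a])

emptyD :: DList a
emptyD = DList id

snocD :: DList a -> a -> DList a
snocD (DList f) x = DList (f . (x:))   -- O(1): no list is copied

toListD :: DList a -> [a]
toListD (DList f) = f []               -- materialize once, at the end

-- Building a list by repeatedly appending at the end is now O(n)
-- overall, instead of O(n^2) with xs ++ [a] in a loop:
buildUp :: Int -> [Int]
buildUp n = toListD (foldl snocD emptyD [1..n])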