Optimization of consecutive map / filter / fold calls

Let's say I have a large list on which I would like to perform several map, filter and fold / reduce calls. For clarity and expressiveness, this should be done with small lambda functions passed to map / filter / fold. However, as far as I know, each of these calls actually walks over the list, applying its lambda to every element (possibly inlined) and producing a new list. If so, I could just as well write a for-each loop and merge all the lambda bodies into it.
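
For illustration, here is roughly the kind of code I have in mind (the square / modulo / sum steps are just placeholders):

    from functools import reduce

    xs = range(1_000_000)

    # Chained calls: one lambda invocation per element per stage
    # (and, in eager implementations, an intermediate list per stage).
    result = reduce(
        lambda acc, x: acc + x,
        filter(lambda x: x % 3 == 0,
               map(lambda x: x * x, xs)),
        0,
    )

    # Hand-fused equivalent: a single loop with all the lambdas merged into its body.
    acc = 0
    for x in xs:
        y = x * x
        if y % 3 == 0:
            acc += y

    assert result == acc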

I measured the runtime of a simple map / filter / reduce pipeline and of the corresponding imperative for-each loop in Python, and, as expected, the latter was more than twice as fast, though I know Python is not the best language in this regard.
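
Roughly the shape of the measurement (not my exact benchmark code; absolute numbers will of course depend on the machine and Python version):

    import timeit
    from functools import reduce

    xs = list(range(100_000))

    def functional():
        # map / filter / reduce pipeline with small lambdas
        return reduce(lambda acc, x: acc + x,
                      filter(lambda x: x % 3 == 0,
                             map(lambda x: x * x, xs)),
                      0)

    def imperative():
        # the same computation, hand-fused into one loop
        acc = 0
        for x in xs:
            y = x * x
            if y % 3 == 0:
                acc += y
        return acc

    print("map/filter/reduce:", timeit.timeit(functional, number=100))
    print("for loop:         ", timeit.timeit(imperative, number=100))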

My questions are: could a compiler recognize such chains and somehow combine them into a single loop? Are there compilers that already do this? I am mainly interested in functional languages (Haskell, Erlang / Elixir, Scala), but it would be nice to hear about others too (Rust, LINQ implementations).

+4
1 answer

Yes, such optimizations have been studied and implemented many times.

"fusion" ( map fusion), map f . map g = map (f . g). , "" ( ).

Several compilers and libraries do this in one form or another (GHC Haskell, for example, fuses many list and stream operations via rewrite rules). Scala has lazy Stream s, and Clojure has transducers (which compose the processing steps without building intermediate collections).

In Python 3 (as with C#'s IEnumerable/LINQ and Java's new Stream s), map and filter are lazy: they return iterators rather than building intermediate lists, so elements are produced on demand. xs = map(print, range(10)) does not print anything by itself; the calls only happen when you actually iterate over xs. (Note that laziness avoids the intermediate lists, but it does not by itself fuse everything into one loop: each stage still adds a layer of iterator calls.)
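
A quick way to see that laziness (a small sketch, Python 3):

    xs = map(print, range(10))   # nothing is printed yet: map returns a lazy iterator
    print(type(xs))              # <class 'map'>
    list(xs)                     # only now does iterating drive the print calls

    # Chained stages stay lazy too: no intermediate list is materialized,
    # each element flows through both stages one at a time.
    pipeline = filter(lambda y: y % 3 == 0, map(lambda x: x * x, range(10)))
    print(list(pipeline))        # [0, 9, 36, 81]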

+2
