If you're never going to use the result of a computation, then lazily storing it and never executing it is more efficient than pointlessly executing it anyway. That much is obvious.
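To make the upside concrete, here is a minimal Haskell sketch (the names `expensive` and `unused` are mine, purely illustrative):

```haskell
-- Because evaluation is lazy, the binding below only allocates a thunk;
-- since nothing ever demands its value, the costly work never runs at all.
expensive :: Int -> Int
expensive n = sum [1 .. n * 1000000]   -- stand-in for some costly computation

main :: IO ()
main = do
  let unused = expensive 42   -- stored as a thunk; no work happens here
  putStrLn "done"             -- 'unused' is never forced, so 'expensive' never runs
```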
However, if you *are* going to need the result, then lazily storing the computation and executing it later is less efficient than just executing it now. There is extra indirection. It takes time to write down all the details needed to perform the computation later, and it takes time to load them back in when you realize you actually need to perform it.
This is especially true for something like adding two machine-width integers. If your operands are already in CPU registers, adding them immediately is a single machine instruction. Instead, we laboriously store the suspended addition as a thunk on the heap and come back to it later (possibly incurring a bunch of cache misses and pipeline stalls along the way).
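A sketch of the contrast (again with names of my choosing, not anything from the answer itself):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Calling 'addLazy' lazily builds a heap thunk holding pointers to x and y;
-- forcing it later means a pointer chase (and possible cache misses) instead
-- of a register-to-register add.
addLazy :: Int -> Int -> Int
addLazy x y = x + y

-- The bang patterns force both operands up front; with optimizations GHC can
-- unbox everything and emit essentially a single machine ADD instruction.
addStrict :: Int -> Int -> Int
addStrict !x !y = x + y

main :: IO ()
main = print (addStrict 2 3 + addLazy 2 3)
```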
On top of that, sometimes a computation is cheap and produces a small result, but the details we have to save in order to run it later are large. The canonical example is summing a list: the result might be a single 32-bit integer, but the list being summed can be huge! That is all extra work for the garbage collector, managing data that would otherwise be dead objects ready to be freed.
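A minimal sketch of that problem (names are mine):

```haskell
-- 'total' is a thunk that captures a reference to the entire list, so the
-- garbage collector must keep every cons cell alive until the thunk is
-- forced, even though the final answer fits in one machine word.
main :: IO ()
main = do
  let bigList = [1 .. 10000000] :: [Int]
      total   = sum bigList    -- suspended: retains all of bigList
  -- ... other work could happen here, with bigList pinned in memory ...
  print total                  -- forcing the thunk finally lets the list die
```

The usual remedy is to force such a result early (e.g. with `seq`, a bang pattern, or a strict fold like `Data.List.foldl'`) so the list can be consumed and collected incrementally instead of being retained in full.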
In general, laziness used correctly can lead to huge performance gains, while laziness used incorrectly leads to terrifying performance disasters. And laziness can be very difficult to reason about; it's not easy. With experience, you gradually get used to it, though.
MathematicalOrchid