I wrote a 2D curve algorithm and had some code that effectively performed a summation of the form:
for (i=0, end=...; i<end; i++) { value += coefficients[i] * expensiveToCalculateValue(i); }
where coefficients[i] is zero for some stages of the iteration. Since zero times anything is zero anyway (at least under ordinary arithmetic rules), I decided I could significantly optimize this code by first checking whether coefficients[i] is zero, and if so, just continuing on to the next iteration. Added it, sorted, works brilliantly.
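The optimization amounts to a one-line guard in the loop body. A minimal sketch in JavaScript, with the coefficients and `expensiveToCalculateValue` stubbed out as placeholders (the actual function from my algorithm is much costlier):

```javascript
// Hypothetical stand-ins for the real coefficients and the costly term.
const coefficients = [3, 0, 0, 5, 0, 2];
let calls = 0;
function expensiveToCalculateValue(i) {
  calls++;       // count how often the costly work actually runs
  return i * i;  // placeholder for the real expensive computation
}

let value = 0;
for (let i = 0, end = coefficients.length; i < end; i++) {
  if (coefficients[i] === 0) continue; // skip: 0 * anything is 0
  value += coefficients[i] * expensiveToCalculateValue(i);
}
// Only the iterations with a nonzero coefficient pay for the call.
```

With the guard in place, only 3 of the 6 iterations invoke the expensive function; without it, all 6 would.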
But this leaves the question: why is this not done for me? This is not some creative niche version of multiplication; it is plain arithmetic. Almost all languages short-circuit the boolean operations OR and AND: once an operand is found that makes the result invariant from that point on, the rest is not evaluated. So why is arithmetic multiplication by zero not equally short-circuited?
I tried running this code (modified for syntax) in Java, PHP, JavaScript, Perl, Python, C++, and even looked at what Prolog did, but none of them realized that when they see "zero times ..." they need not evaluate the potentially costly second (or third, fourth, etc.) term:
printed = 0;
function veryExpensive() {
  print "oh god this costs so much, x" + (printed++);
  return 0;
}
value = 0 * veryExpensive() * veryExpensive() * veryExpensive()
They all just run veryExpensive() three times.
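The asymmetry is easy to demonstrate in any of these languages: the boolean operators skip their right-hand side, while multiplication does not. A small JavaScript sketch (the `sideEffects` counter is just for illustration):

```javascript
let sideEffects = 0;
function expensive() {
  sideEffects++; // record that the costly call actually happened
  return 0;      // zero, which is also falsy, so it fits both contexts
}

const product = 0 * expensive();        // expensive() IS called
const andResult = false && expensive(); // expensive() is NOT called
// sideEffects is now 1, not 2: only the boolean operator short-circuited.
```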
Now, I understand that you can, if you are that kind of person, write your veryExpensive function to do administrative side-effect work, relying on the fact that it will be called even though its result does not contribute to the arithmetic expression (if you do this, you are probably abusing the language, but everyone loves sneaky ninja code at some point in their programming life), but you only do that because you know the language does not optimize this case away. The expressiveness of your code would not suffer if the language did optimize your arithmetic evaluation.
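If you want short-circuiting semantics for multiplication today, you have to spell them out yourself, for instance by passing the costly terms lazily as functions. A sketch using a hypothetical `lazyProduct` helper (not a language feature):

```javascript
// Hypothetical helper: factors after the first are thunks, invoked
// only while the running product is still nonzero.
function lazyProduct(first, ...thunks) {
  let product = first;
  for (const thunk of thunks) {
    if (product === 0) break; // short-circuit: result can no longer change
    product *= thunk();
  }
  return product;
}

let evaluations = 0;
function veryExpensive() {
  evaluations++; // track whether the costly term was evaluated
  return 7;
}

const a = lazyProduct(0, veryExpensive, veryExpensive); // thunks never run
const b = lazyProduct(2, veryExpensive);                // thunk runs once
```

Of course, having to wrap every factor in a function is exactly the ceremony that built-in short-circuiting of OR and AND spares you.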
So: is there any historical precedent that led the bulk of currently used languages to optimize "true OR ..." and "false AND ..." but not "zero TIMES ..."? Why do we optimize the boolean operations but not multiplication by zero? (And if we are lucky, someone has a fascinating tale to tell about why we are not short-circuiting it now.)
Update
Both Jon Skeet and Nick Bugalis give good arguments for why optimizing this in an existing language would lead to problems, but Nick's answer addresses the question much more directly, so I marked his answer as "correct". However, they cover different aspects of the same problem, so the real answer is a combination of the two.