So ... I was curious and dug a little.
As I mentioned in the comments, IEEE 754 does have a kind of "largest value" state if you count the exception status flags. An infinity combined with a raised overflow flag matches your proposed LFV, with the difference that the flag is only set as a side effect of the operation instead of being stored as part of the value itself. This means you have to check the flag and act on it manually when an overflow occurs, instead of getting the built-in LFV * 0 = 0.
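To make that concrete, here's a minimal numpy sketch (my own illustration, not code from the question): once the overflow has happened, the infinity you are left with carries no memory of it, which is exactly why the LFV * 0 = 0 behavior has to be reimplemented by hand.

```python
import numpy as np

# After an overflow, all you have is inf plus a status flag;
# the value itself has no memory of how it was produced.
with np.errstate(over='ignore'):
    x = np.exp(np.float64(710))  # overflows the double range

print(np.isinf(x))   # True
print(x * 0.0)       # nan -- not 0, because inf * 0 is undefined
```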
There is a rather interesting article on exception handling and its support in programming languages. Quote:
The IEEE 754 model of flagging and returning an infinity or a quiet NaN assumes that the user tests the status frequently (or at least at the appropriate moments). Diagnosing the original problem requires the user to check every result for exceptional values, which in turn assumes that they propagate through all operations so that erroneous data can be flagged. Under these assumptions everything should work, but unfortunately they are not very realistic.
The paper also discusses the poor support for floating-point exception handling, especially in C99 and Java (and I'm sure most other languages are no better). Given that, despite this, there is no serious effort to fix it or to create a better standard, it seems to me that IEEE 754 and its language support are, in a sense, "good enough" (more on that later).
Let me present a solution to your example problem to demonstrate something. I use numpy's seterr to make it raise an exception on overflow:
```python
import numpy as np

def exp_then_mult_naive(a, b):
    # overflow is silently turned into inf, and inf * 0 == nan
    err = np.seterr(all='ignore')
    x = np.exp(a) * b
    np.seterr(**err)
    return x

def exp_then_mult_check_zero(a, b):
    # catch the overflow and handle the b == 0 case by hand
    err = np.seterr(all='ignore', over='raise')
    try:
        x = np.exp(a)
        return x * b
    except FloatingPointError:
        if b == 0:
            return 0
        else:
            return exp_then_mult_naive(a, b)
    finally:
        np.seterr(**err)

def exp_then_mult_scaling(a, b):
    # shift powers of e from a into b until exp(a) no longer overflows
    err = np.seterr(all='ignore', over='raise')
    e = np.exp(1)
    while abs(b) < 1:
        try:
            x = np.exp(a) * b
            break
        except FloatingPointError:
            a -= 1
            b *= e
    else:
        x = exp_then_mult_naive(a, b)
    np.seterr(**err)
    return x

large = np.float64(710)
tiny = np.float64(0.01)
zero = np.float64(0.0)

print('naive:      e**710 * 0    = {}'.format(exp_then_mult_naive(large, zero)))
print('check zero: e**710 * 0    = {}'.format(exp_then_mult_check_zero(large, zero)))
print('check zero: e**710 * 0.01 = {}'.format(exp_then_mult_check_zero(large, tiny)))
print('scaling:    e**710 * 0.01 = {}'.format(exp_then_mult_scaling(large, tiny)))
```
- exp_then_mult_naive does what you did: an expression that overflows, times 0, and you get nan.
- exp_then_mult_check_zero catches the overflow and returns 0 if the second argument is 0; otherwise it does the same as the naive version (note that inf * 0 == nan while inf * positive_value == inf). This is the best you could do if there were an LFV constant.
- exp_then_mult_scaling uses knowledge about the problem to get results for input pairs the other two can't handle: as long as abs(b) < 1, we can multiply b by e while decrementing a without changing the product. So if np.exp(a) becomes finite before b reaches 1, the result fits. (I know I could compute the required shift in one step instead of looping, but it was easier to write this way.)
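The identity the scaling version leans on can be checked directly. This sketch just applies the shift with a hand-picked k instead of a loop:

```python
import numpy as np

# The identity behind exp_then_mult_scaling: e**a * b == e**(a - k) * (b * e**k).
# Moving k units of exponent from a into b leaves the product unchanged while
# bringing exp() back into the representable double range.
a, b, k = 710.0, 0.01, 10
shifted = np.exp(a - k) * (b * np.e ** k)  # e**700 and 0.01 * e**10 are both finite
print(np.isfinite(shifted))                # True
```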
So now you have a situation where a solution that does not require an LFV produces correct results for more input pairs than one that does. The only advantage the LFV would have here is fewer lines of code to get that one particular case right.
By the way, I'm not sure about thread safety with seterr. So if you use it in multiple threads with different settings per thread, look into that beforehand to avoid headaches later.
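If the save/restore dance around seterr bothers you, numpy also provides the errstate context manager, which restores the previous settings on exit even if an exception escapes the block. Here's a sketch of the check-zero logic rebuilt on top of it (exp_checked is my name; returning a signed infinity for nonzero b is my own choice, mirroring the naive behavior):

```python
import numpy as np

def exp_checked(a, b):
    try:
        # errstate scopes the policy to the with-block and restores it on exit,
        # even when the FloatingPointError propagates out.
        with np.errstate(over='raise'):
            return np.exp(a) * b
    except FloatingPointError:
        # exp(a) overflowed: 0 if b == 0, otherwise a signed infinity
        return 0.0 if b == 0 else np.inf * np.sign(b)

print(exp_checked(710.0, 0.0))   # 0.0
print(exp_checked(710.0, 0.01))  # inf
```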
Bonus fact: the original standard actually specified that you should be able to register a trap handler which, on overflow, would receive the result of the operation divided by a large number (see section 7.3). That would let you continue the calculation, as long as you keep in mind that the value is actually much larger. Although I suppose it could be a WTF minefield in multithreaded environments, that's rather moot, since I couldn't actually find any support for it.
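To illustrate what such a trap handler would buy you, here's a toy emulation, not the standard's actual mechanism: keep the value as a pair (x, k) meaning x * 2**(1536*k), where 1536 is the exponent-wrapping bias the 1985 standard specifies for doubles (the wrapped_exp name and the 709 cutoff are my own choices):

```python
import numpy as np

WRAP = 1536  # exponent-wrapping bias for doubles, IEEE 754-1985 section 7.3
LN2 = np.log(2.0)

def wrapped_exp(a):
    # Return (x, k) such that the true value of e**a is x * 2**(WRAP * k).
    k = 0
    while a > 709.0:       # e**a would overflow a double
        a -= WRAP * LN2    # divide by 2**WRAP, as a trapped overflow would
        k += 1
    return np.exp(a), k

x, k = wrapped_exp(710.0)
print(np.isfinite(x), k)   # True 1
```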
To come back to the "good enough" point above: in my opinion, IEEE 754 was designed as a general-purpose format suitable for pretty much any application. When you say "the same problem often arises in many different settings", it does not (or at least did not) seem to arise often enough to justify bloating the standard.
Let me quote from a Wikipedia article:
[...] the more esoteric features of the IEEE 754 standard discussed here, such as extended formats, NaN, infinities, subnormals etc. [...] are designed to provide safe robust defaults for numerically unsophisticated programmers, in addition to supporting sophisticated numerical libraries by experts.
Setting aside my opinion that even NaN as a special value is a somewhat questionable decision: adding an LFV would neither really make things easier or safer for the "numerically unsophisticated", nor allow the experts to do anything they couldn't already.
I suppose the bottom line is that representing real numbers is hard. IEEE 754 does a pretty good job of it and makes it easy for many applications. If yours isn't one of them, in the end you'll just have to deal with the hard parts yourself, either by

- using a higher-precision float type if one is available (okay, that one's pretty easy),
- carefully choosing the order of execution so you don't get overflows in the first place,
- adding a bias to all your values if you know they will all be very large,
- using an arbitrary-precision representation that can't overflow (unless it runs out of memory), or
- something else I can't think of right now.
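As a sketch of the "choose the execution order carefully" option applied to the e**a * b problem: stay in log space and exponentiate only once, so overflow happens only when the final product genuinely doesn't fit (exp_then_mult_log is a hypothetical helper of mine, not from the question):

```python
import numpy as np

def exp_then_mult_log(a, b):
    # e**a * b == sign(b) * e**(a + log|b|), so the exponentiation
    # only overflows if the product itself is too big for a double.
    if b == 0:
        return 0.0  # handles the LFV * 0 case for free
    return np.sign(b) * np.exp(a + np.log(abs(b)))

print(exp_then_mult_log(710.0, 0.01))  # finite
print(exp_then_mult_log(710.0, 0.0))   # 0.0
```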