Is there a C/C++ function for safely handling division by zero?

We have a situation where we want to compute a weighted average of two values w1 and w2 , based on how far two other values v1 and v2 are from zero. For example:

  • If v1 is zero, w1 gets no weight, so we return w2
  • If v2 is zero, w2 gets no weight, so we return w1
  • If both values are equally far from zero, we average the weights and return (w1 + w2) / 2

I inherited code like:

 float calcWeightedAverage(float v1, float v2, float w1, float w2)
 {
     v1 = fabs(v1);
     v2 = fabs(v2);
     return (v1 / (v1 + v2)) * w1 + (v2 / (v1 + v2)) * w2;
 }

For a little background: v1 and v2 represent how far two different knobs have been turned, and the weighting of their individual effects should depend only on how far they are turned, not in which direction.

Clearly, this is a problem when v1 == v2 == 0 , since we end up with return (0/0)*w1 + (0/0)*w2 , and you cannot compute 0/0 . Adding a special-case test for v1 == v2 == 0 seems mathematically ugly, even setting aside that exact equality comparisons are bad practice with floating-point numbers.

So I wondered whether:

  • there is a standard library function to handle this
  • there is a cleaner mathematical formulation
+4
10 answers

You are trying to implement this math function:

 F(x, y) = (W1 * |x| + W2 * |y|) / (|x| + |y|) 

This function is discontinuous at x = 0, y = 0 . Unfortunately, as R.. pointed out in the comments, the discontinuity is not removable: there is no reasonable value to assign at that point.

This is because the "reasonable value" changes depending on the path along which you approach x = 0, y = 0 . For example, consider the path F(0, r) as r goes from R1 down to 0 (this is equivalent to holding the X knob at zero and smoothly turning the Y knob down from R1 to 0). The value of F(x, y) will be constant at W2 until you hit the discontinuity.

Now consider the path F(r, 0) (holding the Y knob at zero and smoothly turning the X knob down to zero): the output is constant at W1 until you hit the discontinuity.

Now consider the path F(r, r) (holding both knobs at the same value and turning them down to zero together): the output is constant at (W1 + W2) / 2 until you hit the discontinuity.

This means that any value between W1 and W2 can be obtained as a limit at x = 0, y = 0 , and there is no reasonable way to choose between them. (Moreover, always returning 0 is completely wrong: everywhere else the output is confined to the interval W1..W2 (that is, along any path approaching the discontinuity, the limit of F() lies in that interval), and 0 need not even lie in that interval!)


You can "fix" the problem by slightly adjusting the function: add a constant (for example, 1.0 ) to v1 and v2 after fabs() . This ensures that the minimum contribution of each knob is never zero, only "close to zero" (the constant determines how close).

It may be tempting to make this constant a "very small number", but that would just cause the output to swing wildly as the knobs are moved near their zero points, which is probably undesirable.

+14

This is the best I could come up with quickly.

 float calcWeightedAverage(float v1, float v2, float w1, float w2)
 {
     float a1 = 0.0f;
     float a2 = 0.0f;
     if (v1 != 0) {
         a1 = v1 / (v1 + v2) * w1;
     }
     if (v2 != 0) {
         a2 = v2 / (v1 + v2) * w2;
     }
     return a1 + a2;
 }
+4

I do not see what would be wrong with this:

 float calcWeightedAverage(float v1, float v2, float w1, float w2)
 {
     static const float eps = FLT_MIN; /* or some other suitably small value */
     v1 = fabs(v1);
     v2 = fabs(v2);
     if (v1 + v2 < eps)
         return (w1 + w2) / 2.0f;
     else
         return (v1 / (v1 + v2)) * w1 + (v2 / (v1 + v2)) * w2;
 }

Sure, there is no "clever" trick here to sidestep the division, but why make it harder than it needs to be?

+3

Personally, I see nothing wrong with explicitly checking for division by zero. We all do it, so it is arguably no uglier than the alternatives.

Alternatively, you can disable IEEE division-by-zero exceptions. How you do that depends on your platform. I know that on Windows it has to be done process-wide, so you can inadvertently interfere with other threads (and they with you) if you are not careful.

Even then, if you do this, your result will be NaN , not 0 , which I very much doubt is what you want. If you have to add a special check anyway, with different logic when you get NaN , you might as well just check for 0 in the denominator up front.

+3

So with the weighted average you need to handle the special case where both are zero. In that case you want to treat it as 0.5 * w1 + 0.5 * w2, right? How about this?

 float calcWeightedAverage(float v1, float v2, float w1, float w2)
 {
     v1 = fabs(v1);
     v2 = fabs(v2);
     if (v1 == v2) {
         v1 = 0.5;
     } else {
         v1 = v1 / (v1 + v2); /* v1 is between 0 and 1 */
     }
     v2 = 1 - v1; /* the weights must sum to 1; this avoids a second addition and division */
     return v1 * w1 + v2 * w2;
 }
+1

You could test for fabs(v1)+fabs(v2)==0 (this is apparently cheapest if you have already computed them) and return whatever value makes sense in that case ( (w1+w2)/2 ?). Otherwise, keep the code as it is.

However, I suspect the algorithm itself is broken if v1==v2==0 is possible. Numerical instability of this sort when the knobs are "close to zero" is hardly desirable.

If the behavior is indeed correct and you want to avoid a special case, you could add the minimum positive floating-point value of the given type to v1 and v2 after taking their absolute values. (Note that DBL_MIN and friends are not the right values: they are the minimum normalized values, whereas you need the minimum of all positive values, including subnormals.) The addition is negligible unless the values are already extremely small; it simply keeps the denominator nonzero in that case.

+1

The problem with an explicit zero check is that you can end up with behavioral discontinuities if you are not careful, as pointed out in caf's answer (and the branch can be expensive in the heart of your algorithm, but don't worry about that until you have measured it...).

I tend to use something that just smooths the weighting around zero.

 float calcWeightedAverage(float v1, float v2, float w1, float w2)
 {
     const float eps = 1e-7f; /* or whatever you like... */
     v1 = fabs(v1) + eps;
     v2 = fabs(v2) + eps;
     return (v1 / (v1 + v2)) * w1 + (v2 / (v1 + v2)) * w2;
 }

Your function is now smooth, with no asymptotes and no division by zero, and as long as either v1 or v2 exceeds 1e-7 by a significant margin it is indistinguishable from the "real" weighted average.

+1

If the denominator is zero, what do you want the default result to be? You can do something like this:

 static inline float divide_default(float numerator, float denominator, float fallback)
 {
     /* note: the fallback parameter cannot be named "default", which is a C keyword */
     return (denominator == 0) ? fallback : (numerator / denominator);
 }

 float calcWeightedAverage(float v1, float v2, float w1, float w2)
 {
     v1 = fabs(v1);
     v2 = fabs(v2);
     return w1 * divide_default(v1, v1 + v2, 0.0f)
          + w2 * divide_default(v2, v1 + v2, 0.0f);
 }

Note that defining the helper as static inline should let the compiler know it can inline it.

0

This should work

 #include <float.h>

 float calcWeightedAverage(float v1, float v2, float w1, float w2)
 {
     v1 = fabs(v1);
     v2 = fabs(v2);
     return (v1 / (v1 + v2 + FLT_EPSILON)) * w1 + (v2 / (v1 + v2 + FLT_EPSILON)) * w2;
 }

edit: I realized there might be some precision problems with this, so instead of FLT_EPSILON you could use DBL_EPSILON for more accurate results (though I think you are returning a float value).

0

I would do the following:

 float calcWeightedAverage(double v1, double v2, double w1, double w2)
 {
     v1 = fabs(v1);
     v2 = fabs(v2);
     /* if both values are equally far from 0 */
     if (fabs(v1 - v2) < 0.000000001)
         return (w1 + w2) / 2;
     return (v1 * w1 + v2 * w2) / (v1 + v2);
 }
-2
