Slightly different results from the exp function on Mac and Linux

The following C program gives different results on my Mac and on Linux. This surprised me, because I had assumed that libm implementations were somehow standardized.

    #include <math.h>
    #include <stdio.h>

    int main() {
        double x18 = -6.899495205106946e+01;
        double x19 = exp(-x18);
        printf("x19 = %.15e\n", x19);
        printf("x19 hex = %llx\n", *((unsigned long long *)(&x19)));
    }

Mac output:

    x19 = 9.207186811339878e+29
    x19 hex = 46273e0149095886

and on Linux:

    x19 = 9.207186811339876e+29
    x19 hex = 46273e0149095885

Both were compiled without any optimization flags as follows:

 gcc -lm .... 

I know that I should never compare floats to be exactly the same.
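To quantify how far apart the two results actually are, here is a small stdlib-only Python sketch that reconstructs both doubles from the hex bit patterns printed above and measures their distance in ULPs (units in the last place):

```python
import struct

def to_bits(x):
    """Reinterpret a double's 8 bytes as a 64-bit integer
    (the same trick as the pointer cast in the C program)."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def ulp_distance(a, b):
    """Number of representable doubles between a and b
    (valid here because both values are positive)."""
    return abs(to_bits(a) - to_bits(b))

# The two results, reconstructed from the hex bit patterns above:
mac   = float.fromhex("0x1.73e0149095886p+99")   # 46273e0149095886
linux = float.fromhex("0x1.73e0149095885p+99")   # 46273e0149095885

print(ulp_distance(mac, linux))   # 1: the two results are adjacent doubles
```

So the Mac and Linux results are exactly one ULP apart, i.e. they are adjacent representable doubles.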

This problem arose during debugging; unfortunately, the algorithm that consumes these computed values is numerically unstable, and this small difference leads to large deviations in the final result. But that is a separate problem.

I am just surprised that basic functions like exp are not standardized in the way I would expect for the basic arithmetic operations specified by IEEE 754.

Are there any accuracy guarantees that I can rely on across different libm implementations, on different machines, or across different versions?


Following the discussion below, I used mpmath to compute the value to well beyond machine precision and got 9.2071868113398768244, so the last printed digit is already off in both of my results. The Linux result can be explained by this value being rounded downward (truncated); the Mac result is also off even if round-to-nearest is assumed.

c linux precision numeric macos
2 answers

The C99 specification states (other versions should be similar):

J.3 Implementation-Defined Behavior

1 A conforming implementation is required to document its choice of behavior in each of the areas listed in this subclause. The following are implementation-defined:

...

J.3.6 Floating point

- The accuracy of the floating-point operations and of the library functions in <math.h> and <complex.h> that return floating-point results (5.2.4.2.2).

This means GNU libm and BSD libm may provide different levels of accuracy. What is likely happening is that the BSD implementation on OS X rounds to the nearest ULP (unit in the last place), while the GNU implementation truncates to the next ULP.
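One way to check this hypothesis without mpmath is Python's standard-library decimal module, whose Decimal.exp() is correctly rounded in the current context precision. This sketch computes a high-precision reference for exp of the exact binary value of x18 and compares it against the two doubles (reconstructed from the hex patterns in the question):

```python
from decimal import Decimal, getcontext

# High-precision reference; 25 significant digits is far beyond double precision.
getcontext().prec = 25

x18 = 68.99495205106946          # the (negated) double from the question
ref = Decimal(x18).exp()         # exp of the exact binary value of x18
print(ref)                       # approx. 9.2071868113398768244E+29

# The two libm results, reconstructed from their hex bit patterns:
mac   = float.fromhex("0x1.73e0149095886p+99")   # 46273e0149095886
linux = float.fromhex("0x1.73e0149095885p+99")   # 46273e0149095885

print(Decimal(mac) - ref)        # signed error of the Mac result
print(Decimal(linux) - ref)      # signed error of the Linux result
```

The reference lands between the two adjacent doubles, which is consistent with one implementation rounding to nearest and the other truncating.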


IEEE-754 behavior is specified at the level of binary values. On Linux, I get identical binary values from Python's native math library, mpmath, and MPFR (via gmpy2 ). However, the conversion to a decimal string differs between the three.

    >>> import math, mpmath, gmpy2
    >>> x18 = 68.99495205106946
    >>> x19 = math.exp(x18)
    >>> mp18 = mpmath.mpf("68.99495205106946")
    >>> mp19 = mpmath.exp(mp18)
    >>> gp18 = gmpy2.mpfr("68.99495205106946")
    >>> gp19 = gmpy2.exp(gp18)
    >>> x18 == mp18
    True
    >>> x18 == gp18
    True
    >>> x19 == mp19
    True
    >>> x19 == gp19
    True
    >>> print(x18, mp18, gp18)
    68.99495205106946 68.9949520510695 68.994952051069461
    >>> print(x19, mp19, gp19)
    9.207186811339876e+29 9.20718681133988e+29 9.2071868113398761e+29

After converting to Python's arbitrary-precision integer form, all three results also display identically.

    >>> hex(int(x19))
    '0xb9f00a484ac42800000000000'
    >>> hex(int(mp19))
    '0xb9f00a484ac42800000000000'
    >>> hex(int(gp19))
    '0xb9f00a484ac42800000000000'

So (at least this one) Linux math library, mpmath, and gmpy2.mpfr all agree.

Disclaimer: I maintain gmpy2 and have previously contributed to mpmath .

