The following C program gives different results on my Mac and on Linux. I am surprised, because I assumed that the libm implementation was somehow standardized:
#include <math.h>
#include <stdio.h>

int main() {
    double x18 = -6.899495205106946e+01;
    double x19 = exp(-x18);
    printf("x19 = %.15e\n", x19);
    printf("x19 hex = %llx\n", *((unsigned long long *)(&x19)));
}
Output on Mac:
x19 = 9.207186811339878e+29
x19 hex = 46273e0149095886
and on Linux:

x19 = 9.207186811339876e+29
x19 hex = 46273e0149095885
Both were compiled without any optimization flags as follows:
gcc -lm ....
I know that I should never compare floats to be exactly the same.
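For reference, this is the kind of tolerance-based comparison I mean instead of exact equality; the relative tolerance of a few machine epsilons and the names used here are arbitrary choices of mine, not something any standard prescribes:

#include <float.h>
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Relative-tolerance comparison; the factor 4 is an arbitrary
   illustrative choice, not a value prescribed by any standard. */
static bool nearly_equal(double a, double b)
{
    double scale = fmax(fabs(a), fabs(b));
    return fabs(a - b) <= 4.0 * DBL_EPSILON * scale;
}

int main(void)
{
    /* The two results printed above differ only in the last digit. */
    double mac_val   = 9.207186811339878e+29;
    double linux_val = 9.207186811339876e+29;
    printf("nearly equal: %d\n", nearly_equal(mac_val, linux_val)); /* prints 1 */
    return 0;
}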
This problem arose during debugging; unfortunately, the algorithm that uses these computed values is numerically unstable, and this small difference leads to significant deviations in the final result. But that is another problem.
I am just surprised that basic functions like exp are not standardized in the way I would expect for the basic arithmetic operations specified by IEEE 754.
Are there any accuracy guarantees that I can rely on across different libm implementations, different machines, or different library versions?
Following the discussion below, I used mpmath to compute the value with more than machine precision and got 9.2071868113398768244e+29. Compared with the two values above, the last digit is already incorrect in both of my results. The Linux result can be explained by rounding this value down; the Mac result is also off if the machine rounds to nearest.
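To double-check that the two machines really produce adjacent doubles, here is a small sketch (names are my own) that rebuilds both values from the hex patterns above and confirms they are exactly one ULP apart; memcpy is used instead of the pointer cast to stay clear of aliasing issues:

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Bit patterns taken from the two outputs above. */
    uint64_t bits_linux = 0x46273e0149095885ULL;
    uint64_t bits_mac   = 0x46273e0149095886ULL;

    double d_linux, d_mac;
    /* memcpy instead of the pointer cast avoids the aliasing issue. */
    memcpy(&d_linux, &bits_linux, sizeof d_linux);
    memcpy(&d_mac,   &bits_mac,   sizeof d_mac);

    /* 17 significant digits are enough to distinguish adjacent doubles. */
    printf("linux: %.16e\n", d_linux);
    printf("mac  : %.16e\n", d_mac);

    /* The Mac value should be the next representable double above the
       Linux value, i.e. the two results differ by exactly one ULP. */
    printf("one ULP apart: %s\n",
           nextafter(d_linux, INFINITY) == d_mac ? "yes" : "no");
    return 0;
}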