You're asking too much of your NaNs. Per the IEEE 754 standard, the sign bit of a NaN can be anything. So the compiler, processor, or floating-point library is free to do whatever it wants, and you will get different results with different compilers, processors, and libraries.
In particular, with a program like this, constant folding can mean that the operations are performed by the compiler rather than in the target environment, depending on how the compiler was built. The compiler may use its host's floating-point instructions, or it may use a library such as GMP or MPFR instead; this is not uncommon. Since the IEEE standard says nothing about the sign bit of a NaN, you will end up with different values on different implementations. I would not be at all surprised if you could show that the values change when you turn optimization on or off, and that's without even involving things like -ffast-math.
As an example of an optimization: the compiler knows that you are computing a NaN, so it may decide not to bother flipping the sign bit afterwards. This all falls out of constant propagation. Another compiler does not perform that analysis, so it emits an instruction to flip the sign bit, and the people who designed your processor saw no reason to treat NaN specially for that operation.
In short, don't try to make sense of the sign bit of a NaN.
What exactly are you trying to accomplish here?
Dietrich Epp