Haskell produces a negative-signed QNaN for (0/0)

I noticed that Haskell (ghci 7.10.2 from the Haskell Platform on Windows) reverses the sign bit of the QNaN produced by (0/0 :: Double) relative to what I see in C++ (tested with MSVS C++ 2013 and cygwin gcc 4.9.2). Haskell creates the bit pattern 0xfff8000000000000 for (0/0), and -(0/0) produces 0x7ff8000000000000. This is the reverse of the C++ implementations.

Here is a test program to illustrate:

    import Data.Word
    import Unsafe.Coerce
    import Text.Printf

    dblToBits :: Double -> Word64
    dblToBits = unsafeCoerce

    test :: Double -> IO ()
    test d = putStrLn $ printf "%12f 0x%x" d (dblToBits d)

    go :: IO ()
    go = do
      test (0/0)
      test (-(0/0))
      test (1/0)
      test (-(1/0))

This gives the result:

          NaN 0xfff8000000000000   <- I expected 0x7F...?
          NaN 0x7ff8000000000000   <- I expected 0xFF...?
     Infinity 0x7ff0000000000000
    -Infinity 0xfff0000000000000

Note that infinity behaves as expected, but the NaN signs look inverted.
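For context on what those hex patterns mean: a binary64 NaN is any pattern with all eleven exponent bits set and a nonzero mantissa; the sign bit is a free bit. A small helper (my own, not from the question's program) to split a pattern into its fields makes the two NaNs above easy to compare:

```haskell
import Data.Bits (shiftR, (.&.))
import Data.Word (Word64)

-- Split an IEEE 754 binary64 bit pattern into (sign, exponent, mantissa).
fields :: Word64 -> (Word64, Word64, Word64)
fields w = ( w `shiftR` 63              -- 1 sign bit
           , (w `shiftR` 52) .&. 0x7ff  -- 11 exponent bits
           , w .&. 0xfffffffffffff )    -- 52 mantissa bits
```

Both NaNs above decode to exponent 0x7ff and mantissa 0x8000000000000 (the quiet bit set); they differ only in the sign field.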

  • Is this part of Haskell's semantics for NaN being unspecified? That is, does (0/0) mean GHC may use any NaN bit pattern it wants? And is there then an exact way in Haskell to specify a QNaN or SNaN bit pattern in a floating-point value, without resorting to a special IEEE 754 library? I am writing an assembler for a piece of hardware that can be picky about the NaN flavor.

  • Am I abusing unsafeCoerce? I found no easy way in Haskell to convert between a float and its bits and vice versa.
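One way to avoid unsafeCoerce is to round-trip through a Storable cell: write the value into a temporary buffer and read the same eight bytes back at the other type. This is a sketch using only base (on GHC ≥ 8.2, GHC.Float provides castDoubleToWord64 and castWord64ToDouble, which do this directly):

```haskell
import Data.Word (Word64)
import Foreign.Marshal.Alloc (alloca)
import Foreign.Ptr (castPtr)
import Foreign.Storable (peek, poke)
import System.IO.Unsafe (unsafePerformIO)

-- Write the Double into a temporary cell, then reinterpret the
-- same eight bytes as a Word64 (and vice versa below).
dblToBits :: Double -> Word64
dblToBits d = unsafePerformIO $ alloca $ \p ->
  poke p d >> peek (castPtr p)

bitsToDbl :: Word64 -> Double
bitsToDbl w = unsafePerformIO $ alloca $ \p ->
  poke p w >> peek (castPtr p)
```

bitsToDbl also lets you construct a NaN with a specific payload, though note that a compiler or FPU may still quiet an SNaN as it moves through floating-point registers, so quiet NaNs are the safer round trip.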

LITERATURE:

  • MSVS 2013: C++ std::numeric_limits<double>::quiet_NaN() from <limits> gives 0x7ff8000000000000. Also tested on cygwin gcc 4.9.2.
  • std::numeric_limits::quiet_NaN: the reference states that the value of the sign bit is implementation-defined. Does Haskell have a similar rule?
  • Perl's semantics are consistent with MSVC++.
  • A possible Haskell library for IEEE floating point.
  • A somewhat related question uses the same unsafeCoerce approach I ended up with.
Tags: c++ floating-point ieee-754 haskell
1 answer

You are asking too much of your NaN. According to the IEEE standard, the sign bit of a NaN can be anything. Thus the compiler, the processor, or the floating-point libraries can do whatever they want, and you will get different results with different compilers, processors, and libraries.

In particular, with a program like this, constant folding may mean that the operations are performed by the compiler rather than in the target environment, depending on how the compiler is implemented. The compiler may use its own floating-point instructions, or it may use something like GMP or MPFR instead; this is not uncommon. Since the IEEE standard says nothing about the sign bit, you will end up with different values on different implementations. I would not be at all surprised if you could demonstrate that the values change when you turn optimization on or off, never mind enabling things like -ffast-math.

As an example of such an optimization: one compiler knows you are computing a NaN and may decide not to bother flipping the sign bit afterwards; this falls out of constant propagation. Another compiler does no such analysis and therefore emits an instruction to flip the sign bit, and the people who made your processor did not make that operation behave differently for NaN.
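Concretely, negating a binary64 value at the hardware level is typically just an XOR of bit 63, applied blindly to whatever pattern is in the register, NaN or not. A sketch of that operation on the raw bits (an illustration of the mechanism, not a guarantee about any particular compiler):

```haskell
import Data.Bits (xor)
import Data.Word (Word64)

-- Negation as implemented on the bit pattern: flip bit 63.
-- Nothing inspects the pattern for NaN first.
negateBits :: Word64 -> Word64
negateBits = xor 0x8000000000000000
```

Applied to the quiet NaN 0x7ff8000000000000 this yields 0xfff8000000000000: still a quiet NaN, only the sign changed. A compiler that constant-folds the negation away is free to skip this flip entirely, which is one way the discrepancy you observed can arise.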

In short, do not try to make sense of the sign bit on a NaN.

What exactly are you trying to accomplish here?

