How to force all floating-point math to 64-bit double precision (and avoid 80-bit x87 registers)

I am trying to get a numerical package to give the same results on two different platforms (linux and macos) in order to rule out errors / hardware problems. I suspect that the current source of the differences is 64-bit versus 80-bit double-precision arithmetic. I know that gcc has compiler options for this, but I thought there was also a function call to set it, as well as something that could be done on the command line or in the shell environment.

Any ideas?

1 answer

So, assuming you are on x86 and using x87 floating point, you need to set bits 9:8 of the x87 FCW (control word) register to 10b, which selects 53-bit (double) precision. [For some reason, this is the default on Windows.]

Something like this should work in gcc:

void set_fp_mode(void)
{
    short mode;
    __asm__ __volatile__ (
        "fstcw %0\n\t"          /* store current control word to memory */
        "andw $0xFEFF, %0\n\t"  /* clear bit 8 */
        "orw  $0x0200, %0\n\t"  /* set bit 9, so bits 9:8 = 10b */
        "fldcw %0"              /* load the modified control word */
        : "+m" (mode));         /* read-write memory operand */
}

[Where I worked a few years ago, a customer complained that their new computer's floating point was not as good as the old one's, and sent me dumps of binary results from both. I had access to a machine like the customer's old computer and one like the new one, and I could not get them to produce different results - until I realized that one machine ran Windows and the other did not, so the floating-point results were "better" on the non-Windows machine (which ran at 80-bit precision). After I gave the customer code very similar to the above, except setting both bits 8 and 9 (11b) to select 80-bit extended precision, the customer was happy!]
