Initializing a floating-point variable with a large literal

#include <stdio.h>

int main(void)
{
    double x = 0.12345678901234567890123456789;
    printf("%0.16f\n", x);
    return 0;
}

In the above code, I initialize x with a literal that has too many significant digits to be represented exactly by an IEEE 754 double. On my PC with GCC 4.9.2 it works fine: the literal is rounded to the nearest value that fits in a double. I am wondering what happens behind the scenes (at the compiler level) in this case. Does it depend on the platform? Is it legal?

1 answer

When you write double x = 0.1;, the decimal number you wrote is rounded to the nearest double. So what happens when you write 0.12345678901234567890123456789 is basically the same: it is rounded too.
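As a minimal sketch (not part of the original answer), printing 0.1 with more decimal digits than a double actually carries exposes the rounded binary value that was stored:

#include <stdio.h>

int main(void)
{
    double x = 0.1;
    /* Print far more digits than a double holds (about 15-17 significant
     * decimal digits), revealing the nearest representable value,
     * something like 0.1000000000000000055511151... */
    printf("%.25f\n", x);
    return 0;
}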

The behavior is largely implementation-defined, but most compilers use the nearest representable double in place of the constant. The C standard only requires the result to be one of the representable values adjacent to the literal, either the one just above or the one just below, chosen in an implementation-defined manner.
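A small sketch of my own (assuming a typical IEEE 754 double and using nextafter from <math.h>) to show the stored value together with its neighbouring representable doubles, i.e. the candidates the standard allows the compiler to choose between:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 0.12345678901234567890123456789;
    /* The value the compiler actually stored for the literal. */
    printf("stored: %.25g\n", x);
    /* The adjacent representable doubles just above and just below it. */
    printf("above : %.25g\n", nextafter(x, 1.0));
    printf("below : %.25g\n", nextafter(x, 0.0));
    return 0;
}

On some systems this needs to be linked with -lm. On a typical x86-64 build with GCC, the "stored" line shows the literal already rounded to about 17 significant digits, and the neighbours differ from it only in the last binary place.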

