#include <stdio.h>

int main(void)
{
    double x = 0.12345678901234567890123456789;
    printf("%0.16f\n", x);
    return 0;
}
In the above code, I initialize x with a literal that has more significant digits than an IEEE 754 double can represent exactly. On my PC with gcc 4.9.2, it compiles and runs fine: the literal is rounded to the nearest value that fits into a double. I am wondering what happens behind the scenes (at the compiler level) in this case. Does it depend on the platform? Is it legal?
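For reference, here is a minimal sketch (not part of the original code, and assuming an IEEE 754 binary64 double and C99's nextafter from <math.h>) that makes the rounding visible: it prints the stored value with 17 significant digits, which is enough to round-trip a double, along with the two adjacent representable doubles, so you can check that the literal really was rounded to the nearest representable value.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 0.12345678901234567890123456789;

    /* 17 significant digits are enough to uniquely identify a binary64 value. */
    printf("stored value : %.17g\n", x);

    /* Adjacent representable doubles: the literal should be closer to the
       stored value than to either of these neighbors. */
    printf("next below   : %.17g\n", nextafter(x, 0.0));
    printf("next above   : %.17g\n", nextafter(x, 1.0));

    return 0;
}

On Linux you may need to link the math library, e.g. gcc demo.c -lm.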