In a nutshell
You say your compiler is Visual C++ 2010 Express. I do not have access to this compiler, but I understand that it generates programs that initially configure the x87 FPU to use a 53-bit significand, in an attempt to emulate IEEE 754 double-precision computations as closely as possible.
Unfortunately, "as close as possible" is not always close enough. The historical 80-bit floating-point registers can have their significand limited in width to emulate double precision, but they always retain the full exponent range. The difference shows up in particular when manipulating denormals (such as your y ).
What happens
My explanation is that in printf("%23.16e\n", 1.6*y); , 1.6*y is computed as an 80-bit reduced-significand, full-exponent number (it is therefore a normal number), then converted to IEEE 754 double precision (resulting in a denormal), then printed.
On the other hand, in printf("%23.16e\n", x + 1.6*y); , x + 1.6*y is computed entirely with 80-bit reduced-significand, full-exponent numbers (again, all intermediate results are normal numbers), then converted to IEEE 754 double precision, then printed.
This explains why 1.6*y prints the same as 2.0*y but has a different effect when added to x . The number that is printed is a double-precision denormal. The number that is added to x is an 80-bit reduced-significand, full-exponent normal number (not the same value).
What happens with other compilers when they generate x87 instructions
Other compilers, such as GCC, do not configure the x87 FPU to use 53-bit significands. This can have the same kind of consequences (in that case, x + 1.6*y would be computed entirely with 80-bit full-significand, full-exponent numbers, then converted to double precision for printing or for storage in memory). In that case, the problem is noticeable even more often (you do not need to manipulate denormal or infinite values to see differences).
This article by David Monniaux contains all the details you could want, and more.
Removing the unwanted behavior
To get rid of the problem (if you consider it to be one), find the flag that tells your compiler to generate SSE2 instructions for floating-point computations. These implement exactly the IEEE 754 single- and double-precision semantics.
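For instance (flag names taken from the MSVC and GCC manuals; check your exact toolchain version, and note that on 64-bit targets SSE2 is already the default):

```shell
# MSVC (32-bit): generate SSE2 code for floating point
cl /arch:SSE2 program.c

# GCC (32-bit x86): enable SSE2 and use it for floating-point math
gcc -msse2 -mfpmath=sse program.c
```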