In binary, 0.7 is:
b0.1011001100110011001100110011001100110011001100110011001100110...
However, 0.7 in your source code is a double-precision literal, whose value is 0.7 rounded to the nearest representable double-precision value, which is:
b0.10110011001100110011001100110011001100110011001100110
In decimal, that is exactly:
0.6999999999999999555910790149937383830547332763671875
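You can inspect this stored value directly by asking printf for enough digits. A minimal sketch, assuming a C library such as glibc that prints the full correctly-rounded decimal expansion:

    #include <stdio.h>

    int main(void) {
        /* 0.7 is rounded to the nearest double; its exact value has
           52 fractional decimal digits, so ask for all of them */
        printf("%.52f\n", 0.7);
        return 0;
    }

On glibc this prints 0.6999999999999999555910790149937383830547332763671875; other C libraries may round off the tail.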
When you write float a = 0.7, this double value is rounded again, this time to single precision, and a gets the binary value:
b0.101100110011001100110011
which is exactly equal to
0.699999988079071044921875
in decimal form.
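The same trick shows the single-precision value (again assuming a C library that prints exact expansions). Note that passing a float to printf promotes it to double, which is exact:

    #include <stdio.h>

    int main(void) {
        float a = 0.7;        /* rounded to double, then rounded again to float */
        printf("%.24f\n", a); /* promoted to double for printf, without rounding */
        return 0;
    }

which prints 0.699999988079071044921875.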
When you perform the comparison (a < 0.7), you compare this single-precision value (converted to double, which does not round, since every single-precision value is exactly representable in double precision) to the original double-precision value. Because
0.699999988079071044921875 < 0.6999999999999999555910790149937383830547332763671875
the comparison correctly returns true, and your program prints "C".
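Putting this together, presumably the code in question looks something like the following sketch (the "C" / "C++" strings are just labels for the two branches):

    #include <stdio.h>

    int main(void) {
        float a = 0.7;     /* literal rounded to double, then down to float */
        if (a < 0.7)       /* a converts to double exactly, then compares   */
            printf("C\n"); /* taken: the float value is strictly smaller   */
        else
            printf("C++\n");
        return 0;
    }

If the literal were written as 0.7f instead, both sides of the comparison would be the same single-precision value, and a < 0.7f would be false.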
Note that none of this is any different in C++, so whether the code in question is C or C++ is irrelevant. There are certain (numerically unsafe) compiler optimizations that can change this behavior, but they are not unique to C or C++.