In hexadecimal, 1065353216 is 0x3F800000. If you interpret this as a 32 bit floating point number, you will get 1.0. If you write it in binary format, you will get the following:
3 F 8 0 0 0 0 0
0011 1111 1000 0000 0000 0000 0000 0000
Or grouped in different ways:
0 01111111 00000000000000000000000
s eeeeeeee vvvvvvvvvvvvvvvvvvvvvvvvv
The first bit ( s ) is the sign bit, the next 8 bits ( e ) are the exponent, and the last 23 bits ( v ) are the significand. Per IEEE 754, the single-precision exponent is stored in offset-binary form with an offset (bias) of 127. Interpreting this, you see that the sign is 0 (positive), the unbiased exponent is 0 (01111111b = 127, the bias value), and the fraction is 0. With the significand's implicit leading 1, this gives you +1.0 × 2⁰, which is 1.0.
In any case, what the cast does is take a reference to float ( b ) and reinterpret it as a reference to int ( (int&) ). So when you read the value of j , you get the bits of b . Interpreted as a float, those bits mean 1.0; interpreted as an int, they mean 1065353216.
For what it's worth, I have never used a reference cast like (int&) . I would not expect to see it, or use it, in any normal C++ code.
John Kugelman