I am trying to set the bits of a double (IEEE 754) directly. Say I want to "build" the value 3: I would set bits 51 and 62 of the 64-bit representation, giving 1.1 in binary times 2^1, which is 3 in decimal. I wrote this simple program:
    #include <cstdint>
    #include <iostream>

    int main() {
        double t;
        uint64_t *i = reinterpret_cast<uint64_t*>(&t);
        uint64_t one = 1;
        *i = ((one << 51) | (one << 62));
        std::cout << sizeof(uint64_t) << " " << sizeof(uint64_t*) << " "
                  << sizeof(double) << " " << sizeof(double*) << std::endl;
        std::cout << t << std::endl;
        return 0;
    }
The output of this is

    8 8 8 8
    3
when compiling with g++ 4.3 without optimization. However, I get strange behavior when I add the optimization flag -O2 or -O3. If I leave main exactly as it is, I still get the same result. But if I delete the line that prints the four sizeof values, the output becomes

    0
Without optimization, the version that omits the sizeof line correctly prints 3.
So I am wondering whether this is an optimizer bug, or whether I am doing something wrong here.
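For reference, here is a minimal sketch of the same bit construction written with std::memcpy instead of a reinterpret_cast pointer; memcpy is a well-defined way to copy a bit pattern between objects, so it lets me check what value the pattern (1<<51)|(1<<62) actually represents, independent of any pointer trickery:

    #include <cassert>
    #include <cstdint>
    #include <cstring>
    #include <iostream>

    int main() {
        uint64_t one  = 1;
        // bit 62 sets the exponent field to 1024 (biased e = 1),
        // bit 51 sets the top fraction bit, so mantissa = 1.1 in binary
        uint64_t bits = (one << 51) | (one << 62); // 0x4008000000000000
        double t;
        std::memcpy(&t, &bits, sizeof t); // copy the raw bit pattern into t
        std::cout << t << std::endl;      // prints 3
        assert(t == 3.0);
        return 0;
    }

This version prints 3 at every optimization level I tried, which is what leads me to suspect the pointer cast in the original.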