Because that's not what reinterpret_cast is for. All the permitted conversions with reinterpret_cast involve pointers or references, with the exception that an integer or enum type can be reinterpret_cast to itself. This is all defined in the standard, [expr.reinterpret.cast].
I'm not sure what you're trying to achieve here, but if you want randomIntNumber to have the same value as randomUintNumber, then do
const int64_t randomIntNumber = randomUintNumber;
If this leads to a compiler warning, or if you just want to be more explicit, then:
const int64_t randomIntNumber = static_cast<int64_t>(randomUintNumber);
The result of the conversion is equal to the input if randomUintNumber is less than 2^63. Otherwise the result is implementation-defined, but I'd expect every known implementation that has int64_t to define it to do the obvious thing: the result is the value equivalent to the input modulo 2^64.
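To make the "obvious thing" concrete, here is a small sketch (the out-of-range case is implementation-defined before C++20, but the modulo-2^64 result below is what mainstream compilers produce, and C++20 now mandates it):

#include <cstdint>
#include <cassert>

int main() {
    const uint64_t big = UINT64_MAX;                     // 2^64 - 1, well above 2^63
    const int64_t asSigned = static_cast<int64_t>(big);  // implementation-defined before C++20
    assert(asSigned == -1);                              // the modulo-2^64 result
}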
If you want randomIntNumber to have the same bit pattern as randomUintNumber, then you can do this:
int64_t tmp;
std::memcpy(&tmp, &randomUintNumber, sizeof(tmp));
const int64_t randomIntNumber = tmp;
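As a side note, if you have a C++20 compiler, std::bit_cast from <bit> expresses the same bit-pattern copy more directly, and unlike the memcpy version it can appear in a constant expression (assuming the same randomUintNumber as above):

#include <bit>      // std::bit_cast, C++20
#include <cstdint>

// same effect as the memcpy above: copy the bits, not the value
const int64_t randomIntNumber = std::bit_cast<int64_t>(randomUintNumber);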
Since int64_t is guaranteed to use two's complement representation, you would hope that the implementation defines static_cast to give the same result as this for out-of-range values of uint64_t. But AFAIK that isn't actually guaranteed by the standard.
Even if randomUintNumber is a compile-time constant, unfortunately randomIntNumber here is not a compile-time constant. But then, how "random" is a compile-time constant? ;-)
If you need to work around that, and you don't trust the implementation to be sensible about converting out-of-range unsigned values to signed types, then try something like this:
const int64_t randomIntNumber = randomUintNumber <= INT64_MAX
    ? (int64_t) randomUintNumber
    : (int64_t) (randomUintNumber - INT64_MAX - 1) + INT64_MIN;
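To see why this works: both branches only ever cast values in the range [0, INT64_MAX], which is always well-defined, and the second branch then shifts the result down by 2^63. A couple of spot checks, wrapped in a throwaway helper (the name toSigned is just for this illustration):

#include <cstdint>
#include <cassert>

int64_t toSigned(uint64_t u) {
    return u <= INT64_MAX
        ? (int64_t) u
        : (int64_t) (u - INT64_MAX - 1) + INT64_MIN;
}

int main() {
    assert(toSigned(0) == 0);
    assert(toSigned(INT64_MAX) == INT64_MAX);
    assert(toSigned((uint64_t) INT64_MAX + 1) == INT64_MIN); // 2^63, the first out-of-range value
    assert(toSigned(UINT64_MAX) == -1);                      // 2^64 - 1, matches the modulo-2^64 result
}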
Now, I'm an advocate of writing genuinely portable code where possible, but even so, I think this verges on paranoia.
Btw, you may be tempted to write the following:
const int64_t randomIntNumber = reinterpret_cast<int64_t&>(randomUintNumber);
or equivalently:
const int64_t randomIntNumber = *reinterpret_cast<int64_t*>(&randomUintNumber);
That isn't quite guaranteed to work, because although int64_t and uint64_t, where they exist, are guaranteed to be a signed type and an unsigned type of the same size, they aren't actually guaranteed to be the signed and unsigned versions of the same standard integer type. So it is implementation-specific whether or not this code violates strict aliasing. Code that violates strict aliasing has undefined behavior. The following does not violate strict aliasing, and is fine provided the bit pattern in randomUintNumber is a valid representation of a value of long long:
unsigned long long x = 0;
const long long y = reinterpret_cast<long long &>(x);
So on implementations where int64_t and uint64_t are typedefs for long long and unsigned long long, my reinterpret_cast is fine. And just as with the implementation-defined conversion of out-of-range values, you would expect that the sensible thing for implementations to do is to make them corresponding signed/unsigned types. So, like the static_cast and the implicit conversion, you'd expect it to work on any reasonable implementation, but it isn't actually guaranteed.
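If you do go down the reinterpret_cast route, you could at least turn that unchecked assumption into a compile-time check; a minimal sketch, which refuses to compile on any implementation where int64_t is not the signed counterpart of uint64_t:

#include <cstdint>
#include <type_traits>

// fires at compile time if the aliasing assumption does not hold here
static_assert(std::is_same<std::make_signed<uint64_t>::type, int64_t>::value,
              "int64_t is not the signed counterpart of uint64_t on this implementation");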