As the title of the question says, assigning 2^31 to a signed and an unsigned 32-bit integer variable gives an unexpected result.
Here is a short program (in C++) that I wrote to see what happens:
#include <cstdio>
using namespace std;

int main()
{
    unsigned long long n = 1<<31;
    long long n2 = 1<<31;
    printf("%llu\n", n);
    printf("%lld\n", n2);
    printf("size of ULL: %d, size of LL: %d\n", sizeof(unsigned long long), sizeof(long long));
    return 0;
}
Here's the output:
MyPC / # c++ test.cpp -o test
MyPC / # ./test
18446744071562067968 <- Should be 2^31 right?
-2147483648 <- This is correct (-2^31 because of the sign bit)
size of ULL: 8, size of LL: 8
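For comparison, here is a minimal standalone check, separate from the program above (the variable name is just for illustration), that assigns the literal 2147483648 directly instead of computing it with a shift; this variant prints 2147483648 as I would expect, so the value itself clearly fits in an unsigned long long:

#include <cstdio>

int main()
{
    // 2^31 written out as an unsigned long long literal instead of computing 1<<31
    unsigned long long n = 2147483648ULL;
    printf("%llu\n", n);   // prints 2147483648
    return 0;
}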
Then I added another function p() to the original program:
void p()
{
    unsigned long long n = 1<<32;
    printf("%llu\n", n);
}
Compiling and running it confused me even more:
MyPC / # c++ test.cpp -o test
test.cpp: In function ‘void p()’:
test.cpp:6:28: warning: left shift count >= width of type [enabled by default]
MyPC / # ./test
0
MyPC /
Why does the compiler complain that the shift count is too large? sizeof(unsigned long long) returns 8, so doesn't that mean that 2^63 - 1 is the maximum value for this data type?
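To sanity-check my assumptions about the types involved, here is a small standalone sketch, separate from the test program (it uses <climits> and <limits>), that prints the bit widths and actual maximum values instead of guessing them from sizeof:

#include <cstdio>
#include <climits>
#include <limits>

int main()
{
    // Bit widths of the types involved: the int literal being shifted and the
    // unsigned long long destination
    printf("int:                %zu bits\n", sizeof(int) * CHAR_BIT);
    printf("unsigned long long: %zu bits\n", sizeof(unsigned long long) * CHAR_BIT);

    // Largest values each 64-bit type can actually represent
    printf("max unsigned long long: %llu\n", std::numeric_limits<unsigned long long>::max());
    printf("max long long:          %lld\n", std::numeric_limits<long long>::max());
    return 0;
}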
It seemed to me that perhaps n * 2 and n << 1 do not always behave the same, so I tried this:
void s()
{
    unsigned long long n = 1;
    for (int a = 0; a < 63; a++) n = n * 2;
    printf("%llu\n", n);
}
This printed 2^63, i.e. 9223372036854775808 (I verified the value with Python). So why does a shift not produce the same result?
In other words, multiplying n by 2 works where shifting n (which I thought was the same operation) does not.
So the type can clearly hold 2^63 (as the loop above shows).
What am I missing here - can someone please explain what is going on?
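To make the comparison I have in mind concrete, this is the shift-based counterpart of s() that I would have expected to print the same value (a standalone sketch; the name s_shift is mine):

#include <cstdio>

// Shift-based counterpart of s(): instead of doubling 63 times in a loop,
// shift the starting value left by 63 places in a single expression.
void s_shift()
{
    unsigned long long n = 1 << 63;   // same situation as 1 << 32: "left shift count >= width of type" warning
    printf("%llu\n", n);              // I would expect 9223372036854775808 here
}

int main()
{
    s_shift();
    return 0;
}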
PS: I am running this on a 32-bit machine with Linux Mint (in case that matters).