Strange result after assigning 2^31 to signed and unsigned 32-bit integer variables

As the title says, assigning 2^31 to a signed and an unsigned integer variable gives an unexpected result.

Here is a short C++ program I wrote to see what happens:

#include <cstdio>
using namespace std;

int main()
{
    unsigned long long n = 1<<31;
    long long n2 = 1<<31;  // this works as expected
    printf("%llu\n",n);
    printf("%lld\n",n2);
    printf("size of ULL: %d, size of LL: %d\n", sizeof(unsigned long long), sizeof(long long) );
    return 0;
}

Here's the output:

MyPC / # c++ test.cpp -o test
MyPC / # ./test
18446744071562067968      <- Should be 2^31 right?
-2147483648               <- This is correct ( -2^31 because of the sign bit)
size of ULL: 8, size of LL: 8

Then I added another function p() to it:

void p()
{
  unsigned long long n = 1<<32;  // since n is 8 bytes, this should be legal for any integer from 32 to 63
  printf("%llu\n",n);
}

Compiling and running it confused me even more:

MyPC / # c++ test.cpp -o test
test.cpp: In function ‘void p()’:
test.cpp:6:28: warning: left shift count >= width of type [enabled by default]
MyPC / # ./test 
0
MyPC /

Why does the compiler complain that the left shift count is too large? sizeof(unsigned long long) returns 8, so doesn't that mean 2^63 - 1 is the maximum value for this data type?

It occurred to me that perhaps n * 2 and n << 1 do not always behave the same way, so I tried this:

void s()
{
   unsigned long long n = 1;
   for(int a=0;a<63;a++) n = n*2;
   printf("%llu\n",n);
}

This produced the correct result for 2^63, namely 9223372036854775808 (I verified the value with Python). So what is going on?

Why do n * 2 and n << 1 (shifting n left by one) behave differently here?

-

Repeated multiplication by 2, after all, happily reaches 2^63 (no warning, correct value).

If this is not a compiler bug, then what exactly is happening?

PS: I am running this on a 32-bit machine with Linux Mint (in case that matters)

+5
3 answers

The problem is in this line:

unsigned long long n = 1<<32;

Here the literal 1 has type int, which on your platform is only 32 bits wide. Shifting a value left by the full width of its type (or more) is undefined behavior, which is why the compiler warns.

Note that the shift is evaluated in the type of its left operand; the type of the variable being initialized plays no role.

To fix it, make the left operand an unsigned long long before the shift:

unsigned long long n = (unsigned long long)1 << 32;
unsigned long long n = 1ULL << 32;
+10

In 1 << 32, the shift is applied to the literal 1, which has type int (not to the unsigned long long being initialized). Since the result of 1 << 32 does not fit in an int, the behavior is undefined, hence the warning.

Write 1LL or 1ULL instead, so that the shift is performed on a long long or an unsigned long long respectively.

+5

unsigned long long n = 1<<32;

Here 1 has type int, so 1 << 32 is computed as an int, and an int cannot be shifted by 32 bits.

unsigned long long n = 1<<31;

also overflows, for the same reason. Note that 1 is of type signed int, so it really has only 31 bits for the value and 1 bit for the sign. Therefore, when you compute 1 << 31, the shifted bit lands in the sign bit; the result is -2147483648, which is then converted to unsigned long long, giving 18446744071562067968. You can verify this in a debugger by inspecting the variables and their conversions.

Therefore, use

unsigned long long n = 1ULL << 31;
+3
