Why is the result of a 32-bit program different from a 64-bit one?

I was working on an assignment about representing an integer at the byte level, and I wrote a small program:

e1.c

#include <stdio.h>
int main(void) { printf("%d\n", -2147483648 < 2147483647); return 0; }

When I compiled a 32-bit version of the executable against the C89 standard with the command gcc e1.c -m32 -std=c89 -g -O0 -o e1 , it worked as I expected: it printed 0 , which suggested that the compiler treated the value 2147483648 as an unsigned int, so it converted the rest of the expression to unsigned as well. But strangely, this does not hold for the 64-bit version (built with -m64), which prints 1 .

Can anyone explain this?

1 answer

The C89 specification says:

The type of an integer constant is the first of the corresponding list in which its value can be represented. Unsuffixed decimal: int , long int , unsigned long int ; [...]

Thus, the type of the literal 2147483648 depends on the sizes of int , long and unsigned long . Suppose int is 32 bits, as it is on many platforms (and probably on both of yours); then 2147483648 does not fit in an int , so the choice comes down to long and unsigned long .

On a 32-bit platform, long typically has 32 bits as well, so 2147483648 does not fit in a long either, and the type of 2147483648 is unsigned long .

On a 64-bit platform, long typically has 64 bits (although some platforms, such as 64-bit Windows with MSVC, still use 32 bits for long ), so 2147483648 fits in a long , and its type is long .
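
If you want to see which type the constant actually gets on a given target, one quick check (my own sketch, not from the original answer, assuming a typical gcc setup where int is 32 bits) is to print its size and observe how unary minus behaves; compile it with -m32 and with -m64 and compare:

#include <stdio.h>

int main(void)
{
    /* sizeof shows whether the constant is 32 or 64 bits wide here */
    printf("sizeof (2147483648) = %lu\n", (unsigned long)sizeof (2147483648));
    /* if the constant's type is unsigned, negating it wraps around,
       so the result is still positive */
    printf("-2147483648 > 0 is %d\n", -2147483648 > 0);
    return 0;
}

On such a setup the 32-bit build prints a size of 4 and 1 on the second line, while the 64-bit build prints 8 and 0.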

This leads to the mismatch you see: in one case you negate an unsigned long , and in the other case you negate a long .

On the 32-bit platform, -2147483648 evaluates to 2147483648 , because unary minus applied to an unsigned long wraps around modulo 2^32. The comparison therefore becomes 2147483648 < 2147483647 , carried out in unsigned long (the int operand 2147483647 is converted to unsigned long ), which evaluates to 0 .

On the 64-bit platform, -2147483648 evaluates to -2147483648 , an ordinary negative long . The comparison therefore becomes -2147483648 < 2147483647 , carried out in long , which evaluates to 1 .
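
As a side note (my own addition, not part of the original answer): if the intent is to compare the minimum and maximum int values, writing the minimum as (-2147483647 - 1), or using INT_MIN from <limits.h>, keeps the operand a signed int on both targets, so the comparison prints 1 everywhere. A minimal sketch:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* 2147483647 fits in int, so negating it and subtracting 1 stays
       a signed int on both 32-bit and 64-bit targets */
    printf("%d\n", (-2147483647 - 1) < 2147483647);
    /* INT_MIN from <limits.h> expands to an equivalent expression */
    printf("%d\n", INT_MIN < INT_MAX);
    return 0;
}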
