Given the following snippet:
    #include <stdio.h>

    typedef signed long long int64;
    typedef signed int       int32;
    typedef signed char      int8;

    int main()
    {
        printf("%i\n", sizeof(int8));
        printf("%i\n", sizeof(int32));
        printf("%i\n", sizeof(int64));

        int8 a = 100;
        int8 b = 100;
        int32 c = a * b;
        printf("%i\n", c);

        int32 d = 1000000000;
        int32 e = 1000000000;
        int64 f = d * e;
        printf("%I64d\n", f);
    }
The output with MinGW GCC 3.4.5 at -O0 is:

    1
    4
    8
    10000
    -1486618624
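For reference, the last value is consistent with the product being truncated to 32 bits: 1000000000 * 1000000000 = 10^18, and 10^18 mod 2^32 = 2808348672, which reinterpreted as a signed 32-bit value is 2808348672 - 2^32 = -1486618624.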
The first multiplication is carried out internally in int32 (according to the assembler output). The second multiplication, however, is apparently not performed in int64 but also in int32, so it overflows. I'm not sure whether the results come out this way because the program was run on IA-32, or because it is defined somewhere in the C standard. In any case, I would like to know whether this exact behavior is specified somewhere (ISO/IEC 9899?), because I want to understand better why and when I have to cast manually (I am having trouble porting a program from another architecture).
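By "casting manually" I mean something like the following minimal sketch (it reuses the typedefs and the MinGW-specific %I64d specifier from the snippet above); widening one operand before the multiplication makes the product come out as I expect:

    int32 d = 1000000000;
    int32 e = 1000000000;
    int64 f = (int64)d * e;   /* d is widened first, so the multiplication is done in 64 bits */
    printf("%I64d\n", f);     /* prints 1000000000000000000 */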
Tags: c, multiplication, long-long
azraiyl