Long long vs int multiplication

Given the following snippet:

    #include <stdio.h>

    typedef signed long long int64;
    typedef signed int int32;
    typedef signed char int8;

    int main() {
        printf("%i\n", sizeof(int8));
        printf("%i\n", sizeof(int32));
        printf("%i\n", sizeof(int64));

        int8 a = 100;
        int8 b = 100;
        int32 c = a * b;
        printf("%i\n", c);

        int32 d = 1000000000;
        int32 e = 1000000000;
        int64 f = d * e;
        printf("%I64d\n", f);
    }

The output with MinGW GCC 3.4.5 is (-O0):

    1
    4
    8
    10000
    -1486618624

The first multiplication is performed in int32 (confirmed by the assembler output). The second multiplication is not widened to int64. I'm not sure whether the results differ because the program was run on IA-32, or because this is defined somewhere in the C standard. Either way, I'd like to know whether this exact behavior is specified somewhere (ISO/IEC 9899?), because I want to understand better why and when I have to cast manually (I'm having trouble porting a program from another architecture).

+7
c multiplication long-long
4 answers

The C99 standard states that binary operators such as * do not operate on integer types smaller than int. Operands of these types are promoted to int before the operator is applied. See section 6.3.1.1 paragraph 2 and the many occurrences of the words "integer promotions". This is somewhat orthogonal to whether the compiler-generated assembly instructions operate on int: they often do anyway, because that is faster, even in cases where the compiler would be allowed to compute a narrower result (for instance because the result is immediately stored into an l-value of a narrow type).

Regarding int64 f = d * e;, where d and e are of type int: the multiplication is performed as int according to the same promotion rules. The overflow is technically undefined behavior; you happen to get the two's-complement wraparound result, but according to the standard you could get anything.

Note: the promotion rules distinguish between signed and unsigned types. The rule is to promote smaller types to int, unless int cannot represent all the values of the type, in which case they promote to unsigned int.

+7

The problem is that the multiplication is int32 * int32, which is carried out as int32, and only the result is converted to int64 for the assignment. You get the same effect with double d = 3 / 2;, which divides 3 by 2 using integer division and assigns 1.0 to d.

You have to pay attention to the type of an expression or subexpression wherever it matters, and make sure the relevant operation is computed in the appropriate type: for example, cast one of the factors to int64, or (in my example) write 3.0 / 2 or (float)3 / 2 instead of 3 / 2.

+5

a * b is evaluated as int, and the result is then converted to the type of the receiving variable (which here is int32, i.e. int itself).

d * e is also evaluated as int (overflowing in the process), and only the result is converted to the type of the receiving variable (int64).

If either operand had a type larger than int (or a floating-point type), that type would be used. But since every operand in these multiplications was int or smaller, the arithmetic was done in int.

+2

Read K&R (the original). All integer arithmetic is performed in the natural integer type unless it involves operands that are (or are cast to) something larger. Operations on char values are done in 32 bits because that is the natural integer size on this architecture. Multiplying two 32-bit integers is done in 32 bits because nothing casts either operand to anything larger (assigning the result to a 64-bit variable comes too late). If you want the operation performed in 64 bits, cast one or both operands to a 64-bit type:

 int64 f = (int64)d * e; 
+2