Before evaluating most arithmetic operators, the compiler applies a set of conversions known as the usual arithmetic conversions. For integer operands, you can think of them as a few rules:
Firstly, integer arithmetic is never performed on operands "smaller than" int . Therefore, in the case of short * signed char , both the short and the signed char operand are promoted to int , the two int values are multiplied, and the result is an int .
Secondly, if one or both of the types is "larger than" int , the compiler selects a type that is at least "as large" as the type of the largest operand. So, if you have long * int , the int operand is converted to long , and the result is long .
Thirdly, if either operand is unsigned , the result will be unsigned. Thus, if you have long * unsigned int , both operands are converted to unsigned long , and the result is unsigned long .
If one of the operands is of a floating-point type, floating-point arithmetic is performed in float , double , or long double (which one depends on the operand types; the full table used to determine the result type can be found on the page linked at the beginning of this answer).
Note that the type of the result does not depend on the values of the operands. The type must be selected by the compiler at compile time, before the values are known.
If the result of s * i * i falls outside the range of the result type ( int , in your case), you are out of luck: your program cannot decide at run time, "Oh, I need to switch to using long !", because the result type had to be selected at compile time.
James McNellis