I would use the properties of two's complement to calculate the values.
    unsigned int uint_max = ~0U;
    signed int int_max = uint_max >> 1;
    signed int int_min1 = (-int_max - 1);
    signed int int_min2 = ~int_max;
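As a sanity check, here is a minimal, self-contained sketch of the same four expressions. The printf calls and the comparison against the <limits.h> macros are my own additions, and it assumes the usual two's-complement int:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int uint_max = ~0U;            /* 111...111 */
        signed int int_max = uint_max >> 1;     /* 011...111 */
        signed int int_min1 = (-int_max - 1);   /* 100...000 */
        signed int int_min2 = ~int_max;         /* 100...000 */

        printf("uint_max = %u (UINT_MAX = %u)\n", uint_max, UINT_MAX);
        printf("int_max  = %d (INT_MAX  = %d)\n", int_max, INT_MAX);
        printf("int_min1 = %d (INT_MIN  = %d)\n", int_min1, INT_MIN);
        printf("int_min2 = %d\n", int_min2);
        return 0;
    }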
2^3 is 1000, 2^3 - 1 is 0111, and 2^4 - 1 is 1111.
w is the bit width of your data type.
uint_max is 2^w - 1, or 111...111. This effect is achieved with ~0U.
int_max is 2^(w-1) - 1, or 011...111. This effect can be achieved by shifting the uint_max bits one position to the right. Because uint_max is an unsigned value, the >> operator performs a logical shift, that is, it shifts in leading zeros instead of replicating the sign bit.
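This is also why the shift has to be done on the unsigned value: right-shifting a negative signed value is implementation-defined in C, and most compilers perform an arithmetic shift that copies the sign bit. A small sketch of my own, assuming a typical two's-complement compiler:

    #include <stdio.h>

    int main(void)
    {
        unsigned int u = ~0U;   /* 111...111 */
        signed int   s = -1;    /* same bit pattern in two's complement */

        /* Logical shift: a zero comes in from the left -> 011...111 (int_max) */
        printf("u >> 1 = %u\n", u >> 1);

        /* Implementation-defined: most compilers sign-extend, giving -1 again */
        printf("s >> 1 = %d\n", s >> 1);
        return 0;
    }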
int_min is -2^(w-1), or 100...000. In two's complement, the most significant bit has a negative weight!
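To make that negative weight concrete, here is a small illustration of my own (not part of the original expressions) that rebuilds a signed 8-bit value from its bits, giving bit 7 a weight of -2^7 and every other bit i a weight of +2^i:

    #include <stdint.h>
    #include <stdio.h>

    /* Reconstruct a signed 8-bit value from its bits using two's-complement weights. */
    static int from_bits(uint8_t bits)
    {
        int value = 0;
        for (int i = 0; i < 7; i++)          /* bits 0..6 weigh +2^i */
            value += ((bits >> i) & 1) << i;
        value -= ((bits >> 7) & 1) << 7;     /* bit 7 weighs -2^7    */
        return value;
    }

    int main(void)
    {
        printf("%d\n", from_bits(0x7F));  /* 011...111 ->  127 (8-bit int_max) */
        printf("%d\n", from_bits(0x80));  /* 100...000 -> -128 (8-bit int_min) */
        printf("%d\n", from_bits(0x81));  /* 100...001 -> -127                 */
        return 0;
    }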
Here's how to visualize the first expression, (-int_max - 1), used to evaluate int_min1:
    ...
    011...111   int_max           +2^(w-1) - 1
    100...000   (-int_max - 1)    -2^(w-1)        ==  -2^(w-1) + 1 - 1
    100...001   -int_max          -2^(w-1) + 1    ==  -(+2^(w-1) - 1)
    ...
Adding 1 moves you down the chart, and subtracting 1 moves you up. First we negate int_max, which still yields a valid int value, then we subtract 1 to get int_min. We cannot simply negate (int_max + 1), because int_max + 1 already exceeds int_max, the largest value an int can hold.
Depending on which version of C or C++ you are using, the expression -(int_max + 1) will either become a 64-bit signed integer, preserving the signedness but sacrificing the original bit width, or it will become a 32-bit unsigned integer, preserving the original bit width but sacrificing the signedness. We need to construct int_min programmatically, as above, to preserve both the width and the signedness of int.
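You can see the same effect with integer literals. The following sketch is my own and assumes a typical platform with 32-bit int and 64-bit long/long long: the constant 2147483648 does not fit in an int, so the compiler gives it a wider type before the unary minus is applied.

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* INT_MIN itself is an int: 4 bytes on this assumed platform. */
        printf("sizeof(INT_MIN)     = %zu\n", sizeof(INT_MIN));

        /* 2147483648 overflows int, so it becomes a 64-bit signed type
           (under C90 rules it could instead become a 32-bit unsigned int). */
        printf("sizeof(-2147483648) = %zu\n", sizeof(-2147483648));
        return 0;
    }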
If this bit of bit twiddling is too complicated for you, you can simply use ~int_max, noting that int_max is 011...111 and int_min is 100...000.
Keep in mind that the methods I've shown here work for any bit width w of an integer data type. They work for char, short, int, long, and long long. Just remember that integer literals are almost always 32-bit by default, so you may have to cast the 0U to a data type of the appropriate bit width before complementing it, not after (see the sketch below). Beyond that, these methods rest on fundamental mathematical principles of two's-complement integer representation; they will not work if your computer represents integers some other way, for example with ones' complement or sign-and-magnitude.
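Here is a sketch of my own showing how that plays out for other widths, checked against the <limits.h> macros: for long long the 0U has to be widened before the complement, while for short the narrowing cast simply truncates the complemented value down to the right width.

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Cast 0U up to 64 bits *before* complementing; ~0U cast afterwards
           would only have the low 32 bits set. */
        unsigned long long ull_max = ~(unsigned long long)0U;
        long long ll_max = ull_max >> 1;
        long long ll_min = -ll_max - 1;

        /* For a narrower type, the complement happens at int width and the
           extra high bits are discarded by the cast. */
        unsigned short us_max = (unsigned short)~0U;
        short s_max = us_max >> 1;
        short s_min = -s_max - 1;

        printf("ll_max = %lld  (LLONG_MAX = %lld)\n", ll_max, LLONG_MAX);
        printf("ll_min = %lld  (LLONG_MIN = %lld)\n", ll_min, LLONG_MIN);
        printf("s_max  = %d  (SHRT_MAX = %d)\n", s_max, SHRT_MAX);
        printf("s_min  = %d  (SHRT_MIN = %d)\n", s_min, SHRT_MIN);
        return 0;
    }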