Why is this program not overflowing?

Trying to learn how C# handles overflows, I wrote this simple code:

    static uint diff(int a, int b)
    {
        return (uint)(b - a);
    }

    static void Main(string[] args)
    {
        Console.Out.WriteLine(int.MaxValue);
        uint l = diff(int.MinValue, int.MaxValue);
        Console.Out.WriteLine(l);
        Console.In.ReadLine();
    }

I get this output:

    2147483647
    4294967295

I find it surprising that it works so well, since the subtraction in diff should produce a result greater than int.MaxValue .

However, if I write this, which seems to be equivalent to the code above:

 uint l = (uint)(int.MaxValue - int.MinValue); 

C# doesn't even compile it, because the code may overflow.

Why does the first piece of code run without overflow, while the compiler won't even compile the second line?

1 answer

When using constant values:

 uint l = (uint)(int.MaxValue - int.MinValue); 

The compiler knows exactly what you are trying to do: it knows the values at compile time, sees that the result of the subtraction cannot fit in an int , and so it gives you an error.

When you use variables:

 return (uint)(b - a); 

The compiler has no idea at compile time what the values of the variables will be, so it does not complain.
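
To see the same rule from the other side, here is a small sketch (assuming the project's default unchecked settings): wrapping the constant expression in unchecked() tells the compiler to allow the wrap-around, so the constant version compiles and produces the same value as the variable version does at run time.

    using System;

    class Program
    {
        static void Main()
        {
            // Constant operands: compiles only because of unchecked().
            uint fromConstants = unchecked((uint)(int.MaxValue - int.MinValue));

            // Variable operands: compiles without unchecked() and wraps at run time.
            int a = int.MinValue, b = int.MaxValue;
            uint fromVariables = (uint)(b - a);

            Console.WriteLine(fromConstants); // 4294967295
            Console.WriteLine(fromVariables); // 4294967295
        }
    }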

Note that the overflow is in int , not uint , as you indicated in your now-deleted answer. You might think you are subtracting a large value from a small one, but that is not the case: int.MinValue is negative (-2147483648), so subtracting it means you actually add it (2147483647 - (-2147483648)), and the result (4294967295) cannot fit in an int , although it can fit in a uint . So, for example, this will compile and give the correct result, 4294967295:

 uint x = (uint)((long)int.MaxValue - int.MinValue); 

Because now you are telling the compiler to store the result of the subtraction in a long instead of an int , and that works. Print x to the console and note that the result of the subtraction is 4294967295, not -1 as you state in your answer. If it really were -1, as you said, then the code below would compile, but it does not, because 4294967295 overflows int :

 int x = int.MaxValue - int.MinValue; 
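
As a side check (a small sketch, silencing the compiler with unchecked() ), forcing that line to compile shows the wrapped value rather than the mathematical one, which again confirms that 4294967295 does not fit in an int :

    int x = unchecked(int.MaxValue - int.MinValue);
    Console.WriteLine(x); // -1: the wrapped value, not 4294967295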

Edit: More people are trying to understand the result, so here are some further explanations that I hope will help.

First of all, we all know that int.MaxValue is 2147483647 and int.MinValue is -2147483648. I hope we can all agree on this simple math without proving it in a program:

 2147483647 - (-2147483648) = 4294967295 

So we must all agree that the mathematical result is 4294967295, not -1. Anyone who disagrees with that should go back to school.

So, why is the result in the program sometimes -1, which confuses so many people?

OK, we all agree that an overflow occurs, so that is not the issue. What some people do not understand is where the overflow occurs. It does not happen at the cast to uint . Of course -1 would overflow a uint , but the program overflows int in the step before the cast to uint . When an overflow occurs, the behavior of the program depends on the execution context (checked or unchecked). In a checked context an OverflowException is thrown and nothing after it executes, so the cast to uint is never performed. In an unchecked context the most significant bits of the result are discarded and execution continues, so the cast to uint is performed and another wrap-around occurs. Here's an MSDN article on how integer overflows behave.
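
A minimal sketch of the two contexts (assuming a Main method with using System; , like the other snippets), using variables so the compiler cannot pre-compute anything:

    int a = int.MinValue, b = int.MaxValue;

    // Unchecked (the default for run-time arithmetic): the int subtraction
    // wraps to -1, and the cast to uint then turns -1 into 4294967295.
    Console.WriteLine(unchecked((uint)(b - a))); // 4294967295

    // Checked: the int subtraction throws before the cast to uint is reached.
    try
    {
        Console.WriteLine(checked((uint)(b - a)));
    }
    catch (OverflowException e)
    {
        Console.WriteLine(e.Message); // e.g. "Arithmetic operation resulted in an overflow."
    }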

So, let's see how we get -1:

First, in C#, when you subtract two ints, the result is an int. If the result cannot fit in an int, an overflow occurs. The tricky part is that in an unchecked context the most significant bits of the result are discarded, as mentioned above. In the question's scenario, this results in -1. Here are a few examples that I hope will make this clear:

    Console.WriteLine(unchecked(int.MaxValue));              //Result 2147483647
    Console.WriteLine(unchecked(int.MinValue));              //Result -2147483648
    Console.WriteLine(unchecked(int.MaxValue-int.MinValue)); //Result -1 overflow
    Console.WriteLine(unchecked(2147483647-(-2147483648)));  //Same as above
    Console.WriteLine(unchecked(int.MaxValue+int.MinValue)); //Result -1 no overflow
    Console.WriteLine(unchecked(2147483647+(-2147483648)));  //Same as above
    Console.WriteLine(unchecked(int.MaxValue+1));            //Result -2147483648 overflow
    Console.WriteLine(unchecked(2147483647+1));              //Same as above
    Console.WriteLine(unchecked(int.MaxValue-int.MaxValue)); //Result 0
    Console.WriteLine(unchecked(2147483647-2147483647));     //Same as above
    Console.WriteLine(unchecked(int.MaxValue+int.MaxValue)); //Result -2 overflow
    Console.WriteLine(unchecked(2147483647+2147483647));     //Same as above

The results of these examples should be clear. I do not do any casting here, to avoid arguments about where the overflow occurs, so it is clear that it happens in the int arithmetic. Every time an overflow occurs, the value behaves as if it had wrapped around to int.MinValue , which is -2147483648, and the remaining amount were then added or subtracted from there.
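
Here is one more way to see what "the most significant bits are discarded" means (a sketch, assuming the same console setup as the other snippets): the wrapped int result is simply the low 32 bits of the full 64-bit result.

    long full = (long)int.MaxValue - int.MinValue; // 4294967295 = 0x00000000FFFFFFFF
    int wrapped = unchecked((int)full);            // -1         = 0xFFFFFFFF (low 32 bits only)
    Console.WriteLine(full);    // 4294967295
    Console.WriteLine(wrapped); // -1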

If you cast the first number to long , then the result will be a long . Now the overflow does not happen, and you get the same results as in plain math:

    Console.WriteLine((long)int.MaxValue);              //Result 2147483647
    Console.WriteLine((long)int.MinValue);              //Result -2147483648
    Console.WriteLine((long)int.MaxValue-int.MinValue); //Result 4294967295
    Console.WriteLine((long)2147483647-(-2147483648));  //Same as above
    Console.WriteLine((long)int.MaxValue+int.MinValue); //Result -1
    Console.WriteLine((long)2147483647+(-2147483648));  //Same as above
    Console.WriteLine((long)int.MaxValue+1);            //Result 2147483648
    Console.WriteLine((long)2147483647+1);              //Same as above
    Console.WriteLine((long)int.MaxValue-int.MaxValue); //Result 0
    Console.WriteLine((long)2147483647-2147483647);     //Same as above
    Console.WriteLine((long)int.MaxValue+int.MaxValue); //Result 4294967294
    Console.WriteLine((long)2147483647+2147483647);     //Same as above
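
One detail worth noting (a small sketch with variables, so nothing is evaluated at compile time): the cast must be applied to an operand, not to the already-finished int subtraction, otherwise the wrap-around has already happened by the time the conversion to long runs.

    int a = int.MinValue, b = int.MaxValue;

    long promoted = (long)b - a;   // 4294967295: the subtraction itself is done in long
    long tooLate  = (long)(b - a); // -1: the subtraction already wrapped in int
    Console.WriteLine(promoted);
    Console.WriteLine(tooLate);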

And here is proof without using addition or subtraction at all. Simply casting a value above int.MaxValue to int causes an overflow, which an unchecked conversion wraps around to int.MinValue . Whatever amount the value exceeds int.MaxValue + 1 by is added to int.MinValue :

    Console.WriteLine(unchecked((int)2147483647)); //Result 2147483647
    Console.WriteLine(unchecked((int)2147483648)); //Result -2147483648 overflow
    Console.WriteLine(unchecked((int)2147483649)); //Result -2147483647 overflow
    Console.WriteLine(unchecked((int)2147483650)); //Result -2147483646 overflow
    Console.WriteLine(unchecked((int)2147483651)); //Result -2147483645 overflow

The exact opposite happens when you overflow an int with values below int.MinValue :

    Console.WriteLine(unchecked((int)-2147483648)); //Result -2147483648
    Console.WriteLine(unchecked((int)-2147483649)); //Result 2147483647 overflow
    Console.WriteLine(unchecked((int)-2147483650)); //Result 2147483646 overflow
    Console.WriteLine(unchecked((int)-2147483651)); //Result 2147483645 overflow

This means that an int works like an endlessly spinning counter: the two ends are glued together, so when you pass one end you roll over to the other and keep counting, like 1, 2, 3, 1, 2, 3, 1, 2, 3.
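
A last sketch of that counter analogy, showing both glued ends (again in an unchecked context):

    Console.WriteLine(unchecked(int.MaxValue + 1)); // -2147483648: rolls over to the bottom
    Console.WriteLine(unchecked(int.MinValue - 1)); //  2147483647: rolls over to the top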
