Can I prevent integer overflow in C# by using an unsigned right shift?

I want alwaysPositive to be assigned a positive number for all possible values of largeValue1 and largeValue2 (they are at least 1).

The following statement causes a buffer overflow:

int alwaysPositive = (largeValue1 + largeValue2) / 2;

I know that I can prevent it by subtracting and adding:

int alwaysPositive = largeValue1 + ((largeValue2 - largeValue1) / 2);

But in other programming languages I can use an unsigned bit shift to do the trick:

int alwaysPositive3 = (largeValue1 + largeValue2) >>> 1;

How can I do this in C#?


The answers below all solve the problem. There are probably many ways to do this, but they all (including my own solutions) have one thing in common: they look confusing.

+5
6 answers
int alwaysPositive = (largeValue1 >> 1) + (largeValue2 >> 1) + (largeValue1 & largeValue2 & 0x01);

Shifting each value right by one halves it first, so the two halves can be added without ever forming the full sum. The only information lost is when both values are odd: each shift drops a half, so the result comes out one too low, and the final AND term adds that 1 back. If you would rather round up (take the ceiling) whenever either value is odd, add the bit back with OR instead:

int alwaysPositive = (largeValue1 >> 1) + (largeValue2 >> 1) + ((largeValue1 | largeValue2) & 0x01);
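
As a quick sanity check, here is a minimal sketch (my own, with arbitrarily chosen values near int.MaxValue) showing that both variants stay in range:

int largeValue1 = int.MaxValue;      // 2147483647
int largeValue2 = int.MaxValue - 1;  // 2147483646

// Rounds down: the lost bit is added back only when both values are odd.
int roundedDown = (largeValue1 >> 1) + (largeValue2 >> 1) + (largeValue1 & largeValue2 & 0x01);  // 2147483646

// Rounds up: the lost bit is added back whenever either value is odd.
int roundedUp = (largeValue1 >> 1) + (largeValue2 >> 1) + ((largeValue1 | largeValue2) & 0x01);  // 2147483647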
+3

Another option:

int x = largeValue1;
int y = largeValue2;
int alwaysPositive = (x & y) + ((x ^ y) / 2);

This relies on the identity x + y == ((x & y) << 1) + (x ^ y): the bits the two values share count twice in the sum, and the bits where they differ count once. So the average is (x & y) plus half of (x ^ y), and no intermediate value ever exceeds int.MaxValue.
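
Wrapped up as a helper method (the name Average is mine, not from the answer), a minimal sketch of the same trick:

// Floor of the average of two non-negative ints; the full sum is never formed.
static int Average(int x, int y)
{
    return (x & y) + ((x ^ y) / 2);
}

// Average(int.MaxValue, int.MaxValue - 1) == 2147483646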

+2

unchecked((largeValue1 + largeValue2) >> 1) only stops the overflow from being detected: the wrapped sum is negative, and >> on an int is an arithmetic shift that keeps the sign bit, so the result is still negative. To get the >>> behaviour you have to shift the sum as a uint.

See the unchecked keyword in the C# documentation.
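
If the goal is really Java's (a + b) >>> 1 on the wrapped sum, my understanding is that the shift has to go through uint, because that is what makes it unsigned; a sketch using the question's variables:

// Let the addition wrap, reinterpret the 32-bit pattern as unsigned, then shift.
int alwaysPositive = unchecked((int)((uint)(largeValue1 + largeValue2) >> 1));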

+2

You can use uints:

uint alwaysPositive = (uint)(largeValue1 + largeValue2) / 2;
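
That works as long as the expression is evaluated in an unchecked context (the C# default). If overflow checking is switched on, the addition and the cast need an explicit unchecked wrapper; a sketch:

uint alwaysPositive = unchecked((uint)(largeValue1 + largeValue2)) / 2;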
0

Not to nitpick, but you mean "integer overflow", not "buffer overflow".

I don't know C#, so there might be a nicer way, but you could simulate an unsigned shift by simply masking off the top bit after an ordinary shift: (x >> 1) & 0x7FFFFFFF
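
Combined with a wrap-around addition, that idea would look something like this in C# (my own combination of the pieces, so treat it as a sketch):

// The arithmetic shift differs from an unsigned shift only in the top bit,
// so clearing that bit after shifting the wrapped sum gives the right average.
int alwaysPositive = (unchecked(largeValue1 + largeValue2) >> 1) & 0x7FFFFFFF;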

0
int alwaysPositive3;
try
{
    checked { alwaysPositive3 = (largeValue1 + largeValue2); }
}
catch (OverflowException)
{
    // Corrective logic
}
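
One possible way to get the average and fill in the corrective branch (my own sketch, reusing the overflow-safe formula from the question):

int alwaysPositive3;
try
{
    checked { alwaysPositive3 = (largeValue1 + largeValue2) / 2; }
}
catch (OverflowException)
{
    // Fall back to the form that cannot overflow.
    alwaysPositive3 = largeValue1 + ((largeValue2 - largeValue1) / 2);
}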
0
