C#, like almost every other computer on the planet, represents integers in two's complement notation. I believe that at some point processors were designed that used a different representation, but these days you can pretty much depend on integers being represented in two's complement notation.
We count the bits from right to left: the rightmost bit, bit 0, is the least significant bit, and the leftmost bit is the most significant.
The most significant (leftmost) bit is the sign bit: 0 means positive; 1 means negative.
The remaining bits carry the value. This means that the actual domain of an N-bit signed integer is -(2^(N-1)) <= x <= +(2^(N-1) - 1). Note that you can represent one more negative value than positive values: for a 16-bit signed integer, the domain runs from -32,768 to +32,767.
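A minimal C# sketch of that range (class and variable names are my own, for illustration), printing the 16-bit signed bounds and the bit pattern of a negative value:

```csharp
using System;

class SignedRange
{
    static void Main()
    {
        // The signed 16-bit domain described above: -(2^15) to +(2^15 - 1).
        Console.WriteLine(short.MinValue); // -32768
        Console.WriteLine(short.MaxValue); //  32767

        // Bit 15 (the leftmost bit) is the sign bit; for a negative value it is 1.
        short negative = -32768;
        Console.WriteLine(Convert.ToString(negative, 2)); // 1000000000000000
    }
}
```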
Converting a number to two's complement is easy:
- Convert its absolute value to binary/base 2 notation.
- If the value is negative, flip the bits and add 1.
So, a value of +1 is represented as 0x0001, and -1 is represented as
- 0x0001 (absolute value 1 in binary)
- 0xFFFE (inverted bits)
- 0xFFFF (add 1)
Or 0xFFFF
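Here is a small sketch of those steps in C#, using ushort so the raw 16-bit pattern stays visible (the names are my own):

```csharp
using System;

class NegateByTwosComplement
{
    static void Main()
    {
        ushort magnitude = 0x0001;                         // absolute value of 1 in binary
        ushort inverted  = unchecked((ushort)~magnitude);  // flip the bits -> 0xFFFE
        ushort result    = (ushort)(inverted + 1);         // add 1         -> 0xFFFF

        Console.WriteLine("0x{0:X4}", result);             // 0xFFFF

        // The same bit pattern is what C# actually stores for (short)-1:
        Console.WriteLine("0x{0:X4}", unchecked((ushort)(short)-1)); // 0xFFFF
    }
}
```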
The reason for two's complement notation is that it simplifies processor design: since subtraction is just the addition of a negative value (for example, 3 - 2 is the same as 3 + -2), the designers do not need separate subtraction circuitry:
1 - 1 is the same as 1 + -1 and evaluates to zero.
Or, in hexadecimal:
  0x0001 (decimal +1)
+ 0xFFFF (decimal -1)
======
  0x0000 (decimal  0)
On most processors, a carry into or out of the high-order bit sets a fixed-point overflow flag.
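A sketch of the same addition in C# (again with invented names; C# does not expose the CPU's overflow flag directly, but checked and unchecked contexts are the closest language-level analogue, which is my comparison rather than part of the answer above):

```csharp
using System;

class AddByTwosComplement
{
    static void Main()
    {
        ushort plusOne  = 0x0001; // decimal +1
        ushort minusOne = 0xFFFF; // decimal -1 in two's complement

        // The carry out of bit 15 is discarded, leaving 0x0000 (decimal 0).
        ushort sum = unchecked((ushort)(plusOne + minusOne));
        Console.WriteLine("0x{0:X4}", sum); // 0x0000

        // In a checked context the lost carry surfaces as an OverflowException instead.
        try
        {
            _ = checked((ushort)(plusOne + minusOne));
        }
        catch (OverflowException)
        {
            Console.WriteLine("checked conversion reported the overflow");
        }
    }
}
```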
Nicholas Carey