How does C# represent negative integers in memory, and how do unchecked casts treat them?

C# has different value types, and each of them performs its own task. Int32 ranges from -(0x7FFFFFFF + 1) to 0x7FFFFFFF, and on every machine I have ever run, unchecked((int)0xFFFFFFFF) has always produced a resulting value of -1. Is it always like that? Also, does .NET always represent -1 as 0xFFFFFFFF in memory on any system? Is the leading bit always a sign bit? Is two's complement binary representation always used for integers?

+8
c# integer value-type
2 answers

The documentation for System.Int32 explicitly states that it is stored in two's complement form. This is at the very bottom:

In addition to working with individual integers as decimal values, you may want to perform bitwise operations with integer values, or work with the binary or hexadecimal representations of integer values. Int32 values are represented in 31 bits, with the thirty-second bit used as a sign bit. Positive values are represented by using sign-and-magnitude representation. Negative values are in two's complement representation. This is important to keep in mind when you perform bitwise operations on Int32 values or when you work with individual bits. In order to perform a numeric, Boolean, or comparison operation on any two non-decimal values, both values must use the same representation.

So, it seems that the answer to all your questions is yes.

Also, the range for Int32 is from unchecked((int)0x80000000) (-2,147,483,648) to 0x7FFFFFFF (2,147,483,647).

+8

C# - like almost every other computer on the planet - represents integers in two's complement notation. I believe that at some point processors were designed that used a different representation (one's complement, sign-and-magnitude), but these days you can pretty much depend on integers being represented in two's complement. It works like this:

  • We count the bits from right to left: the rightmost bit, bit 0, is the least significant bit, and the leftmost bit is the most significant.

  • The most significant (leftmost) bit is the sign bit: 0 means positive; 1 means negative.

  • The remaining bits carry the magnitude. This means that the actual domain of a signed integer of N bits is -(2^(N-1)) <= x <= +(2^(N-1) - 1). Note that one more negative number can be represented than positive: for a 16-bit signed integer, the domain is from -32,768 to +32,767.

Converting a number to two's complement is easy:

  • convert its absolute value to binary/base-2 notation
  • if the value is negative:
    • invert the bits
    • add 1

So, a value of +1 is represented as 0x0001, and -1 is represented as

  • 0x0001 (absolute value 1 in binary)
  • 0xFFFE (inverted bits)
  • 0xFFFF (add 1)

Or 0xFFFF

The reason for two's complement notation is that it simplifies processor design: since subtraction is just addition of a negative (for example, 3 - 2 is the same as 3 + (-2)), designers do not need to build separate subtraction circuits:

  • 1-1 is the same as 1 + -1, and evaluates to zero.

  • or, in hexadecimal:

        0x0001   (decimal +1)
      + 0xFFFF   (decimal -1)
      ========
        0x0000   (decimal  0)

On most processors, a carry out of (or borrow into) the high-order bit sets a fixed-point overflow flag.

+2
