C#: Why is 0xFFFFFFFF a uint when it represents -1?

I don't understand why C# treats the literal 0xFFFFFFFF as a uint when it also represents -1 as an int.

Below is the code that was entered into the Immediate window, shown here with the output:

int i = -1; 

-1

 string s = i.ToString("x"); 

"FFFFFFFF"

 int j = Convert.ToInt32(s, 16); 

-1

 int k = 0xFFFFFFFF; 

Cannot implicitly convert type 'uint' to 'int'. An explicit conversion exists (are you missing a cast?)

 int l = Convert.ToInt32(0xFFFFFFFF); 

OverflowException was unhandled: Value was either too large or too small for an Int32.

Why can the hexadecimal string be converted without problems, while the literal can only be converted with an unchecked cast?
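For context, here is a minimal, self-contained reproduction of the same steps (my own sketch, not from the original post); the string round-trip works because Convert.ToInt32 with base 16 treats a hex string with the high bit set as a two's complement negative number:

    using System;

    class Repro
    {
        static void Main()
        {
            int i = -1;
            string s = i.ToString("x");      // "ffffffff"
            int j = Convert.ToInt32(s, 16);  // -1: the hex string is read as a two's complement bit pattern
            Console.WriteLine($"{s} -> {j}");

            // int k = 0xFFFFFFFF;                  // compile error: the literal is a uint,
            //                                      // and there is no implicit uint -> int conversion
            // int l = Convert.ToInt32(0xFFFFFFFF); // OverflowException: 4294967295 > int.MaxValue
        }
    }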

+6
4 answers

Why is 0xFFFFFFFF a uint when it represents -1?

Because you are not writing a bit pattern when you write

 i = 0xFFFFFFFF; 

you are writing a number according to C#'s rules for integer literals. With C# integer literals, to write a negative number we write a -, followed by the magnitude of the number (e.g. -1), not the bit pattern of what we want. It's really a good thing we don't have to write bit patterns; it would make writing negative numbers very awkward. When I want -3, I don't want to have to write 0xFFFFFFFD. :-) And I really don't want the number of leading Fs to change with the size of the type (0xFFFFFFFFFFFFFFFD for a long -3).

The rule for choosing the literal's type is covered by the link above, which says:

If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.

0xFFFFFFFF does not fit in an int, whose maximum positive value is 0x7FFFFFFF, so the next type in the list that it does fit in is uint.
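As a quick illustration of that rule (my own snippet, not part of the answer), you can let the compiler pick the type and then inspect it:

    using System;

    class LiteralTypes
    {
        static void Main()
        {
            var a = 0x7FFFFFFF;          // fits in int      -> System.Int32
            var b = 0xFFFFFFFF;          // too big for int  -> System.UInt32
            var c = 0x1FFFFFFFF;         // too big for uint -> System.Int64
            var d = 0xFFFFFFFFFFFFFFFF;  // too big for long -> System.UInt64

            Console.WriteLine(a.GetType());
            Console.WriteLine(b.GetType());
            Console.WriteLine(c.GetType());
            Console.WriteLine(d.GetType());
        }
    }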

+7

0xffffffff is 4294967295, a UInt32, which simply happens to have the same bit pattern as the Int32 -1 because of how negative numbers are represented on computers (two's complement). Just because they share a bit pattern does not mean 4294967295 = -1. They are completely different numbers, so of course you cannot trivially convert between them. You can force a reinterpretation of the bit pattern by casting explicitly to int inside an unchecked context: unchecked((int)0xffffffff).
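A short sketch of the point (mine, assuming the usual .NET two's complement representation): the two values share 32 bits but are different numbers, and the reinterpretation has to be asked for explicitly:

    using System;

    class BitPattern
    {
        static void Main()
        {
            uint u = 0xFFFFFFFF;          // 4294967295
            int  i = unchecked((int)u);   // -1: same 32 bits, reinterpreted as signed

            Console.WriteLine(u);                // 4294967295
            Console.WriteLine(i);                // -1
            Console.WriteLine(u.ToString("x"));  // ffffffff
            Console.WriteLine(i.ToString("x"));  // ffffffff -- identical bit pattern
        }
    }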

+4

The C# language rules state that 0xFFFFFFFF is an unsigned literal.

A C# signed int uses two's complement representation. In that scheme, 0xFFFFFFFF represents -1. (Two's complement is a clever scheme because it has no negative zero.)

For an unsigned int, 0xFFFFFFFF is the largest value it can hold, and because of its magnitude it cannot be converted to a signed int.
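A small illustration (my own, not from the answer) of how two's complement produces those bit patterns:

    using System;

    class TwosComplement
    {
        static void Main()
        {
            // In two's complement, -n is the bitwise complement of n, plus one.
            int minusOne   = ~1 + 1;   // -1
            int minusThree = ~3 + 1;   // -3

            Console.WriteLine(minusOne.ToString("x"));    // ffffffff
            Console.WriteLine(minusThree.ToString("x"));  // fffffffd

            // Reinterpreted as unsigned, -1 becomes the largest 32-bit value.
            Console.WriteLine(unchecked((uint)-1));       // 4294967295
        }
    }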

0

The C# docs say that the compiler will try to fit the number you provide into the smallest type that can hold it. That document is a bit dated, but it still applies. The compiler always assumes the number is positive.

As a fallback, you can always cast or use a suffix to force the type.
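For example (my own sketch, using standard C# literal suffixes), forcing the literal's type looks like this:

    using System;

    class ForceType
    {
        static void Main()
        {
            uint  u = 0xFFFFFFFFu;   // 'u' suffix: explicitly uint
            long  l = 0xFFFFFFFFL;   // 'L' suffix: long, value 4294967295
            ulong m = 0xFFFFFFFFUL;  // 'UL' suffix: ulong

            Console.WriteLine($"{u} {l} {m}");
        }
    }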

0
