Why is 0xFFFFFFFF uint when it represents -1?
Because you are not writing a bit pattern when you write
i = 0xFFFFFFFF;
you are writing a number, governed by the C# rules for integer literals. To write a negative number as a C# integer literal, you write a minus sign followed by the magnitude (e.g. -1 ), not the bit pattern of the value you want. It is a good thing we do not have to write bit patterns; writing negative numbers that way would be very inconvenient. When I want -3, I do not want to have to write 0xFFFFFFFD . :-) And I certainly do not want the number of leading Fs to depend on the size of the type ( 0xFFFFFFFFFFFFFFFD for a long -3 ).
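To make the point above concrete, here is a minimal C# sketch (the class and variable names are mine) showing that the two's-complement bit pattern of -3 really is 0xFFFFFFFD for a 32-bit int and 0xFFFFFFFFFFFFFFFD for a 64-bit long:

```csharp
using System;

class BitPatternDemo
{
    static void Main()
    {
        // Reinterpret the bits of -3 as unsigned, without changing them.
        // unchecked is required: the constant conversion overflows.
        uint bits32  = unchecked((uint)-3);
        ulong bits64 = unchecked((ulong)(long)-3);

        Console.WriteLine(bits32.ToString("X8"));   // FFFFFFFD
        Console.WriteLine(bits64.ToString("X16"));  // FFFFFFFFFFFFFFFD
    }
}
```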
The rule for choosing the type of a literal is covered by the specification linked above, which says:
If a literal does not have a suffix, it has the first of these types in which its value can be represented: int , uint , long , ulong .
0xFFFFFFFF does not fit in an int , whose maximum value is 0x7FFFFFFF , so the next type in the list that it does fit in is uint .
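A minimal C# sketch of that rule in action (the class and variable names are mine):

```csharp
using System;

class LiteralTypeDemo
{
    static void Main()
    {
        var a = 0x7FFFFFFF;  // fits in int (it is int.MaxValue), so a is int
        var b = 0xFFFFFFFF;  // too big for int, so the next type that fits: uint

        Console.WriteLine(a.GetType());  // System.Int32
        Console.WriteLine(b.GetType());  // System.UInt32

        // To treat that bit pattern as -1, reinterpret it explicitly:
        int i = unchecked((int)0xFFFFFFFF);
        Console.WriteLine(i);            // -1
    }
}
```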