Why is there an extra integer type among short / int / long?

Until recently, I thought that long was the same as int for historical reasons, since desktop processors have at least 32 bits (and I never ran into problems with this assumption because I had only ever developed on 32-bit machines).

Reading this, I found that the C standard actually requires int to be at least 16 bits, while long must be at least 32 bits.

The actual requirements listed are:

  • Short signed integer type. Capable of containing at least the range [-32767, +32767];
  • Basic signed integer type (int). Capable of containing at least the range [-32767, +32767];
  • Long signed integer type. Capable of containing at least the range [-2147483647, +2147483647];
  • Long long signed integer type. Capable of containing at least the range [-9223372036854775807, +9223372036854775807].

So the guaranteed ranges always overlap, and therefore there is always a duplicate type in whatever widths the compiler and platform choose.
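
To make the overlap concrete, here is a minimal sketch (my own example, not part of the original question) that prints the actual width of each type on the current platform next to the minimum the standard guarantees. On a typical LP64 Linux/x86-64 build int and long differ (32 vs 64 bits), while on 64-bit Windows (LLP64) long duplicates int:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT * sizeof(T) gives the storage width in bits of type T. */
    printf("char      : %zu bits\n", CHAR_BIT * sizeof(char));
    printf("short     : %zu bits (standard guarantees >= 16)\n", CHAR_BIT * sizeof(short));
    printf("int       : %zu bits (standard guarantees >= 16)\n", CHAR_BIT * sizeof(int));
    printf("long      : %zu bits (standard guarantees >= 32)\n", CHAR_BIT * sizeof(long));
    printf("long long : %zu bits (standard guarantees >= 64)\n", CHAR_BIT * sizeof(long long));
    return 0;
}
```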

Why did the standards committee introduce an additional type into what could have been as simple as char / short / int / long (or int_k, int_2k, int_4k, int_8k)?

Was it for historical reasons, for example gcc xx implementing int as 32 bits while another compiler implemented it as 16, or is there a real technical reason that I am missing?

1 answer

The central point is that int/unsigned is not just another step in the ladder of integer sizes char, short, int, long, long long. int is special: it is the size to which all narrower types are promoted, and as a rule it is the size that works "best" on a given processor. So whether int matches short, matches long, or sits strictly between short and long depends on the platform.
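
As an illustration of that promotion rule (my own sketch, not part of the original answer, assuming int is wider than short as it is on typical desktop targets):

```c
#include <stdio.h>

int main(void)
{
    short a = 30000, b = 30000;

    /* Both operands are promoted to int before the addition, so on a
       platform with 32-bit int the sum 60000 is computed without
       overflow, even though it would not fit in a 16-bit short. */
    printf("a + b = %d\n", a + b);

    /* The result of the expression has type int, not short. */
    printf("sizeof(a + b) = %zu, sizeof(short) = %zu\n",
           sizeof(a + b), sizeof(short));
    return 0;
}
```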

C is designed to accommodate a wide range of processors. That C is still in use 40+ years on is evidence of a successful strategy.
