Until recently, I thought that `long` was the same as `int`, both for historical reasons and because desktop processors are at least 32 bits wide (and the only problems I ever had with this "trick" came from code developed exclusively on 32-bit machines).
Reading this, I found that the C standard in fact only requires `int` to be at least 16 bits, while `long` must be at least 32 bits.
Specifically, it lists:
- Short signed integer type. Capable of containing at least the range [-32767, +32767];
- Basic signed integer type. Capable of containing at least the range [-32767, +32767];
- Long signed integer type. Capable of containing at least the range [-2147483647, +2147483647];
- Long long signed integer type. Capable of containing at least the range [-9223372036854775807, +9223372036854775807].
These required ranges always overlap, so whatever sizes the compiler and platform choose, there can end up being duplicates, i.e. two distinct types with exactly the same range.
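For instance, here is a minimal sketch that queries what a given implementation actually chose via `<limits.h>` (the printed values are implementation-defined; on many 64-bit Windows builds `int` and `long` are both 4 bytes, two types with one range, while typical 64-bit Linux builds make `long` 8 bytes):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Only the minimum ranges above are guaranteed by the standard;
       the actual widths depend on the compiler and platform. */
    printf("short:     %zu bytes, max %d\n",   sizeof(short),     SHRT_MAX);
    printf("int:       %zu bytes, max %d\n",   sizeof(int),       INT_MAX);
    printf("long:      %zu bytes, max %ld\n",  sizeof(long),      LONG_MAX);
    printf("long long: %zu bytes, max %lld\n", sizeof(long long), LLONG_MAX);
    return 0;
}
```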
Why did the standards committee introduce this additional type, when the scheme could have been as simple as char / short / int / long (or int_k, int_2k, int_4k, int_8k)?
Was it for historical reasons, for example gcc xx implementing `int` as 32 bits while another compiler implemented it as 16, or is there a real technical reason that I am missing?