The difference between int32, int, int32_t, int8 and int8_t

I recently came across the data type int32_t in a C program. I know that it stores 32 bits, but don't int and int32 do the same?

Also, I want to use char in a program. Can I use int8_t instead? What is the difference?

To summarize: what is the difference between int32, int, int32_t, int8 and int8_t in C?

+54
Tags: c, int, declaration
Jan 25 '13 at 5:26
3 answers

Between int32 and int32_t (and likewise between int8 and int8_t) the difference is pretty simple: the C standard defines int8_t and int32_t, but does not define anything named int8 or int32. The latter (if they exist at all) probably come from some other header or library, most likely one that predates the addition of int8_t and int32_t in C99.
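
A minimal sketch of the distinction (the commented-out line stands in for the non-standard name):

  #include <stdint.h>   /* C99: defines int8_t, int32_t, ... */

  int32_t a = 100000;   /* exactly 32 bits, standard since C99 */
  int8_t  b = 100;      /* exactly 8 bits, standard since C99 */
  /* int32 c = 0; */    /* would not compile unless some other header
                           or library happens to define "int32" */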

Plain int is quite a bit different from the others. Where int8_t and int32_t each have a specified size, int can be any size >= 16 bits. At different times, both 16 bits and 32 bits have been reasonably common (and for a 64-bit implementation, it should probably be 64 bits).
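
A small sketch you can run to see what a given implementation actually uses:

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
      /* only int >= 16 bits is guaranteed; the actual width is
         implementation-defined */
      printf("int is %zu bytes (%zu bits) here\n",
             sizeof(int), sizeof(int) * CHAR_BIT);
      return 0;
  }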

On the other hand, int is guaranteed to be present in every implementation of C, where int8_t and int32_t are not. It is probably open to question whether this matters to you, though. If you use C on small embedded systems and/or older compilers, it may be a problem. If you use it mostly with a modern compiler on desktop/server machines, it probably will not be.
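
If it does matter, one portable way to check is the INT32_MAX macro, which stdint.h defines exactly when int32_t exists; a minimal sketch:

  #include <stdint.h>

  /* the exact-width types are optional, but an implementation that
     provides int32_t must also define INT32_MAX, so its absence is
     a compile-time signal that the type is missing */
  #ifndef INT32_MAX
  #error "this platform does not provide int32_t"
  #endif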

Oops, I skipped the part about char. You would use int8_t instead of char if (and only if) you want an integer type guaranteed to be exactly 8 bits in size. If you want to store characters, you most likely want to use char instead. Its size can vary (in terms of the number of bits), but it is guaranteed to be exactly one byte. One minor oddity: there is no guarantee about whether a plain char is signed or unsigned (and many compilers can make it either one, depending on a compile-time flag). If you need to ensure that it is signed or unsigned, you need to say so explicitly.
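
A brief sketch of the three distinct character types:

  char          c  = 'A';  /* character data; signedness unspecified */
  signed char   sc = -100; /* explicitly signed, at least 8 bits */
  unsigned char uc = 200;  /* explicitly unsigned */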

+73
Jan 25 '13 at 5:32

The _t types are typedefs in the stdint.h header, while int is a built-in fundamental data type. This makes the _t types available only if stdint.h exists; int, on the other hand, is guaranteed to exist.
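
One consequence of these being plain typedefs, sketched here assuming an implementation where int32_t happens to be int:

  #include <stdint.h>

  int32_t n = 42;
  int    *p = &n;  /* compiles where int32_t is typedef'd as int, but
                      is not portable: elsewhere it may be long */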

+11
Jan 25 '13 at 5:35

Always keep in mind that a type's size is implementation-defined unless explicitly specified, so if you declare

  int i = 10; 

on some systems the compiler may produce a 16-bit integer, on others a 32-bit integer (or a 64-bit integer on newer systems).

In embedded environments this can end up with strange results (especially when handling memory-mapped I/O, or in what may look like a simple array situation), so it is strongly recommended to use fixed-size variables; a sketch of the memory-mapped case follows the typedefs below. On legacy systems you may run into

  typedef short INT16;
  typedef int   INT32;
  typedef long  INT64;
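
To illustrate the memory-mapped I/O point above, a hypothetical sketch; the register address and the STATUS_REG / device_ready names are made up for illustration:

  #include <stdint.h>

  /* hypothetical 32-bit status register at a made-up address; with
     plain int the access width would silently change between 16-,
     32- and 64-bit compilers, while uint32_t pins it down */
  #define STATUS_REG (*(volatile uint32_t *)0x40021000u)

  static int device_ready(void)
  {
      return (STATUS_REG & 0x1u) != 0;  /* test the ready bit */
  }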

Starting from C99, the designers added the stdint.h header file, which essentially leverages similar typedefs.

On a Windows-based system, you may see entries in the stdint.h header file such as

  typedef signed char    int8_t;
  typedef signed short   int16_t;
  typedef signed int     int32_t;
  typedef unsigned char  uint8_t;

There is more to it, such as the minimum-width and exact-width integer types; I think it is good to explore stdint.h for a better understanding.
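
For instance, besides the exact-width names, stdint.h also provides minimum-width and fastest minimum-width families; unlike int32_t itself, these are required on every C99 implementation. A brief sketch:

  #include <stdint.h>

  int_least8_t   counter;  /* smallest type with at least 8 bits */
  int_fast16_t   index;    /* "fastest" type with at least 16 bits;
                              often wider than 16 on modern CPUs */
  uint_least32_t hash;     /* required on every C99 implementation,
                              even where exact-width uint32_t is not */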

0
Dec 03 '17 at 17:12


