When to use short instead of int?

Two use cases come to mind for which I might consider short:

  • I need an integer type of at least 16 bits in size
  • I need an integer type whose size is 16 bits

In the first case, since int is guaranteed to be at least 16 bits wide and is the most efficient integer type, I would use int. In the second case, since the standard does not guarantee that short is exactly 16 bits wide, I would use int16_t. So what use is short?

+8
c
5 answers

There is never a reason to use short in a C99 environment that has 16-bit integers; you can use int16_t, int_fast16_t, or int_least16_t instead.

The main reasons to use short are: backward compatibility with C89 or older environments that do not offer these types, or with libraries that use short as part of their public API; implementing <stdint.h> itself; or compatibility with platforms that have no 16-bit integer type, whose C compilers therefore do not provide int16_t.

+6

ISO/IEC 9899:1999 (C99) adds the headers <stdint.h> and <inttypes.h>, which provide what you need:

  • int16_t need not be defined, but if the implementation has an (exactly) 16-bit integer type, int16_t will be an alias for it.
  • int_least16_t is the smallest type that is at least 16 bits wide. It is always available.
  • int_fast16_t is the fastest type that is at least 16 bits wide. It is always available.

Similarly for other sizes: 8, 16, 32, 64.

There is also intmax_t for the widest integer type. Of course, each of these also has an unsigned counterpart: uint16_t, etc.

These types are also present in C2011. They were not present in C89 or C90. However, I believe the headers are available in some shape or form for most compilers, even those, such as MS Visual C, that do not claim to support C99.

Please note that I have linked to the POSIX 2008 versions of the <stdint.h> and <inttypes.h> headers. POSIX imposes implementation requirements that are not in the C standard:

§7.18.1.1 Exact-width integer types

¶1 The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.

¶2 The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.

¶3 These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, it shall define the corresponding typedef names.

+3

Yes, if you really need a specific data size, use int16_t, int32_t, etc.

int16_t is usually a platform-specific typedef of short (or of whatever type maps to 16 bits). On a 32-bit machine, int16_t may be typedef'd to short; on a 16-bit machine, int16_t may be typedef'd to int.

+1

ANSI C specifies only minimum value ranges for the types. So you can be sure of the first case, but not the second.

0

If you have a very large array, you can use short to save memory.

You might find them useful for picking out parts of some other piece of data, as part of a union.

0
