Why must there be an unsigned type corresponding to each signed type in C?

I was reading C in a Nutshell and found this:

"If an optional signed type is provided (the one without the u prefix), then the corresponding unsigned type (with the initial u) is required as well, and vice versa."

(From the paragraph on integer types with exact width, C99.)

c c99 unsigned signed stdint
1 answer

Because the C primitive data types come in signed and unsigned versions. As for why that matters for C99, the C99 Rationale explains the need for the inttypes/stdint types like this (C99 Rationale V5.10, §7.8):

C89 specifies that the language should support four signed and unsigned integer data types, char, short, int and long, but places very little requirement on their size other than that int and short be at least 16 bits and long be at least as long as int and not smaller than 32 bits. For 16-bit systems, most implementations assign 8, 16, 16 and 32 bits to char, short, int, and long, respectively. For 32-bit systems, the common practice is to assign 8, 16, 32 and 32 bits to these types. This difference of int size can create some problems for users who migrate from one system to another which assigns different sizes to integer types, because Standard C's integer promotion rule can produce silent changes unexpectedly. The need for defining an extended integer type increased with the introduction of 64-bit systems.

The purpose of <inttypes.h> is to provide a set of integer types whose definitions are consistent across machines and independent of operating systems and other implementation idiosyncrasies. It defines, via typedef, integer types of various sizes. Implementations are free to typedef them as Standard C integer types or extensions that they support. Consistent use of this header will greatly increase the portability of a user's program across platforms.

The point being that the inttypes/stdint implementation was assumed to be done with typedef . Therefore there must exist one fixed-width type corresponding to each supported primitive data type.

As for why C has signed and unsigned types in the first place: it is simply because CPUs support both signed and unsigned integer arithmetic. But also because we want integer types for expressing raw binary data: unsigned char / uint8_t is the C language equivalent of a raw data byte that can contain anything. (Which is also the reason why the character types cannot contain trap representations, etc.)

In the C99 standard itself we can find text similar to the quote from your book, C99 6.2.5/6:

For each of the signed integer types, there is a corresponding (but different) unsigned integer type (designated with the keyword unsigned) that uses the same amount of storage (including sign information) and has the same alignment requirements.

