I think there is an error in the description of the sign bit for integer types in section 6.2.6.2 of ISO/IEC 9899:TC3 (C99).
For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways:
- the corresponding value with sign bit 0 is negated (sign and magnitude);
- the sign bit has the value -(2^N) (two's complement);
- the sign bit has the value -(2^N - 1) (ones' complement).
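To make the three schemes concrete, here is a minimal sketch of my own (the function `interpret` and its parameters are purely illustrative, not part of the standard's text) that reconstructs a value from one sign bit plus a given number of value bits:

```c
#include <stdio.h>

/* Illustrative sketch: interpret an object representation made of one sign
 * bit followed by `value_bits` value bits under each of the three permitted
 * schemes.  Bit number `value_bits` of `raw` is the sign bit; the low
 * `value_bits` bits are the value bits. */
static long interpret(unsigned long raw, int value_bits, char scheme)
{
    long value = (long)(raw & ((1UL << value_bits) - 1)); /* value bits */
    int  sign  = (int)((raw >> value_bits) & 1UL);        /* sign bit   */

    if (!sign)
        return value;                  /* sign bit 0: no effect on the value */

    switch (scheme) {
    case 'm':                          /* sign and magnitude */
        return -value;
    case 't':                          /* two's complement: sign bit worth -(2^value_bits) */
        return value - (1L << value_bits);
    default:                           /* ones' complement: sign bit worth -(2^value_bits - 1) */
        return value - ((1L << value_bits) - 1);
    }
}

int main(void)
{
    unsigned long raw = 0x81;          /* bit pattern 1000 0001 */

    printf("sign and magnitude: %ld\n", interpret(raw, 7, 'm')); /* -1   */
    printf("two's complement:   %ld\n", interpret(raw, 7, 't')); /* -127 */
    printf("ones' complement:   %ld\n", interpret(raw, 7, 'o')); /* -126 */
    return 0;
}
```

With the signed type's value-bit count (7 for a typical 8-bit signed char) the results match the familiar ranges; plugging in the unsigned type's count (8) instead is exactly what produces the -256 anomaly described below.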
In that paragraph, N is defined as the number of value bits in the unsigned type, yet the bullet points use it as if it were the number of value bits in the signed type.
Taking the case of a signed char with 8 bits per byte, no padding bits, and two's complement representation, this says the sign bit has the value -(2^8) = -256, not -(2^7) = -128.
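As a quick sanity check, a small program (assuming a two's complement implementation with CHAR_BIT == 8 and no padding bits) shows the sign bit of signed char contributing -128, i.e. -(2^7), in practice:

```c
#include <limits.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Assumes two's complement, CHAR_BIT == 8, no padding bits:
     * signed char then has 7 value bits plus the sign bit, so the
     * pattern 1000 0000 is the sign bit alone, worth -(2^7) = -128
     * (which equals SCHAR_MIN there), not -(2^8) = -256. */
    unsigned char bits = 0x80;   /* only the sign bit set */
    signed char   sc;

    memcpy(&sc, &bits, 1);       /* reinterpret the object representation */

    printf("CHAR_BIT  = %d\n", CHAR_BIT);
    printf("SCHAR_MIN = %d\n", SCHAR_MIN);
    printf("value     = %d\n", sc);   /* prints -128 on such implementations */
    return 0;
}
```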
I think the standard should either swap M and N in the opening paragraph, or change the definition of the sign bit to use M:
- the sign bit has the value -(2^M) (two's complement);
- the sign bit has the value -(2^M - 1) (ones' complement).
Am I missing something or is this a mistake?