Regarding 1)
People often use char arrays when what they really need is a byte buffer for raw data. This is not good practice, but many projects do it, and if you are careful there is no real harm. There are other legitimate uses as well.
Regarding 2)
Signed integers are sign-extended when converted to a wider type: the bit pattern 11111111 (-1 as a signed 8-bit value) becomes 11111111 11111111 11111111 11111111 when widened to 32 bits. The problem is that if the char was supposed to hold the unsigned value 255, sign extension turns it into -1 instead.
Regarding 3)
Some machines treat plain char as signed, while others treat it as unsigned; it can also vary with the compiler implementation. Most of the time you do not need to worry about it. Kernighan is simply trying to make you aware of the details.
Edit
I know this is an old question, but you can use the following to check at compile time whether char is signed or unsigned on your system:
    #include <limits.h> // implementation-specific limits (CHAR_MAX, SCHAR_MAX, INT_MAX, etc.)

    #if CHAR_MAX == SCHAR_MAX
    // Plain "char" is signed
    #else
    // Plain "char" is unsigned
    #endif