From a machine architecture point of view, a char is a value that can be represented in 8 bits (whatever happened to non-8-bit architectures?).
The number of bits in an int is not fixed; I believe it is defined as a "natural" size for a particular machine, that is, the number of bits the machine can manipulate most easily and quickly.
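You can check both of these on your own machine. Here is a minimal sketch (assuming a hosted C implementation with the standard `<limits.h>` and `<stdio.h>` headers):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a char; the standard requires
       at least 8, and it is exactly 8 on virtually all modern machines. */
    printf("bits in a char: %d\n", CHAR_BIT);

    /* sizeof(int) is implementation-defined: commonly 4 bytes today,
       but 2 on many older or embedded platforms. */
    printf("bytes in an int: %zu\n", sizeof(int));
    return 0;
}
```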
As already mentioned, all values in computers are stored as sequences of binary bits. How are these bits interpreted? They can be read as binary numbers, or as codes representing something else, such as alphabetic characters, or in many other ways.
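As a small illustration of this, the very same byte can be printed as a character or as a number; only the interpretation changes (the value 65 assumes an ASCII-based system):

```c
#include <stdio.h>

int main(void)
{
    char c = 'A';   /* on an ASCII system, stored as the bit pattern 01000001 */

    /* The same bits, interpreted two ways: */
    printf("as a character: %c\n", c);   /* prints: A  */
    printf("as a number:    %d\n", c);   /* prints: 65 */
    return 0;
}
```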
When C was first designed, the assumption was that 256 codes were sufficient to represent all the characters in the alphabet. (In fact, this was probably less an assumption than a choice that was good enough at the time, since the designers tried to keep the language simple and consistent with the computer architectures then in use.) Therefore, an 8-bit value (256 possibilities) was considered sufficient to store an alphabetic character code, and the char data type was defined accordingly.
Disclaimer: everything written above is my opinion or assumption; only the designers of C can really answer this question.
A simpler but misleading answer is that you cannot store the integer value 257 in a char, but you naturally can in an int.
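To make that concrete, here is a sketch using unsigned char (chosen because assigning an out-of-range value to an unsigned type has well-defined wrap-around behavior, unlike plain char, whose signedness is implementation-defined):

```c
#include <stdio.h>

int main(void)
{
    unsigned char uc = 257;  /* only the low 8 bits fit: 257 % 256 == 1 */
    int i = 257;             /* an int easily holds 257 */

    printf("unsigned char: %d\n", uc);  /* prints: 1   */
    printf("int:           %d\n", i);   /* prints: 257 */
    return 0;
}
```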