If a char can hold integers, why is there any need to use integers at all?

Why do we even use int in C?

#include <stdio.h>

int main()
{
    char c = 10;
    printf("%d", c);
    return 0;
}

Same as:

#include <stdio.h>

int main()
{
    int c = 10;
    printf("%d", c);
    return 0;
}
+4

8 answers

Technically, all data types are represented by 0s and 1s. So, if they are all the same in the end, why do we need different types?

Well, a type is a combination of data and the operations you can perform on that data.

We have int to represent numbers. It supports operations such as + to compute the sum of two numbers or - to compute the difference.

When you think of a character in the usual sense, it is a single letter or symbol in a human-readable format. Being able to add 'A' + 'h' does not make sense. (Even though C lets you do this, as the sketch below shows.)
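
To see what actually happens, here is a minimal sketch (assuming an ASCII system): both operands are promoted to int, so the expression just adds the two character codes.

#include <stdio.h>

int main(void)
{
    /* Both chars are promoted to int, so this sums the character
       codes (65 + 104 in ASCII), not the "letters" themselves. */
    int sum = 'A' + 'h';
    printf("%d\n", sum); /* prints 169 on an ASCII system */
    return 0;
}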

So we have different types in different languages to simplify programming. They essentially encapsulate the data and the functions/operations that are legal for that data.

Wikipedia has a good article on type systems.

+20

Because a char can hold only small numbers, typically -128 to 127.
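
You can check the exact limits on your own machine with a small sketch using the macros from <limits.h> (the values are implementation-defined):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* The limits are implementation-defined; these macros report them. */
    printf("char: %d to %d\n", CHAR_MIN, CHAR_MAX);
    printf("int:  %d to %d\n", INT_MIN, INT_MAX);
    return 0;
}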

+11

A char holds only 8 bits, while an integer type can have 16, 32, or even 64 bits (long long int).
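
For example, a minimal sketch that prints the sizes on your particular machine (only sizeof(char) == 1 is guaranteed by the standard):

#include <stdio.h>

int main(void)
{
    printf("char:      %zu byte(s)\n", sizeof(char));
    printf("int:       %zu byte(s)\n", sizeof(int));
    printf("long long: %zu byte(s)\n", sizeof(long long));
    return 0;
}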

+3

Try the following:

#include <stdio.h>

int main()
{
    int c = 300;
    printf("%d", c);
    return 0;
}

#include <stdio.h>

int main()
{
    char c = 300;
    printf("%d", c);
    return 0;
}

The char, short, int, long, and long long data types hold integers of (possibly) different sizes, each able to take values up to a certain limit. A char holds an 8-bit number (which is technically neither signed nor unsigned, but in practice will be one or the other). Its range is therefore only 256 values (-128 to 127, or 0 to 255).

It is good practice to avoid char, short, int, long, and long long and to use int8_t, int16_t, int32_t, uint8_t, etc., or even better: int_fast8_t, int_least8_t, etc.
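
As a rough sketch of what that looks like, using the fixed-width types from <stdint.h> and the matching printf macros from <inttypes.h> (both available since C99):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int8_t      small = 100;    /* exactly 8 bits, where the platform provides it */
    int32_t     big   = 300000; /* exactly 32 bits */
    int_fast8_t fast  = 10;     /* at least 8 bits, whatever is fastest here */

    printf("%" PRId8 " %" PRId32 " %" PRIdFAST8 "\n", small, big, fast);
    return 0;
}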

+2

Generally speaking, char is the smallest unit of sensible data storage on the machine, while int is the "best" size for ordinary computation (for example, the register size). The size of any data type can be expressed as a number of chars, but not necessarily as a number of ints. For example, on a Microchip PIC16, a char is eight bits, an int is 16 bits, and a short long is 24 bits. (short long has to be the dumbest type qualifier I have ever come across.)

Note that a char is not necessarily 8 bits, but it usually is. Corollary: any time someone claims it is always 8 bits, someone will chime in and name a machine where it is not.
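
If you would rather check than assume, CHAR_BIT from <limits.h> tells you how many bits a char has on your platform; a minimal sketch:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* CHAR_BIT is at least 8, but some DSPs and other exotic targets use more. */
    printf("bits per char on this machine: %d\n", CHAR_BIT);
    return 0;
}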

+2

From a machine-architecture point of view, a char is a value that can be represented in 8 bits (whatever happened to non-8-bit architectures?).

The number of bits in an int is not fixed; I believe it is defined as the "natural" size for a particular machine, that is, the number of bits that is easiest and quickest for that machine to manipulate.

As already mentioned, all values in a computer are stored as sequences of binary bits. What matters is how those bits are interpreted: they can be read as binary numbers, or as codes representing something else, such as a set of alphabetic characters, or many other possibilities.

When C was first designed, the assumption was that 256 codes were sufficient to represent all the characters in the alphabet. (In fact, this was probably not so much an assumption as something that was good enough at the time, and the designers tried to keep the language simple and consistent with the computer architectures that then existed.) Therefore, an 8-bit value (256 possibilities) was considered sufficient to store an alphabetic character code, and the char data type was defined as a convenient way to hold one of those codes.

Disclaimer: everything written above is my opinion or conjecture. The designers of C are the only ones who can really answer this question.

A simpler, though somewhat misleading, answer is that you cannot store the integer value 257 in a char, but you naturally can in an int.

+2

Because of this:

#include <stdio.h>

int main(int argc, char **argv)
{
    char c = 42424242;
    printf("%d", c); /* Oops. */
    return(0);
}
+1

A char cannot hold integers. Not all of them, at least, no matter how you assign the value. Do some experimenting with sizeof to see that there is a difference between char and int.

If you really want to use char instead of int, you should probably consider char[] instead and keep the ASCII, base-10 character representation of the number. :-)
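
A rough sketch of both suggestions, comparing the sizes with sizeof and storing a number as its base-10 character representation in a char[] (the buffer size below is just an illustrative choice):

#include <stdio.h>

int main(void)
{
    printf("sizeof(char) = %zu, sizeof(int) = %zu\n", sizeof(char), sizeof(int));

    /* Store 257 as text rather than as a single char value. */
    int  value = 257;
    char text[16]; /* plenty of room for any 32-bit int in decimal */
    snprintf(text, sizeof text, "%d", value);
    printf("as a string: %s\n", text);
    return 0;
}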

+1
