How do computers convert everything to binary? When they see the binary, how do they know if it represents a number, a word, or an instruction?

I know how computers convert numbers to binary. But I don't understand how computers translate everything (words, instructions, ...) into binary, and not just numbers. How is this possible?

Could you show me some examples? How does a computer translate the letter "A" into binary?

And when computers see binary, how can they know whether a long string of 0s and 1s represents a number, a word, or an instruction?


Example:

Let's say that the programmer encoded the letter "Z" so that it translates to this binary string: 11011001111011010111

Therefore, when the computer encounters this binary string, it will translate it to the letter "Z".

But what happens when we ask this computer "what is the product of 709 and 1259?"

The computer will answer "892631". But that number, converted to binary, is 11011001111011010111.

So, how would the computer tell the difference between "Z" and 892631?


Please note that I know little about computer science, so please explain everything in simple words.

+7
computer-science binary-data binary binaryfiles computer-architecture
6 answers

Computers don't actually translate anything into binary; everything is binary from the very beginning, and a computer never knows anything other than binary.

The character A stored in memory will be 01000001, and the computer sees it as nothing but a binary number. When we ask the computer to display that number as a character on the screen, it will look up its graphical representation in a font definition to find other binary numbers to send to the screen hardware.

For example, if the computer were an eight-bit Atari, it would look up eight binary values to represent the character A on the screen:

 00000000
 00011000
 00111100
 01100110
 01100110
 01111110
 01100110
 00000000

As you can see, the binary values are converted to dark and bright pixels when the graphics hardware draws them on the screen.
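Here is a small C sketch of that idea (the eight glyph bytes are the Atari values quoted above, written in hexadecimal; '#' stands for a bright pixel and '.' for a dark one):

 #include <stdio.h>

 /* The eight bytes quoted above: one per pixel row of the glyph for A. */
 static const unsigned char glyph_A[8] = {
     0x00, 0x18, 0x3C, 0x66, 0x66, 0x7E, 0x66, 0x00
 };

 int main(void) {
     /* For each row, test the bits from left (bit 7) to right (bit 0):
        a 1 bit becomes a bright pixel, a 0 bit a dark one. */
     for (int row = 0; row < 8; row++) {
         for (int bit = 7; bit >= 0; bit--) {
             putchar((glyph_A[row] >> bit) & 1 ? '#' : '.');
         }
         putchar('\n');
     }
     return 0;
 }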

Similarly, no matter what we do with numbers on a computer, it is all ways of moving binary values around, performing calculations on binary values, and translating them into other binary values.

If you, for example, take the character code for A and want to display it as a decimal number, the computer will calculate that the decimal representation of the number is the digits 6 ( 110 ) and 5 ( 101 ), translate them into the character 6 ( 00110110 ) and the character 5 ( 00110101 ), and then translate those into their graphical representations.
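As a minimal sketch of that digit-by-digit step in C (assuming ASCII, where adding the character '0' turns a digit value into its character code):

 #include <stdio.h>

 int main(void) {
     int code = 65;            /* the character code for A */
     int tens = code / 10;     /* 6 */
     int ones = code % 10;     /* 5 */
     /* Adding '0' (binary 00110000) turns a digit value into its
        character code: 6 -> '6' (00110110), 5 -> '5' (00110101). */
     char c1 = '0' + tens;
     char c2 = '0' + ones;
     printf("%c%c\n", c1, c2); /* prints 65 */
     return 0;
 }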

+9

This is an excellent question, and it would take several years and several PhDs to explain fully. I can offer you a simplified answer, but to understand fully you will have to do a LOT more research. May I suggest some free MIT online classes here.

At the lowest level, the letter A and the number 65 are actually stored using the same sequence of 0s and 1s: 1000001, if I'm not mistaken.

The computer then decides what it is when it grabs it from memory. This means that letters can be displayed as numbers, and vice versa.

The way the computer knows what it is looking at is that the programmer tells it what it is looking at. The programmer says, "I need a number stored at such-and-such a place," and the computer goes and fetches it.

Let's step up a level, because programs rarely work at such a low level. Other programs (usually compilers, which take code like C++ and turn it into something the computer can understand) make sure that the location we are accessing really is what we said it is. They carry extra information that tells them that this particular set of 1s and 0s is actually a floating-point type (it has a decimal point), while that set is an integer (no decimal point).

Then other types are built on top of those, such as bigger integers, floating point, or strings of characters, and again the compilers enforce the types.
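For illustration, a minimal C sketch of the idea that the type decides the meaning of the bits (memcpy is used here just to reinterpret the same 32 bits as an integer):

 #include <stdio.h>
 #include <string.h>

 int main(void) {
     float f = 1.0f;       /* bit pattern 00111111 10000000 00000000 00000000 */
     unsigned int bits;
     memcpy(&bits, &f, sizeof bits);  /* copy the raw bits into an integer */
     /* Read as a float the bits mean 1.0; read as an integer they
        mean 1065353216. The bits themselves never change. */
     printf("as float: %f\n", f);
     printf("as int:   %u\n", bits);
     return 0;
 }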

This is a simplification, and I know that not everything here is exactly right, but it will set you on the right path. You can check out some of these topics to get a much better idea:

How do instructions differ from data?

http://en.wikipedia.org/wiki/Computer_data_storage

How do the data, address and instruction differ in the processor / register / memory?

http://en.wikipedia.org/wiki/Reference_(computer_science)

Hope this makes things a bit easier. Feel free to ask for clarification!

+7

So how would it make a difference between "Z" and "892631"?

It doesn't. Inside the computer it's all 0s and 1s. Raw bits mean nothing until the processor is TOLD what to do with those 0s and 1s!

For example, I could create a variable x and make its value 0b01000001 (the 0b prefix means "I'm describing this number in binary"). Then I could ask the processor to print the variable x on the screen for me. But FIRST I must tell the processor what x IS!

 printf("%d", x); // this prints the decimal number 65 printf("%c", x); // this prints the character A 

So, x by itself means nothing but the raw bits 01000001. But as the programmer, it is my job to tell the computer what x really means.
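For completeness, a compilable version of that fragment might look like this (0x41 is the same value 01000001 written in hexadecimal, since the 0b prefix is a compiler extension):

 #include <stdio.h>

 int main(void) {
     int x = 0x41;        /* the raw bits 01000001 */
     printf("%d\n", x);   /* prints the decimal number 65 */
     printf("%c\n", x);   /* prints the character A */
     return 0;
 }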

+5

ASCII uses only 7 bits to store letters and special characters, whereas all 8 bits of a byte are used when storing a number.

Let's take "A" and "65" as examples.

65 is converted to binary by dividing by 2 over and over and collecting the remainders:

 65 / 2 = 32, remainder 1   -> 1        (this remainder is the 2 to the power 0 digit, worth 1)
 32 / 2 = 16, remainder 0   -> 01
 16 / 2 =  8, remainder 0   -> 001
  8 / 2 =  4, remainder 0   -> 0001
  4 / 2 =  2, remainder 0   -> 00001
  2 / 2 =  1, remainder 0   -> 000001
  1 / 2 =  0, remainder 1   -> 1000001  (this remainder is the 2 to the power 6 digit, worth 64)

 ========= 1000001 in binary represents 65
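That same repeated-division procedure, sketched in C (a minimal illustration; the remainders come out lowest bit first, so they are stored and printed in reverse):

 #include <stdio.h>

 int main(void) {
     int n = 65;
     char digits[32];
     int count = 0;
     /* Divide by 2 repeatedly; each remainder is one binary digit. */
     while (n > 0) {
         digits[count++] = '0' + (n % 2);
         n /= 2;
     }
     /* The first remainder is the lowest bit, so print in reverse. */
     for (int i = count - 1; i >= 0; i--)
         putchar(digits[i]);
     putchar('\n');           /* prints 1000001 */
     return 0;
 }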

The ASCII value for the letter A is stored as 01000001 in binary (only 7 bits are used; for letters and special characters the 8th bit stays 0).

Hope this helps.

+1

Let's discuss some basics:

  1. Suppose your hard drive is nothing more than a round aluminum plate covered with tiny spots (visible only under a microscope). The spots are grouped into bytes of 8 bits (1 bit per spot).
  2. RAM is similar to a hard drive, but it is made of silicon semiconductors, so it can store information as electric fields, and every byte has an address, which makes it faster.
  3. The computer stores all the information you enter from the keyboard on the hard drive as magnetic pulses. A pulse at a spot is what we humans call a 1; a spot with no pulse is what we call a 0.

Let's discuss the first part of your question: "Could you show me some examples? How does a computer translate the letter "A" into binary?"

  1. For example, you enter the characters "A" and "அ" from the keyboard.
  2. The character "A" is represented as 65 in Unicode/ASCII, which is 01000001 in base-2 binary. The OS does the mapping from A to binary. The "A" you typed is now stored on the hard drive as 01000001, spread over 8 different spots (for example, no magnetic pulse for each 0, a magnetic pulse for the 1 in the seventh bit, and so on).
  3. RAM, in contrast, stores information as electrical pulses, which is why RAM loses all its information when the power is turned off.

Now, everything you see in RAM or on the hard drive is energy or no energy in a given byte, and we call that the binary format for human understanding (call it 0 for no energy and 1 for energy).

Now it depends on the compiler how it is stored. If it is a C compiler on an AMD processor / Windows OS, it might store the value 65 in 2 bytes. On an AMD processor the low-order byte is stored first, at the lower address; this is called little-endian. C does not support the "அ" character, since it takes more than 1 byte to store such international characters.
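A small C sketch of what little-endian means (the output shown assumes a little-endian machine, such as a typical AMD or Intel processor):

 #include <stdio.h>

 int main(void) {
     unsigned short v = 65;   /* a 2-byte value, bits 00000000 01000001 */
     unsigned char *p = (unsigned char *)&v;
     /* On a little-endian machine the low byte (01000001) is stored
        at the lower address, so this prints 65 then 0. */
     printf("byte 0: %d\n", p[0]);
     printf("byte 1: %d\n", p[1]);
     return 0;
 }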

If it is a Java compiler, it uses a variable-length Unicode encoding. The letter "A" needs only 1 byte, since its Unicode/ASCII value is 65. If you save a letter from another language, such as "அ" (the equivalent of A in Tamil), the corresponding Unicode value is 2949 and the corresponding UTF-8 binary value is 11100000 10101110 10000101 (3 bytes). Java has no problem storing and reading both "A" and "அ".

Now imagine that you saved the "அ" character on your hard drive as a character type (char), using a Java program on a Windows/AMD machine.

Now imagine that you want to read it from a C program as a char. The C compiler understands only ASCII, not the full Unicode set. Here C will read just 1 byte of the 3 (a char is 1 byte), say the last one (10000101). What do you get on the screen? Your C program will read that 1 byte without any problem and, if you ask it to print, will draw whatever that single byte happens to represent. So the compiler is what makes the difference.
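A small C sketch of those three bytes (they are written out by hand here, since C's char type only handles one byte at a time):

 #include <stdio.h>

 int main(void) {
     /* The three UTF-8 bytes of the Tamil letter அ (code point 2949):
        11100000 10101110 10000101. */
     const unsigned char a[] = { 0xE0, 0xAE, 0x85 };
     for (int i = 0; i < 3; i++)
         printf("byte %d: 0x%02X\n", i, a[i]);
     /* A program reading only one of these bytes as a char sees just
        that byte (e.g. 10000101), not the whole letter. */
     return 0;
 }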

Let's discuss the second part of your question: "And when computers see binary, how can they know if this long string of 0s and 1s is a number or a word or an instruction?"

Now your compiled Java program is loaded into RAM, into the text and data areas (at a high level, RAM is divided into a text area and a data area). Now you ask the processor's ALU to execute your program's set of instructions; this is called a process.

A line in your compiled program might be an instruction to move data from one variable to another.

When the ALU executes that instruction, the data goes into the corresponding registers, which sit inside the processor, outside of RAM. The processor has a set of registers for data and a set of registers for instructions. The ALU knows which register is for what based on the instruction it is executing.
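As a rough illustration in C (the instruction names in the comments are made up for the example; real names vary by processor):

 #include <stdio.h>

 int main(void) {
     int a = 5;
     int b;
     /* One line of C can compile down to instructions like:
        LOAD  register1, [address of a]   ; fetch a's bits from RAM
        STORE [address of b], register1   ; write them to b's location
        The ALU and registers only ever see the raw bits. */
     b = a;
     printf("%d\n", b);   /* prints 5 */
     return 0;
 }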

Hope this helps.

0

Keep in mind that when you see the letter A, it is really just a command to display something on the screen that looks like an A. Someone still had to create the font, and the font file contains data for the screen along the lines of: if the color is black, fill the pixel coordinates that make up the letter A in this font with black. And so on...

-1
