Well, consider this: computers work only with binary numbers, so all calculations are done at the binary level. To compare two numbers, the computer first checks whether both are the same length and pads the shorter one with zeros on the left. Once both are the same length, it compares the bits from left to right. As long as the bits match (both 1 or both 0), the numbers are equal so far. At the first position where they differ, the number with the 0 is the smaller one and the other is the larger. This is how you determine the order of two numbers.
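To make that concrete, here is a minimal Python sketch of that comparison. The function name `compare_binary` and the string-based representation are purely for illustration; a real processor does this in hardware on fixed-width registers, not on strings.

```python
def compare_binary(a: str, b: str) -> int:
    """Compare two binary strings as described above.

    Returns -1 if a < b, 0 if a == b, 1 if a > b.
    """
    # Pad the shorter number with zeros on the left.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)

    # Scan from left to right; the first differing bit decides.
    for bit_a, bit_b in zip(a, b):
        if bit_a != bit_b:
            return -1 if bit_a == "0" else 1
    return 0  # every bit matched, so the numbers are equal


print(compare_binary("101", "11"))   # 1  -> 5 > 3
print(compare_binary("011", "11"))   # 0  -> 3 == 3
```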
Now add two numbers. This time you start on the right side. If both bits are 0, the result is 0. If one of them is 1 and the other is 0, the result is 1. If both are 1, the result is 0 and a 1 is carried one position to the left. Move one position to the left and repeat, then add the carried 1 to that bit of the result, which may itself produce another carry to the left. The interesting part is that the carry is never more than a single 1; in no case do you ever have to carry a 2.
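As a sketch of that process, here is a small Python function (the name `add_binary` and the string representation are mine, just for illustration) that adds two binary strings with a ripple carry, exactly in the right-to-left order described above:

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings bit by bit, right to left, with a carry.

    A string-based sketch of the ripple-carry idea; real adders do the
    same thing with logic gates.
    """
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)

    result = []
    carry = 0
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry
        result.append(str(total % 2))   # bit written at this position
        carry = total // 2              # carry is never more than 1
    if carry:
        result.append("1")              # final carry becomes a new leftmost bit
    return "".join(reversed(result))


print(add_binary("101", "011"))  # '1000'  -> 5 + 3 = 8
```

Note that `total` can never exceed 1 + 1 + 1 = 3, which is why the carry is always at most 1.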
And basically, this is how processors learned to add two numbers.
When you start working with numbers greater than 0 and 1, you are simply adding complexity to the mathematical problem. And in your example you already split things into 1s: if you add 5 + 3, you break it into (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1), that is, eight 1s. Translate it into binary instead and you get 101 + 011. The two 1s on the right give 0, carry 1. Next, 0 + 1 is 1; add the carried 1 and it rolls over to 0 again, carrying 1 to the left. Then you get 1 + 0, which is 1; plus the carry, the result is again 0 with a carry of 1 to the left. There are no more digits, so treat both values as 0: 0 + 0 plus the carried 1 is 1. There is nothing left to carry, so the calculation is done and you get 1000.
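If you want to watch that 101 + 011 example unfold column by column, here is a small trace helper (again just an illustrative sketch, with a made-up name `trace_add`):

```python
def trace_add(a: str, b: str) -> None:
    """Print each column of a binary addition, right to left."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)

    carry = 0
    for i, (bit_a, bit_b) in enumerate(zip(reversed(a), reversed(b)), start=1):
        total = int(bit_a) + int(bit_b) + carry
        print(f"column {i}: {bit_a} + {bit_b} + carry {carry} "
              f"-> write {total % 2}, carry {total // 2}")
        carry = total // 2
    if carry:
        print("final carry 1 becomes the new leftmost bit")


trace_add("101", "011")
# column 1: 1 + 1 + carry 0 -> write 0, carry 1
# column 2: 0 + 1 + carry 1 -> write 0, carry 1
# column 3: 1 + 0 + carry 1 -> write 0, carry 1
# final carry 1 becomes the new leftmost bit
```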
What you were thinking of may well have been considered many years ago, when the first computers were being developed, but adding numbers the binary way is more efficient. (Especially when it comes to very large numbers.)