How do you teach a computer to do addition?

The problem is to teach the computer how to do addition. The computer knows about numbers: it "knows" that after 1 comes 2, after 2 comes 3, and so on. With this knowledge the computer can easily get the successor of any number.

The computer also knows that x+0=x and x+(y+1)=(x+1)+y . These axioms allow it to perform addition. For example, to add 5 and 3, the computer does the following: 5+3 = 5+(2+1) = (5+1)+2 = 6+2 = 6+(1+1) = (6+1)+1 = 7+1 = 8 .
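
A minimal sketch of this procedure in Python (the representation of the successor step and the function name are my own illustration, not something given in the question):

    def add(x, y):
        """Add two naturals using only x + 0 = x and x + (y + 1) = (x + 1) + y."""
        while y != 0:
            x = x + 1   # the successor step, the one operation the computer "knows"
            y = y - 1   # peel one successor off y
        return x

    print(add(5, 3))  # 8, reached after 3 successor steps -- O(N) in the value of y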

But adding numbers this way takes too long. The problem is to develop a program that can improve this addition method using the rules of mathematics and logic. The goal is for addition to run in O(log N) time rather than O(N), where N is the value of the numbers being added.

Does this problem have any scientific value? Is there an existing program that can do such things?

+4
6 answers

Automatic theorem provers that have no built-in knowledge of arithmetic do exactly what you propose: they try to reinvent it from the definition every time they need it. The result? It does not work very well. You can help these provers by supplying more general facts about arithmetic (either as axioms or, if you are rigorous, as lemmas proved separately). Examples: associativity, commutativity, ...

This is still not very satisfying: it seems there is always yet another intuitive fact that you have to supply to the tool for the particular proof you are interested in. For example, x > y => x >= y, "z is either odd or even", properties like this ...

To compensate for this problem, some automatic theorem provers have arithmetic built in. In that case the results are better. Simplify and Alt-Ergo are two examples of such provers.

+8

I do not think much intelligence is involved here. Your "computer" can remember the results of previous additions. Given infinite memory and a long enough training period, it will build a map (X, Y) → X + Y for any two numbers, which makes every addition O(1). No intelligence can beat that.
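
A hedged sketch of that idea (the dictionary standing in for the learned map is my own illustration):

    cache = {}

    def add(x, y):
        """Memoized addition: compute slowly once, then answer from the map in O(1)."""
        if (x, y) not in cache:
            result = x
            for _ in range(y):        # the slow successor-based method
                result += 1
            cache[(x, y)] = result
        return cache[(x, y)]

    print(add(5, 3))  # computed step by step on the first call
    print(add(5, 3))  # answered from the map on every later call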

+4

Well, consider this: computers work with binary numbers, so all calculations happen at the bit level. To compare two numbers, the computer first checks whether both are the same length and pads the shorter one with 0s on the left. Once both are the same length, it compares the bits from left to right. While the bits in a position are both 1 or both 0, the numbers are equal so far. At the first position where they differ, the number with the 0 is the smaller one and the other is the larger. This is how you determine the ordering of numbers.
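
A small sketch of the comparison just described, operating on bit strings (the function name and string representation are mine):

    def compare(a, b):
        """Compare binary strings; return -1, 0 or 1 as described above."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)     # pad the shorter with 0s on the left
        for bit_a, bit_b in zip(a, b):            # scan from left to right
            if bit_a != bit_b:
                return -1 if bit_a == '0' else 1  # the number with the 0 is smaller
        return 0

    print(compare('101', '11'))   # 1, since 5 > 3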

Now add two numbers. This time you start on the right side. If both bits are 0, the result bit is 0. If one is 1 and the other is 0, the result bit is 1. If both are 1, the result bit is 0 and a 1 is carried to the next position on the left. Move one position to the left and repeat, adding any carried 1 to that position of the result, which can in turn cause another 1 to be carried further left. The interesting part is that you only ever have to carry a single 1 to the left; in no case do you have to carry two.

And basically, this is how processors add two numbers.

When you start working with numbers larger than 0 and 1, you are just adding complexity to the same mathematical problem. And, considering your example, you have already broken it down into 1s: if you add 5 + 3, you split it into (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1), that is, eight 1s. Translate it into binary and you get 101 + 011. The two 1s on the right add to 0, carrying a 1. Then 0 + 1 is 1; add the carried 1 and it goes back to 0, carrying a 1 to the left. Then you get 1 + 0, which is 1; plus the carried 1 it is again 0, carrying a 1 to the left. There are no bits left, so assume both values are 0: 0 plus the carried 1 is 1. There are no more carries, so the calculation is done, and you get 1000.
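
A sketch of this carry procedure on bit strings (a simplified ripple-carry adder; the names are mine). Since a number N has about log2(N) bits, the loop runs in O(log N) steps, which is exactly the bound asked for in the question:

    def binary_add(a, b):
        """Ripple-carry addition of two binary strings, right to left."""
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry, bits = 0, []
        for bit_a, bit_b in zip(reversed(a), reversed(b)):
            total = int(bit_a) + int(bit_b) + carry
            bits.append(str(total % 2))   # the bit written at this position
            carry = total // 2            # at most a single 1 is carried left
        if carry:
            bits.append('1')
        return ''.join(reversed(bits))

    print(binary_add('101', '011'))  # '1000', i.e. 5 + 3 = 8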

What you are thinking about may well have been considered many years ago when the first computers were designed, but for adding numbers the binary approach is more efficient. (Especially when it comes to huge numbers.)

+1

I want to point out a problem: the computer has limited memory and limited time )))

0

O(log N) time tells me one thing: a binary search tree.

Treat the problem as in the first answer, but as you accumulate results, place them in a tree: look up x, then search x's subtree for x + y, creating new nodes as needed.

If you do not need to generate the tree on the fly from the axioms, you can build a balanced tree for your input data set ahead of time, and you're done.
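
A hedged sketch of the idea, flattening the two lookups into a single unbalanced tree keyed on the pair (x, y) for brevity (all names are my own illustration):

    class Node:
        """One node of a binary search tree of cached sums."""
        def __init__(self, key, value):
            self.key, self.value = key, value
            self.left = self.right = None

    root = None

    def cached_add(x, y):
        """Look the pair up in the tree; compute and insert it on a miss."""
        global root
        node, parent = root, None
        while node is not None:
            if (x, y) == node.key:
                return node.value              # cache hit
            parent = node
            node = node.left if (x, y) < node.key else node.right
        result = x
        for _ in range(y):                     # slow axiom-based addition on a miss
            result += 1
        new = Node((x, y), result)
        if parent is None:
            root = new
        elif (x, y) < parent.key:
            parent.left = new
        else:
            parent.right = new
        return result

    print(cached_add(5, 3))  # computed on the first call
    print(cached_add(5, 3))  # found in the tree afterwards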

0

Your assumptions about how computers do addition are completely wrong.

Computers store numbers in binary and do binary addition. If we have two 2-byte integers, store 5 in one and 3 in the other, and then add the two numbers, it looks like this:

    00000000 00000101
    00000000 00000011
    _________________
    00000000 00001000

Adding two 2-byte integers takes the same work regardless of whether you add 200 and 1143 or whether you add 5 and 3.

-2
