Some computational problems can be described in such a way that the problem divides into smaller subproblems, and those subproblems are solved using the same method as the original problem. If a subproblem is just a smaller instance of the original problem, then it can in turn be broken down even further.
Eventually the problem becomes so small that it can be solved directly, without any further decomposition. This is called the base case. From the solution to the base case you can build up the solution to the larger problem.
Say you want to compute a^b, where a and b are positive integers. You can see that this is the same as a * a^(b-1). That is, a^(b-1) is a smaller problem than the original, but one that is solved with the same technique as the original problem. To solve a^(b-1), you see that it is a * a^(b-2), and so on.
In the end, you get a * a * a * ... * a^(b-b). And we know that b - b = 0 and a^0 = 1. Therefore, we do not need to expand this last factor, because we already know its value. Ultimately, a^b = a * a * a * ... * 1.
So 2^4 = 2 * 2^3 = 2 * 2 * 2^2 = 2 * 2 * 2 * 2^1 = 2 * 2 * 2 * 2 * 2^0 = 2 * 2 * 2 * 2 * 1 = 16.
To write this as a program, first handle the base case, and then use recursion to handle the rest.
pow(a, b) {
    if (b == 0) {
        return 1;                   // base case: a^0 = 1
    } else {
        return a * pow(a, b - 1);   // recursive step: a^b = a * a^(b-1)
    }
}
It is important to note that this only illustrates the basic idea of recursion. Some recursive examples you will see in other answers, such as the naive Fibonacci function, are very inefficient. You can write much more efficient programs using dynamic-programming techniques, which use recursion as one of their mechanisms for solving a problem.