This is one face of dynamic programming, but there is more.
For a trivial example, take the Fibonacci numbers:
F(n) =
    n = 0: 0
    n = 1: 1
    else: F(n - 2) + F(n - 1)
We can call the above code "backtracking" or "recursion". Let's convert it to "backtracking with cache" or "recursion with memoization":
F(n) =
    n in Fcache: Fcache[n]
    n = 0: 0, and cache it as Fcache[0]
    n = 1: 1, and cache it as Fcache[1]
    else: F(n - 2) + F(n - 1), and cache it as Fcache[n]
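The memoized pseudocode above translates almost line for line into Python; here is one possible sketch, using a plain dictionary as the cache:

```python
# Recursion with memoization ("backtracking with cache").
# Fcache maps a state n to the already-computed answer for n.
Fcache = {}

def F(n):
    if n in Fcache:
        return Fcache[n]
    if n == 0:
        result = 0
    elif n == 1:
        result = 1
    else:
        result = F(n - 2) + F(n - 1)
    Fcache[n] = result  # cache before returning
    return result

print(F(10))  # 55
```

In practice the same effect can be had with `functools.lru_cache`, but the explicit dictionary makes the "backtracking with cache" structure visible.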
However, that is not all.
If a problem can be solved by dynamic programming, there is a directed acyclic graph of states with dependencies between them, and one particular state whose answer we want. There are also base states, for which we know the answer immediately.
We can traverse this graph from the vertex we are interested in to all of its dependencies, from those to all of their dependencies, and so on, stopping the descent at base states. This can be done with recursion.
A directed acyclic graph can be viewed as a partial order on its vertices. We can sort the graph topologically and visit the vertices in sorted order. Moreover, we can often find a simple total order that is consistent with this partial order.
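The topological-order idea can be sketched generically. In this illustration the tiny DAG and the combining rule (a state's value is its own base value plus the sum of its dependencies' values) are made up for the example; Python's standard `graphlib` module does the sorting:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# deps[v] = the states that v depends on; base states have no dependencies.
deps = {"a": [], "b": ["a"], "c": ["a", "b"], "d": ["b", "c"]}
base = {"a": 1}  # answers known immediately for base states

# static_order() yields each vertex only after all of its dependencies.
value = {}
for v in TopologicalSorter(deps).static_order():
    value[v] = base.get(v, 0) + sum(value[u] for u in deps[v])

print(value)  # every state is computed after its dependencies
```

Because every dependency is visited first, each state can be computed in one pass with no recursion and no cache lookups.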
Also note that the states often have some structure: for example, they can frequently be expressed as integers or tuples of integers. Then, instead of generic caching (say, an associative array storing state -> value pairs), we can preallocate an ordinary array, which is simpler and faster to use.
Let us return to our Fibonacci example. The partial order relation is: state n >= 2 depends on states n - 1 and n - 2. The base states are n = 0 and n = 1. A simple total order consistent with this partial order is the natural order: 0, 1, 2, ... Here is what we start with:
preallocate array F with indices 0 to n, inclusive
F[0] = 0
F[1] = 1
Well, we have an order in which to visit the states. Now, what does a "visit" mean? There are two possibilities:
(1) "Backward DP": when we visit state u, we look at all of its dependencies v and compute the answer for state u:
for u = 2, 3, ..., n:
    F[u] = F[u - 1] + F[u - 2]
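In Python the backward (pull) version might look like this; the function name is mine:

```python
def fib_backward(n):
    # "Backward DP": visiting state u, pull the answers from its
    # dependencies u - 1 and u - 2, which are already computed.
    F = [0] * (n + 1)  # preallocated array, indices 0..n; F[0] = 0
    if n >= 1:
        F[1] = 1
    for u in range(2, n + 1):
        F[u] = F[u - 1] + F[u - 2]
    return F[n]

print(fib_backward(10))  # 55
```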
(2) "Forward DP": when visiting state u, we look at all the states v that depend on u and account for u's value in each of them:
for u = 1, 2, 3, ..., n - 1:
    add F[u] to F[u + 1]
    add F[u] to F[u + 2]
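The forward (push) version can be sketched the same way; note the bounds check so that the last state does not push past the end of the array (the function name is again mine):

```python
def fib_forward(n):
    # "Forward DP": visiting state u, push its value into the states
    # u + 1 and u + 2 that depend on it.
    F = [0] * (n + 1)  # preallocated array, indices 0..n; F[0] = 0
    if n >= 1:
        F[1] = 1
    for u in range(1, n):
        F[u + 1] += F[u]
        if u + 2 <= n:  # u + 2 may not exist for u = n - 1
            F[u + 2] += F[u]
    return F[n]

print(fib_forward(10))  # 55
```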
Note that in the first case we still use the formula for the Fibonacci numbers directly, while in the second case the imperative code does not translate as easily into a mathematical formula. Nevertheless, in some problems the forward DP approach is more intuitive (I don't have a good example at hand; anyone care to suggest one?).
Another use of dynamic programming that is hard to express as backtracking is the following: Dijkstra's algorithm can also be viewed as DP. In the algorithm, we build a tree of optimal paths by adding vertices to it one by one. When we add a vertex, we use the fact that its entire path, except for the very last edge, is already known to be optimal. So we reuse the optimal solution to a subproblem, which is exactly what we do in DP. However, the order in which vertices are added to the tree is not known in advance.
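To make the DP view of Dijkstra concrete, here is a minimal sketch (the graph encoding as an adjacency dict of `(neighbor, weight)` pairs is my own choice). The "DP order" is exactly the order in which vertices are popped from the heap and finalized:

```python
import heapq

def dijkstra(adj, src):
    # adj[u] = list of (v, weight) edges out of u.
    # dist[u] is the answer to the subproblem "shortest path to u";
    # it becomes final the moment u is popped from the heap, so the
    # visiting order is discovered on the fly rather than fixed upfront.
    dist = {src: 0}
    done = set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue  # stale heap entry
        done.add(u)
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w  # reuse the optimal subproblem dist[u]
                heapq.heappush(heap, (d + w, v))
    return dist

print(dijkstra({"a": [("b", 1), ("c", 4)], "b": [("c", 2)]}, "a"))
```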