Dynamic programming and divide and conquer

I was reading notes on dynamic programming, and I came across the following comment.

If the subproblems are not independent, i.e. subproblems share subsubproblems, then a divide-and-conquer algorithm repeatedly solves the common subsubproblems. Thus, it does more work than necessary.

What does this mean? Can you give an example that makes the point understandable?

+7
1 answer

The author is referring to the fact that many divide-and-conquer algorithms have subproblems that overlap with one another. Consider, for example, this very simple Fibonacci implementation:

    int Fibonacci(int n) {
        if (n <= 1) return n;
        return Fibonacci(n - 1) + Fibonacci(n - 2);
    }

If you trace the calls made to compute Fibonacci(4), you get:

  • Fibonacci(4) calls Fibonacci(3) and Fibonacci(2)
  • Fibonacci(3) calls Fibonacci(2) and Fibonacci(1)
  • Fibonacci(2) calls Fibonacci(1) and Fibonacci(0)
  • Fibonacci(2) (the other one) calls Fibonacci(1) and Fibonacci(0)
  • Fibonacci(1) returns.
  • Fibonacci(1) returns.
  • Fibonacci(1) returns.
  • Fibonacci(0) returns.
  • Fibonacci(0) returns.

In other words, nine total function calls are made, but there are only five unique calls (Fibonacci of 0 through 4, inclusive). This algorithm could be made much more efficient if the results of recursive calls were shared across subproblems rather than recomputed from scratch every time. This is one of the key ideas of dynamic programming.
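One way to share those results is memoization: cache each value the first time it is computed so that every unique subproblem is solved only once. This is a minimal sketch of that idea, not code from the original answer; the helper name FibonacciMemo and the fixed cache size of 64 are my own choices here.

```c
#include <assert.h>
#include <string.h>

/* Recursive helper: looks up the cache before recursing, so each
 * Fibonacci(k) is computed at most once. cache[k] == -1 means
 * "not yet computed". */
static int FibonacciMemo(int n, int *cache) {
    if (n <= 1) return n;
    if (cache[n] != -1) return cache[n];        /* already solved */
    cache[n] = FibonacciMemo(n - 1, cache) + FibonacciMemo(n - 2, cache);
    return cache[n];
}

int Fibonacci(int n) {
    int cache[64];                              /* assumes n < 64 */
    assert(n >= 0 && n < 64);
    memset(cache, -1, sizeof cache);            /* all entries -> -1 */
    return FibonacciMemo(n, cache);
}
```

With this change, Fibonacci(4) still looks top-down like the trace above, but the second call to Fibonacci(2) is answered from the cache instead of being recomputed.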

To compute F(n) (the nth Fibonacci number), the code above makes a total of 2F(n+1) − 1 recursive calls. Since the Fibonacci numbers grow exponentially fast, this requires exponentially much work. However, you can exploit the fact that many of the recursive calls are identical. Instead of starting with Fibonacci(4) and working down, we start with Fibonacci(0) and work up. Specifically, we build a table (call it FTable) of length 5 and fill it in as follows:

  • FTable[0] = 0
  • FTable[1] = 1

Now suppose we want to compute FTable[2]. This requires knowing FTable[0] and FTable[1], but we already know them because they are in the table. So we can set

  • FTable[2] = 1

Using the same logic, we can compute FTable[3] from FTable[2] and FTable[1]:

  • FTable[3] = 2

And FTable[4] from FTable[2] and FTable[3]:

  • FTable[4] = 3

Notice how we avoided making all of those overlapping recursive calls simply by computing the values in the reverse order! This computes Fibonacci numbers in O(n) time, which is exponentially faster than before. Using more advanced math you can do even better, but this illustrates why dynamic programming can take intractable-looking problems and make them unexpectedly doable.

Hope this helps!

+13
