
I know that dynamic programming can help reduce the time complexity of a recursive algorithm. Are there general conditions such that, if a recursive algorithm satisfies them, applying dynamic programming is guaranteed to reduce its time complexity? When should I use dynamic programming?

Anonymous

2 Answers


Dynamic programming is useful if your recursive algorithm finds itself reaching the same situations (input parameters) many times. There is a general transformation from recursive algorithms to dynamic programming known as memoization, in which there is a table storing all results ever calculated by your recursive procedure. When the recursive procedure is called on a set of inputs that has already been used, the result is simply fetched from the table. This reduces recursive Fibonacci to iterative Fibonacci.
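A minimal sketch of that transformation in Python (the `memoize` decorator and the Fibonacci example are my own illustration, not taken from any particular library):

```python
from functools import wraps

def memoize(f):
    """Wrap a function with a table of all results it has ever computed."""
    table = {}  # maps argument tuples to previously computed results

    @wraps(f)
    def wrapper(*args):
        if args not in table:       # first time we reach this situation:
            table[args] = f(*args)  # compute and store the result
        return table[args]          # repeat visits just fetch it
    return wrapper

@memoize
def fib(n):
    # The naive recursive definition; with @memoize, each fib(k) is
    # computed only once, so fib(n) needs only linearly many calls.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```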

Dynamic programming can be even smarter, applying more specific optimizations. For example, sometimes there is no need to store the entire table in memory at any given time.
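For Fibonacci, for instance, only the two most recently computed entries are ever needed again, so the whole table can shrink to two variables. A minimal sketch (my own, under that assumption):

```python
def fib_iterative(n):
    """Bottom-up Fibonacci that keeps only the last two values: O(1) space."""
    a, b = 0, 1          # invariant: a = f(i), b = f(i + 1)
    for _ in range(n):
        a, b = b, a + b  # slide the two-value window one step forward
    return a             # after n steps, a = f(n)
```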

Yuval Filmus

If you just seek to speed up your recursive algorithm, memoisation might be enough. This is the technique of storing results of function calls so that future calls with the same parameters can just reuse the result (see the sketch after the list below). This is applicable if (and only if) your function

  • does not have side effects and
  • depends only on its parameters (i.e. not on some external state).
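To illustrate the two conditions, here is a hypothetical sketch (both functions are my own examples): the first may be memoised safely, the second may not.

```python
def pure(x, y):
    # Safe to memoise: no side effects, and the result is fully
    # determined by the parameters x and y.
    return x * x + y

counter = 0

def stateful(x):
    # Not safe to memoise: it mutates hidden state (a side effect),
    # and its result depends on that state, not only on x.
    global counter
    counter += 1
    return x + counter
```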

It will save you time if (and only if) the function is called with the same parameters over and over again. A popular example is the recursive definition of the Fibonacci numbers, that is

$\qquad \begin{align} f(0) &= 0 \\ f(1) &= 1 \\ f(n+2) &= f(n+1) + f(n) \qquad ,\ n \geq 0 \end{align}$

When evaluated naively, $f$ is called exponentially often. With memoisation, by the time $f(n+2)$ needs $f(n)$, that value has already been computed (during the evaluation of $f(n+1)$), so only a linear number of calls remains.
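To make the exponential-versus-linear contrast concrete, here is a hedged sketch that counts calls; `functools.lru_cache` is Python's built-in memoisation decorator, and the counters are my own instrumentation:

```python
from functools import lru_cache

naive_calls = 0

def fib_naive(n):
    global naive_calls
    naive_calls += 1  # count every invocation
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

memo_calls = 0

@lru_cache(maxsize=None)  # memoise: repeated arguments hit the cache
def fib_memo(n):
    global memo_calls
    memo_calls += 1  # counts only cache misses, i.e. actual evaluations
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

fib_naive(25)
fib_memo(25)
print(naive_calls)  # 242785 -- grows exponentially with n
print(memo_calls)   # 26     -- one evaluation per distinct argument 0..25
```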

Note that, in contrast, memoisation is next to useless for algorithms like merge sort: usually few (if any) partial lists are identical, and equality checks are expensive (sorting is only slightly more costly!).

In practical implementations, how you store results is of great import to performance. Using hash tables may be the obvious choice, but might break locality. If your parameters are non-negative integers, arrays are a natural choice but may cause huge memory overhead if you use only some entries. Therefore, memoisation is a tradeoff between effect and cost; whether it pays off depends on your specific scenario.
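A hedged sketch of the two storage choices for a function of one non-negative integer parameter (the names and the bound `N` are mine):

```python
# Hash-table-backed memo: accepts arbitrary hashable keys, but pays
# hashing costs and may scatter entries across memory (poor locality).
memo_dict = {}

# Array-backed memo for parameters in 0..N: contiguous and cache-friendly,
# but all N + 1 slots are allocated even if only a few are ever filled.
N = 100
memo_array = [None] * (N + 1)

def fib_array(n):
    # Fibonacci memoised in the preallocated array; assumes 0 <= n <= N.
    if memo_array[n] is None:
        memo_array[n] = n if n < 2 else fib_array(n - 1) + fib_array(n - 2)
    return memo_array[n]
```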


Dynamic programming is an entirely different beast. It is applicable to problems with the property that

  • it can be partitioned into subproblems (possibly in more than one way),
  • those subproblems can be solved independently,
  • (optimal) solutions of those subproblems can be combined into (optimal) solutions of the original problem and
  • subproblems have the same property (or are trivial).

This is usually (implicitly) implied when people invoke Bellman's Principle of Optimality.

Now, this only describes a class of problems that can be expressed by a certain kind of recursion. Evaluation of those is (often) efficient because memoisation can be applied to great effect (see above); usually, smaller subproblems occur as parts of many larger problems. Popular examples include edit distance and the Bellman-Ford algorithm.
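As one concrete instance, here is a minimal bottom-up sketch of edit distance (the standard Levenshtein recurrence; the implementation details are my own):

```python
def edit_distance(s, t):
    """Levenshtein distance via bottom-up dynamic programming.

    d[i][j] holds the distance between the first i characters of s and
    the first j characters of t; each entry combines (optimal) solutions
    of three smaller subproblems.
    """
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # turn a prefix of s into "" by i deletions
    for j in range(n + 1):
        d[0][j] = j  # turn "" into a prefix of t by j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete s[i-1]
                          d[i][j - 1] + 1,         # insert t[j-1]
                          d[i - 1][j - 1] + cost)  # match or substitute
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```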

Raphael