What is Dynamic Programming?
- Dynamic programming (also known as dynamic optimization) is a method for solving a complex problem by breaking it down into a collection of simpler sub-problems, solving each of those sub-problems just once, and storing their solutions. The next time the same sub-problem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space. (Each of the sub-problem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup.) The technique of storing solutions to sub-problems instead of recomputing them is called "memoization" (not memorization).
(“Programming” in this context refers to a tabular method, not to writing computer code.)
The dynamic programming approach is similar to divide and conquer in that it breaks the problem down into smaller and smaller sub-problems. But unlike divide and conquer, these sub-problems are not solved independently. Rather, the results of these smaller sub-problems are remembered and reused for similar or overlapping sub-problems.
Dynamic programming is used for problems that can be divided into similar sub-problems whose results can be re-used. Mostly, these algorithms are used for optimization. Before solving the sub-problem at hand, a dynamic algorithm examines the results of previously solved sub-problems. The solutions of the sub-problems are then combined to achieve the best overall solution.
So we can say that −
- The problem can be divided into smaller, overlapping sub-problems.
- An optimal solution can be constructed from optimal solutions of the smaller sub-problems.
- Dynamic algorithms use memoization (not memorization).
When developing a dynamic-programming algorithm, we follow a sequence of
four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.
Steps 1–3 form the basis of a dynamic-programming solution to a problem. If we need only the value of an optimal solution, and not the solution itself, then we can omit step 4. (The knapsack sketch at the end of this article walks through all four steps, including step 4.)
FIBONACCI SERIES EXAMPLE:
Here is a naïve implementation, in Python, of a function finding the nth member of the Fibonacci sequence, based directly on the mathematical definition:
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times:
- fib(5)
- fib(4) + fib(3)
- (fib(3) + fib(2)) + (fib(2) + fib(1))
- ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
- (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
Above, fib(2) is calculated three times. For larger n this redundant recomputation makes the naive recursion take exponential time.
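One way to see the redundancy concretely is to instrument the naive function with a call counter. A minimal sketch (the calls counter is an illustrative addition, not part of the original definition):

from collections import Counter

calls = Counter()  # tally of fib invocations per argument

def fib(n):
    calls[n] += 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
print(calls[2], calls[1])  # 3 5 -- fib(2) runs three times, fib(1) five times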
The following memoized function requires only O(n) time instead of exponential time (but requires O(n) space):
m = {0: 0, 1: 1}  # map of already-computed values: fib(0) = 0, fib(1) = 1

def fib(n):
    if n not in m:
        m[n] = fib(n - 1) + fib(n - 2)
    return m[n]
This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values.
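In Python specifically, the same top-down memoization is available from the standard library: functools.lru_cache caches results keyed by the arguments, playing the role of the map m above. A brief sketch:

from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: every computed fib(n) is stored
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, computed with only O(n) recursive calls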
In the bottom-up approach, we calculate the smaller values of fib first, then build larger values from them. This method also uses O(n) time since it contains a loop that repeats n − 1 times, but it only takes constant (O(1)) space, in contrast to the top-down approach which requires O(n) space to store the map.
def fib(n):
    if n == 0:
        return 0
    previous_fib, current_fib = 0, 1
    for _ in range(n - 1):  # loop is skipped if n = 1
        previous_fib, current_fib = current_fib, previous_fib + current_fib
    return current_fib
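A quick check of the first few values using the bottom-up version above:

for i in range(8):
    print(fib(i), end=" ")  # prints: 0 1 1 2 3 5 8 13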
TOWER OF HANOI EXAMPLE:
[Image: A model set of the Tower of Hanoi with 8 disks (Source: Wikipedia)]
[Image: Animated solution of the Tower of Hanoi (Source: Wikipedia)]
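The source does not spell this example out, so the following is a minimal sketch under standard assumptions (three pegs, n disks; the names hanoi and min_moves are illustrative). The recursive solution moves n − 1 disks to the spare peg, moves the largest disk, then moves the n − 1 disks back on top. Memoizing the move-count recurrence T(n) = 2·T(n − 1) + 1 gives the dynamic-programming view, where each sub-problem size is solved only once:

from functools import lru_cache

def hanoi(n, source, target, spare):
    """Print the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # park the n-1 smaller disks on the spare peg
    print(f"move disk {n}: {source} -> {target}")
    hanoi(n - 1, spare, target, source)   # bring the n-1 disks back on top

@lru_cache(maxsize=None)
def min_moves(n):
    # T(n) = 2*T(n-1) + 1; memoization solves each disk count once (closed form: 2**n - 1)
    return 0 if n == 0 else 2 * min_moves(n - 1) + 1

hanoi(3, "A", "C", "B")   # prints the 7 moves for 3 disks
print(min_moves(8))       # 255 -- minimum moves for the 8-disk set pictured above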
In contrast to greedy algorithms, which optimize locally, dynamic algorithms aim for an overall (global) optimization of the problem.
In contrast to divide and conquer algorithms, where solutions are combined to achieve an overall solution, dynamic algorithms use the output of a smaller sub-problem and then try to optimize a bigger sub-problem. Dynamic algorithms use memoization to remember the output of already solved sub-problems.
The following computer problems can be solved using the dynamic programming approach −
- Fibonacci number series
- Knapsack problem (see the sketch after this list)
- Tower of Hanoi
- All-pairs shortest paths by Floyd–Warshall
- Single-source shortest path by Dijkstra
- Project scheduling
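Of these, the knapsack problem makes a good closing illustration of the four steps listed earlier, since it also exercises step 4 (reconstructing the solution, not just its value). A minimal 0/1-knapsack sketch; the function name and the example data are illustrative, not from the source:

def knapsack(values, weights, capacity):
    """0/1 knapsack via the four dynamic-programming steps."""
    n = len(values)
    # Steps 1-2: best[i][c] = best value using the first i items within capacity c;
    # best[i][c] = max(best[i-1][c], best[i-1][c - weights[i-1]] + values[i-1]).
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    # Step 3: compute the values bottom-up, filling the table row by row.
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if weights[i - 1] <= c:
                best[i][c] = max(best[i][c],
                                 best[i - 1][c - weights[i - 1]] + values[i - 1])
    # Step 4: walk the table backwards to reconstruct which items were taken.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:  # item i-1 was part of the optimum
            chosen.append(i - 1)
            c -= weights[i - 1]
    return best[n][capacity], sorted(chosen)

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # (220, [1, 2])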
Source: TutorialsPoint & Wikipedia