Dynamic programming was developed by the U.S. mathematician Richard Bellman in the 1950s. Like greedy algorithms, it is used to solve optimization problems. But unlike the greedy approach, dynamic programming guarantees an optimal solution, provided the problem satisfies the principle of optimality discussed below.
A feasible solution is a solution that satisfies the constraints of the problem. When a problem has multiple feasible solutions with different costs, the solution with the minimum cost or maximum profit is called the optimal solution.
The cost metric depends on the problem. For a sorting problem, the cost metric may be the number of comparisons or the number of swaps. For matrix multiplication, it is the number of scalar multiplications. For the knapsack problem, it is the total profit earned.
- Dynamic programming is a powerful design technique for optimization problems. Here the word “programming” refers to planning or construction of a solution; it has no connection with computer programming.
- Divide and conquer divides a problem into small sub problems, solves the sub problems recursively, and merges their solutions to obtain the solution of the original large problem. Unlike divide and conquer, the sub problems in dynamic programming are not independent; they overlap with each other.
- Because divide and conquer treats sub problems as independent, repeated sub problems are solved multiple times. Dynamic programming saves each solution in a table, so when the same sub problem is encountered again, its solution is retrieved from the table. Dynamic programming is a bottom-up approach: it starts by solving the smallest possible problems and uses their solutions to build solutions of larger problems.
- The method is applicable only to problems that possess the principle of optimality.
- We must keep track of partial solutions.
- Dynamic programming is more complex and time-consuming than the greedy approach.
- Dynamic programming (DP) splits the large problem at every possible point. When a problem becomes sufficiently small, DP solves it directly.
- Dynamic programming is a bottom-up approach: it finds the solution of the smallest problem and constructs the solution of the larger problem from already solved smaller problems.
- To avoid recomputation of the same sub problem, DP saves the results of sub problems in a table. When the same problem is encountered again, the answer is retrieved from the table by a lookup procedure.
- The control abstraction for dynamic programming is shown below:
Algorithm DYNAMIC_PROGRAMMING(P)
    if solved(P) then
        return lookup(P)
    else
        Ans ← SOLVE(P)
        store(P, Ans)
        return Ans
end

Function SOLVE(P)
    if sufficiently_small(P) then
        return solution(P)    // Find solution for sufficiently small problem
    else
        Divide P into smaller sub problems P1, P2, …, Pn
        Ans1 ← DYNAMIC_PROGRAMMING(P1)
        Ans2 ← DYNAMIC_PROGRAMMING(P2)
        ...
        Ansn ← DYNAMIC_PROGRAMMING(Pn)
        return combine(Ans1, Ans2, …, Ansn)
end
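The control abstraction can be sketched in Python as follows. The memo dictionary plays the role of store/lookup; the Fibonacci recurrence is used here only as an illustrative "divide into sub problems and combine" step, not a problem from this section.

```python
# Minimal sketch of the control abstraction above.
# table ~ store/lookup; dp ~ DYNAMIC_PROGRAMMING; solve ~ SOLVE.
def make_dp_solver():
    table = {}                        # stores solved sub problem results

    def dp(p):
        if p in table:                # solved(P) -> lookup(P)
            return table[p]
        ans = solve(p)                # otherwise solve and store
        table[p] = ans
        return ans

    def solve(p):
        if p <= 1:                    # sufficiently small problem
            return p
        # divide into smaller sub problems and combine their answers
        return dp(p - 1) + dp(p - 2)

    return dp

fib = make_dp_solver()
print(fib(30))  # 832040
```

With the table, each sub problem is computed once; without it, the same recursion would take exponential time.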
Characteristics of Dynamic Programming
Dynamic programming works on the following principles:
- Characterize the structure of an optimal solution, i.e. build a mathematical model of the solution.
- Recursively define the value of an optimal solution.
- Using a bottom-up approach, compute the value of the optimal solution for each possible sub problem.
- Construct an optimal solution for the original problem using the information computed in the previous step.
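The four steps above can be illustrated with the longest common subsequence problem, which appears later in this section. Steps 1 and 2 give the recurrence, the nested loops are the bottom-up step 3, and the backtrack over the table is step 4. The function and variable names are illustrative only.

```python
def lcs(a, b):
    m, n = len(a), len(b)
    # Steps 1-2: c[i][j] = length of an LCS of a[:i] and b[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]
    # Step 3: fill the table bottom up, smallest prefixes first
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Step 4: reconstruct an optimal solution from the table
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))  # BCBA
```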
Applications of Dynamic Programming
Dynamic programming is used to solve optimization problems. It is used to solve many real-life problems, such as:
- Make a change problem
- Knapsack problem
- Optimal binary search tree
- Travelling salesman problem
- All pair shortest path problem
- Assembly line scheduling
- Multi stage graph problem
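As a taste of the applications listed above, the make-a-change problem asks for the fewest coins that sum to a given amount. A bottom-up sketch, with illustrative coin denominations:

```python
def min_coins(coins, amount):
    INF = float("inf")
    # best[a] = fewest coins needed to make amount a; best[0] = 0
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):          # solve smallest amounts first
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1   # use coin c on top of best[a - c]
    return best[amount] if best[amount] != INF else -1

print(min_coins([1, 4, 6], 8))  # 2 (4 + 4)
```

Note that a greedy choice (largest coin first) would pick 6 + 1 + 1 here, using three coins; the table guarantees the optimal two.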
Principle of Optimality
Principle of optimality: “In an optimal sequence of decisions or choices, each subsequence must also be optimal”.
- The principle of optimality is the heart of dynamic programming. It states that to find the optimal solution of the original problem, the solution of each sub problem must also be optimal. It is not possible to derive an optimal solution using dynamic programming if the problem does not possess the principle of optimality.
- The shortest path problem satisfies the principle of optimality. If A – X1 – X2 – … – Xn – B is the shortest path between nodes A and B, then any sub path from Xi to Xj must also be shortest. If there exist multiple paths from Xi to Xj and the selected one is not the minimum, then obviously the path from A to B cannot be shortest.
- For example, in Figure (a) the shortest path from A to C is A – B – C. There exist two paths from B to C: one is B – C and the other is B – E – D – C. But B – C is the shorter path, so we include it in the final solution.
- On the other hand, the longest path problem does not satisfy the principle of optimality. In Figure (b), the longest non-cyclic path from A to D is A – B – C – D. In this path, the sub path from B to C is just the edge joining B and C. However, B – C itself is not the longest path, because the longest non-cyclic path from B to C is B – A – E – D – C. Thus a sub path of a longest path need not itself be longest, which violates the principle of optimality.
Elements of Dynamic Programming
- Optimal substructure: “For the optimal solution of the problem, the solution of each sub problem must also be optimal”. Dynamic programming builds the optimal solution of the bigger problem from the optimal solutions of smaller sub problems. Hence we should consider only those sub problems whose solutions are optimal.
- Overlapping subproblems: When the big problem is divided into small sub problems, an exponential number of sub problems may be created, but only a polynomial number of them are distinct.
- The following figure shows the overlapping sub problems of the binomial coefficient. Dynamic programming saves each solution in the table, so no rework is done. When the sub problem C(n – 3, r – 2) is encountered again, its solution is retrieved from the table.
- Divide and conquer solves C(n – 3, r – 2) four times, whereas dynamic programming solves it only once. There may exist many sub problems with multiplicity greater than one.
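To make the overlap concrete, the sketch below counts recursive calls for C(n, r) = C(n − 1, r − 1) + C(n − 1, r) with and without a table. The choice of n = 20, r = 10 is illustrative.

```python
def binom_naive(n, r, calls):
    # Plain recursion: re-solves the same (n, r) pairs many times.
    calls[0] += 1
    if r == 0 or r == n:
        return 1
    return binom_naive(n - 1, r - 1, calls) + binom_naive(n - 1, r, calls)

def binom_dp(n, r, table, calls):
    # Same recurrence, but each distinct (n, r) is solved only once
    # and then retrieved from the table by lookup.
    calls[0] += 1
    if (n, r) in table:
        return table[(n, r)]
    if r == 0 or r == n:
        ans = 1
    else:
        ans = (binom_dp(n - 1, r - 1, table, calls)
               + binom_dp(n - 1, r, table, calls))
    table[(n, r)] = ans
    return ans

c1, c2 = [0], [0]
assert binom_naive(20, 10, c1) == binom_dp(20, 10, {}, c2) == 184756
print(c1[0], c2[0])   # naive makes vastly more calls than the table version
```

For C(20, 10) the naive version makes hundreds of thousands of calls, while the table version stays in the hundreds: the exponential recursion tree collapses to the polynomial number of distinct sub problems.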
Some Popular Problems Solved using Dynamic Programming
- Binomial Coefficient
- Making a Change
- Knapsack Problem
- Multistage Graph Problem
- Optimal Binary Search Tree
- Matrix Chain Multiplication
- Longest Common Subsequence
- Bellman-Ford (Single Source Shortest Path) Algorithm
- Floyd-Warshall (All Pair Shortest Path) Problem
- Assembly Line Scheduling
- Travelling Salesman Problem
- Flow Shop Scheduling