
                                               CONTENTS

 

List of topics we are going to cover in this article:

 

  • What is dynamic programming?
  • Where to use dynamic programming
  • Bottom up and Top Down
  • Time complexity
  • Performance
  • Examples
  • Conclusion 

                                                     DYNAMIC PROGRAMMING

  • What is dynamic programming:

Dynamic programming is similar to divide and conquer in that the problem is broken down into subproblems, but dynamic programming also breaks the subproblems down into smaller subproblems and stores their results to avoid unnecessary recomputation. Dynamic programming is a powerful technique that can solve many different types of problems.

The main difference between dynamic programming and divide and conquer is that dynamic programming is used when the subproblems overlap.

  • Examples:
  • Binary search (divide and conquer: the subproblems do not overlap)
  • Fibonacci series (dynamic programming: the subproblems overlap)
  • Where to use dynamic programming:

A problem can be solved by dynamic programming when it has the following two properties:

  • Overlapping subproblems
  • Optimal substructure
  • Overlapping subproblems:

Now let's see what happens when we calculate fib(5). Since 5 is not less than or equal to 1, we need to calculate fib(4) and fib(3). The interesting thing to observe here is that the fib() function is called for the same number multiple times.

For example, in the recursion tree the right subtree of fib(5) is identical to the left subtree of fib(4), and the right subtree of fib(4) is identical to the left subtree of fib(3). This property is called the overlapping subproblems property.

Clearly, we are doing a lot of unnecessary computation here. Before we see how to avoid it, let's try to figure out the time complexity of the fib() function.
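As an illustration, here is a minimal sketch of this plain recursive fib() in Python (the language choice here is ours, not the article's):

    def fib(n):
        # Base cases: fib(0) = 0, fib(1) = 1
        if n <= 1:
            return n
        # Each call spawns two further calls, so the same subproblems
        # (fib(3), fib(2), ...) are recomputed many times.
        return fib(n - 1) + fib(n - 2)

    print(fib(5))   # 5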

  • Optimal substructure:

A problem is said to have the optimal substructure property if an optimal solution to the problem can be obtained by combining optimal solutions to its subproblems.

For example, consider the shortest path problem:

If a node y lies on the shortest path from source node i to destination node j, then the shortest path from i to j is the combination of the shortest path from i to y and the shortest path from y to j.
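To make this concrete, here is a minimal Floyd-Warshall sketch in Python (one of the shortest path algorithms listed next); it relaxes dist[i][j] through every intermediate node y exactly as described above. The small example matrix is illustrative only:

    INF = float('inf')

    def floyd_warshall(dist):
        # dist[i][j] is the direct edge weight (INF if there is no edge);
        # after the loops, dist[i][j] is the shortest path cost from i to j.
        n = len(dist)
        for y in range(n):               # intermediate node y
            for i in range(n):
                for j in range(n):
                    # Optimal substructure: a shorter i -> j path may go
                    # through y as i -> y -> j.
                    if dist[i][y] + dist[y][j] < dist[i][j]:
                        dist[i][j] = dist[i][y] + dist[y][j]
        return dist

    graph = [[0,   3,   INF, 7],
             [8,   0,   2,   INF],
             [5,   INF, 0,   1],
             [2,   INF, INF, 0]]
    print(floyd_warshall(graph))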

  • Shortest path algorithms that rely on this property:
  • Bellman-Ford (single source shortest paths)
  • Floyd-Warshall (all pairs shortest paths)
  • The problems above can be solved by using 'memory':
  • Remember the results.
  • There is no need to solve the same subproblem again; just retrieve the result from memory.
  • There are two methods of storing the results in memory (both are sketched in code after the algorithm steps below):
  • Memoization (Top-down)
  • Tabulation (Bottom-up)
  • In the top-down approach, you start from the large problem and express its solution in terms of solutions to smaller subproblems, caching each result as it is computed.
  • In the bottom-up approach, you start by solving the smallest subproblems and build upward from them; in other words, you build the lookup table in a bottom-up fashion, beginning with the base cases.
  • Once the table is built, return the required entry.
  • This avoids solving the same subproblem more than once.
  • Algorithm (bottom-up Fibonacci):
  1. Begin by initializing the base values of 'i' in the lookup table.
  2. Next, run a loop that iterates over the remaining values of 'i'.
  3. At every iteration, f(n) updates the i-th entry of the lookup table.
  4. Finally, f(n) returns table[n].
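The two storage methods can be sketched as follows for the Fibonacci example (Python again; the dictionary cache and list-based table are our implementation choices):

    # Memoization (top-down): cache results as the recursion unwinds.
    def fib_memo(n, memo=None):
        if memo is None:
            memo = {}
        if n <= 1:
            return n
        if n not in memo:
            memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
        return memo[n]

    # Tabulation (bottom-up): fill the lookup table from the base cases up,
    # following the algorithm steps above.
    def fib_tab(n):
        if n <= 1:
            return n
        table = [0] * (n + 1)          # step 1: initialize base values
        table[1] = 1
        for i in range(2, n + 1):      # step 2: iterate over remaining i
            table[i] = table[i - 1] + table[i - 2]   # step 3: update entry i
        return table[n]                # step 4: return table[n]

    print(fib_memo(40), fib_tab(40))   # both print 102334155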
  • Time complexity:

The time complexity of the plain recursive solution (the one with overlapping subproblems) satisfies

T(n) = T(n-1) + T(n-2) + O(1)

where T(n <= 1) = O(1).

Since every call branches into two further calls, the recursion tree has exponentially many nodes, so

          T(n) = O(2^n)

  • Performance:

Time taken to calculate the 40th Fibonacci number:

  • Recursive: 14 seconds
  • Memoization: 0.17 seconds
  • Tabulation: 0.30 seconds
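These figures depend on the machine and language; a rough way to reproduce the comparison with the Python sketches above (using the standard time module) is:

    import time

    for name, fn in [("recursive", fib),
                     ("memoization", fib_memo),
                     ("tabulation", fib_tab)]:
        start = time.perf_counter()
        result = fn(40)
        elapsed = time.perf_counter() - start
        print(f"{name}: fib(40) = {result} in {elapsed:.4f} s")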
  • Examples:
  • Travelling salesman problem
  • Tower of Hanoi
  • Knapsack problem
  • All pair shortest path
  • These problems can be solved using the dynamic programming technique.
  • Travelling salesman problem:

In this problem, a set of cities is given along with the distance between every pair of cities. The task is to find the shortest possible route that visits every city exactly once and returns to the starting point.

  • Solution (naive approach, sketched in code after the route list below):
  • Let us consider city 1 as the starting and ending point.
  • Generate all (n-1)! permutations of the remaining cities.
  • Calculate the cost of every permutation and keep track of the minimum-cost permutation.
  • Return the permutation with the minimum cost.

Suppose the distances between the cities are d(1,2)=10, d(1,3)=15, d(1,4)=20, d(2,3)=35, d(2,4)=25 and d(3,4)=30 (these are the values used in the calculations below). Then the cheapest routes are:

1-2-4-3-1      80

1-3-4-2-1      80

These two routes are the shortest tours that visit each city exactly once and return to the start. This is essentially the naive (brute force) approach.
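A minimal Python sketch of this brute force (itertools.permutations generates the (n-1)! orderings; the distance matrix is the one stated above):

    from itertools import permutations

    # Symmetric distances for cities 1..4, matching the example values.
    dist = {1: {2: 10, 3: 15, 4: 20},
            2: {1: 10, 3: 35, 4: 25},
            3: {1: 15, 2: 35, 4: 30},
            4: {1: 20, 2: 25, 3: 30}}

    def tsp_naive(start=1):
        cities = [c for c in dist if c != start]
        best_cost, best_tour = float('inf'), None
        for perm in permutations(cities):              # all (n-1)! orderings
            tour = (start,) + perm + (start,)
            cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if cost < best_cost:
                best_cost, best_tour = cost, tour
        return best_cost, best_tour

    print(tsp_naive())   # (80, (1, 2, 4, 3, 1)), tied with 1-3-4-2-1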

  • Now we will see how to solve the same problem using dynamic programming.

S: the subset of cities that has not yet been visited

dist(i, 1): the distance from city i back to city 1

C(i, S): the cost of the cheapest path that starts at city i, visits every city in S exactly once, and then returns to city 1

If S is empty, then C(i, phi) = dist(i, 1).

If S is non-empty, then C(i, S) = min{ dist(i, j) + C(j, S - {j}) } over all j in S.

Let's start from node 1. The base cases are the direct returns to node 1:

C(4, phi) = d(4,1) = 20      C(3, phi) = d(3,1) = 15      C(2, phi) = d(2,1) = 10

C(2,{3}) = d(2,3)+C(3,phi)=50

C(3,{2}) = d(3,2)+C(2,phi)=45

C(4,{2}) = d(4,2)+C(2,phi)=35

C(2,{4}) = d(2,4)+C(4,phi)=45

C(3,{4}) = d(3,4)+C(4,phi)=50

C(4,{3}) = d(4,3)+C(3,phi)=45

C(2,{3,4}) = min(d(2,3)+C(3,{4}), d(2,4)+C(4,{3})) = min(85, 70) = 70

C(3,{2,4}) = min(d(3,2)+C(2,{4}), d(3,4)+C(4,{2})) = min(80, 65) = 65

C(4,{2,3}) = min(d(4,2)+C(2,{3}), d(4,3)+C(3,{2})) = min(75, 75) = 75

Finally, we obtain

C(1,{2,3,4}) = min(d(1,2)+C(2,{3,4}), d(1,3)+C(3,{2,4}), d(1,4)+C(4,{2,3}))

                     = min(80, 80, 95)

                     = 80

So an optimal tour has length 80.

Optimal tour: 1-3-4-2-1 (its reverse, 1-2-4-3-1, has the same length).
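A compact Python sketch of this formulation, following the C(i, S) recurrence above (frozenset subsets and functools.lru_cache are our implementation choices; the distances are again the example values):

    from functools import lru_cache

    dist = {1: {1: 0, 2: 10, 3: 15, 4: 20},
            2: {1: 10, 2: 0, 3: 35, 4: 25},
            3: {1: 15, 2: 35, 3: 0, 4: 30},
            4: {1: 20, 2: 25, 3: 30, 4: 0}}

    @lru_cache(maxsize=None)
    def C(i, S):
        # Cheapest path: start at i, visit every city in S exactly once,
        # then return to city 1.
        if not S:                         # base case: C(i, phi) = dist(i, 1)
            return dist[i][1]
        return min(dist[i][j] + C(j, S - {j}) for j in S)

    print(C(1, frozenset({2, 3, 4})))     # 80, the optimal tour length

Recovering the tour itself would additionally require remembering which city j achieved each minimum.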

  • Conclusion:

Dynamic programming is a powerful technique for solving certain kinds of problems. The idea is simple: we break the problem down into subproblems, break those into smaller subproblems, and store their results so that the same subproblem never has to be solved again.
