Dynamic Programming | Algorithm Notes | B.Tech

Dynamic Programming: Algorithm Class Notes

Updated: Oct 22, 2022

Mobiprep has created last-minute notes for all topics of Algorithm to help you with the revision of concepts for your university examinations. So let’s get started with the lecture notes on Algorithm.

Our team has curated a list of the most important questions asked in universities such as DU, DTU, VIT, SRM, IP, Pune University, Manipal University, and many more. The questions are created from the previous year's question papers of colleges and universities.



Dynamic Programming


Question- 1) What are the elements and characteristics of dynamic programming?

Answer: Elements:

  • Sub-structure: The given problem is divided into simpler sub-problems. The sub-problems are called substructures.

  • Table-structure: After each sub-problem is solved, the results must be stored in a table. The table is used to store the results of the sub-problems which might be used multiple times in the future. This eliminates the necessity of computing the same sub-problem multiple times.

  • Bottom-up Computation: Finally, the solutions of the simpler sub-problems are combined to get the solution of the original problem. The results of the sub-problems are taken from the table.

Characteristics:

  • Optimal substructure: The principle of optimality states that an optimal solution to a complex problem contains optimal solutions to its sub-problems. In other words, an optimal solution to a problem can be constructed from the optimal solutions to its sub-problems. A problem is said to have optimal substructure when this property holds, and dynamic programming is applicable only to such problems.

  • Overlapping subproblems: In dynamic programming, a complex problem is divided into smaller sub-problems, and the same sub-problems have to be solved again and again. Their solutions are stored in a table, so redundant computation is avoided. Dynamic programming is useful only when sub-problems overlap; if the sub-problems do not overlap, nothing is gained by storing their solutions, and dynamic programming is not used. (A small sketch illustrating these ideas follows this list.)
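
For illustration, here is a minimal C sketch (not part of the original notes) showing the three elements, sub-structure, table-structure and bottom-up computation, on a small problem: computing the binomial coefficient C(n, k) from the sub-structure C(n, k) = C(n-1, k-1) + C(n-1, k).

#include <stdio.h>

// Bottom-up computation of C(n, k). The 2-D table c[][] stores the result
// of every sub-problem exactly once, so overlapping sub-problems such as
// C(n-1, k) are never recomputed.
int binomial(int n, int k)
{
    int c[n + 1][k + 1];
    for (int i = 0; i <= n; i++) {
        for (int j = 0; j <= k && j <= i; j++) {
            if (j == 0 || j == i)
                c[i][j] = 1;                              // base cases
            else
                c[i][j] = c[i - 1][j - 1] + c[i - 1][j];  // reuse stored results
        }
    }
    return c[n][k];
}

int main(void)
{
    printf("C(5, 2) = %d\n", binomial(5, 2));   // prints C(5, 2) = 10
    return 0;
}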


 

Question- 2) Differentiate between divide and conquer method and dynamic programming.

Answer:

  • Sub-problems: Divide and conquer splits a complex problem into simpler non-overlapping sub-problems, whereas dynamic programming splits it into simpler overlapping sub-problems.

  • Dependence: In divide and conquer the sub-problems are independent of one another; in dynamic programming the sub-problems depend on the results of smaller sub-problems.

  • Approach: Divide and conquer is naturally recursive (top-down), whereas dynamic programming typically fills a table of results bottom-up.

  • Storage and efficiency: Divide and conquer does not store sub-problem solutions, so overlapping work may be repeated; dynamic programming stores the solution of every sub-problem and reuses it, which makes it more efficient when sub-problems overlap.

  • Examples: Divide and conquer: binary search, merge sort. Dynamic programming: 0-1 knapsack problem, matrix chain multiplication.



 

Question- 3) Explain the Fibonacci sequence.

Answer: The Fibonacci sequence is an infinite series of numbers in which each number is the sum of the two preceding numbers. The first 2 elements of the series are 0 and 1.

The mathematical recurrence that describes the Fibonacci series is

x_n = x_(n-1) + x_(n-2), with x_1 = 0 and x_2 = 1

The Fibonacci series is given below:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ….

C Code:

#include <stdio.h>

int main()
{
    int count, first_term = 0, second_term = 1, next_term, i;
    printf("Enter the number of terms:\n");
    scanf("%d", &count);
    printf("First %d terms of Fibonacci series:\n", count);
    for (i = 0; i < count; i++)
    {
        if (i <= 1)
            next_term = i;
        else
        {
            next_term = first_term + second_term;
            first_term = second_term;
            second_term = next_term;
        }
        printf("%d ", next_term);
    }
    return 0;
}

Output


Enter the number of terms: 8
First 8 terms of Fibonacci series:
0 1 1 2 3 5 8 13
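
The program above is iterative. As a complement, here is a minimal sketch (not part of the original notes) of the same series computed in the dynamic programming style, where the overlapping sub-problems fib(n-1) and fib(n-2) are memoized in a table:

#include <stdio.h>

#define MAX 50
long long memo[MAX];            // memo[i] == 0 means "not computed yet"
                                // (valid here because fib(n) > 0 for n >= 2)

long long fib(int n)
{
    if (n <= 1)
        return n;               // base cases: fib(0) = 0, fib(1) = 1
    if (memo[n] != 0)
        return memo[n];         // reuse a previously stored result
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void)
{
    for (int i = 0; i < 8; i++)
        printf("%lld ", fib(i));    // prints 0 1 1 2 3 5 8 13
    printf("\n");
    return 0;
}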


 

Question- 4) Explain Matrix Chain multiplication with example.

Answer: To compute the product of a chain of matrices efficiently, the number of scalar multiplications performed must be minimized. To do this, we must decide the order in which the products are evaluated, i.e. how the product is parenthesized.

Because matrix multiplication is associative, the order in which the matrices are multiplied does not affect the final result. However, the way the product is parenthesized does affect the number of arithmetic operations needed to compute it.


Example:

Let A be a 15x6 matrix, B be a 6x20 matrix and C be a 20x20 matrix. Find the product of the three matrices.

(AB)C = (15x6x20) + (15x20x20) = 7800 operations

A(BC) = (15x6x20) + (6x20x20) = 4200 operations

Of the two parenthesizations, the second one involves fewer arithmetic operations, so A(BC) is the more efficient order.


Dynamic programming approach:

Parentheses are placed at every possible position and the cost of each parenthesization is calculated. Each parenthesization splits the chain into smaller sub-chains (substructures), which are solved recursively; in the dynamic programming approach, the result of each sub-chain is stored so that it is never computed more than once. The recursive formulation is given below, followed by a tabulated sketch.


C Code:

#include <limits.h>
#include <stdio.h>

// Recursive computation of the minimum number of scalar multiplications
// needed for the chain A_i ... A_j, where matrix A_k has dimensions p[k-1] x p[k]
int MatrixChainOrder(int p[], int i, int j)
{
    if (i == j)
        return 0;
    int k;
    int min = INT_MAX;
    int count;

    // place the split at every possible position k and keep the cheapest
    for (k = i; k < j; k++) {
        count = MatrixChainOrder(p, i, k)
              + MatrixChainOrder(p, k + 1, j)
              + p[i - 1] * p[k] * p[j];

        if (count < min)
            min = count;
    }

    // Return minimum count
    return min;
}

int main()
{
    int arr[] = { 1, 2, 3, 4, 3 };
    int n = sizeof(arr) / sizeof(arr[0]);

    printf("Minimum number of multiplications is %d ",
           MatrixChainOrder(arr, 1, n - 1));

    getchar();
    return 0;
}
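
The recursive code above re-solves the same sub-chains many times. Below is a bottom-up (tabulated) sketch of the same computation; it assumes the same dimension array, and the function name matrixChainDP is illustrative rather than taken from the notes:

#include <limits.h>
#include <stdio.h>

// m[i][j] stores the minimum multiplication count for the chain A_i ... A_j,
// so no sub-chain is ever solved twice. n is the number of matrices.
int matrixChainDP(int p[], int n)
{
    int m[n + 1][n + 1];
    for (int i = 1; i <= n; i++)
        m[i][i] = 0;                     // a single matrix needs no multiplication

    for (int len = 2; len <= n; len++) {             // chain length
        for (int i = 1; i <= n - len + 1; i++) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {            // split point
                int cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (cost < m[i][j])
                    m[i][j] = cost;
            }
        }
    }
    return m[1][n];
}

int main(void)
{
    int p[] = { 1, 2, 3, 4, 3 };                     // same dimensions as above
    int n = sizeof(p) / sizeof(p[0]) - 1;
    printf("Minimum number of multiplications is %d\n", matrixChainDP(p, n));
    return 0;
}

For the dimension array { 1, 2, 3, 4, 3 }, both versions report 30 multiplications.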

 

Question- 5) Explain Longest common subsequence.

Answer: A subsequence is a sequence of characters that occurs in a string in the same relative order, but not necessarily contiguously.

Longest Common Subsequence

Given two strings s1 and s2 of lengths m and n respectively, find the length of the longest subsequence that appears in both strings.


Example:

S1 = 'NIGHTINGALE'

S2 = 'ABGTNOWE'

The longest common subsequence in both S1 and S2 is 'GTNE'. So, the length of the longest common subsequence is 4.


Dynamic Programming Approach


The last characters of the two strings are compared:

  1. If s1[m-1] == s2[n-1], then LCS(s1[0:m-1], s2[0:n-1]) = 1 + LCS(s1[0:m-2], s2[0:n-2]).

  2. If s1[m-1] != s2[n-1], then LCS(s1[0:m-1], s2[0:n-1]) = max( LCS(s1[0:m-2], s2[0:n-1]), LCS(s1[0:m-1], s2[0:n-2]) ).


C Code:

#include <stdio.h>
#include <string.h>

int i, j, m, n, c[20][20];
char x[20], y[20], b[20][20];

// reconstruct the LCS by following the direction markers stored in b[][]
void print(int i, int j)
{
    if (i == 0 || j == 0)
        return;
    if (b[i][j] == 'c')          // 'c' = diagonal move (characters matched)
    {
        print(i - 1, j - 1);
        printf("%c", x[i - 1]);
    }
    else if (b[i][j] == 'u')     // 'u' = move up
        print(i - 1, j);
    else                         // 'l' = move left
        print(i, j - 1);
}

void lcs()
{
    m = strlen(x);
    n = strlen(y);
    for (i = 0; i <= m; i++)
        c[i][0] = 0;
    for (i = 0; i <= n; i++)
        c[0][i] = 0;

    // 'c', 'u' and 'l' denote the diagonal (cross), upward and leftward directions
    for (i = 1; i <= m; i++)
        for (j = 1; j <= n; j++)
        {
            if (x[i - 1] == y[j - 1])
            {
                c[i][j] = c[i - 1][j - 1] + 1;
                b[i][j] = 'c';
            }
            else if (c[i - 1][j] >= c[i][j - 1])
            {
                c[i][j] = c[i - 1][j];
                b[i][j] = 'u';
            }
            else
            {
                c[i][j] = c[i][j - 1];
                b[i][j] = 'l';
            }
        }
}

int main()
{
    printf("Enter 1st sequence:");
    scanf("%s", x);
    printf("Enter 2nd sequence:");
    scanf("%s", y);
    printf("\nThe Longest Common Subsequence is ");
    lcs();
    print(m, n);
    return 0;
}

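With the example strings from this question, a sample run of the program above produces:

Enter 1st sequence:NIGHTINGALE
Enter 2nd sequence:ABGTNOWE

The Longest Common Subsequence is GTNE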
 


Question- 6) Explain 0/1 knapsack problem using dynamic programming approach.

Answer: 0-1 Knapsack Problem

Two arrays, value[] and weight[], each of size n, are given. The aim is to choose items whose total weight does not exceed the knapsack capacity W while maximizing the total value of the chosen items. Note that in the 0-1 knapsack problem an item cannot be broken: it is either taken whole or left out.

Solution using Dynamic Programming approach

The 0-1 knapsack problem can be solved with dynamic programming. A table T is built in a bottom-up manner, with one row for each item i (from 0 to n) and one column for each capacity j (from 0 to W). The entry T[i][j] stores the maximum value that can be obtained using the first i items with a knapsack capacity of j.

If the weight of the ith item, wt[i], is at most j, the item can either be left out or included, so T[i][j] = max( T[i-1][j], val[i] + T[i-1][j - wt[i]] ).

If wt[i] is greater than j, the item cannot be included, so T[i][j] is the same as T[i-1][j].


C Code:

#include <stdio.h>

int max(int a, int b)
{
    return (a > b) ? a : b;
}

// Returns the maximum value that can be put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
    int i, w;
    int T[n + 1][W + 1];

    // Build table T[][] in a bottom-up manner
    for (i = 0; i <= n; i++) {
        for (w = 0; w <= W; w++) {
            if (i == 0 || w == 0)
                T[i][w] = 0;                    // no items or no capacity
            else if (wt[i - 1] <= w)
                T[i][w] = max(
                    val[i - 1] + T[i - 1][w - wt[i - 1]],   // include item i
                    T[i - 1][w]);                           // leave item i out
            else
                T[i][w] = T[i - 1][w];          // item i does not fit
        }
    }
    return T[n][W];
}

int main()
{
    int value[] = { 60, 100, 120 };
    int weight[] = { 10, 20, 30 };
    int W = 50;
    int n = sizeof(value) / sizeof(value[0]);

    printf("%d", knapSack(W, weight, value, n));

    return 0;
}
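
With the sample arrays above (values 60, 100, 120; weights 10, 20, 30; capacity W = 50), the program prints 220, obtained by taking the second and third items (100 + 120).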

 

Question- 7) What is the principle of optimality?

Answer: The principle of optimality states that an optimal solution to a problem contains within it optimal solutions to its sub-problems; in other words, an optimal solution can be constructed from optimal solutions to the sub-problems. For example, every sub-path of a shortest path is itself a shortest path between its endpoints. If the principle of optimality does not hold for a problem, the dynamic programming technique cannot be used to solve it.


 

Question- 8) Differentiate between greedy approach and dynamic programming.

Answer:

  • Strategy: The greedy method makes what looks like the best (locally optimal) choice at each stage, whereas dynamic programming solves a complex problem by dividing it into simpler overlapping sub-problems.

  • Order of work: In the greedy method a greedy decision is made first and the remaining sub-problem is solved afterwards (top-down); in dynamic programming the sub-problems are solved first and an optimal solution is then built from their results (bottom-up).

  • Optimality and cost: The greedy method is usually simpler and faster but does not always produce an optimal solution; dynamic programming does more work but guarantees an optimal solution whenever the principle of optimality holds.

  • Examples: Greedy method: fractional knapsack problem. Dynamic programming: 0-1 knapsack problem.


 

Question- 9) Solve the problem of multistage graphs by using forward and backward approach.

Answer: A multistage graph G is a directed weighted graph whose vertices can be partitioned into two or more disjoint sets called stages, with edges going only from one stage to the next.

The aim is to find the shortest path from the source to the sink. The greedy method cannot be relied on here, because the locally cheapest edge at a stage can lead to a more expensive overall path, so dynamic programming is used to solve the problem.


Forward Approach

Consider the two-stage graph given below. Let S be the Source and D be the Destination.


[Figure: example multistage graph with source S, destination D, and intermediate vertices W, X, Y and Z]

Using the Forward Approach, the shortest path can be calculated by using the following formula:


d(S, D) = min { 20+d(W,D), 5+d(X,D), -1+d(Y,D), 17+d(Z,D) }


Algorithm:

FGraph(graph G, int k, int n, int p[])
{
    float cost[maxsize]; int d[maxsize], r;
    cost[n] = 0.0;
    for (int j = n - 1; j >= 1; j--)
    {
        let r be a vertex such that <j, r> is an edge of G and c[j][r] + cost[r] is minimum;
        cost[j] = c[j][r] + cost[r];
        d[j] = r;
    }
    // find a minimum-cost path
    p[1] = 1; p[k] = n;
    for (int j = 2; j <= k - 1; j++)
        p[j] = d[p[j - 1]];
}
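
To make the forward approach concrete, here is a small self-contained C sketch. The 5-vertex, 4-stage graph and its edge costs are assumed for illustration; they are not the graph from the figure above.

#include <stdio.h>

#define N 5          // number of vertices
#define K 4          // number of stages (assumed)
#define INF 999      // "no edge" marker

int main(void)
{
    // Assumed graph: stage 1 = {1}, stage 2 = {2, 3}, stage 3 = {4}, stage 4 = {5}.
    // Edges: 1->2 (2), 1->3 (3), 2->4 (4), 3->4 (1), 4->5 (5).
    int c[N + 1][N + 1] = {
        { 0,   0,   0,   0,   0,   0 },
        { 0,   0,   2,   3, INF, INF },
        { 0, INF,   0, INF,   4, INF },
        { 0, INF, INF,   0,   1, INF },
        { 0, INF, INF, INF,   0,   5 },
        { 0, INF, INF, INF, INF,   0 }
    };
    int cost[N + 1], d[N + 1], p[K + 1];

    cost[N] = 0;                          // cost of the sink is 0
    for (int j = N - 1; j >= 1; j--) {    // work backwards from the sink
        cost[j] = INF;
        d[j] = N;
        for (int r = j + 1; r <= N; r++)
            if (c[j][r] != INF && c[j][r] + cost[r] < cost[j]) {
                cost[j] = c[j][r] + cost[r];
                d[j] = r;                 // remember the best successor
            }
    }

    p[1] = 1; p[K] = N;                   // recover the path
    for (int j = 2; j <= K - 1; j++)
        p[j] = d[p[j - 1]];

    printf("Minimum cost: %d\n", cost[1]);
    printf("Path:");
    for (int j = 1; j <= K; j++)
        printf(" %d", p[j]);
    printf("\n");                         // prints: Minimum cost: 9, Path: 1 3 4 5
    return 0;
}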

Backward Approach

A two-stage graph is given below. (S is the source and D is the destination)


[Figure: example multistage graph for the backward approach, with source S and destination D]

Algorithm:

BGraph(graph G, int k, int n, int p[])
{
    bcost[1] = 0.0;
    for (int j = 2; j <= n; j++)
    {
        // compute bcost[j]
        let r be a vertex such that <r, j> is an edge of G and bcost[r] + c[r][j] is minimum;
        bcost[j] = bcost[r] + c[r][j];
        d[j] = r;
    }
    // find a minimum-cost path
    p[1] = 1; p[k] = n;
    for (int j = k - 1; j >= 2; j--)
        p[j] = d[p[j + 1]];
}


 

Question- 10) Explain Floyd-Warshall algorithm.

Answer: The Floyd-Warshall Algorithm is used to find the shortest path between each pair of vertices in a weighted directed graph.


Steps

a. Remove all self-loops and parallel edges.

b. Build the distance matrix and fill it with the initial distance values.

  1. For self-loops, distance = 0

  2. For adjacent vertices, distance = weight of the edge connecting the two vertices

  3. For non-adjacent vertices, distance = infinity

c. Update the distance values in the distance matrix


Example

Find the shortest path between every pair of vertices.


[Figure: weighted directed graph with 4 vertices; its edge weights correspond to the initial distance matrix below]

Initial distance matrix:

D  =  0   5   ∞  10
      ∞   0   1   ∞
      3   ∞   0   ∞
      ∞   2   2   0

D1 =  0   5   ∞  10
      ∞   0   1   ∞
      3   8   0  13
      ∞   2   2   0

D2 =  0   5   6  10
      ∞   0   1   ∞
      3   8   0  13
      ∞   2   2   0

D3 =  0   5   6  10
      4   0   1  14
      3   8   0  13
      5   2   2   0

D4 =  0   5   6  10
      4   0   1  14
      3   8   0  13
      5   2   2   0


D4 gives the shortest path between all pairs of vertices in the given graph.


Algorithm

Create a |V| x |V| matrix M that will hold the distances between vertices.

For each cell (i, j) in M:
    if i == j:
        M[i][j] = 0
    else if (i, j) is an edge in E:
        M[i][j] = weight(i, j)
    else:
        M[i][j] = infinity

for k from 1 to |V|:
    for i from 1 to |V|:
        for j from 1 to |V|:
            if M[i][j] > M[i][k] + M[k][j]:
                M[i][j] = M[i][k] + M[k][j]

C Code:

#include <stdio.h>

// defining the number of vertices
#define nV 4
#define INF 999

void printMatrix(int matrix[][nV]);

// Implementing the Floyd-Warshall algorithm
void floydWarshall(int graph[][nV]) {
    int matrix[nV][nV], i, j, k;

    for (i = 0; i < nV; i++)
        for (j = 0; j < nV; j++)
            matrix[i][j] = graph[i][j];

    // Consider each vertex k as an intermediate vertex in turn
    for (k = 0; k < nV; k++) {
        for (i = 0; i < nV; i++) {
            for (j = 0; j < nV; j++) {
                if (matrix[i][k] + matrix[k][j] < matrix[i][j])
                    matrix[i][j] = matrix[i][k] + matrix[k][j];
            }
        }
    }
    printMatrix(matrix);
}

void printMatrix(int matrix[][nV]) {
    for (int i = 0; i < nV; i++) {
        for (int j = 0; j < nV; j++) {
            if (matrix[i][j] == INF)
                printf("%4s", "INF");
            else
                printf("%4d", matrix[i][j]);
        }
        printf("\n");
    }
}

int main() {
    int graph[nV][nV] = {{0, 3, INF, 5},
                         {2, 0, INF, 4},
                         {INF, 1, 0, INF},
                         {INF, INF, 2, 0}};
    floydWarshall(graph);
    return 0;
}
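
For the 4-vertex graph defined in main(), the program prints the matrix of shortest distances between every pair of vertices:

   0   3   7   5
   2   0   6   4
   3   1   0   5
   5   3   2   0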

 

Question- 11) Explain the mechanism of travelling salesman problem using dynamic programming.

Answer: Travelling Salesperson Problem

A set of cities and the distance between each pair of cities is given. A salesperson starts from one city, visits all the cities and then returns to the starting city. Our aim is to find the shortest route the salesperson should take to visit all the cities exactly once.

Solution using Dynamic Programming

Steps

  • Consider vertex 1 as the starting and ending point.

  • Choose the next vertex using the formula given below:

Algorithm

Let S be the set of cities visited so far (including the starting city 1), and let C(S, i) be the cost of the minimum-cost path that starts at city 1, visits every city in S exactly once and ends at city i.

If the size of S is 2, then S = {1, i} and
C(S, i) = dist(1, i)
Else, if the size of S is greater than 2,
C(S, i) = min { C(S - {i}, j) + dist(j, i) }, where j belongs to S, j != i and j != 1
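
A minimal C sketch of this recurrence in its iterative, bitmask (Held-Karp) form is given below. The 4-city distance matrix is an assumed example, not taken from the notes; city 0 plays the role of the starting city.

#include <stdio.h>

#define N 4
#define INF 1000000

// assumed symmetric distance matrix between 4 cities
int dist[N][N] = {
    {  0, 10, 15, 20 },
    { 10,  0, 35, 25 },
    { 15, 35,  0, 30 },
    { 20, 25, 30,  0 }
};

// dp[mask][i] = cost of the cheapest path that starts at city 0,
// visits exactly the cities in 'mask', and ends at city i
int dp[1 << N][N];

int main(void)
{
    for (int mask = 0; mask < (1 << N); mask++)
        for (int i = 0; i < N; i++)
            dp[mask][i] = INF;
    dp[1][0] = 0;                           // only city 0 visited, standing at city 0

    for (int mask = 1; mask < (1 << N); mask++)
        for (int i = 0; i < N; i++) {
            if (!(mask & (1 << i)) || dp[mask][i] == INF)
                continue;
            for (int j = 0; j < N; j++) {   // try extending the path to city j
                if (mask & (1 << j))
                    continue;
                int next = mask | (1 << j);
                if (dp[mask][i] + dist[i][j] < dp[next][j])
                    dp[next][j] = dp[mask][i] + dist[i][j];
            }
        }

    int best = INF;                         // close the tour by returning to city 0
    for (int i = 1; i < N; i++)
        if (dp[(1 << N) - 1][i] + dist[i][0] < best)
            best = dp[(1 << N) - 1][i] + dist[i][0];

    printf("Minimum tour cost: %d\n", best);    // prints 80 for this matrix
    return 0;
}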


 




