Algorithms Notes
Algorithm Basics
Divide And Conquer
Greedy Algorithm
Dynamic Programming
Shortest Path
Backtracking
Minimum Spanning Tree
Maximum Flow
Complexity Classes
Q1. What are the elements and characteristics of dynamic programming?
Ans
Elements:
Sub-structure: The given problem is divided into simpler sub-problems. The sub-problems are called substructures.
Table-structure: After each sub-problem is solved, its result is stored in a table. Results that may be needed again are reused from the table, which eliminates the need to compute the same sub-problem multiple times.
Bottom-up Computation
Finally, the solutions of the simpler sub-problems are combined to get the solution of the original problem. The results of the sub-problems are taken from the table.
Characteristics:
Optimal substructure:
The optimality principle states that an optimal solution to a complex problem contains optimal solutions to its subproblems. In other words, an optimal solution to a problem can be constructed from the optimal solutions to its sub-problems. A problem is said to have optimal substructure when this property holds.
Dynamic programming is used to solve problems with optimal substructure.
Overlapping subproblems:
In dynamic programming, a complex problem is divided into smaller sub-problems. Dynamic programming is used when the same sub-problems have to be solved again and again; the solutions to these sub-problems are stored in a table, so unwanted and redundant computation is avoided.
The dynamic programming strategy is useful only when there are overlapping subproblems. If the sub-problems do not overlap, storing their solutions gives no benefit, so dynamic programming is not used for non-overlapping sub-problems. A short C sketch of this idea is given below.
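A minimal C sketch of these two characteristics, using the Fibonacci numbers (also discussed in Q3): the naive recursion repeatedly solves the same overlapping sub-problems, while the memoized version stores each result in a table so that every sub-problem is computed only once. The function names fib_naive and fib_memo and the limit of 50 terms are illustrative choices made for this sketch.
#include <stdio.h>

/* Naive recursion: fib(n-1) and fib(n-2) overlap, so the same
   sub-problems are recomputed an exponential number of times. */
long long fib_naive(int n)
{
    if (n <= 1)
        return n;
    return fib_naive(n - 1) + fib_naive(n - 2);
}

/* Memoized version: the table stores each sub-problem's result,
   so every fib(k) is computed only once. */
long long fib_memo(int n, long long table[])
{
    if (n <= 1)
        return n;
    if (table[n] != -1)          /* already solved: reuse the stored result */
        return table[n];
    table[n] = fib_memo(n - 1, table) + fib_memo(n - 2, table);
    return table[n];
}

int main(void)
{
    long long table[51];
    for (int i = 0; i <= 50; i++)
        table[i] = -1;           /* -1 marks "not yet computed" */
    printf("%lld\n", fib_memo(50, table));  /* fast: only 50 sub-problems */
    /* fib_naive(50) would take a very long time because of the overlap */
    return 0;
}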
Q2. Differentiate between the divide and conquer method and dynamic programming.
Ans
Divide and Conquer
A complex problem is solved by dividing it into simpler non-overlapping sub-problems.
It is recursive
Efficiency is low if the sub-problems overlap, because the same sub-problem may be solved many times.
Sub-problems are not dependent on each other.
Solutions of the sub-problems are not stored.
Example:
Binary Search, Merge Sort etc.
Dynamic Programming
A complex problem is solved by dividing it into simpler overlapping sub-problems.
It is usually implemented non-recursively (bottom-up), although a recursive memoized form also exists.
Efficiency is high for overlapping sub-problems, because each sub-problem is solved only once.
Sub-problems are dependent on each other.
Solutions of the sub-problems are stored.
Example:
0-1 Knapsack problem, matrix chain multiplication.
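To make the contrast concrete, here is a minimal merge sort sketch in C (merge sort is the divide and conquer example named above). The two recursive calls work on disjoint halves, so the sub-problems never overlap and no table of stored solutions is needed; the code is an illustrative sketch, not taken from a specific textbook.
#include <stdio.h>

/* Merge the two sorted halves a[lo..mid] and a[mid+1..hi] into sorted order. */
void merge(int a[], int lo, int mid, int hi)
{
    int tmp[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    for (k = 0; k < hi - lo + 1; k++)
        a[lo + k] = tmp[k];
}

/* Divide and conquer: the two recursive calls work on disjoint halves,
   so no sub-problem is ever solved twice and no table is needed. */
void merge_sort(int a[], int lo, int hi)
{
    if (lo >= hi)
        return;
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, lo, mid);      /* independent sub-problem 1 */
    merge_sort(a, mid + 1, hi);  /* independent sub-problem 2 */
    merge(a, lo, mid, hi);       /* combine the two solutions */
}

int main(void)
{
    int a[] = { 5, 2, 9, 1, 7 };
    int n = sizeof(a) / sizeof(a[0]);
    merge_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);     /* prints: 1 2 5 7 9 */
    printf("\n");
    return 0;
}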
Q3. Explain the Fibonacci sequence.
Ans
The Fibonacci sequence is an infinite series of numbers in which each number is the sum of the two preceding numbers. The first 2 elements of the series are 0 and 1.
The mathematical equation that describes the Fibonacci series is
x_n = x_(n-1) + x_(n-2), with x_0 = 0 and x_1 = 1
The Fibonacci series is given below:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ….
C Code
#include<stdio.h>
int main()
{
int count, first_term = 0, second_term = 1, next_term, i;
printf("Enter the number of terms:\n");
scanf("%d",&count);
printf("First %d terms of Fibonacci series:\n",count);
for ( i = 0 ; i < count ; i++ )
{
if ( i <= 1 )
next_term = i;
else
{
next_term = first_term + second_term;
first_term = second_term;
second_term = next_term;
}
printf("%d ",next_term);
}
return 0;
}
Output
Enter the number of terms: 8
First 8 terms of Fibonacci series:
0 1 1 2 3 5 8 13
Q4. Explain matrix chain multiplication with an example.
Ans
To compute the product of a chain of matrices efficiently, the number of scalar multiplications performed must be minimized. To do this, we need to decide the order in which the matrices are multiplied, i.e. how the product is parenthesized.
Since matrix multiplication is associative, the way the product is parenthesized does not affect the final result. However, it does affect the number of arithmetic operations needed to compute the product.
Example:
Let A be a 15x6 matrix, B be a 6x20 matrix and C be a 20x20 matrix. Determine the order in which to multiply the three matrices using the fewest operations.
(AB)C = (15x6x20) + (15x20x20) = 7800 operations
A(BC) = (15x6x20) + (6x20x20) = 4200 operations
Of the two parenthesizations, the second involves fewer arithmetic operations, so it is the more efficient one.
Dynamic programming approach:
Parentheses are placed at all possible split points and the cost of each parenthesization is calculated. Each split divides the given problem into smaller sub-chains (substructures), which can be solved recursively; dynamic programming stores the cost of each sub-chain so that it is computed only once.
C Code
#include <limits.h>
#include <stdio.h>
int MatrixChainOrder(int p[], int i, int j)
{
if (i == j)
return 0;
int k;
int min = INT_MAX;
int count;
for (k = i; k < j; k++) {
count = MatrixChainOrder(p, i, k) +
MatrixChainOrder(p, k + 1, j) +
p[i - 1] * p[k] * p[j];
if (count < min)
min = count;
}
// Return minimum count
return min;
}
int main()
{
int arr[] = { 1, 2, 3, 4, 3 };
int n = sizeof(arr) / sizeof(arr[0]);
printf("Minimum number of multiplications is %d ",
MatrixChainOrder(arr, 1, n - 1));
getchar();
return 0;
}
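The recursive function above follows the recursive formulation directly and therefore recomputes the same sub-chains many times. A bottom-up dynamic programming sketch is given below: m[i][j] holds the minimum number of multiplications needed for the chain of matrices i..j, and every sub-chain cost is computed exactly once. The MAXN bound of 20 matrices and the function name matrixChainDP are assumptions made for this sketch.
#include <stdio.h>
#include <limits.h>

#define MAXN 20   /* assumed upper bound on the number of matrices */

/* p[] holds the dimensions: matrix i is p[i-1] x p[i], for i = 1..n-1. */
int matrixChainDP(int p[], int n)
{
    int m[MAXN][MAXN];           /* m[i][j] = min multiplications for chain i..j */
    for (int i = 1; i < n; i++)
        m[i][i] = 0;             /* a single matrix needs no multiplication */

    /* L is the length of the chain being solved */
    for (int L = 2; L < n; L++) {
        for (int i = 1; i <= n - L; i++) {
            int j = i + L - 1;
            m[i][j] = INT_MAX;
            /* try every split point k; the sub-chain costs are already in the table */
            for (int k = i; k < j; k++) {
                int cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (cost < m[i][j])
                    m[i][j] = cost;
            }
        }
    }
    return m[1][n - 1];
}

int main(void)
{
    int arr[] = { 1, 2, 3, 4, 3 };
    int n = sizeof(arr) / sizeof(arr[0]);
    /* same input as the recursive version above; prints 30 */
    printf("Minimum number of multiplications is %d\n", matrixChainDP(arr, n));
    return 0;
}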
Q5. Explain the longest common subsequence.
Ans
A subsequence is a sequence that occurs within a string in the same relative order, but not necessarily contiguously.
Longest Common Subsequence
Given two strings s1 and s2 of length m and n respectively, find the length of the longest subsequence that appears in both strings.
Example:
S1 = 'NIGHTINGALE'
S2 = 'ABGTNOWE'
The longest common subsequence in both S1 and S2 is 'GTNE'. So, the length of the longest common subsequence is 4.
Dynamic Programming Approach
The last characters of both strings are compared.
i. If s1[m-1] == s2[n-1], the last character is part of the LCS:
LCS(s1, s2) = 1 + LCS(s1[0:m-2], s2[0:n-2])
ii. If s1[m-1] != s2[n-1], the larger of the two possibilities is taken:
LCS(s1, s2) = max( LCS(s1[0:m-2], s2), LCS(s1, s2[0:n-2]) )
C Code
#include<stdio.h>
#include<string.h>
int i,j,m,n,c[20][20];
char x[20],y[20],b[20][20];
void print(int i,int j)
{
if(i==0 || j==0)
return;
if(b[i][j]=='c')
{
print(i-1,j-1);
printf("%c",x[i-1]);
}
else if(b[i][j]=='u')
print(i-1,j);
else
print(i,j-1);
}
void lcs()
{
m=strlen(x);
n=strlen(y);
for(i=0;i<=m;i++)
c[i][0]=0;
for(i=0;i<=n;i++)
c[0][i]=0;
//c, u and l denote the cross (diagonal), upward and left directions respectively
for(i=1;i<=m;i++)
for(j=1;j<=n;j++)
{
if(x[i-1]==y[j-1])
{
c[i][j]=c[i-1][j-1]+1;
b[i][j]='c';
}
else if(c[i-1][j]>=c[i][j-1])
{
c[i][j]=c[i-1][j];
b[i][j]='u';
}
else
{
c[i][j]=c[i][j-1];
b[i][j]='l';
}
}
}
int main()
{
printf("Enter 1st sequence:");
scanf("%s",x);
printf("Enter 2nd sequence:");
scanf("%s",y);
printf("\nThe Longest Common Subsequence is ");
lcs();
print(m,n);
return 0;
}
Q6. Explain the 0/1 knapsack problem using the dynamic programming approach.
Ans
0-1 Knapsack Problem
Two arrays, value and weight, each of size 'n' are given. The aim is to choose items of maximum total value that fit in a knapsack of capacity W, i.e. the total weight of the chosen items must be less than or equal to W. Note that in a 0-1 knapsack an item cannot be broken; it is either taken whole or left out.
Solution using Dynamic Programming approach
The 0-1 knapsack problem can be solved with dynamic programming by building a table T in a bottom-up manner. The rows correspond to the items (0 to n) and the columns to the capacities (0 to W). The entry T[i][j] holds the maximum value achievable using only the first i items with a knapsack of capacity j.
If the ith item fits (wt[i] <= j), T[i][j] is the maximum of two choices: leaving the item out, which gives T[i-1][j], or taking it, which gives val[i] + T[i-1][j - wt[i]].
If the ith item does not fit in capacity j, then T[i][j] is the same as T[i-1][j].
C Code
#include <stdio.h>
int max(int a, int b)
{
return (a > b) ? a : b;
}
// Returns the maximum value that can be put in a knapsack of capacity W
int knapSack(int W, int wt[], int val[], int n)
{
int i, w;
int T[n + 1][W + 1];
// Build table T[][] in bottom up manner
for (i = 0; i <= n; i++) {
for (w = 0; w <= W; w++) {
if (i == 0 || w == 0)
T[i][w] = 0;
else if (wt[i - 1] <= w)
T[i][w] = max(
val[i - 1] + T[i - 1][w - wt[i - 1]],
T[i - 1][w]);
else
T[i][w] = T[i - 1][w];
}
}
return T[n][W];
}
int main()
{
int value[] = { 60, 100, 120 };
int weight[] = { 10, 20, 30 };
int W = 50;
int n = sizeof(value) / sizeof(value[0]);
printf("%d", knapSack(W, weight, value, n));
return 0;
}
Q7. What is the principle of optimality?
Ans
The principle of optimality states that an optimal solution to a problem contains within it optimal solutions to its sub-problems; whatever the first decision is, the remaining decisions must form an optimal solution to the resulting sub-problem. For example, any portion of a shortest path between two vertices is itself a shortest path between its endpoints. If the principle of optimality does not hold for a problem, dynamic programming cannot be used to solve it.
Q8. Differentiate between the greedy approach and dynamic programming.
Ans
Greedy Method
Makes a locally optimal (greedy) choice at each stage, hoping it leads to a globally optimal solution.
Sub-problems are solved after making a Greedy decision.
Top-down approach.
Generally faster and simpler than dynamic programming.
However, it does not guarantee an optimal solution for every problem.
Example: Fractional Knapsack problem
Dynamic Programming
A complex problem is solved by dividing it into simpler overlapping sub-problems.
First, the sub-problems are solved. Then an optimal solution is found.
Bottom-up approach
Generally slower and uses more memory than the greedy method.
However, it guarantees an optimal solution when the problem has optimal substructure and overlapping sub-problems.
Example: 0-1 knapsack problem
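As an illustration of the greedy strategy named above, here is a minimal fractional knapsack sketch in C. The item values and weights are the same illustrative data used in the 0-1 knapsack code of Q6; because fractions of an item may be taken, the greedy choice by value/weight ratio is optimal here and yields 240, compared with 220 for the 0-1 version.
#include <stdio.h>

/* Greedy fractional knapsack: at each stage take as much as possible of the
   remaining item with the best value/weight ratio. Item data is illustrative. */
int main(void)
{
    double value[]  = { 60.0, 100.0, 120.0 };
    double weight[] = { 10.0, 20.0, 30.0 };
    int n = 3;
    double capacity = 50.0, total = 0.0;

    /* sort items by value/weight ratio in decreasing order (simple selection sort) */
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (value[j] / weight[j] > value[i] / weight[i]) {
                double tv = value[i];  value[i]  = value[j];  value[j]  = tv;
                double tw = weight[i]; weight[i] = weight[j]; weight[j] = tw;
            }

    /* greedy stage-by-stage decision: take the best remaining item, or a fraction of it */
    for (int i = 0; i < n && capacity > 0.0; i++) {
        if (weight[i] <= capacity) {          /* take the whole item */
            total += value[i];
            capacity -= weight[i];
        } else {                              /* take only the fraction that fits */
            total += value[i] * (capacity / weight[i]);
            capacity = 0.0;
        }
    }
    printf("Maximum value in the knapsack = %.2f\n", total);   /* 240.00 for this data */
    return 0;
}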
Q9. Solve the problem of multistage graphs using the forward and backward approaches.
Ans
A multistage graph G is a directed graph whose vertices can be partitioned into multiple stages (disjoint sets), with edges going only from one stage to the next.
The aim is to find the shortest path from the source to the sink. The greedy method cannot be relied upon here, because picking the cheapest edge at each stage can lead to a sub-optimal overall path, so dynamic programming is used to solve this problem.
Forward Approach
Consider a multistage graph with source S, sink D and an intermediate stage containing the vertices W, X, Y and Z, where the edges from S to W, X, Y and Z have costs 20, 5, -1 and 17 respectively. Using the forward approach, the cost of the shortest path from S to D is expressed in terms of the costs of the remaining sub-paths:
d(S, D) = min { 20 + d(W, D), 5 + d(X, D), -1 + d(Y, D), 17 + d(Z, D) }
Algorithm
FGraph(graph G, int k, int n, int p[])
{
    float cost[maxsize]; int d[maxsize], r;
    cost[n] = 0.0;
    for (int j = n - 1; j >= 1; j--)
    {
        Let r be a vertex such that <j, r> is an edge of G and c[j][r] + cost[r] is minimum;
        cost[j] = c[j][r] + cost[r];
        d[j] = r;
    }
    // recover the minimum-cost path through the k stages
    p[1] = 1; p[k] = n;
    for (int j = 2; j <= k - 1; j++)
        p[j] = d[p[j - 1]];
}
Backward Approach
Consider the same multistage graph, with S as the source and D as the destination. In the backward approach, bcost[j], the minimum cost of reaching vertex j from the source, is computed for every vertex, moving from the source towards the sink; the answer is bcost[n].
Algorithm
BGraph(graph G, int k, int n, int p[])
{
    bcost[1] = 0.0;
    for (int j = 2; j <= n; j++)
    {
        // compute bcost[j]
        Let r be a vertex such that <r, j> is an edge of G and bcost[r] + c[r][j] is minimum;
        bcost[j] = bcost[r] + c[r][j];
        d[j] = r;
    }
    // find a minimum-cost path
    p[1] = 1; p[k] = n;
    for (int j = k - 1; j >= 2; j--)
        p[j] = d[p[j + 1]];
}
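A runnable C sketch of the forward approach is given below, under the assumption that the graph is stored as a cost matrix c[][] with INF marking missing edges, and that the vertices are numbered 1..n in stage order (vertex 1 is the source, vertex n is the sink). The 4-stage, 8-vertex graph built in main() is purely illustrative.
#include <stdio.h>

#define INF 9999
#define N   8      /* number of vertices in the illustrative graph */

/* Forward approach: cost[j] = minimum cost from vertex j to the sink n. */
void fgraph(int c[N + 1][N + 1], int k, int n, int p[])
{
    int cost[N + 1], d[N + 1];
    cost[n] = 0;
    for (int j = n - 1; j >= 1; j--) {
        cost[j] = INF;
        /* choose r so that <j, r> is an edge and c[j][r] + cost[r] is minimum */
        for (int r = j + 1; r <= n; r++) {
            if (c[j][r] != INF && c[j][r] + cost[r] < cost[j]) {
                cost[j] = c[j][r] + cost[r];
                d[j] = r;
            }
        }
    }
    /* recover the minimum-cost path through the k stages */
    p[1] = 1;
    p[k] = n;
    for (int j = 2; j <= k - 1; j++)
        p[j] = d[p[j - 1]];
}

int main(void)
{
    /* illustrative 4-stage graph with 8 vertices (1 = source, 8 = sink) */
    int c[N + 1][N + 1];
    for (int i = 0; i <= N; i++)
        for (int j = 0; j <= N; j++)
            c[i][j] = INF;
    c[1][2] = 2; c[1][3] = 3; c[1][4] = 4;
    c[2][5] = 7; c[2][6] = 4; c[3][6] = 5; c[3][7] = 6; c[4][7] = 3;
    c[5][8] = 5; c[6][8] = 6; c[7][8] = 4;

    int k = 4, p[N + 1];
    fgraph(c, k, N, p);
    printf("Minimum-cost path: ");
    for (int j = 1; j <= k; j++)
        printf("%d ", p[j]);     /* prints: 1 4 7 8 */
    printf("\n");
    return 0;
}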
Q10. Explain the Floyd-Warshall algorithm.
Ans
The Floyd-Warshall Algorithm is used to find the shortest path between each pair of vertices in a weighted directed graph.
Steps
a. Remove all self-loops and parallel edges (keeping only the lowest-weight edge of each parallel pair).
b. Build the distance matrix and fill it with the initial distance values.
i. For self-loops, distance = 0
ii. For adjacent vertices, distance = weight of the edge connecting the two vertices
iii. For non-adjacent vertices, distance = infinity
c. Update the distance values in the distance matrix
Example
Find the shortest path between every pair of vertices of the graph whose initial distance matrix is given below (the entry in row i, column j is the weight of the edge from vertex i to vertex j).
Initial distance matrix:
D  =  ( 0   5   ∞  10 )
      ( ∞   0   1   ∞ )
      ( 3   ∞   0   ∞ )
      ( ∞   2   2   0 )
After allowing vertex 1 as an intermediate vertex:
D1 =  ( 0   5   ∞  10 )
      ( ∞   0   1   ∞ )
      ( 3   8   0  13 )
      ( ∞   2   2   0 )
After allowing vertex 2:
D2 =  ( 0   5   6  10 )
      ( ∞   0   1   ∞ )
      ( 3   8   0  13 )
      ( ∞   2   2   0 )
After allowing vertex 3:
D3 =  ( 0   5   6  10 )
      ( 4   0   1  14 )
      ( 3   8   0  13 )
      ( 5   2   2   0 )
After allowing vertex 4:
D4 =  ( 0   5   6  10 )
      ( 4   0   1  14 )
      ( 3   8   0  13 )
      ( 5   2   2   0 )
D4 gives the shortest path between all pairs of vertices in the given graph.
Algorithm
Create a |V| x |V| matrix, M, that will describe the distances between vertices
For each cell (i, j) in M:
    if i == j:
        M[i][j] = 0
    else if (i, j) is an edge in E:
        M[i][j] = weight(i, j)
    else:
        M[i][j] = infinity
for k from 1 to |V|:
for i from 1 to |V|:
for j from 1 to |V|:
if M[i][j] > M[i][k] + M[k][j]:
M[i][j] = M[i][k] + M[k][j]
C Code
#include <stdio.h>
// defining the number of vertices
#define nV 4
#define INF 999
void printMatrix(int matrix[][nV]);
// Implementing floyd warshall algorithm
void floydWarshall(int graph[][nV]) {
int matrix[nV][nV], i, j, k;
for (i = 0; i < nV; i++)
for (j = 0; j < nV; j++)
matrix[i][j] = graph[i][j];
// Adding vertices individually
for (k = 0; k < nV; k++) {
for (i = 0; i < nV; i++) {
for (j = 0; j < nV; j++) {
if (matrix[i][k] + matrix[k][j] < matrix[i][j])
matrix[i][j] = matrix[i][k] + matrix[k][j];
}
}
}
printMatrix(matrix);
}
void printMatrix(int matrix[][nV]) {
for (int i = 0; i < nV; i++) {
for (int j = 0; j < nV; j++) {
if (matrix[i][j] == INF)
printf("%4s", "INF");
else
printf("%4d", matrix[i][j]);
}
printf("\n");
}
}
int main() {
int graph[nV][nV] = {{0, 3, INF, 5},
{2, 0, INF, 4},
{INF, 1, 0, INF},
{INF, INF, 2, 0}};
floydWarshall(graph);
}