May 8, 2022 · 15 min read
Updated: Oct 17, 2022
Mastering algorithms interview questions can help you crack the placement process of most companies, such as Infosys, Amazon, TCS, Wipro, and Accenture. Data Structures and Algorithms are among the most important topics in the recruitment process.
So let’s get started with the 40 most important interview questions on Algorithms.
What is an algorithm?
What is time complexity?
What is worst-case efficiency?
Write rules for converting infix to postfix.
What is best-case efficiency?
What is average case efficiency?
Define O-notation.
Define Ω-notation.
Define θ-notation.
How to calculate time complexity?
.. and many more
Mobiprep handbooks are free downloadable placement preparation resources that can help you ace any company's placement process. We have curated a list of the 40 most important questions with detailed explanations to ensure the best placement preparation.
These top Algorithms Interview questions have been segmented into three sections:
Basic Interview Questions
Intermediate Interview Questions
Advanced Interview Questions
Question 1. What is an algorithm?
An algorithm is a sequence of computational steps that converts input into output. It is a finite set of rules or a procedure to solve a problem.
Question 2. What is time complexity?
Time complexity is a measure of the efficiency of an algorithm.
It is the time needed by an algorithm, expressed as a function of the size of the problem.
The time complexity of a program is the amount of computer time it needs to run to completion.
Question 3. What is worst-case efficiency?
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input or inputs of size n for which the algorithm runs the longest among all possible inputs of that size.
Question 4. Write rules for converting infix to postfix.
Scan the input string (infix notation) from left to right. One pass is sufficient.
1. If the next symbol scanned is an operand, it may be immediately appended to the postfix string.
2. If the next symbol is an operator,
i. Pop and append to the postfix string every operator on the stack that
a. is above the most recently scanned left parenthesis, and
b. has precedence higher than, or is a right-associative operator of equal precedence to, that of the new operator symbol.
ii. Push the new operator onto the stack.
3. When a left parenthesis is seen, it must be pushed onto the stack.
4. When a right parenthesis is seen, all operators down to the most recently scanned left parenthesis must be popped and appended to the postfix string. This pair of parentheses is then discarded.
5. When the infix string is completely scanned, the stack may still contain some operators. (No parentheses can remain on the stack at this point.) All the remaining operators should be popped and appended to the postfix string.
Question 5. What is best-case efficiency?
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input or inputs for which the algorithm runs the fastest among all possible inputs of that size.
Question 6. What is average case efficiency?
The average-case efficiency of an algorithm is its efficiency for an average-case input of size n. It provides information about an algorithm's behavior on a typical or random input.
(It is not the average of the best-case and worst-case efficiencies.)
Question 7. Define O-notation.
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n; i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.
t(n) and g(n) are functions over the non-negative integers.
It is used for worst-case analysis.
Question 8. Define Ω-notation.
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some constant multiple of g(n) for all large n.
When we say that the running time of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.
It is used to describe best-case running times or lower bounds for algorithmic problems.
Question 9. Define θ-notation.
A function t(n) is said to be in θ(g(n)), denoted t(n) ∈ θ(g(n)), if t(n) is bounded both above and below by some constant multiple of g(n) for all large n.
Theta can be used to denote tight bounds of an algorithm.
f(n) is θ(g(n)) if and only if f(n) is O(g(n)) and f(n) is Ω(g(n)).
Question 10. How to calculate time complexity?
The running time of each assignment, read, and write statement can usually be taken to be O(1).
The running time of a sequence of statements is determined by the sum rule: the running time of the sequence is, to within a constant factor, the largest running time of any statement in the sequence.
The running time of an if-statement is the cost of the conditionally executed statements, plus the time for evaluating the condition (normally O(1)). The time for an if-then-else construct is the time to evaluate the condition plus the larger of the time needed for the statements executed when the condition is true and the time for the statements executed when the condition is false.
The time to execute a loop is the sum, over all iterations of the loop, of the time to execute the body and the time to evaluate the condition for termination (usually the latter is O(1)). Often this time is, neglecting constant factors, the product of the number of iterations and the largest possible time for one execution of the body, but each loop must be considered separately to make sure.
EXAMPLE:
statement;
>> Here we have a single statement. Its time complexity will be constant: the running time of the statement will not change in relation to N.
for(i=0; i < N; i++)
{
statement;
}
>>The time complexity for the above algorithm will be Linear. The running time of the loop is directly proportional to N. When N doubles, so does the running time.
for(i=0; i < N; i++)
{
for(j=0; j < N;j++)
{
statement;
}
}
>> This time, the time complexity of the above code will be quadratic. The running time of the two nested loops is proportional to the square of N. When N doubles, the running time increases fourfold.
ALGORITHM: Euclid_gcd(m, n)
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
r ←m mod n
m←n
n←r
return m
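The pseudocode above translates directly to Python (the function name `euclid_gcd` is just a label):

```python
def euclid_gcd(m, n):
    # Repeatedly replace (m, n) with (n, m mod n) until n becomes 0
    while n != 0:
        m, n = n, m % n
    return m

print(euclid_gcd(60, 24))  # → 12
```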
Question 12. What is the use of asymptotic notation?
These notations describe different rate-of-growth relations between the defining function and the defined set of functions. They are used to compare the growth rates of functions.
Asymptotic notation is a mathematical tool to represent complexities; it allows us to analyze the running time of an algorithm by identifying its behavior as the input size increases.
The three commonly used notations are:
Big Oh notation
Big Omega notation
Big Theta notation
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) * n
The time Complexity of this algorithm is O(n).
Brute force is a straightforward approach, usually based directly on the problem's statement and the definitions of the concepts involved.
Brute-force algorithms are the processes that reach the perfect solution to a problem by analyzing all the possible candidate solutions. Usually, a brute-force approach is simple to implement, and it will always find a solution to the computational problem, by considering iteratively all the possible solutions one by one.
Merge sort algorithm repeatedly divides the data in half, sorts each half, and combines the sorted halves into a sorted whole.
The algorithm:
Divide the list into two roughly equal halves.
Sort the left half.
Sort the right half.
Merge the two sorted halves into one sorted list.
Often implemented recursively.
An example of a "divide and conquer" algorithm.
Time complexity: O(n log n)
In-place merge sort algorithm: it sorts the array without using extra space.
ALGORITHM:

def merge(arr, start, mid, end):
    start2 = mid + 1
    # If the direct merge is already sorted
    if arr[mid] <= arr[start2]:
        return
    # Two pointers to maintain the start of both arrays to merge
    while start <= mid and start2 <= end:
        # If element 1 is in the right place
        if arr[start] <= arr[start2]:
            start += 1
        else:
            value = arr[start2]
            index = start2
            # Shift all the elements between element 1 and element 2 right by 1
            while index != start:
                arr[index] = arr[index - 1]
                index -= 1
            arr[start] = value
            # Update all the pointers
            start += 1
            mid += 1
            start2 += 1

# l is the left index and r is the right index of the sub-array of arr to be sorted
def mergeSort(arr, l, r):
    if l < r:
        # Same as (l + r) / 2, but avoids overflow for large l and r
        m = l + (r - l) // 2
        # Sort first and second halves
        mergeSort(arr, l, m)
        mergeSort(arr, m + 1, r)
        merge(arr, l, m, r)
ALGORITHM QS(A[l..r])
//Input: Subarray of array A[0..n − 1], defined by its left and right
// indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
s ← Partition(A[l..r]) //s is a split position
QS(A[l..s − 1])
QS(A[s + 1..r])
Partition (arr[low....high])
{
// pivot - Element at right most position
pivot = arr[high];
i = (low - 1); // Index of smaller element
for (j = low; j <= high-1; j++)
{
// If the current element is smaller than the pivot, increment i and swap arr[i] with arr[j]
if (arr[j] < pivot)
{
i++; // increment index of smaller element
swap(arr[i], arr[j]);
}
}
swap(arr[i + 1], arr[high]);
return (i + 1);
}
The worst-case time complexity of the quicksort algorithm is O(n^2); its average-case time complexity is O(n log n).
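The partition scheme above (rightmost element as pivot) can be sketched in Python as follows:

```python
def partition(arr, low, high):
    pivot = arr[high]          # pivot: rightmost element
    i = low - 1                # index of the last element smaller than the pivot
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1               # final position of the pivot

def quick_sort(arr, l, r):
    if l < r:
        s = partition(arr, l, r)   # s is a split position
        quick_sort(arr, l, s - 1)
        quick_sort(arr, s + 1, r)

a = [5, 2, 9, 1, 5, 6]
quick_sort(a, 0, len(a) - 1)
print(a)  # → [1, 2, 5, 5, 6, 9]
```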
Dijkstra's algorithm solves the single-source shortest-path problem: finding the shortest paths from a given vertex (the source) to all the other vertices of a weighted graph or digraph. Dijkstra's algorithm provides a correct solution for a graph with non-negative weights.
For a sample weighted graph with source vertex a (the graph itself is not reproduced here), the shortest paths (identified by following the tree edges backward from a destination vertex to the source) and their lengths are as follows:
from a to b : a − b of length 3
from a to d : a − b − d of length 5
from a to c : a − b − c of length 7
from a to e : a − b − d − e of length 9
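A minimal sketch of Dijkstra's algorithm using Python's heapq; the graph below is a hypothetical one whose edge weights were chosen to reproduce the distances listed above:

```python
import heapq

def dijkstra(graph, source):
    # graph: {vertex: [(neighbor, weight), ...]}
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Hypothetical undirected graph consistent with the example answers
edges = [('a', 'b', 3), ('b', 'c', 4), ('b', 'd', 2),
         ('d', 'e', 4), ('a', 'd', 7), ('c', 'e', 6)]
graph = {v: [] for v in 'abcde'}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))
print(dijkstra(graph, 'a'))  # → {'a': 0, 'b': 3, 'c': 7, 'd': 5, 'e': 9}
```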
Exhaustive searching techniques (TSP, knapsack problem, assignment problem)
Finding closest pair and convex hull problems
Sorting: selection and bubble sort
Searching: Brute force string match and sequential search.
Computing n
Sorting : Merge sort and quick sort
Search: Binary search
Strassen‘s Matrix multiplication
Finding closest pair and convex hull problems
Multiplication of large integers
1. Select an unvisited node x, visit it, and treat it as the current node.
2. Find an unvisited neighbor of the current node, visit it, and make it the new current node.
3. If the current node has no unvisited neighbors, backtrack to its parent, and make that parent the new current node.
4. Repeat steps 2 and 3 until no more nodes can be visited.
5. If there are still unvisited nodes, repeat from step 1.
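The traversal above can be sketched recursively in Python (an adjacency-list graph is assumed; backtracking happens implicitly when a call returns):

```python
def dfs(graph, start, visited=None, order=None):
    # graph: {node: [neighbors]}; returns nodes in visitation order
    if visited is None:
        visited, order = set(), []
    visited.add(start)
    order.append(start)
    for nb in graph[start]:
        if nb not in visited:
            dfs(graph, nb, visited, order)  # visit, then backtrack on return
    return order

graph = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
print(dfs(graph, 1))  # → [1, 2, 4, 3]
```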
1. Select an unvisited node x, visit it, and let it be the root of a BFS tree being formed. Its level is called the current level.
2. From each node z in the current level, in the order in which the level's nodes were visited, visit all the unvisited neighbors of z. The newly visited nodes from this level form a new level, which becomes the next current level.
3. Repeat step 2 until no more nodes can be visited.
4. If there are still unvisited nodes, repeat from step 1.
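A minimal Python sketch of this level-by-level traversal, using collections.deque as the queue:

```python
from collections import deque

def bfs(graph, start):
    # Visit nodes level by level, in the order they are discovered
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        z = queue.popleft()
        order.append(z)
        for nb in graph[z]:
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return order

graph = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
print(bfs(graph, 1))  # → [1, 2, 3, 4]
```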
void Hanoi( int n, string a, string b, string c)
{
if (n == 1) /* base case */
Move( a, b );
else { /* recursion */
Hanoi( n-1, a, c, b );
Move( a, b );
Hanoi( n-1, c, b, a );
}
}
Time Complexity of this algorithm is: O(2^n).
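The same recursion can be sketched in Python, collecting the moves so they can be counted (a tower of n disks needs 2^n − 1 moves, which is why the complexity is O(2^n)):

```python
def hanoi(n, a, b, c, moves):
    # Move n disks from peg a to peg b, using peg c as auxiliary
    if n == 1:
        moves.append((a, b))
    else:
        hanoi(n - 1, a, c, b, moves)
        moves.append((a, b))
        hanoi(n - 1, c, b, a, moves)
    return moves

moves = hanoi(3, 'A', 'B', 'C', [])
print(len(moves))  # → 7, i.e. 2**3 - 1
```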
Warshall's algorithm is an application of the dynamic programming technique; it is used to find the transitive closure of a directed graph.
Transitive closure of a graph:
Given a digraph G(V, E), the transitive closure is a digraph G'(V', E') such that
V' = V (same set of vertices)
If (vi, vi+1, ..., vk) is a path in G, then (vi, vk) is an edge of E'.
Generally, the reachability matrix is called the transitive closure of a graph.
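Warshall's algorithm can be sketched in Python on a boolean adjacency matrix (the 3-vertex digraph below is a made-up example):

```python
def warshall(adj):
    # adj: n x n 0/1 adjacency matrix; returns the transitive closure
    n = len(adj)
    r = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # i reaches j directly, or via intermediate vertex k
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

# Hypothetical digraph: 0 -> 1 -> 2
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
print(warshall(adj))  # → [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
```

The path 0 → 1 → 2 produces the new closure edge (0, 2).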
Greedy technique suggests a greedy grab of the best alternative available in the hope that a sequence of locally optimal choices will yield a globally optimal solution to the entire problem.
Examples:
Kruskal’s Minimum Spanning Tree
Prim’s Minimum Spanning Tree
Boruvka’s Minimum Spanning Tree
Reverse delete algorithm for MST
Problem Solving for Minimum Spanning Trees (Kruskal’s and Prim’s)
Dijkstra's Shortest Path Algorithm
The traveling salesman problem [TSP] is one of the combinatorial problems. The problem asks to find the shortest tour through a given set of n cities that visits each city exactly once before returning to the city where it started. The problem can be conveniently modeled by a weighted graph, with the graph‘s vertices representing the cities and the edge weights specifying the distances.
Then the problem can be stated as the problem of finding the shortest Hamiltonian circuit of the graph.
A Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph exactly once.
Prim's algorithm constructs a minimum spanning tree through a sequence of expanding subtrees. The initial subtree in such a sequence consists of a single vertex selected arbitrarily from the set V of the graph's vertices. On each iteration, the algorithm expands the current tree in a greedy manner by simply attaching to it the nearest vertex not in the tree.
A spanning tree for a connected graph is a tree whose vertex set is the same as the vertex set of the given graph, and whose edge set is a subset of the edge set of the given graph. Any connected graph has a spanning tree.
The weight of a spanning tree w(T) is the sum of the weights of all edges in T. A minimum spanning tree (MST) is a spanning tree with the smallest possible weight.
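A minimal Python sketch of Prim's algorithm using heapq, computing only the MST weight; the graph below is a made-up example:

```python
import heapq

def prim_mst_weight(graph, start):
    # graph: {vertex: [(neighbor, weight), ...]} for a connected undirected graph
    visited = {start}
    # candidate edges leaving the current tree, ordered by weight
    edges = [(w, v) for v, w in graph[start]]
    heapq.heapify(edges)
    total = 0
    while edges:
        w, v = heapq.heappop(edges)
        if v in visited:
            continue
        visited.add(v)           # attach the nearest vertex not in the tree
        total += w
        for u, wu in graph[v]:
            if u not in visited:
                heapq.heappush(edges, (wu, u))
    return total

g = {'a': [('b', 1), ('c', 4)],
     'b': [('a', 1), ('c', 2), ('d', 6)],
     'c': [('a', 4), ('b', 2), ('d', 3)],
     'd': [('b', 6), ('c', 3)]}
print(prim_mst_weight(g, 'a'))  # → 6  (edges a-b, b-c, c-d)
```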
public static int recursive(int arr[], int low, int high, int key)
{
if(low > high) // base case: the key is not present
{
return -1;
}
int mid = low + (high - low)/2;
if(arr[mid] == key)
{
return mid;
}
else if(arr[mid] < key)
{
return recursive(arr, mid+1, high, key);
}
else
{
return recursive(arr, low, mid-1, key);
}
}
int unorderedLinearSearch(int arr[], int size, int data)
{
int index = -1; // -1 indicates that data was not found
for(int i = 0; i < size; i++)
{
if(arr[i] == data)
{
index = i;
break;
}
}
return index;
}
First pass (i = 1): (8, 34, 64, 51, 32, 21) — no change, because 34 is greater than 8.
Second pass (i = 2): (8, 34, 64, 51, 32, 21) — no change, because 64 is greater than 8 and 34.
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time. The algorithmic steps are:
Understand the problem
Decision making
Design an algorithm
Proving correctness of an algorithm
Analyze the algorithm
Coding and implementation of an algorithm
Bubble sort:
for(int j=arr.length-1; j>=0; j--)
{
for(int k=0; k<j; k++)
{
if(arr[k] > arr[k+1])
{
int temp = arr[k];
arr[k] = arr[k+1];
arr[k+1] = temp;
}
}
}
Selection sort:
int min;
for(int j=0; j<arr.length-1; j++)
{
min = j;
for(int k=j+1; k<=arr.length-1; k++)
{
if(arr[k] < arr[min])
min = k;
}
int temp = arr[min];
arr[min] = arr[j];
arr[j] = temp;
}
The time complexity of heap sort is not affected by the distribution of the data. Its time complexity remains O(n log n) in both the best and worst cases.
void A_recursive_function()
{
A_recursive_function();
}
int main()
{
A_recursive_function();
return 0;
}
Every function call is stored in stack memory. In this case, there is no terminating condition (base case), so A_recursive_function() will be called continuously until the stack overflows and there is no more space to store the function calls. At that point, the program will stop abruptly.
void A_recursive_function(int n)
{
if(n == 0)
return;
printf("%d ",n);
A_recursive_function(n-1);
}
int main()
{
A_recursive_function(10);
return 0;
}
output: 10 9 8 7 6 5 4 3 2 1
public static void main(String args[]){
int x = 30; int y = 60;
System.out.printf("Before swap 'x': %d, 'y': %d %n", x, y);
x = x ^ y;
y = x ^ y;
x = x ^ y;
System.out.printf("After swapping, 'x': %d, 'y': %d %n", x, y); }
Heap sort is a comparison-based sorting technique based on the binary heap data structure.
Steps:
Build a max heap.
The largest item is stored at the root of the heap. Replace it with the last item of the heap, and reduce the size of the heap by 1.
Then heapify the root of the tree.
Repeat steps 2 and 3 while the size of the heap is greater than 1.
Heapify: heapify is the process of converting a binary tree into a heap data structure.
It is the operation in which we reorder the tree based on the heap property.
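The steps above can be sketched in Python with an array-based max heap (children of index i at 2i+1 and 2i+2):

```python
def heapify(arr, n, i):
    # Sift down: restore the max-heap property for the subtree rooted at i
    largest = i
    l, r = 2 * i + 1, 2 * i + 2
    if l < n and arr[l] > arr[largest]:
        largest = l
    if r < n and arr[r] > arr[largest]:
        largest = r
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    # Step 1: build the max heap
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    # Steps 2-4: repeatedly move the root (largest) to the end and re-heapify
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)

a = [12, 11, 13, 5, 6, 7]
heap_sort(a)
print(a)  # → [5, 6, 7, 11, 12, 13]
```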
Question 38. Write a C program to check whether a number is a palindrome or not.
#include<stdio.h>
int main()
{
int n,r,sum=0,temp;
printf("enter the number=");
scanf("%d",&n);
temp=n;
while(n>0)
{
r=n%10;
sum=(sum*10)+r;
n=n/10;
}
if(temp==sum)
printf("palindrome number ");
else
printf("not palindrome");
return 0;
}
Question 39. Write a C program to check whether two strings are anagrams or not.
#include <stdio.h>
int check_anagram(char [], char []);
int main()
{
char string1[10000], string2[10000];
printf("Enter two strings\n");
scanf("%9999s", string1);
scanf("%9999s", string2);
if (check_anagram(string1, string2))
printf("The strings are anagrams.\n");
else
printf("The strings aren't anagrams.\n");
return 0;
}
int check_anagram(char a[], char b[]) // assumes lowercase alphabetic strings
{
int first[26] = {0}, second[26] = {0}, c = 0;
while (a[c] != '\0') {
first[a[c]-'a']++;
c++;
}
c = 0;
while (b[c] != '\0') {
second[b[c]-'a']++;
c++;
}
for (c = 0; c < 26; c++)
if (first[c] != second[c])
return 0;
return 1;
}
int A(int n)
{
int count = 0;
for (int i = n; i > 0; i /= 2)
for (int j = 0; j < i; j++)
count += 1;
return count;
}
For an input integer n, the innermost statement of A() is executed n + n/2 + n/4 + ... + 1 times. So the time complexity T(n) can be written as
T(n) = O(n + n/2 + n/4 + ... + 1) = O(n).
The final value of count is also n + n/2 + n/4 + ... + 1.
FLOYD’S ALGORITHM Given a weighted connected graph (undirected or directed), the all-pairs shortest paths problem asks to find the distances—i.e., the lengths of the shortest paths— from each vertex to all other vertices.
ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd‘s algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph with no negative-length cycle
//Output: The distance matrix of the shortest paths‘ lengths
D ←W //is not necessary if W can be overwritten
for k←1 to n do
for i ←1 to n do
for j ←1 to n do
D[i, j ]←min{D[i, j ], D[i, k]+ D[k, j]}
return D
Efficiency of Floyd‘s Algorithm: Time efficiency O(n^3) and Space Efficiency is O(n^2)
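A direct Python transcription of Floyd's pseudocode, run on a small hypothetical weight matrix (INF marks a missing edge):

```python
INF = float('inf')

def floyd(W):
    # W: n x n weight matrix of a graph with no negative-length cycle
    n = len(W)
    D = [row[:] for row in W]  # copying is not necessary if W can be overwritten
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

W = [[0,   3,   INF],
     [INF, 0,   2],
     [7,   INF, 0]]
print(floyd(W))  # → [[0, 3, 5], [9, 0, 2], [7, 10, 0]]
```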
Example: applying the conversion rules above to the infix expression (A + B) * (C ^ (D - E) + F) - G:
current symbol | operator stack | postfix string
(              | (              |
A              | (              | A
+              | ( +            | A
B              | ( +            | A B
)              |                | A B +
*              | *              | A B +
(              | * (            | A B +
C              | * (            | A B + C
^              | * ( ^          | A B + C
(              | * ( ^ (        | A B + C
D              | * ( ^ (        | A B + C D
-              | * ( ^ ( -      | A B + C D
E              | * ( ^ ( -      | A B + C D E
)              | * ( ^          | A B + C D E -
+              | * ( +          | A B + C D E - ^
F              | * ( +          | A B + C D E - ^ F
)              | *              | A B + C D E - ^ F +
-              | -              | A B + C D E - ^ F + *
G              | -              | A B + C D E - ^ F + * G
(end)          |                | A B + C D E - ^ F + * G -
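One way to implement the conversion in Python is the standard shunting-yard algorithm, treating ^ as right-associative (the precedence table is an assumption, matching the trace above):

```python
def infix_to_postfix(expr):
    prec = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
    right_assoc = {'^'}
    stack, out = [], []
    for ch in expr:
        if ch.isalnum():                 # operand: append immediately
            out.append(ch)
        elif ch == '(':
            stack.append(ch)
        elif ch == ')':                  # pop down to the matching '('
            while stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()                  # discard the '('
        elif ch in prec:
            # pop operators above the last '(' with higher precedence,
            # or equal precedence when the new operator is left-associative
            while (stack and stack[-1] != '(' and
                   (prec[stack[-1]] > prec[ch] or
                    (prec[stack[-1]] == prec[ch] and ch not in right_assoc))):
                out.append(stack.pop())
            stack.append(ch)
    while stack:                         # flush the remaining operators
        out.append(stack.pop())
    return ''.join(out)

print(infix_to_postfix("(A+B)*(C^(D-E)+F)-G"))  # → AB+CDE-^F+*G-
```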
Evaluating the postfix expression 20 30 + 50 40 - * using a stack:
Step 1: Push 20
Step 2: Push 30
Step 3: Pop 30, pop 20, push (20 + 30 = 50)
Step 4: Push 50
Step 5: Push 40
Step 6: Pop 40, pop 50, push (50 - 40 = 10)
Step 7: Pop 10, pop 50, push (50 * 10 = 500)
Ans = 500
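The evaluation procedure can be sketched in Python as:

```python
def eval_postfix(tokens):
    # tokens: list like ['20', '30', '+', '50', '40', '-', '*']
    stack = []
    ops = {'+': lambda x, y: x + y,
           '-': lambda x, y: x - y,
           '*': lambda x, y: x * y,
           '/': lambda x, y: x / y}
    for t in tokens:
        if t in ops:
            y = stack.pop()   # the right operand is popped first
            x = stack.pop()
            stack.append(ops[t](x, y))
        else:
            stack.append(int(t))
    return stack.pop()

print(eval_postfix('20 30 + 50 40 - *'.split()))  # → 500
```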
T(n) = 2T(n/2) + n,  n > 1
We guess the solution is O(n log n). Thus, for some constant c,
T(n) ≤ c n log n.
Substituting this into the given recurrence equation:
T(n) ≤ 2c(n/2) log(n/2) + n
≤ c n log n − c n log 2 + n
= c n log n − n (c log 2 − 1)
≤ c n log n    (for c ≥ 1)
Thus T(n) = O(n log n).
These algorithm interview questions should give you an insight into the kind of questions that could be asked.