 ### Attend our Free Webinar on How to Nail Your Next Technical Interview

WEBINAR +LIVE Q&A

## How To Nail Your Next Tech Interview
Hosted by Ryan Valles, Founder, Interview Kickstart

• Our tried & tested strategy for cracking interviews
• How the FAANG hiring process works
• The 4 areas you must prepare for
• How you can accelerate your learnings

# Learn In-Place Merge Sort

In-place merge sort is an important sorting algorithm to learn. Especially if you are a software developer preparing for your next tech interview, this is one algorithm you must review. Sorting algorithm questions are a big part of coding interviews at tech companies. In this article, we will help you get one step closer to cracking that interview.

We’ll cover:

• What Is In-Place Merge Sort?
• When and Why Should It Be Used?
• Approach 1: Simple Brute Force
• Approach 2: Efficient Approach Using Shell Sorting
• FAQs on In-Place Merge Sort

For each approach, we’ll cover:

• How it works
• Algorithm
• Pseudocode
• Code
• Complexities
• Strengths and weaknesses

## What Is In-Place Merge Sort?

Merge sort is an efficient way of sorting a list of elements. It is a comparison-based sorting algorithm. It belongs to the divide-and-conquer paradigm, wherein we divide the problem into subproblems, solve them individually and combine their solutions to form the solution to the original problem.

An in-place algorithm processes the input and produces the output in the same memory location that contains the input without using any auxiliary space. However, a constant amount of extra space is allowed to be used.

The standard implementation of merge sort is not in-place, but we can make it in-place by modifying the way we merge the lists. However, this affects the run-time complexity of the algorithm.

In short, standard merge sort with a modified method to merge the lists in-place is called in-place merge sort.

In this article, we’ll mainly focus on modifying the merge method of the merge sort algorithm. So, before you read this, please brush up on the standard merge sort algorithm.

## When and Why Should In-Place Merge Sort Be Used?

We are often required to sort enormous datasets for various purposes. It becomes tough to work with O(n) extra memory space on systems where the memory is very limited. The best option in such cases is to use an efficient algorithm requiring a constant amount of additional memory space, O(1). An in-place sorting algorithm, such as in-place merge sort, is a practical solution to this problem.

## Before We Begin ...

Before we get into the details of how to make merge sort in-place, have a look at the pseudocode below — this is the pseudocode of in-place merge sort algorithm without the InPlaceMerge procedure.

We will discuss two approaches to merge the lists in-place; we will write the InPlaceMerge pseudocode for each approach separately.

function InPlaceMergeSort(arr, start, end)
    if start = end
        return

    mid := (start + end) / 2

    InPlaceMergeSort(arr, start, mid)
    InPlaceMergeSort(arr, mid + 1, end)

    InPlaceMerge(arr, start, mid, end)
end function

And the following is the C++ code to demonstrate the in-place merge sort algorithm without the merge function. We will write the modified code for each approach separately.

// C++ code to demonstrate In-Place Merge sort algorithm

#include <bits/stdc++.h>
using namespace std;

// merges the sorted subarrays arr[start, mid] and arr[mid + 1, end];
// implemented separately for each approach below
void InPlaceMerge(int arr[], int start, int mid, int end);

void InPlaceMergeSort(int arr[], int low, int high) {

    // if the size of the array is 1, it is already sorted
    // and we need not do anything
    if(low == high) {
        return;
    }

    // we divide the array [low, high] into two almost equal halves
    // [low, mid] and [mid + 1, high]; in case of an odd-length array,
    // the sizes of the halves differ by 1
    // mid is the midpoint of the range [low, high]
    int mid = (low + high) / 2;

    // recursive call to sort arr[low, mid]
    InPlaceMergeSort(arr, low, mid);

    // recursive call to sort arr[mid + 1, high]
    InPlaceMergeSort(arr, mid + 1, high);

    // merging both sorted subarrays arr[low, mid] and arr[mid + 1, high]
    // to form the sorted array [low, high]
    InPlaceMerge(arr, low, mid, high);
}

void InPlaceMergeSort(int arr[], int n) {
    InPlaceMergeSort(arr, 0, n - 1);
}

int main() {

    int arr[] = {5, 4, 1, 6, 3, 2, 8, 7};
    int n = sizeof(arr) / sizeof(int);

    cout << "Array before sorting : \n";
    for(int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    cout << "\n\n";

    InPlaceMergeSort(arr, n);

    cout << "Array after sorting : \n";
    for(int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }

    return 0;
}

## Approach 1: Simple Brute Force

### How Does Simple Brute Force Work?

We start by comparing the starting element of both the sublists. If the current element of the left sublist is at the desired position, we move to the next element of the left sublist. If not, we perform a right cyclic shift on the subarray containing all the elements —  from the left sublist’s current element to the right sublist’s current element.

### Algorithm for Simple Brute Force

We maintain two pointers (p1 and p2), pointing to the first unprocessed element of the left and right sublist, respectively.

1. If all the elements of the left or right sublist are processed, we terminate the algorithm.
2. Else, we compare the element at p1 and p2.
3. If the element at p1 is less than or equal to the element at p2, we increment p1 by 1.
4. Else, we shift all the elements between p1 and p2 (including the one at p1, but not the one at p2) to the right by 1 and move the element at p2 to position p1. We then increment p1 and p2 by 1. We also increment mid, the endpoint of the left sublist, by 1, because one element from the right sublist has moved into the left sublist, increasing its size by 1.

Input:  list[ ] = 3 6 8 11 4 7 9 12

left sublist[ ] = 3 6 8 11

right sublist[ ] = 4 7 9 12

“Mid” is the endpoint of the left sublist. We initialize p1 as 0 and p2 as mid+1 = 3+1 = 4.

Step 1: arr[0] = 3 <= arr[4] = 4, so we increment p1 to 1. List: 3 6 8 11 4 7 9 12

Step 2: arr[1] = 6 > arr[4] = 4, so we right-shift arr[1..3] and place 4 at index 1. List: 3 4 6 8 11 7 9 12 (p1 = 2, p2 = 5, mid = 4)

Step 3: arr[2] = 6 <= arr[5] = 7, so we increment p1 to 3. List: 3 4 6 8 11 7 9 12

Step 4: arr[3] = 8 > arr[5] = 7, so we shift and place 7 at index 3. List: 3 4 6 7 8 11 9 12 (p1 = 4, p2 = 6, mid = 5)

Step 5: arr[4] = 8 <= arr[6] = 9, so we increment p1 to 5. List: 3 4 6 7 8 11 9 12

Step 6: arr[5] = 11 > arr[6] = 9, so we shift and place 9 at index 5. List: 3 4 6 7 8 9 11 12 (p1 = 6, p2 = 7, mid = 6)

Step 7: arr[6] = 11 <= arr[7] = 12, so we increment p1 to 7. List: 3 4 6 7 8 9 11 12

Step 8: p1 > mid, so all the elements of the left sublist are processed and we terminate.

Output: 3 4 6 7 8 9 11 12

### InPlaceMerge Pseudocode for Simple Brute Force Approach

Here’s the pseudocode for the simple brute force approach for in-place merge sort.

function InPlaceMerge(arr, start, mid, end)
    p1 := start
    p2 := mid + 1
    while p1 <= mid and p2 <= end:
        if arr[p1] <= arr[p2]:
            p1 := p1 + 1
        else
            temp := arr[p2]
            idx := p2
            while idx > p1:
                arr[idx] := arr[idx - 1]
                idx := idx - 1
            end while
            arr[p1] := temp
            p1 := p1 + 1
            p2 := p2 + 1
            mid := mid + 1
    end while
end function

### InPlaceMerge Code for the Simple Brute Force Approach (C++)

We have implemented the simple brute force approach for in-place merge sort in C++. Feel free to use the same logic to implement it in a programming language of your choice.

void InPlaceMerge(int arr[], int start, int mid, int end) {
    // p1 stores the index of the first unprocessed element of the left sublist
    int p1 = start;
    // p2 stores the index of the first unprocessed element of the right sublist
    int p2 = mid + 1;

    // while both sublists are non-empty
    while(p1 <= mid && p2 <= end) {
        if(arr[p1] <= arr[p2]) {
            // arr[p1] is at the correct position
            // we simply move to the next element
            p1 = p1 + 1;
        }
        else {
            int temp = arr[p2];
            int idx = p2;

            // shifting all the elements in arr[p1, p2 - 1] to the right by 1
            while(idx > p1) {
                arr[idx] = arr[idx - 1];
                idx = idx - 1;
            }

            // moving arr[p2], which was stored in temp, to position p1
            arr[p1] = temp;

            // incrementing all the pointers by 1
            p1 = p1 + 1;
            p2 = p2 + 1;
            mid = mid + 1;
        }
    }
}

### Time Complexity

The time complexity of this approach for in-place merge sorting is O(n^2). This is how it’s calculated:

#### First, Calculate the Time Complexity of Merging Two Lists

The worst-case occurs when even the largest element of the right sublist is smaller than the smallest element of the left sublist.

Since the difference in the sizes of the sublists can't be more than 1, let's consider both sublists to have size m. In the worst case, every comparison shifts all m elements of the left sublist, once for each of the m elements of the right sublist. So, the total number of operations will be (m + m + m + ... + m) = m * m = m^2.

#### Then, Calculate the Time Taken to Merge All the Lists in a Row

Let’s consider a row at depth “d.” There will be a total of 2^d sublists, each of size n / 2^d.

Time required to merge a pair of consecutive sublists = (n / 2^d) * (n / 2^d) = (n / 2^d)^2

Time required to merge all pairs of consecutive sublists = (n / 2^d)^2 * (2^d / 2) = n^2 / 2^(d+1)

#### Finally, Put It All Together

The overall time complexity of InPlaceMergeSort
= sum_(d = 1 to log_2(n)) n^2 / 2^(d+1)
= (n^2 / 2) * sum_(d = 1 to log_2(n)) (1/2)^d

Using the sum of a GP, a + ar + ar^2 + .... + ar^(k-1) = a(1 - r^k) / (1 - r):

sum_(d = 1 to log_2(n)) (1/2)^d = (1/2)(1 - (1/2)^log_2(n)) / (1 - 1/2) = 1 - (1/2)^log_2(n) = 1 - 1/n

Therefore, the overall time complexity

= (n^2 / 2) * (1 - 1/n)

= n^2 / 2 - n / 2

= O(n^2)

### Space Complexity

The space complexity of this approach for in-place merge sorting is O(1) since we have used a constant amount of additional memory.

### Strengths and Weaknesses

This approach is easier to understand and implement. However, it is not an efficient implementation of in-place merge sort in terms of time complexity. Also, it is a lot slower for larger lists compared to other sorting algorithms.

## Approach 2: Efficient Approach Using Shell Sorting

Note: This idea is similar to the standard shell sorting, but it is not the same.

### How Does Shell Sorting Work?

Shell sorting algorithm is very similar to the insertion sort algorithm. But unlike insertion sort, where we only compare adjacent elements to sort the list, in shell sort, elements at different intervals are compared to find the correct position of every element.

We first select a large interval “h” and sort the elements that are h distance apart. We then successively reduce the value of h and repeat the process for the reduced interval, until h reduces to 1.

### Algorithm for Merge Sort With Shell Sort

1. We start with the value of “h” equal to ceil(length of the list / 2).
2. We iterate through the whole list and compare each element to the element at a distance of h from it. If they are in the correct order (the current element <= the element at distance h), we continue; otherwise, we swap them.
3. If h is equal to 1, we terminate the algorithm; otherwise, we reduce h to ceil(h / 2) and repeat step 2.

Input:  list[ ] = 3 6 8 14 4 7 9 12

left sublist[ ] = 3 6 8 14

right sublist[ ] = 4 7 9 12

Step 1: h = 4; we compare elements that are 4 positions apart. The pairs (3, 4), (6, 7) and (8, 9) are in order; 14 > 12, so we swap them. List: 3 6 8 12 4 7 9 14

Now, h = ceil(h/2) = ceil(4/2) = 2

Step 2: h = 2; we compare elements that are 2 positions apart. 8 > 4 and 12 > 7, so we swap those pairs. List: 3 6 4 7 8 12 9 14

h = ceil(h/2) = ceil(2/2) = 1

Step 3: h = 1; we compare adjacent elements. 6 > 4 and 12 > 9, so we swap those pairs. List: 3 4 6 7 8 9 12 14

Output: 3 4 6 7 8 9 12 14

### InPlaceMerge Pseudocode for Shell Sort Approach

Following is the pseudocode for in-place merge sort using shell sort.

function InPlaceMerge(arr, start, mid, end)
    len := end - start + 1
    h := ceil(len / 2.0)
    while h >= 1:
        i := start
        while i + h <= end:
            if arr[i] > arr[i + h]:
                swap(arr[i], arr[i + h])
            i := i + 1
        end while
        if h == 1:
            break
        h := ceil(h / 2.0)
    end while
end function

### InPlaceMerge Code for Shell Sort Approach (C++)

We’re using C++ to demonstrate the implementation of in-place merge sort using shell sort.

// C++ code to demonstrate In-Place Merge sort algorithm

#include <bits/stdc++.h>
using namespace std;

void InPlaceMerge(int arr[], int start, int mid, int end) {
    // len is the length of the combined list
    int len = end - start + 1;

    // h is the interval length for which we will sort the list
    int h = ceil(len / 2.0);

    while(h >= 1) {
        int idx = start;

        // iterate through all the elements that have another
        // element at distance h from them
        while(idx + h <= end) {

            // swap if the element is greater than the element
            // at distance h from it
            if(arr[idx] > arr[idx + h]) {
                swap(arr[idx], arr[idx + h]);
            }

            idx = idx + 1;
        }

        // we need to break when h = 1, otherwise the loop will
        // run infinitely since ceil(1 / 2.0) = 1
        if(h == 1) {
            break;
        }

        // reduce h to ceil(h / 2)
        h = ceil(h / 2.0);
    }
}

void InPlaceMergeSort(int arr[], int low, int high) {

    // if the size of the array is 1, it is already sorted
    // and we need not do anything
    if(low == high) {
        return;
    }

    // we divide the array [low, high] into two almost equal halves
    // [low, mid] and [mid + 1, high]; in case of an odd-length array,
    // the sizes of the halves differ by 1
    // mid is the midpoint of the range [low, high]
    int mid = (low + high) / 2;

    // recursive call to sort arr[low, mid]
    InPlaceMergeSort(arr, low, mid);

    // recursive call to sort arr[mid + 1, high]
    InPlaceMergeSort(arr, mid + 1, high);

    // merging both sorted subarrays arr[low, mid] and arr[mid + 1, high]
    // to form the sorted array [low, high]
    InPlaceMerge(arr, low, mid, high);
}

void InPlaceMergeSort(int arr[], int n) {
    InPlaceMergeSort(arr, 0, n - 1);
}

int main() {

    int arr[] = {5, 4, 1, 6, 3, 2, 8, 7};
    int n = sizeof(arr) / sizeof(int);

    cout << "Array before sorting : \n";
    for(int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }
    cout << "\n\n";

    InPlaceMergeSort(arr, n);

    cout << "Array after sorting : \n";
    for(int i = 0; i < n; i++) {
        cout << arr[i] << " ";
    }

    return 0;
}

### Time Complexity

The time complexity of this approach for in-place merge sorting is O(n (log_2(n))^2). This is how it’s calculated:

We iterate through the whole array for different values of h.

Initially, h = n / 2

After every step, h is reduced to h / 2

We stop when h becomes equal to 1

Therefore, (n / 2) / 2^steps = 1

⇒ n / 2 = 2^steps

⇒ n = 2^(steps + 1)

⇒ log_2(n) = steps + 1

⇒ steps = log_2(n) - 1

⇒ steps = O(log_2(n))

Each value of h requires one pass over the array, so the overall time complexity of the InPlaceMerge function = n * steps = O(n log_2(n))

The merge sort recursion has log_2(n) levels of merging, so the overall time complexity of in-place merge sort = log_2(n) * O(n log_2(n)) = O(n (log_2(n))^2)

### Space Complexity

The space complexity of this approach for in-place merge sorting is O(1), since we haven’t used any auxiliary data structure.

### Strengths and Weaknesses

This approach is an efficient implementation of in-place merge sort in terms of time complexity. However, it is slower for smaller lists compared to other sorting algorithms.

## FAQs on In-Place Merge Sort

Question 1: Is the in-place merge sort with the simple brute force method to merge the sublists a stable sorting algorithm?

Answer: A sorting algorithm is said to be stable if the relative order of any two equal elements in the original list and the sorted list stays the same.

Yes. The brute force method produces a stable sort because we compare the elements sequentially and shift only when the element in the right sublist is strictly smaller than the element in the left sublist. When the elements being compared are equal, we simply move past the left element, which maintains the relative order of the two equal elements.

Question 2: Is the in-place merge sort with the shell sorting method to merge the sublists a stable sorting algorithm?

Answer: The method using shell sorting to merge the sublists does not produce a stable sort, since shell sort is itself unstable. Shell sort is unstable because it doesn’t take into account the elements in between while swapping two elements.

For example, Input: 4_0  3_1  3_2

Initially, h = ceil(3 / 2) = 2

So, we compare 4_0 and 3_2 and swap them. We don’t take into account that there is also a 3_1 between indices 0 and 2.

Output: 3_2  3_1 4_0

This output is not stable since the relative order of the two equal elements 3_1 and 3_2 is not the same as the input.

Note: The subscript denotes the index of the element in the original array.

If you’re looking for guidance and help with getting started, sign up for our free webinar. As pioneers in the field of technical interview prep, we have trained thousands of software engineers to crack the toughest coding interviews and land jobs at their dream companies, such as Google, Facebook, Apple, Netflix, Amazon, and more!

• Read more articles on data structures and algorithms: Learn
• Practice the most popular tech interview problems: Problems
• Find out what questions FAANG and Tier-1 tech companies ask at tech interviews: Interview Questions

-------------

Article contributed by Taara Sinh Aatrey
