
Getting ready for your next technical interview? As a software developer, you must be well-versed in sorting algorithms: they're a frequent feature of coding interviews, and they'll help you solve many complex coding problems.

In this article, we'll look at the differences between three sorting algorithms — selection sort, bubble sort, and insertion sort. Knowing these differences will help you choose the right algorithm for the problem you're solving.

For each algorithm, we’ll cover:

- What it is
- The algorithm
- The pseudocode

Then, we’ll compare the three algorithms based on:

- Time complexity
- Space complexity
- Stability
- What they work well on
- Application
- Efficiency
- Disadvantages

We’ve also included a **Selection Sort vs. Bubble Sort vs. Insertion Sort Cheat Sheet** to help you review key differences between these algorithms at a glance.

**Selection sort** is a comparison-based sorting algorithm. It divides the input array into two subarrays:

- Sorted subarray, which is initially empty
- Unsorted subarray, which initially contains all the elements

In every iteration, we pick the minimum element (considering we have to sort the array in increasing order) from the unsorted subarray and place it at the end of the sorted subarray. We repeat this entire process until the unsorted array becomes empty.

Following are the steps to sort an array in increasing order using selection sort:

- Step 1: Iterate over the unsorted subarray and pick the minimum element
- Step 2: Place this minimum element at the end of the sorted subarray
- Step 3: Repeat the above steps until the unsorted subarray becomes empty (or a total of N times, where N is the size of the input array)

Following is the pseudocode to sort an array in ascending order using selection sort. We have followed 1-based indexing for the array.
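As a stand-in for the pseudocode, here is a minimal Python sketch of the same steps (a hedged illustration, not the article's original listing; Python lists are 0-based rather than 1-based, and the name `selection_sort` is ours):

```python
def selection_sort(arr):
    """Sort arr in ascending order, in place, using selection sort."""
    n = len(arr)
    for i in range(n - 1):
        # Step 1: find the minimum element in the unsorted subarray arr[i:].
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Step 2: place it at the end of the sorted subarray arr[:i].
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```

Note that the outer loop runs N - 1 times: after the first N - 1 elements are placed, the last element is automatically in position.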

Bubble sort is also a comparison-based sorting algorithm. It makes several passes over the input array and, in each pass, swaps adjacent elements that are not in the required order (ascending or descending). It keeps making passes until the entire array is sorted (at most N - 1 passes in the worst case, where N is the size of the array).

Following are the steps to sort an array containing N elements in increasing order using bubble sort.

**Step 1:** Consider the first two elements. If the first element is greater than the second, we swap them; else, we leave them as is.

**Step 2:** Next, we consider the second and third elements and repeat the same process as in step 1. We do this until we compare the last two elements of the input array.

The first pass is complete — the largest element is now at the end of the array (it gets swapped rightward with each adjacent comparison it wins). So in the next pass, we no longer need to compare the last two elements. Formally, in the i-th pass, the last (i - 1) elements are already correctly placed, so we only need to compare elements from the 1st position to the (N - i + 1)-th position.

**Step 3:** We repeat the above process a total of N - 1 times.

**Optimizing bubble sort:** If, in a pass, we make no swaps, the array is already sorted and we don't need to make any further passes.

Following is the pseudocode to sort an array in ascending order using bubble sort. We have followed 1-based indexing for the array.
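As a stand-in for the pseudocode, here is a minimal Python sketch of the passes described above, including the early-exit optimization (a hedged illustration; 0-based indexing and the name `bubble_sort` are ours):

```python
def bubble_sort(arr):
    """Sort arr in ascending order, in place, using bubble sort."""
    n = len(arr)
    for i in range(n - 1):           # at most N - 1 passes
        swapped = False
        # After i passes, the last i elements are already in place,
        # so compare only up to index n - 1 - i.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:              # no swaps in this pass: already sorted
            break
    return arr
```

On an already-sorted input, the `swapped` flag stops the procedure after a single pass, giving the O(N) best case discussed below.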

**Insertion sort** is also a comparison-based sorting algorithm. It also maintains two subarrays: a sorted subarray that initially contains only the first element and an unsorted subarray that contains the rest of the array.

In this method, we pick each element from the unsorted part one by one and insert it in the sorted part at its correct place.

Following are the steps to sort an array containing N elements in increasing order using insertion sort.

**Step 1:** Consider the first element to be a sorted subarray and the rest an unsorted subarray.

**Step 2:** Pick the first element from the unsorted part.

**Step 3:** Iterate over the sorted subarray and find the correct place to insert the picked element.

**Step 4:** Shift all elements after this position (if any) in the sorted subarray one place to the right, and insert the picked element at the chosen place.

Repeat steps 2, 3, and 4 until there is no element left in the unsorted subarray (precisely N - 1 times, as that is how many elements the unsorted part contains initially).

Following is the pseudocode to sort an array in ascending order using insertion sort. We have followed 1-based indexing for the array.
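As a stand-in for the pseudocode, here is a minimal Python sketch of the insertion steps above (a hedged illustration; 0-based indexing and the name `insertion_sort` are ours):

```python
def insertion_sort(arr):
    """Sort arr in ascending order, in place, using insertion sort."""
    for i in range(1, len(arr)):
        key = arr[i]          # first element of the unsorted subarray
        j = i - 1
        # Shift larger elements of the sorted subarray one place right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key      # insert key at its correct position
    return arr
```

On an already-sorted input, the `while` loop never runs, so each element costs a single comparison, which is the O(N) best case.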

If you need more information, we have covered each of these algorithms in great detail, along with code implementations and examples, in the following articles: Selection Sort, Bubble Sort, Insertion Sort.

**Best-case time complexity:**

- **Selection sort:** O(N²). Even in the best case, finding the minimum element in every iteration requires traversing the entire unsorted subarray.
- **Bubble sort:** O(N). This happens when the array is already sorted: with the optimization above, we stop the procedure after a single pass.
- **Insertion sort:** O(N). This occurs when the array is already sorted: each element is already at its correct position, so no shifts are required.

**Worst-case time complexity:**

- **Selection sort:** O(N²), since finding the minimum element in every iteration requires traversing the entire unsorted subarray.
- **Bubble sort:** O(N²). This happens when the array is reverse sorted, as we then have to make all N - 1 passes.
- **Insertion sort:** O(N²). This occurs when the array is reverse sorted, as finding the correct position for each element then requires traversing the entire sorted subarray.

For random datasets, **all three algorithms have O(N²) time complexity.** The expected (average) number of inversions in a randomly generated dataset of size N with distinct elements is N * (N - 1) / 4.

**In bubble sort and insertion sort,** we swap adjacent elements, which reduces the inversion count by exactly 1 per swap, and in one pass we can make at most N - 1 adjacent swaps. So to bring the inversion count to zero, we need multiple passes (around N/4 on average), and hence the complexity is O(N²).

**In selection sort,** we make just one (non-adjacent) swap per iteration and place an element at its correct position. A single such swap can reduce the inversion count by up to N - 1, but finding that element requires traversing the entire unsorted subarray each time, which again adds up to O(N²).
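To make the inversion figure concrete, the following small Python check (illustrative only; `count_inversions` is our helper) verifies that the average inversion count over all permutations of N = 4 distinct elements is N * (N - 1) / 4 = 3:

```python
from itertools import permutations

def count_inversions(seq):
    """Count pairs (i, j) with i < j and seq[i] > seq[j]."""
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq))
                 if seq[i] > seq[j])

n = 4
perms = list(permutations(range(n)))
avg = sum(count_inversions(p) for p in perms) / len(perms)
print(avg)  # 3.0, which equals n * (n - 1) / 4
```

Each adjacent swap removes exactly one inversion, which is why bubble sort and insertion sort need on the order of N²/4 swaps on random input.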

**Space complexity:** O(1) for all three algorithms, as none of them needs any extra space to sort the input, making them in-place algorithms.

**Stability:**

- **Bubble sort** and **insertion sort** are stable, whereas **selection sort** isn't.
- Selection sort can be made stable by incorporating the indices of equal elements when comparing and sorting them. But then the space complexity increases from O(1) to O(N), where N is the size of the array.

- When swapping elements, we also swap their indices, which are stored in another array. When finding the appropriate element to place next, we prefer lesser elements (assuming we have to sort in ascending order), breaking ties between equal values in favor of the lesser index.
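One way to sketch this stabilized selection sort in Python (a hedged sketch; here we tag each element with its original index in a tuple, rather than keeping a separate index array, which uses the same O(N) extra space):

```python
def stable_selection_sort(arr):
    """Selection sort made stable by breaking value ties on original index."""
    tagged = [(value, idx) for idx, value in enumerate(arr)]  # O(N) extra space
    n = len(tagged)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):
            # Tuple comparison: smaller value wins; equal values are
            # ordered by their original index, which preserves stability.
            if tagged[j] < tagged[min_idx]:
                min_idx = j
        tagged[i], tagged[min_idx] = tagged[min_idx], tagged[i]
    return [value for value, _ in tagged]
```

Because ties are always resolved in favor of the smaller original index, equal elements come out in their original relative order.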

**What they work well on:**

- **Selection sort** performs similarly on all kinds of datasets.
- **Bubble sort** and **insertion sort** are **adaptive**: they perform fewer operations when given a partially sorted array as input.

**Applications:**

- **Selection sort** can be used to sort a linked list, as we can efficiently remove the smallest element and append it to the sorted list.
- **Bubble sort** can detect minor errors in a sorted dataset and sort an almost-sorted dataset quickly.
- **Insertion sort** is used to sort online data — it can start the sorting process even if the complete dataset isn't available.

**Efficiency:**

- **Selection sort** is efficient where the swap operation is costly, as it makes at most N - 1 swaps for an array of size N.
- **Bubble sort** is the simplest stable in-place sorting algorithm and very easy to code.
- **Insertion sort** makes fewer comparisons than the other two algorithms and hence is efficient where the comparison operation is costly.

**Disadvantages:**

- All three algorithms have quadratic worst-case time complexity and hence work slowly on large datasets.
- Bubble sort makes many comparisons and swaps and thus is not efficient if swapping or comparison is a costly operation.
- Insertion sort makes a lot of swaps and thus becomes inefficient if swapping operations are costly.

You can refer to the following sheet to see the comparison among selection sort, bubble sort, and insertion sort at a glance.

**Question 1: Out of Bubble sort, selection sort, and insertion sort, which sorting algorithm should be used if swapping between two memory locations is expensive?**

Answer: Selection sort should be used in such cases. For an array of size N, it makes at most N - 1 swaps in the worst case, as opposed to up to N * (N - 1) / 2 swaps in bubble sort and insertion sort.

**Question 2: In which context is insertion sort better than selection sort?**

Answer: Insertion sort is stable and adaptive, while selection sort is not. Also, insertion sort is online, meaning that it can start sorting data even if the data isn’t fully available in the beginning.

**Question 3: What is common among bubble sort, selection sort, and insertion sort?**

Answer: **Bubble sort, selection sort, and insertion sort** are all comparison-based algorithms. All three have the same worst-case and average-case time complexity, O(N²). All of them are in-place as well.

Sorting algorithms interview questions feature in almost every coding interview for software developers. If you’re looking for guidance and help to nail these questions and more, sign up for our free webinar.

As pioneers in the field of technical interview prep, we have trained thousands of software engineers to crack the toughest coding interviews and land jobs at their dream companies, such as Google, Facebook, Apple, Netflix, Amazon, and more!

--------

*Article contributed by Divyansh Gupta*