Time and Space Complexities of Sorting Algorithms Explained

Last updated by Utkarsh Sahu on Sep 25, 2024 at 11:02 PM | Reading time: 13 minutes

Knowing the time and space complexities of the common sorting algorithms is essential for any software developer preparing for the interview rounds at a tech company. This knowledge also comes in handy whenever you need to decide which approach to take to solve a particular problem.

In this article, we are going to discuss:

  • What Is Time Complexity?
  • What Is Space Complexity?
  • Why Are Time and Space Complexities Important?
  • Time and Space Complexities of Common Sorting Algorithms
  • Cheat Sheet
Note: We have several other articles where sorting algorithms are discussed in great detail, including their implementations, code, examples, pros, and cons. Head to the Learn page for more!

What Is Time Complexity?

The time complexity of an algorithm describes the amount of time it takes to run, expressed in terms of the characteristics of the input.

In other words, we can say time complexity is an approximation of the total number of elementary operations (arithmetic/bitwise instructions, memory referencing, control flow, etc.) executed throughout the program. It is assumed that each elementary operation takes a fixed amount of time to perform.
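
For example, we can count the elementary operations in a simple summation loop (a minimal, illustrative sketch; the function and the exact operation counts are for illustration only):

    def sum_list(nums):
        total = 0        # 1 elementary operation
        for x in nums:   # the loop body runs n times
            total += x   # a constant number of operations per iteration
        return total     # 1 elementary operation

    # Roughly c * n + 2 elementary operations in total, for some constant c,
    # so the running time grows linearly with the input size: O(n).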

Following are the various notations used for expressing time complexity:

  • Big O Notation, O: The notation O(n) expresses the upper bound of an algorithm's running time.
  • Omega Notation, Ω: The notation Ω(n) expresses the lower bound of an algorithm's running time.
  • Theta Notation, θ: The notation θ(n) bounds the running time from both above and below, expressing the exact asymptotic behavior of an algorithm's running time.

Generally, an algorithm's performance depends heavily on the input data and its ordering; the worst-case time complexity is therefore the one normally quoted, because it is usually impossible to predict all variations in the input data.

What Is Space Complexity?

The space complexity of an algorithm describes the amount of memory an algorithm takes to run in terms of the characteristics of the input. In other words, we can say space complexity is the approximate total extra space required by the program to run.

Why Are Time and Space Complexities Important?

In real-world applications, we are bound by the physical memory and computation power of the systems that we intend to run on. This is where space and time complexities become important because we never want to run a function or process that exceeds the amount of space the system has at any given time. 

We also don’t want our functions to take so long that our applications get bogged down and slowed. So we tend to choose an algorithm that is best suited for a specific problem and fits within our limit of space and time.

Time and Space Complexities of Common Sorting Algorithms

We've covered the time and space complexities of 9 popular sorting algorithms: Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quicksort, Heap Sort, Counting Sort, Radix Sort, and Bucket Sort.

1. Bubble Sort 

In bubble sort, we compare each pair of adjacent elements. If they are not in the correct order, we swap them. We keep repeating this step until no swaps are needed, which indicates that all the elements are sorted.
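
A minimal Python sketch of this process, with the early-exit check that produces the best case discussed below:

    def bubble_sort(arr):
        n = len(arr)
        for i in range(n - 1):
            swapped = False
            # After i passes, the largest i elements are already in place at the end.
            for j in range(n - 1 - i):
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
                    swapped = True
            if not swapped:
                break  # No swaps in a full pass: the array is sorted (best case).
        return arr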

Time Complexity:

  • Worst case: O(n²)

When the array is reverse-sorted, we iterate through the array (n - 1) times. In the first iteration, we do (n - 1) swaps, (n - 2) in the second, and so on, until the last iteration, where we do only one swap. The total number of swaps thus sums to n * (n - 1) / 2.

  • Average case: O(n²)

For a completely random array, the total number of swaps averages out to be around n² / 4, which is again O(n²).

  • Best case: O(n)

If we do not perform a single swap in the first pass over the array, we know the array is already sorted and can stop immediately, so the time complexity turns out to be linear.

Space Complexity:

Since we use only a constant amount of additional memory apart from the input array, the space complexity is O(1).

2. Selection Sort

Selection sort is a simple sorting algorithm that divides the array into two parts: a subarray of already sorted elements and a subarray of remaining elements to be sorted. The sorted subarray is initially empty. We iterate over the array (n - 1) times. In each iteration, we find the smallest element from the unsorted subarray and place it at the end of the sorted subarray.
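
A minimal Python sketch of selection sort:

    def selection_sort(arr):
        n = len(arr)
        for i in range(n - 1):
            # Find the smallest element in the unsorted subarray arr[i:].
            min_idx = i
            for j in range(i + 1, n):
                if arr[j] < arr[min_idx]:
                    min_idx = j
            # Place it at the end of the sorted subarray (at most n - 1 swaps overall).
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
        return arr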

Time Complexity:

Worst case = Average case = Best case = O(n²)
We perform the same number of comparisons for an array of any given size.

In the first iteration, we perform (n - 1) comparisons, (n - 2) in the second, and so on, until the last iteration, where we perform only one comparison. The total number of comparisons thus sums to n * (n - 1) / 2. The number of swaps performed is at most (n - 1). So the overall time complexity is quadratic.

Space Complexity: 

Since we are not using any extra data structure apart from the input array, the space complexity is O(1).

3. Insertion Sort

Like selection sort, the insertion sort algorithm also divides the array into two parts: a subarray of already sorted elements and a subarray of remaining elements to be sorted. The sorted subarray initially contains only the first element of the array. We iterate over each of the remaining elements and put them in their correct position in the sorted subarray.
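
A minimal Python sketch of insertion sort:

    def insertion_sort(arr):
        for i in range(1, len(arr)):
            key = arr[i]
            j = i - 1
            # Shift elements of the sorted subarray greater than key one step right.
            while j >= 0 and arr[j] > key:
                arr[j + 1] = arr[j]
                j -= 1
            arr[j + 1] = key  # Insert key into its correct position.
        return arr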

Time Complexity:

  • Worst case: O(n²)

When we apply insertion sort to a reverse-sorted array, every element must be inserted at the very beginning of the sorted subarray, which gives insertion sort its worst-case time complexity.

  • Average case: O(n²)

When the array elements are in random order, the average running time is O(n² / 4) = O(n²).

  • Best case: O(n)

When we initiate insertion sort on an already sorted array, it will only compare each element to its predecessor, thereby requiring n steps to sort the already sorted array of n elements.

Space Complexity:

Since we use only a constant amount of additional memory apart from the input array, the space complexity is O(1).

4. Merge Sort

In merge sort, we divide the unsorted array into n subarrays, each of size one, which can be considered sorted trivially. Then, we repeatedly merge the sorted subarrays to produce new sorted subarrays until there is only one sorted array of size n remaining. 
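
A minimal top-down Python sketch; the recursion handles the division down to size-one subarrays:

    def merge_sort(arr):
        if len(arr) <= 1:  # A subarray of size one is trivially sorted.
            return arr
        mid = len(arr) // 2
        left = merge_sort(arr[:mid])
        right = merge_sort(arr[mid:])
        # Merge the two sorted halves into an auxiliary list (O(n) extra space).
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged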

Time Complexity: 

Worst case = Average case = Best case = O(n log n)

Merge sort performs the same number of operations for any input array of a given size.

In this algorithm, we keep recursively dividing the array into two subarrays, which creates O(log n) levels of recursion, where each element appears at each level exactly once.

At each level, merging every pair of subarrays takes O(n) time in total. So the overall time complexity becomes O(n log n).

Space Complexity: 

Since we use an auxiliary array of size at most n to store the merged subarray, the space complexity is O(n).

5. Quicksort

Quicksort is a relatively more complex algorithm. It uses a divide-and-conquer strategy to divide the array into two subarrays. We choose an element called the pivot and reorder the array so that all the elements less than the pivot are to its left and all the elements greater than it are to its right, which places the pivot at its correct index.

We then recursively sort these subarrays until the entire array is sorted. The efficiency of the quicksort algorithm heavily depends on the selection of the pivot element.
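
A minimal in-place Python sketch using a random pivot (one of the pivot-selection strategies discussed below) with the Lomuto partition scheme:

    import random

    def quicksort(arr, lo=0, hi=None):
        if hi is None:
            hi = len(arr) - 1
        if lo >= hi:
            return arr
        # A random pivot choice makes the quadratic worst case very unlikely.
        pivot_idx = random.randint(lo, hi)
        arr[pivot_idx], arr[hi] = arr[hi], arr[pivot_idx]
        pivot = arr[hi]
        i = lo
        for j in range(lo, hi):  # Move elements <= pivot to the left side.
            if arr[j] <= pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]  # The pivot lands at its final index i.
        quicksort(arr, lo, i - 1)
        quicksort(arr, i + 1, hi)
        return arr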

Time Complexity:

  • Worst case: O(n²)

When the array is sorted and we choose the leftmost element as pivot, or the array is reverse-sorted and we choose the rightmost element as pivot, the time complexity becomes quadratic since partitioning the array results in highly unbalanced subarrays in such cases.

Also when there are a large number of identical elements in the array, optimal partitioning becomes hard, resulting in quadratic time complexity.

  • Average case and best case: O(n log n)

The best case for quicksort happens when we successfully pick the median element for partitioning every time. Such partitioning allows us to divide the array in half every time.

We can avoid the worst-case in quicksort almost always by choosing an appropriate pivot. There are various ways to achieve this:

  • Pick the pivot from the middle of the array
  • Adopt a random selection of pivots
  • Take the median of three pivot candidates, i.e., choose the median of the first, middle, and last elements of the array as the pivot.

These methods result in almost equal partitioning of the array, on average. This way the average case time complexity of quicksort practically becomes O(n log n).

Space Complexity:

Although quicksort doesn’t use auxiliary space to store array elements, additional space is required for creating stack frames in recursive calls. 

  • Worst case: O(n)

This happens when the pivot element is the largest or smallest element of the array in every recursive call. The sizes of the subarrays after partitioning will then be (n - 1) and 1. In this case, the depth of the recursion tree will be n.

  • Best case: O(log n)

This happens when the pivot element's correct position in the partitioned array is in the middle every time. The subarrays will then be half the size of the original array, and the depth of the recursion tree will be O(log n).

6. Heap Sort

In heap sort, we convert the array into a heap. Then we keep extracting the maximum element from the heap and placing it at its correct position at the end of the array. Heap sort reconstructs the heap after each extraction.
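
A minimal in-place Python sketch using a max-heap with a sift-down helper:

    def heap_sort(arr):
        n = len(arr)

        def sift_down(root, end):
            # Restore the max-heap property for the subtree rooted at `root`,
            # looking only at arr[0:end].
            while True:
                child = 2 * root + 1
                if child >= end:
                    return
                if child + 1 < end and arr[child + 1] > arr[child]:
                    child += 1  # Pick the larger child.
                if arr[root] >= arr[child]:
                    return
                arr[root], arr[child] = arr[child], arr[root]
                root = child

        for i in range(n // 2 - 1, -1, -1):  # Build the max-heap in place.
            sift_down(i, n)
        for end in range(n - 1, 0, -1):
            arr[0], arr[end] = arr[end], arr[0]  # Extract the maximum to the end.
            sift_down(0, end)  # Reconstruct the heap over the remaining elements.
        return arr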

Time Complexity:

Worst case = Average case = Best case = O(n log n)

Heap sort takes the same order of time for any input array of a given size.

Extracting the maximum from a heap of n elements takes logarithmic time, O(log n). After the first extraction, (n - 1) elements remain, so the next extraction takes O(log (n - 1)) time, and so on, until only one element remains and the final extraction takes O(log 1) time.

The total time complexity thus sums to O(log n) + O(log (n - 1)) + … + O(log 1) = O(log (n!)). By Stirling's approximation, log(n!) grows as n log n, so O(n log n) represents this complexity in simplified form. (Building the initial heap takes only O(n) time, which does not change the total.)

Space Complexity:

Since we are not using any extra data structure, heap sort is an in-place sorting algorithm. Therefore, its space complexity is O(1).

7. Counting Sort                                    

Counting sort works by keeping track of the number of times each unique element appears in the input array, using an auxiliary array whose size, k, is equal to the range of the input values.

Mathematically, k = (maximum element - minimum element + 1). This count array is then used to place the elements directly into their correct positions.
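
A minimal Python sketch for integer inputs (this simple variant emits values directly from the counts; the stable variant used inside radix sort computes prefix sums instead):

    def counting_sort(arr):
        if not arr:
            return arr
        lo, hi = min(arr), max(arr)
        k = hi - lo + 1                # k = maximum element - minimum element + 1
        counts = [0] * k
        for x in arr:                  # O(n): count each value's occurrences.
            counts[x - lo] += 1
        out = []
        for offset, count in enumerate(counts):  # O(k): emit each value count times.
            out.extend([lo + offset] * count)
        return out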

Time Complexity:

Worst case = Average case = Best case = O(n + k)

We iterate over each element of the input array, which contributes O(n) time. We also iterate over the auxiliary array, which contributes O(k) time. So the overall time complexity is O(n + k).

Space Complexity: 

We are using an auxiliary array of size k, so the space complexity comes out to be O(k).

8. Radix Sort

In the radix sort algorithm, we sort the array based on the place values of the numbers, digit by digit, starting from the least significant digit.

In the next step, we move on to the next significant digit and sort based on that digit, preserving the ordering of the previous step in case of ties. This continues until the array has been sorted on the most significant digit. Radix sort uses the counting sort algorithm as a stable subroutine to sort the array by each digit.
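
A minimal Python sketch for non-negative integers, performing one stable counting-sort pass per digit (base b = 10):

    def radix_sort(arr, base=10):
        if not arr:
            return arr
        exp = 1
        while max(arr) // exp > 0:     # One pass per digit of the largest number.
            counts = [0] * base
            for x in arr:
                counts[(x // exp) % base] += 1
            for d in range(1, base):   # Prefix sums: end position of each digit's block.
                counts[d] += counts[d - 1]
            out = [0] * len(arr)
            for x in reversed(arr):    # Reverse traversal keeps the pass stable.
                counts[(x // exp) % base] -= 1
                out[counts[(x // exp) % base]] = x
            arr = out
            exp *= base
        return arr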

Time Complexity:

Worst case = Average case = Best case = O(n * k)

Let k be the number of digits (or characters) in the maximum value of the input.

In this algorithm, we apply counting sort once for each digit, i.e., k times.

So the time complexity is O(k * (n + b)), where b is the base, or radix, used for representing the numbers, and k is the number of digits of the largest number in the array.

Since b is constant for a given problem, the time complexity can be written as O(n * k).

Space Complexity:

The space complexity comes from the counting sort, which requires O(n + k) space to hold counts, indices, and output arrays.

9. Bucket Sort

In bucket sort, we divide the array into several groups called buckets by appropriately mapping each element to a bucket. We then sort each bucket using any appropriate sorting algorithm and gather the elements from the buckets sequentially to get the sorted array. The mapping function is chosen based on the characteristics of the input.
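
A minimal Python sketch, assuming the inputs are uniformly distributed floats in [0, 1) so that the mapping function is simply int(x * k); Python's built-in sort stands in for the per-bucket sorting algorithm:

    def bucket_sort(arr, k=10):
        buckets = [[] for _ in range(k)]       # O(k) empty buckets.
        for x in arr:
            buckets[int(x * k)].append(x)      # Map each element to a bucket.
        out = []
        for bucket in buckets:
            out.extend(sorted(bucket))         # Sort each bucket, then gather in order.
        return out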

Time Complexity:

  • Worst case: O(n²)

When the elements in the input array span a narrow range, they are likely to be placed in the same bucket, so some buckets end up with many more elements than others. The overall complexity then depends on the algorithm used for sorting each bucket, which is generally insertion sort, giving quadratic complexity.

  • Average case and best case: O(n + k)

When the elements are distributed randomly in the array, bucket sort runs in linear time, provided the sum of the squares of the bucket sizes is linear in the total number of elements.

Space Complexity:

We need O(k) memory to store the k empty buckets, and distributing the n array elements into them requires O(n) additional space, for a total of O(n + k), given that we use insertion sort (an in-place algorithm) to sort the elements within each bucket.

For a detailed coverage of all key information related to these 9 popular sorting algorithms, read "Everything About Sorting Algorithms."

Sorting Algorithms Time and Space Complexity Cheat Sheet

Here's a cheat sheet to help you memorize the basic attributes of each algorithm, compiled from the sections above:

Algorithm       | Best case  | Average case | Worst case | Space
----------------|------------|--------------|------------|---------------------------
Bubble Sort     | O(n)       | O(n²)        | O(n²)      | O(1)
Selection Sort  | O(n²)      | O(n²)        | O(n²)      | O(1)
Insertion Sort  | O(n)       | O(n²)        | O(n²)      | O(1)
Merge Sort      | O(n log n) | O(n log n)   | O(n log n) | O(n)
Quicksort       | O(n log n) | O(n log n)   | O(n²)      | O(log n) best, O(n) worst
Heap Sort       | O(n log n) | O(n log n)   | O(n log n) | O(1)
Counting Sort   | O(n + k)   | O(n + k)     | O(n + k)   | O(k)
Radix Sort      | O(n * k)   | O(n * k)     | O(n * k)   | O(n + k)
Bucket Sort     | O(n + k)   | O(n + k)     | O(n²)      | O(n + k)

Are You Ready to Nail Your Next Coding Interview?

Knowing the time and space complexities of different sorting algorithms can help solve many interview questions in almost every coding interview for software developers. If you’re looking for guidance and help to nail these questions and more, sign up for our free webinar. 

As pioneers in the field of technical interview prep, we have trained thousands of software engineers to crack the toughest coding interviews and land jobs at their dream companies, such as Google, Facebook, Apple, Netflix, Amazon, and more!

Sign up now!

----------
Article contributed by Shivam Gupta

Last updated on: October 10, 2024

Author: Utkarsh Sahu
Director, Category Management @ Interview Kickstart || IIM Bangalore || NITW.
