If you’re from a computer science background, you might already know what algorithms are. But if you’re just starting out in programming, you might want to pay close attention. Algorithms form the basis of almost every program you will write, regardless of the language you use.

It may sound intimidating, but an algorithm is just a set of statements that has a purpose and defines what your program will do and how it will do it. That’s pretty much it. Algorithms are the building blocks of most programs, which makes them essential to learn.

In this article, let’s look at some of the most popular algorithms that you should have a basic knowledge of before going in for an interview. We have used Python to explain how each works, but the logic of every technique can be replicated in any other language.

## 1. Dynamic Programming

Dynamic programming is a technique built on recursion: a problem is broken into smaller sub-problems, and the solution to the final problem depends on the solutions of the smaller ones. It works by storing the solution to each sub-problem and reusing those stored results later, which simplifies the problem and reduces computation time.

To understand how it works, let’s consider the example of the Fibonacci sequence. The Fibonacci sequence is a series of numbers that starts with 0 and 1, where every subsequent number is the sum of the preceding two. So it goes 0, 1, 1, 2, 3, 5, 8, 13, ... and so on.

If we had to find the Nth number in the sequence, we could write the code using simple recursion as follows:
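The original listing isn’t reproduced here, so below is a minimal sketch of the naive recursive approach (the function name `fib` is our own, with the Nth term counted from 0):

```python
def fib(n):
    """Return the nth Fibonacci number (0-indexed), computed naively."""
    if n < 2:
        return n  # base cases: fib(0) = 0, fib(1) = 1
    # Each call spawns two more calls, recomputing the same
    # sub-problems many times over.
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```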

As can be seen from the example above, if we had to find the 10th term in the sequence, the same sub-problems would be computed over and over again, which is inefficient. Let’s optimize this with dynamic programming. The code for the same is shown below.
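Since the original listing isn’t shown, here is a minimal memoized sketch (the list name `memo` and function name `fib_dp` are our own); the array grows as new terms are computed, and the `else` branch runs only for terms not yet stored:

```python
memo = [0, 1]  # terms of the sequence computed so far

def fib_dp(n):
    """Return the nth Fibonacci number, storing each new term in memo."""
    if n < len(memo):
        return memo[n]      # term already stored: no recomputation needed
    else:
        value = fib_dp(n - 1) + fib_dp(n - 2)
        memo.append(value)  # store the new term for later look-ups
        return value

print(fib_dp(10))  # 55
```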

Every time this piece of code is run, the array is updated with any new terms in the Fibonacci sequence, and the “else” branch runs only in those cases where the term does not already exist in the array. This greatly reduces the required computation in terms of CPU cycles, which is why dynamic programming is used to solve complex problems efficiently.

## 2. Tree Traversal Algorithms

Trees are a special form of data structure in which a root node is connected to sub-trees of linked nodes. The most commonly used type is the binary tree, where each node can have at most two children. Being hierarchical in nature, a binary tree is efficient to traverse and combines the benefits of ordered arrays and linked lists.

Tree traversal is the process of visiting each node of the tree, usually while performing some operation on each value. There are many types of tree traversals, but these are the three most common ones to start with:

- Pre-order traversal
- Post-order traversal
- In-order traversal

Here’s a comparison of these three traversal techniques. For reference, consider a sample tree shown below.

The code for any of the tree traversal techniques involves creating a class that stores a node’s value along with references to its left and right children. We then recursively append the values of the root node, the left subtree, and the right subtree to a list in the desired order to obtain the final traversal of the binary tree. Here’s the code for all three traversal techniques.
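The original listings and sample tree aren’t reproduced here, so the sketch below uses a small assumed tree and our own names (`Node`, `preorder`, `inorder`, `postorder`):

```python
class Node:
    """A binary tree node: a value plus links to left and right children."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def preorder(node, result):
    if node:                          # root -> left subtree -> right subtree
        result.append(node.value)
        preorder(node.left, result)
        preorder(node.right, result)
    return result

def inorder(node, result):
    if node:                          # left subtree -> root -> right subtree
        inorder(node.left, result)
        result.append(node.value)
        inorder(node.right, result)
    return result

def postorder(node, result):
    if node:                          # left subtree -> right subtree -> root
        postorder(node.left, result)
        postorder(node.right, result)
        result.append(node.value)
    return result

# An assumed sample tree:    1
#                          /   \
#                         2     3
#                        / \
#                       4   5
root = Node(1)
root.left, root.right = Node(2), Node(3)
root.left.left, root.left.right = Node(4), Node(5)

print(preorder(root, []))   # [1, 2, 4, 5, 3]
print(inorder(root, []))    # [4, 2, 5, 1, 3]
print(postorder(root, []))  # [4, 5, 2, 3, 1]
```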

## 3. Graph Traversal

Similar to trees, graphs are a type of non-linear data structure consisting of nodes connected to each other through edges. But unlike trees, graphs have no unique node designated as the root and can even contain cycles. Because of this, graphs are generally used to model networks and solve complex real-life problems such as finding the shortest path through a large network.

To read and understand the data in a graph, we use a technique called graph traversal. There are two ways a graph can be traversed: breadth-first or depth-first.

These algorithms are similar to tree traversals, but with one catch: the presence of cycles means a node could be visited multiple times, so we must keep track of which nodes have already been visited.

**Depth-First Search**: The depth-first algorithm is similar to the one used in tree traversal, but with an added `visited` list that keeps track of all the visited nodes to avoid re-counting. It starts from an arbitrary node and traverses the entire extent of each branch before backtracking. In the example above, the depth-first search output would be A-B-D-E-C or A-B-C-D-E.

**Breadth-First Search**: In the breadth-first traversal algorithm, after each node is visited, its children are put into a first-in-first-out queue. It starts from an arbitrary node and reads all the nodes connected to it at the same depth before moving on to the next depth level. In the example above, the breadth-first search output would be A-B-C-D-E or A-B-D-C-E.
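The original graph figure isn’t reproduced here, so the sketch below assumes an adjacency-list graph in which A connects to B and C, and B connects to D and E; under that assumption, the outputs match one of the orderings mentioned above:

```python
from collections import deque

# Assumed sample graph as adjacency lists.
graph = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": [],
    "D": [],
    "E": [],
}

def dfs(start):
    """Depth-first traversal, tracking visited nodes to avoid re-counting."""
    visited = []
    def visit(node):
        if node not in visited:
            visited.append(node)
            for neighbour in graph[node]:
                visit(neighbour)        # go deep before backtracking
    visit(start)
    return visited

def bfs(start):
    """Breadth-first traversal using a first-in-first-out queue."""
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()          # FIFO: oldest node first
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)
    return visited

print(dfs("A"))  # ['A', 'B', 'D', 'E', 'C']
print(bfs("A"))  # ['A', 'B', 'C', 'D', 'E']
```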

## Search Algorithms

An essential part of working with any data set, searching algorithms are used to find an element stored in any form of data structure. These algorithms form the foundation of any search system and find use cases in areas such as databases, virtual search spaces, sub-structures, and even quantum computing.

Search algorithms, based on how they function, can be divided into two classes: linear and interval. Let’s understand how each works, along with its code. Consider the list of numbers shown below as an example.

## 4. Linear Search

Linear search works by starting from the 0th element, comparing the input element with each term in turn, and returning the position of the first match. This works even on unsorted data, as shown in *List 2*.

Let’s say the user wants to search for the number 32 in *List 2*. The program would compare the input to the 0th element, check if it’s the same, proceed to the next one, check again, and so on until a term in the list equals 32. It then returns the position of that element in the list. Here’s the code for a linear search algorithm.
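The original listing and *List 2* aren’t reproduced here, so the sketch below uses an assumed stand-in list and our own function name `linear_search`:

```python
def linear_search(values, target):
    """Return the index of target in values, or -1 if it is not present."""
    for index, value in enumerate(values):
        if value == target:   # compare the input with each term in turn
            return index
    return -1                 # target not found anywhere in the list

# Assumed stand-in for the article's unsorted List 2.
list_2 = [12, 81, 32, 19, 49, 6]
print(linear_search(list_2, 32))  # 2
```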

## 5. Binary Search

While linear search compares the input element with every element in the list starting from the first, the binary search algorithm starts with the middle element. As it is a type of interval search algorithm, it is used only on sorted data structures, as shown in *List 1*.

To understand this, let’s consider that the user wants to find the position of the number 49 in the sorted *List 1*. The binary search algorithm starts by comparing 49 with the middle term and then divides the list into two sub-sets: one containing the terms smaller than the middle term and one containing the terms larger than it.

As 49 is larger than the middle term, the algorithm repeats the same process on the right-hand sub-set until it finds the location of the input term. Because binary search halves the search range at every step, it reduces the time complexity from linear to logarithmic, making it highly efficient on sorted lists.
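The original listing and *List 1* aren’t reproduced here, so the following is a minimal sketch with an assumed sorted stand-in list and our own function name `binary_search`:

```python
def binary_search(values, target):
    """Return the index of target in the sorted list values, or -1."""
    low, high = 0, len(values) - 1
    while low <= high:
        mid = (low + high) // 2
        if values[mid] == target:
            return mid
        elif values[mid] < target:
            low = mid + 1    # target lies in the right-hand sub-set
        else:
            high = mid - 1   # target lies in the left-hand sub-set
    return -1                # target is not in the list

# Assumed stand-in for the article's sorted List 1.
list_1 = [6, 12, 19, 32, 49, 81]
print(binary_search(list_1, 49))  # 4
```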

## Sorting Algorithms

In our daily lives, there are many instances where we sort items in a particular order to turn chaos into structure. Similarly, in programming, sorting algorithms help you arrange your data in a particular form so you can make meaning out of it.

They take an array or list of values as input and based on user needs, they can be customized to sort data into an ascending or descending order. This order can either be set in the algorithm itself or taken as an input from the user.

Sorting algorithms are generally differentiated by the method they use to sort the data, and each technique has its own code and use cases. Let’s understand three popular sorting algorithms: bubble sort, insertion sort, and selection sort.

## 6. Bubble Sort

This is the simplest sorting algorithm. It works by comparing adjacent values in the list and placing them in the correct order, then continues doing this across the list, pass after pass, until all the values have been arranged in the preferred order.

Let’s take the input set of values as [7, 2, 5, 4, 3, 1], and say we want to arrange it in ascending order. In the first pass, the algorithm starts at the 0th position, compares the first two elements, swaps them if they are out of order, then moves to the next pair, and so on. Let’s see what the code looks like.
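The original listing isn’t reproduced here; below is a minimal in-place sketch matching the `bubble_sort` call used in this article:

```python
def bubble_sort(arr):
    """Sort arr in place, in ascending order, by swapping adjacent pairs."""
    n = len(arr)
    for i in range(n - 1):
        # After each pass, the last i elements are already in position.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:  # adjacent pair out of order: swap
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
```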

Now you can sort the array by defining it and calling the function as follows:

```python
arr = [7, 2, 5, 4, 3, 1]  # define the array
bubble_sort(arr)          # sort the array
print(arr)
```

The output of this program is the sorted array, as shown below:

```
[1, 2, 3, 4, 5, 7]
```

## 7. Insertion Sort

The second sorting algorithm that we’ll look at is insertion sort. Unlike bubble sort, where the program repeatedly passes over the entire array, swapping adjacent elements until the whole array is in order, insertion sort makes a single pass from the first element to the last, inserting each element it reaches into its correct position among the already-sorted elements before it.

To understand this, let’s take the same example array as before: A[6] = [7, 2, 5, 4, 3, 1]. The algorithm continues inserting elements this way until it reaches the end of the array and the entire array is sorted.

The code for this algorithm also uses a nested loop, but it is used slightly differently here than in bubble sort. Here’s the code for sorting the example array above.
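The original listing isn’t reproduced here; below is a minimal sketch matching the `insertion_sort` call used in this article, with a print after each pass so the output shows every step:

```python
def insertion_sort(arr):
    """Sort arr in place, printing the array after each pass."""
    for i in range(len(arr)):
        key = arr[i]                    # element to insert on this pass
        j = i - 1
        while j >= 0 and arr[j] > key:  # shift larger elements one slot right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                # drop the element into its correct slot
        print(arr)
```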

Now we call the function to sort our example array as follows:

```python
arr = [7, 2, 5, 4, 3, 1]  # define the array
insertion_sort(arr)       # sort the array
```

The output of this program shows each step of the sorting process:

```
[7, 2, 5, 4, 3, 1]
[2, 7, 5, 4, 3, 1]
[2, 5, 7, 4, 3, 1]
[2, 4, 5, 7, 3, 1]
[2, 3, 4, 5, 7, 1]
[1, 2, 3, 4, 5, 7]
```

## 8. Selection Sort

Another simple sorting algorithm, selection sort works by finding the smallest element (for ascending order) or the largest element (for descending order) and placing it in the first position. It then excludes the first element and repeats the process on the remaining array, placing the next smallest or largest element in the second position. This continues until every element has been moved into its proper position.

The selection sort program uses a nested loop to:

1. Treat the first element of the unsorted portion of the array as the current minimum.
2. Compare this minimum with the remaining unsorted values to find the actual minimum.
3. Swap the actual minimum into the first position of the unsorted portion.
4. Shrink the unsorted portion by one element and repeat from step 1 until the entire array is sorted in ascending order.

Here’s the code for you to understand it better and try for yourself.
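The original listing isn’t reproduced here; below is a minimal sketch matching the `selection_sort` call used in this article, with a print after each pass so the output shows every step:

```python
def selection_sort(arr):
    """Sort arr in place, printing the array after each pass."""
    n = len(arr)
    for i in range(n - 1):
        min_index = i                    # assume the first unsorted element is smallest
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:  # found a smaller value: remember it
                min_index = j
        arr[i], arr[min_index] = arr[min_index], arr[i]
        print(arr)
```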

To test this out, we define our example array and call the selection_sort function as below:

```python
arr = [7, 2, 5, 4, 3, 1]  # define the array
print(arr)
selection_sort(arr)       # sort the array
```

The output consists of the array at each step of the sorting process and looks like this:

```
[7, 2, 5, 4, 3, 1]
[1, 2, 5, 4, 3, 7]
[1, 2, 5, 4, 3, 7]
[1, 2, 3, 4, 5, 7]
[1, 2, 3, 4, 5, 7]
[1, 2, 3, 4, 5, 7]
```

## 9. Hashing Algorithms

To understand what hashing algorithms are and how they work, we first need to understand what hashing is and what hash functions are. Hash functions are, essentially, mathematical functions that convert input data of arbitrary length into a compressed numerical value of fixed length. No matter how long the input is, the output is always the same fixed length, and this final output is called the hash value.

It is also important to understand the difference between a hash function and a hashing algorithm. A hash function computes the hash value of a single fixed-length block of data. A hashing algorithm, on the other hand, describes how the hash function is used and defines the complete process of breaking the message into blocks and combining the results.

There are a lot of different hashing algorithms. A few commonly used hashing algorithms are as follows.

- Message Digest (MD5)
- Secure Hash Algorithm (SHA)
- RACE Integrity Primitives Evaluation Message Digest (RIPEMD)
- Whirlpool
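To see the fixed-length property in action, here is a small sketch using Python’s standard `hashlib` module (the variable names are our own; this example is not from the original article):

```python
import hashlib

# Two inputs of very different lengths...
short_message = b"hi"
long_message = b"a much longer message " * 100

# ...still produce digests of the same fixed length per algorithm.
print(hashlib.sha256(short_message).hexdigest())  # 64 hex characters
print(hashlib.sha256(long_message).hexdigest())   # 64 hex characters
print(hashlib.md5(short_message).hexdigest())     # 32 hex characters
```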

It's important that you practice these algorithms before your next tech interview. They may seem easy and obvious, but they can become tricky to solve in an actual interview. Interviewers also use these algorithms to test whether a software engineer truly understands how the code works. Practising these problems before an interview will not only make you familiar with them but also give you confidence in explaining the solution to the interviewer.
