21 Sorting Algorithms Interview Questions (SOLVED) For Coding Interview

Efficient sorting algorithms are important for optimising the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalising data and for producing human-readable output. Follow along and check the 21 most commonly asked Sorting Algorithms interview questions and answers that experienced developers must know before their next coding interview.

Q1: 
Explain how Bubble Sort works

Answer

Bubble Sort is based on the idea of repeatedly comparing pairs of adjacent elements and then swapping their positions if they are in the wrong order. Bubble Sort is a stable, in-place sorting algorithm.

How it works:

  • In an unsorted array of n elements, start with the first two elements and sort them in ascending order (compare the elements to check which one is greater).
  • Compare the second and third elements to check which one is greater, and sort them in ascending order.
  • Compare the third and fourth elements to check which one is greater, and sort them in ascending order.
  • ...
  • Repeat these passes until no more swaps are required.


Complexity Analysis

Time: O(n^2)

Space: O(1)

Bubble Sort has a worst-case and average complexity of O(n^2), where n is the number of items being sorted. When the list is already sorted (best case), the complexity of Bubble Sort is only O(n). The space complexity for Bubble Sort is O(1), because only a single additional memory slot is required (for the temporary swap variable).

Implementation
// Normal
const bubbleSort = function(array) {
  let swaps;
  do {
    swaps = false;
    for (let i = 0; i < array.length - 1; i++) {
      if (array[i] > array[i + 1]) {
        let temp = array[i + 1];
        array[i + 1] = array[i];
        array[i] = temp;
        swaps = true;
      }
    }
  } while (swaps);
  return array;
};

// Recursively
const bubbleSort = function(array, pointer = array.length - 1) {
  // Base case
  if (pointer === 0) {
    return array;
  }
  for (let i = 0; i < pointer; i++) {
    if (array[i] > array[i + 1]) {
      let temp = array[i + 1];
      array[i + 1] = array[i];
      array[i] = temp;
    }
  }
  // Recursive call on a smaller portion of the array
  return bubbleSort(array, pointer - 1);
};

Source: github.com

Q2: 
Why are Sorting Algorithms important?

Answer

Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output. Sorting has direct applications in database algorithms, divide-and-conquer methods, data structure algorithms, and many more.



Q3: 
Classify Sorting Algorithms

Answer

Sorting algorithms can be categorised based on the following parameters:

  1. Based on Number of Swaps or Inversions. This is the number of times the algorithm swaps elements to sort the input. Selection Sort requires the minimum number of swaps.
  2. Based on Number of Comparisons. This is the number of times the algorithm compares elements to sort the input. Using Big-O notation, the sorting algorithm examples listed above require at least O(n log n) comparisons in the average case and up to O(n^2) comparisons in the worst case for most inputs.
  3. Based on Recursion or Non-Recursion. Some sorting algorithms, such as Quick Sort, use recursive techniques to sort the input. Other sorting algorithms, such as Selection Sort or Insertion Sort, use non-recursive techniques. Finally, some sorting algorithms, such as Merge Sort, make use of both recursive and non-recursive techniques to sort the input.
  4. Based on Stability. Sorting algorithms are said to be stable if the algorithm maintains the relative order of elements with equal keys. In other words, two equivalent elements remain in the same order in the sorted output as they were in the input (see the sketch after this list).
    • Insertion Sort, Merge Sort, and Bubble Sort are stable
    • Heap Sort and Quick Sort are not stable
  5. Based on Extra Space Requirement. Sorting algorithms are said to be in-place if they require a constant O(1) extra space for sorting.
    • Insertion Sort and Quick Sort are in-place sorts, as we move the elements about the pivot and do not actually use a separate array, which is NOT the case in Merge Sort, where an array the size of the input must be allocated beforehand to store the output during the sort.
    • Merge Sort is an example of an out-of-place sort, as it requires extra memory space for its operations.
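To make stability concrete, here is a minimal JavaScript sketch; the sample data is made up for illustration. Array.prototype.sort is required to be stable in modern engines (ES2019+), so equal keys keep their input order:

// Stability illustration (hypothetical data)
const records = [
  { name: 'Alice', grade: 'B' },
  { name: 'Bob',   grade: 'A' },
  { name: 'Carol', grade: 'B' },
];

// Sort by grade only; Alice and Carol share the equal key 'B'
records.sort((a, b) => a.grade.localeCompare(b.grade));

// A stable sort guarantees Alice still precedes Carol:
// [{ Bob, A }, { Alice, B }, { Carol, B }]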


Q4: 
Explain how Insertion Sort works

Answer

Insertion Sort is an in-place, stable, comparison-based sorting algorithm. The idea is to maintain a sub-list which is always sorted. An element which is to be inserted into this sorted sub-list has to find its appropriate place and then be inserted there, hence the name: insertion sort.

Steps on how it works:

  • If it is the first element, it is already sorted.
  • Pick the next element.
  • Compare it with all the elements in the sorted sub-list.
  • Shift all the elements in the sorted sub-list that are greater than the value to be inserted.
  • Insert the value.
  • Repeat until the list is sorted.


Complexity Analysis

Time: O(n^2)

Space: O(1)

  • Insertion Sort runs in O(n^2) in its worst and average cases. It runs in O(n) time in its best case.
  • Insertion Sort performs two operations: it scans through the list, comparing each pair of elements, and it swaps elements if they are out of order. Each operation contributes to the running time of the algorithm. If the input array is already in sorted order, Insertion Sort compares O(n) elements and performs no swaps. Therefore, in the best case, Insertion Sort runs in O(n) time.
  • Space complexity is O(1), because only one extra variable is used (as a temporary slot for the value being inserted).
Implementation
var insertionSort = function(a) {
  // Iterate through our array
  for (var i = 1, value; i < a.length; i++) {
    // Our array is split into two parts: values preceding i are sorted, while others are unsorted.
    // Store the unsorted value at i.
    value = a[i];
    // Iterate backwards through the sorted values until we find the correct location
    // for our value (the j > 0 guard stops the scan at the start of the array)
    for (var j = i; j > 0 && a[j - 1] > value; j--) {
      // Shift the value to the right
      a[j] = a[j - 1];
    }
    // Once we've created an open "slot" in the correct location for our value, insert it
    a[j] = value;
  }
  // Return the sorted array
  return a;
};

Source: medium.com

Q5: 
Explain what the ideal Sorting Algorithm is

Answer

The Ideal Sorting Algorithm would have the following properties:

  • Stable: Equal keys aren’t reordered.
  • Operates in place: requiring O(1) extra space.
  • Worst-case O(n log n) key comparisons.
  • Worst-case O(n) swaps.
  • Adaptive: Speeds up to O(n) when data is nearly sorted or when there are few unique keys.

There is no algorithm that has all of these properties, and so the choice of sorting algorithm depends on the application.



Q6: 
How would you optimise Bubble Sort?

Answer

In Bubble sort, you know that after k passes, the largest k elements are sorted at the k last entries of the array, so the conventional Bubble sort uses:

public static void bubblesort(int[] a) {
  for (int i = 1; i < a.length; i++) {
    boolean is_sorted = true;
    // skip the already sorted largest elements: compare up to a.length - i
    // instead of a.length - 1
    for (int j = 0; j < a.length - i; j++) {
      if (a[j] > a[j + 1]) {
        int temp = a[j];
        a[j] = a[j + 1];
        a[j + 1] = temp;
        is_sorted = false;
      }
    }
    if (is_sorted) return;
  }
}

Now, that would still do a lot of unnecessary iterations when the array has a long sorted tail of largest elements. If you remember where you made your last swap, you know that after that index the largest elements are already in order, so:

public static void bubblesort(int[] a) {
  int lastSwap = a.length - 1;
  for (int i = 1; i < a.length; i++) {
    boolean is_sorted = true;
    int currentSwap = -1;
    // compare only up to the last swap of the previous pass, instead of a.length - i
    for (int j = 0; j < lastSwap; j++) {
      if (a[j] > a[j + 1]) {
        int temp = a[j];
        a[j] = a[j + 1];
        a[j + 1] = temp;
        is_sorted = false;
        currentSwap = j;
      }
    }
    if (is_sorted) return;
    lastSwap = currentSwap;
  }
}

This allows the algorithm to skip over many elements, resulting in about a 50% improvement in comparison count in the worst case (though no improvement in swap count), and it adds very little complexity.



Q7: 
Insert an item in a sorted Linked List maintaining order

Answer

The add() method below walks down the list until it finds the appropriate position. Then, it splices in the new node and updates the start, prev, and curr pointers where applicable.

Note that the reverse operation, namely removing elements, doesn't need to change, because you are simply throwing things away which would not change any order in the list.

Implementation
public void add(T x) {
  Node newNode = new Node();
  newNode.info = x;

  // case: start is null; just assign start to the new node and return
  if (start == null) {
    start = newNode;
    curr = start;
    // prev is null, hence not formally assigned here
    return;
  }

  // case: new node to be inserted comes before the current start;
  // in this case, point the new node to start, update pointers, and return
  if (x.compareTo(start.info) < 0) {
    newNode.link = start;
    start = newNode;
    curr = start;
    // again we leave prev undefined, as it is null
    return;
  }

  // otherwise walk down the list until reaching either the end of the list
  // or the first position whose element is greater than the node to be
  // inserted; then insert the node and update the pointers
  prev = start;
  curr = start;
  while (curr != null && x.compareTo(curr.info) >= 0) {
    prev = curr;
    curr = curr.link;
  }

  // splice in the new node and update the curr pointer (prev already correct)
  newNode.link = prev.link;
  prev.link = newNode;
  curr = newNode;
}


Q8: 
What are advantages and disadvantages of Bubble Sort?

Answer

Advantages:

  • Simple to understand
  • The ability to detect that the list is already sorted is efficiently built into the algorithm. When the list is already sorted (best case), the complexity of Bubble Sort is only O(n).

Disadvantages:

  • It is very slow, running in O(n^2) time in the worst as well as the average case. Because of that, Bubble Sort does not deal well with large data sets: for example, Bubble Sort is roughly three times slower than Quicksort even for n = 100.


Q9: 
What is meant by "Sort in Place"?

Answer

The idea of an in-place algorithm isn't unique to sorting, but sorting is probably the most important case, or at least the most well-known. The idea is about space efficiency - using the minimum amount of RAM, hard disk or other storage that you can get away with.

The idea is to produce an output in the same memory space that contains the input by successively transforming that data until the output is produced. This avoids the need to use twice the storage - one area for the input and an equal-sized area for the output.

Quicksort is one example of In-Place Sorting.
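To illustrate the difference, here is a minimal sketch (illustrative only; it uses array reversal rather than sorting to keep the contrast short) of an out-of-place transformation next to an in-place one:

// Out-of-place: allocates a second O(n) array for the output
function reverseCopy(arr) {
  const out = new Array(arr.length);
  for (let i = 0; i < arr.length; i++) {
    out[arr.length - 1 - i] = arr[i];
  }
  return out;
}

// In-place: transforms the input itself using O(1) extra space
// (just two indices and a temp variable)
function reverseInPlace(arr) {
  let i = 0, j = arr.length - 1;
  while (i < j) {
    const temp = arr[i];
    arr[i] = arr[j];
    arr[j] = temp;
    i++;
    j--;
  }
  return arr;
}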



Q10: 
Explain how Merge Sort works

Answer

Merge Sort is a divide-and-conquer, comparison-based sorting algorithm. It is based on the idea of breaking down a list into several sub-lists until each sub-list consists of a single element, then merging those sub-lists in a manner that results in a sorted list. Most implementations produce a stable sort, which means that the order of equal elements is the same in the input and output. It can also be used for external sorting (when the data to be sorted is too large to fit into memory).


There are two methods for implementing a Merge Sort algorithm:

  • The Top-Down (recursive) approach. Given an array of size N, the algorithm recursively breaks the array in half and then merges the results together.
  • The Bottom-Up (iterative) approach. Rather than breaking the overall array into distinct pieces, bottom-up merge sort loops over the array using intervals of varying sizes. Each interval is sorted and merged together; subsequent loops over the array have larger intervals, effectively merging our previously sorted (smaller) intervals together.
Complexity Analysis

Time: O(n log n)

Space: O(n)

The list of size n is divided into at most log n levels (the height of the divide-step tree is log n + 1 for a given n), and merging all the sub-lists on one level back into a single list takes O(n) time, so the worst-case run time of this algorithm is O(n log n).

Space complexity is always Ω(n), as you have to store the elements somewhere. Additional space complexity can be O(n) in an implementation using arrays and O(1) in linked-list implementations. In practice, implementations using lists need additional space for list pointers, so unless you already have the list in memory it shouldn't matter.

Implementation

The Top-Down (recursive) approach

function mergeSort(arr) {
  if (arr.length < 2) {
    return arr;
  }
  let midpoint = Math.floor(arr.length / 2);
  let leftHalf = arr.slice(0, midpoint); // slice an array of left-half items
  let rightHalf = arr.slice(midpoint);   // slice an array of right-half items
  // recurse into each half until arrays are 1 item each,
  // then merge() the sorted arrays on each layer of the stack
  return merge(mergeSort(leftHalf), mergeSort(rightHalf));
}

// the first time merge() gets called, the arrays are both 1 element long;
// on each layer of the stack the arrays double in size and are always sorted
function merge(arrA, arrB) {
  let a = 0;
  let b = 0;
  let result = [];
  while (a < arrA.length && b < arrB.length) {
    if (arrA[a] < arrB[b]) {
      result.push(arrA[a]);
      a++;
    } else {
      result.push(arrB[b]);
      b++;
    }
  }
  // using slice to concat the remaining items left in either arrA or arrB
  return result.concat(arrA.slice(a)).concat(arrB.slice(b));
}

// usage:
mergeSort([4, 2, 8, 3, 6, 9, 1]); // => [1, 2, 3, 4, 6, 8, 9]

The Bottom-Up (iterative) approach

function mergeSort(arr) {
  var sorted = arr.slice(),
      n = sorted.length,
      buffer = new Array(n);
  for (var size = 1; size < n; size *= 2) {
    for (var leftStart = 0; leftStart < n; leftStart += 2 * size) {
      var left = leftStart,
          right = Math.min(left + size, n),
          leftLimit = right,
          rightLimit = Math.min(right + size, n),
          i = left;
      while (left < leftLimit && right < rightLimit) {
        if (sorted[left] <= sorted[right]) {
          buffer[i++] = sorted[left++];
        } else {
          buffer[i++] = sorted[right++];
        }
      }
      while (left < leftLimit) {
        buffer[i++] = sorted[left++];
      }
      while (right < rightLimit) {
        buffer[i++] = sorted[right++];
      }
    }
    // swap the roles of the source and destination arrays for the next pass
    var temp = sorted;
    sorted = buffer;
    buffer = temp;
  }
  return sorted;
}


Q11: 
Explain how Heap Sort works

Answer

Heapsort is a comparison-based sorting algorithm. Heapsort can be thought of as an improved selection sort: like that algorithm, it divides its input into a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting the largest element and moving that to the sorted region. The improvement consists of the use of a heap data structure rather than a linear-time search to find the maximum.

A (max) heap is a complete binary tree (every level is filled as much as possible, beginning at the left side of the tree) that satisfies the following condition: the value of a parent element is always greater than or equal to the values of its left and right child elements. So, by taking advantage of this property, we can repeatedly pop the max off the heap and establish a sorted array.

In an array representation of a heap binary tree, this essentially means that for a parent element with an index of i, its value must be greater than or equal to its left child at index 2i+1 and its right child at index 2i+2 (if it has one).

The HeapSort steps are:

  • Create a max heap from the unsorted list
  • Swap the max element, located at the root, with the last element
  • Re-heapify or rebalance the array again to establish the max heap order. The new root is likely not in its correct place.
  • Repeat Steps 2–3 until complete (when the list size is 1)


Complexity Analysis

Time: O(n log n)

Space: O(1)

  • Best: O(n log n)
  • Average: O(n log n)
  • Worst: O(n log n)
Implementation
var a = [9, 10, 2, 1, 5, 4, 3, 6, 8, 7, 13];

function swap(a, i, j) {
  var tmp = a[i];
  a[i] = a[j];
  a[j] = tmp;
}

// sift the element at index i down until the max-heap property is restored;
// `length` is the exclusive bound of the heap region
function max_heapify(a, i, length) {
  while (true) {
    var left = i * 2 + 1;
    var right = i * 2 + 2;
    var largest = i;
    if (left < length && a[left] > a[largest]) {
      largest = left;
    }
    if (right < length && a[right] > a[largest]) {
      largest = right;
    }
    if (i == largest) {
      break;
    }
    swap(a, i, largest);
    i = largest;
  }
}

// build a max heap from an unordered array
function heapify(a, length) {
  for (var i = Math.floor(length / 2); i >= 0; i--) {
    max_heapify(a, i, length);
  }
}

function heapsort(a) {
  heapify(a, a.length);
  for (var i = a.length - 1; i > 0; i--) {
    swap(a, i, 0);
    // the heap region now occupies indices 0..i-1, i.e. has length i
    max_heapify(a, 0, i);
  }
}

heapsort(a);

Source: medium.com

Q12: 
Sort a Stack using Recursion

Problem

Note we can't use another stack or queue.

Answer

The idea of the solution is to hold all the values on the system (call) stack until the stack becomes empty. When the stack becomes empty, insert each held item back, one by one, in sorted order.

Complexity Analysis

Time: O(n^2)

Space: O(n)

Implementation
// assumes a stack type with Push, Pop, Top and IsEmpty operations
void insert(stack s, int x); // forward declaration

void sort(stack s) {
  if (!IsEmpty(s)) {
    // pop all values onto the call stack, then insert them back in sorted order
    int x = Pop(s);
    sort(s);
    insert(s, x);
  }
}

// insert x into the already sorted stack s, keeping it sorted
void insert(stack s, int x) {
  if (!IsEmpty(s)) {
    int y = Top(s);
    if (x < y) {
      Pop(s);
      insert(s, x);
      Push(s, y);
    } else {
      Push(s, x);
    }
  } else {
    Push(s, x);
  }
}


Q13: 
What's the difference between External vs Internal sorting?

Answer
  • In internal sorting, all the data to sort is stored in memory at all times while sorting is in progress.
  • In external sorting, data is stored outside memory (for example on disk) and only loaded into memory in small chunks. External sorting is usually applied in cases when data can't fit into memory entirely.

So in internal sorting you can do something like Shell Sort: just access whatever array elements you want at whatever moment you want. You can't do that in external sorting: the array is not entirely in memory, so you can't randomly access any element, and accessing an element randomly on disk is usually extremely slow. An external sorting algorithm has to deal with loading and unloading chunks of data in an optimal manner.

Divide-and-conquer algorithms like Merge Sort are commonly used for external sorting, because they break up the problem of sorting everything into a series of smaller sorts on one chunk at a time. Merge Sort doesn't require random access to the dataset and can be made to operate in chunks which fit in memory; the in-memory chunks may be sorted using an in-memory (internal) sorting algorithm, as in the sketch below.
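To make this concrete, here is a simplified, hypothetical sketch of an external merge sort in JavaScript. The disk I/O is simulated with in-memory arrays (each "run" stands in for a temporary file), and CHUNK_SIZE is an assumed memory limit:

// Simplified external merge sort sketch (illustrative only; real
// implementations read and write runs from disk in buffered chunks).
const CHUNK_SIZE = 4; // assumed: max items that fit in "memory" at once

function externalSort(data) {
  // Phase 1: sort each chunk that fits in memory (an internal sort)
  const runs = [];
  for (let i = 0; i < data.length; i += CHUNK_SIZE) {
    runs.push(data.slice(i, i + CHUNK_SIZE).sort((a, b) => a - b));
  }

  // Phase 2: k-way merge of the sorted runs, reading each run sequentially
  const indices = runs.map(() => 0);
  const result = [];
  while (result.length < data.length) {
    let best = -1;
    for (let r = 0; r < runs.length; r++) {
      if (indices[r] < runs[r].length &&
          (best === -1 || runs[r][indices[r]] < runs[best][indices[best]])) {
        best = r;
      }
    }
    result.push(runs[best][indices[best]++]);
  }
  return result;
}

// externalSort([9, 1, 7, 3, 8, 2, 6, 5, 4]); // => [1, 2, 3, 4, 5, 6, 7, 8, 9]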



Q14: 
When is Quicksort better than Mergesort?

Problem

They're both O(n log n) and yet most people use Quicksort instead of Mergesort. Why is that?

Answer
  • Quicksort has O(n^2) worst-case runtime and O(n log n) average-case runtime. However, it's superior to Merge Sort in many scenarios, because many factors influence an algorithm's runtime, and, when taking them all together, Quicksort wins out.

  • Quicksort in particular requires little additional space (it's in-place, while Merge Sort requires extra memory linear in the number of elements to be sorted) and exhibits good cache locality (it performs fewer reads than its competitors), which makes it faster than Merge Sort in many cases. In addition, it's very easy to avoid Quicksort's worst-case run time of O(n^2) almost entirely by using an appropriate choice of the pivot, such as picking it at random (this is an excellent strategy).

The average-case performance of Quicksort is faster than that of Merge Sort. But this is only true if you assume constant time to access any piece of memory on demand. In RAM this assumption is generally not too bad (it is not always true because of caches, but it is not too bad). However, if your data structure is big enough to live on disk, then Quicksort gets killed by the fact that your average disk does something like 200 random seeks per second.

If data has to be sorted on disk, you really, really want to use some variation of Merge Sort.



Q15: 
Which of the following algorithms would be the fastest?

Problem

If you had an unsorted list of one million unique items, and knew that you would only search it once for a value, which of the following algorithms would be the fastest?

  1. Use linear search on the unsorted list
  2. Use insertion sort to sort the list and then binary search on the sorted list
Answer

Linear search takes just O(n), while sorting the list first takes O(n log n). Since you are going to search the list only once, the fact that subsequent searches on the sorted list with binary search would take only O(log n) each does not help overcome the overhead of the O(n log n) sorting step, and hence a linear search is more efficient for the task.
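As a rough back-of-the-envelope check, here are the approximate comparison counts for n = 1,000,000 (illustrative numbers only, meant to show the orders of magnitude involved):

// n = 1,000,000 unique items, searched exactly once
const n = 1000000;
const linearSearchCost = n;            // ~1,000,000 comparisons in the worst case
const sortCost = n * Math.log2(n);     // ~20,000,000 comparisons to sort
const binarySearchCost = Math.log2(n); // ~20 comparisons per search afterwards
console.log(linearSearchCost < sortCost + binarySearchCost); // true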



Q16: 
Which sort algorithm works best on mostly sorted data?

Answer
  • Only a few items: Insertion Sort
  • Items are mostly sorted already: Insertion Sort
  • Concerned about worst-case scenarios: Heap Sort
  • Interested in a good average-case result: Quick Sort
  • Items are drawn from a dense universe: Bucket Sort
  • Desire to write as little code as possible: Insertion Sort


Q17: 
Explain how Radix Sort works

Answer

Radix Sort is a sorting technique that sorts elements by first grouping them by the individual digits of the same place value, and then ordering the elements according to their increasing/decreasing order. Radix Sort uses Counting Sort as a subroutine to sort an array of numbers. Because integers can be used to represent strings (by hashing the strings to integers), Radix Sort works on data types other than just integers.

Radix sort is a non-comparative sorting algorithm. It avoids comparison by creating and distributing elements into buckets according to their radix. For example, a binary system (using numbers 0 and 1) has a radix of 2 and a decimal system (using numbers 0 to 9) has a radix of 10.

Radix sorts can be implemented to start at either the most significant digit (MSD) or least significant digit (LSD). For example, with 1234, one could start with 1 (MSD) or 4 (LSD).

Assume the input array is [123, 2, 999, 609, 111]:

  1. Based on the algorithm, we will sort the input array according to the one's digit (least significant digit, LSD):
[0]: [1]: 111 [2]: 002 [3]: 123 [4]: [5]: [6]: [7]: [8]: [9]: 999, 609 // Note that we placed 999 before 609 because it appeared first.
  2. Now, we'll sort according to the ten's digit:
[0]: 609, 002 [1]: 111 [2]: 123 [3]: [4]: [5]: [6]: [7]: [8]: [9]: 999
  3. Finally, we sort according to the hundred's digit (most significant digit):
[0]: 002 [1]: 111, 123 [2]: [3]: [4]: [5]: [6]: 609 [7]: [8]: [9]: 999

Complexity Analysis

Time: O(n)

Space: O(n)

Radix sort will operate on n d-digit numbers where each digit can be one of at most b different values (since b is the base (or radix) being used). For example, in base 10, a digit can be 0, 1, 2, 3, 4, 5, 6, 7, 8 or 9.

Radix sort uses counting sort on each digit. Each pass over n d-digit numbers will take O(n + b) time, and there are d passes total. Therefore, the total running time of radix sort is O(d(n+b)). When d is a constant and b isn't much larger than n (in other words, b=O(n)), then radix sort takes linear time.

The space complexity comes from counting sort, which requires O(n+b) space to hold the counts, indices, and output lists.

Implementation
/* Radix Sort implementation in JavaScript */
function radixSort(arr) {
  var maxDigit = getMaxDigitLen(arr);
  // loop over every digit index, from the least significant to the most significant
  for (var i = 1; i <= maxDigit; i++) {
    // rebuild the digit buckets according to this digit
    var digitBuckets = [];
    for (var j = 0; j < arr.length; j++) {
      var digit = getDigit(arr[j], i); // get the i-th digit of the j-th element of the array
      digitBuckets[digit] = digitBuckets[digit] || []; // if empty, initialize with an empty array
      digitBuckets[digit].push(arr[j]); // add the number to its respective digit's bucket
    }
    // rebuild the array according to this digit
    var index = 0;
    for (var k = 0; k < digitBuckets.length; k++) {
      if (digitBuckets[k] && digitBuckets[k].length > 0) {
        for (j = 0; j < digitBuckets[k].length; j++) {
          arr[index++] = digitBuckets[k][j];
        }
      }
    }
  }
  return arr;
}

// helper function to get the n-th (position) digit of a number, counted from the right
var getDigit = function(num, position) {
  var num_length = num.toString().length;
  if (num_length < position) return 0;
  var radixPosition = Math.pow(10, position - 1); // the (position - 1)-th power of 10
  /* Logic: to find the 2nd digit of the number 123 we can divide it by 10,
     which gives 12, and using the modulus operator (%) we find 12 % 10 = 2 */
  var digit = Math.floor(num / radixPosition) % 10;
  return digit;
};

// helper function to get the number of digits of the max value in this array
var getMaxDigitLen = function(arr) {
  var maxVal = Math.max.apply(Math, arr); // find the max value in the array
  // floor(log10(x)) + 1 gives the digit count; ceil(log10(x)) alone
  // undercounts exact powers of 10 such as 100 or 1000
  return Math.floor(Math.log10(maxVal)) + 1;
};

Source: github.com

Q18: 
Explain how QuickSort works

Answer

Quick Sort is a divide-and-conquer, comparison-based, in-place sorting algorithm. Efficient implementations of Quicksort are not stable. When implemented well, it can be about two or three times faster than its main competitors, Merge Sort and Heapsort.

Quicksort determines something called a pivot, which is a somewhat arbitrary element in the collection. Using the pivot, Quicksort partitions (divides) the larger unsorted collection into two smaller lists: it moves all the elements smaller than the pivot to the left of (before) the pivot element, and all the elements larger than the pivot to the right of (after) it. Even though the list isn't completely sorted yet, we know that the items are in the correct order in relation to the pivot, so we never have to compare elements on the left side of the partition to elements on the right side. The sub-lists are then sorted recursively.

Ideally, partitioning would use the median of the given values, so that the two partitions are of equal size; but the median can only be found by scanning the whole array, and this would slow the algorithm down. In the simplest versions of Quicksort, an arbitrary element, typically the last (rightmost) one, is used as an estimate (guess) of the median.

Complexity Analysis

Time: O(n log n)

Space: O(log n)

The overall time complexity of Quick Sort is O(n log n). In the worst case it makes O(n^2) comparisons (for example, when all elements are the same, or when the array is already sorted and the pivot is chosen poorly).

The partition step used in Quicksort is ideally an in-place operation (it creates no additional arrays). But Quicksort calls itself on the order of log n times in the average case (the worst-case number of nested calls is O(n)), and at each recursive call a new stack frame of constant size must be allocated. Hence the O(log n) space complexity.

Implementation

With additional memory (naive):

function quickSort(arr) {
  if (arr.length <= 1) return arr;
  // find the middle point of the array (effectively a random pivot)
  var swapPosition = Math.floor((arr.length - 1) / 2);
  var pivot = arr.splice(swapPosition, 1)[0]; // splice returns an array; take its element
  var left = [];
  var right = [];
  for (var i = 0; i < arr.length; i++) {
    if (arr[i] < pivot) {
      left.push(arr[i]);
    } else {
      right.push(arr[i]);
    }
  }
  return quickSort(left).concat(pivot, quickSort(right));
}

In-place:

var quickSort = function(array, left, right) {
  var index = partition(array, left, right);
  if (left < index - 1) {
    quickSort(array, left, index - 1);
  }
  if (right > index) {
    quickSort(array, index, right);
  }
  return array;
};

var swap = function(array, i, j) {
  var temp = array[i];
  array[i] = array[j];
  array[j] = temp;
};

var partition = function(array, left, right) {
  var pivotIndex = Math.floor((left + right) / 2);
  var pivot = array[pivotIndex];
  var leftIndex = left;
  var rightIndex = right;
  while (leftIndex <= rightIndex) {
    // advance the left pointer past elements smaller than the pivot
    while (array[leftIndex] < pivot) {
      leftIndex++;
    }
    // retreat the right pointer past elements larger than the pivot
    while (array[rightIndex] > pivot) {
      rightIndex--;
    }
    if (leftIndex <= rightIndex) {
      swap(array, leftIndex, rightIndex);
      leftIndex++;
      rightIndex--;
    }
  }
  return leftIndex;
};

Source: medium.com

Q19: 
Explain how to find 100 largest numbers out of an array of 1 billion numbers

Answer
  • The brute-force solution is to sort the array in O(n log n) time and take the last 100 numbers, but you can do better.
  • You can keep a priority queue (implemented as a min-heap) of the 100 biggest numbers seen so far, iterate through the billion numbers, and whenever you encounter a number greater than the smallest number in the queue (the head of the queue), remove the head of the queue and add the new number. In general, if you need the largest K numbers from a set of N numbers, the complexity is O(N log K) rather than O(N log N); this can be very significant when K is very small compared to N (see the sketch after this list).
  • You can also apply a problem-solving process. Ask the interviewer for the range or meaning of these numbers to make the problem clear. If, for example, these numbers stand for people's ages within a country (e.g. China), then it's a much easier problem. With the reasonable assumption that nobody alive is older than 200, you can use an int array of size 200 (maybe 201) to count the number of people of each age in just one iteration; here the index means the age. After this, it's a piece of cake to find the 100 largest numbers. By the way, this is called Counting Sort. Counting Sort takes O(n + k) time and O(n + k) space, where n is the number of items we're sorting and k is the number of possible values.
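Here is a hedged sketch of the heap approach in JavaScript. JavaScript has no built-in priority queue, so the MinHeap class below is a minimal hand-rolled one written just for this illustration:

// Minimal binary min-heap written for this sketch (not a library API)
class MinHeap {
  constructor() { this.a = []; }
  size() { return this.a.length; }
  peek() { return this.a[0]; }
  push(x) {
    this.a.push(x);
    let i = this.a.length - 1;
    while (i > 0) {
      const p = (i - 1) >> 1; // parent index
      if (this.a[p] <= this.a[i]) break;
      [this.a[p], this.a[i]] = [this.a[i], this.a[p]];
      i = p;
    }
  }
  pop() {
    const top = this.a[0];
    const last = this.a.pop();
    if (this.a.length > 0) {
      this.a[0] = last;
      let i = 0;
      while (true) { // sift the new root down
        const l = 2 * i + 1, r = 2 * i + 2;
        let smallest = i;
        if (l < this.a.length && this.a[l] < this.a[smallest]) smallest = l;
        if (r < this.a.length && this.a[r] < this.a[smallest]) smallest = r;
        if (smallest === i) break;
        [this.a[i], this.a[smallest]] = [this.a[smallest], this.a[i]];
        i = smallest;
      }
    }
    return top;
  }
}

// Keep the k largest numbers seen so far in a min-heap of size k: O(N log k)
function topK(numbers, k) {
  const heap = new MinHeap();
  for (const x of numbers) {
    if (heap.size() < k) {
      heap.push(x);
    } else if (x > heap.peek()) { // x beats the smallest of the current top k
      heap.pop();
      heap.push(x);
    }
  }
  return heap.a; // the k largest values, in heap order (not sorted)
}

// topK([5, 1, 9, 3, 7, 2, 8], 3); // => the three largest: 7, 8, 9 (in heap order)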


Q20: 
When Merge Sort is preferred over Quick Sort?

Answer

Mergesort advantages over Quicksort are:

  • Mergesort is worst-case O(n log n); Quicksort is average-case O(n log n), but has a worst case of O(n^2)
  • Mergesort is stable by design, equal elements keep their original order
  • Mergesort is well suited to be implemented parallel (multithreading)
  • Mergesort uses about 30% fewer comparisons than Quicksort. This is an often overlooked advantage, because a comparison can be quite expensive (e.g. when comparing several fields of database rows)
  • Mergesort is a superior algorithm for sorting linked lists, since it makes fewer total comparisons
  • Linked lists can be merged with only O(1) space overhead instead of O(n) space overhead

Quicksort usually is better than mergesort for two reasons:

  • Quicksort has better locality of reference than mergesort, which means that the accesses performed in quicksort are usually faster than the corresponding accesses in mergesort.
  • Quicksort uses worst-case O(log n) memory (if implemented correctly), while mergesort requires O(n) memory due to the overhead of merging.

Scenarios when quicksort is worse than mergesort:

  • Array is already sorted
  • All elements in the array are the same
  • Array is sorted in reverse order


Q21: 
Why is Merge sort preferred over Quick sort for sorting Linked Lists?

Answer
  • Quicksort depends on being able to index (or randomly access) into an array or similar structure. When that's possible, it's hard to beat Quicksort.
  • You can't index directly into a linked list very quickly. That is, if myList is a linked list, then myList[x], were it possible to write such syntax, would involve starting at the head of the list and following the first x links. That would have to be done twice for every comparison that Quicksort makes, and that would get expensive real quick.
  • Same thing on disk: Quicksort would have to seek and read every item it wants to compare.
  • Merge Sort is faster in these situations, because it reads the items sequentially, typically making log n passes over the data. There is much less I/O involved, and much less time spent following links in a linked list (see the sketch after this list).
  • Quicksort is fast when the data fits into memory and can be addressed directly. Mergesort is faster when data won't fit into memory or when it's expensive to get to an item.
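To illustrate, here is a hedged sketch of Merge Sort on a singly linked list in JavaScript; the node shape { value, next } is an assumption made for this example:

// Assumed node shape for this sketch: { value, next }
function mergeSortList(head) {
  if (head === null || head.next === null) return head;

  // Split the list in half with slow/fast pointers - no random access needed
  let slow = head, fast = head.next;
  while (fast !== null && fast.next !== null) {
    slow = slow.next;
    fast = fast.next.next;
  }
  const rightHead = slow.next;
  slow.next = null;

  // Sort each half, then merge by relinking nodes (O(1) extra space per merge)
  return mergeLists(mergeSortList(head), mergeSortList(rightHead));
}

function mergeLists(a, b) {
  const dummy = { value: null, next: null };
  let tail = dummy;
  while (a !== null && b !== null) {
    if (a.value <= b.value) { tail.next = a; a = a.next; }
    else { tail.next = b; b = b.next; }
    tail = tail.next;
  }
  tail.next = (a !== null) ? a : b; // append whichever list has nodes left
  return dummy.next;
}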
