Algorithm Cheat Sheet: Must-Know Algorithms for Interviews
Are you preparing for a technical interview at a top tech company? Whether you’re aiming for FAANG (Facebook, Amazon, Apple, Netflix, Google) or any other tech giant, having a solid grasp of key algorithms is crucial. This comprehensive cheat sheet will guide you through the must-know algorithms that frequently appear in coding interviews. We’ll cover their concepts, implementations, and typical use cases to help you ace your next technical interview.
1. Binary Search
Binary search is an efficient algorithm for finding an element in a sorted array. It works by repeatedly dividing the search interval in half.
Key Concepts:
- Time Complexity: O(log n)
- Space Complexity: O(1) for iterative, O(log n) for recursive
- Requires a sorted array
Implementation:
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1  # Target not found
Use Cases:
- Searching in large sorted datasets
- Implementing efficient lookup operations
- Finding insertion points in sorted arrays
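For the "insertion point" use case, Python's standard bisect module does this directly. A quick sketch using the binary_search function above (sorted_nums is just example data):
import bisect

sorted_nums = [1, 3, 4, 7, 9, 11]           # example data, already sorted
print(binary_search(sorted_nums, 7))         # 3 (index of the target)
print(bisect.bisect_left(sorted_nums, 5))    # 3 (index where 5 would be inserted to keep order)
print(bisect.bisect_left(sorted_nums, 7))    # 3 (leftmost index for an existing value)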
2. Depth-First Search (DFS)
DFS is a graph traversal algorithm that explores as far as possible along each branch before backtracking.
Key Concepts:
- Time Complexity: O(V + E) where V is the number of vertices and E is the number of edges
- Space Complexity: O(V)
- Can be implemented recursively or iteratively with an explicit stack (see the sketch after the recursive version)
Implementation (Recursive):
def dfs(graph, node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    print(node, end=' ')
    for neighbor in graph[node]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
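As noted in the key concepts, DFS can also be written iteratively with an explicit stack. A minimal sketch, assuming the graph is an adjacency list of lists (the name dfs_iterative is illustrative, not part of the original code):
def dfs_iterative(graph, start):
    # Illustrative sketch: explicit stack instead of recursion
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        print(node, end=' ')
        # Reversing keeps the visit order closer to the recursive version
        for neighbor in reversed(graph[node]):
            if neighbor not in visited:
                stack.append(neighbor)
    return visited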
Use Cases:
- Detecting cycles in graphs
- Topological sorting
- Solving maze problems
- Finding connected components
3. Breadth-First Search (BFS)
BFS is another graph traversal algorithm; it explores a graph level by level, visiting every node at the current depth before moving on to the nodes at the next depth level.
Key Concepts:
- Time Complexity: O(V + E)
- Space Complexity: O(V)
- Implemented using a queue
Implementation:
from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    visited.add(start)
    while queue:
        node = queue.popleft()
        print(node, end=' ')
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
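A quick usage sketch, assuming the graph is stored as an adjacency list (the graph below is just example data):
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A', 'D'],
    'D': ['B', 'C'],
}
bfs(graph, 'A')  # Prints: A B C D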
Use Cases:
- Finding shortest paths in unweighted graphs
- Web crawling
- Social networking features (e.g., finding degrees of separation)
- GPS navigation systems
4. Quick Sort
Quick Sort is a divide-and-conquer algorithm that picks an element as a pivot and partitions the array around the chosen pivot.
Key Concepts:
- Time Complexity: Average O(n log n), Worst case O(n²)
- Space Complexity: O(log n) auxiliary space (recursion stack) for the in-place version
- Classically implemented in place via partitioning; the simple version below uses extra lists for readability (an in-place sketch follows it)
Implementation:
def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)
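The list-comprehension version above is easy to read but allocates new lists at every level. Here is a minimal sketch of the classic in-place variant using Lomuto partitioning with the last element as the pivot (the name quicksort_inplace is illustrative):
def quicksort_inplace(arr, low=0, high=None):
    # Illustrative sketch: sorts arr in place
    if high is None:
        high = len(arr) - 1
    if low < high:
        # Lomuto partition: move the pivot (last element) to its final index
        pivot = arr[high]
        i = low - 1
        for j in range(low, high):
            if arr[j] <= pivot:
                i += 1
                arr[i], arr[j] = arr[j], arr[i]
        arr[i + 1], arr[high] = arr[high], arr[i + 1]
        p = i + 1
        quicksort_inplace(arr, low, p - 1)   # Sort the left part
        quicksort_inplace(arr, p + 1, high)  # Sort the right part
    return arr

nums = [5, 2, 9, 1]
quicksort_inplace(nums)
print(nums)  # [1, 2, 5, 9]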
Use Cases:
- General-purpose sorting
- Efficiently sorting large datasets
- Implementing other algorithms that require sorted data
5. Merge Sort
Merge Sort is another divide-and-conquer algorithm that divides the input array into two halves, recursively sorts them, and then merges the two sorted halves.
Key Concepts:
- Time Complexity: O(n log n)
- Space Complexity: O(n)
- Stable sorting algorithm
Implementation:
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i, j = 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
Use Cases:
- External sorting
- Sorting linked lists
- Counting inversions in an array (see the sketch below)
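A common interview twist on merge sort is counting inversions, i.e., pairs (i, j) with i < j and arr[i] > arr[j]. A minimal sketch that counts them during the merge step (the name count_inversions is illustrative):
def count_inversions(arr):
    # Illustrative sketch: returns (sorted copy, number of inversions)
    if len(arr) <= 1:
        return arr, 0
    mid = len(arr) // 2
    left, inv_left = count_inversions(arr[:mid])
    right, inv_right = count_inversions(arr[mid:])
    merged, i, j, inv = [], 0, 0, inv_left + inv_right
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
            inv += len(left) - i  # Every remaining left element forms an inversion
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inv

print(count_inversions([2, 4, 1, 3, 5])[1])  # 3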
6. Dijkstra’s Algorithm
Dijkstra’s algorithm finds the shortest paths from a source node to all other nodes in a weighted graph, which may represent, for example, a road network.
Key Concepts:
- Time Complexity: O((V + E) log V) with a binary heap
- Space Complexity: O(V)
- Works on graphs with non-negative edge weights
Implementation:
import heapq

def dijkstra(graph, start):
    distances = {node: float('infinity') for node in graph}
    distances[start] = 0
    pq = [(0, start)]
    while pq:
        current_distance, current_node = heapq.heappop(pq)
        if current_distance > distances[current_node]:
            continue
        for neighbor, weight in graph[current_node].items():
            distance = current_distance + weight
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                heapq.heappush(pq, (distance, neighbor))
    return distances
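The function above expects the graph as a dictionary mapping each node to a dictionary of its neighbors and edge weights. A small usage example (the graph is just example data):
graph = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'B': 5, 'C': 1},
}
print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}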
Use Cases:
- GPS navigation systems
- Network routing protocols
- Finding shortest paths in social networks
7. Dynamic Programming
Dynamic Programming is a method for solving complex problems by breaking them down into simpler subproblems. It is especially useful for optimization problems.
Key Concepts:
- Optimal Substructure
- Overlapping Subproblems
- Memoization (Top-down) or Tabulation (Bottom-up) approaches
Example: Fibonacci Sequence (Bottom-up Tabulation)
def fibonacci(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
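The code above is the bottom-up (tabulation) approach. An equivalent top-down sketch using memoization via functools.lru_cache (the name fibonacci_memo is illustrative; for very large n, the iterative version avoids Python's recursion limit):
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_memo(n):
    # Top-down: recurse on subproblems, cache each result so it is computed only once
    if n <= 1:
        return n
    return fibonacci_memo(n - 1) + fibonacci_memo(n - 2)

print(fibonacci_memo(10))  # 55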
Use Cases:
- Solving optimization problems
- Sequence alignment in bioinformatics
- Resource allocation in economics
- Shortest path algorithms
8. Binary Tree Traversals
Binary tree traversals are fundamental algorithms for processing tree structures. The three main types are in-order, pre-order, and post-order traversals.
Key Concepts:
- In-order: Left, Root, Right
- Pre-order: Root, Left, Right
- Post-order: Left, Right, Root
Implementation:
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def inorder_traversal(root):
    result = []
    if root:
        result.extend(inorder_traversal(root.left))
        result.append(root.val)
        result.extend(inorder_traversal(root.right))
    return result

def preorder_traversal(root):
    result = []
    if root:
        result.append(root.val)
        result.extend(preorder_traversal(root.left))
        result.extend(preorder_traversal(root.right))
    return result

def postorder_traversal(root):
    result = []
    if root:
        result.extend(postorder_traversal(root.left))
        result.extend(postorder_traversal(root.right))
        result.append(root.val)
    return result
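A quick check on a small example tree:
#       1
#      / \
#     2   3
#    / \
#   4   5
root = TreeNode(1, TreeNode(2, TreeNode(4), TreeNode(5)), TreeNode(3))
print(inorder_traversal(root))    # [4, 2, 5, 1, 3]
print(preorder_traversal(root))   # [1, 2, 4, 5, 3]
print(postorder_traversal(root))  # [4, 5, 2, 3, 1]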
Use Cases:
- Evaluating mathematical expressions
- Creating a copy of a tree
- Deleting a tree
- Solving problems on binary search trees
9. Heap (Priority Queue)
A Heap is a specialized tree-based data structure that satisfies the heap property. It’s commonly used to implement priority queues.
Key Concepts:
- Max Heap: Parent nodes are always greater than or equal to their children
- Min Heap: Parent nodes are always less than or equal to their children
- Time Complexity: O(log n) for insert and delete operations
Implementation (Min Heap):
import heapq

class MinHeap:
    def __init__(self):
        self.heap = []

    def push(self, val):
        heapq.heappush(self.heap, val)

    def pop(self):
        return heapq.heappop(self.heap)

    def peek(self):
        return self.heap[0] if self.heap else None

    def size(self):
        return len(self.heap)
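A short usage sketch; since heapq only provides a min-heap, a common trick for max-heap behavior is to push negated values (example data below is illustrative):
h = MinHeap()
for val in [5, 1, 8, 3]:
    h.push(val)
print(h.peek())  # 1 (smallest element)
print(h.pop())   # 1

# Max-heap behavior via negation
max_heap = []
for val in [5, 1, 8, 3]:
    heapq.heappush(max_heap, -val)
print(-heapq.heappop(max_heap))  # 8 (largest element)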
Use Cases:
- Implementing priority queues
- Dijkstra’s algorithm for shortest paths
- Huffman coding for data compression
- Heap Sort algorithm
10. Trie (Prefix Tree)
A Trie is an efficient information retrieval data structure. It’s particularly useful for string-related problems and prefix matching.
Key Concepts:
- Each edge from a node represents a character; a path from the root spells a prefix
- Nodes marked as word endings represent stored words (words need not end at leaf nodes)
- Efficient for prefix matching operations
Implementation:
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for char in word:
            if char not in node.children:
                node.children[char] = TrieNode()
            node = node.children[char]
        node.is_end = True

    def search(self, word):
        node = self.root
        for char in word:
            if char not in node.children:
                return False
            node = node.children[char]
        return node.is_end

    def starts_with(self, prefix):
        node = self.root
        for char in prefix:
            if char not in node.children:
                return False
            node = node.children[char]
        return True
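A quick usage sketch (the words below are just example data):
trie = Trie()
for word in ["car", "card", "care"]:
    trie.insert(word)
print(trie.search("car"))      # True
print(trie.search("ca"))       # False ("ca" is only a prefix)
print(trie.starts_with("ca"))  # True
print(trie.starts_with("dog")) # False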
Use Cases:
- Autocomplete features
- Spell checkers
- IP routing tables
- Solving word games (e.g., Boggle)
Conclusion
Mastering these algorithms is crucial for succeeding in technical interviews, especially at top tech companies. However, it’s not just about memorizing implementations; understanding the underlying concepts, time and space complexities, and appropriate use cases is equally important.
Remember, practice is key. Implement these algorithms from scratch, solve related problems, and analyze their performance in different scenarios. This hands-on experience will not only prepare you for interviews but also make you a better programmer overall.
As you prepare for your interviews, consider using platforms like AlgoCademy that offer interactive coding tutorials and AI-powered assistance. These resources can help you progress from basic coding skills to mastering complex algorithmic problems, giving you the confidence you need to tackle any technical interview.
Good luck with your interview preparation, and may this algorithm cheat sheet serve as a valuable reference in your coding journey!