Welcome to another exciting tutorial from AlgoCademy! Today, we’re diving deep into a classic algorithmic problem that frequently appears in coding interviews and competitions: the Longest Increasing Subsequence (LIS). This problem is not only a favorite among interviewers at top tech companies like Google, Facebook, and Amazon, but it’s also a fantastic way to sharpen your dynamic programming skills.

What is the Longest Increasing Subsequence?

Before we jump into the solution, let’s clearly define the problem. The Longest Increasing Subsequence is exactly what it sounds like: given a sequence of numbers, we need to find the length of the longest subsequence (a sequence that appears in the same relative order, but not necessarily contiguous) where all elements are in increasing order.

For example, consider the sequence: [10, 9, 2, 5, 3, 7, 101, 18]

The longest increasing subsequence is [2, 5, 7, 101], which has a length of 4. Note that there can be multiple longest increasing subsequences, but we’re only interested in the length of the longest one.

Why is this Problem Important?

The LIS problem is significant for several reasons:

  1. It’s a classic example of dynamic programming, showcasing how complex problems can be broken down into simpler subproblems.
  2. It has real-world applications in fields like bioinformatics (for comparing genetic sequences) and version control systems.
  3. It’s a stepping stone to understanding more complex algorithmic concepts.
  4. It’s frequently asked in technical interviews, especially at FAANG companies.

Approaching the Problem

There are multiple ways to solve the LIS problem, each with its own time and space complexity. We’ll start with the simplest approach and gradually optimize our solution.

1. Brute Force Approach

The most straightforward (but least efficient) way to solve this problem is to generate all possible subsequences, check if each is increasing, and keep track of the longest one. However, this approach has a time complexity of O(2^n), which is impractical for large inputs.
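As a sketch of what this looks like (illustration only, not something you would run on large inputs), we can enumerate every subsequence with itertools.combinations, which preserves relative order, and keep the longest strictly increasing one. The function name lis_brute_force is ours, not standard:

```python
from itertools import combinations

def lis_brute_force(nums):
    """Exponential-time sketch: try every subsequence and keep the longest
    strictly increasing one. For illustration only -- O(2^n)."""
    best = 0
    n = len(nums)
    for length in range(1, n + 1):
        # combinations() yields subsequences in their original relative order
        for combo in combinations(nums, length):
            if all(a < b for a, b in zip(combo, combo[1:])):
                best = max(best, length)
    return best

print(lis_brute_force([10, 9, 2, 5, 3, 7, 101, 18]))  # Output: 4
```

Even at n = 30 this would examine over a billion subsequences, which is exactly why we need the dynamic programming approach below.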

2. Dynamic Programming Approach

Dynamic Programming (DP) is where things get interesting. The key idea in DP is to break down the problem into smaller subproblems and store their results for future use. For the LIS problem, we can define our subproblem as follows:

Let DP[i] be the length of the longest increasing subsequence that ends with the element at index i.

Now, for each element, we look at all previous elements; whenever the current element is greater than a previous one, we have a chance to extend the subsequence ending at that previous element.

Implementation of the DP Approach

Let’s implement this approach in Python:

def lengthOfLIS(nums):
    if not nums:
        return 0
    
    n = len(nums)
    dp = [1] * n  # Initialize DP array with 1s
    
    for i in range(1, n):
        for j in range(i):
            if nums[i] > nums[j]:
                dp[i] = max(dp[i], dp[j] + 1)
    
    return max(dp)

# Example usage
nums = [10, 9, 2, 5, 3, 7, 101, 18]
print(lengthOfLIS(nums))  # Output: 4

Let’s break down this solution:

  1. We initialize a DP array with all 1s because each individual element is an increasing subsequence of length 1.
  2. We iterate through the array, and for each element, we look at all previous elements.
  3. If the current element nums[i] is greater than a previous element nums[j], we can extend the subsequence ending at index j, giving a candidate length of dp[j] + 1.
  4. We take the maximum of dp[i] and every such candidate; if no smaller previous element exists, dp[i] keeps its default value of 1 (the element on its own).
  5. Finally, we return the maximum value in the DP array, which represents the length of the longest increasing subsequence.
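To make the steps above concrete, here is the same O(n^2) DP instrumented to print the dp array as it fills in (the name lengthOfLIS_verbose is ours, added just for this trace):

```python
def lengthOfLIS_verbose(nums):
    """Same O(n^2) DP as above, printing dp after each outer iteration."""
    if not nums:
        return 0
    n = len(nums)
    dp = [1] * n  # each element alone is an increasing subsequence of length 1
    for i in range(1, n):
        for j in range(i):
            if nums[i] > nums[j]:
                dp[i] = max(dp[i], dp[j] + 1)
        print(f"after i={i}: dp = {dp}")
    return max(dp)

lengthOfLIS_verbose([10, 9, 2, 5, 3, 7, 101, 18])
# final dp: [1, 1, 1, 2, 2, 3, 4, 4] -> answer 4
```

Reading the final dp array: dp[6] = 4 says the longest increasing subsequence ending at 101 has length 4, which matches [2, 5, 7, 101].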

This approach has a time complexity of O(n^2) and a space complexity of O(n), which is a significant improvement over the brute force approach.

Optimizing Further: The Patience Sorting Approach

While the DP solution is good, we can do even better. There’s a clever approach that solves the LIS problem in O(n log n) time, inspired by the card game Patience (the family of solitaire games) via a strategy known as patience sorting.

The idea is to maintain a list of piles. We iterate through the array, and for each number, we perform a binary search to find the leftmost pile whose top element is greater than or equal to the current number. If such a pile exists, we place the current number on top of that pile. If not, we start a new pile with the current number.

Here’s the implementation in Python:

import bisect

def lengthOfLIS(nums):
    if not nums:
        return 0
    
    piles = []
    for num in nums:
        pile_idx = bisect.bisect_left(piles, num)
        if pile_idx == len(piles):
            piles.append(num)
        else:
            piles[pile_idx] = num
    
    return len(piles)

# Example usage
nums = [10, 9, 2, 5, 3, 7, 101, 18]
print(lengthOfLIS(nums))  # Output: 4

This solution is not only more efficient but also more elegant. Let’s break it down:

  1. We use Python’s bisect module to perform binary search efficiently.
  2. For each number, we find the leftmost pile where we can place this number (using binary search).
  3. If no such pile exists (i.e., this number is greater than the top of all piles), we start a new pile.
  4. Otherwise, we place this number on top of the found pile, replacing the existing top.
  5. The number of piles at the end is equal to the length of the longest increasing subsequence.

This approach has a time complexity of O(n log n) due to the binary search for each element, and a space complexity of O(n) in the worst case.
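It can help to watch the piles evolve. Here is the same algorithm with a print statement added after each element (lis_with_trace is our name for this instrumented sketch):

```python
import bisect

def lis_with_trace(nums):
    """Patience-style LIS length, printing the pile tops after each element."""
    piles = []
    for num in nums:
        # leftmost pile whose top is >= num
        idx = bisect.bisect_left(piles, num)
        if idx == len(piles):
            piles.append(num)   # num is larger than every pile top: new pile
        else:
            piles[idx] = num    # replace that pile's top with the smaller num
        print(f"{num:>4} -> piles = {piles}")
    return len(piles)

lis_with_trace([10, 9, 2, 5, 3, 7, 101, 18])
# piles evolve: [10] [9] [2] [2,5] [2,3] [2,3,7] [2,3,7,101] [2,3,7,18]
```

Notice the final piles list is [2, 3, 7, 18], which is not itself one of the longest increasing subsequences; only its length (4) is guaranteed to be correct.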

Understanding the Intuition

The patience sorting approach might seem a bit magical at first, so let’s dive deeper into the intuition behind it:

  1. Each pile represents a potential end to an increasing subsequence.
  2. By always trying to place a number in the leftmost possible pile, we’re ensuring that we’re building the longest possible increasing subsequence.
  3. The number of piles equals the length of the longest increasing subsequence: a new pile is created only when the current number is larger than the top of every existing pile, which happens exactly when it can extend an increasing subsequence of the current maximum length. (Note that the pile tops themselves do not necessarily form an actual LIS.)

It’s worth noting that while this approach gives us the correct length of the LIS, reconstructing the actual subsequence requires some additional bookkeeping.
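One simple way to do that bookkeeping, sketched here on top of the O(n^2) DP (the helper name reconstruct_lis and the prev array are ours): record, for each index, which previous index gave the best extension, then walk those links backwards from the best endpoint.

```python
def reconstruct_lis(nums):
    """O(n^2) DP with predecessor links to recover one actual LIS (a sketch)."""
    if not nums:
        return []
    n = len(nums)
    dp = [1] * n
    prev = [-1] * n  # prev[i] = index of the element before nums[i] in its best subsequence
    for i in range(1, n):
        for j in range(i):
            if nums[i] > nums[j] and dp[j] + 1 > dp[i]:
                dp[i] = dp[j] + 1
                prev[i] = j
    # walk back from the index where the longest subsequence ends
    end = max(range(n), key=lambda i: dp[i])
    result = []
    while end != -1:
        result.append(nums[end])
        end = prev[end]
    return result[::-1]

print(reconstruct_lis([10, 9, 2, 5, 3, 7, 101, 18]))  # Output: [2, 5, 7, 101]
```

The same idea extends to the O(n log n) version, but there you must track, for each element, which pile it landed in and what was on the adjacent pile at that moment.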

Variations and Related Problems

The Longest Increasing Subsequence problem has several interesting variations and related problems:

  1. Longest Decreasing Subsequence: This is essentially the same problem, but we’re looking for a decreasing sequence instead of an increasing one. You can solve it by either reversing the input array or by changing the comparison in the LIS algorithm.
  2. Longest Bitonic Subsequence: A bitonic subsequence is one that first increases and then decreases. This can be solved by running the LIS algorithm twice – once from left to right and once from right to left.
  3. Maximum Sum Increasing Subsequence: Instead of finding the longest subsequence, we want to find the increasing subsequence with the maximum sum. This requires a slight modification to our DP approach.
  4. Box Stacking Problem: Given a set of boxes with dimensions (h, w, d), we need to find the tallest possible stack of boxes where each box can only be placed on top of a strictly larger box. This is a 3D version of the LIS problem.
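To illustrate how small the modification for variation 3 is: we replace "length of the best subsequence ending at i" with "sum of the best subsequence ending at i". A hedged sketch (the function name is ours):

```python
def max_sum_increasing_subsequence(nums):
    """DP variation: sums[i] = maximum sum of an increasing subsequence
    ending at index i, instead of its length."""
    if not nums:
        return 0
    sums = list(nums)  # base case: each element alone is a subsequence
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[i] > nums[j]:
                sums[i] = max(sums[i], sums[j] + nums[i])
    return max(sums)

print(max_sum_increasing_subsequence([1, 101, 2, 3, 100, 4, 5]))  # Output: 106
```

Here the best answer is 1 + 2 + 3 + 100 = 106, even though [1, 2, 3, 4, 5] is a longer increasing subsequence; only the initialization and the quantity being maximized changed relative to the LIS code.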

Practice Problems

To truly master the Longest Increasing Subsequence problem and its variations, practice is key. Here are some problems you can try:

  1. LeetCode 300: Longest Increasing Subsequence
  2. LeetCode 354: Russian Doll Envelopes (A 2D version of LIS)
  3. LeetCode 368: Largest Divisible Subset (A variation of LIS)
  4. LeetCode 673: Number of Longest Increasing Subsequence

Conclusion

The Longest Increasing Subsequence problem is a classic example of how dynamic programming can transform a seemingly complex problem into a manageable one. By breaking down the problem into smaller subproblems and leveraging previously computed results, we can achieve efficient solutions to problems that would be infeasible with brute force approaches.

Moreover, the patience sorting approach to LIS showcases how creative thinking can lead to even more efficient algorithms. This problem serves as an excellent introduction to more advanced algorithmic concepts and is a staple in coding interviews at top tech companies.

As you continue your journey in algorithmic problem solving, remember that understanding the underlying principles is just as important as knowing the solution. The ability to recognize patterns, break down problems, and optimize solutions are skills that will serve you well in your coding career, whether you’re aiming for a position at a FAANG company or working on your own projects.

Keep practicing, stay curious, and happy coding!

Further Reading