Given an array of integers, remove the duplicates in-place such that each unique element appears only once.
Do not allocate extra space for another array. You must do this by modifying the input array in-place with O(1) extra memory.
The resulting array can be in any order.
Example:
Input: [2, 3, 1, 1, 4, 3, -2, 1] Output: [2, 3, 1, 4, -2]
Your algorithm should run in O(n log n) time and use O(1) extra space.
The core challenge of this problem is to remove duplicates from an array in-place, meaning we cannot use additional space for another array. This is a common problem in data processing where we need to ensure the uniqueness of elements in a dataset.
Common applications include data cleaning, where duplicate entries need to be removed, and in algorithms where unique elements are required for further processing.
Potential pitfalls include misunderstanding the in-place requirement and using additional space, or not achieving the required time complexity.
To solve this problem, we can break the approach down as follows:
A naive solution would involve using an additional data structure like a set to keep track of unique elements. However, this violates the O(1) extra space constraint.
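To make the contrast concrete, here is a sketch of that naive approach (the function name `removeDuplicatesNaive` is illustrative). The hash set makes each lookup fast, but the set itself grows to O(n) extra space, which is exactly what the constraint forbids:

```cpp
#include <vector>
#include <unordered_set>

// Naive approach, shown only for contrast: a hash set tracks values we have
// already seen, but the set costs O(n) extra space, violating the constraint.
std::vector<int> removeDuplicatesNaive(const std::vector<int>& nums) {
    std::unordered_set<int> seen;
    std::vector<int> result;
    for (int x : nums) {
        // insert(...).second is true only the first time a value appears
        if (seen.insert(x).second) {
            result.push_back(x);
        }
    }
    return result;
}
```

Note that this version preserves the order of first occurrence, which the sorting approach below does not; since the problem allows any output order, that difference is acceptable.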
We can sort the array first, which takes O(n log n) time. Then, we can use a single pass to remove duplicates, which takes O(n) time. This approach meets the time complexity requirement and uses O(1) extra space.
Here is a step-by-step breakdown of the algorithm:
1. Sort the array in ascending order so that duplicate elements become adjacent.
2. Initialize a pointer p to 0; it marks the position of the last unique element written so far.
3. Traverse the array starting from the second element. Whenever the current element differs from the previous one, increment p and update nums[p] with the current element.
4. Resize the array to p + 1 elements; after the loop, the first p + 1 elements of the array are unique.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

void removeDuplicates(vector<int>& nums) {
    // Guard: an empty array has no duplicates, and the resize below
    // would otherwise incorrectly grow it to one element
    if (nums.empty()) return;

    // Step 1: Sort the array so duplicates become adjacent
    sort(nums.begin(), nums.end());

    // Step 2: p marks the position of the last unique element written so far
    int p = 0;

    // Step 3: Traverse the sorted array
    for (size_t i = 1; i < nums.size(); ++i) {
        // If the current element differs from the previous one, it is unique
        if (nums[i] != nums[i - 1]) {
            // Advance p and copy the unique element into place
            p++;
            nums[p] = nums[i];
        }
    }

    // Step 4: Shrink the array to contain only the unique elements
    nums.resize(p + 1);
}

int main() {
    vector<int> nums = {2, 3, 1, 1, 4, 3, -2, 1};
    removeDuplicates(nums);

    // Output the result
    for (int num : nums) {
        cout << num << " ";
    }
    cout << endl;
    return 0;
}
The time complexity of this approach is O(n log n) due to the sorting step; the subsequent pass is O(n). The auxiliary space is O(1) aside from the stack space std::sort may use internally, since we modify the input array in place rather than allocating another array.
Potential edge cases include:
- An empty array
- An array in which every element is the same
- An array that already contains no duplicates

Examples:
Input: [] Output: []
Input: [1, 1, 1, 1] Output: [1]
Input: [1, 2, 3, 4] Output: [1, 2, 3, 4]
To test the solution comprehensively, we should include a variety of test cases: an empty array, a single-element array, an array of all duplicates, an already-unique array, and arrays mixing positive and negative values.
Using a testing framework like Google Test can help automate and manage these tests.
When approaching such problems, it's important to: understand the space and time constraints before choosing a data structure, consider sorting as a preprocessing step that brings duplicates together, and verify the solution against edge cases such as empty input.
In this blog post, we discussed how to remove duplicates from an array in-place with O(n log n) time complexity and O(1) extra space. We covered the problem definition, approach, algorithm, code implementation, complexity analysis, edge cases, and testing. Understanding and solving such problems is crucial for improving algorithmic thinking and coding skills.
For further reading and practice, consider working through related in-place array problems, such as two-pointer and partitioning exercises, on coding practice platforms.