Given an array of integers, remove the duplicates in-place such that each unique element appears only once.
Do not allocate extra space for another array. You must do this by modifying the input array in-place with O(1) extra memory.
The resulting array can be in any order.
Example:
Input: [2, 3, 1, 1, 4, 3, -2, 1]
Output: [2, 3, 1, 4, -2]
Your algorithm should run in O(n log n) time and use O(1) extra space.
The core challenge of this problem is to remove duplicates from an array in-place, meaning we cannot use additional space for another array. This is a common problem in data processing where we need to ensure data uniqueness without using extra memory.
Common applications include data cleaning, where duplicate entries need to be removed, and in algorithms where unique elements are required for further processing.
Potential pitfalls include misunderstanding the in-place requirement and using additional data structures, which would violate the constraints.
To solve this problem, we will compare two approaches and then walk through the optimized one step by step. Let's discuss the naive solution first:
Naive Solution: We could use an additional array to store unique elements, but this violates the O(1) extra space constraint.
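For illustration only, here is a minimal sketch of that naive approach; it produces the correct result but allocates an auxiliary array, so it breaks the O(1) extra-space requirement (the name removeDuplicatesNaive is just a placeholder for this sketch).
// Naive approach (for contrast only): collects unique values in a new array,
// which uses O(n) extra space and is therefore not allowed by the problem.
function removeDuplicatesNaive(nums) {
  const unique = [];
  for (const value of nums) {
    // includes() rescans the collected values, so this is also O(n^2) time
    if (!unique.includes(value)) {
      unique.push(value);
    }
  }
  return unique;
}
console.log(removeDuplicatesNaive([2, 3, 1, 1, 4, 3, -2, 1])); // Output: [ 2, 3, 1, 4, -2 ]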
Optimized Solution: By sorting the array first, we can then use a single pass to remove duplicates in-place.
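To see why sorting helps, notice how the duplicates in the example input end up next to each other once the array is sorted (demo is just an illustrative variable name):
// After sorting, equal values are adjacent, so a single left-to-right scan
// can detect duplicates by comparing each element with its predecessor.
const demo = [2, 3, 1, 1, 4, 3, -2, 1];
demo.sort((a, b) => a - b);
console.log(demo); // Output: [ -2, 1, 1, 1, 2, 3, 3, 4 ]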
Here is a step-by-step breakdown of the optimized algorithm:
1. Sort the array so that duplicate values become adjacent.
2. Initialize a pointer p to 0; it tracks the position of the last unique element found so far.
3. Traverse the array from index 1. Whenever the current element is different from the previous one, increment p and copy the current element to the position indicated by p.
4. Truncate the array to its first p + 1 elements and return it.
/**
* Remove duplicates from an array in-place.
* @param {number[]} nums - The input array.
* @return {number[]} The array with duplicates removed.
*/
function removeDuplicates(nums) {
  // Nothing to do for an empty array
  if (nums.length === 0) return nums;
  // Step 1: Sort the array so that duplicate values become adjacent
  nums.sort((a, b) => a - b);
  // Step 2: Initialize a pointer to track the position of the last unique element
  let p = 0;
  // Step 3: Traverse the array
  for (let i = 1; i < nums.length; i++) {
    // Step 4: If the current element is different from the previous one
    if (nums[i] !== nums[i - 1]) {
      // Step 5: Increment p and place the current element at position p
      p++;
      nums[p] = nums[i];
    }
  }
  // Step 6: Truncate the array in-place so only the first p + 1 unique elements remain
  nums.length = p + 1;
  return nums;
}
// Example usage:
const input = [2, 3, 1, 1, 4, 3, -2, 1];
const output = removeDuplicates(input);
console.log(output); // Output: [ -2, 1, 2, 3, 4 ]
Time Complexity: The sorting step takes O(n log n) time, and the traversal step takes O(n) time. Therefore, the overall time complexity is O(n log n).
Space Complexity: The algorithm uses O(1) extra space as it modifies the array in-place.
Consider the following edge cases: an empty array, an array where every element is identical, an array that already contains no duplicates, and an array containing negative numbers.
Testing these edge cases ensures the robustness of the solution.
To test the solution comprehensively, consider the following test cases:
// Test case 1: Empty array
console.log(removeDuplicates([])); // Output: []
// Test case 2: Array with all identical elements
console.log(removeDuplicates([1, 1, 1, 1])); // Output: [1]
// Test case 3: Array with no duplicates
console.log(removeDuplicates([1, 2, 3, 4])); // Output: [1, 2, 3, 4]
// Test case 4: Array with some duplicates
console.log(removeDuplicates([2, 3, 1, 1, 4, 3, -2, 1])); // Output: [-2, 1, 2, 3, 4]
When approaching such problems, keep a few tips in mind: confirm exactly what the in-place and O(1) extra-space constraints allow before writing any code, check whether sorting can bring the elements you need to compare next to each other, use a write pointer to overwrite unwanted values instead of building a new array, and always test edge cases such as empty arrays and arrays made up of a single repeated value. The write-pointer idea carries over to other in-place array problems, as the sketch below suggests.
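As a hedged illustration of that write-pointer tip, here is a small sketch that keeps only the elements satisfying a predicate, in-place; the helper compactInPlace is hypothetical and not part of the solution above.
// Keeps only the elements for which keep(value) returns true, in-place.
// The write pointer w marks where the next kept element should go.
function compactInPlace(nums, keep) {
  let w = 0;
  for (const value of nums) {
    if (keep(value)) {
      nums[w] = value;
      w++;
    }
  }
  nums.length = w; // drop the leftover tail
  return nums;
}
// Example: keep only the non-negative values from the sample input.
console.log(compactInPlace([2, 3, 1, 1, 4, 3, -2, 1], (x) => x >= 0)); // Output: [ 2, 3, 1, 1, 4, 3, 1 ]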
In this blog post, we discussed how to remove duplicates from an array in-place with O(n log n) time complexity and O(1) extra space. We covered the problem definition, approach, algorithm, code implementation, complexity analysis, edge cases, and testing. Understanding and solving such problems is crucial for efficient data processing and algorithm design.
Keep practicing and exploring further to enhance your problem-solving skills!
For further reading and practice, explore related in-place array problems and other exercises that use the two-pointer technique.