Exploring Algorithms for Arrays and Strings: Two Pointers, Sliding Window, and Prefix Sum
makkentoshh
Posted on July 2, 2023
Arrays and strings are fundamental data structures in computer science and programming. Efficiently manipulating and processing these data types is crucial for solving a wide range of problems. In this dev post, we will delve into three popular algorithmic techniques for working with arrays and strings: Two Pointers, Sliding Window, and Prefix Sum. We will explore their concepts and provide code examples to illustrate their practical applications. But before we start, we need to understand why we use these techniques and what Big O notation means.
In programming, Big O notation is a way to describe the performance characteristics of an algorithm. It provides an estimate of how the algorithm’s execution time or space requirements grow as the input size increases. Big O notation is commonly used to analyze the time complexity and space complexity of algorithms.
- Time Complexity (Big O Notation for Execution Time): Time complexity represents the amount of time an algorithm takes to run as a function of the input size. It describes the growth rate of the algorithm’s execution time relative to the input size. Big O notation provides an upper bound on the worst-case scenario of an algorithm’s time complexity.
Here are some commonly encountered time complexities:
O(1) — Constant Time: The algorithm’s execution time does not depend on the input size. It performs a fixed number of operations regardless of the input size.
O(log n) — Logarithmic Time: The algorithm’s execution time increases logarithmically with the input size. Examples include binary search or certain divide-and-conquer algorithms.
O(n) — Linear Time: The algorithm’s execution time increases linearly with the input size. It performs a constant number of operations for each element in the input.
O(n log n) — Linearithmic Time: The algorithm’s execution time grows in proportion to the input size multiplied by the logarithm of the input size. It is often seen in efficient sorting algorithms like merge sort or quicksort.
O(n²) — Quadratic Time: The algorithm’s execution time increases quadratically with the input size. It typically involves nested iterations over the input.
O(2^n) — Exponential Time: The algorithm’s execution time doubles with each additional input element. It is commonly found in brute force or recursive algorithms that explore all possible combinations.
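To make these categories concrete, here is a minimal TypeScript sketch (the function names are made up for illustration, not taken from any library) contrasting constant, linear, and quadratic time:

function getFirst(nums: number[]): number {
  // O(1): a single array access, independent of the input size
  return nums[0];
}

function sumAll(nums: number[]): number {
  // O(n): one operation per element
  let total = 0;
  for (const num of nums) {
    total += num;
  }
  return total;
}

function hasDuplicatePair(nums: number[]): boolean {
  // O(n²): nested loops compare every pair of elements
  for (let i = 0; i < nums.length; i++) {
    for (let j = i + 1; j < nums.length; j++) {
      if (nums[i] === nums[j]) return true;
    }
  }
  return false;
}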
- Space Complexity (Big O Notation for Memory Usage): Space complexity measures the amount of memory an algorithm requires as a function of the input size. It describes the growth rate of the algorithm’s memory usage relative to the input size. Similar to time complexity, Big O notation provides an upper bound on the worst-case scenario of an algorithm’s space complexity.
Common space complexities include:
O(1) — Constant Space: The algorithm uses a fixed amount of memory that does not depend on the input size.
O(n) — Linear Space: The algorithm’s memory usage increases linearly with the input size.
O(n²) — Quadratic Space: The algorithm’s memory usage grows quadratically with the input size.
It’s worth noting that space complexity can also depend on factors such as auxiliary data structures used within the algorithm or the depth of recursion in recursive algorithms.
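As a rough illustration (again with hypothetical function names), compare constant and linear extra space:

function sumInPlace(nums: number[]): number {
  // O(1) extra space: only a single accumulator variable is allocated
  let total = 0;
  for (const num of nums) {
    total += num;
  }
  return total;
}

function doubled(nums: number[]): number[] {
  // O(n) extra space: a new array of the same length is created
  return nums.map((num) => num * 2);
}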
Understanding the time and space complexity of an algorithm is crucial for assessing its efficiency, scalability, and potential performance issues. By analyzing these complexities, developers can make informed decisions about algorithm selection, optimization, and trade-offs in their code.
The Two Pointers technique is commonly used to solve problems involving arrays or strings by maintaining two pointers that traverse the data structure in a specific manner. This technique is particularly useful when dealing with sorted arrays or searching for pairs or subarrays with specific properties.
function reverseString(s: string[]): void {
  let left: number = 0;
  for (let right = s.length - 1; right >= left; right--) {
    const temp: string = s[left]; // remember the left value before it is overwritten
    s[left] = s[right];
    s[right] = temp;
    left++;
  }
}
In this code snippet, the reverseString function takes an array of strings (s) as input and uses two pointers, left and right, to swap elements from the beginning and end of the array. By incrementing left and decrementing right in each iteration, the function effectively reverses the order of elements in the array.
This approach has a time complexity of O(n), as it performs a single linear pass through the array.
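Since the technique shines on sorted input, here is another short sketch (a hypothetical pairWithSum helper, not part of the original example) that checks whether a sorted array contains two numbers adding up to a target:

function pairWithSum(sortedNums: number[], target: number): boolean {
  // assumes sortedNums is sorted in ascending order
  let left = 0;
  let right = sortedNums.length - 1;
  while (left < right) {
    const sum = sortedNums[left] + sortedNums[right];
    if (sum === target) return true;
    if (sum < target) {
      left++; // sum is too small, move the left pointer right to increase it
    } else {
      right--; // sum is too large, move the right pointer left to decrease it
    }
  }
  return false;
}

Each pointer moves at most n times in total, so the whole scan runs in O(n) time with O(1) extra space.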
The Sliding Window technique is useful for solving problems that involve finding or processing subarrays or substrings of a given array or string. It involves maintaining a “window”, a subarray or substring that adjusts its size or position as it moves through the data structure.
function findMaxAverage(nums: number[], k: number): number {
  let curr = 0; // sum of the current window; k is the fixed window size
  for (let i = 0; i < k; i++) {
    curr += nums[i];
  }
  let ans = curr;
  for (let i = k; i < nums.length; i++) {
    curr += nums[i] - nums[i - k]; // slide the window: add the new element, drop the oldest one
    ans = Math.max(ans, curr); // track the maximum sum over all windows of size k
  }
  return ans / k;
}
In this code snippet, the findMaxAverage function takes an array of numbers (nums) and a window size (k) as input. It uses a sliding window approach to calculate the maximum average of a subarray of length k. By maintaining the current sum (curr) and updating it as the window slides through the array, the function finds the maximum sum and divides it by k to obtain the maximum average.
The time complexity of the given code is O(n), where n is the length of the nums array.
Here’s a breakdown of the time complexity analysis:
The first for loop iterates k times, where k is the window size, so its time complexity is O(k). Since k is at most n, this is bounded by O(n).
The second for loop iterates from k to nums.length - 1, which is n - k iterations. Hence, the time complexity of this loop is O(n - k).
Within the second for loop, each iteration performs constant-time operations such as addition, subtraction, and comparison. Therefore, the time complexity of the operations inside the loop is O(1).
Adding these together, O(k) + O(n - k) = O(n), so the overall time complexity of the given code is O(n).
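The window does not have to stay a fixed size. As a rough sketch (a hypothetical longestSubarrayWithSumAtMost helper, assuming all numbers are non-negative), here is a variable-size window that grows on the right and shrinks from the left:

function longestSubarrayWithSumAtMost(nums: number[], limit: number): number {
  // assumes every element of nums is non-negative
  let left = 0;
  let windowSum = 0;
  let best = 0;
  for (let right = 0; right < nums.length; right++) {
    windowSum += nums[right]; // expand the window to the right
    while (windowSum > limit) {
      windowSum -= nums[left]; // shrink from the left until the constraint holds again
      left++;
    }
    best = Math.max(best, right - left + 1); // window [left, right] satisfies the constraint
  }
  return best;
}

Each element is added to the window once and removed at most once, so this variant is still O(n).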
The Prefix Sum technique is often employed when dealing with cumulative calculations or finding subarrays with specific properties. It involves precomputing and storing prefix sums, which represent the cumulative sums of elements up to a certain index in an array or string.
function runningSum(nums: number[]): number[] {
  const prefix = [nums[0]];
  for (let i = 1; i < nums.length; i++) {
    prefix.push(nums[i] + prefix[prefix.length - 1]); // add the new value to the previous prefix sum
  }
  return prefix;
}
In this code snippet, the runningSum function takes an array of numbers (nums) as input and calculates the running sum by utilizing the prefix sum technique. It iterates through the array, adding the current element to the last element in the prefix array, and appends the result to the prefix array. The function returns the resulting prefix sum array.
Overall, the dominant factor in the time complexity is the for loop, so the function runs in O(n) time; it also uses O(n) extra space for the prefix array.
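Once the prefix array is built, range sums can be answered in constant time. Here is a small sketch (a hypothetical rangeSum helper built on top of the array returned by runningSum):

function rangeSum(prefix: number[], left: number, right: number): number {
  // sum of nums[left..right] (inclusive), using the prefix array from runningSum
  return left === 0 ? prefix[right] : prefix[right] - prefix[left - 1];
}

// Example usage (values chosen for illustration):
// const prefix = runningSum([3, 1, 4, 1, 5]); // [3, 4, 8, 9, 14]
// rangeSum(prefix, 1, 3); // 1 + 4 + 1 = 6

After an O(n) precomputation, each range-sum query costs only O(1), which is the main payoff of the prefix sum technique.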
Conclusion
The Two Pointers, Sliding Window, and Prefix Sum techniques are powerful tools for solving algorithmic problems related to arrays and strings. By leveraging these techniques, developers can improve the efficiency and effectiveness of their solutions. Understanding these concepts and their applications can be immensely valuable in tackling a wide range of programming challenges.
Please feel free to experiment and add your own code examples to further explore these techniques!