Developing Efficient Algorithms - Measuring Algorithm Efficiency Using Big O Notation


Paul Ngugi

Posted on July 15, 2024


Algorithm design is the process of developing a mathematical procedure for solving a problem, and algorithm analysis is the process of predicting an algorithm's performance. The preceding two chapters introduced classic data structures (lists, stacks, queues, priority queues, sets, and maps) and applied them to solve problems. This chapter uses a variety of examples to introduce common algorithmic techniques (dynamic programming, divide-and-conquer, and backtracking) for developing efficient algorithms.

Big O notation expresses an algorithm's time complexity as a function of the input size; you can ignore multiplicative constants and nondominating terms in that function. Suppose two algorithms perform the same task, such as search (linear search vs. binary search). Which one is better? To answer this question, you might implement both algorithms and run the programs to measure their execution times. But there are two problems with this approach:

  • First, many tasks run concurrently on a computer. The execution time of a particular program depends on the system load.
  • Second, the execution time depends on specific input. Consider, for example, linear search and binary search. If an element to be searched happens to be the first in the list, linear search will find the element quicker than binary search.
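
To make the comparison concrete, here is a minimal binary search sketch in Java (the method name and signature are illustrative, not from the original text; it assumes the array is sorted in ascending order):

```java
/** A minimal binary search sketch: returns the index of key in a sorted
    array, or -1 if it is absent. Each step halves the search range. */
public static int binarySearch(int[] list, int key) {
    int low = 0;
    int high = list.length - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // avoids overflow for large indices
        if (key < list[mid]) {
            high = mid - 1;
        } else if (key == list[mid]) {
            return mid;
        } else {
            low = mid + 1;
        }
    }
    return -1; // key is not in the array
}
```

If the key happens to be the first element, a sequential scan finds it in a single comparison, whereas binary search begins probing in the middle of the array; this is exactly why timing one specific input can be misleading.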

It is very difficult to compare algorithms by measuring their execution time. To overcome these problems, a theoretical approach was developed to analyze algorithms independently of computers and specific input. This approach approximates how an algorithm's execution time is affected by a change in the input size. In this way, you can see how fast an algorithm's execution time increases as the input size increases, and you can compare two algorithms by examining their growth rates.

Consider linear search. The linear search algorithm compares the key with the elements in the array sequentially until the key is found or the array is exhausted. If the key is not in the array, it requires n comparisons for an array of size n. If the key is in the array, it requires n/2 comparisons on average. The algorithm’s execution time is proportional to the size of the array. If you double the size of the array, you will expect the number of comparisons to double. The algorithm grows at a linear rate. The growth rate has an order of magnitude of n. Computer scientists use the Big O notation to represent the “order of magnitude.” Using this notation, the complexity of the linear search algorithm is O(n), pronounced as “order of n.” We call an algorithm with a time complexity of O(n) a linear algorithm, and it exhibits a linear growth rate.
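
As a concrete illustration, here is a minimal linear search sketch in Java (the class and method names are illustrative, not from the original text):

```java
public class LinearSearchDemo {
    /** Returns the index of key in list, or -1 if it is absent.
        At most n comparisons for an array of size n, hence O(n). */
    public static int linearSearch(int[] list, int key) {
        for (int i = 0; i < list.length; i++) {
            if (list[i] == key) {
                return i; // best case: key is the first element (1 comparison)
            }
        }
        return -1; // worst case: key is absent (n comparisons)
    }

    public static void main(String[] args) {
        int[] list = {4, 2, 9, 7, 5};
        System.out.println(linearSearch(list, 9)); // prints 2
        System.out.println(linearSearch(list, 3)); // prints -1
    }
}
```

Doubling the length of list doubles the worst-case comparison count, which is the linear growth rate described above.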

For the same input size, an algorithm's execution time may vary, depending on the input. An input that results in the shortest execution time is called the best-case input, and an input that results in the longest execution time is the worst-case input. Best-case analysis and worst-case analysis analyze an algorithm for its best-case and worst-case inputs, respectively. Neither is representative of typical behavior, but worst-case analysis is very useful: you can be assured that the algorithm will never be slower than its worst case. An average-case analysis attempts to determine the average amount of time among all possible inputs of the same size. Average-case analysis is ideal but difficult to perform, because for many problems it is hard to determine the relative probabilities and distributions of the various input instances. Worst-case analysis is easier to carry out, so analysis is generally conducted for the worst case.

The linear search algorithm requires n comparisons in the worst case and n/2 comparisons on average, assuming the key being searched for is in the list. Using the Big O notation, both cases require O(n) time, because the multiplicative constant (1/2) can be omitted. Algorithm analysis is focused on growth rate, and multiplicative constants have no impact on growth rates. The growth rate for n/2 or 100n is the same as for n, as illustrated in the Growth Rates table below. Therefore, O(n) = O(n/2) = O(100n).

**Growth Rates**

| f(n) | n = 100 | n = 200 | f(200)/f(100) |
|------|---------|---------|---------------|
| n    | 100     | 200     | 2             |
| n/2  | 50      | 100     | 2             |
| 100n | 10,000  | 20,000  | 2             |

Consider the algorithm for finding the maximum number in an array of n elements. If n is 2, it takes one comparison; if n is 3, it takes two. In general, it takes n - 1 comparisons to find the maximum number in a list of n elements. Algorithm analysis is for large input sizes; if the input size is small, there is no significance in estimating an algorithm's efficiency. As n grows larger, the n part in the expression n - 1 dominates the complexity. The Big O notation allows you to ignore the nondominating part (e.g., the -1 in n - 1) and highlight the important part (e.g., the n in n - 1). Therefore, the complexity of this algorithm is O(n).
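
A minimal Java sketch of this algorithm (the method name findMax is illustrative) makes the n - 1 comparisons visible:

```java
/** Returns the maximum of a non-empty array using exactly
    n - 1 comparisons, hence O(n). */
public static int findMax(int[] list) {
    int max = list[0];
    for (int i = 1; i < list.length; i++) { // n - 1 iterations
        if (list[i] > max) {
            max = list[i];
        }
    }
    return max;
}
```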

The Big O notation estimates the execution time of an algorithm in relation to the input size. If the time is not related to the input size, the algorithm is said to take constant time, with the notation O(1). For example, a method that retrieves an element at a given index in an array takes constant time, because the time does not grow as the size of the array increases.
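
For example, here is a sketch of such a constant-time retrieval (the method name elementAt is hypothetical):

```java
/** Retrieves the element at a given index. This is a single operation
    whose cost is independent of list.length, so it is O(1). */
public static int elementAt(int[] list, int index) {
    return list[index];
}
```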

The following mathematical summations are often useful in algorithm analysis:

1 + 2 + 3 + ... + (n - 1) + n = n(n + 1)/2

a^0 + a^1 + a^2 + ... + a^(n-1) + a^n = (a^(n+1) - 1)/(a - 1), for a ≠ 1

2^0 + 2^1 + 2^2 + ... + 2^(n-1) + 2^n = 2^(n+1) - 1
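
As one illustration of how the first summation arises in analysis, consider a doubly nested loop (a generic sketch, not tied to any particular algorithm in this post):

```java
/** Counts the iterations of a doubly nested loop. The inner loop runs
    (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 times in total, so the
    method is O(n^2) once constants and nondominating terms are dropped. */
public static long countPairs(int n) {
    long count = 0;
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            count++; // one constant-time operation per inner iteration
        }
    }
    return count; // equals n(n - 1)/2
}
```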

Time complexity is a measure of execution time using the Big O notation. Similarly, you can measure space complexity using the Big O notation. Space complexity measures the amount of memory space used by an algorithm. The space complexity for most algorithms presented in the book is O(n); that is, they exhibit a linear growth rate with respect to the input size. For example, the space complexity for linear search is O(n).
