Big O Notation: A Simple Explanation With Examples


Prince M

Posted on April 20, 2024


It’s hard to create efficient algorithms without understanding the time and space complexity of various operations. The concept of Big O notation helps programmers understand how quickly or slowly an algorithm will execute as the input size grows.

In this post, we’ll cover the basics of Big O notation, why it is used, and how to describe the time and space complexity of algorithms with examples.

Why do we use Big O?

As software engineers, our goal is to solve any given problem. There could be many solutions, so as good engineers, we need to choose the most efficient one. So how can we determine which solution is more efficient than the others? To answer that, let’s first start with what efficiency is.

What is efficiency?

In programming, efficiency is all about how many computational resources an algorithm uses. The fewer resources used, the more efficient the algorithm is.

Generally, we are concerned with only two types of resources:

  • Memory space needed (Space Complexity)
  • Time the algorithm takes to run (Time Complexity)

It may seem like we do not need Big O to measure time complexity. We could use a stopwatch to measure how long (in milliseconds or seconds) an algorithm takes to complete, and space complexity could be measured in KB and MB.

However, there is a huge problem with this kind of thinking.

For example, suppose I have two computers: one with an Intel i7 and 16 GB of RAM, and another with an Intel i3 and 4 GB of RAM. The runtime will not be the same when I run algorithm A on both devices. The i7 machine is obviously more powerful than the i3 one, so it will finish sooner.

But what if both computers have exactly the same configuration? The runtime can still differ, because factors outside our control, such as concurrency and multi-processing, influence the computer’s performance.
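For instance, you can time the same loop yourself with the standard console.time API (a small sketch of mine; the printed duration is illustrative and will differ between runs and machines):

const numbers = Array.from({ length: 1_000_000 }, (_, i) => i);

console.time("sum");
let total = 0;
for (const n of numbers) {
  total += n;
}
console.timeEnd("sum"); // prints e.g. "sum: 12.4ms" (varies per machine and per run)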

To solve this issue, we analyze algorithms on paper using Big O notation, for the following reasons.

  • Big O notation describes the efficiency of code objectively, without concrete units such as milliseconds or seconds.
  • It focuses on how the time and space requirements scale with the size of the input.
  • It prepares us for the worst-case scenario.

Now that you understand why we need Big O, let’s go over what time and space complexity are.

Time complexity

The time complexity of an algorithm is the number of steps it takes to complete its task. Time complexity is often used to compare different algorithms, as they all have different performance characteristics.

For example, one algorithm may run much more slowly than another, even if it appears that both should perform equally well on your problem set.

Space complexity

Space complexity is the amount of memory an algorithm uses. It’s measured in bytes, and it’s also called memory usage.

What contributes to space complexity?

  • Variables
  • Data Structures
  • Function Calls
  • Allocations

Space complexity matters more for large data sets, simply because they require much more memory to store than small ones do.

For example, if you have a list containing hundreds of thousands of numbers, the list itself already occupies a lot of memory, and a sorting algorithm like merge sort will allocate extra arrays that grow with the input. (Bubble sort, by contrast, sorts in place and only needs a constant amount of extra memory.) If your list only has 3 items, sorting won’t take up much space at all, because there is far less to store!
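To make the distinction concrete, here is a small sketch of my own (the function names are made up for illustration): both functions reverse an array, but one allocates a whole new array, using O(n) extra space, while the other swaps elements in place, using O(1) extra space.

function reversedCopy(array) {
  // Builds a second array of the same length → O(n) extra space
  const result = [];
  for (let i = array.length - 1; i >= 0; i--) {
    result.push(array[i]);
  }
  return result;
}

function reverseInPlace(array) {
  // Only two index variables, no matter how big the array is → O(1) extra space
  let left = 0;
  let right = array.length - 1;
  while (left < right) {
    [array[left], array[right]] = [array[right], array[left]];
    left++;
    right--;
  }
  return array;
}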

What is Big O?

Big O notation is a way to measure how an algorithm’s running time or space requirements grow as the input size grows.

For example, imagine a dentist who takes 30 minutes to treat one patient. As her line of patients grows, the time it takes her to treat all of them scales linearly with the number of patients waiting in line.


With this in mind, we can categorise her efficiency as linear, or, in Big O terms, O(n), where n is the number of patients: the time it takes her to finish her work scales linearly (proportionally) with n. We can use the same technique to determine the efficiency of algorithms.

Let’s calculate the dentist’s efficiency using a function.

let time = 0;
const patients = ['James', 'Robert', 'John', 'Patricia', 'Mary', 'Jennifer', 'Michael'];

function calculateTime() {
  for(let i = 0; i < patients.length; i++){
    time += 30;
  }
  console.log("Total Time 👉 ", time);
}

Note: I created this function only to explain how Big O maps onto programming logic. That’s why I did not simply calculate the time by multiplying the number of patients by 30.
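Calling the function with the seven patients above gives:

calculateTime(); // Total Time 👉 210 (7 patients × 30 minutes)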

We created the calculateTime function; inside it, we wrote a for loop that iterates over each element of the patients array.

function calculateTime() {
  for(let i = 0; i < patients.length; i++){ // 👉 O(n)
    time += 30; // 👉 O(1)
  }
  console.log("Total Time 👉 ", time); // 👉 O(1)
}

The time complexity of the above function is O(n). But how did I calculate the complexity of the function?

Let’s learn how to calculate Big O notation for any given algorithm.

Big O Rules

First, let’s go over the rules for calculating Big O.

  1. Worst Case
  2. Remove Constants
  3. Drop Non-Dominants

1. Worst Case

There are three types of analysis we can apply to an algorithm.

  1. Best Case
  2. Average Case
  3. Worst Case

1. Best case
The best case describes the fastest an algorithm can run for a given input.

let friends = ['Joey', 'Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler'];

function findJoey(){
  for(let i = 0; i < friends.length; i++){
    if(friends[i] === 'Joey'){
      console.log("Found!");
      break;
    }
  }
} 

We created the function findJoey to find “Joey” in the friends array.

Since "Joey" is the first element of the array, the loop runs only once and then exits, because we used the break statement.

So we can say the best case for this function is O(1).

2. Average Case

It provides a prediction about the algorithm’s running time when the input is random.

let friends = ['Rachel', 'Phoebe', 'Ross', 'Joey', 'Monica', 'Chandler'];

function findJoey(){
  for(let i = 0; i < friends.length; i++){
    if(friends[i] === 'Joey'){
      console.log("Found!");
      break;
    }
  }
} 

In this function, the input array places “Joey” at a random position; it could be second, third, or fourth.

3. Worst case
The worst case describes the slowest an algorithm can run for a given input.

let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];

function findJoey(){
  for(let i = 0; i < friends.length; i++){
    if(friends[i] === 'Joey'){
      console.log("Found!");
      break;
    }
  }
} 

In the above function we placed “Joey” at the end of the array, so the loop needs to iterate through the whole array to find the element. So the algorithm’s time complexity is O(n).

Even though the input is random, we always analyze an algorithm’s worst case for Big O.

2. Remove Constants

While calculating Big O, we should not consider any constants. Let’s understand this with an example.

let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];

function findJoey(){
  for(let i = 0; i < friends.length; i++){ // O(n)
    console.log("Step 👉", i + 1); // O(1)

    if(friends[i] === 'Joey'){ // O(1)
      console.log("Found!"); // O(1) 
    }
  }
} 

Let’s sum the complexity of the above function.

Complexity = O(n) + O(1) + O(1) + O(1)
           = O(n) + O(3)
           = O(n + 3)

After summing, our function’s complexity is O(n + 3).

When calculating the efficiency of a function, constant time does not matter. If our array were some crazy length, like 2 million, those constant-time lines would have a negligible effect on the efficiency of the function as a whole; we would still need to iterate through 2 million items in the array.

Let’s calculate it again.

Complexity = O(n + 3)
           = O(n)

So, after removing all the constants, it’s just O(n).

However, if a function contains only constant-time operations, then its complexity is constant time.

function steps(){
  console.log("Step 1"); // O(1)
  console.log("Step 2"); // O(1)
  console.log("Step 3"); // O(1)
}

If we count the steps, the complexity of this function is O(3). Instead of calling it O(3), we call it O(1), as we assume it will always take the same amount of time to run.

3. Drop Non-Dominants

In Big O we have a growth hierarchy.

From best to worst, the hierarchy is: O(1) < O(log n) < O(n) < O(n log n) < O(n ^ 2) < O(2 ^ n) < O(n!).

Don’t worry about the unfamiliar complexities; we’ll cover them all in this article.

The hierarchy lists the efficiency categories in order from good to bad: the first is the best and the last is the worst.

As we discussed earlier, when determining the efficiency of an algorithm in terms of Big O, we only care about the worst case.

let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];

function findJoey(){
  for(let i = 0; i < friends.length; i++){ // O(n)
    console.log("Step 👉", i + 1); // O(1)

    if(friends[i] === 'Joey'){ // O(1)
      console.log("Found!"); // O(1) 
    }
  }
} 

We calculated the function’s complexity as O(n): when a complexity is a sum of terms, we keep only the dominant one, and O(n) dominates O(1). The same rule turns something like O(n ^ 2 + n) into O(n ^ 2), as the sketch below shows.
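Here is a sketch of my own (the function name is made up for illustration): the nested loop contributes O(n ^ 2) and the single loop contributes O(n), so the total is O(n ^ 2 + n), and we drop the non-dominant O(n).

function printPairsThenItems(items){
  // Nested loop → O(n ^ 2)
  for(let i = 0; i < items.length; i++){
    for(let j = 0; j < items.length; j++){
      console.log(items[i], items[j]);
    }
  }

  // Single loop → O(n), dominated by the O(n ^ 2) part above
  for(let i = 0; i < items.length; i++){
    console.log(items[i]);
  }
}

Complexity = O(n ^ 2) + O(n)
           = O(n ^ 2)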

Now that you understand how Big O works, let’s discuss the major Big O complexities (time and space).

  • O(1) – Constant time
  • O(log n) – Logarithmic time
  • O(n) – Linear time
  • O(n log n) – Linearithmic (pseudo-linear) time
  • O(n ^ 2) – Quadratic time
  • O(2 ^ n) – Exponential time
  • O(n!) – Factorial time
  • O(m+n) or O(mn)

Big-O Complexity Chart | source: www.bigocheatsheet.com

How to Calculate Time Complexity

Let’s discuss Big O complexities.

1. O(1) – Constant time

Constant time complexity doesn’t grow. No matter how large the input size gets, the same number of steps is needed.

Since the exact constant does not matter, we use the number one, O(1), to represent constant time complexities.


But how could constant time come into existence? Let’s look at a very simple example.

You have an array and want to print its fourth element.

let friends = ['Rachel', 'Phoebe', 'Ross', 'Monica', 'Chandler', 'Joey'];

function printFourthIndex(){
  console.log(friends[3]); // O(1)
} 

This function takes the same number of steps no matter how large the array is. It won’t get more complicated just because the array has a million elements; you still perform the same steps. So the runtime complexity is constant.

2. O(log n) – Logarithmic time

You probably remember the logarithmic function from high school. It tells you how many times you have to multiply the base by itself to get the given number as a result.

Sounds complicated! Let’s understand it with an example.

Let’s take the number eight. We want to raise some number to some power to get eight.

In computer science, we can almost always assume that the base we raise to some power is two.

2 ^ 3 = 8, so log₂(8) = 3

We can see that if we raise two to the power of three, we get the number we’re looking for, eight; so log base two of eight is three.

Now that you know how logarithms work, let’s move on to the meaning of O(log n).

Let’s understand it with a simple function.

// n = 8

function logNFunc(n){
  while(n > 1){
    n = Math.floor(n / 2); // halve n each pass → the loop runs about log₂(n) times
  }
}

This function’s time complexity is O(log n). Let’s dig deeper to find out why.

So, when we pass a number into this function, the loop divides it by two (splits it in half) on every pass, and keeps going with the halved number until it reaches one.

n = 8

Iteration 1 = 8 / 2 = 4
Iteration 2 = 4 / 2 = 2
Iteration 3 = 2 / 2 = 1

As you can see, we need only three iterations to get down to one, and three is exactly log₂(8). The number of iterations grows logarithmically with n.


The good thing about logarithmic time is that its growth slows down as the input gets bigger, so the curve gets flatter and flatter. At really large inputs, the graph looks almost like a constant runtime complexity.

If a problem can be solved in logarithmic time, you cannot really consider it a problem. It is extremely efficient.
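As a quick sanity check (my own addition, not part of the original example), here is a version of the halving loop that counts its iterations; notice how slowly the count grows even when n explodes.

function countHalvings(n){
  let steps = 0;
  while(n > 1){
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
}

console.log(countHalvings(8));       // 3
console.log(countHalvings(1024));    // 10
console.log(countHalvings(1000000)); // 19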

3. O(n) – Linear time

We already discussed the linear time complexity using the dentist’s example.

Linear time complexity essentially means that the number of steps grows in direct proportion to the input size: you perform a fixed amount of work for each element of the input.


For example, if our input size is eight, we need to iterate eight times.

function nFunc(n){
  for(let i = 0; i < n; i++){ 
    console.log("Step 👉", i + 1); 
  }
} 

If a problem is solvable in linear time, it is not really a challenge either. It is very easy to solve, and the runtime complexity is extremely desirable.

4. O(n log n) – Pseudo-Linear time

If we combine the last two runtime complexities (logarithmic and linear), we get the pseudo-linear time. It is sometimes also called linearithmic.

This time complexity is often found in efficient sorting algorithms, such as merge sort, quicksort, and heap sort.

Let’s understand it with a simple example.

// n = 4
function nLogNFunc(n){
  let temp = n;
  while(n > 1){
    n = Math.floor(n/2);
    for(let i = 0; i < temp; i++){ 
      console.log("Step 👉", i + 1); 
    }
  }
} 

The function accepts one argument, n, and we will assume n = 4.

First of all, we create a temp variable and assign n to it, to keep a copy of the original value.

Next, we have a while loop in which we divide n by two on every pass. Inside it, a for loop iterates temp times, which means n times, since temp is a copy of the original n.

It’s a bit confusing, so let’s visualize it.

n = 4, temp = 4

Outer pass 1: n = 4 / 2 = 2 → inner loop runs temp = 4 times
Outer pass 2: n = 2 / 2 = 1 → inner loop runs temp = 4 times

Total = 2 outer passes (log₂ 4) × 4 inner steps (n) = 8 steps

As you can see, the outer loop’s time complexity is O(log n) and the inner loop’s time complexity is O(n).

Let’s put it all together.

Time Complexity = O(log n) * O(n)
                = O(n log n)



As you can see, the function is not quite linear, but since the logarithmic part grows slower and slower, the curve looks almost linear at larger numbers.

This is the last runtime complexity that we will consider very efficient. If a problem is solvable in O(n log n) time, it is not really a challenge.

5. O(n ^ 2) – Quadratic time

Quadratic time, O(n ^ 2), grows much faster as the input grows.

Examples of quadratic runtime are inefficient sorting algorithms like bubble sort, insertion sort, or selection sort, but also traversing a simple 2D array.

Let’s understand it with an example.

// n = 4
function nSquaredNFunc(n){
 for(let j = 0; j < n; j++){
   for(let k = 0; k < n; k++){
     console.log(j, k);
   }
 }
}  

The above function takes n as an argument. The top-level (outer) for loop iterates n times, and on each of those iterations the nested loop iterates n times as well.

When we run the function with n = 4, it will print a matrix like the one below.

0 0   0 1   0 2   0 3
1 0   1 1   1 2   1 3
2 0   2 1   2 2   2 3
3 0   3 1   3 2   3 3

It has 4 rows and 4 columns, so let’s count the total steps taken to print this matrix.

Time Complexity = rows * columns
                = 4 * 4
                = 16
            4²  = 16

So, if we take n = 8, the function will take 8 * 8 = 64 iterations to print the matrix. We can say the complexity of the function is O(n ^ 2).


You can see that it’s growing a lot faster here. It is still manageable, but quadratic runtime can sometimes be a challenge.
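Since bubble sort keeps coming up as the textbook quadratic algorithm, here is a minimal sketch of it (my own example): the two nested loops over the array are exactly what make it O(n ^ 2).

function bubbleSort(array){
  for(let i = 0; i < array.length; i++){
    for(let j = 0; j < array.length - i - 1; j++){
      if(array[j] > array[j + 1]){
        // Swap adjacent elements that are out of order
        [array[j], array[j + 1]] = [array[j + 1], array[j]];
      }
    }
  }
  return array;
}

console.log(bubbleSort([5, 1, 4, 2, 8])); // [ 1, 2, 4, 5, 8 ]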

6. O(2 ^ n) – Exponential time

Exponential time O(2 ^ n) increases extremely fast as the input grows.

Let’s understand it with the following function.

function fib(n){
  if(n === 0){
    return 0;
  }

  if(n === 1){
    return 1;
  }

  return fib(n - 1) + fib(n - 2)
}

In the above function, we use recursion to calculate the Fibonacci sequence.

If we visualize the function’s calls, for example for fib(4), the call tree looks like the one below.

fib(4)
├─ fib(3)
│  ├─ fib(2)
│  │  ├─ fib(1)
│  │  └─ fib(0)
│  └─ fib(1)
└─ fib(2)
   ├─ fib(1)
   └─ fib(0)

Here we can see that the number of calls doubles on every level: on the first level, fib() is called two times, and on the second level, four times. On the third level there are only two calls left, because our input is small.

Strictly counting, the fib() function’s complexity comes out around O(2 ^ (n - 1)). But 2 ^ (n - 1) is just 2 ^ n divided by a constant, and, as we discussed earlier, we drop constants. So we write O(2 ^ n).

An O(2^n) function’s exponential growth curve starts shallow and then rises rapidly.
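You can verify the blow-up with a small sketch (my own addition) that counts the recursive calls:

let calls = 0;

function fibCounted(n){
  calls++;
  if(n <= 1) return n;
  return fibCounted(n - 1) + fibCounted(n - 2);
}

calls = 0; fibCounted(10); console.log(calls); // 177
calls = 0; fibCounted(20); console.log(calls); // 21891

Ten more inputs, over a hundred times the work: that is exponential growth in action.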


If at all possible, you should avoid exponential runtime complexity.

7. O(n!) – Factorial time

This time complexity is even less desirable than exponential time.

Let’s first go through the definition of a factorial.

n! = n * (n - 1) * (n - 2) * ... * 3 * 2 * 1

We multiply n by every natural number smaller than n, all the way down to 1.

One runtime complexity that is even worse is of the form n ^ n, but it rarely occurs. Factorial time, however, does occur.

For example, it shows up when you try to find all possible permutations of a given set of elements, as in the following function.

function f(n){
  if(n === 0){
    return 0;
  }

  for(let i = 0; i < n; i++){
    f(n - 1); // n calls, each of which makes n - 1 calls, and so on → roughly n! calls
  }
}


I think it goes without saying that you want to avoid factorial time whenever possible. We do not consider this time complexity manageable.
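For a concrete factorial-time task, here is a sketch of my own that actually generates all the permutations; the result has exactly n! entries, so just producing them takes at least O(n!) steps.

function permutations(items){
  if(items.length <= 1) return [items];

  const result = [];
  for(let i = 0; i < items.length; i++){
    // Fix items[i] in front, then permute the remaining elements
    const rest = [...items.slice(0, i), ...items.slice(i + 1)];
    for(const perm of permutations(rest)){
      result.push([items[i], ...perm]);
    }
  }
  return result;
}

console.log(permutations([1, 2, 3]).length); // 6, i.e. 3!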

8. O(m + n) or O(mn)

Until now we calculated complexity using only one input, but what if we have multiple inputs?

function mn(m, n){
  for(let i = 0; i < m; i++){
    console.log("m => ", i);
  }

  for(let j = 0; j < n; j++){
    console.log("n => ", j);
  }
}

The above function accepts two inputs, and they can differ in size. So we cannot say it’s O(n + n) = O(2n) = O(n); instead, we should call it O(m + n), because the inputs might not be the same size.

function mn(m, n){
  for(let i = 0; i < m; i++){
    for(let j = 0; j < n; j++){
      console.log(i, j);
    }  
  }
}

In the above function we have a nested loop, so this function’s complexity is O(m * n) = O(mn).

We discussed the most common time complexities and how to calculate them.

How to Calculate Space Complexity

Let’s learn how to calculate space complexity.

1. O(1) – Constant Space

This is when the space required by the algorithm does not increase with the size of the input data set.

function constantSpace(n) {
    let result = n * n;
    console.log(result);
}

In this example, no matter the size of n, the space used by the algorithm remains constant because we’re only ever using two variables (n and result).

2. O(log n) – Logarithmic Space

When the space needed by the algorithm grows logarithmically with the size of the input data set.

function binarySearch(array, x, start = 0, end = array.length - 1) {
    // Base case: the search range is empty
    if (start > end) return false;

    let mid = Math.floor((start + end) / 2);

    if (array[mid] === x) return true;

    // Recurse into the half that can still contain x
    if (array[mid] < x)
        return binarySearch(array, x, mid + 1, end);

    return binarySearch(array, x, start, mid - 1);
}

In this recursive binary search example, the space complexity is O(log n): each call halves the problem size, so the call stack grows to about log n frames before the recursion bottoms out. (An iterative binary search, by contrast, reuses the same few variables and needs only O(1) extra space.)
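A quick usage check:

const sorted = [1, 3, 5, 7, 9];

console.log(binarySearch(sorted, 7)); // true
console.log(binarySearch(sorted, 4)); // false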

3. O(n) – Linear Space

When the space required by the algorithm increases linearly with the size of the input data set.

function linearSpace(n) {
    let array = [];
    for(let i = 0; i < n; i++) {
        array[i] = i;
    }
    console.log(array);
}

In this example, we’re creating an array of size n, so the space complexity is O(n).

4. O(n^2) – Quadratic Space

When the space required by the algorithm is proportional to the square of the size of the input data set.

function quadraticSpace(n) {
    let matrix = [];
    for(let i = 0; i < n; i++) {
        matrix[i] = [];
        for(let j = 0; j < n; j++) {
            matrix[i][j] = i + j;
        }
    }
    console.log(matrix);
}

In this example, we’re creating a 2D matrix of size n * n, so the space complexity is O(n^2).

Quick Tips to Identify Big O Notation

Identifying the Big O notation of an algorithm can seem daunting at first, but with practice, it becomes more intuitive. Here are some quick tips to help you:

1. O(1) – Constant Time

If an algorithm performs a fixed number of operations regardless of the input size, it has a constant time complexity. An example of this is accessing an array element by its index.

let value = array[index];

2. O(n) – Linear Time

If the number of operations is proportional to the size of the input, the algorithm has a linear time complexity. An example of this is a simple for loop that traverses an array or a list.

for(let i = 0; i < array.length; i++) {
    // code
}

3. O(log n) – Logarithmic Time

In logarithmic time complexity, the algorithm shrinks the problem size with each step. A common example is finding an element in a sorted array with binary search.

To keep it simple, let’s consider the operation of reducing the problem size by half with each step.

let i = n;
while (i > 1) {
    i = i / 2;
    // code
}

4. O(n^2) – Quadratic Time

If an algorithm performs operations in a pair of nested loops, it often has a quadratic time complexity. Bubble sort is an example of an algorithm with quadratic time complexity.

for(let i = 0; i < array.length; i++) {
    for(let j = 0; j < array.length - i - 1; j++) {
        // code
    }
}

5. O(n^3) – Cubic Time

If an algorithm performs operations in three nested loops, it has a cubic time complexity. An example is the naive algorithm for matrix multiplication.

for(let i = 0; i < matrix1.length; i++) {
    for(let j = 0; j < matrix2[0].length; j++) {
        for(let k = 0; k < matrix1[0].length; k++) {
            // code
        }
    }
}

6. O(2^n) – Exponential Time

If the number of operations doubles for each addition to the input size, the algorithm has an exponential time complexity. Recursive calculations of Fibonacci numbers often have this time complexity.

function fibonacci(n) {
    if(n <= 1) {
        return n;
    } else {
        return fibonacci(n - 1) + fibonacci(n - 2);
    }
}

Remember, these are general tips. The actual time complexity can depend on various factors, including the specific details of the algorithm and the nature of the input.

Conclusion

And there you have it - a comprehensive guide to Big O Notation.

We've gone through the intricacies of this concept, understanding its importance in evaluating the efficiency of algorithms. We've seen how it helps us predict the behavior of our code, allowing us to make informed decisions when it comes to optimization.

Remember, the goal isn't always to achieve O(1) but to understand the trade-offs we're making with each algorithmic choice.

As we continue to advance in our coding journey, let's keep Big O Notation as our constant companion, guiding us towards more efficient and effective programming.
