While taking lectures or learning Data Structures & Algorithms, beginners often tend to ignore this topic, which, as it turns out later, is a very important part of writing efficient algorithms and cracking interviews and coding challenges.

In this blog, I aim to explain ‘Big O’ in a simple manner. Hope you enjoy!

If you’ve done programming, you know that there can be multiple ways to solve a problem. And a good problem solver always looks for the **best possible solution**!

This is where ‘Big O’ helps us. Big O gives us an idea of the **relative performance** of an algorithm.

Why did I use the word *relative*?

How efficiently an algorithm, a program, or a piece of software runs on a computer is influenced not only by its own efficiency but also by hardware factors such as the processor, RAM, etc. This means that running the same algorithm on the same number of inputs can take different amounts of time on different machines.

This is why, with Big O, we talk about relative performance: the rate at which running time increases as the input to the algorithm grows, which is independent of the machine.

Another thing to note is that in ‘Big O’ notation, we talk about the worst-case complexity of an algorithm. Why?

*Consider that you have been preparing for a competitive exam for a while. You’ve taken several mock tests and have score data with you. On the easiest paper you scored 80/100, on the hardest paper you scored 65/100, and your average score, suppose, is 71/100.*

*Suppose there’s a cut-off for that exam. What would be the best and safest way to make sure you pass the exam?*

*1) Trying to score above the cut-off in easy tests. (Best case)*

*2) Trying to score above the cut-off on average. (Average case)*

*3) Trying to score above the cut-off even in the hardest of tests. (Worst case)*

*(3) will ensure that even if the test is the hardest and/or you are at your worst performance, you’ll still at least clear the cut-off.*

Similarly, making the algorithms we write efficient even in the worst possible case ensures that they perform well in every scenario.

Now that we understand what Big O is and its purpose, let’s move on to some examples to understand the concept.

```
printf("Hello world!");
```

In the above example, the input is constant, so the time taken does not grow; it remains constant even in the worst case. Therefore, in terms of Big O, we can represent it as **O(1)**.

```
for (i = 0; i < n; i++) {
    if (ans == arr[i]) {
        printf("%d", i);
    }
}
```

In the example above, we can see that the number of times the code executes depends on how soon we find the element of the array that matches ans.

Since we have to consider the worst-case scenario, we’ll assume that the nth element, i.e., arr[n-1], is the one we require. So, the complexity becomes **O(n)**.

What O(n) means is that the running time increases linearly as the number of inputs increases.

```
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        if (arr_1[i] == arr_2[j]) {
            printf("%d", i);
        }
    }
}
```

In this example, the worst case arises when we have to iterate through the whole of arr_1 and arr_2 to get our answer: when the last element of arr_1 matches the last element of arr_2, or when no elements match at all.

Thus the complexity in this case becomes O(n*n), i.e., **O(n²)**.

```
first = 0;
last = n - 1;
middle = (first + last) / 2;

while (first <= last)
{
    if (arr[middle] < num)
    {
        first = middle + 1;
    }
    else if (arr[middle] == num)
    {
        cout << num << " found at the location " << middle + 1 << "\n";
        break;
    }
    else
    {
        last = middle - 1;
    }
    middle = (first + last) / 2;
}
```

Above is the code for binary search, on a sorted array arr of n elements.

In binary search, the input array must be sorted.

We divide the given array in half: if the value to be found is less than the middle element, we abandon the second half and continue comparing only within the first half, and vice versa. This keeps going until we find the element we need.

We can observe that at each step, the array size N decreases by 50%. Thus, the complexity becomes **O(log N)** in this case.

The examples above help us start to understand how ‘Big O’ analysis works. Before we proceed, there are a few rules to keep in mind while working with ‘Big O’.

- If we get a complexity like O(n²+n), we **ignore the lower-degree terms** and consider only the highest-degree term. This is because, as the number of inputs increases, lower-degree terms become negligible compared to the higher-degree term. Thus, the complexity becomes O(n²).
- In a similar way, in a complexity like O(2n²+n+1), we can **ignore coefficients and constants**, with similar reasoning.
- If your algorithm is in the form “do this, then, when you’re all done, do that”, then you **add the runtimes**. For example, with if-else. *
- If your algorithm is in the form “do this for each time you do that”, then you **multiply the runtimes**. For example: nested ‘for’ loops. *

*(Points 3 & 4 are taken from ‘Cracking the Coding Interview’)*

Now that you know the basics of **‘Big O’ notation explained with examples**, you can practice some problems here to understand it more.

**Learning and understanding Big O notation** is very important for designing the best algorithms and solutions, especially in interviews and competitive coding contests.

Big O notation is usually not asked about directly, but it gives a clear understanding of which algorithms or solutions are better, and trains you to think that way!

Cheers! :)

PS: You can find full C++ running codes of examples mentioned in this article here.
