Big O notation is a way to describe the complexity of a function: how its time or memory requirements grow as the input size grows. To understand Big O notation, we need to understand the following terms:
| Term | Definition | Big O Notation |
| --- | --- | --- |
| Constant | A function that grows in a constant manner | O(1) |
| Linear | A function that grows in a linear manner | O(n) |
| Logarithmic | A function that grows in a logarithmic manner | O(log n) |
| Linearithmic | A function that grows in a linearithmic manner | O(n log n) |
| Quadratic | A function that grows in a quadratic manner | O(n^2) |
| Factorial | A function that grows in a factorial manner | O(n!) |
We'll look at these in more detail in the next section, in order of complexity.
Constant functions are the simplest to understand and easiest to predict: they take the same amount of time to run regardless of the input size. If such a function takes `2ms` to run, it will always take `2ms`, no matter how large `n` is. An example would be a function that takes in an array and returns its first element.
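A minimal sketch of such a constant-time function (the name `first` is my own):

```javascript
// O(1): returns the first element, doing the same amount of work
// whether the array has 3 elements or 3 million.
function first(arr) {
  return arr[0];
}

first([5, 10, 15]); // 5
```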
The next notation is `O(n)`, linear growth. This means that the function's running time grows directly with the size of the input. Let's say we have a function that takes an array of numbers and returns the sum of all of the numbers in the array; we can use this notation to estimate its time requirements. Here's what that would look like:
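A sketch of the summing function described above, named `linear` to match the discussion that follows:

```javascript
// O(n): touches each element of the array exactly once,
// so the running time grows in step with the array's length.
function linear(arr) {
  let sum = 0;
  for (const num of arr) {
    sum += num;
  }
  return sum;
}

linear([1, 2, 3]); // 6
```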
For the function `linear`, the input size is `n`, and the running time grows with `n`. To put this literally, if each element in the array takes `4ms` to process, then a 3-element array takes `12ms` to process. For each additional element, the function takes `4ms` more.
A more slowly growing Big O notation is `O(log n)`. An example of this would be a binary search function: one that takes a sorted array of numbers and returns the index of the number being searched for, halving the search range at each step.
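A sketch of binary search (the name `binarySearch` is my own; note it assumes the input is already sorted):

```javascript
// O(log n): halves the remaining search space on every iteration.
// Assumes arr is sorted in ascending order.
function binarySearch(arr, target) {
  let low = 0;
  let high = arr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) low = mid + 1;
    else high = mid - 1;
  }
  return -1; // target not present
}

binarySearch([1, 3, 5, 7, 9], 7); // 3
```

Doubling the array's length adds only one extra halving step, which is why the growth is logarithmic.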
Continuing on, we have linearithmic growth, `O(n log n)`. An example of this would be a merge sort function: one that takes an array of `n` numbers and sorts them in ascending order. Breaking down the complexity, the function does a linear amount of work (`n`) at each level of recursion, and the number of levels grows logarithmically with `n`. This function grows faster than a linear one, but is still able to handle large inputs.
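A sketch of merge sort (the names `mergeSort` and `merge` are my own):

```javascript
// O(n log n): splitting produces about log n levels of recursion,
// and merging does O(n) work at each level.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}

// Combine two already-sorted arrays into one sorted array.
function merge(left, right) {
  const out = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i)).concat(right.slice(j));
}

mergeSort([5, 2, 4, 1, 3]); // [1, 2, 3, 4, 5]
```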
Next we have quadratic growth, expressed as `O(n^2)`. An example of this would be a bubble sort function, which takes an array of numbers and sorts them in ascending order. It takes `n` elements and compares each element against the others in nested passes. This function grows rapidly and is not recommended for large inputs.
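A sketch of bubble sort (the name `bubbleSort` is my own):

```javascript
// O(n^2): two nested loops over the array, comparing adjacent pairs
// and swapping them when they are out of order.
function bubbleSort(arr) {
  const out = [...arr]; // copy so the input is not mutated
  for (let i = 0; i < out.length - 1; i++) {
    for (let j = 0; j < out.length - 1 - i; j++) {
      if (out[j] > out[j + 1]) {
        [out[j], out[j + 1]] = [out[j + 1], out[j]]; // swap
      }
    }
  }
  return out;
}

bubbleSort([3, 1, 2]); // [1, 2, 3]
```

Doubling the input roughly quadruples the number of comparisons, which is what makes quadratic algorithms impractical at scale.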
The most rapidly growing notation covered here is `O(n!)`. This means that the function grows in a factorial manner. An example of this would be a function that returns every possible ordering (permutation) of an array of numbers: it would take `n` elements and return `n!` possible orderings. This function grows extremely rapidly and is not recommended for large inputs.
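A sketch of such a permutation generator (the name `permutations` is my own):

```javascript
// O(n!): n choices for the first position, n - 1 for the second,
// and so on, yielding n! orderings in total.
function permutations(arr) {
  if (arr.length <= 1) return [arr];
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    for (const perm of permutations(rest)) {
      result.push([arr[i], ...perm]);
    }
  }
  return result;
}

permutations([1, 2, 3]).length; // 3! = 6
```

Even a modest input of 12 elements already produces 479,001,600 orderings, which illustrates why factorial growth is unusable beyond tiny inputs.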
While this seems very straightforward, unknown datasets present a new challenge. In most real-world scenarios, a calculation would be done to determine the best-case, worst-case, and average scenario. Take the following search function for example:
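The original sample isn't shown here; a plain linear search like the following fits the description (the name `search` is my own):

```javascript
// Linear search: best case O(1) (target is the first element),
// worst case O(n) (target is last or absent).
function search(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i;
  }
  return -1; // target not present
}

search([4, 8, 15], 15); // 2
```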
With this example, the worst-case scenario is that every element gets iterated over before the target is found; this is represented as `O(n)`. The best-case scenario is that the target is found at the beginning of the array, represented as `O(1)`. When allocating resources, it is important to consider the worst case and the frequency at which it may occur.
While we have only covered the most commonly referenced notation types, there are many more to explore and learn about. For more information, check out Harvard's CS50 materials.