Considering the sheer amount of data processed nowadays, algorithms need to be efficient in both the time and the space they use.

**Computational Complexity** is the study of algorithms in terms of the resources they require to run. **Time Complexity** measures the amount of time an algorithm requires to run.

Time Complexity is estimated by counting the number of elementary operations processed by the algorithm, assuming that each elementary operation takes up a fixed amount of time to process.

Big O Notation is also called **Asymptotic Notation**. It is a mathematical notation that describes the limiting behaviour of a function as its argument tends towards a particular value or infinity.

**Big O Notation** is used to analyze algorithms on the basis of the increase in the time they take to run with an increasing input size (denoted by n). Thus functions with the same growth rates will have the same Big O notations.

It mathematically describes an upper bound on an algorithm's time complexity, which makes it possible to compare worst-case behaviours of different algorithms.

If g and f are functions on the natural numbers, the function f is O(g), or Big O of g, if and only if there exist a constant `c > 0` and a natural number `n₀` such that `f(n) ≤ c·g(n) ∀ n ≥ n₀`.

**Summary of the common Time Complexities**

Some common Time Complexities which will be discussed are as follows:

| Name | Time Complexity |
| --- | --- |
| Constant Time | O(1) |
| Logarithmic Time | O(log n) |
| Linear Time | O(n) |
| Quasilinear Time | O(n log n) |
| Quadratic Time | O(n ^ 2) |
| Exponential Time | O(2 ^ n) |
| Factorial Time | O(n!) |

There are other complexity classes too, but these are the most common ones, and most algorithms fall into one of them.
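To get a feel for how differently these functions grow, the following sketch (purely illustrative) evaluates each of them at n = 10:

```python
import math

n = 10
for name, value in [
    ("O(1)",       1),
    ("O(log n)",   math.log2(n)),
    ("O(n)",       n),
    ("O(n log n)", n * math.log2(n)),
    ("O(n^2)",     n ** 2),
    ("O(2^n)",     2 ** n),
    ("O(n!)",      math.factorial(n)),
]:
    # Print each growth function and its value at n = 10
    print(f"{name:<12} {value:,.0f}")
```

Even at n = 10, the factorial function already dwarfs the others, which is why the higher complexity classes become impractical so quickly.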

When the time taken to run an algorithm does not depend on the size of the input, the algorithm is said to follow **Constant Time**. For such algorithms, the running time remains the same irrespective of how much data is processed.

For example, in Python 3:

```
def display(array):
    # Accesses only the first element, regardless of the array's length
    print(array[0])

a = [1, 2, 3, 4, 5]
display(a)
```

This prints the first element of the array irrespective of the length of the array passed, so its running time is independent of the input size.

Algorithms with constant time complexities are the best as the length of the arrays does not affect the running times of these algorithms.

An algorithm that cuts down the amount of input data it considers by a constant factor at every step follows **Logarithmic Time Complexity**. Binary Search and operations on Binary Trees typically exhibit this complexity. The Binary Search Python 3 code is as follows:

```
def BinarySearch(array, st, sp, search):
    # Searches for `search` in the sorted slice array[st..sp]
    mid = (st + sp) // 2
    if array[mid] == search:
        print(mid)
    elif array[mid] > search:
        # Target is in the left half, if anywhere
        if mid - 1 >= st:
            BinarySearch(array, st, mid - 1, search)
        else:
            print("Element Not Found!")
    else:
        # Target is in the right half, if anywhere
        if mid + 1 <= sp:
            BinarySearch(array, mid + 1, sp, search)
        else:
            print("Element Not Found!")
```
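Python's standard library provides the same halving mechanism through the `bisect` module; a minimal sketch (the sample array is purely illustrative):

```python
import bisect

a = [2, 5, 8, 12, 16, 23, 38]       # must already be sorted

i = bisect.bisect_left(a, 23)        # binary search: O(log n) comparisons
if i < len(a) and a[i] == 23:
    print(i)                         # index of 23 in the array
else:
    print("Element Not Found!")
```

Like any binary search, this only works on sorted input, since each step relies on the ordering to discard half of the remaining elements.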

**Linear Time Complexity** applies when the time required by an algorithm grows linearly with the size of the input. This is the best achievable time when every element of the input needs to be accessed.

The following code is the Python 3 code for Linear Search which follows Linear Time:

```
def LinearSearch(array, search):
    flag = 0            # initialised up front so empty arrays are handled
    i = 0
    while i < len(array):
        if array[i] == search:
            print(i)
            flag = 1
            break
        i = i + 1
    if flag == 0:
        print("Element not Found!!!")
```

In this code snippet, all the values of the array being passed are accessed for processing.

An algorithm follows **Quasilinear Time Complexity** when it performs a logarithmic-time amount of work for each of its n input elements. Sorting algorithms like merge sort and heap sort follow this complexity.

Quasilinear time is represented in the following code example by defining the Merge Sort function:

```
def merge(list2, le, m, ri):
    # Merge the sorted halves list2[le..m] and list2[m+1..ri]
    n1 = m - le + 1
    n2 = ri - m
    L = [0] * n1
    R = [0] * n2
    for i in range(0, n1):
        L[i] = list2[le + i]
    for j in range(0, n2):
        R[j] = list2[m + 1 + j]
    i = 0
    j = 0
    k = le
    while i < n1 and j < n2:
        if L[i] <= R[j]:
            list2[k] = L[i]
            i += 1
        else:
            list2[k] = R[j]
            j += 1
        k += 1
    while i < n1:
        list2[k] = L[i]
        i += 1
        k += 1
    while j < n2:
        list2[k] = R[j]
        j += 1
        k += 1

def mergeSort(list1, le, ri):
    if le < ri:
        m = (le + (ri - 1)) // 2
        mergeSort(list1, le, m)
        mergeSort(list1, m + 1, ri)
        merge(list1, le, m, ri)
```
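For comparison, Python's built-in `sorted` uses Timsort, which is also O(n log n); a one-line check with an illustrative sample list:

```python
data = [38, 27, 43, 3, 9, 82, 10]    # sample input, purely illustrative
print(sorted(data))                   # Timsort: O(n log n) comparisons
```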


An algorithm that does a linear amount of work for each of its input elements is said to follow **Quadratic Time Complexity**. The Bubble Sort algorithm is an example, as each of its elements is compared to the other elements.

The following is a Python 3 code for the Bubble Sort Algorithm:

```
def BubbleSort(array):
    i = len(array) - 1
    while i > 0:
        j = 0
        while j + 1 <= i:
            # Swap adjacent elements that are out of order
            if array[j] > array[j + 1]:
                array[j], array[j + 1] = array[j + 1], array[j]
            j = j + 1
        i = i - 1
    return array
```

When the time an algorithm takes roughly doubles with each additional input element, it is said to follow **Exponential Time Complexity**.

Brute Force algorithms often exhibit exponential time complexities. The recursive calculation of Fibonacci Numbers is an example of an algorithm in Exponential Time.

The Python 3 code is as follows:

```
def Fibonacci(n):
    # Each call spawns two more calls, giving O(2 ^ n) growth
    if n <= 1:
        return 1
    else:
        return Fibonacci(n - 1) + Fibonacci(n - 2)
```
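The exponential blow-up comes from recomputing the same subproblems over and over. As a side note, caching results (here with `functools.lru_cache`) removes that repetition and brings the running time down to linear:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def Fib(n):
    # Identical recurrence, but each value is computed only once
    if n <= 1:
        return 1
    return Fib(n - 1) + Fib(n - 2)

print(Fib(30))    # fast even for inputs where the naive version crawls
```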

An algorithm whose running time grows factorially with the input size is said to follow **Factorial Time**. Heap's algorithm, which generates all permutations of n elements, is an example that follows such a complexity, since there are n! permutations to produce.
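A minimal sketch of Heap's algorithm (function names are illustrative); producing all n! permutations inevitably takes O(n!) time:

```python
def heap_permutations(seq):
    # Heap's algorithm: generates all n! permutations of seq
    a = list(seq)
    result = []

    def generate(k):
        if k <= 1:
            result.append(a[:])
            return
        for i in range(k - 1):
            generate(k - 1)
            # Which element is swapped in depends on the parity of k
            if k % 2 == 0:
                a[i], a[k - 1] = a[k - 1], a[i]
            else:
                a[0], a[k - 1] = a[k - 1], a[0]
        generate(k - 1)

    generate(len(a))
    return result

print(len(heap_permutations([1, 2, 3])))    # 3! = 6 permutations
```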

While studying the time complexity of an algorithm, note that it may consist of several steps with different complexities. During analysis, only the highest-order (dominant) complexity should be considered.