## What is Asymptotic Analysis?

Asymptotic notation is a mathematical tool for describing the time or space complexity of an algorithm without having to implement it in a programming language. The measure is unaffected by machine-specific constants; it describes the significant part of the algorithm's cost.

By now, we have already discussed Approaches to efficiency analysis and Framework for efficiency analysis. It's time to move on to the mathematical interpretation of measuring algorithmic complexity.

Machine-specific constants include the machine’s hardware architecture, RAM, supported virtual memory, processor speed, available instruction set (RISC or CISC), and so on.

Asymptotic notation examines algorithms in a way that is not affected by any of these factors.

Finding the minimum element in an array of size n takes at most n − 1 comparisons.

This algorithm's asymptotic complexity is linear (on the order of n). This linear behavior is the primary term in the complexity formula: doubling the size of the array roughly doubles the number of comparisons.
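The linear behavior can be sketched in Python; the comparison counter below is our own instrumentation, not part of the algorithm proper:

```python
def find_min(arr):
    """Return the minimum of arr, counting element comparisons."""
    comparisons = 0
    minimum = arr[0]
    for x in arr[1:]:
        comparisons += 1          # one comparison per remaining element
        if x < minimum:
            minimum = x
    return minimum, comparisons

# An array of size n needs n - 1 comparisons; doubling n roughly doubles them.
_, c1 = find_min(list(range(100)))
_, c2 = find_min(list(range(200)))
print(c1, c2)  # 99 199
```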

The **primitive operation** is the most frequent operation appearing in the algorithm, and it depends on the problem. The primitive operation is the major component of cost.

As previously explained, the primitive operation for sorting and searching problems is a comparison. For adding arrays or matrices it is addition, and for the factorial problem it is multiplication.

The order of function growth is critical in evaluating the algorithm’s performance. Assume the running times of two algorithms A and B are f(n) and g(n), respectively.

f(n) = 2n^{2} + 5

g(n) = 10n

Here, n represents the size of the problem, while the polynomials f(n) and g(n) represent the number of basic operations performed by algorithms A and B, respectively. The running times of both functions for different input sizes are shown in the following table:

| n | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| f(n) | 7 | 13 | 23 | 37 | 55 | 77 | 103 |
| g(n) | 10 | 20 | 30 | 40 | 50 | 60 | 70 |

Algorithm A may outperform algorithm B for small input sizes; however, once the input size becomes sufficiently large (in this example, n ≥ 5), f(n) always runs slower (performs more steps) than g(n). As a result, understanding the growth rate of functions is critical.
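The table and the crossover can be reproduced with a short script (a sketch; the function names mirror the text):

```python
def f(n): return 2 * n**2 + 5   # steps of algorithm A
def g(n): return 10 * n         # steps of algorithm B

# Reproduce the table for n = 1..7.
for n in range(1, 8):
    print(n, f(n), g(n))

# Smallest n at which A becomes slower than B; since f grows faster,
# it stays slower from this point on.
crossover = next(n for n in range(1, 100) if f(n) > g(n))
print(crossover)  # 5
```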

Asymptotic notations describe a function's limiting behavior. For example, if f(n) = 8n^{2} + 4n − 32, then the term 4n − 32 becomes insignificant as n increases. As a result, the n^{2} term dominates the growth of f(n).

When doing complexity analysis, the following assumptions are made.

**Assumptions:**

- The actual cost of an operation is not considered.
- The abstract cost constant c is ignored: O(c·n^{2}) reduces to O(n^{2}).
- Only the leading term of a polynomial is considered: O(n^{3} + n) reduces to O(n^{3}).
- Multiplicative or divisive constants, if any, are dropped: O(2n^{2}) and O(n^{2}/2) both reduce to O(n^{2}).

## Types of Asymptotic Notations

Various notations such as Big Oh (Ο), Big Omega (Ω), Big Theta (Θ), Little Oh (ο), and Little Omega (ω) are used to describe the asymptotic running time of an algorithm.

### Big Oh Notation (Ο)

This notation is denoted by 'Ο' and pronounced **"Big Oh"**. Big Oh notation defines an **upper bound** for the algorithm: the running time of the algorithm cannot exceed its asymptotic upper bound for any random sequence of data.

Let f(n) and g(n) be two nonnegative functions indicating the running times of two algorithms. We say g(n) is an upper bound of f(n) if there exist positive constants c and n_{0} such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n_{0}. It is denoted as f(n) = Ο(g(n)).

In the following figure, the horizontal axis represents the problem size and the vertical axis represents the order of growth (steps required to solve a problem of size n) of the functions.

For small input sizes there may be many crossovers between the growth rates of f(n) and c·g(n), but once n becomes significantly large, f(n) always grows more slowly than c·g(n). This value of n is called the **crossover point** and is denoted as n_{0}.
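As a numeric illustration, the crossover point n_{0} can be located by brute force. The function pair and the constant c = 7 below are our own choices for the sketch:

```python
def f(n): return 6 * n + 3   # running time to be bounded
def g(n): return n           # candidate upper-bound function

c = 7
# Smallest n0 from which f(n) <= c*g(n) holds for every larger n checked.
n0 = next(n for n in range(1, 1000)
          if all(f(m) <= c * g(m) for m in range(n, 1000)))
print(n0)  # 3, since 6n + 3 <= 7n exactly when n >= 3
```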

**Loose bounds:**

All functions with a growth rate *higher* than a function's actual bound are loose upper bounds of that function:

23 = O(n) = O(n^{2}) = O(n^{3}) = O(n!)

6n + 3 = O(n^{2}) = O(n^{3}) = O(n!)

3n^{2} + 2n + 4 = O(n^{3}) = O(n!)

2n^{3} + 4n + 5 = O(2^{n}) = O(n!)

**Incorrect bounds:**

All the sets of functions with a growth rate *lower* than their actual bounds are called the incorrect bounds of those functions.

6n + 3 ≠ O(1)

3n^{2} + 2n + 4 ≠ O(n) ≠ O(1)

2n^{3} + 4n + 5 ≠ O(n^{2}) ≠ O(n) ≠ O(1)

For the function f(n) = 2n^{2} + n + 3:

f(n) = O(n^{2}) = O(n^{3}) = O(2^{n}) = O(n!) = O(n^{n})

f(n) ≠ O(n log n) ≠ O(n) ≠ O(log n) ≠ O(1)
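One informal way to distinguish valid from incorrect upper bounds is to check numerically whether f(n) ≤ c·g(n) holds for a large constant c over a range of n. This is a heuristic sketch, not a proof; the constant and range below are arbitrary choices:

```python
def holds_as_upper_bound(f, g, c=1000, upto=10_000):
    """Heuristic: does f(n) <= c*g(n) hold across the whole range?"""
    return all(f(n) <= c * g(n) for n in range(1, upto))

f = lambda n: 2 * n**2 + n + 3

print(holds_as_upper_bound(f, lambda n: n**2))   # True  -> O(n^2) holds (tight)
print(holds_as_upper_bound(f, lambda n: n**3))   # True  -> O(n^3) holds (loose)
print(holds_as_upper_bound(f, lambda n: n))      # False -> O(n) is incorrect
```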

### Big Omega Notation (Ω)

This notation is denoted by 'Ω' and pronounced **"Big Omega"**. Big Omega notation defines a **lower bound** for the algorithm: the running time of the algorithm cannot be less than its asymptotic lower bound for any random sequence of data.

Let f(n) and g(n) be two nonnegative functions indicating the running times of two algorithms. We say the function g(n) is a lower bound of the function f(n) if there exist positive constants c and n_{0} such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n_{0}. It is denoted as f(n) = Ω(g(n)).

**Loose bounds:**

All functions with a growth rate *slower* than a function's actual bound are loose lower bounds of that function:

6n + 3 = Ω(1)

3n^{2} + 2n + 4 = Ω(n) = Ω(1)

2n^{3} + 4n + 5 = Ω(n^{2}) = Ω(n) = Ω(1)

**Incorrect bounds:**

All functions with a growth rate *higher* than a function's actual bound are incorrect lower bounds of that function:

23 ≠ Ω(n) ≠ Ω(n^{2}) ≠ Ω(n^{3}) ≠ Ω(n!)

6n + 3 ≠ Ω(n^{2}) ≠ Ω(n^{3}) ≠ Ω(n!)

3n^{2} + 2n + 4 ≠ Ω(n^{3}) ≠ Ω(n!)

2n^{3} + 4n + 5 ≠ Ω(2^{n}) ≠ Ω(n!)

For the function f(n) = 2n^{2} + n + 3:

f(n) = Ω(n^{2}) = Ω(n log n) = Ω(n) = Ω(log n) = Ω(1)

f(n) ≠ Ω(n^{3}) ≠ Ω(2^{n}) ≠ Ω(n!) ≠ Ω(n^{n})

Big Oh and Big Omega notations provide the upper and lower bounds on the running time of the algorithm: the running time cannot be more or less than the respective bound. Big Oh notation provides the worst-case running time, whereas Big Omega provides the best-case running time. The running time of the algorithm cannot be better than Big Omega and cannot be worse than Big Oh.

### Big Theta Notation (Θ)

This notation is denoted by 'Θ' and pronounced **"Big Theta"**. Big Theta notation defines a **tight bound** for the algorithm: the running time of the algorithm can be neither less than nor greater than its asymptotic tight bound for any random sequence of data.

Let f(n) and g(n) be two nonnegative functions indicating the running times of two algorithms. We say the function g(n) is a tight bound of the function f(n) if there exist positive constants c_{1}, c_{2}, and n_{0} such that 0 ≤ c_{1}·g(n) ≤ f(n) ≤ c_{2}·g(n) for all n ≥ n_{0}. It is denoted as f(n) = Θ(g(n)).

For Big Oh, Big Omega, and Big Theta, g(n) is not a single function but a set of functions that grow at least as fast as, at most as fast as, or at the same rate as f(n), respectively, for n ≥ n_{0}.
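The Θ definition can be witnessed numerically for f(n) = 2n² + n + 3 and g(n) = n². The constants c_{1} = 2, c_{2} = 3, n_{0} = 3 below are hand-picked for this example, and the finite check is an illustration rather than a proof:

```python
def f(n): return 2 * n**2 + n + 3
def g(n): return n**2

c1, c2, n0 = 2, 3, 3   # hand-picked constants witnessing the tight bound
# Sandwich check: c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 in the range.
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print("2*n^2 <= f(n) <= 3*n^2 holds for all checked n >= 3")
```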

**Incorrect bounds:**

All functions with a growth rate *lower* or *greater* than a function's actual bound are incorrect tight bounds of that function:

23 ≠ Θ(n) ≠ Θ(n^{2}) ≠ Θ(n^{3}) ≠ Θ(n!)

6n+3 ≠ Θ(1) ≠ Θ(n^{2}) ≠ Θ(n^{3}) ≠ Θ(n!)

3n^{2} + 2n + 4 ≠ Θ(1) ≠ Θ(n) ≠ Θ(n^{3}) ≠ Θ(n!)

2n^{3} + 4n + 5 ≠ Θ(1) ≠ Θ(n) ≠ Θ(n^{2}) ≠ Θ(2^{n}) ≠ Θ(n!)

For the function f(n) = 2n^{2} + n + 3, the only tight bound is f(n) = Θ(n^{2});

f(n) ≠ Θ(1) ≠ Θ(n) ≠ Θ(n^{3}) ≠ Θ(2^{n}) ≠ Θ(n!) ≠ Θ(n^{n})

A loose bound does not exist for the tight bound: Θ identifies the exact order of growth.

## Properties of Asymptotic Notations

Asymptotic notations satisfy the following properties.

### Correlation:

f(n) = Ο(g(n)) → f ≤ g

f(n) = ο(g(n)) → f < g

f(n) = Θ(g(n)) → f = g

f(n) = Ω(g(n)) → f ≥ g

f(n) = ω(g(n)) → f > g

### Reflexivity:

f(n) = Θ(f(n)) = Ο(f(n)) = Ω(f(n))

### Symmetry:

f(n) = Θ(g(n) ) if and only if g(n) = Θ(f(n))

### Transpose symmetry:

f(n) = Ο(g(n)) if and only if g(n) = Ω(f(n))

f(n) = ο(g(n)) if and only if g(n) = ω(f(n))

### Transitivity:

f(n) = Ο(g(n)) and g(n) = Ο(h(n)) ⇒ f(n) = Ο(h(n))

f(n) = Ω(g(n)) and g(n) = Ω(h(n)) ⇒ f(n) = Ω(h(n))

f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))

f(n) = ο(g(n)) and g(n) = ο(h(n)) ⇒ f(n) = ο(h(n))

f(n) = ω(g(n)) and g(n) = ω(h(n)) ⇒ f(n) = ω(h(n))

### Manipulation

c·Ο(f(n)) = Ο(f(n))

Ο(Ο(f(n))) = Ο(f(n))

Ο(f(n))·Ο(g(n)) = Ο(f(n) × g(n))

Ο(f(n) + g(n)) = Ο(max(f(n), g(n)))
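The last rule holds because, for nonnegative functions, f(n) + g(n) never exceeds 2·max(f(n), g(n)), and the factor 2 is dropped as a constant. A quick numeric check, with sample functions chosen arbitrarily:

```python
f = lambda n: n**2       # arbitrary sample functions for the sketch
g = lambda n: 10 * n

# The sum is sandwiched between max(f, g) and 2*max(f, g), so the sum
# and the max have the same order of growth.
assert all(max(f(n), g(n)) <= f(n) + g(n) <= 2 * max(f(n), g(n))
           for n in range(1, 1000))
print("O(f + g) = O(max(f, g)) verified numerically for n in 1..999")
```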

**Additional Reading**: Common Asymptotic Notations.