Asymptotic Notations – Big Oh, Omega, and Theta

What is Asymptotic Analysis?

Asymptotic notation is a mathematical tool for describing the time or space complexity of an algorithm without implementing it in any programming language. The measure is unaffected by machine-specific constants, and it captures the dominant part of the algorithm's cost.

By now, we have already discussed Approaches to efficiency analysis and Framework for efficiency analysis. It's time to move on to the mathematical interpretation of measuring algorithmic complexity.

Machine-specific constants include the machine’s hardware architecture, RAM, supported virtual memory, processor speed, available instruction set (RISC or CISC), and so on.

Asymptotic notation describes the behavior of an algorithm in a way that is not affected by any of these factors.

Finding the minimum element in an array of size n takes at most n − 1 comparisons.

This algorithm's asymptotic complexity is linear (on the order of n). This linear behavior is the primary term in the complexity formula: doubling the size of the array roughly doubles the number of comparisons.
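As a quick illustration (a minimal sketch in Python, not part of the original discussion, with a helper name of our own choosing), the routine below counts the comparisons made while scanning an array for its minimum; doubling the array size roughly doubles the count.

```python
def find_min(arr):
    """Return the minimum element and the number of comparisons performed."""
    minimum = arr[0]
    comparisons = 0
    for x in arr[1:]:
        comparisons += 1          # one comparison per remaining element
        if x < minimum:
            minimum = x
    return minimum, comparisons

# n elements require n - 1 comparisons: doubling n roughly doubles the work.
print(find_min([7, 3, 9, 1])[1])           # 3 comparisons for n = 4
print(find_min(list(range(8, 0, -1)))[1])  # 7 comparisons for n = 8
```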

The primitive operation is the operation that appears most frequently in the algorithm, and it depends on the problem. The primitive operation is the major component of the cost.

As previously explained, the primitive operation for sorting and searching problems is a comparison. The primitive operation for adding arrays or matrices is addition. The primitive operation for the factorial problem is multiplication.
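For instance, a simple factorial routine can report how many times its primitive operation (multiplication) executes. The sketch below is only illustrative, and the helper name factorial_with_count is ours.

```python
def factorial_with_count(n):
    """Compute n! and count multiplications, the primitive operation here."""
    result = 1
    multiplications = 0
    for i in range(2, n + 1):
        result *= i
        multiplications += 1   # one multiplication per loop iteration
    return result, multiplications

print(factorial_with_count(5))   # (120, 4) -> n - 1 multiplications
```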

The order of function growth is critical in evaluating the algorithm’s performance. Assume the running times of two algorithms A and B are f(n) and g(n), respectively.

f(n) = 2n² + 5

g(n) = 10n

Here, n represents the size of the problem, while the polynomials f(n) and g(n) represent the number of basic operations performed by algorithms A and B, respectively. The running times of both functions for different input sizes are shown in the following table:

n       1    2    3    4    5    6    7
f(n)    7   13   23   37   55   77  103
g(n)   10   20   30   40   50   60   70

Algorithm A outperforms algorithm B for small input sizes; however, once the input size becomes sufficiently large (in this example, from n = 5 onward), f(n) always exceeds g(n), that is, algorithm A performs more steps than algorithm B. As a result, understanding the growth rate of functions is critical.
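The table and the crossover can be reproduced with a short script (a sketch; the helper names f and g below simply mirror the formulas above):

```python
# Compare the step counts of algorithm A (f) and algorithm B (g).
def f(n): return 2 * n ** 2 + 5   # algorithm A
def g(n): return 10 * n           # algorithm B

for n in range(1, 8):
    print(n, f(n), g(n))

# First n at which A performs more steps than B (prints 5).
print(next(n for n in range(1, 100) if f(n) > g(n)))
```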

Asymptotic notations describe the function's limiting behavior. For example, if f(n) = 8n² + 4n − 32, then the term 4n − 32 becomes insignificant as n increases. As a result, the n² term determines the growth of f(n).
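A quick numerical check (an illustrative sketch, not from the original text) shows how small the contribution of the lower-order terms becomes as n grows:

```python
# For f(n) = 8n^2 + 4n - 32, the lower-order terms become negligible as n grows.
for n in (10, 100, 1000):
    leading, rest = 8 * n ** 2, 4 * n - 32
    print(n, rest / (leading + rest))   # fraction contributed by 4n - 32
```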

The following assumptions are made when doing complexity analysis.

Assumptions:

  • The actual cost of an operation is not considered.
  • The abstract cost constant c is ignored: O(c·n²) reduces to O(n²).
  • Only the leading term of a polynomial is considered: O(n³ + n) reduces to O(n³).
  • Multiplicative or divisive constants, if any, are dropped: O(2n²) and O(n²/2) both reduce to O(n²).

Types of Asymptotic Notations

Various notations such as Big Oh (Ο), Big Omega (Ω), Big Theta (Θ), Little Oh (ο), and Little Omega (ω) are used to describe the asymptotic running time of an algorithm.

Figure: Types of asymptotic notations

Big Oh Notation (Ο)

This notation is denoted by 'Ο' and is pronounced "Big Oh". Big Oh notation defines an upper bound for the algorithm: the running time of the algorithm cannot be more than its asymptotic upper bound for any sequence of input data.

Let f(n) and g(n) be two nonnegative functions indicating the running times of two algorithms. We say g(n) is an upper bound of f(n) if there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀. This is denoted as f(n) = Ο(g(n)).

In the following figure, the horizontal axis represents the problem size and the vertical axis represents the growth order (steps required to solve a problem of size n) of the functions.

Figure: Big Oh notation

For small input sizes, there may be many crossovers between f(n) and c·g(n), but once n becomes sufficiently large, f(n) always stays at or below c·g(n). This value of n is called the crossover point and is denoted by n₀.
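The definition can be spot-checked for a particular pair of witness constants. The sketch below, using our own example values c = 3 and n₀ = 3, verifies over a finite range that f(n) = 2n² + n + 3 is O(n²):

```python
# Check 0 <= f(n) <= c * g(n) for all n >= n0 (tested over a finite range).
def f(n): return 2 * n ** 2 + n + 3
def g(n): return n ** 2

c, n0 = 3, 3
assert all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000))
print("f(n) = O(n^2) witnessed by c = 3, n0 = 3")
```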

Loose bounds:

All functions with a growth rate higher than the actual bound of a function are loose upper bounds of that function. For example,

23 = O(n) = O(n²) = O(n³) = O(n!)

6n + 3 = O(n²) = O(n³) = O(n!)

3n² + 2n + 4 = O(n³) = O(n!)

2n³ + 4n + 5 = O(2ⁿ) = O(n!)

Incorrect bounds:

All functions with a growth rate lower than the actual bound of a function are incorrect upper bounds of that function. For example,

6n + 3 ≠ O(1)

3n² + 2n + 4 ≠ O(n) ≠ O(1)

2n³ + 4n + 5 ≠ O(n²) ≠ O(n) ≠ O(1)

For the function f(n) = 2n² + n + 3,

f(n) = O(n²) = O(n³) = O(2ⁿ) = O(n!) = O(nⁿ)

f(n) ≠ O(n log n) ≠ O(n) ≠ O(log n) ≠ O(1)

Big Omega Notation (Ω)

This notation is denoted by 'Ω' and is pronounced "Big Omega". Big Omega notation defines a lower bound for the algorithm: the running time of the algorithm cannot be less than its asymptotic lower bound for any sequence of input data.

Figure: Big Omega notation

Let f(n) and g(n) be two nonnegative functions indicating the running times of two algorithms. We say g(n) is a lower bound of f(n) if there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀. This is denoted as f(n) = Ω(g(n)).
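A lower-bound witness can be spot-checked in the same way; for example, f(n) = 2n² + n + 3 is Ω(n²) with c = 2 and n₀ = 1 (again an illustrative finite-range check, not a proof):

```python
# Check 0 <= c * g(n) <= f(n) for all n >= n0 (tested over a finite range).
def f(n): return 2 * n ** 2 + n + 3
def g(n): return n ** 2

c, n0 = 2, 1
assert all(0 <= c * g(n) <= f(n) for n in range(n0, 10_000))
print("f(n) = Omega(n^2) witnessed by c = 2, n0 = 1")
```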

Loose bounds:

All functions with a growth rate slower than the actual bound of a function are loose lower bounds of that function. For example,

6n + 3 = Ω(1)

3n² + 2n + 4 = Ω(n) = Ω(1)

2n³ + 4n + 5 = Ω(n²) = Ω(n) = Ω(1)

Incorrect bounds:

All functions with a growth rate higher than the actual bound of a function are incorrect lower bounds of that function. For example,

23 ≠ Ω(n) ≠ Ω(n²) ≠ Ω(n³) ≠ Ω(n!)

6n + 3 ≠ Ω(n²) ≠ Ω(n³) ≠ Ω(n!)

3n² + 2n + 4 ≠ Ω(n³) ≠ Ω(n!)

2n³ + 4n + 5 ≠ Ω(2ⁿ) ≠ Ω(n!)

For the function f(n) = 2n² + n + 3,

f(n) = Ω(n²) = Ω(n log n) = Ω(n) = Ω(log n) = Ω(1)

f(n) ≠ Ω(n³) ≠ Ω(2ⁿ) ≠ Ω(n!) ≠ Ω(nⁿ)

Big Oh and Big Omega notations provide the upper and lower bounds on the running time of the algorithm. These measures tell us that the running time of an algorithm cannot be more or less than the respective bounds. Big Oh notation is associated with the worst-case running time, whereas Big Omega is associated with the best-case running time of the algorithm: the running time cannot be better than the Big Omega bound, and it cannot be worse than the Big Oh bound.

Big Theta Notation (Θ)

This notation is denoted by 'Θ' and is pronounced "Big Theta". Big Theta notation defines a tight bound for the algorithm: the running time of the algorithm cannot be less than or greater than its asymptotic tight bound for any sequence of input data.

Figure: Big Theta notation

Let f(n) and g(n) be two nonnegative functions indicating the running times of two algorithms. We say g(n) is a tight bound of f(n) if there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀. This is denoted as f(n) = Θ(g(n)).

For Big Oh, Big Omega, and Big Theta, g(n) is not a single function but a set of functions that grow at least as fast as, at most as fast as, or at the same rate as f(n), respectively, for n ≥ n₀.
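Combining the two previous checks gives a sandwich test for Θ. For f(n) = 2n² + n + 3, the constants c₁ = 2, c₂ = 3, and n₀ = 3 work (an illustrative finite-range check):

```python
# Check c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0 (finite-range spot check).
def f(n): return 2 * n ** 2 + n + 3
def g(n): return n ** 2

c1, c2, n0 = 2, 3, 3
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print("f(n) = Theta(n^2) witnessed by c1 = 2, c2 = 3, n0 = 3")
```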

Incorrect bounds:

All functions with a growth rate lower or higher than the actual bound of a function are incorrect tight bounds of that function. For example,

23 ≠ Θ(n) ≠ Θ(n²) ≠ Θ(n³) ≠ Θ(n!)

6n + 3 ≠ Θ(1) ≠ Θ(n²) ≠ Θ(n³) ≠ Θ(n!)

3n² + 2n + 4 ≠ Θ(1) ≠ Θ(n) ≠ Θ(n³) ≠ Θ(n!)

2n³ + 4n + 5 ≠ Θ(1) ≠ Θ(n) ≠ Θ(n²) ≠ Θ(2ⁿ) ≠ Θ(n!)

For the function f(n) = 2n² + n + 3, the only tight bound is Θ(n²);

f(n) ≠ Θ(1) ≠ Θ(n) ≠ Θ(n³) ≠ Θ(2ⁿ) ≠ Θ(n!) ≠ Θ(nⁿ)

A loose bound does not exist for the tight (Theta) bound.


Suggested Reading: Examples on Asymptotic Notations


Properties of Asymptotic Notations

Asymptotic notations satisfy the following properties:

Correlation:

f(n) = Ο(g(n)) ⇒ f ≤ g

f(n) = ο(g(n)) ⇒ f < g

f(n) = Θ(g(n)) ⇒ f = g

f(n) = Ω(g(n)) ⇒ f ≥ g

f(n) = ω(g(n)) ⇒ f > g

Reflexivity:

f(n) = Θ(f(n)) = Ο(f(n)) = Ω(f(n))

Symmetry:

f(n) = Θ(g(n) ) if and only if g(n) = Θ(f(n))

Transpose symmetry:

f(n) = Ο(g(n)) if and only if g(n) = Ω(f(n))                

f(n) = ο(g(n)) if and only if g(n) = ω(f(n))

Transitivity:

f(n) = Ο(g(n)) and g(n) = Ο(h(n)) ⇒ f(n) = Ο(h(n))

f(n) = Ω(g(n)) and g(n) = Ω(h(n)) ⇒ f(n) = Ω(h(n))

f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))

f(n) = ο(g(n)) and g(n) = ο(h(n)) ⇒ f(n) = ο(h(n))

f(n) = ω(g(n)) and g(n) = ω(h(n)) ⇒ f(n) = ω(h(n))

Manipulation:

c·Ο(f(n)) = Ο(f(n))

Ο(Ο(f(n))) = Ο(f(n))

Ο(f(n))·Ο(g(n)) = Ο(f(n) × g(n))

Ο(f(n) + g(n)) = Ο(max(f(n), g(n)))
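The last rule is what lets us analyze code blocks executed one after another: the faster-growing part dominates the total. A hypothetical sketch:

```python
# An O(n) loop followed by an O(n^2) pair of nested loops:
# total work is O(n + n^2) = O(max(n, n^2)) = O(n^2).
def process(items):
    total = 0
    for x in items:              # O(n)
        total += x
    for x in items:              # O(n^2)
        for y in items:
            total += x * y
    return total
```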


Additional Reading: Common Asymptotic Notations
