Efficiency Analysis Framework of Algorithms

Efficiency analysis of an algorithm is the process of determining the resources the algorithm requires. It helps us choose the best algorithm from a set of candidates. Resources may be time, space, power consumption, communication bandwidth, computer hardware, etc.

The running time of an algorithm grows with the problem size n. The amount of resources required may grow in linear, quadratic, factorial, or some other order.

We don’t worry much about performance when solving a problem for a small number of instances; we could simply select the algorithm that is easiest to program. However, if a problem must be solved repeatedly or for large inputs, efficiency analysis becomes important.

The amount of time and space required to solve the problem is referred to as the algorithm’s time and space complexity, respectively.

The question is, how do you choose the best algorithm? We must compare the performance of all potential algorithms; typically, the one that takes the least amount of time is the candidate of choice.

Most of the time, users are less interested in space complexity because memory is no longer a major restriction and keeps getting cheaper. As a result, researchers are primarily concerned with reducing time complexity.

The components of the efficiency analysis framework are shown below:

[Figure: Efficiency analysis framework]

Space Complexity

Space complexity is a very important notion in efficiency analysis.

Solving a problem on a computer requires memory to hold temporary data or the final result while the program executes. The amount of memory the algorithm needs to solve a given problem is called the space complexity of the algorithm.

[Figure: Controlling parameters for space complexity]

The space complexity of the algorithm is controlled by two components:

Fixed-size components: the parts of the program whose memory requirement does not change with the problem being solved. For example,

  • Instructions
  • Variables
  • Constants
  • Arrays

Variable-size components: the parts of the program whose size depends on the problem being solved. For example,

  • Size of loop
  • Stack required to handle a recursive call
  • Dynamic data structures like a linked list

To denote the space complexity of a problem with input size n, we use the notation S(n), where n refers to the size of the input, i.e. the size of the problem.

The following examples demonstrate the concept of space complexity:

Example 1 : Addition of two scalar variables

Algorithm ADD_SCALAR(A, B)
// Description : Perform arithmetic addition of two numbers
// Input : Two scalar variables A and B
// Output : variable C, which holds the addition of A and B

C ← A + B
return C

The addition of two scalar numbers requires one extra memory location to hold the result. Thus the space complexity of this algorithm is constant, hence S(n) = O(1).
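For concreteness, here is a minimal runnable Python version of this pseudocode (the function name add_scalar is our own):

def add_scalar(a, b):
    # One extra memory location (c) holds the result, so S(n) = O(1)
    c = a + b
    return c

print(add_scalar(3, 4))  # prints 7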

Example 2 : Addition of two arrays

Algorithm ADD_ARRAY(A, B)
// Description : Perform element-wise arithmetic addition of two arrays
// Input : Two number arrays A and B
// Output : Array C holding the element wise sum of array A and B

for i ← 1 to n do
  C[i] ← A[i] + B[i]
end

return C

Adding corresponding elements of two arrays, each of size n, requires n extra memory locations to hold the result. As the input size n increases, the space required for the result grows linearly. Thus the space complexity of the above code segment is S(n) = O(n).
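A minimal Python sketch of the same algorithm (the names are our own) makes the linear extra space visible:

def add_array(A, B):
    n = len(A)
    C = [0] * n            # n extra memory locations for the result: S(n) = O(n)
    for i in range(n):
        C[i] = A[i] + B[i]
    return C

print(add_array([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]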

Example 3: Sum of elements of an array

Algorithm SUM_ARRAY_ELEMENTS(A)
// Description : Add all elements of array A
// Input : Array A of size n
// Output : Variable Sum which holds the addition of array elements

Sum ← 0
for i ← 1 to n do
  Sum ← Sum + A[i]
end

return Sum

The addition of all array elements requires only one extra variable (i.e. one memory location), denoted Sum. This is independent of the array size. Thus the space complexity of the algorithm is constant: S(n) = O(1).
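A Python sketch of this algorithm (names are our own) shows that the extra space stays constant:

def sum_array_elements(A):
    total = 0              # a single extra variable, regardless of len(A): S(n) = O(1)
    for x in A:
        total += x
    return total

print(sum_array_elements([1, 2, 3, 4]))  # 10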

Time Complexity

The time complexity of an algorithm often determines its quality; it is the most essential component of the efficiency analysis framework.

A valid algorithm executes in a finite amount of time. The time complexity of an algorithm is how long it takes to solve a given problem. In algorithm analysis, time complexity is a very useful measure.

Example 1: Addition of two scalar variables

Algorithm ADD_SCALAR(A, B)
// Description : Perform arithmetic addition of two numbers
// Input : Two scalar variables A and B
// Output : variable C, which holds the addition of A and B

C ← A + B
return C

The sum of two scalar numbers requires one addition operation. Thus the time complexity of this algorithm is constant, so T(n) = O(1).

Example 2: Perform addition of two arrays

Algorithm ADD_ARRAY(A, B)
// Description : Perform element-wise arithmetic addition of two arrays
// Input : Two number arrays A and B of length n
// Output : Array C holding the element-wise sum of array A and B

for i ← 1 to n do	// one initialization, n increments, n comparisons
  C[i] ← A[i] + B[i]	// n additions and n assignments
end

return C

As seen in the above code, adding the array elements requires iterating the loop n times. The variable i is initialized once, the condition relating i and n is checked n times, and i is incremented n times. The loop body performs an addition and an assignment n times.

Thus the total time of the algorithm is measured as

T(n) = 1 (initialization) + n × (comparison + increment + addition + assignment)
     = 1 + 4n

When performing an efficiency analysis, we are interested in the order of complexity in terms of the input size n, that is, in how the number of steps grows with n. Multiplicative and additive constants are therefore dropped. Hence, for this algorithm, T(n) = O(n).
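A quick way to verify the 1 + 4n count is to instrument a while-loop version of the algorithm and tally each kind of operation (this counting harness and its names are our own; the final failing loop test is ignored, matching the count above):

def add_array_counted(A, B):
    n = len(A)
    C = [0] * n
    ops = {"init": 1, "cmp": 0, "add": 0, "assign": 0, "inc": 0}  # i initialized once
    i = 1
    while i <= n:
        ops["cmp"] += 1                    # n successful loop tests
        C[i - 1] = A[i - 1] + B[i - 1]
        ops["add"] += 1                    # n additions
        ops["assign"] += 1                 # n assignments
        i += 1
        ops["inc"] += 1                    # n increments
    return C, ops

C, ops = add_array_counted([1, 2, 3], [4, 5, 6])
print(sum(ops.values()))   # 1 + 4n = 13 for n = 3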

Example 3: Sum of elements of an array

Algorithm SUM_ARRAY_ELEMENTS(A)
// Description : Add all elements of array A
// Input : Array A of size n
// Output : Variable Sum which holds the addition of array elements

Sum ← 0
for i ← 1 to n do
  Sum ← Sum + A[i]
end

return Sum

Adding all array elements requires n additions (we omit the comparisons, assignments, initialization, etc. to avoid the multiplicative and additive constants). The number of additions is determined by the size of the array and grows linearly with the input size. As a result, the time complexity of the above code is T(n) = O(n).
The following table compares the time and space complexity of the problems discussed.

Problem            | Space complexity S(n) | Time complexity T(n)
Add scalars        | O(1)                  | O(1)
Add two arrays     | O(n)                  | O(n)
Add array elements | O(1)                  | O(n)

Measuring Input Size

The algorithm’s execution time is mostly determined by the size of the input: for larger inputs, the algorithm runs longer. As a result, the complexity of an algorithm is always assessed in terms of input size. Measured this way, the complexity is independent of hardware architecture, available memory, CPU speed, and so forth.

Some algorithms require prior knowledge of the input size. For example, to perform a binary search we must first know the size of the array. Other applications, such as counting the number of nodes in a linked list, do not require the size in advance.
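For example, a Python sketch of binary search (our own code, assuming the array is sorted) shows that the array size must be known before the search can even start:

def binary_search(A, key):
    low, high = 0, len(A) - 1      # the size of A is needed up front
    while low <= high:
        mid = (low + high) // 2
        if A[mid] == key:
            return mid             # index of key
        elif A[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return -1                      # key not present

print(binary_search([2, 4, 7, 9, 12], 9))  # 3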

Measuring Running Time

As stated previously, the running time of an algorithm is primarily determined by the size of the input and, consequently, by the number of frequent operations the algorithm executes. The running time is not measured as the real time consumed by the algorithm in seconds or minutes, but as a function of the number of primitive/frequent operations. The choice of primitive operation is another major factor in the efficiency analysis of an algorithm.

The primitive operation varies with the problem. For example:

Problem                              | Primitive operation
Sorting                              | Comparison
Searching                            | Comparison
Matrix multiplication                | Multiplication
Adding two arrays                    | Addition
Counting the nodes of a linked list  | Addition
Inserting an element into a tree     | Comparison
Finding the greatest common divisor  | Division

Running time is commonly denoted T(n) and is calculated by counting the frequent operations. T(n) is the time the algorithm takes to solve a problem of size n.

An algorithm’s approximate running time is calculated as follows:

[Figure: Computing the running time of an algorithm]
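As a rough sketch of that idea, if c_op is the assumed execution time of one primitive operation and C(n) is how many times it runs, then T(n) ≈ c_op × C(n). Both names below are our own, and the figure of one nanosecond per operation is only an assumption:

c_op = 1e-9                 # assumed cost of one primitive operation, in seconds

def C(n):
    return 4 * n + 1        # operation count of the array-addition example above

for n in (10**3, 10**6, 10**9):
    print(n, c_op * C(n), "seconds (approx.)")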

Order of Growth

The efficiency of an algorithm is expressed in terms of the input size n. The relationship between the input size and the performance of the algorithm is called the order of growth.

The order of growth reflects how rapidly the time required by the algorithm rises as the size of the input increases.

For input size n, the algorithm may run a number of steps on the order of log n, n, n², n³, or something else.

The common efficiency classes are listed in the following table.

Efficiency class | Order of growth | Example
Constant         | 1               | Delete the first node of a linked list; remove the largest element from a max heap; add two numbers
Logarithmic      | log n           | Binary search; insert/delete an element in a binary search tree
Linearithmic     | n log n         | Merge sort; quicksort; heap sort
Linear           | n               | Linear search; insert a node at the end of a linked list; find the minimum/maximum element of an array
Quadratic        | n²              | Selection sort; bubble sort; read a 2D array; find the maximum element of a 2D matrix
Cubic            | n³              | Matrix multiplication
Exponential      | 2ⁿ              | Find the power set of a set; find the optimal solution to the knapsack problem; solve TSP with dynamic programming
Factorial        | n!              | Generate all permutations of a set; solve TSP by brute force

Efficiency classes are sorted as :

O(1) < O(log log n) < O(log n) < O((log n)ᵏ) < O(n) < O(n log n) < O(n (log n)ᵏ) < O(n²) < O(n³) < O(2ⁿ) < O(n!).

These are the most commonly used classes; there are many additional classes that reflect intermediate orders of growth. Algorithms whose running time falls near the top of the list are considered better. Algorithms with exponential or factorial running times are unsuitable for practical use.

The following table shows the effect of input size on the growth rate.

Input size n | Constant (1) | Logarithmic (log n) | Linear (n) | Quadratic (n²) | Cubic (n³) | Exponential (2ⁿ)
1            | 1            | 0                   | 1          | 1              | 1          | 2
2            | 1            | 1                   | 2          | 4              | 8          | 4
4            | 1            | 2                   | 4          | 16             | 64         | 16
8            | 1            | 3                   | 8          | 64             | 512        | 256
16           | 1            | 4                   | 16         | 256            | 4096       | 65536
32           | 1            | 5                   | 32         | 1024           | 32768      | 4294967296
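The table values can be reproduced with a few lines of Python (a sketch of our own, using log base 2):

import math

print(f"{'n':>3} {'1':>2} {'log n':>6} {'n':>3} {'n^2':>5} {'n^3':>6} {'2^n':>11}")
for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:>3} {1:>2} {int(math.log2(n)):>6} {n:>3} {n**2:>5} {n**3:>6} {2**n:>11}")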

The following figure depicts the relationship among the efficiency classes. A logarithmic function grows much more slowly than a linear or quadratic function. Similar observations can be made for the other efficiency classes.

Functions such as 2ⁿ, n!, or nⁿ grow extremely fast. Even for small problem sizes (i.e. small n), such algorithms need a large amount of resources.

The figure depicts a graphical comparison of several efficiency classes.

[Figure: Relation between complexity classes]

Cases of Complexity

An algorithm’s running time may differ even for the same data structure and the same set of data values.

With different permutations of the input data, the number of steps necessary to solve the problem, and hence the running time, may change. If we want to sort two values and the input is <1, 2>, no swap is needed; if the input sequence is <2, 1>, a swap is required. The organization of the data is equally critical. Based on this observation, we distinguish three cases of complexity analysis.

Let T(n) denote the amount of time necessary to solve a problem of size n. We will look at all three cases for linear search. The linear search method is given below:

Algorithm LINEAR_SEARCH(A, Key)
// Description : Search element ‘Key’ in array A
// Input : Array A of size n, and the element to be searched i.e. Key
// Output : Status: Success / failure

flag ← 0
for i ← 1 to n do
  if A[i] == Key then
    flag ← 1
    break
  end
end

if flag == 1 then
  print “Key is found at location i”
else
  print “Key is not present”
end
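A runnable Python version of the same search (our own translation, reporting the 1-based position as the pseudocode does):

def linear_search(A, key):
    for i, value in enumerate(A, start=1):
        if value == key:
            print(f"Key is found at location {i}")
            return i
    print("Key is not present")
    return -1

linear_search([7, 3, 9, 5], 9)   # Key is found at location 3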
[Figure: Cases of complexity]

Best case analysis:

If an algorithm takes the minimum time to solve the problem for a given set of inputs, that is called its best-case time complexity.

Assume we wish to remove a node from a linked list of n nodes. First, we must locate the node to be removed from the list. If the node is at the very first place, only one comparison is necessary.

We need to update one link and then release the node. This procedure is independent of list size, which implies it may be executed in a fixed amount of time regardless of input size. As a result, the best-case running time for removing the first node from the list is constant, T(n) = O(1).
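A sketch of this constant-time deletion (the Node class and delete_first are our own minimal versions):

class Node:
    def __init__(self, data, next_node=None):
        self.data = data
        self.next = next_node

def delete_first(head):
    # One link update, independent of the list length: T(n) = O(1)
    if head is None:
        return None
    return head.next            # the new head; the old first node is released

head = Node(1, Node(2, Node(3)))
head = delete_first(head)
print(head.data)                # 2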

Worst-case analysis:

If an algorithm takes the maximum time to solve the problem for a given set of inputs, that is called its worst-case time complexity. The worst case is of special interest because it provides an upper bound, a guarantee, on the running time.

If the element to be removed is in the last position of the linked list, we must compare each node’s value to the key element. In this situation, we make n comparisons.

The time required for deletion and link updating is constant. As the size of the list grows, so does the number of comparisons. The number of comparisons is proportional to the size of the problem (i.e. list size).

When an element is either at the end or not there at all, the algorithm does n comparisons.

As a result, the worst-case running time for this problem is linear, i.e. T(n) = O(n).

Average case analysis:

An input sequence that is neither the best nor the worst case is called an average case. Average-case analysis represents the general behavior of an algorithm.

The average case arises for a random sequence of input data. In the preceding example, an average case occurs when the node to be removed is neither in the first nor in the last position.

On average, the program searches 50% of the list, performing roughly n/2 comparisons. Again, this is linear growth, so the average-case running time of the algorithm is T(n) = O(n/2) = O(n).

Average-case analysis is more than just averaging the best and worst cases; the average case is determined by the distribution of the input data.
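The three cases can be observed directly by counting comparisons in the linear search (count_comparisons is our own helper):

def count_comparisons(A, key):
    comparisons = 0
    for value in A:
        comparisons += 1
        if value == key:
            break
    return comparisons

A = list(range(1, 101))            # n = 100
print(count_comparisons(A, 1))     # best case: 1 comparison
print(count_comparisons(A, 100))   # worst case: n = 100 comparisons
print(count_comparisons(A, 50))    # typical case: about n/2 comparisons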


Additional Reading: Algorithmic Efficiency analysis [WiKi]
