Algorithms: Question Set – 11

Define Big Omega notation.

A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that t(n) ≥ c * g(n) for all n ≥ n0.
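
For instance, t(n) = n³ is in Ω(n²): since n³ ≥ 1 * n² for all n ≥ 1, the definition is satisfied with c = 1 and n0 = 1.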

Analyze the time complexity of the following segment:

for (i = 0; i < N; i++)            /* outer loop: N iterations */
	for (j = N / 2; j > 0; j--)    /* inner loop: N/2 iterations */
		sum++;                     /* basic operation, executed N * (N/2) times */

Time complexity = N * (N/2) = N² / 2 ∈ O(N²)

Write a general plan for analyzing recursive algorithms.

  • Decide on a parameter indicating an input’s size.
  • Identify the algorithm’s basic operation
  • Check whether the number of times the basic operation is executed depends only on the size of the input; if it also depends on some additional property, then the worst-case, average-case, and best-case efficiencies must be investigated separately.
  • Set up the recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed
  • Solve the recurrence, or at least establish the order of growth of its solution (a worked example follows this list).
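
A minimal worked example of this plan (a hypothetical illustration, computing n! recursively): the input-size parameter is n, the basic operation is the multiplication, and the recurrence is M(n) = M(n - 1) + 1 for n > 0 with M(0) = 0, whose solution is M(n) = n, i.e., Θ(n).

#include <stdio.h>

unsigned long long factorial(unsigned int n)
{
    if (n == 0)                      /* initial condition of the recurrence: M(0) = 0 multiplications */
        return 1;
    return n * factorial(n - 1);     /* one multiplication per recursive call: M(n) = M(n - 1) + 1 */
}

int main(void)
{
    printf("5! = %llu\n", factorial(5));   /* prints 120 */
    return 0;
}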

Define Big Theta notation.

A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that c1 * g(n) ≤ t(n) ≤ c2 * g(n) for all n ≥ n0.
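
For instance, t(n) = n(n - 1)/2 is in Θ(n²): for all n ≥ 2, (1/4)n² ≤ n(n - 1)/2 ≤ (1/2)n², so the definition is satisfied with c1 = 1/4, c2 = 1/2, and n0 = 2.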

What is the running time of a program implementing the algorithm?

  • The running time T(n) is given by the following formula: T(n) ≈ cop * C(n).
  • cop is the execution time of the algorithm's basic operation on a particular computer, and C(n) is the number of times this operation needs to be executed for the particular algorithm (see the worked example below).
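
For instance, assuming purely for illustration that an algorithm performs C(n) = n(n - 1)/2 ≈ (1/2)n² basic operations, the formula predicts how much longer it will run on an input twice as large: T(2n)/T(n) ≈ C(2n)/C(n) ≈ ((1/2)(2n)²)/((1/2)n²) = 4, i.e., roughly four times longer; note that the multiplicative constant cop cancels out.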

Mention a useful property that can be applied to the asymptotic notations, and state its use.

If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}). The analogous property also holds for the Ω and Θ notations. This property is useful in analyzing algorithms that consist of two consecutively executed parts, as illustrated below.
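
As a sketch of how the property is used (the function and its task below are hypothetical, chosen only for illustration): the algorithm consists of two consecutively executed parts, the first in O(n) and the second in O(n²), so the whole function is in O(max{n, n²}) = O(n²).

#include <stdbool.h>
#include <stddef.h>

bool has_duplicate_above_average(const int a[], size_t n)
{
    /* Part 1: compute the average of the elements -- t1(n) is in O(n). */
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i];
    double average = (n > 0) ? sum / (double)n : 0.0;

    /* Part 2: look for two equal elements larger than the average -- t2(n) is in O(n^2). */
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] == a[j] && (double)a[i] > average)
                return true;

    return false;
}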

Define dynamic programming.

  • When the answer to the problem can be seen as the outcome of a series of decisions, dynamic programming can be utilised as a tool in the algorithm design process.
  • Dynamic programming helps when many interrelated (overlapping) subproblems need to be solved.
  • The original problem can be solved by first finding the solutions to the subproblems that arise from the recurrence and storing the results in a table (see the sketch after this list).
  • US mathematician Richard Bellman came up with the concept in the 1950s.
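
A minimal sketch of this idea, using the Fibonacci numbers as a (hypothetical) example problem: the recurrence F(i) = F(i - 1) + F(i - 2) produces overlapping subproblems, and their solutions are stored in a table so that each one is computed only once.

#include <stdio.h>

/* Bottom-up dynamic programming; assumes n <= 93 so the result fits in 64 bits. */
unsigned long long fibonacci(unsigned int n)
{
    unsigned long long table[94];                 /* table of subproblem solutions F(0)..F(93) */
    table[0] = 0;
    table[1] = 1;
    for (unsigned int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];   /* reuse the stored results of smaller subproblems */
    return table[n];
}

int main(void)
{
    printf("F(10) = %llu\n", fibonacci(10));      /* prints 55 */
    return 0;
}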

What is the difference between quicksort and mergesort?

  • The divide-and-conquer strategy is utilised by both quicksort and mergesort.
  • This strategy partitions the given array into subarrays, which are then solved separately.
  • The method that is used to partition the arrays is what sets these two approaches apart.
  • The arrays are partitioned differently by the two algorithms: mergesort partitions them according to the elements' positions (splitting at the midpoint), whereas quicksort partitions them according to the element values (see the sketch below).
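
A minimal sketch of the two partitioning steps (the helper names are hypothetical, and these are not full sort implementations):

#include <stddef.h>

/* Mergesort: the split point is chosen purely by position (the midpoint);
   the element values are not examined, and the real work happens later,
   when the two sorted halves are merged. */
size_t mergesort_split(size_t lo, size_t hi)
{
    return lo + (hi - lo) / 2;
}

/* Quicksort: the split depends on the element values.  This Lomuto-style
   partition rearranges a[lo..hi] around the pivot a[hi] and returns the
   pivot's final index; no merging is needed afterwards. */
size_t quicksort_partition(int a[], size_t lo, size_t hi)
{
    int pivot = a[hi];
    size_t i = lo;
    for (size_t j = lo; j < hi; j++) {
        if (a[j] < pivot) {                       /* move smaller elements to the left side */
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++;
        }
    }
    int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;    /* place the pivot at its final position */
    return i;
}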

Define binary search.

  • When searching in a sorted array, the binary search method is exceptionally effective and efficient.
  • A search key K is compared with the element A[m] located in the middle of the array. If they are identical, the algorithm terminates; otherwise, the same operation is performed recursively on the first half of the array if K < A[m], and on the second half if K > A[m].
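
A minimal iterative sketch of this search (the function name and signature are assumptions made for illustration): it returns the index of the key in the sorted array a[0..n-1], or -1 if the key is absent.

#include <stddef.h>

long binary_search(const int a[], size_t n, int key)
{
    size_t lo = 0, hi = n;                 /* the key can only lie in a[lo..hi-1] */
    while (lo < hi) {
        size_t m = lo + (hi - lo) / 2;     /* index of the middle element A[m]    */
        if (a[m] == key)
            return (long)m;                /* identical: the search ends          */
        else if (key < a[m])
            hi = m;                        /* continue in the first half          */
        else
            lo = m + 1;                    /* continue in the second half         */
    }
    return -1;                             /* the key is not in the array         */
}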