In this section, we’ll look at a fascinating problem-solving strategy known as divide and conquer. The approach divides a large problem into smaller subproblems, and the solution to the original problem is obtained by combining the solutions to those subproblems. Such algorithms are ideal candidates for parallelization. We will evaluate and compare the performance of conventional solutions and divide and conquer solutions to several problems; the divide and conquer approach often easily outperforms the conventional one.

## General Strategy for Divide and Conquer

A divide and conquer algorithm operates in three stages:

- **Divide:** Divide the problem recursively into smaller subproblems.
- **Solve:** Solve the subproblems independently.
- **Combine:** Combine the subproblem solutions to deduce the answer to the original large problem.

Because subproblems are identical to the main problem but have smaller parameters, they can be readily solved using recursion. When a subproblem is reduced to its smallest feasible size, it is solved directly, and the results are combined recursively to produce a solution to the original larger problem.

Divide and conquer is a top-down, multi-branched recursive method. Each branch represents a subproblem and calls itself with a smaller argument. Understanding and developing divide and conquer algorithms requires expertise and sound reasoning.

The divide and conquer approach is depicted graphically in the following figure. Note that subproblems need not be exactly of size n/2.
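As a minimal illustration of the three stages (the summation problem itself is chosen here only for simplicity), the pattern can be sketched in Python:

```python
def dc_sum(a):
    """Sum a list using the three divide and conquer stages."""
    # Base case: a problem of the smallest feasible size is solved directly.
    if len(a) <= 1:
        return a[0] if a else 0
    # Divide: split the problem into two smaller subproblems.
    mid = len(a) // 2
    # Solve: each subproblem is solved independently by recursion.
    left, right = dc_sum(a[:mid]), dc_sum(a[mid:])
    # Combine: merge the subproblem answers into the answer for the whole problem.
    return left + right

print(dc_sum([3, 1, 4, 1, 5, 9, 2, 6]))  # 31
```

Because the two recursive calls never touch each other's data, they could run in parallel, which is what makes this strategy attractive for parallelization.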

## Applications of Divide and Conquer Approach

Many computer science problems are solved effectively using divide and conquer. A few of them are listed here:

- Computing the exponentiation of a number
- Multiplying large numbers
- Multiplying matrices (Strassen’s algorithm)
- Sorting data
- Searching element from the list (Binary search)
- Discrete Fourier Transform
- Closest pair problem
- Max-min problem

## Control Abstraction

As previously mentioned, the divide and conquer strategy operates in three stages:

- Divide the problem recursively into smaller subproblems.
- Subproblems are solved independently.
- Combine subproblem solutions to arrive at the answer to the original large problem.

The control abstraction for the divide and conquer (DC) strategy is as follows:

**Algorithm** DC(P)
// P is the problem to be solved
**if** P is small enough **then**
return Solution of P
**else**
**divide** larger problem P into k smaller subproblems P1, P2, …, Pk
**solve** each small subproblem Pi using the DC strategy
**return** combined solution (DC(P1), DC(P2), …, DC(Pk))
**end**
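The abstraction maps directly onto concrete algorithms. As one sketch (merge sort is used here purely as an illustration), the roles of divide, solve, and combine can be marked in Python:

```python
def merge_sort(p):
    """Concrete instance of DC(P), where P is a list to be sorted."""
    # "P is small enough": a list of 0 or 1 elements is already sorted.
    if len(p) <= 1:
        return p
    # divide: split P into k = 2 smaller subproblems P1 and P2
    mid = len(p) // 2
    # solve: each subproblem Pi is solved using the same DC strategy
    p1, p2 = merge_sort(p[:mid]), merge_sort(p[mid:])
    # combine: merge the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] <= p2[j]:
            merged.append(p1[i]); i += 1
        else:
            merged.append(p2[j]); j += 1
    return merged + p1[i:] + p2[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```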

Because each subproblem in divide and conquer is independent, the subproblems can be solved in parallel. If we create subproblems of size n/b, and the cost of division and combination is f(n), the time complexity of such a problem is given by a recurrence, analyzed in the next section.

## Efficiency Analysis of Divide and Conquer Approach

In general form, the time complexity of the problem solved using the divide and conquer approach is given by following recurrence equation:

T(n) = g(n), if n is small enough to be solved directly

T(n) = T(n_{1}) + T(n_{2}) + … + T(n_{k}) + f(n), otherwise

T(n) is the total amount of time required to solve a problem of size n. The cost of solving a very small problem is denoted by g(n). It denotes the complexity of resolving the base case. T(n_{i}) represents the cost of solving a subproblem of size n_{i}. The time necessary to split the problem and combine the solutions of subproblems is represented by the function f(n).

The generalized recurrence for this strategy is written as T(n) = a.T(n/b) + f(n), where **a** is the number of subproblems, **n/b** is the size of each subproblem, and **f(n)** is the cost of division and combination. Thus,

T(n) = a.T(n/b) + f(n)
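As a quick sanity check of this recurrence, consider the illustrative instance a = 2, b = 2, f(n) = n with T(1) = 1 (the shape of the merge sort recurrence); for n a power of two it unrolls to the closed form n log₂ n + n, which a short Python script confirms:

```python
import math
from functools import lru_cache

# Illustrative instance of T(n) = a.T(n/b) + f(n):
# a = 2 subproblems, each of size n/2 (b = 2), f(n) = n, T(1) = 1.
@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2^k the recurrence unrolls to T(n) = n*log2(n) + n.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * int(math.log2(n)) + n

print(T(1024))  # 11264
```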

Assume that we have at least one subproblem, so a ≥ 1, and that each subproblem has size n/b for some b > 1.

Assume n = b^{k}, where k = 1, 2, 3, 4, …

T(b^{k}) = a.T(b^{k}/b) + f(b^{k})

= a.T(b^{k – 1}) + f(b^{k})          …(1)

To compute T(b^{k – 1}), replace k by k – 1 in Equation (1).

T(b^{k – 1}) = a.T(b^{k – 2}) + f(b^{k – 1})

⇒ T(b^{k}) = a[a.T(b^{k – 2}) + f(b^{k – 1})] + f(b^{k})

= a^{2}.T(b^{k – 2}) + a.f(b^{k – 1}) + f(b^{k})          …(2)

By substituting k = k – 2 in Equation (1),

T(b^{k – 2}) = a.T(b^{k – 3}) + f(b^{k – 2})

Substituting T(b^{k – 2}) in Equation (2),

T(b^{k}) = a^{2}[a.T(b^{k – 3}) + f(b^{k – 2})] + a.f(b^{k – 1}) + f(b^{k})

= a^{3}.T(b^{k – 3}) + a^{2}.f(b^{k – 2}) + a.f(b^{k – 1}) + f(b^{k})

After k iterations,

T(b^{k}) = a^{k}.T(b^{k – k}) + a^{k – 1}.f(b) + a^{k – 2}.f(b^{2}) + … + a^{0}.f(b^{k})

= a^{k}.T(1) + a^{k – 1}.f(b) + a^{k – 2}.f(b^{2}) + … + a^{0}.f(b^{k})

= a^{k}.T(1) + (a^{k}/a).f(b) + (a^{k}/a^{2}).f(b^{2}) + … + (a^{k}/a^{k}).f(b^{k})

= a^k \left[ T(1) + \sum_{i=1}^{k} \frac{f(b^i)}{a^i} \right]

Let n = b^{k}, so k = log_{b}n

By the property of logarithms,

x^{log_{b} y} = y^{log_{b} x}

so the leading term can be rewritten as a^{k} = a^{log_{b} n} = n^{log_{b} a}.

Thus, the complexity of the problem depends on the number of subproblems **a**, the size of each subproblem **(n/b)**, and the division/combination cost **f(n)**.
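The unrolled formula above can be checked numerically against direct evaluation of the recurrence; the particular values of a, b, f, and T(1) below are illustrative choices, not fixed by the derivation:

```python
# Numeric check of the unrolled formula
#   T(b^k) = a^k [ T(1) + sum_{i=1}^{k} f(b^i)/a^i ]
# against direct evaluation of T(n) = a.T(n/b) + f(n).
a, b = 3, 2          # illustrative: 3 subproblems, each of size n/2
f = lambda n: n * n  # illustrative division/combination cost
T1 = 1               # illustrative base-case cost T(1)

def T(n):
    """Direct evaluation of the recurrence for n a power of b."""
    return T1 if n == 1 else a * T(n // b) + f(n)

for k in range(1, 10):
    unrolled = a ** k * (T1 + sum(f(b ** i) / a ** i for i in range(1, k + 1)))
    assert abs(unrolled - T(b ** k)) < 1e-6 * T(b ** k)

print("unrolled formula matches the recurrence")
```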
