Divide and conquer


Both merge sort and quicksort employ a common algorithmic paradigm  based on recursion. This paradigm, divide-and-conquer, breaks a problem  into subproblems that are similar to the original problem, recursively  solves the subproblems, and finally combines the solutions to the  subproblems to solve the original problem. Because divide-and-conquer  solves subproblems recursively, each subproblem must be smaller than the  original problem, and there must be a base case for subproblems. You  should think of a divide-and-conquer algorithm as having three parts: 

1. Divide the problem into a number of subproblems that are smaller  instances of the same problem. 

2. Conquer the subproblems by solving them recursively. If they are small  enough, solve the subproblems as base cases. 

3. Combine the solutions to the subproblems into the solution for the original  problem. 

You can easily remember the steps of a divide-and-conquer algorithm as divide, conquer, combine. In a single step, each divide step creates two subproblems (though some divide-and-conquer algorithms create more than two).

If we expand out two more recursive steps, the same pattern repeats at every level. Because divide-and-conquer creates at least two subproblems, a divide-and-conquer algorithm makes multiple recursive calls.

The following computer algorithms are based on the divide-and-conquer approach:

Merge Sort 

Quick Sort 

Binary Search 

Strassen's Matrix Multiplication

 

 

Merge Sort 

Algorithm: 

void merge(int A[], int start, int mid, int end) {
    // p and q store the starting positions of the two sorted parts.
    int p = start, q = mid + 1;
    int Arr[end - start + 1], k = 0;
    for (int i = start; i <= end; i++) {
        if (p > mid)                  // first part is exhausted
            Arr[k++] = A[q++];
        else if (q > end)             // second part is exhausted
            Arr[k++] = A[p++];
        else if (A[p] < A[q])         // take the smaller element
            Arr[k++] = A[p++];
        else
            Arr[k++] = A[q++];
    }
    // Copy the merged elements back so A[start..end] is sorted.
    for (int i = 0; i < k; i++)
        A[start++] = Arr[i];
}





void merge_sort(int A[], int start, int end) {
    if (start < end) {
        int mid = (start + end) / 2;  // split the current range in two
        merge_sort(A, start, mid);    // sort the first half
        merge_sort(A, mid + 1, end);  // sort the second half
        merge(A, start, mid, end);    // merge the two sorted halves
    }
}





Quick Sort: 

Quick sort is also based on the concept of divide and conquer, just like merge sort. But in quick sort all the heavy lifting (the major work) is done while dividing the array into subarrays, while in merge sort all the real work happens during merging of the subarrays. In quick sort, the combine step does absolutely nothing.

It is also called partition-exchange sort. This algorithm divides  the list into three main parts: 

1. Elements less than the Pivot element 

2. Pivot element(Central element) 

3. Elements greater than the pivot element 

The pivot element can be any element from the array: the first element, the last element, or any random element. In this tutorial, we will take the rightmost (last) element as the pivot.

How Does Quick Sort Work?

Following are the steps involved in quick sort algorithm: 

1. After selecting an element as the pivot (the last element of the array in our case), we divide the array for the first time.

2. In quick sort, we call this partitioning. It is not a simple breaking down of the array into 2 subarrays; in partitioning, the array elements are positioned so that all the elements smaller than the pivot end up on its left side and all the elements greater than the pivot on its right side.

3. The pivot element is then at its final sorted position.

4. The elements to its left and right may not yet be sorted.

5. Then we pick the subarrays (the elements to the left of the pivot and the elements to the right of the pivot) and perform partitioning on each of them by choosing a pivot within the subarrays.

Algorithm:

quickSort(array, leftmostIndex, rightmostIndex)
    if (leftmostIndex < rightmostIndex)
        pivotIndex <- partition(array, leftmostIndex, rightmostIndex)
        quickSort(array, leftmostIndex, pivotIndex - 1)
        quickSort(array, pivotIndex + 1, rightmostIndex)

partition(array, leftmostIndex, rightmostIndex)
    pivotElement <- array[rightmostIndex]
    storeIndex <- leftmostIndex - 1
    for i <- leftmostIndex to rightmostIndex - 1
        if array[i] < pivotElement
            storeIndex++
            swap array[i] and array[storeIndex]
    swap array[rightmostIndex] and array[storeIndex + 1]
    return storeIndex + 1



How Does QuickSort Work?

1. A pivot element is chosen from the array. You can choose any element as the pivot; here, we take the rightmost (i.e. the last) element of the array.

2. The elements smaller than the pivot element are put on its left and the elements greater than the pivot element on its right. This arrangement is achieved by the following steps:

a. A pointer is fixed at the pivot element. The pivot element is compared with the elements beginning from the first index. When an element greater than the pivot element is reached, a second pointer is set for that element.

b. Now the pivot element is compared with the remaining elements (via a third pointer). When an element smaller than the pivot element is reached, that smaller element is swapped with the greater element found earlier.

c. The process goes on until the second-to-last element is reached.

d. Finally, the pivot element is swapped with the element at the second pointer, which places the pivot at its correct sorted position. The left and right subparts of this pivot element are then taken for further processing in the steps below.

3. Pivot elements are again chosen for the left and the right sub-parts separately. Within these sub-parts, the pivot elements are placed at their correct positions using recursion; then step 2 is repeated.

4. The sub-parts are divided into smaller sub-parts until each sub-part consists of a single element.

5. At this point, the array is already sorted.

Binary Search 

Binary search is a search technique that works efficiently on sorted lists. Hence, in order to search for an element in a list using binary search, we must first ensure that the list is sorted.

Binary search follows the divide-and-conquer approach: the list is divided into two halves and the item is compared with the middle element of the list. If a match is found, the location of the middle element is returned; otherwise, we search in one of the two halves, depending on the result of the comparison.

Binary search algorithm is given below. 

BINARY_SEARCH(A, lower_bound, upper_bound, VAL)

Step 1: [INITIALIZE] SET BEG = lower_bound, END = upper_bound, POS = -1
Step 2: Repeat Steps 3 and 4 while BEG <= END
Step 3:     SET MID = (BEG + END)/2
Step 4:     IF A[MID] = VAL
                SET POS = MID
                PRINT POS
                Go to Step 6
            ELSE IF A[MID] > VAL
                SET END = MID - 1
            ELSE
                SET BEG = MID + 1
            [END OF IF]
        [END OF LOOP]
Step 5: IF POS = -1
            PRINT "VALUE IS NOT PRESENT IN THE ARRAY"
        [END OF IF]
Step 6: EXIT

Complexity

Performance                   Complexity
Worst case                    O(log n)
Best case                     O(1)
Average case                  O(log n)
Worst-case space complexity   O(1)



Example 

Let us consider an array arr = {1, 5, 7, 8, 13, 19, 20,  23, 29}. Find the location of the item 23 in the array. 

In the first step:

1. BEG = 0
2. END = 8
3. MID = (0 + 8)/2 = 4
4. a[MID] = a[4] = 13 < 23, therefore:

In the second step:

1. BEG = MID + 1 = 5
2. END = 8
3. MID = (5 + 8)/2 = 6
4. a[MID] = a[6] = 20 < 23, therefore:

In the third step:

1. BEG = MID + 1 = 7
2. END = 8
3. MID = (7 + 8)/2 = 7
4. a[MID] = a[7]
5. a[7] = 23 = item, therefore set location = MID
6. The location of the item is 7.

Strassen's Matrix Multiplication 

Introduction 

Strassen published his algorithm in 1969. The brute-force algorithm multiplies two 2*2 matrices directly from the definition; by using the divide-and-conquer technique, the overall complexity of multiplying two matrices is reduced. This happens by decreasing the total number of multiplications performed, at the expense of a slight increase in the number of additions.

For multiplying two 2*2 matrices, Strassen used a set of formulas that require seven multiplications and eighteen additions and subtractions, whereas the brute-force algorithm uses eight multiplications and four additions. The utility of Strassen's formulas is shown by their asymptotic superiority as the order n of the matrix approaches infinity. Consider two n*n matrices A and B, where n is a power of two. Each of A, B and their product C can be partitioned into four n/2*n/2 submatrices, where C is the resultant matrix of A and B.

Procedure of Strassen matrix multiplication

The procedure is as follows:

1. Divide a matrix of order n*n recursively until we reach matrices of order 2*2.

2. Use Strassen's formulas to carry out each 2*2 matrix multiplication.

3. Each such multiplication performs seven scalar multiplications and eighteen additions and subtractions.

4. Combine the results of the submatrices to find the final product matrix.

Formulas for Stassen’s matrix multiplication 

In Strassen’s matrix multiplication there are seven multiplications and eighteen additions and subtractions in total.

Example:  

Strassen’s algorithm makes use of the same divide and conquer approach as  above, but instead uses only 7 recursive calls rather than 8 as shown in the  equations below. Here we save one recursive call, but have several new  additions of n/2 x n/2 matrices. 

P1=(A11+A22)(B11+B22) 

P2=(A21+A22) B11 

P3=A11(B12−B22) 

P4=A22(B21−B11) 

P5=(A11+A12) B22 

P6=(A21−A11) (B11+B12) 

P7=(A12−A22) (B21+B22) 

C11=P1+P4−P5+P7 

C12=P3+P5 

C21=P2+P4 

C22=P1−P2+P3+P6 

Example: 

Apply the Strassen’s method to multiply the following two matrices 

[1     4

6     8 ]


[7     3 

9     5 ]

Solution : Let A =    

[1     4 

6     8 ]

B=

[ 7    3

9    5]

c=

[ C11     C12 

C21     C22 ]



Calculate the values of P1, P2, ..., P7:

P1=(A11+A22)(B11+B22)
 = (1+8)(7+5) = 9*12 = 108

P2=(A21+A22) B11 

 = (6+8)7 =14*7 = 98 

P3=A11(B12−B22) 

 = 1(3 – 5 ) = -2 

P4=A22(B21−B11) 

 = 8(9-7) = 8*2=16 

P5=(A11+A12) B22 

 = (1+4)5 =25 

P6=(A21−A11) (B11+B12)
 = (6-1)(7+3) = 5*10 = 50

P7=(A12−A22) (B21+B22)
 = (4-8)(9+5) = -4*14 = -56

C11=P1+P4−P5+P7
 = 108 + 16 - 25 - 56 = 43

C12=P3+P5 

 = -2+25 = 23 

C21=P2+P4 

= 98+16 = 114 

C22=P1−P2+P3+P6
 = 108 - 98 - 2 + 50 = 58

Therefore,
C =
[ 43     23
 114     58 ]

  

Divide and Conquer Method 

Consider two matrices A and B with 4x4 dimension each as shown below,

The matrix multiplication of the above two matrices A and B is Matrix C,

where, 

c11 = a11b11 + a12b21 + a13b31 + a14b41    (1)

c12 = a11b12 + a12b22 + a13b32 + a14b42    (2)

c21 = a21b11 + a22b21 + a23b31 + a24b41    (3)

c22 = a21b12 + a22b22 + a23b32 + a24b42    (4)

and so on. 

Now, let's look at the Divide and Conquer approach to multiply two matrices. 

Take two submatrices from each of the two matrices A and B, namely (A11 & A12) and (B11 & B21).

Multiply the 2x2 submatrix blocks A11 by B11 and A12 by B21, and add the two products. If you observe, the result is exactly the top-left 2x2 quadrant of C:

A11B11 + A12B21 =
[ c11    c12
  c21    c22 ]

where ‘+’ is matrix addition, and c11, c12, c21 and c22 are equal to equations (1), (2), (3) and (4) respectively.

So the idea is to recursively divide n x n matrices into n/2 x n/2 matrices until they are small enough to be multiplied in the naive way; each level of the recursion performs 8 submatrix multiplications and 4 matrix additions.

