Divide and Conquer
Textbook Reading: Chapters 4, 7 & 33.4
Overview

Design principle: Divide and conquer
Proof technique: Induction, induction, induction
Analysis technique: Recurrence relations

Problems:
- Sorting (Merge Sort and Quick Sort)
- Selection
- Matrix multiplication
- Finding the two closest points
Merge Sort

MergeSort(A, ℓ, r)
  if r ≤ ℓ then return
  m = ⌊(ℓ + r)/2⌋
  MergeSort(A, ℓ, m)
  MergeSort(A, m + 1, r)
  Merge(A, ℓ, m, r)

[Figure: an example array is split into two halves, each half is sorted by a recursive call, and Merge combines the two sorted halves into one sorted array.]
Merging Two Sorted Lists

Merge(A, ℓ, m, r)
  n1 = m − ℓ + 1
  n2 = r − m
  for i = 1 to n1 do L[i] = A[ℓ + i − 1]
  for i = 1 to n2 do R[i] = A[m + i]
  L[n1 + 1] = R[n2 + 1] = +∞
  i = j = 1
  for k = ℓ to r do
    if L[i] ≤ R[j] then
      A[k] = L[i]
      i = i + 1
    else
      A[k] = R[j]
      j = j + 1

[Figure: the two sorted halves of A[ℓ .. r] are copied into L and R; the smaller of the two front elements is repeatedly moved back into A.]
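The pseudocode above, together with the MergeSort driver from the previous slide, translates almost line for line into Python. The sketch below is our own (0-based indexing, `math.inf` playing the role of the +∞ sentinel) and is illustrative rather than authoritative:

```python
import math

def merge(A, lo, mid, hi):
    """Merge the sorted runs A[lo..mid] and A[mid+1..hi] (inclusive, 0-based)."""
    # Copy the runs, appending an infinite sentinel so neither run is
    # exhausted before the main loop has filled all of A[lo..hi].
    L = A[lo:mid + 1] + [math.inf]
    R = A[mid + 1:hi + 1] + [math.inf]
    i = j = 0
    for k in range(lo, hi + 1):
        if L[i] <= R[j]:          # ties go to L, which keeps the sort stable
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def merge_sort(A, lo=0, hi=None):
    """Sort A[lo..hi] in place by divide and conquer."""
    if hi is None:
        hi = len(A) - 1
    if hi <= lo:                   # zero or one element: already sorted
        return
    mid = (lo + hi) // 2
    merge_sort(A, lo, mid)         # recurse on the left half
    merge_sort(A, mid + 1, hi)     # recurse on the right half
    merge(A, lo, mid, hi)          # combine the two sorted halves
```

The sentinels guarantee that neither run runs dry inside the loop, so no explicit bounds checks are needed.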
Divide and Conquer

Three steps:
1. Divide the input into smaller parts.
2. Recursively solve the same problem on these smaller parts.
3. Combine the solutions computed by the recursive calls to obtain the final solution.

This can be viewed as a reduction technique, reducing the original problem to simpler problems to be solved in the divide and/or combine steps.

Example: Once we unfold the recursion of Merge Sort, we are left with nothing but Merge steps. Thus, we reduce sorting to the simpler problem of merging sorted lists.
Loop Invariants

... are a technique for proving the correctness of an iterative algorithm. The invariant states conditions that should hold before and after each iteration.

Correctness proof using a loop invariant:
- Initialization: Prove the invariant holds before the first iteration.
- Maintenance: Prove each iteration maintains the invariant.
- Termination: Prove that the correctness of the invariant after the last iteration implies correctness of the algorithm.
Correctness of Merge

Loop invariant:
- A[ℓ .. k − 1] ∪ L[i .. n1] ∪ R[j .. n2] is the set of elements originally in A[ℓ .. r].
- A[ℓ .. k − 1], L[i .. n1], and R[j .. n2] are sorted.
- x ≤ y for all x ∈ A[ℓ .. k − 1] and y ∈ L[i .. n1] ∪ R[j .. n2].
Correctness of Merge

Initialization:
- A[ℓ .. m] is copied to L[1 .. n1].
- A[m + 1 .. r] is copied to R[1 .. n2].
- i = 1, j = 1, k = ℓ.
Correctness of Merge

Termination:
- The loop ends with k = r + 1.
- A[ℓ .. r] contains all items it contained initially, in sorted order.
Correctness of Merge

Maintenance: (case L[i] ≤ R[j])
- A[k′] ≤ L[i] for all k′ < k
- L[i] ≤ L[i′] for all i′ > i
- L[i] ≤ R[j] ≤ R[j′] for all j′ > j

So copying L[i] into A[k] and incrementing i and k maintains all three clauses; the case L[i] > R[j] is symmetric.
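Since the invariant's three clauses are concrete statements about array slices, they can be checked mechanically at run time. The sketch below is our own (0-based Python, sentinels as in the pseudocode); it asserts each clause at the top of every iteration of the merge loop:

```python
import math

def merge_checked(A, lo, mid, hi):
    """Merge the sorted runs A[lo..mid] and A[mid+1..hi], asserting the
    loop invariant before every iteration of the main loop."""
    L = A[lo:mid + 1] + [math.inf]
    R = A[mid + 1:hi + 1] + [math.inf]
    n1, n2 = mid - lo + 1, hi - mid
    original = sorted(A[lo:hi + 1])        # the original multiset of elements
    i = j = 0
    for k in range(lo, hi + 1):
        # Clause 1: output so far plus the two remainders = original multiset.
        assert sorted(A[lo:k] + L[i:n1] + R[j:n2]) == original
        # Clause 2: all three pieces are sorted (L and R are by construction).
        assert A[lo:k] == sorted(A[lo:k])
        # Clause 3: everything already written is <= everything still pending.
        done, pending = A[lo:k], L[i:n1] + R[j:n2]
        assert all(x <= y for x in done for y in pending)
        if L[i] <= R[j]:
            A[k] = L[i]; i += 1
        else:
            A[k] = R[j]; j += 1
```

Running it on any pair of sorted runs exercises the Initialization, Maintenance, and Termination steps of the proof.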
Correctness of Merge Sort

Lemma: Merge Sort correctly sorts any input array.

Proof by induction on n.

Base case: (n ≤ 1) An empty or one-element array is already sorted. Merge Sort does nothing.

Inductive step: (n > 1) The left and right halves have size less than n each. By the inductive hypothesis, the recursive calls sort them correctly. Merge correctly merges the two sorted sequences.
Correctness of Divide and Conquer Algorithms

Divide and conquer algorithms are the algorithmic incarnation of induction:
- Base case: Solve trivial instances directly, without recursing.
- Inductive step: Reduce the solution of a given instance to the solution of smaller instances, by recursing.

Induction is the natural proof method for divide and conquer algorithms.
Recurrence Relations

A recurrence relation defines the value f(n) of a function f(·) for argument n in terms of the values of f(·) for arguments smaller than n.

Examples:

Fibonacci numbers:
  F_n = 1 if n = 0 or n = 1
  F_n = F_{n−1} + F_{n−2} otherwise

Binomial coefficients:
  B(n, k) = 1 if k = 0 or k = n
  B(n, k) = B(n − 1, k − 1) + B(n − 1, k) otherwise
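Both recurrences translate directly into recursive functions. A short sketch of our own in Python, memoized with `functools.lru_cache` because the naive recursions recompute the same subproblems exponentially often:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # F_n = 1 for n = 0 or n = 1 (the convention used above).
    if n <= 1:
        return 1
    return fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def binom(n, k):
    # Pascal's rule, with base cases at the edges of the triangle.
    if k == 0 or k == n:
        return 1
    return binom(n - 1, k - 1) + binom(n - 1, k)
```

For example, `fib(5)` returns 8 and `binom(5, 2)` returns 10; thanks to memoization each value is computed only once.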
A Recurrence for Merge Sort

Analysis:
- If n = 0 or n = 1, we spend constant time to figure out that there is nothing to do and then exit.
- If n > 1, we
  - spend constant time to determine the middle index m,
  - make one recursive call on the left half, which has size ⌈n/2⌉,
  - make one recursive call on the right half, which has size ⌊n/2⌋, and
  - spend linear time to merge the two resulting sorted sequences.

Recurrence:
  T(n) = Θ(1) if n ≤ 1
  T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + Θ(n) if n > 1
A Recurrence for Binary Search

BinarySearch(A, ℓ, r, x)
  if r < ℓ then return no
  m = ⌊(ℓ + r)/2⌋
  if x = A[m] then return yes
  if x < A[m] then return BinarySearch(A, ℓ, m − 1, x)
  else return BinarySearch(A, m + 1, r, x)

Analysis:
- If n = 0, we spend constant time to answer no.
- If n > 0, we
  - spend constant time to find the middle element and compare it to x, and
  - make one recursive call on one of the two halves, which has size at most ⌊n/2⌋.

Recurrence:
  T(n) = Θ(1) if n = 0
  T(n) = T(⌊n/2⌋) + Θ(1) if n > 0
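A Python transcription of the pseudocode (our own naming, 0-based indexing). Each call either answers immediately or makes one recursive call on half the range, which is exactly what the recurrence records:

```python
def binary_search(A, x, lo=0, hi=None):
    """Return True if x occurs in the sorted list A[lo..hi], else False."""
    if hi is None:
        hi = len(A) - 1
    if hi < lo:                   # empty range: x is not here
        return False
    m = (lo + hi) // 2
    if x == A[m]:
        return True
    if x < A[m]:
        return binary_search(A, x, lo, m - 1)   # recurse on the left half
    return binary_search(A, x, m + 1, hi)       # recurse on the right half
```

For example, `binary_search([1, 3, 5, 7, 9], 7)` returns True, while searching for 4 returns False.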
Simplified Recurrence Notation

The recurrences we use to analyze algorithms all have a base case of the form T(n) ≤ c for all n ≤ n₀. The exact choices of c and n₀ affect the value of T(n) for any n by only a constant factor.

Floors and ceilings usually affect the value of T(n) by only a constant factor.

So we are lazy and write
  Merge Sort: T(n) = 2T(n/2) + Θ(n)
  Binary search: T(n) = T(n/2) + Θ(1)
Solving Recurrences

Given two algorithms A and B with running times
  T_A(n) = 2T(n/2) + Θ(n)
  T_B(n) = 3T(n/2) + Θ(lg n),
which one is faster?

A recurrence for T(n) precisely defines T(n), but it is hard for us to look at the function and say which one grows faster.

We want a closed-form expression for T(n), that is, one of the form T(n) ∈ Θ(n), T(n) ∈ Θ(n²), ..., one that does not depend on T(n′) for any n′ < n.
Methods for Solving Recurrences

Substitution:
- Guess the solution. (Intuition, experience, black magic, recursion trees, trial and error)
- Use induction to prove that the guess is correct.

Recursion trees:
- Draw a tree that visualizes how the recurrence unfolds.
- Sum up the costs of the nodes in the tree to
  - obtain an exact answer if the analysis is done rigorously enough, or
  - obtain a guess that can then be verified rigorously using substitution.

Master Theorem:
- Cookbook recipe for solving common recurrences.
- Immediately tells us the solution after we verify some simple conditions to determine which case of the theorem applies.
Substitution: Merge Sort

Lemma: The running time of Merge Sort is in O(n lg n).

Recurrence: T(n) = 2T(n/2) + O(n), that is, T(n) ≤ 2T(n/2) + an, for some a > 0 and all n ≥ n₀.

Guess: T(n) ≤ cn lg n, for some c > 0 and all n ≥ 2.

Base case: For 2 ≤ n < 4, T(n) ≤ c′ ≤ c′n ≤ c′n lg n, for some c′ > 0. T(n) ≤ cn lg n as long as c ≥ c′.
Substitution: Merge Sort

Inductive step: (n ≥ 4)
  T(n) ≤ 2T(n/2) + an
       ≤ 2 · c(n/2) lg(n/2) + an
       = cn(lg n − 1) + an
       = cn lg n + (a − c)n
       ≤ cn lg n, for all c ≥ a.

Notes:
- We only proved the upper bound. The lower bound, T(n) ∈ Ω(n lg n), can be proven analogously.
- Since the base case is valid only for n ≥ 2 and we use the inductive hypothesis for n/2 in the inductive step, the inductive step is valid only for n ≥ 4. Hence, a base case for n < 4.
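The bound can also be sanity-checked numerically. The sketch below (our own) evaluates the exact recurrence T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + an with a = 1 and a constant base case, then verifies the guess T(n) ≤ cn lg n with c = 2 over a sample range. This is a spot check under chosen constants, not a proof:

```python
from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n):
    # Exact Merge Sort recurrence with a = 1 and a constant-time base case.
    if n <= 1:
        return 1
    return T((n + 1) // 2) + T(n // 2) + n

# Spot-check the guess T(n) <= c n lg n with c = 2: c >= a covers the
# inductive step, and c = 2 also covers the small base cases here.
assert all(T(n) <= 2 * n * log2(n) for n in range(2, 2049))
```

For powers of two the recurrence solves exactly to T(n) = n lg n + n, which indeed sits below 2n lg n for all n ≥ 2.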
Substitution: Binary Search

Lemma: The running time of binary search is in O(lg n).

Recurrence: T(n) = T(n/2) + O(1), that is, T(n) ≤ T(n/2) + a, for some a > 0 and all n ≥ n₀.

Guess: T(n) ≤ c lg n, for some c > 0 and all n ≥ 2.

Base case: For 2 ≤ n < 4, T(n) ≤ c′ ≤ c′ lg n, for some c′ > 0. T(n) ≤ c lg n as long as c ≥ c′.
Substitution: Binary Search

Inductive step: (n ≥ 4)
  T(n) ≤ T(n/2) + a
       ≤ c lg(n/2) + a
       = c(lg n − 1) + a
       = c lg n + (a − c)
       ≤ c lg n, for all c ≥ a.
Substitution and Asymptotic Notation

Why did we expand the Merge Sort recurrence
  T(n) = 2T(n/2) + O(n) to T(n) ≤ 2T(n/2) + an
and the claim
  T(n) ∈ O(n lg n) to T(n) ≤ cn lg n?

If we aren't careful, we may prove nonsensical results:

Recurrence: T(n) = T(n − 1) + n
Claim: T(n) ∈ O(n). (Note that in fact T(n) ∈ Θ(n²)!)

A sloppy "proof":
- Base case: (n = 1) T(n) = T(1) = O(n).
- Inductive step: (n > 1) T(n) = T(n − 1) + n = O(n − 1) + n = O(n) + n = O(n).

Spelling the claim out as T(n) ≤ cn exposes the flaw:
- Base case: (n = 1) T(n) = T(1) = 1 ≤ cn.
- Inductive step: (n > 1) T(n) = T(n − 1) + n ≤ c(n − 1) + n = cn + (n − c) > cn for n > c!
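Evaluating the recurrence directly confirms that the claim is indeed nonsense. With T(1) = 1, the closed form is T(n) = n(n + 1)/2 ∈ Θ(n²), so T(n)/n grows without bound:

```python
def T(n):
    # T(n) = T(n - 1) + n with T(1) = 1, evaluated iteratively.
    total = 1
    for k in range(2, n + 1):
        total += k
    return total

assert T(10) == 10 * 11 // 2      # matches the closed form n(n + 1)/2
# T(n)/n exceeds any fixed constant, so T(n) cannot be in O(n):
assert T(1000) / 1000 > 500
```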
A Recursion Tree for Merge Sort

Recurrence: T(n) = 2T(n/2) + Θ(n), written as T(n) = 2T(n/2) + an

Strategy: Expand the recurrence all the way down to the base case.

[Recursion tree: the root costs an; its two children cost an/2 each; the four nodes on the next level cost an/4 each; and so on, down to Θ(1) leaves after lg n levels. Each of the lg n levels contributes a total of an.]

Solution: T(n) ∈ Θ(n lg n)
A Recursion Tree for Binary Search

Recurrence: T(n) = T(n/2) + Θ(1), written as T(n) = T(n/2) + a

[Recursion tree: a single path of lg n nodes, each costing a, ending at a Θ(1) leaf.]

Solution: T(n) ∈ Θ(lg n)
A Less Obvious Recursion Tree

Recurrence: T(n) = T(n/3) + T(2n/3) + Θ(n), written as T(n) = T(n/3) + T(2n/3) + an

[Recursion tree: the root costs an and has children of sizes n/3 and 2n/3, which in turn have children of sizes n/9, 2n/9, 2n/9, 4n/9, and so on (down to 8n/27 on the next level). Every level contributes at most an; leaves appear between depth log_3 n and depth log_{3/2} n.]

Solution: T(n) ∈ Θ(n lg n)
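A quick numerical check of the claimed Θ(n lg n) bound, using floors so that the two subproblem sizes always sum to n and the recursion is well defined. The constants 1/2 and 2 below are our own choices for a spot check, not part of a proof:

```python
from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(n/3) + T(2n/3) + n with floors: n//3 + (n - n//3) = n.
    if n <= 2:
        return 1
    return T(n // 3) + T(n - n // 3) + n

# Spot-check that T(n) is sandwiched between two multiples of n lg n.
assert all(0.5 * n * log2(n) <= T(n) <= 2 * n * log2(n)
           for n in range(3, 2049))
```

The upper constant reflects the tree's depth of at most log_{3/2} n levels of total cost at most n each; the lower constant reflects the log_3 n levels that are completely full.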
Sometimes Only Substitution Will Do

Recurrence: T(n) = T(2n/3) + T(n/2) + Θ(n)

The ith level of the recursion tree costs Θ(n (7/6)^i), and the tree has between log_2 n and log_{3/2} n levels, so summing the levels only gives

Lower bound: T(n) ∈ Ω(n^{1 + log_2 (7/6)}) ⊆ Ω(n^{1.22})
Upper bound: T(n) ∈ O(n^{1 + log_{3/2} (7/6)}) ⊆ O(n^{1.38})
Master Theorem

Master Theorem: Let a ≥ 1 and b > 1, let f(n) be a positive function, and let T(n) be given by the following recurrence:

  T(n) = a T(n/b) + f(n).

(i) If f(n) ∈ O(n^{log_b a − ε}), for some ε > 0, then T(n) ∈ Θ(n^{log_b a}).
(ii) If f(n) ∈ Θ(n^{log_b a}), then T(n) ∈ Θ(n^{log_b a} lg n).
(iii) If f(n) ∈ Ω(n^{log_b a + ε}) and a f(n/b) ≤ c f(n), for some ε > 0 and c < 1 and for all n ≥ n₀, then T(n) ∈ Θ(f(n)).
Example 1: Merge Sort again
  T(n) = 2T(n/2) + Θ(n)
  a = 2, b = 2, f(n) ∈ Θ(n) = Θ(n^{log_2 2})
  Case (ii) applies: T(n) ∈ Θ(n lg n)
Example 2: Matrix multiplication
  T(n) = 7T(n/2) + Θ(n²)
  a = 7, b = 2, f(n) ∈ Θ(n²) ⊆ O(n^{log_2 7 − ε}) for all 0 < ε ≤ log_2 7 − 2
  Case (i) applies: T(n) ∈ Θ(n^{log_2 7}) ⊆ O(n^{2.81})
Example 3:
  T(n) = 2T(n/4) + n
  a = 2, b = 4, f(n) = n ∈ Ω(n^{log_4 2 + ε}) for all 0 < ε ≤ 1/2
  Regularity: a f(n/b) = 2 f(n/4) = n/2 = f(n)/2, so c = 1/2 < 1
  Case (iii) applies: T(n) ∈ Θ(n)
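For driving functions of the form f(n) = n^k, which case applies is determined entirely by comparing k with log_b a, and the regularity condition of case (iii) holds automatically, since a(n/b)^k = (a/b^k) n^k with a/b^k < 1 exactly when k > log_b a. The helper below is our own mechanization of this comparison (beware the exact floating-point equality test for case (ii); it works for the three examples above):

```python
from math import log

def master(a, b, k):
    """Solve T(n) = a T(n/b) + Theta(n^k) via the Master Theorem.

    Only handles polynomial driving functions f(n) = n^k.
    Returns a human-readable Theta-expression.
    """
    crit = log(a) / log(b)                  # the critical exponent log_b a
    if k < crit:
        return f"Theta(n^{crit:.2f})"       # case (i): the leaves dominate
    if k == crit:
        return f"Theta(n^{k} lg n)"         # case (ii): all levels cost the same
    return f"Theta(n^{k})"                  # case (iii): the root dominates
```

For example, `master(2, 2, 1)` reports Θ(n^1 lg n) for Merge Sort, `master(7, 2, 2)` reports Θ(n^2.81) for Strassen-style matrix multiplication, and `master(2, 4, 1)` reports Θ(n^1) for Example 3.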
Master Theorem: Proof T(n) = a T n + f(n) Case : f(n) O(nlog a ) i a f n i O a i log a T(n) = Θ(n log a! n ) + O(n i log a log n ) X log a = O(n log a ) f(n) O(...) i i= log n ( ) = Θ(n ) + O(n ) = Θ(nlog a ) + O(nlog a ) n log a i O(...) log n O(...) i = Θ(nlog a ) alog n Θ() = Θ(nlog a )
Master Theorem: Proof

  T(n) = a·T(n/b) + f(n)

Case 2: f(n) ∈ Θ(n^(log_b a))

  a^i·f(n/b^i) ∈ Θ(a^i·(n/b^i)^(log_b a)) = Θ(n^(log_b a))

Each of the log_b n levels of the recursion tree contributes Θ(n^(log_b a)), and so do the a^(log_b n) = n^(log_b a) leaves.

  T(n) = Θ(n^(log_b a) · log_b n) = Θ(n^(log_b a) lg n)
Master Theorem: Proof

  T(n) = a·T(n/b) + f(n)

Case 3: f(n) ∈ Ω(n^(log_b a + ε)) and a·f(n/b) ≤ c·f(n) for some c < 1

Claim: a^i·f(n/b^i) ≤ c^i·f(n).

For i = 0, a^0·f(n/b^0) = f(n) = c^0·f(n).
For i > 0,
  a^i·f(n/b^i) = a^(i−1) · a·f((n/b^(i−1))/b)
              ≤ a^(i−1) · c·f(n/b^(i−1))
              = c · a^(i−1)·f(n/b^(i−1))
              ≤ c · c^(i−1)·f(n)      (by the inductive hypothesis)
              = c^i·f(n).

So the level costs are bounded by the geometric series f(n), c·f(n), c²·f(n), ..., and the leaves contribute Θ(n^(log_b a)):

  T(n) ∈ Ω(n^(log_b a) + f(n)) = Ω(f(n))

  T(n) ∈ O(n^(log_b a) + Σ_{i=0}^{∞} c^i·f(n))
       = O(n^(log_b a) + f(n) · Σ_{i=0}^{∞} c^i)
       = O(n^(log_b a) + f(n)/(1 − c))
       = O(f(n))      (since f(n) ∈ Ω(n^(log_b a + ε)))

Hence T(n) ∈ Θ(f(n)).
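The claim in case 3 can be spot-checked numerically. The sketch below uses a hypothetical instance of my own choosing, T(n) = 2·T(n/2) + n², where a = b = 2 and a·f(n/b) = n²/2 = f(n)/2, so the claim holds with c = 1/2:

```python
def f(n):
    # driving function of the hypothetical instance T(n) = 2*T(n/2) + n^2
    return n * n

a, b, c = 2, 2, 0.5   # here a*f(n/b) = n^2/2 = c*f(n)
n = 1024.0
for i in range(11):
    # level i of the recursion tree costs a^i * f(n/b^i) <= c^i * f(n)
    assert a**i * f(n / b**i) <= c**i * f(n) + 1e-9
```

For this instance the bound is tight: every level costs exactly half of the one above it, so the total is at most f(n)/(1 − c) = 2n².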
Quick Sort

QuickSort(A, ℓ, r)
  if r ≤ ℓ then return
  m = Partition(A, ℓ, r)
  QuickSort(A, ℓ, m − 1)
  QuickSort(A, m + 1, r)

Partition rearranges A[ℓ..r] so that the pivot p lands in position m, every element of A[ℓ..m−1] is ≤ p, and every element of A[m+1..r] is ≥ p.

Worst case: T(n) = Θ(n) + T(n − 1) = Θ(n²)
Best case: T(n) = Θ(n) + 2·T(n/2) = Θ(n lg n)
Average case: T(n) = Θ(n lg n)
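The pseudocode above translates directly into Python. This sketch uses the Lomuto partitioning scheme with A[r] as the pivot (one of the two schemes discussed next); function names are my own:

```python
def partition(a, lo, hi):
    # Lomuto scheme: grow the "<= pivot" prefix, then place the pivot just after it
    i = lo - 1
    for j in range(lo, hi):
        if a[j] <= a[hi]:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]
    return i + 1

def quick_sort(a, lo=0, hi=None):
    # in-place Quick Sort, recursing on the two sides of the pivot position
    if hi is None:
        hi = len(a) - 1
    if hi <= lo:
        return
    m = partition(a, lo, hi)
    quick_sort(a, lo, m - 1)
    quick_sort(a, m + 1, hi)

data = [8, 7, 5, 43, 3, 64]
quick_sort(data)
assert data == [3, 5, 7, 8, 43, 64]
```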
Two Partitioning Algorithms

HoarePartition(A, l, r, x)
  i = l − 1
  j = r + 1
  while True do
    repeat i = i + 1 until A[i] ≥ x
    repeat j = j − 1 until A[j] ≤ x
    if i < j then swap A[i] and A[j]
    else return j

LomutoPartition(A, l, r)
  i = l − 1
  for j = l to r − 1 do
    if A[j] ≤ A[r] then
      i = i + 1
      swap A[i] and A[j]
  swap A[i + 1] and A[r]
  return i + 1

Loop invariants:
  Hoare: A[l..i] ≤ x, A[j..r] ≥ x, A[i+1..j−1] not yet examined.
  Lomuto (with x = A[r]): A[l..i] ≤ x, A[i+1..j−1] > x, A[j..r−1] not yet examined.

Note that HoarePartition takes the pivot value x as a parameter (worst-case selection below will pass in a pivot of its own choosing), while LomutoPartition always uses A[r].

HoarePartition is more efficient in practice. LomutoPartition has some properties that make average-case analysis easier. HoarePartition is more convenient for worst-case Quick Sort.
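A Python sketch of HoarePartition in its four-argument form, where x is a pivot value assumed to occur in a[l..r] (the repeat/until loops become do-while-style loops):

```python
def hoare_partition(a, l, r, x):
    # Hoare partition around the pivot value x:
    # returns j with every element of a[l..j] <= x and every element of a[j+1..r] >= x
    i, j = l - 1, r + 1
    while True:
        i += 1
        while a[i] < x:       # repeat i = i + 1 until a[i] >= x
            i += 1
        j -= 1
        while a[j] > x:       # repeat j = j - 1 until a[j] <= x
            j -= 1
        if i >= j:
            return j
        a[i], a[j] = a[j], a[i]

a = [8, 7, 5, 43, 3, 64]
j = hoare_partition(a, 0, 5, 8)
assert all(v <= 8 for v in a[:j + 1])
assert all(v >= 8 for v in a[j + 1:])
```

Unlike Lomuto, Hoare does not place the pivot at its final position; it only splits the range into a "small" and a "large" side, which is exactly what the selection algorithm below needs.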
Selection

Goal: Given an unsorted array A and an integer 1 ≤ k ≤ n, find the kth smallest element in A.

  8 7 5 43 3 64      (the 4th smallest element is 8)

First idea: Sort the array and then return the element in the kth position. O(n lg n) time.

We can find the minimum (k = 1) or the maximum (k = n) in O(n) time!

It would be nice if we were able to find the kth smallest element, for any k, in O(n) time.
The Sorting Idea Isn't Half Bad, But...

To find the kth smallest element, we don't need to sort the input completely. We only need to verify that there are exactly k − 1 elements smaller than the element we return.

Partition splits A around a pivot p into a left part L (elements ≤ p) and a right part R (elements ≥ p).

If |L| = k − 1, then p is the kth smallest element.
If |L| ≥ k, then the kth smallest element of L is the kth smallest element of A.
If |L| < k − 1, then the (k − |L| − 1)st smallest element of R is the kth smallest element of A.
Quick Select

QuickSelect(A, ℓ, r, k)
  if r ≤ ℓ then return A[ℓ]
  m = Partition(A, ℓ, r)
  if m − ℓ + 1 = k then return A[m]
  else if m − ℓ + 1 > k then return QuickSelect(A, ℓ, m − 1, k)
  else return QuickSelect(A, m + 1, r, k − (m − ℓ + 1))

Worst case: T(n) = Θ(n) + T(n − 1) = Θ(n²)
Best case: T(n) = Θ(n) + T(n/2) = Θ(n)
Average case: T(n) = Θ(n)
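A Python sketch of Quick Select, written iteratively since each call recurses on only one side. Choosing a random pivot (my addition, not in the pseudocode above) makes the Θ(n²) worst case unlikely on any fixed input:

```python
import random

def quick_select(a, k):
    # returns the kth smallest element (1-indexed) of a; average-case O(n); mutates a
    lo, hi = 0, len(a) - 1
    while True:
        if hi <= lo:
            return a[lo]
        # Lomuto partition around a randomly chosen pivot
        p = random.randint(lo, hi)
        a[p], a[hi] = a[hi], a[p]
        i = lo - 1
        for j in range(lo, hi):
            if a[j] <= a[hi]:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[hi] = a[hi], a[i + 1]
        m = i + 1
        rank = m - lo + 1          # rank of the pivot within a[lo..hi]
        if rank == k:
            return a[m]
        elif rank > k:
            hi = m - 1
        else:
            lo, k = m + 1, k - rank

assert quick_select([8, 7, 5, 43, 3, 64], 4) == 8
```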
Worst-Case Selection

QuickSelect(A, ℓ, r, k)
  if r ≤ ℓ then return A[ℓ]
  p = FindPivot(A, ℓ, r)
  m = HoarePartition(A, ℓ, r, p)
  if m − ℓ + 1 ≥ k then return QuickSelect(A, ℓ, m, k)
  else return QuickSelect(A, m + 1, r, k − (m − ℓ + 1))

If we could guarantee that p is the median of A[ℓ..r], then we'd recurse on at most n/2 elements:

  T(n) = Θ(n) + T(n/2) = Θ(n).

Alas, finding the median is selection!
Making Do With An Approximate Median

If there are at least εn elements smaller than p and at least εn elements greater than p, for some constant ε > 0, then

  T(n) ≤ Θ(n) + T((1 − ε)n) = Θ(n).
Finding An Approximate Median

FindPivot(A, ℓ, r)
  n′ = ⌊(r − ℓ)/5⌋ + 1
  for i = 0 to n′ − 1 do
    InsertionSort(A, ℓ + 5i, min(ℓ + 5i + 4, r))
    if ℓ + 5i + 4 ≤ r then B[i + 1] = A[ℓ + 5i + 2]      (median of the full group of 5)
    else B[i + 1] = A[ℓ + 5i]
  return QuickSelect(B, 1, n′, ⌈n′/2⌉)      (approximate median: the median of medians)
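The median-of-medians idea can be sketched in Python. This version works on fresh lists rather than in place, and takes the middle element of every sorted group (including a partial last group) as its median, which differs slightly from the pseudocode's handling of the last group:

```python
def select(a, k):
    # kth smallest (1-indexed) in worst-case O(n) time via median of medians
    if len(a) <= 5:
        return sorted(a)[k - 1]
    groups = [a[i:i + 5] for i in range(0, len(a), 5)]
    medians = [sorted(g)[(len(g) - 1) // 2] for g in groups]
    p = select(medians, (len(medians) + 1) // 2)   # approximate median
    smaller = [x for x in a if x < p]
    equal = [x for x in a if x == p]
    if k <= len(smaller):
        return select(smaller, k)
    if k <= len(smaller) + len(equal):
        return p
    return select([x for x in a if x > p], k - len(smaller) - len(equal))

assert select([8, 7, 5, 43, 3, 64], 4) == 8
```

Both recursive calls shrink the input by a constant factor, which is exactly what the analysis on the next slides quantifies.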
Finding An Approximate Median

There are at least ⌈n′/2⌉ medians less than or equal to the median of medians.

For at least ⌈n′/2⌉ − 1 of these medians, there are two more elements in each of their groups that are smaller than the median of medians.

n′ = ⌈n/5⌉

Total number of elements smaller than the median of medians: at least

  3·(⌈⌈n/5⌉/2⌉ − 1) ≥ 3n/10 − 3.

The same analysis holds for counting the number of elements greater than the median of medians.
Worst-Case Selection: Analysis

FindPivot (excluding the recursive call to QuickSelect) and Partition take O(n) time.

FindPivot recurses on ⌈n/5⌉ < n/5 + 1 elements.

QuickSelect itself recurses on at most n − (3n/10 − 3) = 7n/10 + 3 elements.

  T(n) ≤ T(n/5 + 1) + T(7n/10 + 3) + O(n)
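The recurrence can be checked numerically before proving the bound: iterating T(n) ≤ T(n/5 + 1) + T(7n/10 + 3) + n (with an assumed base case T(n) = n below the cutoff n < 80) stays within a small constant factor of n:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # assumed base case: linear cost below the cutoff
    if n < 80:
        return n
    # T(n) <= T(n/5 + 1) + T(7n/10 + 3) + n, rounding subproblem sizes up
    return T(math.ceil(n / 5) + 1) + T(math.ceil(7 * n / 10) + 3) + n

# the solution stays within a constant factor of n (c = 20 is generous here)
for n in (100, 1_000, 10_000, 100_000):
    assert T(n) <= 20 * n
```

The key fact is that the two subproblem fractions sum to 1/5 + 7/10 = 9/10 < 1, so the per-level work shrinks geometrically.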
Worst-Case Selection: Analysis

Claim: T(n) ∈ O(n), that is, T(n) ≤ cn, for some c > 0 and all n ≥ 1.

Base case (n < 80):
We already observed that the running time is in O(n²) in the worst case. Since n ∈ O(1), n² ∈ O(1).
T(n) ≤ c₀ ≤ cn for c sufficiently large.

Inductive step (n ≥ 80): writing the O(n) term as an, for some constant a > 0,

  T(n) ≤ T(n/5 + 1) + T(7n/10 + 3) + an
       ≤ c·(n/5 + 1) + c·(7n/10 + 3) + an
       = 9cn/10 + 4c + an
       = cn − (cn/10 − 4c − an).

Since n ≥ 80, we have 4c ≤ cn/20, so cn/10 − 4c − an ≥ cn/20 − an ≥ 0 provided c ≥ 20a. Hence T(n) ≤ cn.