# CSci 335 Software Design and Analysis III, Chapter 7: Sorting (Prof. Stewart Weiss)


## 7.1 Introduction

Insertion sort is the sorting algorithm that splits an array into a sorted and an unsorted region, and repeatedly picks the lowest-index element of the unsorted region and inserts it into its proper position in the sorted region, as shown in Figure 7.1.

Figure 7.1: Insertion sort modeled as an array with a sorted and an unsorted region. Each iteration moves the lowest-index value in the unsorted region into its position in the sorted region, which is initially of size 1.

The process starts at the second position and stops when the rightmost element has been inserted, thereby forcing the size of the unsorted region to zero. Insertion sort belongs to a class of sorting algorithms that sort by comparing keys to adjacent keys and swapping items until they end up in the correct position. Another sort like this is bubble sort. Both of these sorts move items very slowly through the array, forcing them to move one position at a time until they reach their final destination. It stands to reason that the number of data moves is excessive for the work accomplished; there must be smarter ways to put elements into the correct position. The insertion sort algorithm is below.

```cpp
for ( int i = 1; i < a.size(); i++ ) {
    tmp = a[i];
    for ( j = i; j >= 1 && tmp < a[j-1]; j = j-1 )
        a[j] = a[j-1];
    a[j] = tmp;
}
```

You can verify that it does what I described. The step of assigning to tmp and then copying the value into its final position is a form of efficient swap: an ordinary swap takes three data moves, whereas this uses just one move per item compared, plus one move at the beginning and one at the end of the inner loop. The worst case number of comparisons and data moves is O(N^2) for an array of N elements. The best case is Θ(N) data moves and Θ(N) comparisons, and we can prove that the average number of comparisons and data moves is Ω(N^2).
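Filled out into a compilable routine for a vector of ints (the function name and the use of std::vector are ours, not from the text), the algorithm above reads:

```cpp
#include <cassert>
#include <vector>

// Insertion sort: the sorted region grows by one element per outer-loop pass.
// Larger elements are shifted right one move each, instead of a 3-move swap.
void insertionSort(std::vector<int>& a)
{
    for (std::size_t i = 1; i < a.size(); i++) {
        int tmp = a[i];               // next element of the unsorted region
        std::size_t j = i;
        for (; j >= 1 && tmp < a[j - 1]; j--)
            a[j] = a[j - 1];          // shift right to open the insertion slot
        a[j] = tmp;                   // drop the saved element into place
    }
}
```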

## 7.2 A Lower Bound for Simple Sorting Algorithms

An inversion in an array is an ordered pair (i, j) such that i < j but a[i] > a[j]. An exchange of adjacent elements removes exactly one inversion from an array. Insertion sort needs to remove all inversions by swapping, so if there are m inversions, m swaps will be necessary.

Theorem 1. The average number of inversions in an array of n distinct elements is n(n-1)/4.

Proof. Let L be any list and L_R be its reverse. Pick any pair of elements (i, j) in L with i < j. There are n(n-1)/2 ways to pick such pairs, and each pair is an inversion in exactly one of L or L_R. This implies that L and L_R combined have exactly n(n-1)/2 inversions. The set of all n! permutations of n distinct elements can be divided into disjoint pairs consisting of a list and its reverse; in other words, half of all of these lists are the reverses of the others. This implies that the average number of inversions per list over the entire set of permutations is n(n-1)/4.

Theorem 2. Any algorithm that sorts by exchanging adjacent elements requires Ω(n^2) time on average.

Proof. The average number of inversions is initially n(n-1)/4. Each exchange of adjacent elements removes at most one inversion, and an array is sorted if and only if it has 0 inversions, so n(n-1)/4 = Ω(n^2) exchanges are required on average.

## 7.3 Shell Sort

Shell sort was invented by Donald Shell. It is like a sequence of insertion sorts carried out over varying distances in the array, and it has the advantage that in the early passes it moves data items close to where they belong by swapping distant elements with each other. Consider the original insertion sort modified so that the gap between adjacent elements can be a number other than 1:

Listing 7.1: A Generalized Insertion Sort Algorithm

```cpp
int gap = 1;
for ( int i = gap; i < a.size(); i++ ) {
    tmp = a[i];
    for ( j = i; j >= gap && tmp < a[j-gap]; j = j - gap )
        a[j] = a[j-gap];
    a[j] = tmp;
}
```

Now suppose we let the variable gap start with a large value and get smaller with successive passes. Each pass is a modified form of insertion sort on each of a set of interleaved sequences in the array. When the gap is h, it is an insertion sort of the h sequences of indices

    0, h, 2h, 3h, ..., k_0·h
    1, 1+h, 1+2h, 1+3h, ..., 1+k_1·h
    2, 2+h, 2+2h, 2+3h, ..., 2+k_2·h
    ...
    h-1, (h-1)+h, (h-1)+2h, ..., (h-1)+k_{h-1}·h

where k_i is the largest number such that i + k_i·h < n. For example, if the array size is 15 and the gap is h = 5, then each of the following subsequences of indices into the array, which we will call slices, will be insertion-sorted independently:

    slice 0: 0, 5, 10
    slice 1: 1, 6, 11
    slice 2: 2, 7, 12
    slice 3: 3, 8, 13
    slice 4: 4, 9, 14

For each fixed value of the gap h, the sort starts at array element a[h] and inserts it into the lower sorted region of its slice; then it picks a[h+1] and inserts it into the sorted region of its slice; then a[h+2] is inserted, and so on, until a[n-1] is inserted into its slice's sorted region. For each element a[i], the sorted region of its slice is the set of array elements at indices i, i-h, i-2h, and so on. When the gap is h, we say that the array has been h-sorted. It is obviously not necessarily sorted, because the different slices can hold different values, but each slice is sorted. Of course, when the gap is h = 1, the array is fully sorted. The following table shows an array of 15 elements before and after a 5-sort:

    Original Sequence:
    After 5-sort:

The intuition is that the large initial gaps move items much closer to where they belong, and the successively smaller gaps then move them closer to their final positions. In Shell's original algorithm the sequence of gaps was n/2, n/4, n/8, ..., 1. This proved to be a poor choice of gaps, because the worst case running time is no better than that of ordinary insertion sort, as is proved below.

Listing 7.2: Shell's Original Algorithm

```cpp
for ( int gap = a.size()/2; gap > 0; gap /= 2 )
    for ( int i = gap; i < a.size(); i++ ) {
        tmp = a[i];
        for ( j = i; j >= gap && tmp < a[j-gap]; j = j - gap )
            a[j] = a[j-gap];
        a[j] = tmp;
    }
```

### Analysis of Shell Sort Using Shell's Original Increments

Lemma 3. The running time of Shell sort for the pass with increment h_k is O(n^2/h_k).

Proof. When the increment is h_k, the pass performs h_k insertion sorts, each of about n/h_k keys. An insertion sort of m elements requires O(m^2) steps in the worst case.
Therefore, when the increment is h_k, the total number of steps is

    h_k · O( (n/h_k)^2 ) = O( n^2/h_k )
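A single pass of this kind can be isolated for experiment by wrapping Listing 7.1 in a function that takes the gap as a parameter (a sketch; the name gapInsertionSort is ours). After one pass with gap h, every slice is internally sorted even though the whole array is not:

```cpp
#include <cassert>
#include <vector>

// One pass of the generalized insertion sort of Listing 7.1: insertion-sorts
// each of the `gap` interleaved slices of the array independently.
void gapInsertionSort(std::vector<int>& a, std::size_t gap)
{
    for (std::size_t i = gap; i < a.size(); i++) {
        int tmp = a[i];
        std::size_t j = i;
        for (; j >= gap && tmp < a[j - gap]; j -= gap)
            a[j] = a[j - gap];        // shifts stay within the slice
        a[j] = tmp;
    }
}
```

Calling gapInsertionSort(a, 5) on a 15-element array sorts the slices at indices {0, 5, 10}, {1, 6, 11}, and so on, which is exactly a 5-sort.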

Theorem 4. The worst case running time of Shell sort, using Shell's original increment sequence, is Θ(n^2).

Proof. The proof has two parts: that the worst case running time has a lower bound asymptotic to n^2, and that it has an upper bound of O(n^2).

Proof of the lower bound. Consider any array of size n = 2^m, where m may be any positive integer. There are infinitely many such sizes, so this suffices to establish asymptotic behavior. Number the positions from 1, let the n/2 smallest numbers be in the odd-numbered positions, and let the n/2 largest numbers be in the even-numbered positions. When n is a power of 2, halving the gap, which is initially n/2, results in an even number in every pass except the last. Therefore, in all passes before the very last one the gaps are even, which implies that the smallest numbers must all remain in odd-numbered positions and the largest numbers must all remain in even-numbered positions. The j-th smallest number is therefore in position 2j-1 at the start of the last pass and must be moved to position j when h = 1, a distance of (2j-1) - j = j-1 positions. Moving all n/2 smallest numbers to their correct positions when h = 1 requires

    Σ_{j=1}^{n/2} (j-1) = Σ_{j=0}^{n/2-1} j = (n/2 - 1)(n/2)/2 = Θ(n^2)

steps. This proves that the worst case running time is Ω(n^2).

Proof of the upper bound. By Lemma 3, the running time of the pass with increment h_k is O(n^2/h_k). Suppose there are t passes; with Shell's increments, h_k = 2^{k-1} for k = 1, 2, ..., t. Summing over all passes, the total running time is

    O( Σ_{k=1}^{t} n^2/h_k ) = O( n^2 · Σ_{k=1}^{t} 1/2^{k-1} ) = O(n^2)

because, as we proved in Chapter 1,

    Σ_{k=1}^{t} 1/2^{k-1} = (2^t - 1)/2^{t-1} < 2,

and this shows that n^2 is an asymptotic upper bound as well. It follows that the worst case is Θ(n^2).

### Other Increment Schemes

Better choices of gap sequence were found afterwards. Before looking at some of the good ones, consider an example using a simple sequence such as 5, 3, 1.
Example. This uses the three gaps 5, 3, 1 on an arbitrary array whose initial state is in the first row of the table below.

    Original Sequence:
    After 5-sort:
    After 3-sort:
    After 1-sort:

Hibbard proposed using the sequence h_1 = 1, h_2 = 3, h_3 = 7, h_4 = 15, ..., h_t = 2^t - 1, which is simply the numbers one less than the powers of 2. Such a sequence, 1, 3, 7, 15, 31, 63, 127, ..., has the property that no two adjacent terms have a common factor: if they had a common factor p, then their difference would also be divisible by p. But the difference between successive terms is

    (2^{k+1} - 1) - (2^k - 1) = 2^{k+1} - 2^k = 2^k

which is divisible only by powers of 2. Hence the only numbers that could divide two successive terms are powers of 2, and since all of the terms are odd, none is divisible by 2, so successive terms have no common factor. Successive terms are related by the recurrence

    h_{k+1} = 2·h_k + 1

The sequence defined recursively by

    h_1 = 1,   h_{k+1} = 3·h_k + 1

(usually attributed to Knuth) generates

    1, 4, 13, 40, 121, 364, ...

Sedgewick proposed several different sequences as well; one of them appears at the end of this section. Hibbard's increment sequence turns out to be an improvement over Shell's original increment sequence.

Theorem 5. The worst case running time of Shell sort using Hibbard's sequence h_1 = 1, h_2 = 3, h_3 = 7, h_4 = 15, ..., h_t = 2^t - 1 is Θ(n^{3/2}).

Proof. Only the upper bound is proved here. Lemma 3 establishes that the running time of the pass with increment h_k is O(n^2/h_k). We will use this fact for all increments h_k > n^{1/2}. It is not a tight enough bound for the smaller increments, and we need to work harder to get a tighter bound for them. The general idea is to show that by the time the increments are that small, most elements do not need to be moved very far.

So we turn to the case h_k <= n^{1/2}. By the time the array is h_k-sorted, it has already been h_{k+1}-sorted and h_{k+2}-sorted. Now consider the array elements at positions p and p - i, for all i <= p.
If i is a multiple of h_{k+1} or of h_{k+2}, then a[p-i] < a[p], because these two elements were part of the same slice, either for increment h_{k+1} or for increment h_{k+2}, and so they were sorted with respect to each other. More generally, suppose that i is a non-negative linear combination of h_{k+1} and h_{k+2}, i.e., i = c_1·h_{k+1} + c_2·h_{k+2} for some non-negative integers c_1 and c_2. Then

    p - i = p - c_1·h_{k+1} - c_2·h_{k+2}

and

    (p - i) + c_2·h_{k+2} = p - c_1·h_{k+1}

which implies that after the h_{k+2}-sort, a[p-i] < a[p - c_1·h_{k+1}], because the two are part of the same h_{k+2}-slice. Similarly, after the h_{k+1}-sort, a[p - c_1·h_{k+1}] < a[p], because the two are part of the same h_{k+1}-slice and p - c_1·h_{k+1} < p. Thus

    a[p-i] < a[p - c_1·h_{k+1}] < a[p]

Since h_{k+2} = 2·h_{k+1} + 1 by the definition of Hibbard's increment scheme, h_{k+2} and h_{k+1} are relatively prime: any common factor greater than 1 would also divide their difference, h_{k+2} - h_{k+1} = h_{k+1} + 1 = 2^{k+1}, but both terms are odd, so no such factor exists. Because h_{k+2} and h_{k+1} are relatively prime, their greatest common divisor (gcd) is 1. An established theorem of number theory states that if c = gcd(x, y), then there are integers a and b such that c = ax + by. When x and y are relatively prime, this implies that there exist a and b such that 1 = ax + by, which further implies that every integer can be expressed as an integer linear combination of x and y. A stronger result is that every integer at least as large as (x-1)(y-1) can be expressed as a non-negative linear combination of x and y. Let

    m = (h_{k+2} - 1)(h_{k+1} - 1) = (4·h_k + 2)(2·h_k) = 8·h_k^2 + 4·h_k

using h_{k+1} = 2·h_k + 1 and h_{k+2} = 4·h_k + 3. By the preceding statement, every i >= m can be expressed in the form c_1·h_{k+1} + c_2·h_{k+2} for non-negative integers c_1 and c_2, and from the preceding discussion we can conclude that a[p-i] < a[p] for every such i.

What does this mean? For any i >= 8·h_k^2 + 4·h_k, a[p-i] < a[p]. Therefore, during the h_k-sort, no element has to be moved more than 8·h_k + 4 times, since it moves h_k positions each time. There are at most n - h_k insertions in the pass, so the total number of comparisons in this pass is O(n·h_k).

How many increments h_k are smaller than n^{1/2}? Roughly half of them: if we approximate n by a power of 2, say n = 2^t, then n^{1/2} = 2^{t/2}, and the increments 1, 3, 7, ..., 2^{t/2} - 1 are half of the total number of increments.
For simplicity, assume t is an even number. Then the total running time of Shell sort is

    O( Σ_{k=1}^{t/2} n·h_k + Σ_{k=t/2+1}^{t} n^2/h_k ) = O( n·Σ_{k=1}^{t/2} h_k + n^2·Σ_{k=t/2+1}^{t} 1/h_k )

Since the first sum is a geometric series whose largest term is h_{t/2} = Θ(n^{1/2}), it is bounded by a constant times n^{1/2}, so the first part is O(n·n^{1/2}) = O(n^{3/2}). Since h_{t/2+1} = Θ(n^{1/2}), the second sum is a geometric series dominated by its largest term 1/h_{t/2+1} = Θ(n^{-1/2}), so the second part is O(n^2·n^{-1/2}) = O(n^{3/2}). This shows that the worst case running time using Hibbard's sequence is O(n^{3/2}). That there is an input that achieves this bound is left as an exercise.
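As a concrete, unofficial sketch, Shell sort driven by Hibbard's increments can be written by generating the largest increment 2^t - 1 below the array size and stepping down through the sequence (the function name and the loop structure are ours):

```cpp
#include <cassert>
#include <vector>

// Shell sort with Hibbard's increments 1, 3, 7, ..., 2^t - 1, largest first.
// Each pass is the generalized insertion sort of Listing 7.1 with the current gap.
void shellSortHibbard(std::vector<int>& a)
{
    if (a.size() < 2)
        return;
    // Largest Hibbard increment below the array size: h_{k+1} = 2*h_k + 1.
    std::size_t gap = 1;
    while (2 * gap + 1 < a.size())
        gap = 2 * gap + 1;
    for (;;) {
        for (std::size_t i = gap; i < a.size(); i++) {  // one gap-pass
            int tmp = a[i];
            std::size_t j = i;
            for (; j >= gap && tmp < a[j - gap]; j -= gap)
                a[j] = a[j - gap];
            a[j] = tmp;
        }
        if (gap == 1)
            break;
        gap = (gap - 1) / 2;        // predecessor of 2h+1 in the sequence is h
    }
}
```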

Many other increment sequences have been studied. Studies have shown that the sequence whose increments are the numbers of the form 9·4^i - 9·2^i + 1 (for i >= 0) interleaved with those of the form 4^i - 3·2^i + 1 (for i >= 2), namely

    1, 5, 19, 41, 109, ...

has O(n^{4/3}) worst case running time (Sedgewick, 1986).

Example. This shows how Shell sort works on an array using the gap sequence 13, 4, 1:

    Original:      A S O R T I N G E X A M P L E
    After 13-sort: A E O R T I N G E X A M P L S
    After 4-sort:  A E A G E I N M P L O R T X S
    After 1-sort:  A A E E G I L M N O P R S T X

## 7.4 Heap Sort

The idea underlying heapsort is to create a maximum heap in O(n) steps. A maximum heap is a heap in which the heap-order property is reversed: each node is larger than its children. Then repeatedly swap the maximum element with the one at the end of the heap, shortening the heap by one element after each swap, so that the next swap involves the preceding element.

### The Algorithm

There are less efficient ways to do this. This algorithm builds the max-heap and then pretends to delete the maximum element by swapping it with the element in the last position and decrementing the variable that marks where the heap ends in the array, effectively treating the largest element as no longer part of the heap.

```cpp
// First build the heap using the same algorithm described in Chapter 6.
// Assume the array is named A and A.size() is its length, with the root at A[0].
int N = A.size();
for ( int i = N/2 - 1; i >= 0; i-- )
    // percolate down in a max heap within A[0..N-1]
    PercolateDown( A, i, N );
// A is now a max heap.
// Repeatedly swap the max element with the last element in the heap.
for ( int j = N - 1; j > 0; j-- ) {
    swap( A[0], A[j] );
    PercolateDown( A, 0, j );
}
```
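Assembled into one self-contained sketch for int arrays (with a max-heap percolate-down, where the root is A[0] and the children of position i are at 2i+1 and 2i+2; the helper name and details are ours), the whole algorithm might read:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Restore the max-heap property for the heap occupying A[0..last-1],
// starting from position hole.
static void percolateDown(std::vector<int>& A, int hole, int last)
{
    int tmp = A[hole];
    while (2 * hole + 1 < last) {
        int child = 2 * hole + 1;
        if (child + 1 < last && A[child] < A[child + 1])
            child++;                  // choose the larger child (max-heap)
        if (tmp < A[child])
            A[hole] = A[child];       // pull the larger child up
        else
            break;
        hole = child;
    }
    A[hole] = tmp;
}

// Heapsort: build a max-heap in O(N), then repeatedly swap the maximum
// into the last heap position and shrink the heap by one.
void heapSort(std::vector<int>& A)
{
    int N = static_cast<int>(A.size());
    for (int i = N / 2 - 1; i >= 0; i--)
        percolateDown(A, i, N);
    for (int j = N - 1; j > 0; j--) {
        std::swap(A[0], A[j]);        // max element leaves the heap
        percolateDown(A, 0, j);
    }
}
```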

The PercolateDown function must be changed slightly from the one described in Chapter 6, Priority Queues: the comparisons are reversed for a max heap, and the function needs a parameter that acts as an upper bound on the heap, so that it does not drop an element below the current end of the heap.

```cpp
void PercolateDown( vector<Comparable> & array, int hole, int last )
{
    // The heap occupies array[0..last-1]; the children of position hole
    // are at positions 2*hole+1 and 2*hole+2.
    int child;
    Comparable tmp = array[hole];
    while ( 2*hole + 1 <= last - 1 ) {
        child = 2*hole + 1;
        if ( child != last - 1 && array[child] < array[child+1] )
            child++;                    // move to the larger child
        if ( tmp < array[child] )
            array[hole] = array[child];
        else
            break;
        hole = child;
    }
    array[hole] = tmp;
}
```

### Analysis

In Chapter 6 we proved that building the heap takes O(n) steps (to be precise, at most about 2n comparisons). In the second stage of the algorithm, each swap takes constant time, but the PercolateDown step in the worst case requires a number of comparisons and moves proportional to the current height of the heap. The j-th iteration uses at most 2·log(N-j+1) comparisons, and

    Σ_{j=1}^{N-1} log(N-j+1) = Σ_{j=2}^{N} log j = log(N!)

It can be proved that log(N!) = Θ(N log N), so heapsort in the worst case has a running time that is Θ(N log N). A more careful analysis shows that the worst case number of comparisons is 2N·log N - O(N). Experiments have shown that heapsort is consistent, averaging only slightly fewer comparisons than its worst case. The following theorem has been proved, but the proof is omitted here.

Theorem 6. The average number of comparisons performed to heapsort a random permutation of n distinct items is 2n·log n - O(n·log log n).

## 7.5 Quicksort

Quicksort is the fastest known sorting algorithm when implemented correctly, meaning that the non-recursive version should be used and that the longer partition should be stacked rather than the shorter one. Quicksort has an average running time of Θ(N log N) but a worst case running time of O(N^2).
The reason that quicksort is considered the fastest algorithm for sorting, when used carefully, is that a good design can make the probability of hitting the worst case negligible. Here we review the algorithm and analyze its performance.

### 7.5.1 The Algorithm

The basic idea: let S represent the set of elements to be sorted, i.e., S is an array. Define quicksort(S) as the following sequence of steps:

1. If the number of elements in S is 0 or 1, then return, because S is already sorted.
2. Pick any element v in S. Call this element v the pivot.
3. Partition S - {v} into two disjoint sets S1 = { x in S - {v} : x <= v } and S2 = { x in S - {v} : x >= v }.
4. Return quicksort(S1) followed by v followed by quicksort(S2).

#### Picking a Pivot

A poor choice of pivot will cause one set to be very small and the other to be very large, with the result that the algorithm exhibits its O(n^2) worst case behavior. This happens whenever the pivot is smaller than all of the other keys or larger than all of the other keys. We want a pivot that is close to the median without the trouble of actually finding the median. One could pick the pivot randomly, but then there is no guarantee about what it will be, and the cost of the random number generation buys little in the end. A good compromise between safety and performance is to pick three elements and take their median; doing this almost surely eliminates the possibility of quadratic behavior. A good choice is the median of the first, last, and middle elements of the array.

#### Partitioning

The best way to partition is to move the pivot element out of the way by placing it at the beginning or end of the array. Having done that, all of the implementations do basically the same thing, except for how they handle elements equal to the pivot. The general idea is to advance an i pointer from the low end and a j pointer from the high end toward the middle, swapping elements that are on the wrong side of the pivot, until i and j cross. The following example shows an array with the initial placements of the i and j pointers and the pivot, after the pivot has been moved to the last position in the array. In each step, j first travels down the array looking for an element that does not belong in the upper part, and then i travels up the array doing the same for the lower part. When they each stop, the two elements are swapped and the pointers each advance one position.

The pointers continue in this fashion until they cross; at that point the scan stops and the pivot is swapped with the element at position i, which puts the pivot into its final position.

The real issue is handling elements that are equal to the pivot. The safest strategy is to stop both pointers on elements equal to the pivot and swap them, as if they were smaller or larger; otherwise the algorithm can degrade to the worst case. A recursive version of quicksort that uses this strategy is in the listing below.

```cpp
void quicksort( vector<C> & a, int left, int right )
{
    if ( left + 10 <= right ) {
        // pick the pivot and swap it into a[right-1]
        C pivot = median3( a, left, right );

        // partition step
        int i = left;
        int j = right - 1;
        while ( true ) {
            while ( a[++i] < pivot ) { }
            while ( pivot < a[--j] ) { }
            if ( i < j )
                swap( a[i], a[j] );
            else
                break;
        }

        // move pivot into position
        swap( a[i], a[right - 1] );

        // recursively sort each set
        quicksort( a, left, i - 1 );
        quicksort( a, i + 1, right );
    }
    else
        // the array is too small to quicksort - use insertionsort instead
        insertionsort( a, left, right );
}
```
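The listing calls a median3 helper that is not shown in these notes. A plausible sketch of it for int vectors (our reconstruction; the exact code may differ) orders the first, middle, and last elements in place and then hides the pivot next to the end:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Median-of-three pivot selection: order a[left], a[center], a[right],
// then stash the median (the pivot) at a[right-1] and return it.
int median3(std::vector<int>& a, int left, int right)
{
    int center = (left + right) / 2;
    if (a[center] < a[left])
        std::swap(a[center], a[left]);
    if (a[right] < a[left])
        std::swap(a[right], a[left]);
    if (a[right] < a[center])
        std::swap(a[right], a[center]);
    // now a[left] <= a[center] <= a[right]
    std::swap(a[center], a[right - 1]);   // hide the pivot at a[right-1]
    return a[right - 1];
}
```

Because a[left] <= pivot <= a[right] afterwards, the two ends can act as the sentinels described in the notes that follow.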

Note that insertionsort must have two extra parameters to receive the left and right bounds of the region it is to sort.

Notes.

1. It is intentional that i starts at left + 1 (via the pre-increment): the median3 function rearranges the array so that a[left] <= pivot, so a[left] need not be examined.
2. A symmetric statement is true of j: median3 places the pivot in a[right-1], so j can start by comparing the pivot to a[right-2].
3. Neither i nor j can run off the ends of the array, for the same reasons: a[left] and the pivot at a[right-1] act as sentinels for j and i, respectively.

### Analysis of Quicksort

A recurrence relation describes the total number of comparisons. Let T(n) denote the number of comparisons for an array of size n, and let i denote the number of elements in the lower part of the partition, S1. T(1) is some constant, say a. Then

    T(n) = T(i) + T(n-i-1) + c·n

#### Worst Case

The worst case occurs when the partition always creates one set of size 0 and another of size n-1. The recurrence in this case is

    T(n) = T(n-1) + c·n,   for n > 1,

which implies that

    T(n) = T(n-2) + c·n + c·(n-1)
         = T(n-3) + c·(n + (n-1) + (n-2))
         = ...
         = T(1) + c·(n + (n-1) + ... + 2)
         = a + c·(n(n+1)/2 - 1)
         = Θ(n^2)

#### Average Case

The average case analysis assumes that all sizes for S1 are equally likely. We now let T(n) represent the average running time for an array of size n. It follows that T(n) is the average of the running

times for all possible sizes of S1, each of which occurs with probability 1/n:

    T(n) = (1/n)·Σ_{j=0}^{n-1} [ T(j) + T(n-j-1) ] + c·n
         = (2/n)·Σ_{j=0}^{n-1} T(j) + c·n

Multiplying by n,

    n·T(n) = 2·Σ_{j=0}^{n-1} T(j) + c·n^2

and replacing n by n-1,

    (n-1)·T(n-1) = 2·Σ_{j=0}^{n-2} T(j) + c·(n-1)^2

Now subtract the second equation from the first:

    n·T(n) - (n-1)·T(n-1) = 2·T(n-1) + c·(n^2 - (n-1)^2)
                          = 2·T(n-1) + 2·c·n - c

Rearranging, and dropping the insignificant term -c on the right,

    n·T(n) = (n+1)·T(n-1) + 2·c·n

Dividing by n(n+1) gives a formula that can be telescoped by successive addition:

    T(n)/(n+1) = T(n-1)/n + 2c/(n+1)
    T(n-1)/n   = T(n-2)/(n-1) + 2c/n
    ...
    T(2)/3     = T(1)/2 + 2c/3

Adding these equations gives

    T(n)/(n+1) = T(1)/2 + 2c·Σ_{i=3}^{n+1} 1/i = T(1)/2 + O(log n) = O(log n)

so that

    T(n) = O(n·log n)

## 7.6 A Lower Bound for Sorting

The question is: can we ever find a sorting algorithm better than the O(n log n) algorithms we have seen so far, or is that theoretically impossible? The answer is that if the sorting algorithm sorts by making binary comparisons, then the number of comparisons is Θ(n log n) in the worst case. The proof is based on the following argument: any sorting algorithm that uses only comparisons requires log(n!) comparisons in the worst case and log(n!) comparisons on average. To prove this we use a decision tree.

Figure 7.2: A decision tree that represents the decisions made to sort three numbers.

### Decision Trees

A decision tree is a tree representation of an algorithm that solves a problem by a sequence of successive decisions. In a decision tree, the nodes represent logical assertions, and an edge from a node to a child node is labeled by a proposition. A binary decision tree is based on binary decisions. Because a binary decision tree is a binary tree, we can use reasoning about binary trees to arrive at a theorem about algorithms that sort using binary comparisons. Figure 7.2 illustrates a decision tree that represents the comparisons needed to sort three distinct numbers a, b, and c: each internal node corresponds to one comparison, and each of the 3! = 6 leaves corresponds to one possible ordering. We need some preliminary lemmas.

Lemma 7. A binary tree of height d has at most 2^d leaves.

Proof. By induction on the height of the tree. If d = 0, the tree consists of a root alone, which is a leaf, so it has 1 = 2^0 leaves. If d > 0, assume the claim is true for trees of height d-1. A tree of height d consists of a root and at most two subtrees, each of height at most d-1, which by the inductive hypothesis have at most 2^{d-1} leaves each. Since the root is not a leaf, the tree has at most 2·2^{d-1} = 2^d leaves.

Lemma 8. If a binary tree has n leaves, then its height is at least log n.

Proof. Suppose a binary tree with n leaves had height d < log n. By Lemma 7, it would have at most 2^d < 2^{log n} = n leaves, which contradicts the fact that it has n leaves. Hence d >= log n.

Lemma 9. Any sorting algorithm that uses only binary comparisons between elements requires at least log(n!) comparisons in the worst case for an array of size n.

Proof. A decision tree for sorting n elements has n! leaves, because there are n! different possible outcomes. By Lemma 8, the height of this tree is at least log(n!), and the height is the number of comparisons made along the longest path through the tree.

Theorem 10. Any sorting algorithm that uses only comparisons between elements requires Ω(n log n) comparisons.

Proof. We show that log(n!) = Ω(n log n):

    log(n!) = log n + log(n-1) + log(n-2) + ... + log 2 + log 1
            >= log n + log(n-1) + ... + log(n/2)
            >= (n/2)·log(n/2)
            = (n/2)·(log n - 1)
            >= (n/2)·log n - n/2
            = Ω(n·log n)
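The bound can be checked numerically; the sketch below computes log2(n!) directly as a sum of logarithms (the function name is ours), so it can be compared with the (n/2)·log(n/2) estimate used in the proof.

```cpp
#include <cassert>
#include <cmath>

// log2(n!) computed as a sum of logarithms, avoiding overflow of n!.
double log2Factorial(int n)
{
    double sum = 0.0;
    for (int k = 2; k <= n; k++)
        sum += std::log2(static_cast<double>(k));
    return sum;
}
```

For n = 100, for instance, log2(100!) is roughly 525, comfortably between (n/2)·log2(n/2) (about 282) and n·log2(n) (about 664).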


Heaps CSE 0 Winter 00 March 00 1 Array Implementation of Binary Trees Each node v is stored at index i defined as follows: If v is the root, i = 1 The left child of v is in position i The right child of

### Cpt S 223. School of EECS, WSU

Priority Queues (Heaps) 1 Motivation Queues are a standard mechanism for ordering tasks on a first-come, first-served basis However, some tasks may be more important or timely than others (higher priority)

### Mathematical Induction. Lecture 10-11

Mathematical Induction Lecture 10-11 Menu Mathematical Induction Strong Induction Recursive Definitions Structural Induction Climbing an Infinite Ladder Suppose we have an infinite ladder: 1. We can reach

### Binary Heaps. CSE 373 Data Structures

Binary Heaps CSE Data Structures Readings Chapter Section. Binary Heaps BST implementation of a Priority Queue Worst case (degenerate tree) FindMin, DeleteMin and Insert (k) are all O(n) Best case (completely

### Binary Search. Search for x in a sorted array A.

Divide and Conquer A general paradigm for algorithm design; inspired by emperors and colonizers. Three-step process: 1. Divide the problem into smaller problems. 2. Conquer by solving these problems. 3.

### DETERMINANTS. b 2. x 2

DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in

### Approximation Algorithms

Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NP-Completeness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms

### Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

### Mathematical Induction

Mathematical Induction (Handout March 8, 01) The Principle of Mathematical Induction provides a means to prove infinitely many statements all at once The principle is logical rather than strictly mathematical,

### CSC148 Lecture 8. Algorithm Analysis Binary Search Sorting

CSC148 Lecture 8 Algorithm Analysis Binary Search Sorting Algorithm Analysis Recall definition of Big Oh: We say a function f(n) is O(g(n)) if there exists positive constants c,b such that f(n)

### Algorithms and Data Structures

Algorithms and Data Structures Part 2: Data Structures PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering (CiE) Summer Term 2016 Overview general linked lists stacks queues trees 2 2

Lecture 10 Union-Find The union-nd data structure is motivated by Kruskal's minimum spanning tree algorithm (Algorithm 2.6), in which we needed two operations on disjoint sets of vertices: determine whether

### Theory of Computation Prof. Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology, Madras

Theory of Computation Prof. Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology, Madras Lecture No. # 31 Recursive Sets, Recursively Innumerable Sets, Encoding

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

### Homework until Test #2

MATH31: Number Theory Homework until Test # Philipp BRAUN Section 3.1 page 43, 1. It has been conjectured that there are infinitely many primes of the form n. Exhibit five such primes. Solution. Five such

### Analysis of Binary Search algorithm and Selection Sort algorithm

Analysis of Binary Search algorithm and Selection Sort algorithm In this section we shall take up two representative problems in computer science, work out the algorithms based on the best strategy to

### GRAPH THEORY LECTURE 4: TREES

GRAPH THEORY LECTURE 4: TREES Abstract. 3.1 presents some standard characterizations and properties of trees. 3.2 presents several different types of trees. 3.7 develops a counting method based on a bijection

### Lecture 13 - Basic Number Theory.

Lecture 13 - Basic Number Theory. Boaz Barak March 22, 2010 Divisibility and primes Unless mentioned otherwise throughout this lecture all numbers are non-negative integers. We say that A divides B, denoted

### Many algorithms, particularly divide and conquer algorithms, have time complexities which are naturally

Recurrence Relations Many algorithms, particularly divide and conquer algorithms, have time complexities which are naturally modeled by recurrence relations. A recurrence relation is an equation which

### Algorithms. Margaret M. Fleck. 18 October 2010

Algorithms Margaret M. Fleck 18 October 2010 These notes cover how to analyze the running time of algorithms (sections 3.1, 3.3, 4.4, and 7.1 of Rosen). 1 Introduction The main reason for studying big-o

### Analysis of Algorithms, I

Analysis of Algorithms, I CSOR W4231.002 Eleni Drinea Computer Science Department Columbia University Thursday, February 26, 2015 Outline 1 Recap 2 Representing graphs 3 Breadth-first search (BFS) 4 Applications

### CS473 - Algorithms I

CS473 - Algorithms I Lecture 9 Sorting in Linear Time View in slide-show mode 1 How Fast Can We Sort? The algorithms we have seen so far: Based on comparison of elements We only care about the relative

### Iteration, Induction, and Recursion

CHAPTER 2 Iteration, Induction, and Recursion The power of computers comes from their ability to execute the same task, or different versions of the same task, repeatedly. In computing, the theme of iteration

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

### Lecture 1: Course overview, circuits, and formulas

Lecture 1: Course overview, circuits, and formulas Topics in Complexity Theory and Pseudorandomness (Spring 2013) Rutgers University Swastik Kopparty Scribes: John Kim, Ben Lund 1 Course Information Swastik

### The Prime Numbers. Definition. A prime number is a positive integer with exactly two positive divisors.

The Prime Numbers Before starting our study of primes, we record the following important lemma. Recall that integers a, b are said to be relatively prime if gcd(a, b) = 1. Lemma (Euclid s Lemma). If gcd(a,

### Sequential Data Structures

Sequential Data Structures In this lecture we introduce the basic data structures for storing sequences of objects. These data structures are based on arrays and linked lists, which you met in first year

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### a = bq + r where 0 r < b.

Lecture 5: Euclid s algorithm Introduction The fundamental arithmetic operations are addition, subtraction, multiplication and division. But there is a fifth operation which I would argue is just as fundamental

### Some Polynomial Theorems. John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.

Some Polynomial Theorems by John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 rkennedy@ix.netcom.com This paper contains a collection of 31 theorems, lemmas,

### , each of which contains a unique key value, say k i , R 2. such that k i equals K (or to determine that no such record exists in the collection).

The Search Problem 1 Suppose we have a collection of records, say R 1, R 2,, R N, each of which contains a unique key value, say k i. Given a particular key value, K, the search problem is to locate the

### CS711008Z Algorithm Design and Analysis

CS711008Z Algorithm Design and Analysis Lecture 7 Binary heap, binomial heap, and Fibonacci heap 1 Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 The slides were

### Divide and Conquer. Textbook Reading Chapters 4, 7 & 33.4

Divide d Conquer Textook Reading Chapters 4, 7 & 33.4 Overview Design principle Divide d conquer Proof technique Induction, induction, induction Analysis technique Recurrence relations Prolems Sorting

### CSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis. Linda Shapiro Winter 2015

CSE373: Data Structures and Algorithms Lecture 3: Math Review; Algorithm Analysis Linda Shapiro Today Registration should be done. Homework 1 due 11:59 pm next Wednesday, January 14 Review math essential

### Sample Questions Csci 1112 A. Bellaachia

Sample Questions Csci 1112 A. Bellaachia Important Series : o S( N) 1 2 N N i N(1 N) / 2 i 1 o Sum of squares: N 2 N( N 1)(2N 1) N i for large N i 1 6 o Sum of exponents: N k 1 k N i for large N and k

### Induction. Margaret M. Fleck. 10 October These notes cover mathematical induction and recursive definition

Induction Margaret M. Fleck 10 October 011 These notes cover mathematical induction and recursive definition 1 Introduction to induction At the start of the term, we saw the following formula for computing

### Binary Search Trees. Data in each node. Larger than the data in its left child Smaller than the data in its right child

Binary Search Trees Data in each node Larger than the data in its left child Smaller than the data in its right child FIGURE 11-6 Arbitrary binary tree FIGURE 11-7 Binary search tree Data Structures Using

### 1. LINEAR EQUATIONS. A linear equation in n unknowns x 1, x 2,, x n is an equation of the form

1. LINEAR EQUATIONS A linear equation in n unknowns x 1, x 2,, x n is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b, where a 1, a 2,..., a n, b are given real numbers. For example, with x and

### U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra

U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009 Notes on Algebra These notes contain as little theory as possible, and most results are stated without proof. Any introductory

### Algorithms and Data Structures

Algorithms and Data Structures CMPSC 465 LECTURES 20-21 Priority Queues and Binary Heaps Adam Smith S. Raskhodnikova and A. Smith. Based on slides by C. Leiserson and E. Demaine. 1 Trees Rooted Tree: collection

### Section IV.1: Recursive Algorithms and Recursion Trees

Section IV.1: Recursive Algorithms and Recursion Trees Definition IV.1.1: A recursive algorithm is an algorithm that solves a problem by (1) reducing it to an instance of the same problem with smaller

### Chapter 3. Cartesian Products and Relations. 3.1 Cartesian Products

Chapter 3 Cartesian Products and Relations The material in this chapter is the first real encounter with abstraction. Relations are very general thing they are a special type of subset. After introducing

### Heaps & Priority Queues in the C++ STL 2-3 Trees

Heaps & Priority Queues in the C++ STL 2-3 Trees CS 3 Data Structures and Algorithms Lecture Slides Friday, April 7, 2009 Glenn G. Chappell Department of Computer Science University of Alaska Fairbanks

### Binary Search Trees. A Generic Tree. Binary Trees. Nodes in a binary search tree ( B-S-T) are of the form. P parent. Key. Satellite data L R

Binary Search Trees A Generic Tree Nodes in a binary search tree ( B-S-T) are of the form P parent Key A Satellite data L R B C D E F G H I J The B-S-T has a root node which is the only node whose parent

### MLR Institute of Technology

MLR Institute of Technology DUNDIGAL 500 043, HYDERABAD COMPUTER SCIENCE AND ENGINEERING Computer Programming Lab List of Experiments S.No. Program Category List of Programs 1 Operators a) Write a C program

### Data Structures Fibonacci Heaps, Amortized Analysis

Chapter 4 Data Structures Fibonacci Heaps, Amortized Analysis Algorithm Theory WS 2012/13 Fabian Kuhn Fibonacci Heaps Lacy merge variant of binomial heaps: Do not merge trees as long as possible Structure:

### Introduction to Diophantine Equations

Introduction to Diophantine Equations Tom Davis tomrdavis@earthlink.net http://www.geometer.org/mathcircles September, 2006 Abstract In this article we will only touch on a few tiny parts of the field

### Quotient Rings and Field Extensions

Chapter 5 Quotient Rings and Field Extensions In this chapter we describe a method for producing field extension of a given field. If F is a field, then a field extension is a field K that contains F.

### Data Structure [Question Bank]

Unit I (Analysis of Algorithms) 1. What are algorithms and how they are useful? 2. Describe the factor on best algorithms depends on? 3. Differentiate: Correct & Incorrect Algorithms? 4. Write short note:

### Why? A central concept in Computer Science. Algorithms are ubiquitous.

Analysis of Algorithms: A Brief Introduction Why? A central concept in Computer Science. Algorithms are ubiquitous. Using the Internet (sending email, transferring files, use of search engines, online

### Class Overview. CSE 326: Data Structures. Goals. Goals. Data Structures. Goals. Introduction

Class Overview CSE 326: Data Structures Introduction Introduction to many of the basic data structures used in computer software Understand the data structures Analyze the algorithms that use them Know

### THIS CHAPTER studies several important methods for sorting lists, both contiguous

Sorting 8 THIS CHAPTER studies several important methods for sorting lists, both contiguous lists and linked lists. At the same time, we shall develop further tools that help with the analysis of algorithms

### Midterm Practice Problems

6.042/8.062J Mathematics for Computer Science October 2, 200 Tom Leighton, Marten van Dijk, and Brooke Cowan Midterm Practice Problems Problem. [0 points] In problem set you showed that the nand operator

### CS 102: SOLUTIONS TO DIVIDE AND CONQUER ALGORITHMS (ASSGN 4)

CS 10: SOLUTIONS TO DIVIDE AND CONQUER ALGORITHMS (ASSGN 4) Problem 1. a. Consider the modified binary search algorithm so that it splits the input not into two sets of almost-equal sizes, but into three

### Exam study sheet for CS2711. List of topics

Exam study sheet for CS2711 Here is the list of topics you need to know for the final exam. For each data structure listed below, make sure you can do the following: 1. Give an example of this data structure

### Offline sorting buffers on Line

Offline sorting buffers on Line Rohit Khandekar 1 and Vinayaka Pandit 2 1 University of Waterloo, ON, Canada. email: rkhandekar@gmail.com 2 IBM India Research Lab, New Delhi. email: pvinayak@in.ibm.com

### Balanced Binary Search Tree

AVL Trees / Slide 1 Balanced Binary Search Tree Worst case height of binary search tree: N-1 Insertion, deletion can be O(N) in the worst case We want a tree with small height Height of a binary tree with

### 8 Divisibility and prime numbers

8 Divisibility and prime numbers 8.1 Divisibility In this short section we extend the concept of a multiple from the natural numbers to the integers. We also summarize several other terms that express

### Mathematics of Cryptography Modular Arithmetic, Congruence, and Matrices. A Biswas, IT, BESU SHIBPUR

Mathematics of Cryptography Modular Arithmetic, Congruence, and Matrices A Biswas, IT, BESU SHIBPUR McGraw-Hill The McGraw-Hill Companies, Inc., 2000 Set of Integers The set of integers, denoted by Z,

### = 2 + 1 2 2 = 3 4, Now assume that P (k) is true for some fixed k 2. This means that

Instructions. Answer each of the questions on your own paper, and be sure to show your work so that partial credit can be adequately assessed. Credit will not be given for answers (even correct ones) without

### From Last Time: Remove (Delete) Operation

CSE 32 Lecture : More on Search Trees Today s Topics: Lazy Operations Run Time Analysis of Binary Search Tree Operations Balanced Search Trees AVL Trees and Rotations Covered in Chapter of the text From

### Symbol Tables. Introduction

Symbol Tables Introduction A compiler needs to collect and use information about the names appearing in the source program. This information is entered into a data structure called a symbol table. The

### MATH 10034 Fundamental Mathematics IV

MATH 0034 Fundamental Mathematics IV http://www.math.kent.edu/ebooks/0034/funmath4.pdf Department of Mathematical Sciences Kent State University January 2, 2009 ii Contents To the Instructor v Polynomials.

### 6.3 Conditional Probability and Independence

222 CHAPTER 6. PROBABILITY 6.3 Conditional Probability and Independence Conditional Probability Two cubical dice each have a triangle painted on one side, a circle painted on two sides and a square painted

### ER E P M A S S I CONSTRUCTING A BINARY TREE EFFICIENTLYFROM ITS TRAVERSALS DEPARTMENT OF COMPUTER SCIENCE UNIVERSITY OF TAMPERE REPORT A-1998-5

S I N S UN I ER E P S I T VER M A TA S CONSTRUCTING A BINARY TREE EFFICIENTLYFROM ITS TRAVERSALS DEPARTMENT OF COMPUTER SCIENCE UNIVERSITY OF TAMPERE REPORT A-1998-5 UNIVERSITY OF TAMPERE DEPARTMENT OF

### Analysis of Algorithms I: Binary Search Trees

Analysis of Algorithms I: Binary Search Trees Xi Chen Columbia University Hash table: A data structure that maintains a subset of keys from a universe set U = {0, 1,..., p 1} and supports all three dictionary

### A Note on Maximum Independent Sets in Rectangle Intersection Graphs

A Note on Maximum Independent Sets in Rectangle Intersection Graphs Timothy M. Chan School of Computer Science University of Waterloo Waterloo, Ontario N2L 3G1, Canada tmchan@uwaterloo.ca September 12,