# If we want to measure the amount of storage that an algorithm uses as a function of the size of the instances, there is a natural unit available: the bit.


## (1) Explain why analysis of algorithms is important.

When we have a problem to solve, there may be several suitable algorithms available, and we would obviously like to choose the best. Analyzing an algorithm has come to mean predicting the resources that the algorithm requires. By analyzing several candidate algorithms for a problem, the most efficient one can be identified; analysis of algorithms is therefore required to decide which of several algorithms is preferable.

There are two different approaches to analyzing an algorithm:

1. **Empirical (a posteriori) approach:** program the different competing techniques and try them on various instances with the help of a computer.
2. **Theoretical (a priori) approach:** determine mathematically the quantity of resources needed by each algorithm as a function of the size of the instances considered. The resources of most interest are computing time (time complexity) and storage space (space complexity). The advantage of this approach is that it does not depend on the programmer, the programming language, or the computer being used.

Analysis of algorithms is required to measure efficiency: only after determining the efficiency of the various algorithms can you make a well-informed decision about which algorithm is best for a particular problem. We will compare algorithms based on their execution time; the efficiency of an algorithm means how fast it runs.

If we want to measure the amount of storage that an algorithm uses as a function of the size of the instances, there is a natural unit available: the bit. On the other hand, if we want to measure the efficiency of an algorithm in terms of the time it takes to arrive at a result, there is no obvious choice. This problem is solved by the principle of invariance, which states that two different implementations of the same algorithm will not differ in efficiency by more than some multiplicative constant.

[Gopi Sanghani]
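The empirical (a posteriori) approach described above can be illustrated with a short sketch: time two candidate algorithms for the same problem and compare. The two summation routines below are illustrative stand-ins, not algorithms from these notes.

```python
# A minimal sketch of the empirical approach: run two candidate
# algorithms on the same instances and measure their running times.
import timeit

def sum_loop(n):
    # Linear number of additions.
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_formula(n):
    # Closed form: a constant number of operations.
    return n * (n + 1) // 2

for n in (1000, 10000):
    t_loop = timeit.timeit(lambda: sum_loop(n), number=100)
    t_formula = timeit.timeit(lambda: sum_formula(n), number=100)
    # t_loop grows with n, while t_formula stays essentially flat.
    print(n, t_loop, t_formula)
```

Both routines compute the same value, so any timing difference reflects the algorithms, not the answers.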

Suppose that the time taken by an algorithm to solve an instance of size n is never more than cn seconds, where c is some suitable constant. Practically, the size of an instance means any integer that in some way measures the number of components in the instance: for the sorting problem, the size is the number of items to be sorted; for a graph, the size is the number of nodes or edges (or both) involved. We say that such an algorithm takes a time in the order of n, i.e. it is a linear-time algorithm. If an algorithm never takes more than cn² seconds to solve an instance of size n, we say it takes time in the order of n², i.e. it is a quadratic-time algorithm. Similarly, polynomial time: nᵏ; exponential time: cⁿ or n!.

## (2) Explain: Worst Case, Best Case & Average Case Complexity.

**Best Case Complexity**

The best case of a given algorithm is the situation in which its resource usage is least; usually the resource considered is running time. The term best-case performance describes an algorithm's behavior under optimal conditions. The best-case complexity of the algorithm is the function defined by the minimum number of steps taken on any instance of size n. In best case analysis, we calculate a lower bound on the running time of an algorithm. The best-case behavior of an algorithm is not very useful. For example, the best case for a simple linear search on a list occurs when the desired element is the first element of the list. In the sorting problem, the best case occurs when the input elements are already sorted in the required order.

**Average Case Complexity**

The average case of a given algorithm describes its resource usage on average. Average-case and worst-case performance are the measures most used in algorithm analysis.

The average-case complexity of the algorithm is the function defined by the average number of steps taken on any instance of size n. In average case analysis, we take all possible inputs, calculate the computing time for each of them, sum all the calculated values, and divide the sum by the total number of inputs. For example, the average case for a simple linear search on a list occurs when the desired element may be any element of the list. In the sorting problem, the average case occurs when the input elements are randomly arranged.

**Worst Case Complexity**

The worst case of a given algorithm is the situation in which its resource usage is greatest. The worst-case complexity of the algorithm is the function defined by the maximum number of steps taken on any instance of size n. In worst case analysis, we calculate an upper bound on the running time of an algorithm; we must know the case that causes the maximum number of operations to be executed. For linear search, the worst case happens when the element to be searched for is not present in the array. In the sorting problem, the worst case occurs when the input elements are sorted in reverse order.

## (3) Elementary Operations

An elementary operation is one whose execution time can be bounded above by a constant that depends only on the particular implementation used (the machine and the programming language). Thus the constant depends on neither the size nor the parameters of the instance being considered. Because we consider execution times only up to a multiplicative constant, it is the number of elementary operations executed that matters in the analysis, not the exact time required by each of them. For example, suppose some instance of an algorithm needs to carry out a additions, m multiplications and s assignment instructions.

Suppose we also know that an addition takes no more than tₐ microseconds, a multiplication no more than tₘ microseconds, and an assignment no more than tₛ microseconds, where tₐ, tₘ, tₛ are constants depending on the machine used. Addition, multiplication and assignment can then all be considered elementary operations. The total time t required by the algorithm can be bounded by

t ≤ a·tₐ + m·tₘ + s·tₛ ≤ max(tₐ, tₘ, tₛ) · (a + m + s)

That is, t is bounded by a constant multiple of the number of elementary operations executed.

## (4) Write an algorithm / method for Insertion Sort. Analyze the algorithm and find its time complexity.

Insertion sort works by inserting an element into its appropriate position during each iteration: an element is compared with its previous elements until an appropriate position is found, and it is then inserted there by shifting the remaining elements down.

**Algorithm**

```
Procedure insert(T[1..n])              cost   times
  for i ← 2 to n do                    C1     n
    x ← T[i]                           C2     n-1
    j ← i - 1                          C3     n-1
    while j > 0 and x < T[j] do        C4     Σ tᵢ
      T[j+1] ← T[j]                    C5     Σ (tᵢ - 1)
      j ← j - 1                        C6     Σ (tᵢ - 1)
    T[j+1] ← x                         C7     n-1
```

Here tᵢ denotes the number of times the while-loop test is executed for a given i, and the sums run over i = 2..n.

The running time of an algorithm on a particular input is the number of primitive operations or "steps" executed.

A constant amount of time is required to execute each line of our pseudocode. One line may take a different amount of time than another, but we shall assume that each execution of the i-th line takes time cᵢ, where cᵢ is a constant. The running time of the algorithm is the sum of the running times of the statements executed; a statement that takes cᵢ steps to execute and is executed n times contributes cᵢ·n to the total running time.

Let the time complexity of insertion sort be T(n). Then

T(n) = C₁n + C₂(n-1) + C₃(n-1) + C₄·Σtᵢ + (C₅+C₆)·Σ(tᵢ-1) + C₇(n-1)

In the worst case the inner loop runs i times (tᵢ = i), so each sum is quadratic in n and the expression collapses to the form

T(n) = C₈n² + C₉n + C₁₀

Thus T(n) ∈ Θ(n²): the time complexity of insertion sort is Θ(n²).

## (5) Write an algorithm / method for Selection Sort. Analyze the algorithm and find its time complexity.

Selection sort works by repeatedly selecting elements. The algorithm first finds the smallest element in the array and exchanges it with the element in the first position. It then finds the second smallest element and exchanges it with the element in the second position, and continues in this way until the entire array is sorted.

**Algorithm**

```
Procedure select(T[1..n])              cost   times
  for i ← 1 to n-1 do                  C1     n
    minj ← i; minx ← T[i]              C2     n-1
    for j ← i+1 to n do                C3     Σ (n-i+1)
      if T[j] < minx then              C4     Σ (n-i)
        minj ← j; minx ← T[j]          C5     Σ (n-i)
    T[minj] ← T[i]                     C6     n-1
    T[i] ← minx                        C7     n-1
```

Here the sums run over i = 1..n-1.
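The two sorting procedures above translate directly to runnable form; this is a sketch of the same pseudocode using Python's 0-based indexing rather than the notes' 1-based arrays.

```python
def insertion_sort(T):
    """In-place insertion sort, mirroring procedure insert(T[1..n])."""
    for i in range(1, len(T)):            # pseudocode's i = 2..n
        x = T[i]
        j = i - 1
        while j >= 0 and x < T[j]:        # shift larger elements right
            T[j + 1] = T[j]
            j -= 1
        T[j + 1] = x                      # insert x at its position
    return T

def selection_sort(T):
    """In-place selection sort, mirroring procedure select(T[1..n])."""
    for i in range(len(T) - 1):
        minj = i                          # index of smallest seen so far
        for j in range(i + 1, len(T)):
            if T[j] < T[minj]:
                minj = j
        T[i], T[minj] = T[minj], T[i]     # exchange into position i
    return T
```

Both return the same sorted array; the difference analyzed in the notes is in how many comparisons and moves each performs.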

As before, the running time of the algorithm is the sum, over all statements, of the cost of the statement times the number of times it is executed.

Let the time complexity of selection sort be T(n). Then

T(n) = C₁n + C₂(n-1) + C₃·Σ(n-i+1) + (C₄+C₅)·Σ(n-i) + C₆(n-1) + C₇(n-1)

The sums are quadratic in n, so the expression collapses to the form

T(n) = C₈n² + C₉n + C₁₀   (i.e. an² + bn + c)

Thus T(n) ∈ Θ(n²): the time complexity of selection sort is Θ(n²).

## (6) Explain different asymptotic notations in brief.

The following notations are commonly used in performance analysis to characterize the complexity of an algorithm.

**Θ-Notation (Same order)**

For a given function g(n), we denote by Θ(g(n)) the set of functions

Θ(g(n)) = { f(n) : there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀ }

Because Θ(g(n)) is a set, we could write f(n) ∈ Θ(g(n)) to indicate that f(n) is a member of Θ(g(n)). This notation bounds a function to within constant factors: f(n) = Θ(g(n)) if there exist positive constants n₀, c₁ and c₂ such that to the right of n₀ the value of f(n) always lies between c₁g(n) and c₂g(n) inclusive. Figure a gives an intuitive picture of the functions f(n) and g(n): for all values of n to the right of n₀, the value of f(n) lies at or above c₁g(n) and at or below c₂g(n).

In other words, for all n ≥ n₀, the value of f(n) is equal to g(n) to within a constant factor. We say that g(n) is an asymptotically tight bound for f(n).

**O-Notation (Upper Bound)**

For a given function g(n), we denote by Ο(g(n)) the set of functions

Ο(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }

We use Ο-notation to give an upper bound on a function, to within a constant factor: f(n) = O(g(n)) if there are positive constants n₀ and c such that to the right of n₀ the value of f(n) always lies on or below c·g(n). We say that g(n) is an asymptotic upper bound for f(n).
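The Θ definition above can be checked numerically for a concrete pair of functions. The function f and the constants c₁, c₂, n₀ below are chosen by hand purely for illustration.

```python
# Checking the Theta definition for f(n) = 3n^2 + 10n + 5 and g(n) = n^2.
def f(n):
    return 3 * n * n + 10 * n + 5

def g(n):
    return n * n

# Hand-picked witnesses: 3n^2 <= f(n) always, and 4n^2 >= f(n) once
# n^2 >= 10n + 5, which first holds at n = 11.
c1, c2, n0 = 3, 4, 11
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 5000))
```

Exhibiting any one valid triple (c₁, c₂, n₀) is exactly what the definition requires, so f(n) ∈ Θ(n²).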

**Ω-Notation (Lower Bound)**

For a given function g(n), we denote by Ω(g(n)) the set of functions

Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }

Ω-notation provides an asymptotic lower bound: f(n) = Ω(g(n)) if there are positive constants n₀ and c such that to the right of n₀ the value of f(n) always lies on or above c·g(n). This notation gives a lower bound for a function to within a constant factor.

## (7) To compute the greatest common divisor (GCD) of two numbers.

Let m and n be two positive integers. The greatest common divisor of m and n, denoted GCD(m, n), is the largest integer that divides both m and n exactly.

When GCD(m, n) = 1, we say that m and n are coprime. The obvious algorithm for calculating GCD(m, n) is obtained directly as:

```
Function GCD(m, n)
  i ← min(m, n) + 1
  repeat i ← i - 1
  until i divides both m and n exactly
  return i
```

The time taken by this algorithm is in the order of the difference between the smaller of the two arguments and their greatest common divisor. In the worst case, when m and n are coprime, the time taken is in Θ(n).

There exists a much more efficient algorithm for calculating GCD(m, n), known as Euclid's algorithm:

```
Function Euclid(m, n)
  {Here n ≥ m; if not, swap n and m}
  while m > 0 do
    t ← m
    m ← n mod m
    n ← t
  return n
```

The total time taken by the algorithm is in the exact order of the number of trips round the loop. Viewing the while loop as a recursive process, let T(k) be the maximum number of times the algorithm goes round the loop on inputs m and n with m ≤ n ≤ k.

1. If n ≤ 2, the loop is executed either 0 or 1 time.
2. If m = 0, or m divides n exactly (the remainder is zero), the loop is executed at most twice.
3. If m ≥ 2, then n mod m < n/2, so one trip round the loop at least halves the larger argument. Therefore it takes no more than T(k/2) additional trips round the loop.

Hence the recurrence equation is

T(k) ≤ T(k/2) + 2

and solving it gives T(k) ∈ O(log k). The time complexity of Euclid's algorithm is therefore in O(log k).

## (8) Compare Iterative and Recursive algorithms to find the Fibonacci series.

**Iterative Algorithm for Fibonacci series**

```
Function fibiter(n)
  i ← 1; j ← 0
  for k ← 1 to n do
    j ← i + j
    i ← j - i
  return j
```

If we count all arithmetic operations at unit cost, the instructions inside the for loop take constant time. Let the time taken by these instructions be bounded above by some constant c. The time taken by the for loop is then bounded above by n times this constant, i.e. nc. Since the instructions before and after the loop take negligible time, the algorithm takes time in Θ(n). Hence the time complexity of the iterative Fibonacci algorithm is Θ(n).

If the value of n is large, however, the time needed to execute an addition grows linearly with the length of the operands. At the end of the k-th iteration, the values of i and j are fₖ₋₁ and fₖ. By de Moivre's formula the size of fₖ is in Θ(k), so the k-th iteration takes time in Θ(k). Let c be some constant such that this time is bounded above by ck for all k ≥ 1. The time taken by the fibiter algorithm is then bounded above by

Σₖ₌₁..ₙ ck = c·n(n+1)/2 ∈ Θ(n²)

Hence the time complexity of the iterative Fibonacci algorithm for large values of n is Θ(n²).

**Recursive Algorithm for Fibonacci series**

```
Function fibrec(n)
  if n < 2 then return n
  else return fibrec(n - 1) + fibrec(n - 2)
```

Let T(n) be the time taken by a call on fibrec(n). The recurrence equation for the algorithm is

T(n) = T(n-1) + T(n-2) + Θ(1) for n ≥ 2

Solving this gives T(n) ∈ Θ(φⁿ), where φ = (1 + √5)/2: the time taken by the algorithm grows exponentially. Hence the time complexity of the recursive Fibonacci algorithm is Θ(φⁿ).

The recursive algorithm is very inefficient because it recalculates the same values many times. The iterative algorithm takes linear time if n is small and quadratic time (n²) if n is large, which is still far faster than the exponential-time fibrec.

## (9) What is Recursion? Give the algorithm of the Tower of Hanoi problem using Recursion.

Recursion is a method of solving problems that involves breaking a problem down into smaller and smaller sub-problems until you reach a problem small enough to be solved trivially. Usually recursion involves a function calling itself.
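The Euclid and Fibonacci routines from sections (7) and (8) above translate directly to Python; fibrec is included only to demonstrate the exponential blow-up, and should not be called with large n.

```python
def gcd_euclid(m, n):
    """Euclid's algorithm, mirroring the pseudocode; the first
    iteration fixes the argument order if m > n."""
    while m > 0:
        m, n = n % m, m
    return n

def fib_iter(n):
    """Iterative Fibonacci: Theta(n) additions."""
    i, j = 1, 0
    for _ in range(n):
        i, j = j, i + j        # j <- i + j ; i <- old j
    return j

def fib_rec(n):
    """Recursive Fibonacci: Theta(phi**n) time, shown only to
    illustrate the recomputation the notes warn about."""
    if n < 2:
        return n
    return fib_rec(n - 1) + fib_rec(n - 2)
```

Both Fibonacci versions agree on every n; only their running times differ.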

Recursion allows us to write elegant solutions to problems that may otherwise be very difficult to program. All recursive algorithms must obey three important characteristics:

1. A recursive algorithm must have a base case.
2. A recursive algorithm must change its state and move toward the base case.
3. A recursive algorithm must call itself, recursively.

In a recursive algorithm there are one or more base cases for which no recursion is required; all chains of recursion eventually end up at one of the base cases. The simplest way to guarantee that these conditions are met is to make sure that each recursive call occurs on a smaller version of the original problem. A very small version of the problem that can be solved without recursion then becomes the base case.

The Tower of Hanoi puzzle was invented by the French mathematician Édouard Lucas in 1883. We are given a tower of n disks, initially stacked in increasing size on one of three pegs. The objective is to transfer the entire tower to one of the other pegs, moving only one disk at a time and never placing a larger disk onto a smaller one.

The rules:

1. There are n disks (1, 2, 3, ..., n) and three towers, labeled 'A', 'B', and 'C'.
2. All the disks are initially placed on the first tower (the 'A' peg).
3. No disk may be placed on top of a smaller disk.
4. You may only move one disk at a time, and this disk must be the top disk on a tower.

For a given number N of disks, the problem is solved if we know how to accomplish the following tasks:

- Move the top N - 1 disks from Src to Aux (using Dst as an intermediary tower)
- Move the bottom disk from Src to Dst
- Move N - 1 disks from Aux to Dst (using Src as an intermediary tower)

**Algorithm**

```
Hanoi(N, Src, Aux, Dst)
  if N is 0 then exit
  else
    Hanoi(N - 1, Src, Dst, Aux)
    Move from Src to Dst
    Hanoi(N - 1, Aux, Src, Dst)
```

The number of disk movements required in the Hanoi problem is given by the recurrence equation

t(m) = 2·t(m-1) + 1 for m ≥ 1, with t(0) = 0

Solving this gives t(m) = 2ᵐ - 1. Hence the time complexity of the Tower of Hanoi problem with m rings is Θ(2ᵐ).

## (10) Heaps

A heap data structure is a binary tree with the following properties:

1. It is a complete binary tree; that is, each level of the tree is completely filled, except possibly the bottom level, which is filled from left to right.
2. It satisfies the heap order property: the data item stored in each node is greater than or equal to the data items stored in its children.
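The recursive Hanoi procedure above can be sketched in Python; recording the moves in a list makes the 2ᴺ − 1 move count easy to check. The default peg labels are the 'A', 'B', 'C' towers from the rules.

```python
def hanoi(n, src='A', aux='B', dst='C', moves=None):
    """Recursive Tower of Hanoi; returns the list of (from, to) moves."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)   # top n-1 disks: Src -> Aux
        moves.append((src, dst))             # bottom disk:   Src -> Dst
        hanoi(n - 1, aux, src, dst, moves)   # n-1 disks:     Aux -> Dst
    return moves
```

For example, `len(hanoi(3))` is 7, matching t(3) = 2³ − 1.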

A heap can be implemented using an array or a linked structure; it is easier to implement heaps using arrays. We simply number the nodes in the heap from top to bottom, numbering the nodes on each level from left to right, and store the i-th node in the i-th location of the array.

An array A that represents a heap is an object with two attributes:

i. length[A], the number of elements in the array, and
ii. heap-size[A], the number of elements of the heap stored within array A.

The root of the tree is A[1], and given the index i of a node, the indices of its parent PARENT(i), left child LEFT(i), and right child RIGHT(i) can be computed simply:

```
PARENT(i): return ⌊i/2⌋
LEFT(i):   return 2i
RIGHT(i):  return 2i + 1
```

There are two kinds of binary heaps: max-heaps and min-heaps. In both kinds, the values in the nodes satisfy a heap property.
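The index arithmetic above is easy to exercise in Python. The sample array is an assumed example (not from these notes); slot 0 is left unused so the root sits at index 1, matching the 1-based convention.

```python
# 1-based index arithmetic for a heap stored in an array,
# as in PARENT / LEFT / RIGHT above.
def parent(i):
    return i // 2

def left(i):
    return 2 * i

def right(i):
    return 2 * i + 1

# Assumed sample max-heap: slot 0 unused, elements in A[1..10].
A = [None, 16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
heap_size = len(A) - 1

# Every non-root node is no larger than its parent (max-heap property).
assert all(A[parent(i)] >= A[i] for i in range(2, heap_size + 1))
```

Because the tree is complete, these three formulas are the entire navigation structure: no pointers are stored.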

In a max-heap, the max-heap property is that for every node i other than the root, A[PARENT(i)] ≥ A[i]; that is, the value of a node is at most the value of its parent. Thus the largest element in a max-heap is stored at the root, and the subtree rooted at a node contains values no larger than the value contained at the node itself. A min-heap is organized in the opposite way: the min-heap property is that for every node i other than the root, A[PARENT(i)] ≤ A[i]. The smallest element in a min-heap is at the root. For the heapsort algorithm we use max-heaps; min-heaps are commonly used in priority queues.

Viewing a heap as a tree, we define the height of a node in a heap to be the number of edges on the longest simple downward path from the node to a leaf, and the height of the heap to be the height of its root. The height of an n-element heap based on a binary tree is ⌊lg n⌋. The basic operations on a heap run in time at most proportional to the height of the tree and thus take O(lg n) time. A heap of n elements is a complete binary tree of some height k: there is one node on level k, two nodes on level k-1, and so on, with 2ᵏ⁻¹ nodes on level 1 and at least 1 and not more than 2ᵏ nodes on level 0.

**Building a heap**

For the general case of converting a complete binary tree to a heap, we begin at the last node that is not a leaf and apply the percolate-down routine to convert the subtree rooted at this node to a heap. We then move on to the preceding node and percolate down that subtree, and we continue in this manner, working up the tree until we reach the root. We can use the procedure MAX-HEAPIFY in a bottom-up manner to convert an array A[1..n], where n = length[A], into a max-heap.

The elements in the subarray A[⌊n/2⌋+1 .. n] are all leaves of the tree, and so each is a one-element heap to begin with. The procedure BUILD-MAX-HEAP goes through the remaining nodes of the tree and runs MAX-HEAPIFY on each one.

**Algorithm**

```
BUILD-MAX-HEAP(A)
  heap-size[A] ← length[A]
  for i ← ⌊length[A]/2⌋ downto 1 do
    MAX-HEAPIFY(A, i)
```

Each call to MAX-HEAPIFY costs O(lg n) time, and there are O(n) such calls; thus the running time is at most O(n lg n).

**Maintaining the heap property**

One of the most basic heap operations is converting a complete binary tree to a heap; such an operation is called heapify. Its inputs are an array A and an index i into the array. When MAX-HEAPIFY is called, it is assumed that the binary trees rooted at LEFT(i) and RIGHT(i) are max-heaps, but that A[i] may be smaller than its children, thus violating the max-heap property. The function of MAX-HEAPIFY is to let the value at A[i] "float down" in the max-heap so that the subtree rooted at index i becomes a max-heap.

**Algorithm**

```
MAX-HEAPIFY(A, i)
  left ← LEFT(i)
  right ← RIGHT(i)
  if left ≤ heap-size[A] and A[left] > A[i]
    then largest ← left
    else largest ← i
  if right ≤ heap-size[A] and A[right] > A[largest]
    then largest ← right
  if largest ≠ i
    then exchange A[i] ↔ A[largest]
         MAX-HEAPIFY(A, largest)
```

At each step, the largest of the elements A[i], A[LEFT(i)], and A[RIGHT(i)] is determined, and its index is stored in largest. If A[i] is largest, then the subtree rooted at node i is already a max-heap and the procedure terminates. Otherwise, one of the two children holds the largest element, and A[i] is swapped with A[largest], which causes node i and its children to satisfy the max-heap property. The node indexed by largest, however, now has the original value A[i], and the subtree rooted at largest may therefore violate the max-heap property; MAX-HEAPIFY must be called recursively on that subtree.

The running time of MAX-HEAPIFY on a subtree of size n rooted at a given node i is the Θ(1) time to fix up the relationships among the elements A[i], A[LEFT(i)], and A[RIGHT(i)], plus the time to run MAX-HEAPIFY on a subtree rooted at one of the children of node i. The children's subtrees each have size at most 2n/3, so the running time of MAX-HEAPIFY can be described by the recurrence

T(n) ≤ T(2n/3) + Θ(1)

The solution to this recurrence is T(n) = O(lg n).

**The heapsort algorithm**

The heapsort algorithm starts by using BUILD-MAX-HEAP to build a max-heap on the input array A[1..n], where n = length[A].
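MAX-HEAPIFY and BUILD-MAX-HEAP can be sketched in Python. Note the shift to 0-based indices, so the children of node i sit at 2i+1 and 2i+2 and the last non-leaf is at len(A)//2 − 1.

```python
def max_heapify(A, i, heap_size):
    """Let A[i] float down until the subtree rooted at i is a max-heap.
    Assumes the subtrees rooted at both children are already max-heaps."""
    left, right = 2 * i + 1, 2 * i + 2        # 0-based children
    largest = i
    if left < heap_size and A[left] > A[largest]:
        largest = left
    if right < heap_size and A[right] > A[largest]:
        largest = right
    if largest != i:
        A[i], A[largest] = A[largest], A[i]   # push the violation down
        max_heapify(A, largest, heap_size)

def build_max_heap(A):
    """Bottom-up conversion: nodes len(A)//2 .. len(A)-1 are leaves."""
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))
```

After build_max_heap, the maximum element sits at A[0] and every node dominates its children.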

Since the maximum element of the array is stored at the root A[1], it can be put into its correct final position by exchanging it with A[n]. If we now "discard" node n from the heap (by decrementing heap-size[A]), we observe that A[1..(n-1)] can easily be made into a max-heap: the children of the root remain max-heaps, but the new root element may violate the max-heap property. All that is needed to restore the max-heap property is one call to MAX-HEAPIFY(A, 1), which leaves a max-heap in A[1..(n-1)]. The heapsort algorithm then repeats this process for the max-heap of size n-1, down to a heap of size 2.

**Algorithm**

```
HEAPSORT(A)
  BUILD-MAX-HEAP(A)
  for i ← length[A] downto 2 do
    exchange A[1] ↔ A[i]
    heap-size[A] ← heap-size[A] - 1
    MAX-HEAPIFY(A, 1)
```

The HEAPSORT procedure takes time O(n lg n), since the call to BUILD-MAX-HEAP takes time O(n) and each of the n-1 calls to MAX-HEAPIFY takes time O(lg n).

## (11) Explain linear search and binary search methods.

Let T[1..n] be an array sorted in non-decreasing order; that is, T[i] ≤ T[j] whenever 1 ≤ i ≤ j ≤ n. Let x be some number. The problem consists of finding x in the array T if it is there; if x is not in the array, we want to find the position where it might be inserted.

**Sequential (linear) search algorithm**

```
Function sequential(T[1..n], x)
  for i ← 1 to n do
    if T[i] ≥ x then return i
  return n + 1
```

Here we look sequentially at each element of T until we either reach the end of the array or find a number no smaller than x. This algorithm clearly takes time in Θ(r), where r is the index returned: Θ(n) in the worst case and O(1) in the best case.

**Binary Search Algorithm (Iterative)**

The basic idea of binary search is that, for a given element, we check the middle element of the array. We then continue in either the lower or the upper segment of the array, depending on the outcome of the comparison, until we reach the required (given) element.

```
Function biniter(T[1..n], x)
  i ← 1; j ← n
  while i < j do
    k ← ⌊(i + j) / 2⌋
    case x < T[k]: j ← k - 1
         x = T[k]: i, j ← k   {return k}
         x > T[k]: i ← k + 1
  return i
```

To analyze the running time of a while loop, we must find a function of the variables involved whose value decreases each time round the loop. Here that function is

d = j - i + 1

which represents the number of elements of T still under consideration. Initially d = n.

The loop terminates when i ≥ j, which is equivalent to d ≤ 1. Each time round the loop there are three possibilities:

I. either j is set to k - 1,
II. or i is set to k + 1,
III. or both i and j are set to k.

Let d and d′ stand for the value of j - i + 1 before and after the iteration under consideration; similarly for i, j, i′ and j′.

**Case I: x < T[k].** Then j ← k - 1 is executed, so i′ = i and j′ = k - 1 where k = ⌊(i + j)/2⌋. Substituting the value of k,

d′ = j′ - i′ + 1 = ⌊(i + j)/2⌋ - 1 - i + 1 = ⌊(i + j)/2⌋ - i ≤ (i + j)/2 - i = (j - i)/2 ≤ (j - i + 1)/2

so d′ ≤ d/2.

**Case II: x > T[k].** Then i ← k + 1 is executed, so i′ = k + 1 and j′ = j where k = ⌊(i + j)/2⌋. Substituting the value of k,

d′ = j′ - i′ + 1 = j - ⌊(i + j)/2⌋ ≤ j - (i + j - 1)/2 = (2j - i - j + 1)/2 = (j - i + 1)/2

so d′ ≤ d/2.

**Case III: x = T[k].** Then i′ = j′ = k, so d′ = 1.

We conclude that in every case d′ ≤ d/2, which means that the value of d is at least halved on each trip round the loop. Let dₖ denote the value of j - i + 1 at the end of the k-th trip round the loop, with d₀ = n.

We have already proved that dₖ ≤ dₖ₋₁/2. Starting from n elements, how many times must d be cut in half before it reaches or goes below 1?

n / 2ᵏ ≤ 1  ⟺  n ≤ 2ᵏ  ⟺  k ≥ lg n

So the search terminates after k = ⌈lg n⌉ trips round the loop. The complexity of iterative binary search is therefore Θ(lg n).
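Both search routines above can be sketched in Python. The indices are 0-based here, so the "not found, insert at the end" sentinel is len(T) rather than n + 1, and the value returned on a miss is the insertion position, matching the problem statement.

```python
def sequential_search(T, x):
    """Return the first index i with T[i] >= x, or len(T) if none.
    T must be sorted in non-decreasing order."""
    for i in range(len(T)):
        if T[i] >= x:
            return i
    return len(T)

def binary_search(T, x):
    """Return an index where x occurs in sorted T, or the position
    where x could be inserted; mirrors biniter(T[1..n], x)."""
    i, j = 0, len(T) - 1
    while i < j:
        k = (i + j) // 2          # middle of the segment [i, j]
        if x < T[k]:
            j = k - 1             # continue in the lower segment
        elif x > T[k]:
            i = k + 1             # continue in the upper segment
        else:
            return k              # found x at position k
    return i
```

Each iteration of binary_search at least halves j − i + 1, which is exactly the quantity d analyzed above.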


### Data Structure [Question Bank]

Unit I (Analysis of Algorithms) 1. What are algorithms and how they are useful? 2. Describe the factor on best algorithms depends on? 3. Differentiate: Correct & Incorrect Algorithms? 4. Write short note:

### , each of which contains a unique key value, say k i , R 2. such that k i equals K (or to determine that no such record exists in the collection).

The Search Problem 1 Suppose we have a collection of records, say R 1, R 2,, R N, each of which contains a unique key value, say k i. Given a particular key value, K, the search problem is to locate the

### Algorithms and Data Structures

Algorithms and Data Structures Part 2: Data Structures PD Dr. rer. nat. habil. Ralf-Peter Mundani Computation in Engineering (CiE) Summer Term 2016 Overview general linked lists stacks queues trees 2 2

### Elementary Number Theory We begin with a bit of elementary number theory, which is concerned

CONSTRUCTION OF THE FINITE FIELDS Z p S. R. DOTY Elementary Number Theory We begin with a bit of elementary number theory, which is concerned solely with questions about the set of integers Z = {0, ±1,

### Analysis of Algorithms I: Binary Search Trees

Analysis of Algorithms I: Binary Search Trees Xi Chen Columbia University Hash table: A data structure that maintains a subset of keys from a universe set U = {0, 1,..., p 1} and supports all three dictionary

### Heap. Binary Search Tree. Heaps VS BSTs. < el el. Difference between a heap and a BST:

Heaps VS BSTs Difference between a heap and a BST: Heap el Binary Search Tree el el el < el el Perfectly balanced at all times Immediate access to maximal element Easy to code Does not provide efficient

### A binary heap is a complete binary tree, where each node has a higher priority than its children. This is called heap-order property

CmSc 250 Intro to Algorithms Chapter 6. Transform and Conquer Binary Heaps 1. Definition A binary heap is a complete binary tree, where each node has a higher priority than its children. This is called

### Sequential Data Structures

Sequential Data Structures In this lecture we introduce the basic data structures for storing sequences of objects. These data structures are based on arrays and linked lists, which you met in first year

### 8 Divisibility and prime numbers

8 Divisibility and prime numbers 8.1 Divisibility In this short section we extend the concept of a multiple from the natural numbers to the integers. We also summarize several other terms that express

### Divide And Conquer Algorithms

CSE341T/CSE549T 09/10/2014 Lecture 5 Divide And Conquer Algorithms Recall in last lecture, we looked at one way of parallelizing matrix multiplication. At the end of the lecture, we saw the reduce SUM

### The Union-Find Problem Kruskal s algorithm for finding an MST presented us with a problem in data-structure design. As we looked at each edge,

The Union-Find Problem Kruskal s algorithm for finding an MST presented us with a problem in data-structure design. As we looked at each edge, cheapest first, we had to determine whether its two endpoints

### Lecture P6: Recursion

Overview Lecture P6: Recursion What is recursion? When one function calls ITSELF directly or indirectly. Why learn recursion? New mode of thinking. Start Goal Powerful programming tool. Many computations

### CSC148 Lecture 8. Algorithm Analysis Binary Search Sorting

CSC148 Lecture 8 Algorithm Analysis Binary Search Sorting Algorithm Analysis Recall definition of Big Oh: We say a function f(n) is O(g(n)) if there exists positive constants c,b such that f(n)

### Module 2 Stacks and Queues: Abstract Data Types

Module 2 Stacks and Queues: Abstract Data Types A stack is one of the most important and useful non-primitive linear data structure in computer science. It is an ordered collection of items into which

### Introduction to Diophantine Equations

Introduction to Diophantine Equations Tom Davis tomrdavis@earthlink.net http://www.geometer.org/mathcircles September, 2006 Abstract In this article we will only touch on a few tiny parts of the field

### 12 Abstract Data Types

12 Abstract Data Types 12.1 Source: Foundations of Computer Science Cengage Learning Objectives After studying this chapter, the student should be able to: Define the concept of an abstract data type (ADT).

### Many algorithms, particularly divide and conquer algorithms, have time complexities which are naturally

Recurrence Relations Many algorithms, particularly divide and conquer algorithms, have time complexities which are naturally modeled by recurrence relations. A recurrence relation is an equation which

### 1) The postfix expression for the infix expression A+B*(C+D)/F+D*E is ABCD+*F/DE*++

Answer the following 1) The postfix expression for the infix expression A+B*(C+D)/F+D*E is ABCD+*F/DE*++ 2) Which data structure is needed to convert infix notations to postfix notations? Stack 3) The

### Section IV.1: Recursive Algorithms and Recursion Trees

Section IV.1: Recursive Algorithms and Recursion Trees Definition IV.1.1: A recursive algorithm is an algorithm that solves a problem by (1) reducing it to an instance of the same problem with smaller

### Zeros of a Polynomial Function

Zeros of a Polynomial Function An important consequence of the Factor Theorem is that finding the zeros of a polynomial is really the same thing as factoring it into linear factors. In this section we

### Algorithms Chapter 12 Binary Search Trees

Algorithms Chapter 1 Binary Search Trees Outline Assistant Professor: Ching Chi Lin 林 清 池 助 理 教 授 chingchi.lin@gmail.com Department of Computer Science and Engineering National Taiwan Ocean University

### Solutions for Introduction to algorithms second edition

Solutions for Introduction to algorithms second edition Philip Bille The author of this document takes absolutely no responsibility for the contents. This is merely a vague suggestion to a solution to

### CS104: Data Structures and Object-Oriented Design (Fall 2013) October 24, 2013: Priority Queues Scribes: CS 104 Teaching Team

CS104: Data Structures and Object-Oriented Design (Fall 2013) October 24, 2013: Priority Queues Scribes: CS 104 Teaching Team Lecture Summary In this lecture, we learned about the ADT Priority Queue. A

### Divide-and-Conquer Algorithms Part Four

Divide-and-Conquer Algorithms Part Four Announcements Problem Set 2 due right now. Can submit by Monday at 2:15PM using one late period. Problem Set 3 out, due July 22. Play around with divide-and-conquer

### Functions Recursion. C++ functions. Declare/prototype. Define. Call. int myfunction (int ); int myfunction (int x){ int y = x*x; return y; }

Functions Recursion C++ functions Declare/prototype int myfunction (int ); Define int myfunction (int x){ int y = x*x; return y; Call int a; a = myfunction (7); function call flow types type of function

### (x + a) n = x n + a Z n [x]. Proof. If n is prime then the map

22. A quick primality test Prime numbers are one of the most basic objects in mathematics and one of the most basic questions is to decide which numbers are prime (a clearly related problem is to find

### CHAPTER 5. Number Theory. 1. Integers and Division. Discussion

CHAPTER 5 Number Theory 1. Integers and Division 1.1. Divisibility. Definition 1.1.1. Given two integers a and b we say a divides b if there is an integer c such that b = ac. If a divides b, we write a

### Theory of Computation Prof. Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology, Madras

Theory of Computation Prof. Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology, Madras Lecture No. # 31 Recursive Sets, Recursively Innumerable Sets, Encoding

### Class Overview. CSE 326: Data Structures. Goals. Goals. Data Structures. Goals. Introduction

Class Overview CSE 326: Data Structures Introduction Introduction to many of the basic data structures used in computer software Understand the data structures Analyze the algorithms that use them Know

### Cpt S 223. School of EECS, WSU

Priority Queues (Heaps) 1 Motivation Queues are a standard mechanism for ordering tasks on a first-come, first-served basis However, some tasks may be more important or timely than others (higher priority)

### Quiz 1 Solutions. (a) T F The height of any binary search tree with n nodes is O(log n). Explain:

Introduction to Algorithms March 9, 2011 Massachusetts Institute of Technology 6.006 Spring 2011 Professors Erik Demaine, Piotr Indyk, and Manolis Kellis Quiz 1 Solutions Problem 1. Quiz 1 Solutions True

### Recursive Algorithms. Recursion. Motivating Example Factorial Recall the factorial function. { 1 if n = 1 n! = n (n 1)! if n > 1

Recursion Slides by Christopher M Bourke Instructor: Berthe Y Choueiry Fall 007 Computer Science & Engineering 35 Introduction to Discrete Mathematics Sections 71-7 of Rosen cse35@cseunledu Recursive Algorithms

### Pythagorean Triples. Chapter 2. a 2 + b 2 = c 2

Chapter Pythagorean Triples The Pythagorean Theorem, that beloved formula of all high school geometry students, says that the sum of the squares of the sides of a right triangle equals the square of the

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### CS 102: SOLUTIONS TO DIVIDE AND CONQUER ALGORITHMS (ASSGN 4)

CS 10: SOLUTIONS TO DIVIDE AND CONQUER ALGORITHMS (ASSGN 4) Problem 1. a. Consider the modified binary search algorithm so that it splits the input not into two sets of almost-equal sizes, but into three

### Induction Problems. Tom Davis November 7, 2005

Induction Problems Tom Davis tomrdavis@earthlin.net http://www.geometer.org/mathcircles November 7, 2005 All of the following problems should be proved by mathematical induction. The problems are not necessarily

### Binary Heaps. CSE 373 Data Structures

Binary Heaps CSE Data Structures Readings Chapter Section. Binary Heaps BST implementation of a Priority Queue Worst case (degenerate tree) FindMin, DeleteMin and Insert (k) are all O(n) Best case (completely

### Induction. Margaret M. Fleck. 10 October These notes cover mathematical induction and recursive definition

Induction Margaret M. Fleck 10 October 011 These notes cover mathematical induction and recursive definition 1 Introduction to induction At the start of the term, we saw the following formula for computing

### Mathematical Induction

Mathematical Induction (Handout March 8, 01) The Principle of Mathematical Induction provides a means to prove infinitely many statements all at once The principle is logical rather than strictly mathematical,

### Cost Model: Work, Span and Parallelism. 1 The RAM model for sequential computation:

CSE341T 08/31/2015 Lecture 3 Cost Model: Work, Span and Parallelism In this lecture, we will look at how one analyze a parallel program written using Cilk Plus. When we analyze the cost of an algorithm

### CSE 326, Data Structures. Sample Final Exam. Problem Max Points Score 1 14 (2x7) 2 18 (3x6) 3 4 4 7 5 9 6 16 7 8 8 4 9 8 10 4 Total 92.

Name: Email ID: CSE 326, Data Structures Section: Sample Final Exam Instructions: The exam is closed book, closed notes. Unless otherwise stated, N denotes the number of elements in the data structure

### CS711008Z Algorithm Design and Analysis

CS711008Z Algorithm Design and Analysis Lecture 7 Binary heap, binomial heap, and Fibonacci heap 1 Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 The slides were

### MLR Institute of Technology

MLR Institute of Technology DUNDIGAL 500 043, HYDERABAD COMPUTER SCIENCE AND ENGINEERING Computer Programming Lab List of Experiments S.No. Program Category List of Programs 1 Operators a) Write a C program

### Outline. Introduction Linear Search. Transpose sequential search Interpolation search Binary search Fibonacci search Other search techniques

Searching (Unit 6) Outline Introduction Linear Search Ordered linear search Unordered linear search Transpose sequential search Interpolation search Binary search Fibonacci search Other search techniques

### 3 Some Integer Functions

3 Some Integer Functions A Pair of Fundamental Integer Functions The integer function that is the heart of this section is the modulo function. However, before getting to it, let us look at some very simple

### Data Structures. Algorithm Performance and Big O Analysis

Data Structures Algorithm Performance and Big O Analysis What s an Algorithm? a clearly specified set of instructions to be followed to solve a problem. In essence: A computer program. In detail: Defined

### 6 March 2007 1. Array Implementation of Binary Trees

Heaps CSE 0 Winter 00 March 00 1 Array Implementation of Binary Trees Each node v is stored at index i defined as follows: If v is the root, i = 1 The left child of v is in position i The right child of

### NODAL ANALYSIS. Circuits Nodal Analysis 1 M H Miller

NODAL ANALYSIS A branch of an electric circuit is a connection between two points in the circuit. In general a simple wire connection, i.e., a 'short-circuit', is not considered a branch since it is known

### CS 253: Algorithms. Chapter 12. Binary Search Trees. * Deletion and Problems. Credit: Dr. George Bebis

CS 2: Algorithms Chapter Binary Search Trees * Deletion and Problems Credit: Dr. George Bebis Binary Search Trees Tree representation: A linked data structure in which each node is an object Node representation:

### 1. LINEAR EQUATIONS. A linear equation in n unknowns x 1, x 2,, x n is an equation of the form

1. LINEAR EQUATIONS A linear equation in n unknowns x 1, x 2,, x n is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b, where a 1, a 2,..., a n, b are given real numbers. For example, with x and

### Tables so far. set() get() delete() BST Average O(lg n) O(lg n) O(lg n) Worst O(n) O(n) O(n) RB Tree Average O(lg n) O(lg n) O(lg n)

Hash Tables Tables so far set() get() delete() BST Average O(lg n) O(lg n) O(lg n) Worst O(n) O(n) O(n) RB Tree Average O(lg n) O(lg n) O(lg n) Worst O(lg n) O(lg n) O(lg n) Table naïve array implementation

### Chapter 20 Recursion. Liang, Introduction to Java Programming, Ninth Edition, (c) 2013 Pearson Education, Inc. All rights reserved.

Chapter 20 Recursion 1 Motivations Suppose you want to find all the files under a directory that contains a particular word. How do you solve this problem? There are several ways to solve this problem.

### Binary Search Trees CMPSC 122

Binary Search Trees CMPSC 122 Note: This notes packet has significant overlap with the first set of trees notes I do in CMPSC 360, but goes into much greater depth on turning BSTs into pseudocode than

### Lecture 13 - Basic Number Theory.

Lecture 13 - Basic Number Theory. Boaz Barak March 22, 2010 Divisibility and primes Unless mentioned otherwise throughout this lecture all numbers are non-negative integers. We say that A divides B, denoted

### PES Institute of Technology-BSC QUESTION BANK

PES Institute of Technology-BSC Faculty: Mrs. R.Bharathi CS35: Data Structures Using C QUESTION BANK UNIT I -BASIC CONCEPTS 1. What is an ADT? Briefly explain the categories that classify the functions

### Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

### Homework 5 Solutions

Homework 5 Solutions 4.2: 2: a. 321 = 256 + 64 + 1 = (01000001) 2 b. 1023 = 512 + 256 + 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 = (1111111111) 2. Note that this is 1 less than the next power of 2, 1024, which

### 8.1 Makespan Scheduling

600.469 / 600.669 Approximation Algorithms Lecturer: Michael Dinitz Topic: Dynamic Programing: Min-Makespan and Bin Packing Date: 2/19/15 Scribe: Gabriel Kaptchuk 8.1 Makespan Scheduling Consider an instance

### COMP 250 Fall 2012 lecture 2 binary representations Sept. 11, 2012

Binary numbers The reason humans represent numbers using decimal (the ten digits from 0,1,... 9) is that we have ten fingers. There is no other reason than that. There is nothing special otherwise about

### a = bq + r where 0 r < b.

Lecture 5: Euclid s algorithm Introduction The fundamental arithmetic operations are addition, subtraction, multiplication and division. But there is a fifth operation which I would argue is just as fundamental

### Data Structures Fibonacci Heaps, Amortized Analysis

Chapter 4 Data Structures Fibonacci Heaps, Amortized Analysis Algorithm Theory WS 2012/13 Fabian Kuhn Fibonacci Heaps Lacy merge variant of binomial heaps: Do not merge trees as long as possible Structure:

### Lecture 1: Course overview, circuits, and formulas

Lecture 1: Course overview, circuits, and formulas Topics in Complexity Theory and Pseudorandomness (Spring 2013) Rutgers University Swastik Kopparty Scribes: John Kim, Ben Lund 1 Course Information Swastik

### 10CS35: Data Structures Using C

CS35: Data Structures Using C QUESTION BANK REVIEW OF STRUCTURES AND POINTERS, INTRODUCTION TO SPECIAL FEATURES OF C OBJECTIVE: Learn : Usage of structures, unions - a conventional tool for handling a

### POLYNOMIAL FUNCTIONS

POLYNOMIAL FUNCTIONS Polynomial Division.. 314 The Rational Zero Test.....317 Descarte s Rule of Signs... 319 The Remainder Theorem.....31 Finding all Zeros of a Polynomial Function.......33 Writing a

### GRAPH THEORY LECTURE 4: TREES

GRAPH THEORY LECTURE 4: TREES Abstract. 3.1 presents some standard characterizations and properties of trees. 3.2 presents several different types of trees. 3.7 develops a counting method based on a bijection

### Working with whole numbers

1 CHAPTER 1 Working with whole numbers In this chapter you will revise earlier work on: addition and subtraction without a calculator multiplication and division without a calculator using positive and

### Parallel Random Access Machine (PRAM) PRAM Algorithms. Shared Memory Access Conflicts. A Basic PRAM Algorithm. Time Optimality.

Parallel Random Access Machine (PRAM) PRAM Algorithms Arvind Krishnamurthy Fall 2 Collection of numbered processors Accessing shared memory cells Each processor could have local memory (registers) Each

### From Last Time: Remove (Delete) Operation

CSE 32 Lecture : More on Search Trees Today s Topics: Lazy Operations Run Time Analysis of Binary Search Tree Operations Balanced Search Trees AVL Trees and Rotations Covered in Chapter of the text From

### Data Structure with C

Subject: Data Structure with C Topic : Tree Tree A tree is a set of nodes that either:is empty or has a designated node, called the root, from which hierarchically descend zero or more subtrees, which

### The following themes form the major topics of this chapter: The terms and concepts related to trees (Section 5.2).

CHAPTER 5 The Tree Data Model There are many situations in which information has a hierarchical or nested structure like that found in family trees or organization charts. The abstraction that models hierarchical

### csci 210: Data Structures Recursion

csci 210: Data Structures Recursion Summary Topics recursion overview simple examples Sierpinski gasket Hanoi towers Blob check READING: GT textbook chapter 3.5 Recursion In general, a method of defining

### CHAPTER 3 Numbers and Numeral Systems

CHAPTER 3 Numbers and Numeral Systems Numbers play an important role in almost all areas of mathematics, not least in calculus. Virtually all calculus books contain a thorough description of the natural,

### Full and Complete Binary Trees

Full and Complete Binary Trees Binary Tree Theorems 1 Here are two important types of binary trees. Note that the definitions, while similar, are logically independent. Definition: a binary tree T is full