Computational Complexity (CISC 121, Summer 2006): Algorithm Analysis


Goals
- to understand what computational complexity is for
- how to use big-O notation
- the complexity of some common algorithms, and how to derive those complexities
- how to figure out the complexity of an algorithm

References: lecture notes, plus the courseware chapters on Complexity Analysis and Complexity of Recursive Methods; optional: the section on Mathematical Induction.
Credits: slides originally derived from Prof. Lamb's presentation.

Outline
- background: what we mean by complexity and how it is useful
- define big-O notation
- general rules for determining the complexity of non-recursive algorithms
- apply the rules to sample analyses of familiar algorithms
- additional techniques for recursive algorithms

Algorithm Analysis
The properties of an algorithm:
1. Does it work correctly?
2. How much time will it take?
3. How much memory will it require?
Our primary concern so far has been #1; now we begin to consider #2 and #3. Useful algorithms require acceptable answers for all three. This syllabus topic, computational complexity, addresses #2 in a general way.

Complexity Basics
Complexity analysis examines the running time of an algorithm and how it varies with problem size. In previous topics, we noted:
- pushing onto and popping from a stack is a constant-time operation, i.e. independent of stack size
- sequential search of an array or linked list is a linear-time operation: it depends on n, proportional to the size of the array or list
- binary search of a sorted array requires logarithmic time: proportional to log_2 n, the logarithm of the size of the array
- selection sort of an array requires quadratic time: proportional to n^2, the square of the size of the array
- quicksort of an array is claimed to have complexity proportional to n log n: worse than linear, but better than quadratic

Using Complexity Information
Complexity analysis helps to answer questions such as:
- Is this algorithm practical? Does it compute in milliseconds, minutes, years, ...?
- Which algorithm is more efficient?
- Will a faster computer help?
Note: complexity analysis ignores constant factors. In comparing algorithms A and B, A may always take twice as much time as B, but for our analysis A and B have the same complexity; both are constant, linear, quadratic, ...

Limitations
The complexity measure does not specify exactly how long an algorithm will take for a given input, or whether the algorithm will take under 100 ms for all inputs. These questions may be important for real-time programming, where there are hard time constraints: process control, air traffic control, financial transactions, ... Such precision is difficult to specify because it depends on multiple factors: the specific programming language and compiler, the details of the implementation, the target machine, the operating system, and the workload.

Complexity & Problem Size
Our goal in complexity analysis: describe how running time changes as the problem size (parameters or input) varies. Examples of different problem sizes:
- n, when computing the factorial of n
- the size of an array, for searching or sorting
- the size of the list, for list operations
The convention: use N or n for problem size.

Running Time
Describe the running time as a function of problem size n: let T(n) = the time an algorithm takes for problem size n. The exact formula for T(n) is often difficult to determine; however, an approximation will do. What kind of function is T? Constant? Linear? Quadratic? ... Exponential? Our primary concern is large n, since for small n, factors other than the algorithm often dominate the running time.

Best, Worst, Average Times
The complexity measure is approximate in another way: T(n) may not be the same for all problems of size n. Example: when sorting an array of size n, the array may already be partially or mostly sorted. Generally we want to know the worst-case running time, which provides an upper bound on time requirements; occasionally we may need the average or even the best case. Often these are the same, though sometimes not: e.g. selection sort versus insertion sort on an already-sorted array. Our complexity discussion assumes worst-case analysis unless stated otherwise.

Alternative Algorithms
Given some problem, assume we have three alternative algorithms to solve it, with corresponding running-time functions T1(n), T2(n), T3(n). Plot each as a function of n; which is best depends on n:
- small n: irrelevant, all are quick
- large n: T2 is bad, it grows too fast; T1 and T3 are similar, with T3 a bit better
The shape of the curve is critical: how fast T changes with n. Next: a more formal description of this.
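To make T(n) concrete, here is a minimal timing sketch, not from the original slides: it measures a simple linear-time workload at several problem sizes. The class name TimingDemo and the workload are illustrative assumptions, and as the Limitations slide warns, the absolute numbers depend on the machine, JVM warm-up, and workload; only the growth trend is meaningful.

    public class TimingDemo {
        // A deliberately simple O(n) workload: sum the integers 0..n-1.
        static long work(int n) {
            long total = 0;
            for (int i = 0; i < n; i++)
                total += i;
            return total;
        }

        public static void main(String[] args) {
            // Rough empirical estimate of T(n) at several problem sizes.
            for (int n = 1_000; n <= 10_000_000; n *= 10) {
                long start = System.nanoTime();
                work(n);
                long elapsed = System.nanoTime() - start;
                System.out.println("n = " + n + ": " + elapsed + " ns");
            }
        }
    }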

Big O Notation
The actual functions from the previous slide are:
    T1(n) = 3 + 6000n
    T2(n) = 3 + 3n + 6n^2
    T3(n) = 10000 + 3n
The complexity description for both T1 and T3 is O(n) ("order n"): the highest-order term is in n. T2 falls in a different category, O(n^2) ("order n-squared"): the highest-order term is in n^2.

The general rule for summarizing complexity:
1. Use only the dominant term, omitting constants and lower-order terms in n; the dominant term is the one that grows fastest.
2. Omit any coefficient.
Big-O notation thus describes the general shape of the curve and omits all the details. See the readings for a more formal definition.

General Rules
If the function f(n) for T(n) is a polynomial,
    f(n) = a_x*n^x + a_(x-1)*n^(x-1) + ... + a_1*n + a_0,
then f(n) = O(n^x); so 3n^2 + 11n + 8 is O(n^2). Possible functions also include logarithms:
- log n: lower order than n, higher than constant
- n log n: lower order than n^2, higher than n
Some algorithms, sadly, have exponential running times; 2^n grows faster than any polynomial. The ordering, from slowest-growing to fastest:
    constant, log n, n, n log n, n^2, n^3, n^4, ..., 2^n, 3^n

Big O Notation (continued)
The goal of complexity analysis is to find the big-O version of T(n). Constant factors may be difficult to determine, since they may involve hardware, the OS, etc.; big-O describes the shape of the curve. From the first example:
    T1(n) = O(n)     (linear)
    T2(n) = O(n^2)   (quadratic)
    T3(n) = O(n)     (linear)
Therefore T2 is the one to avoid. Caution: the notation T1(n) = O(n) is not asserting an equality; we read it as "T1 is O(n)".

Sample Running Times
Some hypothetical solution times on a hypothetical computer that can compute 10^6 operations per second, for varying complexity categories (rows) and problem sizes n (columns):

    Complexity    n = 10      n = 10^3    n = 10^6
    O(1)          1 usec      1 usec      1 usec
    O(log n)      3 usec      10 usec     20 usec
    O(n)          10 usec     1 msec      1 sec
    O(n log n)    33 usec     10 msec     20 sec
    O(n^2)        100 usec    1 sec       12 days
    O(n^3)        1 msec      17 min      32,000 years

Big O Notation: arithmetic rules
Rules for combining partial complexities:
1. O(f(n)) + O(g(n)) = O(max of f(n) and g(n))
   e.g. O(n) + O(n^2) = O(n^2);  O(n) + O(log n) = O(n)
2. c * O(f(n)) = O(f(n))
   e.g. 5 * O(n^2) = O(n^2)
3. O(f(n)) * O(g(n)) = O(f(n) * g(n))
   e.g. O(n) * O(log n) = O(n log n)
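A small sketch, not from the slides, illustrating why only the dominant term matters: for T2(n) = 3 + 3n + 6n^2 above, the ratio T2(n)/n^2 approaches the leading coefficient 6 as n grows, so the lower-order terms contribute nothing to the curve's shape. The class name DominantTerm is an illustrative assumption.

    public class DominantTerm {
        public static void main(String[] args) {
            // T2(n) = 3 + 3n + 6n^2 from the slides; the ratio T2(n)/n^2
            // tends to 6, so the n^2 term dominates for large n.
            for (long n = 10; n <= 10_000_000; n *= 10) {
                double t2 = 3 + 3.0 * n + 6.0 * n * n;
                System.out.printf("n = %,d  T2(n)/n^2 = %.4f%n",
                                  n, t2 / (n * (double) n));
            }
        }
    }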

Algorithm Complexity Rules

RULE 1: straight-line code is constant time.
Code with no method calls, whose running time is independent of the size of the input, is constant time: O(1). Examples:
    int i = arr[j];
    double x = a * b + 4;
Both are O(1): the time to execute is independent of the size of arr or of any other input. Arithmetic, assignment, and array access are constant time, and so is input or output of a single value:
    System.out.println(x);

RULE 2: sum consecutive sections of code.
    // O(f(n)) code
    // O(g(n)) code
The whole segment, O(f(n)) followed by O(g(n)), is O(f(n)) + O(g(n)); the combined complexity is O(max of f(n) and g(n)). Example: a section of code with running time O(n^2) followed by a section with running time O(n) is summarized as O(n^2).

RULE 3: branching code is the max of the branches.
For an if statement:
    if (<condition>)
        // code which is O(f(n))
    else
        // code which is O(g(n))
the worst-case complexity is the max of f(n) and g(n), since the branch with the higher complexity may be the one that is executed. Assumption: the boolean condition can be evaluated in constant time; if not, add the time to evaluate the expression. Example:
    if (<simple condition>)
        // O(n) code
    else
        // O(1) code
The complexity of the entire if statement is O(n).

RULE 4: loop complexity.
The complexity of a loop is the complexity of the loop body TIMES the number of iterations. For example:
    for (int i = 0; i < 10; i++)
        // O(n^2) code
The complexity of this loop is 10 * O(n^2) = O(n^2). Another example:
    int i = 2 * n;        // i depends on n
    while (i > 0) {
        // some O(n^2) code
        i--;
    }
The while loop executes 2n times, so the resulting complexity is 2n * O(n^2) = O(n) * O(n^2) = O(n * n^2) = O(n^3).
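Here is a self-contained sketch, not from the slides, that applies all four rules to one method; the class, method, and workload are invented for illustration. The comments give the complexity of each piece, and the rules combine to O(n^2).

    public class RulesDemo {
        // Overall: O(1) + O(n) + O(n^2) = O(n^2) by Rule 2.
        // Assumes arr is non-empty.
        static long demo(int[] arr) {
            int n = arr.length;
            long total = 0;                        // Rule 1: O(1)

            if (n % 2 == 0)                        // Rule 3: max(O(1), O(n)) = O(n)
                total += arr[0];                   //   this branch is O(1)
            else
                for (int i = 0; i < n; i++)        //   this branch is O(n)
                    total += arr[i];

            for (int i = 0; i < n; i++)            // Rule 4: n iterations...
                for (int j = 0; j < n; j++)        //   ...of an O(n) body: O(n^2)
                    total += (long) arr[i] * arr[j];
            return total;
        }

        public static void main(String[] args) {
            System.out.println(demo(new int[] {3, 1, 4, 1, 5}));
        }
    }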

Combining the Rules
To analyze an algorithm, work from the inside out; start by identifying code which is O(1).
    // problem size is n
    x = 2 * n;                      // O(1)
    y = n - 4;                      // O(1)
    total = 0;                      // O(1)
    while x > 0:                    // body is O(1), executed 2n times: O(n)
        if x is even:
            total = total + y;      // O(1)
        else:
            total = total - y;      // O(1)
        x = x - 1;                  // O(1)
Whole algorithm: O(1) + O(n) = O(n).

Sequential Search
    // problem size is the size of the array
    public static int search(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == target)
                return i;
        } // end for
        return -1;
    } // end search
Analysis: the loop body is O(1) and it is executed n times (in the worst case). Thus the whole loop is O(n), and the method is O(n).

Binary Search
    public static int binarySearch(int target, int[] arr) {
        int first = 0, last = arr.length - 1, mid;
        while (first <= last) {
            mid = (first + last) / 2;
            if (target == arr[mid])
                return mid;
            if (target < arr[mid])
                last = mid - 1;
            else                        // target > arr[mid]
                first = mid + 1;
        } // end while
        return -1;
    } // end binarySearch
How many loop iterations are there in the worst case? The key observation: the part of the array being searched is cut in half each time.

Binary Search Analysis
For complexity analysis, the loop simplifies to:
    size = n;
    while (size > 0) {
        // O(1) stuff
        size = size / 2;    // integer arithmetic
    } // end while
So size takes the values n, n/2, n/4, n/8, ..., 2, 1, 0. How many steps does it take to go from n to 0, i.e. how many loop iterations? For simplicity, assume n is a power of 2: n = 2^k. Then there are approximately k steps in the worst case: the loop executes about log_2 n times. So binary search is O(log n), a logarithmic algorithm.

Logarithmic Complexity
Note that logarithmic complexities don't specify a base: O(log n) does not require one; it can be log_2 n, log_e n, or log_10 n. Why isn't the base specified? Logs to different bases differ only by a constant factor, e.g. log_10 n = log_2 n / log_2 10, and log_2 10 is a constant. So big-O notation simplifies logarithmic complexity by omitting the base of the logarithm. Note also that logarithmic complexity is typical of algorithms that use repeated division of the problem space.
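A sketch, not from the slides, that instruments the simplified halving loop and compares its iteration count with log_2 n; the class and variable names are illustrative. For n = 2^k the count is k + 1, which is O(log n).

    public class HalvingDemo {
        public static void main(String[] args) {
            // Count iterations of: size = n; while (size > 0) size /= 2;
            for (int n = 1; n <= 1_000_000; n *= 10) {
                int size = n, steps = 0;
                while (size > 0) {
                    size = size / 2;   // integer division, as in the analysis
                    steps++;
                }
                double log2n = Math.log(n) / Math.log(2);
                System.out.printf("n = %,d  steps = %d  log2(n) = %.1f%n",
                                  n, steps, log2n);
            }
        }
    }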

Sequential & Binary Search
Binary search is O(log n) and sequential search is O(n), so binary search's complexity is lower. Should we therefore always use binary search? What other factors might influence the decision?
- Is the array sorted? That is required for binary search.
- How large is the array, and how often will it be searched? If the array is large and will be searched repeatedly, complexity dictates binary search.
- For some problems there may be a tradeoff of running time versus programming time: the difficulty of programming a lower-complexity algorithm could be a consideration.

Nested Loops
To analyze simple nested loops, work outwards from the innermost part, as before:
    for (int i = 0; i < 2*n; i++) {        // 3: derive this: 2n * O(n), or O(n^2)
        for (int j = n; j >= 1; j--) {     // 2: check this: O(n)
            total += i * j;                // 1: examine this: O(1)
        } // end for j
    } // end for i
This O(n^2) analysis is easy, since the inner loop always runs the same number of times: the loops are independent.

Complexity of Selection Sort
    public static void selectionSort(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[j] < arr[min])
                    min = j;
            }
            swap(i, min, arr);    // O(1)
        }
    }
The innermost code is O(1); how many times is it executed? Whoops: a different number for each outer-loop iteration. The loops are not independent. The innermost O(1) code executes this many times:
    (n-1) + (n-2) + (n-3) + ... + 2 + 1
that is, the sum of i from 1 to n-1. Using the summation formula:
    sum_{i=1}^{n-1} i = n(n-1)/2 = O(n^2)
So selection sort is O(n^2).

Fundamental Step Shortcut
A shortcut analysis for nested loops:
1. Isolate the fundamental step: the line or block of code that executes the most times.
2. Determine how many times it will be executed.
This yields the complexity of the whole algorithm, but you have to choose the right fundamental step.
    public static void selectionSort(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[j] < arr[min])    // this one
                    min = j;
            }
            swap(i, min, arr);
        }
    }
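A sketch, not from the slides, that counts the fundamental step of selection sort, the comparison arr[j] < arr[min], and checks the count against n(n-1)/2; the class name and the inlined swap are illustrative.

    public class SelectionSortCount {
        // Selection sort instrumented to count its fundamental step.
        static long sortAndCount(int[] arr) {
            long comparisons = 0;
            for (int i = 0; i < arr.length - 1; i++) {
                int min = i;
                for (int j = i + 1; j < arr.length; j++) {
                    comparisons++;                 // the fundamental step
                    if (arr[j] < arr[min])
                        min = j;
                }
                int tmp = arr[i]; arr[i] = arr[min]; arr[min] = tmp;  // swap
            }
            return comparisons;
        }

        public static void main(String[] args) {
            for (int n = 10; n <= 10_000; n *= 10) {
                int[] arr = new java.util.Random(42).ints(n).toArray();
                System.out.printf("n = %,d  comparisons = %,d  n(n-1)/2 = %,d%n",
                                  n, sortAndCount(arr), (long) n * (n - 1) / 2);
            }
        }
    }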

Sorting Algorithm Complexities
Another iterative sort is bubble sort: like selection sort it is loop-based and iterative, and both are O(n^2). Other sorts can be faster, at O(n log n): the recursive sorts. We previewed quicksort; merge sort is coming soon. When picking a sorting algorithm, as with searching algorithms, we need to consider factors other than complexity: array size, frequency of the operation, programming clarity. For small arrays the complexity differences are not important, and programmer comfort may favor the iterative sorts; however, for large n we need the recursive sorts.

Sample Complexities
Common complexities:
    O(1)        constant       push onto a stack
    O(log n)    logarithmic    binary search
    O(n)        linear         sequential search
    O(n log n)  n log n        recursive sorts
    O(n^2)      quadratic      iterative sorts
    O(n^3)      cubic          matrix multiplication
    O(2^n)      exponential    bin packing
    O(b^n)      exponential    breadth-first search
    O(n!)       factorial      traveling salesman

Recursive Algorithms
Analysis of recursive algorithms requires another tool: recurrence relations. The important aspects of recurrence relations:
1. how to choose the recurrence relation that describes a recursive algorithm
2. how to "solve" the recurrence relation and convert it to big-O notation

Recursive Factorial
Analysis of a recursive factorial algorithm: let T(n) = the amount of time to compute factorial(n). Recall: we don't need an exact solution for T(n), just big-O. Base case: if n = 1, computing the factorial takes a small fixed amount of time; call that base-case time a, so T(1) = a.
    public static int factorial(int n) {
        int result;
        if (n <= 1)
            result = 1;                    // base case: a
        else
            result = n * factorial(n-1);
        return result;
    } // end factorial
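A sketch, not from the slides, instrumenting factorial to count its recursive calls; the call count grows linearly with n, matching the O(n) result derived next. The class name and the long return type (to avoid int overflow for n up to 20) are illustrative changes.

    public class FactorialCalls {
        static long calls = 0;   // number of factorial invocations

        static long factorial(int n) {
            calls++;
            if (n <= 1)
                return 1;
            return n * factorial(n - 1);
        }

        public static void main(String[] args) {
            // The call count equals n: one call per level of recursion.
            for (int n = 5; n <= 20; n += 5) {
                calls = 0;
                long f = factorial(n);
                System.out.println("n = " + n + ": n! = " + f + ", calls = " + calls);
            }
        }
    }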

Recursive Factorial (continued)
What about n > 1, the recursive case? To compute factorial(n), the method must:
    compare n <= 1            // part of b
    call factorial(n-1)       // T(n-1)
    multiply n * result       // part of b
    return the answer         // part of b
The time for all of this is some constant b (independent of n), plus the time it takes to execute factorial(n-1). So the time to compute factorial(n) is expressed as:
    T(n) = T(n-1) + b
This gives us the recurrence relation for T(n):
    T(1) = a
    T(n) = T(n-1) + b,  n > 1
But we need a closed form: we need to express the complexity of factorial with no T term on the right-hand side. An informal way to solve a recurrence relation is "unrolling" it: do successive substitutions for the recursion term and simplify.
    T(n) = T(n-1) + b
         = [T(n-2) + b] + b   = T(n-2) + 2b
         = [T(n-3) + b] + 2b  = T(n-3) + 3b
         = T(n-4) + 4b
         and so on
Detect a pattern: for any k (the number of unrolling steps),
    T(n) = T(n-k) + kb
The goal is to get rid of the T( ) term on the right-hand side, so choose k = n-1:
    T(n) = T(1) + (n-1)b = a + b(n-1) = O(n)
So recursive factorial is O(n).

Recursion Unrolling
The unrolling process is not a formal proof; the formal proof technique uses mathematical induction, which is not in our syllabus for CISC 121. However, we can check the informal result by testing with sample values: substitute some sample values of n into the recurrence relation and into the resulting solution.
    The recurrence relation: T(1) = a;  T(n) = T(n-1) + b, n > 1
    The solution:            T(n) = a + b(n-1), n >= 1
For n = 1, 2, 3:
    T(1) from the recurrence: a;                  from the solution: a + b*0 = a
    T(2) from the recurrence: T(1) + b = a + b;   from the solution: a + b
    T(3) from the recurrence: T(2) + b = a + 2b;  from the solution: a + 2b

Recursive Complexity
For a recursive method:
1. Let T(n) be the time to execute the method with parameter n, or with n a measure of the parameter's size.
2. Examine the method to find a recurrence relation for T(n): the base-case complexity plus the recursive case.
3. "Unroll" the recurrence relation to find a solution for T(n): repeatedly substitute and simplify the recursive steps until a pattern becomes apparent; then use the pattern to pick a value to substitute that eliminates the recurrence term.
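A sketch, not from the slides, that automates the sample-value check: it evaluates the recurrence T(1) = a, T(n) = T(n-1) + b directly and compares it with the closed form a + b(n-1). The constants a = 3 and b = 2 are arbitrary illustrative choices.

    public class RecurrenceCheck {
        static final long A = 3, B = 2;   // arbitrary illustrative constants

        // T(n) evaluated directly from the recurrence T(1)=a, T(n)=T(n-1)+b.
        static long recurrence(int n) {
            return (n <= 1) ? A : recurrence(n - 1) + B;
        }

        // The closed form obtained by unrolling: T(n) = a + b(n-1).
        static long closedForm(int n) {
            return A + B * (n - 1);
        }

        public static void main(String[] args) {
            for (int n = 1; n <= 10; n++)
                System.out.println("n = " + n + ": recurrence = " + recurrence(n)
                                   + ", closed form = " + closedForm(n));
        }
    }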

Recursive Complexity (continued)
Note: in the general case there may not be a solution for T(n); fortunately, there usually is one for the recurrence relations that arise from simple computer programs.

Recursive Selection Sort
A recursive version of selection sort:
    public static void recSort(int[] arr, int low, int hi) {
        if (low >= hi) return;                  // base case: T(1) = a
        int minIndex = low;                     // c term
        for (int i = low+1; i <= hi; i++) {     // bn term
            if (arr[i] < arr[minIndex])
                minIndex = i;
        } // end for
        swap(arr, low, minIndex);               // c term
        recSort(arr, low+1, hi);                // T(n-1) recurrence term
    } // end recSort
Analysis: let T(n) = the time to sort an array section of size n, where n (the number of elements left to sort) = hi - low + 1. T(1) = a for some constant a. For n > 1 there is a loop to find the smallest element, a swap, and a recursive call, so:
    T(n) = T(n-1) + bn + c,  for some constants b and c
Unroll:
    T(n) = T(n-1) + bn + c
         = [T(n-2) + b(n-1) + c] + bn + c              // substitute
         = T(n-2) + b[n + (n-1)] + 2c                  // simplify
         = [T(n-3) + b(n-2) + c] + b[n + (n-1)] + 2c   // substitute
         = T(n-3) + b[n + (n-1) + (n-2)] + 3c          // simplify
So, after k steps:
    T(n) = T(n-k) + b[n + (n-1) + ... + (n-k+1)] + kc
Substitute n-1 for k; after n-1 steps:
    T(n) = T(1) + b[n + (n-1) + ... + 2] + c(n-1) = O(n^2)
since sum_{i=1}^{n-1} i = n(n-1)/2 = O(n^2). So recursive selection sort is O(n^2).

Recursive Search
A recursive ternary search:
    public static int ternarySearch(int tgt, int[] a, int lo, int hi) {
        if (lo > hi)                                   // a
            return -1;
        int aThird = (hi - lo) / 3;                    // b
        int t1 = lo + aThird, t2 = hi - aThird;
        if (tgt == a[t1]) return t1;
        if (tgt == a[t2]) return t2;
        if (tgt < a[t1])
            return ternarySearch(tgt, a, lo, t1-1);    // T(n/3)
        if (tgt < a[t2])
            return ternarySearch(tgt, a, t1, t2-1);    // T(n/3)
        return ternarySearch(tgt, a, t2+1, hi);        // T(n/3)
    } // end ternarySearch
Let T(n) = the worst-case time to run ternarySearch, where n (the number of elements left to search) = hi - lo + 1. Then:
    T(n) = a,            if n <= 0
    T(n) = T(n/3) + b,   for n >= 1   (integer division for n/3)
It is useful to note that T(1) = T(0) + b = a + b. Unroll: repeatedly substitute and simplify.
    T(n) = T(n/3) + b
         = [T(n/9) + b] + b    = T(n/9) + 2b
         = [T(n/27) + b] + 2b  = T(n/27) + 3b
         = T(n/81) + 4b
After k steps:
    T(n) = T(n/3^k) + kb
For the closed form we want 3^k = n, i.e. k = log_3 n:
    T(n) = T(1) + b*log_3 n = a + b + b*log_3 n = O(log n)
Recursive ternary search is O(log n): the same as iterative binary search, and the same as recursive binary search.

Recursion versus Iteration
For our two example algorithms:
- selection sort: iterative O(n^2), recursive O(n^2)
- n-ary search: iterative O(log n), recursive O(log n)
The recursive implementation may increase constant factors slightly, but it does not change the big-O category. The choice of recursive versus iterative, therefore, is not based on complexity; it is based on ease and comfort in writing and understanding the code.

Summary: Recursion
We analyzed recursive algorithms with big-O:
- O(log n): ternary search
- O(n): factorial
- O(n^2): selection sort
See the Courseware readings for another: O(2^n), the Towers of Hanoi.
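To finish, a usage sketch, not from the slides, that exercises ternarySearch on a small sorted array; the wrapper class and test values are illustrative.

    public class TernarySearchDemo {
        // ternarySearch as above, with braces restored.
        public static int ternarySearch(int tgt, int[] a, int lo, int hi) {
            if (lo > hi) return -1;
            int aThird = (hi - lo) / 3;
            int t1 = lo + aThird, t2 = hi - aThird;
            if (tgt == a[t1]) return t1;
            if (tgt == a[t2]) return t2;
            if (tgt < a[t1]) return ternarySearch(tgt, a, lo, t1 - 1);
            if (tgt < a[t2]) return ternarySearch(tgt, a, t1, t2 - 1);
            return ternarySearch(tgt, a, t2 + 1, hi);
        }

        public static void main(String[] args) {
            int[] a = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};  // must be sorted
            System.out.println(ternarySearch(23, a, 0, a.length - 1));  // prints 5
            System.out.println(ternarySearch(7, a, 0, a.length - 1));   // prints -1
        }
    }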

Recursive Search let T(n) = worst case time to run ternarysearch with n (number of elements left to search) = hi lo + 1 T(n) = a, if n <= 0 T(n) = T(n/3) + b, for n > 1 (int division for n/3) if (lo > hi) return -1; int athird = (hi - lo) / 3; int t1 = lo + athird, t2 = hi - athird; if (tgt == a[t1]) return t1; if (tgt == a[t2]) return t2; if (tgt < a[t1]) return ternarysearch(tgt, a, lo, t1-1); if (tgt < a[t2]) return ternarysearch(tgt, a, t1, t2-1); return ternarysearch(tgt, a, t2 + 1, hi); Recursive Search T(n) = a if n < 1 T(n) = T(n/3) + b, for n > 1 it's useful to note: *T(1) = T(0) + b = a + b unroll: repeatedly substitute & simplify T(n) = T(n/3) + b = [T(n/9) + b] + b = T(n/9) + 2b = [T(n/27) + b] + 2b = T(n/27) + 3b = T(n/81) + 4b n after k steps: T ( n) = T + kb k 3 for closed form, we want 3 k = n, or k = log 3 n T(n) = *T(1) + b*log 3 n = a + b + b* log 3 n = O(log n) recursive ternary search is O(log n) same as the iterative binary search, same as recursive binary search CISC 121 Summer 2006 Computational Complexity 56 Recursion versus Iteration for 2 example algorithms: recursive & non-recursive versions of selection sort iterative - O(n 2 ), recursive - O(n 2 ) n-ary search iterative - O(log n), recursive - O(log n) the recursive implementation may increase constant factors slightly but does not change the big-o category so, the choice of recursive versus iterative therefore, is not based on complexity is based on ease/comfort in writing & understanding Summary: Recursion analyzed recursive algorithms with big-o O(log n): ternary search O(n): factorial O(n 2 ): selection sort see the Courseware readings for another O(2 n ): Towers of Hanoi CISC 121 Summer 2006 Computational Complexity 57 CISC 121 Summer 2006 Computational Complexity 58