Lecture 4: Properties of and Rules for Asymptotic Big-Oh, Big-Omega, and Big-Theta Notation
Georgy Gimel'farb
COMPSCI 220 Algorithms and Data Structures

Outline
1. Time complexity
2. Big-Oh rules: scaling, transitivity, rule of sums, rule of products, limit rule
3. Examples

Time Complexity of Algorithms
If the running time T(n) is O(f(n)), then the function f measures the algorithm's time complexity.
Polynomial algorithms: T(n) is O(n^k) for some constant k.
Exponential algorithms: all others.
A problem is called intractable if no polynomial-time algorithm is known for its solution.

Time Complexity Growth
Approximate number of data items processed within a given time, for an algorithm that handles 10 items per minute:

f(n)         1 minute    1 day       1 year      1 century
n            10          14,400      5.3×10^6    5.3×10^8
n log10 n    10          4,000       8.8×10^5    6.7×10^7
n^1.5        10          1.3×10^3    6.5×10^4    1.4×10^6
n^2          10          380         7.3×10^3    7.3×10^4
n^3          10          110         810         3.7×10^3
2^n          10          20          29          35

Beware exponential complexity! A linear, O(n), algorithm processing 10 items per minute can process 1.4×10^4 items per day, 5.3×10^6 items per year, and 5.3×10^8 items per century. An exponential, O(2^n), algorithm processing 10 items per minute can process only 20 items per day and only 35 items per century.
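
The table can be reproduced with a short computation. Below is a minimal Python sketch; the operation-budget model and the helper name largest_n are illustrative assumptions: an algorithm of cost f(n) is taken to perform f(10) "operations" per minute, and we binary-search for the largest n whose cost fits the budget of each period.

```python
import math

def largest_n(f, minutes, hi=10**12):
    """Largest n with f(n) <= f(10) * minutes, found by binary search (f is increasing)."""
    budget = f(10) * minutes              # operations affordable in the given time
    lo, best = 1, 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if f(mid) <= budget:
            best, lo = mid, mid + 1       # mid fits the budget: try a larger n
        else:
            hi = mid - 1
    return best

growth = {
    "n":         lambda n: n,
    "n log10 n": lambda n: n * math.log10(n),
    "n^1.5":     lambda n: n ** 1.5,
    "n^2":       lambda n: n ** 2,
    "n^3":       lambda n: n ** 3,
    "2^n":       lambda n: 2.0 ** n if n < 1000 else math.inf,  # cap to avoid float overflow
}
# minutes per day / year (365.25 days) / century
periods = {"1 minute": 1, "1 day": 1440, "1 year": 525_960, "1 century": 52_596_000}

for name, f in growth.items():
    print(f"{name:>10}:", "  ".join(f"{largest_n(f, m):.3g}" for m in periods.values()))
```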

Big-Oh vs. Actual Running Time: Example 1
Algorithms A and B have running times T_A(n) = 20n time units and T_B(n) = 0.1 n log2 n time units, respectively. In the Big-Oh sense, the linear algorithm A is better than the linearithmic algorithm B. But on what data volume does A actually outperform B, i.e., for which n is the running time of A less than that of B?
T_A(n) < T_B(n) if 20n < 0.1 n log2 n, i.e., if log2 n > 200, that is, when n > 2^200 ≈ 1.6×10^60.
Thus, in all practical cases algorithm B is better than A.
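
A quick numeric check of this crossover is shown below; it is a sketch that simply assumes the two cost models stated above and uses exact integers so that 2^200 stays representable.

```python
import math

def T_A(n): return 20 * n                      # linear algorithm A
def T_B(n): return 0.1 * n * math.log2(n)      # linearithmic algorithm B

for n in (10**6, 10**9, 10**18, 2**210):
    winner = "A" if T_A(n) < T_B(n) else "B"
    print(f"n = {float(n):.3g}: faster algorithm is {winner}")
# B wins for every practically reachable n; A only overtakes beyond n > 2^200 (about 1.6e60).
```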

Big-Oh vs. Actual Running Time: Example 2
Algorithms A and B have running times T_A(n) = 20n time units and T_B(n) = 0.1 n^2 time units, respectively. In the Big-Oh sense, the linear algorithm A is better than the quadratic algorithm B. But on what data volume does A actually outperform B?
T_A(n) < T_B(n) if 20n < 0.1 n^2, i.e., when n > 200.
Thus algorithm A is better than B in most practical cases, except for n < 200, where B is faster.
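
The same kind of check for Example 2 (again assuming the stated cost models); here the crossover is small, so a brute-force scan suffices.

```python
# First input size at which the linear algorithm A (20n) beats the quadratic B (0.1 n^2).
crossover = next(n for n in range(1, 10_000) if 20 * n < 0.1 * n * n)
print(crossover)   # 201: A wins for n > 200, while B is faster for n < 200
```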

Big-Oh: Scaling
Scaling (Lemma 1.15, p. 15): For all constant factors c > 0, the function c·f(n) is O(f(n)); in shorthand, cf is O(f).
Proof: c·f(n) < (c + ε)·f(n) holds for all n > 0 and every ε > 0.
Constant factors are ignored: only the powers and functions of n matter. This ignoring of constant factors is precisely what motivates the notation. In particular, f is O(f).
Examples: 50n is O(n); 0.05n is O(n); 50,000,000n is O(n); 0.0000005n is O(n).
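
A tiny numeric "witness check" of the scaling lemma under the usual definition that f is O(g) when f(n) ≤ c·g(n) for all n > n0. Sampling a finite range is only a sanity check, not a proof; the helper name holds and the constants are illustrative assumptions.

```python
def holds(f, g, c, n0, n_max=100_000):
    """Check f(n) <= c * g(n) for every sampled n with n0 < n <= n_max."""
    return all(f(n) <= c * g(n) for n in range(n0 + 1, n_max + 1))

# Scaling: 50,000,000 * n is O(n); witnesses c = 50,000,001 (any c > 50,000,000 works), n0 = 0.
print(holds(lambda n: 50_000_000 * n, lambda n: n, c=50_000_001, n0=0))   # True
```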

Big-Oh: Transitivity
Transitivity (Lemma 1.16, p. 15): If h is O(g) and g is O(f), then h is O(f).
Informally: if h grows at most as fast as g, which grows at most as fast as f, then h grows at most as fast as f.
Examples:
- h is O(g) and g is O(n^2), hence h is O(n^2)
- log10 n is O(n^0.01) and n^0.01 is O(n), hence log10 n is O(n)
- 2^n is O(3^n) and n^50 is O(2^n), hence n^50 is O(3^n)
Proof: If h(n) ≤ c1·g(n) for n > n1 and g(n) ≤ c2·f(n) for n > n2, then h(n) ≤ c1·c2·f(n) for n > max{n1, n2}; take c = c1·c2 and n0 = max{n1, n2}.
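
The proof's recipe (multiply the constants, take the larger threshold) can be checked numerically on a concrete triple of functions; the functions and constants below are made up purely for illustration.

```python
# h(n) = 2n + 5 <= 7*g(n) for n >= 1 (so h is O(g) with c1 = 7, n1 = 1),
# g(n) = n <= 1*f(n) = n^2 for n >= 1 (c2 = 1, n2 = 1);
# transitivity then gives h(n) <= c1*c2*f(n) = 7*n^2 for n >= 1.
h, g, f = (lambda n: 2 * n + 5), (lambda n: n), (lambda n: n * n)
print(all(h(n) <= 7 * g(n) and g(n) <= f(n) and h(n) <= 7 * f(n)
          for n in range(1, 100_000)))   # True
```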

Big-Oh: Rule of Sums
Rule of sums (Lemma 1.17, p. 15): If g1 is O(f1) and g2 is O(f2), then g1 + g2 is O(max{f1, f2}).
The sum grows as its fastest-growing term:
- If g is O(f) and h is O(f), then g + h is O(f).
- If g is O(f), then g + f is O(f).
Examples:
- If h is O(n) and g is O(n^2), then g + h is O(n^2).
- If h is O(n log n) and g is O(n), then g + h is O(n log n).
Proof: If g1(n) ≤ c1·f1(n) for n > n1 and g2(n) ≤ c2·f2(n) for n > n2, then
g1(n) + g2(n) ≤ c1·f1(n) + c2·f2(n) ≤ max{c1, c2}·(f1(n) + f2(n)) ≤ 2·max{c1, c2}·max{f1(n), f2(n)}
for n > max{n1, n2}; take c = 2·max{c1, c2} and n0 = max{n1, n2}.
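
A numeric illustration of the constants this proof produces, using made-up functions (c = 2·max{c1, c2}, n0 = max{n1, n2}):

```python
# g1(n) = 3n^2 is O(n^2) with c1 = 3, and g2(n) = 100n is O(n) with c2 = 100;
# the rule of sums bounds g1 + g2 by 2*max{c1, c2} * max{n^2, n} = 200*n^2 for n >= 1.
g1, g2 = (lambda n: 3 * n * n), (lambda n: 100 * n)
print(all(g1(n) + g2(n) <= 200 * max(n * n, n) for n in range(1, 100_000)))   # True
```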

Big-Oh: Rule of Products
Rule of products (Lemma 1.18, p. 16): If g1 is O(f1) and g2 is O(f2), then g1·g2 is O(f1·f2).
The product of upper bounds gives an upper bound for the product of the functions:
- If g is O(f) and h is O(f), then g·h is O(f^2).
- If g is O(f), then g·h is O(f·h).
Examples:
- If h is O(n) and g is O(n^2), then g·h is O(n^3).
- If h is O(log n) and g is O(n), then g·h is O(n log n).
Proof: If g1(n) ≤ c1·f1(n) for n > n1 and g2(n) ≤ c2·f2(n) for n > n2, then g1(n)·g2(n) ≤ c1·c2·f1(n)·f2(n) for n > max{n1, n2}; take c = c1·c2 and n0 = max{n1, n2}.
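
An analogous numeric check for the rule of products, again with made-up functions and illustrative constants (c = c1·c2):

```python
import math

# g1(n) = 4n + 7 is O(n): 4n + 7 <= 5n for n >= 7 (c1 = 5, n1 = 7);
# g2(n) = 2*log2(n) + 1 is O(log n): <= 3*log2(n) for n >= 2 (c2 = 3, n2 = 2).
# The rule of products then gives g1*g2 <= 15 * n*log2(n) for n >= max{7, 2}.
g1, g2 = (lambda n: 4 * n + 7), (lambda n: 2 * math.log2(n) + 1)
print(all(g1(n) * g2(n) <= 15 * n * math.log2(n) for n in range(7, 100_000)))   # True
```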

Big-Oh: The Limit Rule
Suppose the limit L = lim_{n→∞} f(n)/g(n) exists (it may be infinite). Then:
- if L = 0, then f is O(g);
- if 0 < L < ∞, then f is Θ(g);
- if L = ∞, then f is Ω(g).
When f and g are positive and differentiable functions for x > 0, but lim_{x→∞} f(x) = lim_{x→∞} g(x) = ∞ or lim_{x→∞} f(x) = lim_{x→∞} g(x) = 0, the limit L can be computed using the standard L'Hôpital rule of calculus:
lim_{x→∞} f(x)/g(x) = lim_{x→∞} f'(x)/g'(x),
where z'(x) denotes the first derivative of the function z(x).
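
The limit rule is easy to apply mechanically with a computer algebra system. Here is a small sketch using SymPy (an assumed third-party dependency, installable with `pip install sympy`; the helper name compare is illustrative):

```python
import sympy as sp

n = sp.symbols('n', positive=True)

def compare(f, g):
    """Classify f against g by the limit rule: 0 -> O, finite positive -> Theta, oo -> Omega."""
    L = sp.limit(f / g, n, sp.oo)
    if L == 0:
        return "f is O(g)"
    if L == sp.oo:
        return "f is Omega(g)"
    return f"f is Theta(g)  (limit = {L})"

print(compare(sp.log(n), n**sp.Rational(1, 100)))   # log n is O(n^0.01)
print(compare(3*n**2 + n, n**2))                    # Theta(n^2), limit 3
print(compare(n**50, 2**n))                         # n^50 is O(2^n)
```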

Examples 1.22 and 1.23 (Textbook, p. 16)
Example 1.22: Exponential functions grow faster than powers: n^k is O(b^n) for all b > 1 and k ≥ 0.
Proof: by induction, or by the limit rule (the L'Hôpital approach), differentiating n^k and b^n with respect to n, k times in succession:
lim_{n→∞} n^k / b^n = lim_{n→∞} k·n^(k-1) / (b^n ln b) = lim_{n→∞} k(k-1)·n^(k-2) / (b^n (ln b)^2) = ... = lim_{n→∞} k! / (b^n (ln b)^k) = 0.
Example 1.23: Logarithmic functions grow slower than powers: log_b n is O(n^k) for all b > 1 and k > 0.
Proof: this follows from Example 1.22, since the logarithm is the inverse of the exponential function. As a consequence, log n is O(n) and n log n is O(n^2). Also, log_b n is O(log n) for every b > 1, because log_b n = (log_b a)·(log_a n) for any fixed base a.
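
A numeric illustration of Example 1.22 with k = 50 and b = 2: the ratio n^50 / 2^n grows enormously at first, but collapses towards zero once the exponential takes over (exact integer arithmetic keeps the intermediate powers representable).

```python
for n in (10, 100, 500, 1000):
    print(n, n**50 / 2**n)   # ratio n^k / b^n with k = 50, b = 2
# 10    ~9.8e+46
# 100   ~7.9e+69   (still growing)
# 500   ~2.7e-16   (the exponential has taken over)
# 1000  ~9.3e-152
```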
