Algorithms (2/17/2015)


Double Summations

Table 2 (4th Edition: Section 1.7; 5th Edition: Section 3.2; 6th and 7th Edition: Section 2.4) contains some very useful formulas for calculating sums. In the same section, Exercises 15 and 17 make a nice homework.

Enough Mathematical Appetizers!

Let us look at something more interesting: algorithms.

What is an algorithm? An algorithm is a finite set of precise instructions for performing a computation or for solving a problem. This is a rather vague definition. You will get to know a more precise and mathematically useful definition when you attend CS420 or CS620. But this one is good enough for now.

Properties of algorithms: input from a specified set, output from a specified set (the solution), definiteness of every step in the computation, correctness of the output for every possible input, finiteness of the number of calculation steps, effectiveness of each calculation step, and generality for a class of problems.

We will use a pseudocode to specify algorithms, which is slightly reminiscent of Basic and Pascal.

Example: an algorithm that finds the maximum element in a finite sequence

procedure max(a_1, a_2, ..., a_n: integers)
  max := a_1
  for i := 2 to n
    if max < a_i then max := a_i
{max is the largest element}

Another example: a linear search algorithm, that is, an algorithm that linearly searches a sequence for a particular element.

procedure linear_search(x: integer; a_1, a_2, ..., a_n: integers)
  i := 1
  while (i <= n and x != a_i)
    i := i + 1
  if i <= n then location := i
  else location := 0
{location is the subscript of the term that equals x, or is 0 if x is not found}
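As a runnable counterpart to the pseudocode, here is a minimal Python sketch of the two procedures (the function names find_max and linear_search, and the 1-based return value mirroring the location variable, are choices made for this sketch, not part of the slides):

def find_max(a):
    """Return the largest element of a non-empty sequence a."""
    largest = a[0]
    for i in range(1, len(a)):
        if largest < a[i]:
            largest = a[i]
    return largest

def linear_search(x, a):
    """Return the 1-based position of x in a, or 0 if x does not occur."""
    i = 0
    while i < len(a) and x != a[i]:
        i += 1
    return i + 1 if i < len(a) else 0

# Example use:
# find_max([3, 1, 4, 1, 5]) == 5
# linear_search(4, [3, 1, 4, 1, 5]) == 3
# linear_search(9, [3, 1, 4, 1, 5]) == 0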

If the terms in a sequence are ordered, a binary search algorithm is more efficient than linear search. The binary search algorithm iteratively restricts the relevant search interval until it closes in on the position of the element to be located.

[Example slides: the search interval is halved step by step until the element is found.]
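To get a feel for how quickly repeated halving closes in on a single position, the following small Python loop counts the halving steps (the starting size of 1,000 candidates is just an example value):

n = 1000               # number of candidate positions (example value)
steps = 0
while n > 1:
    n = (n + 1) // 2   # each pass of binary search keeps roughly half the candidates
    steps += 1
print(steps)           # prints 10, roughly log2(1000)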

procedure binary_search(x: integer; a_1, a_2, ..., a_n: integers)
  i := 1   {i is left endpoint of search interval}
  j := n   {j is right endpoint of search interval}
  while (i < j)
  begin
    m := floor((i + j)/2)
    if x > a_m then i := m + 1
    else j := m
  end
  if x = a_i then location := i
  else location := 0
{location is the subscript of the term that equals x, or is 0 if x is not found}

Obviously, on sorted sequences, binary search is more efficient than linear search.

How can we analyze the efficiency of algorithms? We can measure the time (number of elementary computations) and space (number of memory cells) that the algorithm requires. These measures are called time complexity and space complexity, respectively.

What is the time complexity of the linear search algorithm? We will determine the worst-case number of comparisons as a function of the number n of terms in the sequence. The worst case for the linear search algorithm occurs when the element to be located is not included in the sequence. In that case, every item in the sequence is compared to the element to be located.

Here is the linear search algorithm again:

procedure linear_search(x: integer; a_1, a_2, ..., a_n: integers)
  i := 1
  while (i <= n and x != a_i)
    i := i + 1
  if i <= n then location := i
  else location := 0

For n elements, the loop

  while (i <= n and x != a_i)
    i := i + 1

is processed n times, requiring 2n comparisons. When it is entered for the (n+1)th time, only the comparison i <= n is executed, and it terminates the loop. Finally, the comparison if i <= n then location := i is executed, so all in all we have a worst-case time complexity of 2n + 2.

Reminder: Binary Search Algorithm

procedure binary_search(x: integer; a_1, a_2, ..., a_n: integers)
  i := 1   {i is left endpoint of search interval}
  j := n   {j is right endpoint of search interval}
  while (i < j)
  begin
    m := floor((i + j)/2)
    if x > a_m then i := m + 1
    else j := m
  end
  if x = a_i then location := i
  else location := 0
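For experimentation, the pseudocode can be transcribed into Python as follows (the function name, the 1-based return value for location, and the guard for an empty sequence are additions of this sketch):

def binary_search(x, a):
    """Return the 1-based position of x in the sorted sequence a, or 0 if absent."""
    i, j = 1, len(a)
    while i < j:
        m = (i + j) // 2
        if x > a[m - 1]:      # a[m - 1] because Python lists are 0-based
            i = m + 1
        else:
            j = m
    return i if a and a[i - 1] == x else 0

# Example use:
# binary_search(17, [2, 5, 7, 11, 13, 17, 19]) == 6
# binary_search(6,  [2, 5, 7, 11, 13, 17, 19]) == 0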

What is the time complexity of the binary search algorithm? Again, we will determine the worst-case number of comparisons as a function of the number n of terms in the sequence.

Let us assume there are n = 2^k elements in the list, which means that k = log n. If n is not a power of 2, it can be considered part of a larger list, where 2^k < n < 2^(k+1).

In the first cycle of the loop (if x > a_m then i := m + 1 else j := m), the search interval is restricted to 2^(k-1) elements, using two comparisons. In the second cycle, the search interval is restricted to 2^(k-2) elements, again using two comparisons. This is repeated until there is only one (2^0) element left in the search interval. At this point, 2k comparisons have been conducted.

Then, the comparison i < j exits the loop, and a final comparison determines whether the element was found. Therefore, the overall time complexity of the binary search algorithm is 2k + 2 = 2 log n + 2.

In general, we are not so much interested in the time and space complexity for small inputs. For example, while the difference in time complexity between linear and binary search is meaningless for a sequence with n = 10, it is gigantic for n = 2^30.

For example, let us assume two algorithms A and B that solve the same class of problems. The time complexity of A is 5,000n, and the one for B is 1.1^n for an input with n elements.
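A few lines of Python reproduce, up to rounding, the numbers in the comparison table that follows; 1.1^n is evaluated via base-10 logarithms because computing it directly as a float would overflow for n = 1,000,000:

import math

# Compare the growth of A(n) = 5000*n and B(n) = 1.1**n for the table's input sizes.
for n in (10, 100, 1_000, 1_000_000):
    a = 5000 * n
    exp = n * math.log10(1.1)                        # 1.1**n = 10**exp
    b = f"{10 ** (exp - int(exp)):.1f}e{int(exp)}"   # mantissa and exponent, e.g. 2.5e41
    print(f"n = {n:>9,}: A = {a:>13,}   B ≈ {b}")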

Comparison: time complexity of algorithms A and B

  Input size n   Algorithm A: 5,000n   Algorithm B: 1.1^n
  10             50,000                3
  100            500,000               13,781
  1,000          5,000,000             2.5*10^41
  1,000,000      5*10^9                4.8*10^41392

This means that algorithm B cannot be used for large inputs, while running algorithm A is still feasible. So what is important is the growth of the complexity functions. The growth of time and space complexity with increasing input size n is a suitable measure for the comparison of algorithms.

The growth of functions is usually described using the big-O notation.

Definition: Let f and g be functions from the integers or the real numbers to the real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| <= C|g(x)| whenever x > k.

When we analyze the growth of complexity functions, f(x) and g(x) are always positive. Therefore, we can simplify the big-O requirement to f(x) <= C*g(x) whenever x > k. If we want to show that f(x) is O(g(x)), we only need to find one pair (C, k), which is never unique.

The idea behind the big-O notation is to establish an upper boundary for the growth of a function f(x) for large x. This boundary is specified by a function g(x) that is usually much simpler than f(x). We accept the constant C in the requirement f(x) <= C*g(x) whenever x > k, because C does not grow with x. We are only interested in large x, so it is OK if f(x) > C*g(x) for x <= k.

Example: Show that f(x) = x^2 + 2x + 1 is O(x^2).

For x > 1 we have:
  x^2 + 2x + 1 <= x^2 + 2x^2 + x^2,
  so x^2 + 2x + 1 <= 4x^2.
Therefore, for C = 4 and k = 1:
  f(x) <= C*x^2 whenever x > k.
Hence f(x) is O(x^2).
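As a quick sanity check of the example, one can verify the inequality numerically at a few sample points x > 1 (the sample points are arbitrary):

# Spot-check the bound x^2 + 2x + 1 <= 4x^2 from the example, for a few x > 1.
for x in (1.5, 2, 10, 1_000):
    assert x**2 + 2*x + 1 <= 4 * x**2
print("f(x) <= 4*x^2 holds at all sampled points x > 1")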