Solutions to Homework 6

Section 3.1

4. Set the answer to be -infinity. For i going from 1 through n - 1, compute the value of the (i + 1)st element in the list minus the ith element in the list. If this is larger than the answer, reset the answer to be this value.

procedure largestdifference(a_1, a_2, ..., a_n: integers)
    answer = -infinity
    for i = 1 to n-1
        value = a_{i+1} - a_i
        if value > answer then answer = value
{output: answer}

15. We need to find where x goes, then slide the rest of the list down to make room for x, then put x into the space created. In the procedure that follows, we employ the trick of temporarily tacking x + 1 onto the end of the list, so that the while loop will always terminate. Also note that the indexing in the for loop is slightly tricky, since we need to work from the back of the list toward the front.

procedure insert(x, a_1, a_2, ..., a_n: integers)
    a_{n+1} = x + 1
    i = 1
    while x > a_i
        i = i + 1
    for j = 0 to n - i
        a_{n-j+1} = a_{n-j}

    a_i = x

16. We let min be the smallest element found so far. At the end, it is the smallest element, since we update it as necessary as we scan through the list.

procedure smallest(a_1, a_2, ..., a_n: natural numbers)
    min = a_1
    for i = 2 to n
        if a_i < min then min = a_i
{min is the smallest integer among the input}

18.
procedure lastsmallest(a_1, a_2, ..., a_n: integers)
    min = a_1
    location = 1
    for i = 2 to n
        if min >= a_i then
            min = a_i
            location = i
{location is the location of the last occurrence of the smallest element in the list}

28. This could be thought of as just doing two iterations of binary search at once. We compare the sought-after element to the middle element in the still-active portion of the list, and then to the middle element of either the top half or the bottom half. This restricts the subsequent search to one of four sublists, each about one-quarter the size of the previous list. We need to stop when the list has length three or less and make explicit checks. Here is the pseudocode.

procedure tetrary search(x: integer; a_1, ..., a_n: increasing integers)

    i = 1
    j = n
    while i < j - 2
        l = floor((3i + j)/4)
        m = floor((i + j)/2)
        u = floor((i + 3j)/4)
        if x > a_m then
            if x <= a_u then
                i = m + 1
                j = u
            else i = u + 1
        else if x > a_l then
            i = l + 1
            j = m
        else j = l
    if x = a_i then location = i
    else if x = a_j then location = j
    else if x = a_{floor((i+j)/2)} then location = floor((i+j)/2)
    else location = 0
{location is the subscript of the term equal to x (0 if not found)}

31. The following algorithm goes through the terms of the sequence one by one, and for each term compares it to all previous terms. If it finds a match, then it stores the subscript of that term in location and terminates the search. If no match is ever found, then location is set to 0.

procedure find duplicate(a_1, a_2, ..., a_n: integers)
    location = 0
    i = 2
    while i <= n and location = 0
        j = 1

        while j < i and location = 0
            if a_i = a_j then location = i
            else j = j + 1
        i = i + 1
{location is the subscript of the first value that repeats a previous value in the sequence, and is 0 if there is no such value}

57(a). We keep track of a variable f giving the finishing time of the talk last selected, starting out with f equal to the time the hall becomes available. We ignore any time that might be needed to clear the hall between talks. We order the talks in increasing order of their ending times and start at the top of the list. At each stage of the algorithm, we go down the list of talks from where we left off and find the first one whose starting time is not less than f. We schedule that talk and update f to record its finishing time.

Section 3.3

4. If we successively square k times, then we have computed x^(2^k). Thus we can compute x^(2^k) with only k multiplications, rather than the 2^k - 1 multiplications that the naive algorithm would require, so this method is much more efficient.

7. (a) Here we have n = 2, a_0 = 1, a_1 = 1, a_2 = 3, and c = 2. Initially, we set power equal to 1 and y equal to 1. The first time through the for loop, power becomes 2, and so y becomes 1 + 1*2 = 3. The second and final time through the loop, power becomes 2*2 = 4, and y becomes 3 + 3*4 = 15. Thus the value of the polynomial at x = 2 is 15.

(b) Each pass through the loop requires two multiplications and one addition. Therefore there are a total of 2n multiplications and n additions in all.

12. (a) The number of comparisons does not depend on the values of a_1 through a_n. Exactly 2n - 1 comparisons are used, as was determined in Example 1. In other words, the best case performance is O(n).
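The two computations above, repeated squaring from exercise 4 and the polynomial-evaluation loop from exercise 7, can be sketched in Python. The function names here are our own, not the book's:

```python
def power_by_squaring(x, k):
    """Compute x^(2^k) by squaring k times: only k multiplications."""
    for _ in range(k):
        x = x * x
    return x

def poly_eval(coeffs, c):
    """Evaluate a_0 + a_1*c + ... + a_n*c^n with the loop from exercise 7:
    each pass does two multiplications (update power, form a_i*power)
    and one addition, so 2n multiplications and n additions in all."""
    y = coeffs[0]          # a_0
    power = 1
    for a in coeffs[1:]:
        power = power * c  # power = c^i
        y = y + a * power  # accumulate a_i * c^i
    return y
```

With the data of exercise 7(a), `poly_eval([1, 1, 3], 2)` traces exactly the steps shown above and returns 15.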

(b) In the best case x = a_1. We saw in Example 4 that three comparisons are used in that case. The best case performance, then, is O(1).

(c) It is hard to give an exact answer, since it depends on the binary representation of the number n, among other things. In any case, the best case performance is really not much different from the worst case performance, namely O(log n), since the list is essentially cut in half at each iteration, and the algorithm does not stop until the list has only one element left in it.

13. If the element is not in the list, then 2n + 2 comparisons are needed: two for each pass through the loop, one more to get out of the loop, and one more for the statement just after the loop. If the element is in the list, say as the ith element, then we need to enter the loop i times, each time costing two comparisons, and use one comparison for the final assignment of location. Thus 2i + 1 comparisons are needed. We average the numbers 2i + 1 (for i from 1 to n) to find the average number of comparisons needed for a successful search. This is

    (3 + 5 + ... + (2n + 1))/n = (n + (2 + 4 + ... + 2n))/n
                               = (n + 2(1 + 2 + ... + n))/n
                               = (n + 2(n(n + 1)/2))/n
                               = n + 2.

Finally, we average the 2n + 2 comparisons for an unsuccessful search with this average of n + 2 comparisons for a successful search to obtain a grand average of (2n + 2 + n + 2)/2 = (3n + 4)/2 comparisons.

16. We will count comparisons of elements in the list to x. Furthermore, we will assume that the number of elements in the list is a power of 4, say n = 4^k. Just as in the case of binary search, we need to determine the maximum number of times the while loop is iterated. Each pass through the loop cuts the number of elements still being considered by a factor of 4. Therefore after k iterations, the active portion of the list will have length 1; that is, we will have i = j. The loop terminates at this point. Now each iteration of the loop requires two comparisons in the worst case. Three more comparisons are needed at the end. Therefore the number of comparisons is 2k + 3, which is O(k). But k = log_4 n, which is O(log n), since logarithms to different bases differ only by multiplicative constants, so the time complexity of this algorithm is O(log n).
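The four-way search analyzed above (exercise 28 of Section 3.1) can be sketched in Python. This is our own translation, assuming 0-indexed lists, with a hypothetical name `tetrary_search`; the final sublist of at most three elements is checked with a short loop instead of three explicit comparisons:

```python
def tetrary_search(x, a):
    """Search the sorted list a for x by four-way splitting.
    Each pass keeps roughly a quarter of the active portion.
    Returns the index of x, or None if x is absent."""
    i, j = 0, len(a) - 1
    while i < j - 2:                   # more than three elements remain
        l = i + (j - i) // 4           # first quartile
        m = i + (j - i) // 2           # midpoint
        u = i + 3 * (j - i) // 4       # third quartile
        if x > a[m]:
            if x <= a[u]:
                i, j = m + 1, u        # x in (a[m], a[u]]
            else:
                i = u + 1              # x beyond a[u]
        elif x > a[l]:
            i, j = l + 1, m            # x in (a[l], a[m]]
        else:
            j = l                      # x at or below a[l]
    # at most three candidates left; check them explicitly
    for p in range(i, j + 1):
        if a[p] == x:
            return p
    return None
```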

19. The worst case is the one in which we do not find any term equal to some previous term. In that case, we need to go through all the terms a_2 through a_n, and for each of those, we need to go through all the terms occurring prior to that term. Thus the inner loop of our algorithm is executed once for i = 2, twice for i = 3, three times for i = 4, and so on, up to n - 1 times for i = n. Thus the number of comparisons that need to be made in the inner loop is 1 + 2 + 3 + ... + (n - 1). As was mentioned in this section, that sum is (n - 1)n/2, which is clearly O(n^2) but no better.

27. (a) The linear search algorithm uses about n comparisons for a list of length n, and 2n comparisons for a list of length 2n. Therefore the number of comparisons, like the size of the list, doubles.

(b) The binary search algorithm uses about log n comparisons for a list of length n, and log(2n) = log 2 + log n = 1 + log n comparisons for a list of length 2n. Therefore the number of comparisons increases by about 1.
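The doubling behavior in exercise 27 is easy to observe by instrumenting the two searches. A minimal Python sketch (our own helper names; each returns the found index, or None, together with the number of comparisons of x to list elements):

```python
def linear_search(x, a):
    """Scan left to right, counting element comparisons."""
    comparisons = 0
    for i, item in enumerate(a):
        comparisons += 1
        if item == x:
            return i, comparisons
    return None, comparisons

def binary_search(x, a):
    """Binary search a nonempty sorted list, counting one element
    comparison per loop iteration (the x > a[m] test)."""
    i, j = 0, len(a) - 1
    comparisons = 0
    while i < j:
        m = (i + j) // 2
        comparisons += 1
        if x > a[m]:
            i = m + 1
        else:
            j = m
    return (i if a[i] == x else None), comparisons
```

Searching for the last element of a sorted list of length 16 and then of length 32: linear search goes from 16 to 32 comparisons (it doubles), while the instrumented binary search goes from 4 to 5 (it increases by 1), matching parts (a) and (b).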