MIDTERM A Solutions. CMPS 102, Spring 2015, Warmuth


MIDTERM A Solutions
CMPS 102 - Spring 2015, Warmuth

NAME:
Student ID:

This exam is closed book, notes, computer and cell phone.
Show partial solutions to get partial credit.
Make sure you have answered all parts of a question.
If your solutions are not written legibly, you won't get full credit.
Clarity and succinctness will be rewarded.

Question 1: (out of 15)
Question 2: (out of 10)
Question 3: (out of 15)
Question 4: (out of 15)
Question 5: (out of 15)
Question 6: (out of 15)
Question 7: (out of 15)
Extra Credit 8: (out of 10)
Total: (out of 110)

1. Short questions:

(a) What is the recurrence relation and worst-case running time of MergeSort?
The recurrence relation is T(n) = 2T(n/2) + O(n). The worst-case running time is O(n log n).

(b) What is the worst and average case running time of QuickSelect?
The worst and average case running times are O(n^2) and O(n), respectively.

(c) What is the number of bit operations of the traditional algorithm for multiplying two n bit numbers?
We multiply every bit of one number with every bit of the other number. Thus the number of bit operations is O(n^2).
What is the recurrence for the number of bit operations of Karatsuba's Divide and Conquer type multiplication algorithm for two n bit numbers?
T(n) = 3T(n/2) + O(n).
What is the solution of Karatsuba's recurrence in Θ-notation as a function of n?
Θ(n^(log_2 3)).

(d) What is the running time of the standard algorithm for multiplying two polynomials of degree at most n-1?
We need to multiply every term of one polynomial with every term of the other polynomial. Thus the running time is O(n^2).
Give a flow diagram of how this can be done with FFT and give the running time of each step.
[flow diagram not reproduced in the transcription] Evaluate A and B at the (2n)th roots of unity with the FFT in O(n log n) time, multiply the values pointwise, C(ω^j) = A(ω^j) · B(ω^j), in O(n) time, and interpolate C from these values with the inverse FFT in O(n log n) time. Here ω is the complex (2n)th root of unity, i.e. ω = exp((2π/(2n)) I).

(e) Give a high level description of Prim's greedy algorithm for finding a Minimum Cost Spanning Tree.
Given a simple graph G = (V, E) and an arbitrary vertex v ∈ V, we start with S = {v} and MST = ∅. At each step, among all edges with one end in S and one end in V - S, we pick the one with minimum weight, say e = (u, w) such that u ∈ S and w ∈ V - S. Then we add e to MST and w to S. Repeat n - 1 times, until S = V.
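
To illustrate (c), here is a minimal Python sketch of Karatsuba's divide-and-conquer multiplication (my addition, not part of the original solutions); the decimal split point and the single-digit base case are choices of this sketch.

def karatsuba(x, y):
    # Multiply nonnegative integers x and y using 3 recursive subproducts.
    # Recurrence: T(n) = 3T(n/2) + O(n), hence Theta(n^(log_2 3)) digit operations.
    if x < 10 or y < 10:                          # base case: single digit
        return x * y
    m = max(len(str(x)), len(str(y))) // 2        # split roughly in half
    xh, xl = divmod(x, 10 ** m)
    yh, yl = divmod(y, 10 ** m)
    a = karatsuba(xh, yh)                         # high * high
    b = karatsuba(xl, yl)                         # low * low
    c = karatsuba(xh + xl, yh + yl) - a - b       # cross terms from one extra product
    return a * 10 ** (2 * m) + c * 10 ** m + b

assert karatsuba(1234, 5678) == 1234 * 5678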

2. Suppose Dijkstra's algorithm is run on the following graph, starting at node A.

[graph figure: eight vertices, A, B, C, D in the top row and E, F, G, H in the bottom row, joined by weighted edges; the drawing is not recoverable from this transcription]

(a) Draw a table showing the intermediate distance values of all nodes at each iteration of the algorithm.

[in the table below the ∞ entries are restored, but every value that contained the digit 2 was lost in the transcription, so rows 1-8 are each missing one entry and their column alignment is uncertain]

Iteration:  A  B  C  D  E  F  G  H
# 0:        0  ∞  ∞  ∞  ∞  ∞  ∞  ∞
# 1:        0  5  ∞  ∞  7  ∞  ∞
# 2:        0  5  ∞  ∞  3  ∞  ∞
# 3:        0  4  ∞  ∞  3  7  ∞
# 4:        0  4  7  ∞  3  6  ∞
# 5:        0  4  7  ∞  3  6  ∞
# 6:        0  4  7  9  3  6  10
# 7:        0  4  7  9  3  6  10
# 8:        0  4  7  9  3  6  10

(b) Also mark in each iteration for which nodes you already know the shortest path from the source.
These are the entries of the table that were set in bold and underlined in the original solution; that marking does not survive in this transcription.

(c) Show the final shortest path tree.

[shortest path tree figure on the vertices A-H; not recoverable from this transcription]
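
The Python sketch below (my addition, not from the exam) shows how such a table of tentative distances is produced, one row per vertex extracted by Dijkstra's algorithm; the example graph is hypothetical, since the exam's figure did not survive the transcription.

import heapq

def dijkstra_table(graph, source):
    # graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    # Returns one row of tentative distances per iteration.
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    done = set()
    heap = [(0, source)]
    rows = [dict(dist)]                       # iteration 0: only the source is 0
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)                           # shortest path to u is now known
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
        rows.append(dict(dist))               # one table row per extracted vertex
    return rows

# Hypothetical example graph (not the one from the exam figure).
g = {'A': [('B', 5), ('E', 2)], 'B': [('A', 5), ('F', 1)],
     'E': [('A', 2), ('F', 3)], 'F': [('B', 1), ('E', 3)]}
for i, row in enumerate(dijkstra_table(g, 'A')):
    print(i, row)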

3. What are the solutions to the following three recurrences in Θ notation (T(1) = 1 by default)?

(a) T(n) = 7T(n/8) + n
We have a = 7 and b = 8, thus log_b a = log_8 7 < 1. Also f(n) = n = Ω(n^(log_8 7 + ε)) for some constant ε > 0, and the regularity condition holds since 7 f(n/8) = (7/8) n ≤ c n with c = 7/8 < 1. Applying Case (iii) of the Master Theorem, we obtain T(n) = Θ(n).

(b) T(n) = 4T(n/2) + n
We have a = 4 and b = 2, thus log_b a = log_2 4 = 2 > 1. Also f(n) = n = O(n^(2-1)). Applying Case (i) of the Master Theorem, we obtain T(n) = Θ(n^2).

(c) T(n) = 3T(n/2) + n
We have a = 3 and b = 2, thus log_b a = log_2 3 > 1. Also f(n) = n = O(n^(log_2 3 - ε)) for some constant ε > 0. Applying Case (i) of the Master Theorem, we obtain T(n) = Θ(n^(log_2 3)).

You gain partial credit for correct work towards an incorrect solution by showing your work. The Master Theorem is provided below.

Master Theorem: Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence T(n) = a T(n/b) + f(n), where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows:
(i) If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
(ii) If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) log n).
(iii) If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a f(n/b) ≤ c f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
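
A quick numerical sanity check of these answers (my addition, not part of the exam): evaluate each recurrence exactly with memoization and divide by the claimed bound; the ratios settle near a constant as n grows.

from functools import lru_cache
from math import log

def make_T(a, b):
    @lru_cache(maxsize=None)
    def T(n):
        # T(1) = 1 by default; n/b is interpreted as floor(n/b)
        return 1 if n <= 1 else a * T(n // b) + n
    return T

cases = [(7, 8, lambda n: n),                 # (a) Theta(n)
         (4, 2, lambda n: n ** 2),            # (b) Theta(n^2)
         (3, 2, lambda n: n ** log(3, 2))]    # (c) Theta(n^(log_2 3))
for a, b, bound in cases:
    T = make_T(a, b)
    print([round(T(n) / bound(n), 2) for n in (2 ** 10, 2 ** 14, 2 ** 18)])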

4. Reason that the following recurrence is O(n):

T(n) = T(n/3) + T(n/4) + 3n   if n > 1
T(n) = O(1)                   otherwise

You are allowed to ignore rounding problems. Hints: Expand the recurrence and draw a tree. Now sum the approximated cost per level in a geometric sum. Note that you can approximate your sum from above as long as the final result is O(n).

[recursion tree figure: the root T(n) contributes 3n; its children T(n/3) and T(n/4) together contribute (1/3 + 1/4)(3n) = (7/12)(3n); in general, level i contributes at most (7/12)^i (3n)]

Note that the T(n/4) branch reaches T(1) earlier than the T(n/3) branch does, so we have an unbalanced binary tree. In order to obtain an upper bound, we count the costs of the tree down to depth d = log_3 n. Note that the number of leaves is at most 2^d. Therefore we have:

T(n) ≤ 2^d T(1) + Σ_{i=0}^{d-1} (7/12)^i (3n)
     = 2^(log_3 n) T(1) + 3n Σ_{i=0}^{d-1} (7/12)^i
     ≤ n^(log_3 2) T(1) + 3n Σ_{i=0}^{∞} (7/12)^i
     ≤ n T(1) + 3n · 1/(1 - 7/12),

where the last step uses log_3 2 < 1. Thus T(n) is upper-bounded by a linear function and therefore T(n) = O(n).
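
As a numerical check of the O(n) claim (my addition, using floors, which the problem says we may ignore), the exact values of the recurrence stay within a small constant multiple of n; the analysis above predicts a constant of roughly 3/(1 - 7/12) = 7.2.

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(n/3) + T(n/4) + 3n for n > 1, and 1 otherwise (floors used here)
    return 1 if n <= 1 else T(n // 3) + T(n // 4) + 3 * n

print([round(T(10 ** k) / 10 ** k, 2) for k in range(1, 7)])   # ratios T(n)/n stay bounded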

5. Modified Hadamard matrices H_0, H_1, H_2, ... are defined recursively as follows: H_0 is the 1 × 1 matrix [1]. For k > 0, H_k is the 2^k × 2^k matrix

H_k = [ H_{k-1}   3H_{k-1} ]
      [ H_{k-1}   4H_{k-1} ]

Give a Divide and Conquer Algorithm for multiplying such matrices with a column vector on the right. If n = 2^k is the dimension of your square matrix, then the matrix vector multiplication is to run in O(n log n) time. Describe the essence of the Divide and Conquer approach. What is the recurrence and the overall running time?

Let v be an arbitrary column vector of size n = 2^k. Also let v_u and v_l be two column vectors of size n/2 = 2^(k-1) representing the upper and lower halves of v, i.e. v = [v_u ; v_l]. Therefore we have:

H_k v = [ H_{k-1}   3H_{k-1} ] [ v_u ]  =  [ H_{k-1} v_u + 3 H_{k-1} v_l ]
        [ H_{k-1}   4H_{k-1} ] [ v_l ]     [ H_{k-1} v_u + 4 H_{k-1} v_l ]

Thus in order to find H_k v it suffices to find H_{k-1} v_u and H_{k-1} v_l and then compute H_{k-1} v_u + 3 H_{k-1} v_l and H_{k-1} v_u + 4 H_{k-1} v_l. See Algorithm 1.

Algorithm 1: O(n log n) algorithm for the modified Hadamard matrix-vector multiplication
procedure ModifiedHadamard(k, v)
    if k = 0 then
        return v                        { v is a scalar in this case }
    end if
    v_l ← Lower(v)
    v_u ← Upper(v)
    m_l ← ModifiedHadamard(k - 1, v_l)
    m_u ← ModifiedHadamard(k - 1, v_u)
    m ← [ m_u + 3 m_l ; m_u + 4 m_l ]   { takes O(n) time: element-wise vector scaling and addition }
    return m
end procedure

If T(n) is the cost of finding H_k v, then we can find each of H_{k-1} v_u and H_{k-1} v_l in time T(n/2). In addition, H_{k-1} v_u + 3 H_{k-1} v_l and H_{k-1} v_u + 4 H_{k-1} v_l can be computed in linear time, since we only need element-wise vector scaling and addition. Therefore:

T(n) = 2 T(n/2) + O(n).

Observe that f(n) = O(n) and 1 = log_2 2. Thus, using Case (ii) of the Master Theorem, we obtain T(n) = Θ(n log n).
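
Here is a runnable Python version of Algorithm 1 (my sketch, not from the original handout), checked against an explicit construction of H_k:

def modified_hadamard_times(k, v):
    # Compute H_k v in O(n log n); len(v) must be 2**k. T(n) = 2T(n/2) + O(n).
    if k == 0:
        return v[:]                                    # v is a single scalar
    half = len(v) // 2
    m_u = modified_hadamard_times(k - 1, v[:half])     # H_{k-1} v_u
    m_l = modified_hadamard_times(k - 1, v[half:])     # H_{k-1} v_l
    top = [a + 3 * b for a, b in zip(m_u, m_l)]        # H_{k-1} v_u + 3 H_{k-1} v_l
    bot = [a + 4 * b for a, b in zip(m_u, m_l)]        # H_{k-1} v_u + 4 H_{k-1} v_l
    return top + bot

def build_H(k):
    # Explicit H_k, for checking only (this construction costs Theta(n^2)).
    if k == 0:
        return [[1]]
    H = build_H(k - 1)
    return ([row + [3 * x for x in row] for row in H] +
            [row + [4 * x for x in row] for row in H])

v = [1, 2, 3, 4, 5, 6, 7, 8]
H = build_H(3)
assert modified_hadamard_times(3, v) == [sum(H[i][j] * v[j] for j in range(8)) for i in range(8)]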

6. Let's consider a long, quiet country road with houses scattered very sparsely along it. (Picture the road as a long line segment, with an eastern endpoint and a western endpoint. The houses are points along the line.) Further, let's suppose that despite the bucolic¹ setting, the residents of all these houses are avid cell phone users. You want to place cell phone base stations at certain points along the road, so that every house is within four miles of one of the base stations.

Give an efficient greedy algorithm that achieves this goal, using as few base stations as possible. Note that a station covers an interval of eight miles. At what point do you place the next base station? Show that your greedy algorithm is optimal using a swapping argument.

Basically we want to cover a set of points on the real line using intervals of length 8, and the goal is to use the minimum number of intervals. Here is an efficient greedy algorithm: starting from the leftmost point, at each step place the beginning of the next interval at the first uncovered point.

We claim that this algorithm gives us the optimal solution. We prove it by contradiction. Assume greedy is not optimal. Let g_1 < g_2 < ... < g_p denote the start points of the intervals chosen by the greedy algorithm. Similarly, let f_1 < f_2 < ... < f_q denote the start points of the intervals in an optimal solution with f_1 = g_1, f_2 = g_2, ..., f_r = g_r for the largest possible value of r. Note that g_{r+1} > f_{r+1} by the greedy choice of the algorithm. We change the optimal solution a bit by setting f_{r+1} = g_{r+1}, i.e. we push the (r+1)th interval further to the right. Observe that:

The new configuration is a feasible solution, since it still covers all the points. This is true because, by the greedy choice, every point to the left of g_{r+1} is already covered by the first r intervals, so no point in [f_{r+1}, g_{r+1}) is left uncovered by the move.

This solution is still optimal, since the number of intervals does not change.

Notice that this new optimal solution has one more start point in common with the greedy solution, which is a contradiction.

¹ Of or relating to the pleasant aspects of the countryside and country life.
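
A minimal Python sketch of this greedy rule (my illustration, not part of the exam); it assumes the house positions are given as numbers on the line and uses the four-mile radius from the problem statement.

def place_stations(houses, radius=4):
    # Walk east; start a new coverage interval at the first uncovered house.
    # Returns station positions so that every house is within `radius` of some station.
    stations = []
    covered_until = float('-inf')             # eastern edge of the area covered so far
    for h in sorted(houses):
        if h > covered_until:                 # first uncovered house found
            stations.append(h + radius)       # station placed radius miles east of it
            covered_until = h + 2 * radius    # its interval covers 2 * radius = 8 miles
    return stations

print(place_stations([1, 2, 3, 12, 13, 30]))  # -> [5, 16, 34]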

7. You are to find a local minimum in an array A[1..n] containing n distinct real numbers. A local minimum is defined as follows:
The leftmost number is a local minimum if it is smaller than the next number, i.e. A[1] < A[2].
The rightmost number is a local minimum if it is smaller than the previous number, i.e. A[n] < A[n-1].
A number A[i] with 1 < i < n away from the ends is a local minimum if it is smaller than the previous and the next number, respectively, i.e. A[i] < A[i-1] and A[i] < A[i+1].

(a) Consider an algorithm that scans the array from left to right while doing some comparisons until it runs into a local minimum. What is the worst case running time of this algorithm? On what array does it perform particularly badly?

Scan the array from left to right. Continue as long as the next number is less than the current number. Stop as soon as you are at the end of the array or the next number is larger. The worst case occurs when the array is in decreasing order. In this case the above algorithm requires O(n) time.
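
A direct rendering of this scan in Python (my sketch, 0-indexed), returning the index of the first local minimum found:

def local_min_scan(A):
    # Left-to-right scan; O(n) in the worst case, e.g. a strictly decreasing array.
    n = len(A)
    for i in range(n):
        if i + 1 == n or A[i] < A[i + 1]:     # end of array, or the next element is larger
            return i                          # distinctness guarantees A[i] is a local minimum

print(local_min_scan([5, 3, 4, 1, 2]))        # -> 1, since A[1] = 3 is a local minimum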

(b) Give a divide and conquer algorithm that will find a local minimum in O(log n) time.

*main*
A[0] := A[1] + 1
A[n+1] := A[n] + 1
local(1, n)

function local(p, q)
    mid := ⌊(p + q)/2⌋
    if A[mid-1] > A[mid] then
        if A[mid+1] > A[mid] then
            return mid
        else                          * A[mid+1] < A[mid] *
            return local(mid+1, q)
    else                              * A[mid-1] < A[mid] *
        return local(p, mid-1)

(c) What is the recurrence for its running time?
T(n) = T(n/2) + O(1) = O(log n).

(d) Prove correctness of your algorithm.
Note that A[0] contains a number larger than A[1] and A[n+1] a number larger than A[n]. Surrounding the array A[1..n] by larger numbers keeps all local minima in the array unchanged, and we are now always in the last case of the definition of a local minimum. We claim that whenever we recurse on a subsection A[p..q] of the array, then A[p-1] > A[p] and A[q+1] > A[q]. Thus the subsection A[p..q] must contain a local minimum, and any local minimum of the subsection A[p..q] is also a local minimum of the entire array A[1..n]. By our setup, the claim is certainly true initially. Also, we either determine that the middle element is a local minimum or we recurse on the side where the middle element is larger than its adjacent element. Again the section we recurse on has fewer numbers and is surrounded by larger elements.
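
The same algorithm as runnable Python (my sketch, 0-indexed, with out-of-range positions treated as the "larger" sentinels instead of writing A[0] and A[n+1] explicitly):

def local_min(A):
    # Divide and conquer local minimum; T(n) = T(n/2) + O(1) = O(log n) comparisons.
    def bigger_or_edge(i, j):
        # positions outside the array act as sentinels larger than everything
        return j < 0 or j >= len(A) or A[j] > A[i]

    p, q = 0, len(A) - 1
    while True:
        mid = (p + q) // 2
        if bigger_or_edge(mid, mid - 1) and bigger_or_edge(mid, mid + 1):
            return mid                        # A[mid] is smaller than both neighbors
        elif not bigger_or_edge(mid, mid + 1):
            p = mid + 1                       # A[mid+1] < A[mid]: a local minimum lies to the right
        else:
            q = mid - 1                       # A[mid-1] < A[mid]: a local minimum lies to the left

print(local_min([9, 7, 8, 2, 3, 6]))          # -> 3, since A[3] = 2 is a local minimum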

8. EC: Prove that every natural number has a unique base 3 representation. Hint: This problem is related to the coin changing problem.

Solution 1: First, note that a base 3 representation of a natural number x is (a_n ... a_1 a_0)_3 such that x = a_n 3^n + ... + a_1 · 3 + a_0 · 1 and 0 ≤ a_i ≤ 2 for all i. Similar to the coin changing problem, a greedy approach gives us a representation of x: starting with a_i = 0 for all i, at each step increase a_k by one for the largest k such that 3^k ≤ x, and continue with x - 3^k.

We claim that the result of the greedy algorithm is the only base 3 representation. We prove this by strong induction on the natural number (denoted by x) for which we want to find a base 3 representation. Let r be the maximum integer with 3^r ≤ x. The greedy algorithm takes 3^r, i.e. increases a_r by one. We claim that any representation must also take 3^r. If not, it needs the values 1, 3, 3^2, ..., 3^{r-1}, each used at most twice, to add up to x. But the maximum number which can be obtained from these values is

2 (1 + 3 + 3^2 + ... + 3^{r-1}) = 2 · (3^r - 1)/(3 - 1) = 3^r - 1 < 3^r ≤ x.

Hence the maximum number which can be obtained from the values 1, 3, 3^2, ..., 3^{r-1} is strictly less than x, which is a contradiction. Therefore we have to take 3^r, and the problem reduces to representing the number x - 3^r, which, by the induction hypothesis, is uniquely represented by the greedy algorithm.

Solution 2: The above greedy construction produces the digits from the higher order digits downward. You can also produce them the other way: We claim that the last digit a_0 in any base 3 representation of x must be x mod 3. This is because the higher order digits Σ_{i>0} b_i 3^i contribute a multiple of 3 and cannot account for any remainder that is less than 3. Now recursively find the unique representation of (x - a_0)/3 and append the last digit a_0 to this representation. Since the last digit is unique and, by induction, so are all the higher order digits, the entire representation is unique.
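
Both constructions are easy to state in Python (my sketch, not part of the original solutions); greedy_base3 follows Solution 1 (highest digit first) and mod_base3 follows Solution 2 (lowest digit first), and the two agree.

def greedy_base3(x):
    # Solution 1: repeatedly take the largest power of 3 not exceeding the remaining x.
    if x == 0:
        return [0]
    r = 0
    while 3 ** (r + 1) <= x:
        r += 1
    digits = [0] * (r + 1)                    # digits[k] is the coefficient of 3**k
    while x > 0:
        k = r
        while 3 ** k > x:
            k -= 1
        digits[k] += 1                        # greedy step: take 3**k
        x -= 3 ** k
    return digits[::-1]                       # most significant digit first

def mod_base3(x):
    # Solution 2: the last digit must be x mod 3; recurse on (x - a_0)/3.
    if x < 3:
        return [x]
    a0 = x % 3
    return mod_base3((x - a0) // 3) + [a0]

assert all(greedy_base3(x) == mod_base3(x) for x in range(2000))
print(mod_base3(2015))                        # -> [2, 2, 0, 2, 1, 2, 2], i.e. 2015 = (2202122)_3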