Project Proposals. Your grade does not reflect how good a project you are proposing!


1 Project Proposals. Your grade does not reflect how good a project you are proposing! It reflects how well you followed the instructions for the proposal!

2 What's the point of the project?
1. Learn and understand stuff about algorithms.
2. Get a good grade: help those of you who don't do well in exams to improve your course grade.

3 How to get a good grade: At the end of the project, you will submit a written report that will be used as the sole basis for evaluating your project. Specifically, your report will be expected:
- To show that you understand and can implement standard algorithms.
- To show that you can write programs that are understandable and algorithmically sound.
- To provide evidence that you understand how complexity theory shows up in practice.
- To demonstrate your initiative, originality, and algorithmic insights.
- To show that you can communicate your work clearly and concisely in a well-structured document.

4 Specific items that may appear in a report include:
- Descriptions of the algorithm(s) that you are working with, in your own words.
- Worked examples of your own devising to show how the algorithms work (i.e., not the ones in the book, the slides, the original papers where the algorithms were introduced, or other resources authored by other people).
- Details of the testing strategy that you have used. This is likely to involve the construction of code to generate large pseudorandom test cases, and code to verify that the results produced by your code are correct.
- Summaries of experimental data (e.g., tables and/or graphs showing the algorithms' behavior over a range of different inputs).
- Reflections on what you have learned as a result of your experience.

5 Rubric
Can understand and implement standard algorithms (10 points):
- Describes or depicts algorithms using examples
- Discusses implementation issues
Can write programs that are understandable and are algorithmically sound (30 points):
- Program fragments are presented, are understandable, and are algorithmically sound
- Sound measurement technique
- Generation of experimental data sets
- Testing for correct results
Makes connections between implementation and complexity theory (15 points):
- Compares measured and predicted performance
- Explains discrepancies, if any
- Discusses why an asymptotically inferior algorithm might perform better

6 Rubric, continued
Demonstrates initiative, originality, and algorithmic insights (15 points):
- Initiative
- Originality
- Algorithmic insights
Document communicates clearly (30 points):
- Document is concise
- Language is clear and correct
- Purpose of project is described
- Experimental procedure is described
- Experimental results are presented
- Conclusions are presented

7 CS 350 Algorithms and Complexity Fall 2015 Lecture 13: Dynamic Programming Andrew P. Black Department of Computer Science Portland State University

8 Dynamic programming: Solves problems by breaking them into smaller sub-problems and solving those. Which algorithm design technique is this like?
- Brute force
- Decrease-and-conquer
- Divide-and-conquer
- None that we've seen so far

9 Question: Compare Dynamic Programming with Decrease-and-Conquer:
A. They are the same
B. They both solve a large problem by first solving a smaller problem
C. In decrease-and-conquer, we don't memoize the solutions to the smaller problem
D. In dynamic programming, we do memoize the smaller problems
E. B, C & D
F. B & C
G. B & D

10 Dynamic Programming. Dynamic programming differs from decrease-and-conquer because in dynamic programming we remember the answers to the smaller sub-problems. Why?
A. To use more space
B. In the hope that we might re-use them
C. Because we know that the sub-problems overlap

17 Dynamic programming: Solves problems by breaking them into smaller subproblems and solving those (like decrease & conquer). Key idea: do not compute the solution to any subproblem more than once; instead, save computed solutions in a table so that they can be reused. Consequently, dynamic programming works well when the sub-problems overlap (unlike decrease & conquer).

18 Why Dynamic Programming? If the subproblems are not independent, i.e., subproblems share sub-subproblems, then a decrease-and-conquer algorithm repeatedly solves the common sub-subproblems. Thus it does more work than necessary. The memo table in DP ensures that each sub-problem is solved (at most) once.

19 For dynamic programming to be applicable:
- At most a polynomial number of subproblems (otherwise the algorithm is still exponential).
- The solution to the original problem is easy to compute from the solutions to subproblems.
- There is a natural ordering on subproblems from smallest to largest.
- There is an easy-to-compute recurrence that allows solving a larger subproblem from smaller subproblems.

20 Optimization problems: Dynamic programming is typically (but not always) applied to optimization problems. In an optimization problem, the goal is to find a solution, among many possible candidates, that minimizes or maximizes some particular value. Such solutions are said to be optimal. There may be more than one optimal solution: true or false?

22 Example: Fibonacci Numbers. The familiar recursive definition:
fib 0 = 0
fib 1 = 1
fib (n+2) = fib (n+1) + fib n
Grows very rapidly: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ..., 832040 (30th), ..., 354224848179261915075 (100th), ...
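The recursive definition above can be transcribed directly. A minimal Python sketch (the function name is my own):

```python
def fib(n):
    # Direct transcription of the recursive definition.
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # prints 55, the 10th Fibonacci number
```

This version is simple but, as the next slides show, hopelessly slow for large n.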

23 Question. What is the order of growth of the Fibonacci function?
A. O(n)
B. O(n^2)
C. O(φ^n)
D. O(2^n)
E. O(e^n)
F. O(n!)

24 In fact, the i-th Fibonacci number is the integer closest to φ^i / √5, where φ = (1 + √5)/2 ≈ 1.618 (the "golden ratio"). Thus, the result of the Fibonacci function grows exponentially.
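The closed form is easy to check numerically. A quick Python sketch (the iterative helper is my own; floating-point precision is fine for small i):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # the golden ratio

def fib(n):
    # Simple iterative Fibonacci, used here only as a reference.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The i-th Fibonacci number is the integer closest to phi^i / sqrt(5).
for i in range(1, 30):
    assert round(PHI ** i / math.sqrt(5)) == fib(i)
print("closed form matches for i = 1..29")
```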

25 Complexity of brute-force fib: let nfib be the number of calls needed to evaluate fib n, implemented according to the definition.
nfib 0 = 1
nfib 1 = 1
nfib (n+2) = 1 + nfib (n+1) + nfib n
Grows even more rapidly than fib! Hence fib is at least exponential. However: many calls to fib have the same argument.

26 Repeated calls, same argument: [figure: the recursion tree for fib 6, built up step by step, in which many calls share the same argument; not transcribed]

37 Avoiding repeated calculations: We can use a table to avoid doing a calculation more than once:
table[0] ← 0; table[1] ← 1; table[2..max] ← -1;   (all other entries in the table are initially set to -1)

int tablefib(int n) {
    if (table[n] = -1) {
        table[n] ← tablefib(n-1) + tablefib(n-2);
    }
    return table[n];
}
Table size is fixed, but values can be shared over many calls.
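A runnable Python version of the same table scheme (the MAX bound and names are my own; note that deep recursion on a fresh table is limited by Python's recursion limit):

```python
MAX = 500
table = [-1] * (MAX + 1)
table[0], table[1] = 0, 1

def tablefib(n):
    # Compute fib(n), filling the memo table on demand.
    if table[n] == -1:
        table[n] = tablefib(n - 1) + tablefib(n - 2)
    return table[n]

print(tablefib(30))  # prints 832040
```

Once the table is filled, later calls with any argument up to MAX return in constant time.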

41 Riding the wave: Alternatively, we can look at the way the entries in the table are filled. This leads to code:
int a ← 0, b ← 1;
for i from 1 to n do {
    int c = a + b;
    a ← b; b ← c;
}
return b;
Complexity is O(n)! No limits on n now, but values cannot be reused.
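The bottom-up loop can be sketched in Python as follows (a direct transcription of the pseudocode above, keeping only the last two table entries):

```python
def fib(n):
    # Bottom-up: ride the wave of table entries, keeping only the last two.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(30))  # prints 832040
```

This trades the reusable table for O(1) space: each call recomputes from scratch, but never repeats work within a call.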

42 Binomial Coefficients. Again, a recursive definition:
C(n, k) = C(n-1, k-1) + C(n-1, k)   for n > k > 0
C(n, 0) = 1
C(n, n) = 1
What's the complexity of a naive implementation of C?

50 Binomial Coefficients. Again, a recursive definition:
C(n, k) = C(n-1, k-1) + C(n-1, k)   for n > k > 0
C(n, 0) = 1
C(n, n) = 1
Notice: the calls overlap. The recursion tree for C(5, 3) begins:
(5,3); (4,2), (4,3); (3,1), (3,2), (3,3); (2,0), (2,1), (2,2); (1,0), (1,1); ...

51 Construct a Table: [figure: Pascal's triangle laid out as a table, filled row by row, with each entry C(i, j) computed from C(i-1, j-1) and C(i-1, j); not transcribed]

54 Algorithm [pseudocode for the bottom-up binomial-coefficient algorithm; not transcribed]

56 Algorithm. Time efficiency?
A. O(n)
B. O(k)
C. O(1)
D. O(nk)
E. None of the above

58 Algorithm. Space efficiency?
A. O(n)
B. O(k)
C. O(1)
D. O(nk)
E. None of the above
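The algorithm figure itself did not survive transcription; a sketch of the standard bottom-up table-filling algorithm the slides describe (Python, names my own):

```python
def binomial(n, k):
    # Fill Pascal's triangle row by row: C[i][j] = C[i-1][j-1] + C[i-1][j].
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(6, 3))  # prints 20
```

Time is O(nk); the table as written takes O(nk) space, though only the previous row is ever read, so space can be reduced to O(k).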

61 Problem
a) Compute C(6,3) by applying the dynamic programming algorithm and the recurrence C(n, k) = C(n-1, k-1) + C(n-1, k), C(n, 0) = 1, C(n, n) = 1.
b) Is it also possible to compute C(n, k) by filling the algorithm's dynamic programming table column by column rather than row by row?
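Part (b) can be checked empirically: each entry C(i, j) needs only C(i-1, j-1) from the previous column and C(i-1, j) from earlier in the current column, so a column-by-column order also works. A sketch of that filling order (my own code, not a textbook solution):

```python
def binomial_by_columns(n, k):
    # Fill the DP table one column at a time instead of row by row.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for j in range(k + 1):            # columns left to right
        for i in range(j, n + 1):     # within a column, top to bottom
            if j == 0 or j == i:
                C[i][j] = 1
            else:
                # Both dependencies are already available:
                # C[i-1][j-1] is in the previous column,
                # C[i-1][j] is earlier in this column.
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial_by_columns(6, 3))  # prints 20
```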

66 Problem. Which of the following algorithms for computing a binomial coefficient is most efficient?
a. Use the formula C(n, k) = n! / (k! (n-k)!).   [2n multiplications, 1 division]
b. Use the formula C(n, k) = n(n-1)...(n-k+1) / k!.   [2k multiplications, 1 division]
c. Apply recursively the formula C(n, k) = C(n-1, k-1) + C(n-1, k) for n > k > 0, C(n, 0) = C(n, n) = 1.   [C(n, k) additions]
d. Apply the dynamic programming algorithm.   [Θ(nk) additions]

71 Knapsack Problem by DP. Given n items of integer weights w1, w2, ..., wn and values v1, v2, ..., vn, and a knapsack of integer capacity W, find the most valuable subset of the items that fits into the knapsack. How can we set this up as a recursion over smaller subproblems?

78 Knapsack Problem by DP. Consider the problem instance defined by the first i items and capacity j (j ≤ W). Let V[i, j] be the value of an optimal solution of this problem instance. Then
V[i, j] = max(V[i-1, j], vi + V[i-1, j-wi])   if j ≥ wi
V[i, j] = V[i-1, j]                           if j < wi
Initial conditions: V[0, j] = 0 and V[i, 0] = 0.

79 Knapsack Problem by DP (example). Knapsack of capacity W = 5.
item  weight  value
1     2       $12
2     1       $10
3     3       $20
4     2       $15
(w1 = 2, v1 = 12; w2 = 1, v2 = 10; w3 = 3, v3 = 20; w4 = 2, v4 = 15; the table of V[i, j] for capacities j = 0..5 was not transcribed)
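Filling the V table bottom up for this instance can be sketched as follows (Python; the recurrence is the one from the previous slide, the function name is my own):

```python
def knapsack(weights, values, W):
    # V[i][j]: best achievable value using the first i items with capacity j.
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for j in range(W + 1):
            if j >= wi:
                V[i][j] = max(V[i - 1][j], vi + V[i - 1][j - wi])
            else:
                V[i][j] = V[i - 1][j]
    return V[n][W]

# The instance from this slide: W = 5.
print(knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5))  # prints 37
```

For this instance the optimal value is $37, achieved by items 1, 2, and 4 (total weight 5).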

80 Can we do this top down? Yes: use a memo function (not "a memory function"): D. Michie. Memo functions and machine learning. Nature, 218:19-22, 6 April 1968. Idea: record previously computed values just in time.
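A top-down memo-function version of the knapsack recurrence can be sketched as follows (Python; names are my own, the instance is the W = 5 example from the earlier slide):

```python
def knapsack_memo(weights, values, W):
    memo = {}

    def V(i, j):
        # Value of an optimal solution using the first i items with capacity j.
        if i == 0 or j == 0:
            return 0
        if (i, j) not in memo:
            wi, vi = weights[i - 1], values[i - 1]
            if j >= wi:
                memo[(i, j)] = max(V(i - 1, j), vi + V(i - 1, j - wi))
            else:
                memo[(i, j)] = V(i - 1, j)
        return memo[(i, j)]

    return V(len(weights), W)

print(knapsack_memo([2, 1, 3, 2], [12, 10, 20, 15], 5))  # prints 37
```

Unlike bottom-up tabulation, the memo function computes only the table entries the recursion actually needs, which is what the later problem slide asks you to observe.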

82 Summary. Dynamic programming is a good technique to use when:
- Solutions are defined in terms of solutions to smaller problems of the same type.
- There are many overlapping subproblems.
Implementation can use either:
- a top-down, recursive definition with memoization, or
- explicit bottom-up tabulation.

83 Problem
a. Apply the bottom-up dynamic programming algorithm to the following instance of the knapsack problem:
item  weight  value
1     3       $25
2     2       $20
3     1       $15
4     4       $40
5     5       $50
capacity W = 6.
b. How many different optimal subsets does the instance of part (a) have?
c. In general, how can we use the table generated by the dynamic programming algorithm to tell whether there is more than one optimal subset for the knapsack problem's instance?

84 Problem: The sequence of values in a row of the dynamic programming table for an instance of the knapsack problem is always non-decreasing: true or false?

85 Problem: The sequence of values in a column of the dynamic programming table for an instance of the knapsack problem is always non-decreasing: true or false?

86 Problem
item  weight  value
1     3       $25
2     2       $20
3     1       $15
4     4       $40
5     5       $50
capacity W = 6.
Apply the memo function method to the above instance of the knapsack problem. Which entries of the dynamic programming table are (i) never computed by the memo function, and (ii) retrieved without recomputation?

87 Solution. In the table below, the cells marked by a minus indicate the ones for which no entry is computed for the instance in question; the only nontrivial entry that is retrieved without recomputation is (2, 1). [table of V[i, j] values for items i = 1..5 (w1 = 3, w2 = 2, w3 = 1, w4 = 4, w5 = 5) and capacities j = 0..6; cell values not transcribed]

88 Warshall's Algorithm. Computes the transitive closure of a relation (reachability in a graph is only an example of such a relation).

89 Warshall's Algorithm
Warshall Algorithm 1:
Warshall(M: 0-1 matrix)
  W ← M
  for (k = 1 to n)
    for (i = 1 to n)
      for (j = 1 to n)
        w_ij ← w_ij ∨ (w_ik ∧ w_kj)
  return W
Warshall Algorithm 2 (skips the inner loop when row i has no connection to node k):
Warshall(M: 0-1 matrix)
  W ← M
  for (k = 1 to n)
    for (i = 1 to n)
      if (w_ik = 1)
        for (j = 1 to n)
          w_ij ← w_ij ∨ w_kj
  return W
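A runnable sketch of the first variant (Python; the example digraph is my own, since the matrices on the following slides were not transcribed):

```python
def warshall(M):
    # Transitive closure of a relation given as a 0-1 adjacency matrix.
    n = len(M)
    W = [row[:] for row in M]     # W0 = M: the direct connections
    for k in range(n):            # after pass k: paths through nodes 1..k+1
        for i in range(n):
            for j in range(n):
                # w_ij <- w_ij OR (w_ik AND w_kj)
                W[i][j] = W[i][j] or (W[i][k] and W[k][j])
    return W

# Hypothetical 4-node digraph with edges 1->2, 2->3, 3->4.
M = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
for row in warshall(M):
    print(row)
```

Each pass k augments the matrix with connections that go through node k, exactly as the worked example on the next slides does one node at a time.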

90 Example. W0 = MR = the direct connections between the nodes of a 4-node digraph. [The matrices of this worked example were not transcribed.] The algorithm first computes W1 = W0 + connections through node 1 by applying w_ij ← w_ij ∨ (w_i1 ∧ w_1j) to every entry; for this instance a new 1 is added when i = 4 and j = 3. It then computes W2 = W1 + connections through node 2 using w_ij ← w_ij ∨ (w_i2 ∧ w_2j), and so on through the remaining nodes.


More information

Zeros of a Polynomial Function

Zeros of a Polynomial Function Zeros of a Polynomial Function An important consequence of the Factor Theorem is that finding the zeros of a polynomial is really the same thing as factoring it into linear factors. In this section we

More information

WRITING PROOFS. Christopher Heil Georgia Institute of Technology

WRITING PROOFS. Christopher Heil Georgia Institute of Technology WRITING PROOFS Christopher Heil Georgia Institute of Technology A theorem is just a statement of fact A proof of the theorem is a logical explanation of why the theorem is true Many theorems have this

More information

1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ]

1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ] 1 Homework 1 (1) Prove the ideal (3,x) is a maximal ideal in Z[x]. SOLUTION: Suppose we expand this ideal by including another generator polynomial, P / (3, x). Write P = n + x Q with n an integer not

More information

Section 1.4. Difference Equations

Section 1.4. Difference Equations Difference Equations to Differential Equations Section 1.4 Difference Equations At this point almost all of our sequences have had explicit formulas for their terms. That is, we have looked mainly at sequences

More information

5.1 Bipartite Matching

5.1 Bipartite Matching CS787: Advanced Algorithms Lecture 5: Applications of Network Flow In the last lecture, we looked at the problem of finding the maximum flow in a graph, and how it can be efficiently solved using the Ford-Fulkerson

More information

Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling

Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling Approximation Algorithms Chapter Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should I do? A. Theory says you're unlikely to find a poly-time algorithm. Must sacrifice one

More information

Chapter 3. Distribution Problems. 3.1 The idea of a distribution. 3.1.1 The twenty-fold way

Chapter 3. Distribution Problems. 3.1 The idea of a distribution. 3.1.1 The twenty-fold way Chapter 3 Distribution Problems 3.1 The idea of a distribution Many of the problems we solved in Chapter 1 may be thought of as problems of distributing objects (such as pieces of fruit or ping-pong balls)

More information

Introduction to Algorithms March 10, 2004 Massachusetts Institute of Technology Professors Erik Demaine and Shafi Goldwasser Quiz 1.

Introduction to Algorithms March 10, 2004 Massachusetts Institute of Technology Professors Erik Demaine and Shafi Goldwasser Quiz 1. Introduction to Algorithms March 10, 2004 Massachusetts Institute of Technology 6.046J/18.410J Professors Erik Demaine and Shafi Goldwasser Quiz 1 Quiz 1 Do not open this quiz booklet until you are directed

More information

Row Echelon Form and Reduced Row Echelon Form

Row Echelon Form and Reduced Row Echelon Form These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (3rd edition) These notes are intended primarily for in-class presentation

More information

Determinants in the Kronecker product of matrices: The incidence matrix of a complete graph

Determinants in the Kronecker product of matrices: The incidence matrix of a complete graph FPSAC 2009 DMTCS proc (subm), by the authors, 1 10 Determinants in the Kronecker product of matrices: The incidence matrix of a complete graph Christopher R H Hanusa 1 and Thomas Zaslavsky 2 1 Department

More information

1. Prove that the empty set is a subset of every set.

1. Prove that the empty set is a subset of every set. 1. Prove that the empty set is a subset of every set. Basic Topology Written by Men-Gen Tsai email: b89902089@ntu.edu.tw Proof: For any element x of the empty set, x is also an element of every set since

More information

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws.

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Matrix Algebra A. Doerr Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Some Basic Matrix Laws Assume the orders of the matrices are such that

More information

Closest Pair Problem

Closest Pair Problem Closest Pair Problem Given n points in d-dimensions, find two whose mutual distance is smallest. Fundamental problem in many applications as well as a key step in many algorithms. p q A naive algorithm

More information

Discuss the size of the instance for the minimum spanning tree problem.

Discuss the size of the instance for the minimum spanning tree problem. 3.1 Algorithm complexity The algorithms A, B are given. The former has complexity O(n 2 ), the latter O(2 n ), where n is the size of the instance. Let n A 0 be the size of the largest instance that can

More information

Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam January 29, 2015 Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Math 55: Discrete Mathematics

Math 55: Discrete Mathematics Math 55: Discrete Mathematics UC Berkeley, Spring 2012 Homework # 9, due Wednesday, April 11 8.1.5 How many ways are there to pay a bill of 17 pesos using a currency with coins of values of 1 peso, 2 pesos,

More information

Introduction to Logic in Computer Science: Autumn 2006

Introduction to Logic in Computer Science: Autumn 2006 Introduction to Logic in Computer Science: Autumn 2006 Ulle Endriss Institute for Logic, Language and Computation University of Amsterdam Ulle Endriss 1 Plan for Today Now that we have a basic understanding

More information

REPEATED TRIALS. The probability of winning those k chosen times and losing the other times is then p k q n k.

REPEATED TRIALS. The probability of winning those k chosen times and losing the other times is then p k q n k. REPEATED TRIALS Suppose you toss a fair coin one time. Let E be the event that the coin lands heads. We know from basic counting that p(e) = 1 since n(e) = 1 and 2 n(s) = 2. Now suppose we play a game

More information

Recursion and Dynamic Programming. Biostatistics 615/815 Lecture 5

Recursion and Dynamic Programming. Biostatistics 615/815 Lecture 5 Recursion and Dynamic Programming Biostatistics 615/815 Lecture 5 Last Lecture Principles for analysis of algorithms Empirical Analysis Theoretical Analysis Common relationships between inputs and running

More information

Patterns in Pascal s Triangle

Patterns in Pascal s Triangle Pascal s Triangle Pascal s Triangle is an infinite triangular array of numbers beginning with a at the top. Pascal s Triangle can be constructed starting with just the on the top by following one easy

More information

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of

More information

SIMS 255 Foundations of Software Design. Complexity and NP-completeness

SIMS 255 Foundations of Software Design. Complexity and NP-completeness SIMS 255 Foundations of Software Design Complexity and NP-completeness Matt Welsh November 29, 2001 mdw@cs.berkeley.edu 1 Outline Complexity of algorithms Space and time complexity ``Big O'' notation Complexity

More information

To give it a definition, an implicit function of x and y is simply any relationship that takes the form:

To give it a definition, an implicit function of x and y is simply any relationship that takes the form: 2 Implicit function theorems and applications 21 Implicit functions The implicit function theorem is one of the most useful single tools you ll meet this year After a while, it will be second nature to

More information

The Tower of Hanoi. Recursion Solution. Recursive Function. Time Complexity. Recursive Thinking. Why Recursion? n! = n* (n-1)!

The Tower of Hanoi. Recursion Solution. Recursive Function. Time Complexity. Recursive Thinking. Why Recursion? n! = n* (n-1)! The Tower of Hanoi Recursion Solution recursion recursion recursion Recursive Thinking: ignore everything but the bottom disk. 1 2 Recursive Function Time Complexity Hanoi (n, src, dest, temp): If (n >

More information

Using VLOOKUP to Combine Data in Microsoft Excel

Using VLOOKUP to Combine Data in Microsoft Excel Using VLOOKUP to Combine Data in Microsoft Excel Microsoft Excel includes a very powerful function that helps users combine data from multiple sources into one table in a spreadsheet. For example, if you

More information

Max Flow, Min Cut, and Matchings (Solution)

Max Flow, Min Cut, and Matchings (Solution) Max Flow, Min Cut, and Matchings (Solution) 1. The figure below shows a flow network on which an s-t flow is shown. The capacity of each edge appears as a label next to the edge, and the numbers in boxes

More information

1. What are Data Structures? Introduction to Data Structures. 2. What will we Study? CITS2200 Data Structures and Algorithms

1. What are Data Structures? Introduction to Data Structures. 2. What will we Study? CITS2200 Data Structures and Algorithms 1 What are ata Structures? ata Structures and lgorithms ata structures are software artifacts that allow data to be stored, organized and accessed Topic 1 They are more high-level than computer memory

More information

Special Situations in the Simplex Algorithm

Special Situations in the Simplex Algorithm Special Situations in the Simplex Algorithm Degeneracy Consider the linear program: Maximize 2x 1 +x 2 Subject to: 4x 1 +3x 2 12 (1) 4x 1 +x 2 8 (2) 4x 1 +2x 2 8 (3) x 1, x 2 0. We will first apply the

More information

1 Determinants and the Solvability of Linear Systems

1 Determinants and the Solvability of Linear Systems 1 Determinants and the Solvability of Linear Systems In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns The section completely side-stepped

More information

Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities

Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities Algebra 1, Quarter 2, Unit 2.1 Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities Overview Number of instructional days: 15 (1 day = 45 60 minutes) Content to be learned

More information

3. Mathematical Induction

3. Mathematical Induction 3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)

More information

Name: Section Registered In:

Name: Section Registered In: Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are

More information

CHAPTER 15 NOMINAL MEASURES OF CORRELATION: PHI, THE CONTINGENCY COEFFICIENT, AND CRAMER'S V

CHAPTER 15 NOMINAL MEASURES OF CORRELATION: PHI, THE CONTINGENCY COEFFICIENT, AND CRAMER'S V CHAPTER 15 NOMINAL MEASURES OF CORRELATION: PHI, THE CONTINGENCY COEFFICIENT, AND CRAMER'S V Chapters 13 and 14 introduced and explained the use of a set of statistical tools that researchers use to measure

More information

Session 7 Fractions and Decimals

Session 7 Fractions and Decimals Key Terms in This Session Session 7 Fractions and Decimals Previously Introduced prime number rational numbers New in This Session period repeating decimal terminating decimal Introduction In this session,

More information

Session 7 Bivariate Data and Analysis

Session 7 Bivariate Data and Analysis Session 7 Bivariate Data and Analysis Key Terms for This Session Previously Introduced mean standard deviation New in This Session association bivariate analysis contingency table co-variation least squares

More information

PARALLELIZED SUDOKU SOLVING ALGORITHM USING OpenMP

PARALLELIZED SUDOKU SOLVING ALGORITHM USING OpenMP PARALLELIZED SUDOKU SOLVING ALGORITHM USING OpenMP Sruthi Sankar CSE 633: Parallel Algorithms Spring 2014 Professor: Dr. Russ Miller Sudoku: the puzzle A standard Sudoku puzzles contains 81 grids :9 rows

More information

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm.

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm. Approximation Algorithms Chapter Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of

More information

PES Institute of Technology-BSC QUESTION BANK

PES Institute of Technology-BSC QUESTION BANK PES Institute of Technology-BSC Faculty: Mrs. R.Bharathi CS35: Data Structures Using C QUESTION BANK UNIT I -BASIC CONCEPTS 1. What is an ADT? Briefly explain the categories that classify the functions

More information

the recursion-tree method

the recursion-tree method the recursion- method recurrence into a 1 recurrence into a 2 MCS 360 Lecture 39 Introduction to Data Structures Jan Verschelde, 22 November 2010 recurrence into a The for consists of two steps: 1 Guess

More information

CS 103X: Discrete Structures Homework Assignment 3 Solutions

CS 103X: Discrete Structures Homework Assignment 3 Solutions CS 103X: Discrete Structures Homework Assignment 3 s Exercise 1 (20 points). On well-ordering and induction: (a) Prove the induction principle from the well-ordering principle. (b) Prove the well-ordering

More information

COMP 250 Fall 2012 lecture 2 binary representations Sept. 11, 2012

COMP 250 Fall 2012 lecture 2 binary representations Sept. 11, 2012 Binary numbers The reason humans represent numbers using decimal (the ten digits from 0,1,... 9) is that we have ten fingers. There is no other reason than that. There is nothing special otherwise about

More information

Math 115 Spring 2011 Written Homework 5 Solutions

Math 115 Spring 2011 Written Homework 5 Solutions . Evaluate each series. a) 4 7 0... 55 Math 5 Spring 0 Written Homework 5 Solutions Solution: We note that the associated sequence, 4, 7, 0,..., 55 appears to be an arithmetic sequence. If the sequence

More information

COUNTING INDEPENDENT SETS IN SOME CLASSES OF (ALMOST) REGULAR GRAPHS

COUNTING INDEPENDENT SETS IN SOME CLASSES OF (ALMOST) REGULAR GRAPHS COUNTING INDEPENDENT SETS IN SOME CLASSES OF (ALMOST) REGULAR GRAPHS Alexander Burstein Department of Mathematics Howard University Washington, DC 259, USA aburstein@howard.edu Sergey Kitaev Mathematics

More information

Recursion vs. Iteration Eliminating Recursion

Recursion vs. Iteration Eliminating Recursion Recursion vs. Iteration Eliminating Recursion continued CS 311 Data Structures and Algorithms Lecture Slides Monday, February 16, 2009 Glenn G. Chappell Department of Computer Science University of Alaska

More information

Polynomial and Rational Functions

Polynomial and Rational Functions Polynomial and Rational Functions Quadratic Functions Overview of Objectives, students should be able to: 1. Recognize the characteristics of parabolas. 2. Find the intercepts a. x intercepts by solving

More information

Overview. Essential Questions. Precalculus, Quarter 4, Unit 4.5 Build Arithmetic and Geometric Sequences and Series

Overview. Essential Questions. Precalculus, Quarter 4, Unit 4.5 Build Arithmetic and Geometric Sequences and Series Sequences and Series Overview Number of instruction days: 4 6 (1 day = 53 minutes) Content to Be Learned Write arithmetic and geometric sequences both recursively and with an explicit formula, use them

More information

Sources: On the Web: Slides will be available on:

Sources: On the Web: Slides will be available on: C programming Introduction The basics of algorithms Structure of a C code, compilation step Constant, variable type, variable scope Expression and operators: assignment, arithmetic operators, comparison,

More information

Optimization Modeling for Mining Engineers

Optimization Modeling for Mining Engineers Optimization Modeling for Mining Engineers Alexandra M. Newman Division of Economics and Business Slide 1 Colorado School of Mines Seminar Outline Linear Programming Integer Linear Programming Slide 2

More information

Binary Adders: Half Adders and Full Adders

Binary Adders: Half Adders and Full Adders Binary Adders: Half Adders and Full Adders In this set of slides, we present the two basic types of adders: 1. Half adders, and 2. Full adders. Each type of adder functions to add two binary bits. In order

More information

Matrix-Chain Multiplication

Matrix-Chain Multiplication Matrix-Chain Multiplication Let A be an n by m matrix, let B be an m by p matrix, then C = AB is an n by p matrix. C = AB can be computed in O(nmp) time, using traditional matrix multiplication. Suppose

More information

5.1 Radical Notation and Rational Exponents

5.1 Radical Notation and Rational Exponents Section 5.1 Radical Notation and Rational Exponents 1 5.1 Radical Notation and Rational Exponents We now review how exponents can be used to describe not only powers (such as 5 2 and 2 3 ), but also roots

More information

Mathematical Induction. Mary Barnes Sue Gordon

Mathematical Induction. Mary Barnes Sue Gordon Mathematics Learning Centre Mathematical Induction Mary Barnes Sue Gordon c 1987 University of Sydney Contents 1 Mathematical Induction 1 1.1 Why do we need proof by induction?.... 1 1. What is proof by

More information

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...

More information

TU e. Advanced Algorithms: experimentation project. The problem: load balancing with bounded look-ahead. Input: integer m 2: number of machines

TU e. Advanced Algorithms: experimentation project. The problem: load balancing with bounded look-ahead. Input: integer m 2: number of machines The problem: load balancing with bounded look-ahead Input: integer m 2: number of machines integer k 0: the look-ahead numbers t 1,..., t n : the job sizes Problem: assign jobs to machines machine to which

More information

1 Review of Least Squares Solutions to Overdetermined Systems

1 Review of Least Squares Solutions to Overdetermined Systems cs4: introduction to numerical analysis /9/0 Lecture 7: Rectangular Systems and Numerical Integration Instructor: Professor Amos Ron Scribes: Mark Cowlishaw, Nathanael Fillmore Review of Least Squares

More information

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm.

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm. Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of three

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

Chapter 7 - Roots, Radicals, and Complex Numbers

Chapter 7 - Roots, Radicals, and Complex Numbers Math 233 - Spring 2009 Chapter 7 - Roots, Radicals, and Complex Numbers 7.1 Roots and Radicals 7.1.1 Notation and Terminology In the expression x the is called the radical sign. The expression under the

More information

Math Content by Strand 1

Math Content by Strand 1 Math Content by Strand 1 Number and Operations with Whole Numbers Multiplication and Division Grade 3 In Grade 3, students investigate the properties of multiplication and division, including the inverse

More information

1 Description of The Simpletron

1 Description of The Simpletron Simulating The Simpletron Computer 50 points 1 Description of The Simpletron In this assignment you will write a program to simulate a fictional computer that we will call the Simpletron. As its name implies

More information

Zero: If P is a polynomial and if c is a number such that P (c) = 0 then c is a zero of P.

Zero: If P is a polynomial and if c is a number such that P (c) = 0 then c is a zero of P. MATH 11011 FINDING REAL ZEROS KSU OF A POLYNOMIAL Definitions: Polynomial: is a function of the form P (x) = a n x n + a n 1 x n 1 + + a x + a 1 x + a 0. The numbers a n, a n 1,..., a 1, a 0 are called

More information

ALGEBRA. sequence, term, nth term, consecutive, rule, relationship, generate, predict, continue increase, decrease finite, infinite

ALGEBRA. sequence, term, nth term, consecutive, rule, relationship, generate, predict, continue increase, decrease finite, infinite ALGEBRA Pupils should be taught to: Generate and describe sequences As outcomes, Year 7 pupils should, for example: Use, read and write, spelling correctly: sequence, term, nth term, consecutive, rule,

More information

Assignment 5 - Due Friday March 6

Assignment 5 - Due Friday March 6 Assignment 5 - Due Friday March 6 (1) Discovering Fibonacci Relationships By experimenting with numerous examples in search of a pattern, determine a simple formula for (F n+1 ) 2 + (F n ) 2 that is, a

More information

Automata-based Verification - I

Automata-based Verification - I CS3172: Advanced Algorithms Automata-based Verification - I Howard Barringer Room KB2.20: email: howard.barringer@manchester.ac.uk March 2006 Supporting and Background Material Copies of key slides (already

More information