# Linear Programming

Linear programming refers to problems stated as maximization or minimization of a linear function subject to constraints that are linear equalities and inequalities. Although the study of algorithms for these problems is very important, the term "programming" in linear programming arises from the context of a particular resource allocation program for the United States Air Force, for which George Dantzig developed a linear model and described a solution method, called the Simplex method. This was in the late 1940s, before the term "computer programming" came into general use.

In general, a linear programming problem is a maximization or minimization of a linear function subject to constraints that are linear equations or linear inequalities. Some of these may be bounds on variables; for example, it is not unusual in applications to require that variables be non-negative. If there is no such constraint, a variable is called free. A linear programming problem is called feasible if there is some solution satisfying the constraints and infeasible otherwise. If the maximum can be made arbitrarily large, the problem is unbounded.

Consider the following example of a linear programming problem:

$$\begin{array}{rrcl} \max & x_1 + 5x_2 + 3x_3 & & \\ \text{s.t.} & 2x_1 + 3x_2 - 4x_3 &\le& 5\\ & -x_1 - 2x_2 + x_3 &\le& -3\\ & x_1 + 4x_2 + 2x_3 &\le& 6\\ & 3x_2 + 5x_3 &\le& 2 \end{array}$$

We will often use matrix notation. This instance becomes $\max\{cx \mid Ax \le b\}$, where the matrix $A$, cost vector $c$ and right-hand side $b$ are given and $x$ is a vector of variables. For this example we have

$$A = \begin{bmatrix} 2 & 3 & -4\\ -1 & -2 & 1\\ 1 & 4 & 2\\ 0 & 3 & 5 \end{bmatrix} \qquad c = \begin{bmatrix} 1 & 5 & 3 \end{bmatrix} \qquad b = \begin{bmatrix} 5\\ -3\\ 6\\ 2 \end{bmatrix} \qquad x = \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}$$

We will also write generic instances using sigma notation as follows:

$$\max \sum_{j=1}^{n} c_j x_j \quad \text{s.t.} \quad \sum_{j=1}^{n} a_{ij} x_j \le b_i \ \text{ for } i = 1, 2, \ldots, m.$$

If we want to minimize $cx$ we can instead maximize $-cx$. We can replace equations $Ax = b$ with the inequalities $Ax \le b$ and $Ax \ge b$. To reverse the process, we can replace $Ax \le b$ with $Ax + Is = b$ and $s \ge 0$, where $I$ is an identity matrix of appropriate size and $s$ is a vector of slack

variables, which are non-negative. To replace free variables with non-negative variables we use $x = u - v$ where $u \ge 0$ and $v \ge 0$. Alternatively, we can write non-negativity constraints simply as additional inequalities. Using the transformations described above we can convert any linear programming instance to one of three standard forms. It will be convenient to be able to refer to each form, with the understanding that any instance can be converted to that form. The forms are

$$\max\{cx \mid Ax \le b\} \qquad \max\{cx \mid Ax \le b,\ x \ge 0\} \qquad \max\{cx \mid Ax = b,\ x \ge 0\}$$

We will later see examples of converting between the forms in the context of Farkas' Lemma and duality.

## Variations on Linear Programming

We have seen that we might consider various variants of linear programming problems and indicated how they are equivalent. However, the case where there are only linear equalities and all variables are free is different. We will call this a system of equations. Linear programming will refer to any problem where there is either at least one non-negative variable or at least one linear inequality. For both systems of equations and linear programming, the additional constraint that at least one variable must be integral again changes things. Thus we can distinguish four variants that are qualitatively different: systems of equations, integral systems of equations, linear programming, and integer linear programming.

We can also distinguish between feasibility problems (find a solution to some linear system) and optimization problems (maximize or minimize a linear function over the set of solutions to some linear system, the feasible set). If we have a method for solving the optimization problem then we can easily solve feasibility using the same method: simply maximize the zero function. Interestingly, feasibility and optimization are equivalent in each of the four cases, in the sense that a method to solve the feasibility problem can be used to solve the optimization problem, as we will discuss below.
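The slack-variable transformation described above is easy to check numerically. The sketch below is ours (the helper name and the two-row data are made-up illustrations, not from the notes): given a point satisfying $Ax \le b$, the slacks $s = b - Ax$ are non-negative and $Ax + Is = b$ holds.

```python
# Convert the inequality system A x <= b into equality form
# A x + I s = b, s >= 0, by computing the slacks s = b - A x.
def slacks(A, b, x):
    return [bi - sum(aij * xj for aij, xj in zip(row, x))
            for row, bi in zip(A, b)]

A = [[1, 2], [3, 1]]   # illustrative data (our assumption)
b = [4, 6]
x = [1, 1]             # satisfies A x <= b, since A x = [3, 4]

s = slacks(A, b, x)    # non-negative exactly because x is feasible
restored = [sum(aij * xj for aij, xj in zip(row, x)) + si
            for row, si in zip(A, s)]
```

Here `restored` recovers $b$ exactly, which is the equality form of the same constraint.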
We consider each of the four types in turn. In each case, if there is no feasible solution then the optimization problem is infeasible, so here we will assume feasibility when discussing optimization.

**Systems of Equations.** Determining whether $\{x \mid Ax = b\}$ is non-empty (feasibility) and $\max\{c^T x \mid Ax = b\}$ (optimization) is the material of basic linear algebra, and these problems can be solved efficiently using, for example, Gaussian elimination. The optimization problem is not usually discussed in linear algebra but follows easily from feasibility. The set of solutions to $Ax = b$ can be written as $x_0 + \sum_i \lambda_i v_i$, where the sum is over a set $\{v_i\}$ of basis vectors for the nullspace of $A$. If $c^T v_i = 0$ for all $v_i$ (including the case that the nullspace is trivial), then the entire feasible set attains the maximum $c^T x_0$. This includes the case that there is a unique solution, and the unique solution

of course solves the optimization problem. On the other hand, if $c^T v_i \ne 0$ for some $v_i$, then we can assume that $c^T v_i = r > 0$, as otherwise we could replace $v_i$ with $-v_i$ in the basis. Now $x_0 + t v_i$ is feasible for arbitrarily large $t$, and hence the optimization problem is unbounded, since $c^T(x_0 + t v_i) = c^T x_0 + tr \to \infty$ as $t \to \infty$. Thus for systems of equations, the optimization problem is either infeasible, unbounded, or has the entire feasible set attaining the optimal value. This material is covered in linear algebra courses and we will not discuss it here.

**Integral Systems of Equations.** Here feasibility is determining whether $\{x \mid Ax = b,\ x \in \mathbb{Z}^n\}$ is non-empty, and optimization is $\max\{c^T x \mid Ax = b,\ x \in \mathbb{Z}^n\}$. The feasibility problem can be solved by a method somewhat similar to Gaussian elimination. Using what are called elementary unimodular column operations (multiply a column by $-1$, switch two columns, and add an integral multiple of one column to another) one can reduce a given constraint matrix to what is called Hermite normal form. The reduction process can be viewed as an extension of the Euclidean algorithm for greatest common divisors. This process maintains integrality. From this, all feasible solutions can be described just as for systems of equations above, except that $x_0$, the $v_i$ and the $\lambda_i$ are all integral, and we get a similar conclusion about the optimization problem.

**Linear Programming.** The various versions $\{x \mid Ax = b,\ x \ge 0\}$, $\{x \mid Ax \le b\}$, $\{x \mid Ax \le b,\ x \ge 0\}$, or general linear systems with combinations of inequalities, equalities, non-negative and unrestricted variables, can all easily be converted into the other forms. We will see from the duality theorem that we can solve the optimization problem by solving a related feasibility problem. Observe that these problems are harder than systems of linear equations, in that methods like Gaussian elimination for equations cannot be directly used to solve linear optimization problems.
However, a system consisting only of equations can be viewed as a special case of general linear optimization problems.

**Integer Linear Optimization.** If we take a linear optimization problem and in addition require that the variables take on integral values only, we get integer linear optimization problems. A mixed integer programming problem requires that some, but not all, of the variables be integral. Except in the case of integral systems of equations discussed above, these problems are NP-complete. Fortunately, many graph and network problems formulated as integer linear programming problems are special cases where we can find nice theorems and efficient solutions.

Note also one other interesting point. If we look for integral solutions to systems of equations restricting the variables to be only 0 or 1, we have a special case of integer optimization which is still hard. Here we have a case where there are a finite number of possible feasible solutions, yet this class of problems is much more difficult to solve than regular linear optimization problems, even though in that case there are typically an infinite number of feasible solutions. In part, the difficulty lies in how large the finite number of solutions is. If there are $n$ variables then there are $2^n$ potential solutions to check. If one naively attempts to check all of these, trouble is encountered fairly quickly. For a somewhat modest size problem with, say, 150 variables, the number of potential solutions is larger than the number of atoms in the known universe. If the universe were a computer, with each atom checking billions of potential cases each second, running since the beginning of time, all cases would still not have been checked. So a naive approach to

solving such finite problems in general is bound to fail.

## The Dual

For a given linear programming problem we will now construct a second linear programming problem whose solution bounds the original problem. Surprisingly, it will turn out that the bound is tight. That is, the optimal value of the second problem, called the dual, will be equal to the optimal value of the original problem, called the primal. Consider again the linear programming problem

$$\begin{array}{rrcl} \max & x_1 + 5x_2 + 3x_3 & & \\ \text{s.t.} & 2x_1 + 3x_2 - 4x_3 &\le& 5\\ & -x_1 - 2x_2 + x_3 &\le& -3\\ & x_1 + 4x_2 + 2x_3 &\le& 6\\ & 3x_2 + 5x_3 &\le& 2 \end{array} \tag{1}$$

If we multiply the inequalities by appropriate values and add, we can bound the maximum value. Note that for inequalities we must use non-negative multipliers so as not to change the direction of the inequalities. For example, multiplying the first inequality by 1, the second by 3, the third by 2 and the fourth by 0 we get

$$\begin{array}{rcl} 2x_1 + 3x_2 - 4x_3 &\le& 5\\ -3x_1 - 6x_2 + 3x_3 &\le& -9\\ 2x_1 + 8x_2 + 4x_3 &\le& 12\\ 0x_1 + 0x_2 + 0x_3 &\le& 0 \end{array}$$

Combining these results in $x_1 + 5x_2 + 3x_3 \le 8$. Observe that we have picked the multipliers $[1\ 3\ 2\ 0]$ carefully so that we get the cost vector $c$. Another choice would be to use the multipliers $[2\ 4\ 1\ 1]$, which yield a better bound $x_1 + 5x_2 + 3x_3 \le 6$. This suggests a new linear programming problem: pick multipliers so as to minimize the bound, subject to picking them appropriately. In this case we must get $c$ and use non-negative multipliers:

$$\begin{array}{rrcl} \min & 5y_1 - 3y_2 + 6y_3 + 2y_4 & & \\ \text{s.t.} & 2y_1 - y_2 + y_3 &=& 1\\ & 3y_1 - 2y_2 + 4y_3 + 3y_4 &=& 5\\ & -4y_1 + y_2 + 2y_3 + 5y_4 &=& 3\\ & y_1, y_2, y_3, y_4 &\ge& 0 \end{array}$$

The constraint matrix is the transpose of the original and the roles of $c$ and $b$ have switched. The new problem is called the dual and the original problem is called the primal.
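The multiplier arithmetic above can be checked mechanically. In this sketch (the helper name is ours; the second multiplier vector $[2\ 4\ 1\ 1]$ is the one solving $yA = c$ with bound 6, recovered by hand; and the feasible point $x = (6, -1, 1)$ is our own hand-found illustration, not from the notes), both multiplier vectors reproduce $c$, and any feasible point respects the resulting bounds:

```python
# Data of example (1): A x <= b with cost vector c.
A = [[2, 3, -4], [-1, -2, 1], [1, 4, 2], [0, 3, 5]]
b = [5, -3, 6, 2]
c = [1, 5, 3]

def combine(y):
    """Combine the rows of A x <= b with non-negative multipliers y."""
    yA = [sum(yi * row[j] for yi, row in zip(y, A)) for j in range(3)]
    yb = sum(yi * bi for yi, bi in zip(y, b))
    return yA, yb

yA1, bound1 = combine([1, 3, 2, 0])    # first choice of multipliers
yA2, bound2 = combine([2, 4, 1, 1])    # the better choice of multipliers

# x = (6, -1, 1) is a hand-found feasible point (our assumption);
# its objective value must lie below both bounds.
x = [6, -1, 1]
cx = sum(cj * xj for cj, xj in zip(c, x))
```

Both `yA1` and `yA2` come out equal to $c$, with bounds 8 and 6 respectively, and `cx` evaluates to 4, consistent with weak duality.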

The multiplier for an equation need not be constrained to be non-negative. For variables that are non-negative we can use $\ge$ in the new problem rather than $=$. For example, if the above problem had non-negative variables, then if the combination resulted in $2x_1 + 5x_2 + 7x_3 \le 10$ we would also have $x_1 + 5x_2 + 4x_3 \le 10$, by adding the inequalities $-x_1 \le 0$ and $-3x_3 \le 0$ that result from non-negativity. To summarize, we have the following versions of dual problems, written in matrix form:

| Primal | Dual |
|---|---|
| $\max\{cx \mid Ax \le b\}$ | $\min\{yb \mid yA = c,\ y \ge 0\}$ |
| $\max\{cx \mid Ax \le b,\ x \ge 0\}$ | $\min\{yb \mid yA \ge c,\ y \ge 0\}$ |
| $\max\{cx \mid Ax = b,\ x \ge 0\}$ | $\min\{yb \mid yA \ge c\}$ |

We motivated the formulation of the dual as a bound on the primal. We state this now as the following.

**Weak Duality Theorem of Linear Programming:** If both the primal and dual are feasible then $\max\{cx \mid Ax \le b,\ x \ge 0\} \le \min\{yb \mid yA \ge c,\ y \ge 0\}$.

**Proof:** For any feasible $x$ and $y$ we have $cx \le (yA)x = y(Ax) \le yb$, where the first inequality follows since $x \ge 0$ and $yA \ge c$, and the second inequality follows since $y \ge 0$ and $Ax \le b$. Similar versions hold for each of the primal-dual pairs. We give an alternate proof using summation notation:

$$\sum_{j=1}^{n} c_j x_j \le \sum_{j=1}^{n} \left( \sum_{i=1}^{m} a_{ij} y_i \right) x_j = \sum_{i=1}^{m} \left( \sum_{j=1}^{n} a_{ij} x_j \right) y_i \le \sum_{i=1}^{m} b_i y_i.$$

We will next work toward showing strong duality: we in fact get equality in the weak duality theorem.

## Linear Systems

We will first examine linear systems, and will use results about feasibility to show strong duality. Consider first the following systems of linear equations.

$$\begin{array}{rcl} x + 4y - z &=& 2\\ 2x - 3y + z &=& -1\\ 3x - 2y + z &=& 0\\ 4x + y - z &=& -1 \end{array} \qquad\qquad \begin{array}{rcl} x + 4y - z &=& 2\\ -2x - 3y + z &=& 1\\ -3x - 2y + z &=& 0\\ 4x + y - z &=& -1 \end{array}$$

A solution to the system on the left is $x = 0$, $y = 1$, $z = 2$. The system on the right has no solution. A typical approach to verify that the system on the right has no solution is to note something about the row echelon form of the reduced augmented matrix: a row of zeros with a non-zero right-hand side, or some other variant on this. This can be stated directly by producing a certificate of inconsistency such as $-1, -3, 3, 1$. Multiplying the first row by $-1$, the second by $-3$, the third by $3$ and the fourth by $1$ and adding, we get the inconsistency $0 = -6$. So the system must not have a solution. The fact that a system of linear equations either has a solution or a certificate of inconsistency is presented in various guises in typical undergraduate linear algebra texts, and is often proved as a consequence of the correctness of Gaussian elimination.

Consider the following systems of linear inequalities:

$$\begin{array}{rcl} x + 4y - z &\le& 2\\ -2x - 3y + z &\le& 1\\ -3x - 2y + z &\le& 0\\ 4x + y - z &\le& -1 \end{array} \tag{2}$$

$$\begin{array}{rcl} x + 4y - z &\le& 1\\ -2x - 3y + z &\le& -2\\ -3x - 2y + z &\le& 1\\ 4x + y - z &\le& 1 \end{array} \tag{3}$$

A solution to (2) is $x = 0$, $y = 1$, $z = 2$. The system (3) has no solution, but how do we show this? In order to get a certificate of inconsistency consisting of multipliers for the rows, as we did for systems of equations, we need to be a bit more careful with the multipliers. Try using the same multipliers $-1, -3, 3, 1$ from the equations for the inequalities. Multiplying the first row by $-1$, the second by $-3$, the third by $3$ and the fourth by $1$ and combining, we get $0 \le 9$. This is not an inconsistency. As before we need a left side of $0$, but because of the $\le$ we need the right side to be negative in order to get an inconsistency. So we try the multipliers $1, 3, -3, -1$ and would seem to get the inconsistency $0 \le -9$. However, this is not a certificate of inconsistency. Recall that multiplying an inequality by a negative number also changes the direction of the inequality. In order for our computations to be valid for a system of inequalities, the multipliers must be non-negative.
It is not difficult to check that $3, 4, 1, 2$ is a certificate of inconsistency for the system (3) above. Multiplying the first row by $3$, the second by $4$, the third by $1$ and the fourth by $2$ and combining, we get the inconsistency $0 \le -2$. So the system of inequalities has no solution. In general, for a system of inequalities, a certificate of inconsistency consists of non-negative multipliers and results in $0 \le \bar b$ with $\bar b$ negative. For a mixed system with equalities and inequalities, we can drop the non-negativity constraint on the multipliers for the equations.
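Checking a proposed certificate is purely mechanical. The sketch below (the helper name is ours, and it assumes system (3) as displayed above) verifies non-negativity, $yA = 0$, and $yb < 0$, and correctly rejects the multiplier vector with negative entries discussed earlier:

```python
# A certificate of inconsistency for A x <= b: y >= 0, yA = 0, yb < 0.
def is_certificate(y, A, b):
    if any(yi < 0 for yi in y):
        return False                     # negative multipliers are invalid
    yA = [sum(yi * row[j] for yi, row in zip(y, A))
          for j in range(len(A[0]))]
    yb = sum(yi * bi for yi, bi in zip(y, b))
    return all(v == 0 for v in yA) and yb < 0

# System (3) from the text.
A3 = [[1, 4, -1], [-2, -3, 1], [-3, -2, 1], [4, 1, -1]]
b3 = [1, -2, 1, 1]

ok = is_certificate([3, 4, 1, 2], A3, b3)     # the certificate from the text
bad = is_certificate([1, 3, -3, -1], A3, b3)  # rejected: negative multipliers
```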

The fact that a system of linear inequalities either has a solution or a certificate of inconsistency is often called Farkas's Lemma. It can be proved by an easy induction using Fourier-Motzkin elimination. Fourier-Motzkin elimination in some respects parallels Gaussian elimination, using non-negative linear combinations of inequalities to create a new system in which a variable is eliminated. From a solution to the new system a solution to the original can be determined, and a certificate of inconsistency for the new system can be used to determine a certificate of inconsistency for the original.

## Fourier-Motzkin Elimination

We will start with a small part of an example of Fourier-Motzkin elimination for illustration. Consider the system of inequalities (3). Rewrite each inequality so that it is of the form $x \le \cdots$ or $\cdots \le x$, depending on the sign of the coefficient of $x$:

$$\begin{array}{rcl} x &\le& 1 - 4y + z\\ 1 - 3y/2 + z/2 &\le& x\\ -1/3 - 2y/3 + z/3 &\le& x\\ x &\le& 1/4 - y/4 + z/4 \end{array}$$

Then pair each upper bound on $x$ with each lower bound on $x$:

$$\begin{array}{rcl} 1 - 3y/2 + z/2 &\le& 1 - 4y + z\\ 1 - 3y/2 + z/2 &\le& 1/4 - y/4 + z/4\\ -1/3 - 2y/3 + z/3 &\le& 1 - 4y + z\\ -1/3 - 2y/3 + z/3 &\le& 1/4 - y/4 + z/4 \end{array} \tag{4}$$

Simplify to obtain a new system in which $x$ is eliminated:

$$\begin{array}{rcl} 5y/2 - z/2 &\le& 0\\ -5y/4 + z/4 &\le& -3/4\\ 10y/3 - 2z/3 &\le& 4/3\\ -5y/12 + z/12 &\le& 7/12 \end{array} \tag{5}$$

The new system (5) has a solution if and only if the original (3) does. The new system is inconsistent; a certificate of inconsistency is $2, 6, 1, 2$. Observe that the first row of (5) is obtained from $1/2$ times the second row plus the first row of (3). Similarly, the second row of (5) comes from $1/2$ times the second row plus $1/4$ times the fourth row of (3). The other inequalities in (5) do not involve the second row of (3). Using the multipliers $2, 6$ for the first two rows of (5), we translate to a multiplier of $2 \cdot 1/2 + 6 \cdot 1/2 = 4$ for the second row of (3). Looking at the other rows in a similar manner, we translate the certificate of inconsistency $2, 6, 1, 2$ for (5) to the certificate of inconsistency $3, 4, 1, 2$ for (3).
In a similar manner, any certificate of inconsistency for the new system determines a certificate of inconsistency for the original.
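The elimination step in the example can be carried out mechanically. The sketch below is ours (helper name, exact rational arithmetic via `fractions`, and system (3) as displayed above): it pairs each lower bound on the first variable with each upper bound, using the non-negative multipliers $-1/a_{r1}$ and $1/a_{s1}$, and reproduces system (5):

```python
from fractions import Fraction as F

def eliminate_first(rows, rhs):
    """One Fourier-Motzkin step: eliminate x1 from the system rows . x <= rhs.

    Pairs each lower bound on x1 (negative leading coefficient) with each
    upper bound (positive leading coefficient); rows without x1 are kept.
    """
    lower = [i for i, r in enumerate(rows) if r[0] < 0]
    upper = [i for i, r in enumerate(rows) if r[0] > 0]
    rest = [i for i, r in enumerate(rows) if r[0] == 0]
    new_rows, new_rhs = [], []
    for r in lower:
        for s in upper:
            # Non-negative multipliers -1/a_r1 and 1/a_s1 cancel x1.
            mr, ms = -F(1, rows[r][0]), F(1, rows[s][0])
            new_rows.append([mr * a + ms * c
                             for a, c in zip(rows[r][1:], rows[s][1:])])
            new_rhs.append(mr * rhs[r] + ms * rhs[s])
    for t in rest:
        new_rows.append([F(a) for a in rows[t][1:]])
        new_rhs.append(F(rhs[t]))
    return new_rows, new_rhs

# System (3) as in the text.
rows3 = [[1, 4, -1], [-2, -3, 1], [-3, -2, 1], [4, 1, -1]]
rhs3 = [1, -2, 1, 1]
rows5, rhs5 = eliminate_first(rows3, rhs3)
```

The returned `rows5` and `rhs5` match system (5) row for row.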

Now consider the system of inequalities (2). Eliminating $x$ as in the previous example we get the system

$$\begin{array}{rcl} 5y/2 - z/2 &\le& 5/2\\ -5y/4 + z/4 &\le& 1/4\\ 10y/3 - 2z/3 &\le& 2\\ -5y/12 + z/12 &\le& -1/4 \end{array} \tag{6}$$

A solution to (6) is $y = 1$, $z = 2$. Substituting these values into (2) we get

$$x \le 0 \qquad -2x \le 2 \qquad -3x \le 0 \qquad 4x \le 0$$

So $y = 1$, $z = 2$ along with $x = 0$ gives a solution to (2). In general, each solution to the new system substituted into the original yields an interval of possible values for $x$. Since we paired upper and lower bounds to get the new system, this interval will be well defined for each solution to the new system.

Below we will give an inductive proof of Farkas' Lemma following the pattern of the examples above. The idea is to eliminate one variable to obtain a new system. A solution to the new system can be used to determine a solution to the original, and a certificate of inconsistency for the new system can be used to determine a certificate of inconsistency for the original.

Note that in these examples we get the same number of new inequalities. In general, with $n$ inequalities we might get as many as $n^2/4$ new inequalities, if upper and lower bounds are evenly split. Iterating to eliminate all variables might then yield an exponential number of inequalities in the end. This is not a practical method for solving systems of inequalities, either by hand or with a computer. It is interesting because it does yield a simple inductive proof of Farkas' Lemma.

## Farkas' Lemma

Consider the system $Ax \le b$, which can also be written as $\sum_{j=1}^{n} a_{ij} x_j \le b_i$ for $i = 1, 2, \ldots, m$. Let $U = \{i \mid a_{in} > 0\}$, $L = \{i \mid a_{in} < 0\}$ and $N = \{i \mid a_{in} = 0\}$. We prove Farkas' Lemma with the following steps:

(i) We give a system with variables $x_1, x_2, \ldots, x_{n-1}$ that has a solution if and only if the original system does. We will then use induction, since there is one less variable.

(ii) We show that Farkas' Lemma holds for systems with one variable. This is the basis of the induction.
This is obvious; however, there is some work to come up with appropriate notation to make it a real proof. One could also use the no-variable case as a basis for the induction, but the notation for this is more complicated.

(iii) We show that if the system in (i) is empty then the original system has a solution. It is empty if $L \cup N$ or $U \cup N$ is empty.

(iv) If the system in (i) is inconsistent, and multipliers $u_{rs}$ for $r \in L$, $s \in U$ and $u_t$ for $t \in N$ provide a

certificate of inconsistency, we describe in terms of these multipliers a certificate of inconsistency for the original system.

(v) If the system in (i) has a solution $x_1, x_2, \ldots, x_{n-1}$, we describe a non-empty set of solutions to the original problem that agree with the $x_j$ for $j = 1, 2, \ldots, n-1$.

(i) We start with

$$\sum_{j=1}^{n} a_{ij} x_j \le b_i \quad \text{for } i = 1, 2, \ldots, m. \tag{7}$$

Let $U = \{i \mid a_{in} > 0\}$, $L = \{i \mid a_{in} < 0\}$ and $N = \{i \mid a_{in} = 0\}$. Then

$$\begin{array}{ll} \dfrac{1}{a_{rn}}\left(b_r - \displaystyle\sum_{j=1}^{n-1} a_{rj} x_j\right) \le x_n & \text{for } r \in L\\[10pt] x_n \le \dfrac{1}{a_{sn}}\left(b_s - \displaystyle\sum_{j=1}^{n-1} a_{sj} x_j\right) & \text{for } s \in U\\[10pt] \displaystyle\sum_{j=1}^{n-1} a_{tj} x_j \le b_t & \text{for } t \in N \end{array} \tag{8}$$

is just a rearrangement of (7). Note that the direction of the inequality changes when $r \in L$, since $a_{rn} < 0$ and we divide by this. We pair each upper bound with each lower bound and carry along the inequalities not involving $x_n$ to get

$$\begin{array}{ll} \dfrac{1}{a_{rn}}\left(b_r - \displaystyle\sum_{j=1}^{n-1} a_{rj} x_j\right) \le \dfrac{1}{a_{sn}}\left(b_s - \displaystyle\sum_{j=1}^{n-1} a_{sj} x_j\right) & \text{for } r \in L,\ s \in U\\[10pt] \displaystyle\sum_{j=1}^{n-1} a_{tj} x_j \le b_t & \text{for } t \in N \end{array} \tag{9}$$

Due to the construction, (7) has a solution if and only if (9) does. This will also follow from parts (iv) and (v) below.

(ii) With one variable we have three types of inequalities, as above using $n = 1$ for $L$, $U$, $N$: $a_{s1} x_1 \le b_s$ for $s \in U$; $a_{r1} x_1 \le b_r$ for $r \in L$; and inequalities with no variable ($t \in N$) of the form $0 \le b_t$. If we have $0 \le b_t$ for some $b_t < 0$ then the system is inconsistent; there is clearly no solution to the system. Set all multipliers $u_i = 0$ except $u_t = 1$ to get a certificate of inconsistency. We can drop inequalities $0 \le b_t$ for $b_t \ge 0$, so we can now assume that all $a_{i1}$ are nonzero. Rewriting the system, we have $x_1 \le b_s/a_{s1}$ for $s \in U$ and $b_r/a_{r1} \le x_1$ for $r \in L$. There is a solution if and only if $\max_{r \in L} b_r/a_{r1} \le \min_{s \in U} b_s/a_{s1}$. If this holds, then any $x_1 \in [\max_{r \in L} b_r/a_{r1},\ \min_{s \in U} b_s/a_{s1}]$ satisfies all inequalities. If not, then for some $r^*, s^*$ we have $b_{s^*}/a_{s^*1} < b_{r^*}/a_{r^*1}$. For a certificate of inconsistency take all $u_i = 0$ except $u_{r^*} = a_{s^*1} > 0$ and $u_{s^*} = -a_{r^*1} > 0$. Multiplying, we have $a_{s^*1} a_{r^*1} x_1 \le a_{s^*1} b_{r^*}$ and $-a_{r^*1} a_{s^*1} x_1 \le -a_{r^*1} b_{s^*}$. Combining these we get $0 \le a_{s^*1} b_{r^*} - a_{r^*1} b_{s^*} < 0$.
The last $< 0$ follows from $b_{s^*}/a_{s^*1} < b_{r^*}/a_{r^*1}$, again noting that the direction of the inequality changes when we multiply through by $a_{r^*1} < 0$.
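The one-variable case can be coded directly. This sketch is ours (function name and data included), assuming all coefficients are nonzero and at least one upper and one lower bound exist; it returns either a solution or a certificate exactly as in the argument above:

```python
from fractions import Fraction as F

def solve_1var(a, b):
    """Farkas for one variable: inequalities a[i]*x <= b[i], all a[i] nonzero.

    Returns ('solution', x) or ('certificate', u) with u >= 0,
    sum(u[i]*a[i]) = 0 and sum(u[i]*b[i]) < 0.
    """
    # Largest lower bound (a[i] < 0) and smallest upper bound (a[i] > 0).
    lo, r = max((F(bi, ai), i) for i, (ai, bi) in enumerate(zip(a, b)) if ai < 0)
    hi, s = min((F(bi, ai), i) for i, (ai, bi) in enumerate(zip(a, b)) if ai > 0)
    if lo <= hi:
        return ('solution', lo)          # any x in [lo, hi] works
    u = [0] * len(a)
    u[r], u[s] = a[s], -a[r]             # both positive since a[r] < 0 < a[s]
    return ('certificate', u)

kind1, x1 = solve_1var([1, -1], [3, -2])   # x <= 3 and x >= 2: solvable
kind2, u2 = solve_1var([1, -1], [3, -5])   # x <= 3 and x >= 5: inconsistent
```

In the inconsistent case the returned multipliers combine the two rows to $0 \le -2$, a certificate of inconsistency.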

(iii) If $L \cup N$ is empty, the original system is $x_n \le \frac{1}{a_{sn}}\left(b_s - \sum_{j=1}^{n-1} a_{sj} x_j\right)$ for $s \in U$. For a solution, take $x_1 = x_2 = \cdots = x_{n-1} = 0$ and $x_n = \min_{s \in U} b_s/a_{sn}$. It is straightforward to check that this is a solution. If $U \cup N$ is empty, the original system is $\frac{1}{a_{rn}}\left(b_r - \sum_{j=1}^{n-1} a_{rj} x_j\right) \le x_n$ for $r \in L$. For a solution, take $x_1 = x_2 = \cdots = x_{n-1} = 0$ and $x_n = \max_{r \in L} b_r/a_{rn}$. Again it is straightforward to check that this is a solution.

(iv) If (9) is inconsistent with multipliers $u_{rs}$ for $r \in L$, $s \in U$ and $u_t$ for $t \in N$, construct a certificate $y$ of inconsistency for (7) as follows. For $t \in N$ let $y_t = u_t$. For $r \in L$ let $y_r = -\frac{1}{a_{rn}} \sum_{s \in U} u_{rs}$, and for $s \in U$ let $y_s = \frac{1}{a_{sn}} \sum_{r \in L} u_{rs}$. Since we started with a certificate for (9), combining the inequalities

$$u_{rs}\left[\frac{1}{a_{rn}}\left(b_r - \sum_{j=1}^{n-1} a_{rj} x_j\right) \le \frac{1}{a_{sn}}\left(b_s - \sum_{j=1}^{n-1} a_{sj} x_j\right)\right] \quad \text{for } r \in L,\ s \in U$$

$$u_t\left[\sum_{j=1}^{n-1} a_{tj} x_j \le b_t\right] \quad \text{for } t \in N$$

yields $0 \le \bar b$ for some $\bar b < 0$. Applying the multipliers $y$ to (8), which is a rearrangement of (7), combines exactly the same terms, so $y$ is a certificate of inconsistency for (7); note the $y_i$ are non-negative by construction.

(v) Given a solution $x_1, x_2, \ldots, x_{n-1}$ to (9), take $x_n$ to be any value in the interval

$$\left[\ \max_{r \in L} \frac{1}{a_{rn}}\left(b_r - \sum_{j=1}^{n-1} a_{rj} x_j\right),\ \ \min_{s \in U} \frac{1}{a_{sn}}\left(b_s - \sum_{j=1}^{n-1} a_{sj} x_j\right)\ \right]$$

This completes the proof of Farkas' Lemma.

## Versions of Farkas' Lemma

We will state three versions of Farkas' Lemma. This is also called the theorem of the alternative for linear inequalities.

**A:** Exactly one of the following holds:
(I) $Ax \le b$ has a solution $x$;
(II) $yA = 0,\ y \ge 0,\ yb < 0$ has a solution $y$.

**B:** Exactly one of the following holds:
(I) $Ax \le b,\ x \ge 0$ has a solution $x$;
(II) $yA \ge 0,\ y \ge 0,\ yb < 0$ has a solution $y$.

**C:** Exactly one of the following holds:
(I) $Ax = b,\ x \ge 0$ has a solution $x$;
(II) $yA \ge 0,\ yb < 0$ has a solution $y$.

Of course, the most general version would allow mixing of these three, with equalities and inequalities and free and non-negative variables; however, it is easier to consider these. In the general case, the variables in II corresponding to inequalities in I are constrained to be non-negative, and variables corresponding to equalities in I are free. In II there are inequalities corresponding to non-negative variables in I and equalities corresponding to free variables in I.

We will show the equivalence of versions A and B. Showing the other equivalences is similar. The ideas here are the same as those discussed in the conversions between various forms of linear programming problems. Note that there are at least two ways to take care of the "at most one of the systems has a solution" part of the statements. While it is a bit redundant, we will show both ways below: first directly, and then by the equivalent systems. If the "at most one holds" part is shown first, then only the forward implications are needed for the equivalent systems.

First we note that for each version it is easy to show that at most one of the systems holds. For A: if both I$_A$ and II$_A$ hold, then

$$0 = (yA)x = y(Ax) \le yb < 0,$$

a contradiction. We have used $y \ge 0$ in the $\le$. For B: if both I$_B$ and II$_B$ hold, then

$$0 \le (yA)x = y(Ax) \le yb < 0,$$

a contradiction. We have used $y \ge 0$ in the second $\le$ (and $x \ge 0$, $yA \ge 0$ in the first). So for the remainder we will seek to show that at least one of the systems holds.

**A $\Rightarrow$ B:** Note the following equivalences:

$$\text{I}_B:\quad Ax \le b,\ x \ge 0 \quad\Longleftrightarrow\quad \begin{bmatrix} A\\ -I \end{bmatrix} x \le \begin{bmatrix} b\\ 0 \end{bmatrix}$$

and

$$\begin{aligned} \text{II}_B:\quad yA \ge 0,\ y \ge 0,\ yb < 0 \quad&\Longleftrightarrow\quad yA - sI = 0,\ y \ge 0,\ s \ge 0,\ yb + s\,0 < 0\\ &\Longleftrightarrow\quad \begin{bmatrix} y & s \end{bmatrix}\begin{bmatrix} A\\ -I \end{bmatrix} = 0,\ \begin{bmatrix} y & s \end{bmatrix} \ge 0,\ \begin{bmatrix} y & s \end{bmatrix}\begin{bmatrix} b\\ 0 \end{bmatrix} < 0. \end{aligned}$$

Applying A to the system with matrix $\begin{bmatrix} A\\ -I \end{bmatrix}$ and right-hand side $\begin{bmatrix} b\\ 0 \end{bmatrix}$, exactly one of the two systems on the right has a solution, since they are a special case of A. The equivalences then show that exactly one of I$_B$ and II$_B$ has a solution.

**B $\Rightarrow$ A:** Note the following equivalences:

$$\text{I}_A:\quad Ax \le b \quad\Longleftrightarrow\quad \begin{bmatrix} A & -A \end{bmatrix}\begin{bmatrix} r\\ s \end{bmatrix} \le b,\ r \ge 0,\ s \ge 0$$

and

$$\text{II}_A:\quad yA = 0,\ y \ge 0,\ yb < 0 \quad\Longleftrightarrow\quad y\begin{bmatrix} A & -A \end{bmatrix} \ge 0,\ y \ge 0,\ yb < 0.$$

In the first line, given $x$ one can easily pick non-negative $r$, $s$ such that $x = r - s$, so the equivalence in the first line does hold. In the second line, $y\begin{bmatrix} A & -A \end{bmatrix} \ge 0$ means $yA \ge 0$ and $-yA \ge 0$, i.e. $yA = 0$. Applying B, exactly one of the two systems on the right has a solution, since they are a special case of B. The equivalences then show that exactly one of I$_A$ and II$_A$ has a solution.

## Linear Programming Duality from the Theorem of the Alternative

We will assume the theorem of the alternative for inequalities in the following form. Exactly one of the following holds:

(I) $Ax \le b,\ x \ge 0$ has a solution $x$;
(II) $yA \ge 0,\ y \ge 0,\ yb < 0$ has a solution $y$;

and use this to prove the following duality theorem for linear programming.

In what follows we will assume that $A$ is an $m \times n$ matrix, $c$ is a length-$n$ row vector, $x$ is a length-$n$ column vector of variables, $b$ is a length-$m$ column vector, and $y$ is a length-$m$ row vector of variables. We will use $0$ for zero vectors, $0$ for zero matrices, and $I$ for identity matrices, where appropriate sizes will be assumed and clear from context. We will consider the following primal linear programming problem and its dual:

$$\max\{cx \mid Ax \le b,\ x \ge 0\} \qquad \min\{yb \mid yA \ge c,\ y \ge 0\}.$$

It can be shown that we can use maximum instead of supremum and minimum instead of infimum, as these values are attained when they are finite. We repeat here the statement of the weak duality theorem in one of its forms.

**Weak Duality Theorem of Linear Programming:** If both the primal and dual are feasible then $\max\{cx \mid Ax \le b,\ x \ge 0\} \le \min\{yb \mid yA \ge c,\ y \ge 0\}$.

**Strong Duality Theorem of Linear Programming:** If both the primal and dual are feasible then $\max\{cx \mid Ax \le b,\ x \ge 0\} = \min\{yb \mid yA \ge c,\ y \ge 0\}$.

**Proof:** By weak duality we have $\max \le \min$. Thus it is enough to show that there are a primal feasible $x$ and a dual feasible $y$ with $cx \ge yb$. We get this if and only if $(x, y)$ is a feasible solution to

$$Ax \le b,\quad x \ge 0,\quad yA \ge c,\quad y \ge 0,\quad cx \ge yb. \tag{10}$$

We can write (10) as $\bar A \bar x \le \bar b$, $\bar x \ge 0$, where

$$\bar A = \begin{bmatrix} A & 0\\ -c & b^T\\ 0 & -A^T \end{bmatrix} \qquad \bar x = \begin{bmatrix} x\\ y^T \end{bmatrix} \qquad \bar b = \begin{bmatrix} b\\ 0\\ -c^T \end{bmatrix} \tag{11}$$

By the theorem of the alternative for inequalities, if (11) has no solution then

$$\bar y \bar A \ge 0,\quad \bar y \ge 0,\quad \bar y \bar b < 0 \tag{12}$$

has a solution. Writing $\bar y = \begin{bmatrix} r & s & t^T \end{bmatrix}$, (12) becomes

$$rA \ge sc,\quad At \le sb,\quad r \ge 0,\quad s \ge 0,\quad t \ge 0,\quad rb - ct < 0. \tag{13}$$

If we show that (13) has no solution then (10) must have a solution and we will be done. So assume that (13) has a solution $r, s, t$ and we will reach a contradiction. Observe that $s$ is a scalar (a number); we have already used that $t^T c^T = ct$ is a scalar in writing (13).

**Case 1: $s = 0$.** From (13) with $s = 0$ we get $rA \ge 0$, $r \ge 0$ and $At \le 0$, $t \ge 0$. Applying the theorem of the alternative to primal feasibility $Ax \le b$, $x \ge 0$: since the primal is feasible, no certificate exists, so $rA \ge 0$, $r \ge 0$ forces $rb \ge 0$. Applying the theorem of the alternative to dual feasibility $yA \ge c$, $y \ge 0$ likewise yields $ct \le 0$. Then $rb \ge 0$ and $ct \le 0$ contradicts $rb - ct < 0$.

**Case 2: $s \ne 0$.** Let $r' = r/s$ and $t' = t/s$. Then from (13) we have $r'A \ge c$, $At' \le b$, $r' \ge 0$, $t' \ge 0$, and $r'b - ct' < 0$. But $r'b - ct' < 0$ implies $ct' > r'b$, contradicting weak duality, since $r'$ is dual feasible and $t'$ is primal feasible. Thus (13) has no solution and hence (10) has a solution. $\square$

(Alternatively, in Case 1: since $y + Mr$ is dual feasible for any dual feasible $y$ and number $M \ge 0$, we must have $rb \ge 0$, or else the dual would be unbounded and the primal infeasible. Likewise, since $x + Nt$ is primal feasible for any primal feasible $x$ and number $N \ge 0$, we must have $ct \le 0$, or else the primal would be unbounded and the dual infeasible.)

We can in fact easily show that if either the primal or the dual has a finite optimum then so does the other. Weak duality shows that if the primal or dual is unbounded then the other must be infeasible. Thus there are four possibilities for a primal-dual pair: both infeasible; primal unbounded and dual infeasible; dual unbounded and primal infeasible; both primal and dual with equal finite optima.

As with Farkas' Lemma, we can show that the various versions of duality are equivalent. Here is an example where we prove the equivalence of the strong duality theorems for the following primal-dual pairs:

$$\max\{cx \mid Ax = b,\ x \ge 0\} = \min\{yb \mid yA \ge c\} \qquad \text{and} \qquad \max\{cx \mid Ax \le b\} = \min\{yb \mid yA = c,\ y \ge 0\}$$

Let I be the statement $\max\{cx \mid Ax = b,\ x \ge 0\} = \min\{yb \mid yA \ge c\}$ when both are feasible, and II the statement $\max\{cx \mid Ax \le b\} = \min\{yb \mid yA = c,\ y \ge 0\}$ when both are feasible.

**To show II implies I:** Assuming the first and last LPs below are feasible, we have

$$\begin{aligned} \max\{cx \mid Ax = b,\ x \ge 0\} &= \max\left\{cx \;\middle|\; \begin{bmatrix} A\\ -A\\ -I \end{bmatrix} x \le \begin{bmatrix} b\\ -b\\ 0 \end{bmatrix} \right\}\\ &= \min\left\{ \begin{bmatrix} u & v & w \end{bmatrix}\begin{bmatrix} b\\ -b\\ 0 \end{bmatrix} \;\middle|\; \begin{bmatrix} u & v & w \end{bmatrix}\begin{bmatrix} A\\ -A\\ -I \end{bmatrix} = c,\ \begin{bmatrix} u & v & w \end{bmatrix} \ge 0 \right\}\\ &= \min\{yb \mid yA \ge c\}. \end{aligned}$$

The first and third equalities follow from basic manipulations (in the third, set $y = u - v$; then $yA - w = c$ with $w \ge 0$ means $yA \ge c$). The second follows from II.

**To show I implies II:** Assuming the first and last LPs below are feasible, we have

$$\begin{aligned} \max\{cx \mid Ax \le b\} &= \max\left\{ \begin{bmatrix} c & -c & 0 \end{bmatrix}\begin{bmatrix} u\\ v\\ w \end{bmatrix} \;\middle|\; \begin{bmatrix} A & -A & I \end{bmatrix}\begin{bmatrix} u\\ v\\ w \end{bmatrix} = b,\ \begin{bmatrix} u\\ v\\ w \end{bmatrix} \ge 0 \right\}\\ &= \min\left\{ yb \;\middle|\; y\begin{bmatrix} A & -A & I \end{bmatrix} \ge \begin{bmatrix} c & -c & 0 \end{bmatrix} \right\}\\ &= \min\{yb \mid yA = c,\ y \ge 0\}. \end{aligned}$$

The first and third equalities follow from basic manipulations (write $x = u - v$ with slack $w$; in the third, $yA \ge c$ and $-yA \ge -c$ give $yA = c$, while $yI \ge 0$ gives $y \ge 0$). The second follows from I.
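As a toy check of the encoding (11) used in the strong duality proof (the $1 \times 1$ data is our own illustration, not from the notes), we can build $\bar A$, $\bar b$, $\bar x$ explicitly for an instance whose primal and dual optima are known to coincide, and confirm $\bar A \bar x \le \bar b$, $\bar x \ge 0$:

```python
# Encode (10) as one system  Abar xbar <= bbar, xbar >= 0  (equation (11)),
# for a 1x1 example where primal and dual optima are known to be equal:
# max{x | x <= 2, x >= 0} = 2 = min{2y | y >= 1, y >= 0}.
A, b, c = [[1]], [2], [1]
x, y = [2], [1]                 # optimal primal and dual solutions

# Row blocks of Abar: [A  0], [-c  b^T], [0  -A^T]; bbar = (b, 0, -c^T).
Abar = [[1, 0],                 # A x <= b
        [-1, 2],                # -c x + b^T y <= 0, i.e. cx >= yb
        [0, -1]]                # -A^T y <= -c^T, i.e. yA >= c
bbar = [2, 0, -1]
xbar = x + y                    # stacked variable vector (x, y)

lhs = [sum(aij * vj for aij, vj in zip(row, xbar)) for row in Abar]
feasible = (all(l <= r for l, r in zip(lhs, bbar))
            and all(v >= 0 for v in xbar))
```

All three block inequalities hold with equality here, reflecting that the pair $(x, y)$ is optimal for both problems.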

### Definition of a Linear Program

Definition of a Linear Program Definition: A function f(x 1, x,..., x n ) of x 1, x,..., x n is a linear function if and only if for some set of constants c 1, c,..., c n, f(x 1, x,..., x n ) = c 1 x 1

### 4.6 Linear Programming duality

4.6 Linear Programming duality To any minimization (maximization) LP we can associate a closely related maximization (minimization) LP. Different spaces and objective functions but in general same optimal

### 1 Solving LPs: The Simplex Algorithm of George Dantzig

Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.

### Linear Programming. March 14, 2014

Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

### 3. Linear Programming and Polyhedral Combinatorics

Massachusetts Institute of Technology Handout 6 18.433: Combinatorial Optimization February 20th, 2009 Michel X. Goemans 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the

### Linear Programming for Optimization. Mark A. Schulze, Ph.D. Perceptive Scientific Instruments, Inc.

1. Introduction Linear Programming for Optimization Mark A. Schulze, Ph.D. Perceptive Scientific Instruments, Inc. 1.1 Definition Linear programming is the name of a branch of applied mathematics that

### Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### Linear Programming Notes V Problem Transformations

Linear Programming Notes V Problem Transformations 1 Introduction Any linear programming problem can be rewritten in either of two standard forms. In the first form, the objective is to maximize, the material

### Duality in Linear Programming

Duality in Linear Programming 4 In the preceding chapter on sensitivity analysis, we saw that the shadow-price interpretation of the optimal simplex multipliers is a very useful concept. First, these shadow

### Mathematical finance and linear programming (optimization)

Mathematical finance and linear programming (optimization) Geir Dahl September 15, 2009 1 Introduction The purpose of this short note is to explain how linear programming (LP) (=linear optimization) may

### Solving Systems of Linear Equations

LECTURE 5. Solving Systems of Linear Equations. Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations. In today's lecture I shall show how

### 1 Introduction. Linear Programming. Questions. A general optimization problem is of the form: choose x to max f(x) subject to x ∈ S, where

Introduction. Linear Programming. Neil Laws. TT 00. A general optimization problem is of the form: choose x to maximise f(x) subject to x ∈ S, where x = (x_1, ..., x_n)^T, f : R^n → R is the objective function, S

### LECTURE 5: DUALITY AND SENSITIVITY ANALYSIS. 1. Dual linear program 2. Duality theory 3. Sensitivity analysis 4. Dual simplex method

LECTURE 5: DUALITY AND SENSITIVITY ANALYSIS 1. Dual linear program 2. Duality theory 3. Sensitivity analysis 4. Dual simplex method Introduction to dual linear program Given a constraint matrix A, right

### Linear Programming. April 12, 2005

Linear Programming. April 12, 2005. Parts of this were adapted from Chapter 9 of Introduction to Algorithms (Second Edition) by Cormen, Leiserson, Rivest and Stein. 1 What is linear programming? The first

### 3 Does the Simplex Algorithm Work?

Does the Simplex Algorithm Work? In this section we carefully examine the simplex algorithm introduced in the previous chapter. Our goal is to either prove that it works, or to determine those circumstances

### Linear Programming in Matrix Form

Linear Programming in Matrix Form Appendix B We first introduce matrix concepts in linear programming by developing a variation of the simplex method called the revised simplex method. This algorithm,

### Optimization in R^n: Introduction

Optimization in R^n: Introduction. Rudi Pendavingh, Eindhoven Technical University. Some optimization problems: designing

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. Systems of Equations and Matrices. Representation of a linear system. The general system of m equations in n unknowns can be written a_{11}x_1 + a_{12}x_2 + ... + a_{1n}x_n = b_1, a

### What is Linear Programming?

Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to

### 26 Linear Programming

The greatest flood has the soonest ebb; the sorest tempest the most sudden calm; the hottest love the coldest end; and from the deepest desire oftentimes ensues the deadliest hate. The extremes of glory

### Systems of Linear Equations

Systems of Linear Equations. Beifang Chen. Systems of linear equations. Linear systems. A linear equation in variables x_1, x_2, ..., x_n is an equation of the form a_1x_1 + a_2x_2 + ... + a_nx_n = b, where a_1, a_2, ..., a_n and

### Solving Linear Programs

Solving Linear Programs 2 In this chapter, we present a systematic procedure for solving linear programs. This procedure, called the simplex method, proceeds by moving from one feasible solution to another,

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm. Lecture notes prepared for MATH 326, Spring 1997. Department of Mathematics and Statistics, University at Albany. William F. Hammond. Table of Contents. Introduction

### 2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system

1. Systems of linear equations. We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x - 5y + 2z + w = 3. The key thing is that we don't multiply the variables

### CLASS 3, GIVEN ON 9/27/2010, FOR MATH 25, FALL 2010

CLASS 3, GIVEN ON 9/27/2010, FOR MATH 25, FALL 2010. 1. Greatest common divisor. Suppose a, b are two integers. If another integer d satisfies d | a and d | b, we call d a common divisor of a, b. Notice that as

### Special Situations in the Simplex Algorithm

Special Situations in the Simplex Algorithm. Degeneracy. Consider the linear program: Maximize 2x_1 + x_2 subject to: 4x_1 + 3x_2 ≤ 12 (1), 4x_1 + x_2 ≤ 8 (2), 4x_1 + 2x_2 ≤ 8 (3), x_1, x_2 ≥ 0. We will first apply the
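This degenerate program can be checked by brute force: every vertex of a two-variable feasible region lies at the intersection of two constraint boundaries. A minimal sketch, assuming the inequalities are 4x_1 + 3x_2 ≤ 12, 4x_1 + x_2 ≤ 8, 4x_1 + 2x_2 ≤ 8 with x_1, x_2 ≥ 0 (the relation symbols did not survive extraction):

```python
from itertools import combinations

# Constraints written as a*x + b*y <= c; the <= directions are an
# assumption, since the inequality symbols were lost in extraction.
cons = [
    (4, 3, 12),   # (1)
    (4, 1, 8),    # (2)
    (4, 2, 8),    # (3)
    (-1, 0, 0),   # x1 >= 0
    (0, -1, 0),   # x2 >= 0
]

def feasible(x, y, tol=1e-9):
    return all(a*x + b*y <= c + tol for a, b, c in cons)

# A vertex is the intersection of two constraint boundaries;
# enumerate all pairs and keep the feasible intersections.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:
        continue  # parallel boundaries: no unique intersection
    x = (c1*b2 - c2*b1) / det
    y = (a1*c2 - a2*c1) / det
    if feasible(x, y):
        vertices.append((x, y))

best = max(vertices, key=lambda v: 2*v[0] + v[1])
print(best, 2*best[0] + best[1])  # optimal value 4, attained at (2,0) and (0,4)
```

Under these assumed inequalities the optimum 4 is attained at both (2, 0) and (0, 4), and at (2, 0) three constraints — (2), (3), and x_2 ≥ 0 — are active in two dimensions, which is exactly the degeneracy the excerpt goes on to discuss.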

### Linear Dependence Tests

Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

### MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

### Notes on Determinant

ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

### Lecture 3: Linear Programming Relaxations and Rounding

Lecture 3: Linear Programming Relaxations and Rounding 1 Approximation Algorithms and Linear Relaxations For the time being, suppose we have a minimization problem. Many times, the problem at hand can

### Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam, January 29, 2015. ( points) (a) Complete the following sentence: A set of vectors {v_1, ..., v_k} is defined to be linearly dependent if (2 points) there exist c_1, ..., c_k ∈ R, not

### Chapter 6. Linear Programming: The Simplex Method. Introduction to the Big M Method. Section 4 Maximization and Minimization with Problem Constraints

Chapter 6. Linear Programming: The Simplex Method. Introduction to the Big M Method. In this section, we will present a generalized version of the simplex method that will solve both maximization and

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013. MATH 171 BASIC LINEAR ALGEBRA. B. KITCHENS. 1 Lines in two-dimensional space. The equation (1) 2x - y = 3 describes a line in two-dimensional space. The coefficients of x and y in the equation

### THE DIMENSION OF A VECTOR SPACE

THE DIMENSION OF A VECTOR SPACE KEITH CONRAD This handout is a supplementary discussion leading up to the definition of dimension and some of its basic properties. Let V be a vector space over a field

### SYSTEMS OF EQUATIONS

SYSTEMS OF EQUATIONS 1. Examples of systems of equations Here are some examples of systems of equations. Each system has a number of equations and a number (not necessarily the same) of variables for which

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. 1. SYSTEMS OF EQUATIONS AND MATRICES. 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a_{11}x_1 + a_{12}x_2 +

### Elementary Number Theory We begin with a bit of elementary number theory, which is concerned

CONSTRUCTION OF THE FINITE FIELDS Z_p. S. R. DOTY. Elementary Number Theory. We begin with a bit of elementary number theory, which is concerned solely with questions about the set of integers Z = {0, ±1,
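Arithmetic in the field Z_p can be sketched in a few lines. The example below is an illustration (not from Doty's notes); it uses Fermat's little theorem, a^(p-2) ≡ a^(-1) (mod p), to invert nonzero elements:

```python
def inverse_mod(a, p):
    """Inverse of a in Z_p, assuming p is prime and a % p != 0.

    By Fermat's little theorem, a^(p-1) ≡ 1 (mod p), so a^(p-2) is
    the multiplicative inverse of a.
    """
    return pow(a, p - 2, p)

# Every nonzero element of Z_7 has an inverse — that is what makes
# Z_p a field rather than just a ring.
p = 7
for a in range(1, p):
    assert (a * inverse_mod(a, p)) % p == 1

print(inverse_mod(3, 7))  # 5, since 3*5 = 15 ≡ 1 (mod 7)
```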

### OPRE 6201 : 2. Simplex Method

OPRE 6201 : 2. Simplex Method. 1 The Graphical Method: An Example. Consider the following linear program: Max 4x_1 + 3x_2 subject to: 2x_1 + 3x_2 ≤ 6 (1), 3x_1 + 2x_2 ≤ 3 (2), 2x_2 ≤ 5 (3), 2x_1 + x_2 ≤ 4 (4), x_1, x_2

### Introduction to Diophantine Equations

Introduction to Diophantine Equations Tom Davis tomrdavis@earthlink.net http://www.geometer.org/mathcircles September, 2006 Abstract In this article we will only touch on a few tiny parts of the field

### The Graphical Method: An Example

The Graphical Method: An Example. Consider the following linear program: Maximize 4x_1 + 3x_2 subject to: 2x_1 + 3x_2 ≤ 6 (1), 3x_1 + 2x_2 ≤ 3 (2), 2x_2 ≤ 5 (3), 2x_1 + x_2 ≤ 4 (4), x_1, x_2 ≥ 0, where, for ease of reference,

### IEOR 4404 Homework #2 Intro OR: Deterministic Models February 14, 2011 Prof. Jay Sethuraman Page 1 of 5. Homework #2

IEOR 4404 Homework #2. Intro OR: Deterministic Models. February 14, 2011. Prof. Jay Sethuraman. Page 1 of 5. Homework #2.1 (a) What is the optimal solution of this problem? Let us consider that x_1, x_2 and x_3

### Diagonal, Symmetric and Triangular Matrices

Contents. 1 Diagonal, Symmetric and Triangular Matrices. 2 Diagonal Matrices. 2.1 Products, Powers and Inverses of Diagonal Matrices. 2.1.1 Theorem (Powers of Matrices). 2.2 Multiplying Matrices on the Left and Right by

### 1. LINEAR EQUATIONS. A linear equation in n unknowns x 1, x 2,, x n is an equation of the form

1. LINEAR EQUATIONS. A linear equation in n unknowns x_1, x_2, ..., x_n is an equation of the form a_1x_1 + a_2x_2 + ... + a_nx_n = b, where a_1, a_2, ..., a_n, b are given real numbers. For example, with x and

### 2.3 Convex Constrained Optimization Problems

CHAPTER 2. FUNDAMENTAL CONCEPTS IN CONVEX OPTIMIZATION. Theorem 15. Let f : R^n → R and h : R → R. Consider g(x) = h(f(x)) for all x ∈ R^n. The function g is convex if either of the following two conditions

### Linear Programming. Solving LP Models Using MS Excel

SUPPLEMENT TO CHAPTER SIX. Linear Programming. SUPPLEMENT OUTLINE: Introduction; Linear Programming Models; Model Formulation; Graphical Linear Programming; Outline of Graphical Procedure; Plotting

### Row Echelon Form and Reduced Row Echelon Form

These notes closely follow the presentation of the material given in David C. Lay's textbook Linear Algebra and its Applications (3rd edition). These notes are intended primarily for in-class presentation

### Notes on Factoring. MA 206 Kurt Bryan

The General Approach. Notes on Factoring. MA 206 Kurt Bryan. Suppose I hand you n, a 2 digit integer and tell you that n is composite, with smallest prime factor around 5 digits. Finding a nontrivial factor

### Basic Terminology for Systems of Equations in a Nutshell. E. L. Lady. 3x_1 - 7x_2 + 4x_3 = 0, 5x_1 + 8x_2 - 12x_3 = 0.

Basic Terminology for Systems of Equations in a Nutshell. E. L. Lady. A system of linear equations is something like the following: 3x_1 - 7x_2 + 4x_3 = 0, 5x_1 + 8x_2 - 12x_3 = 0. Note that the number of equations is not required

### 1.2 Solving a System of Linear Equations

1.2. SOLVING A SYSTEM OF LINEAR EQUATIONS. 1.2 Solving a System of Linear Equations. 1.2.1 Simple Systems - Basic Definitions. As noticed above, the general form of a linear system of m equations in n variables

### EXCEL SOLVER TUTORIAL

ENGR62/MS&E111, Autumn 2003–2004. Prof. Ben Van Roy. October 1, 2003. EXCEL SOLVER TUTORIAL. This tutorial will introduce you to some essential features of Excel and its plug-in, Solver, that we will be using

### Solving Systems of Linear Equations Using Matrices

Solving Systems of Linear Equations Using Matrices What is a Matrix? A matrix is a compact grid or array of numbers. It can be created from a system of equations and used to solve the system of equations.

### Zeros of Polynomial Functions

Zeros of Polynomial Functions. The Rational Zero Theorem. If f(x) = a_nx^n + a_{n-1}x^{n-1} + ... + a_1x + a_0 has integer coefficients and p/q (where p/q is reduced) is a rational zero, then p is a factor of
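The theorem reduces the hunt for rational zeros to a finite check: test every ±p/q with p dividing a_0 and q dividing a_n. A sketch in Python; the polynomial 2x^3 + 3x^2 - 8x + 3 is an illustrative choice, not taken from the excerpt:

```python
from fractions import Fraction
from itertools import product

def rational_zeros(coeffs):
    """All rational zeros of a polynomial with integer coefficients.

    coeffs is [a_n, ..., a_1, a_0] with a_0 != 0. By the Rational Zero
    Theorem, every rational zero p/q (in lowest terms) has p | a_0 and
    q | a_n, so only finitely many candidates need testing.
    """
    a_n, a_0 = coeffs[0], coeffs[-1]

    def divisors(m):
        m = abs(m)
        return [d for d in range(1, m + 1) if m % d == 0]

    def f(x):
        # Horner evaluation in exact rational arithmetic.
        y = Fraction(0)
        for c in coeffs:
            y = y * x + c
        return y

    candidates = {s * Fraction(p, q)
                  for p, q in product(divisors(a_0), divisors(a_n))
                  for s in (1, -1)}
    return sorted(c for c in candidates if f(c) == 0)

print(rational_zeros([2, 3, -8, 3]))  # [Fraction(-3, 1), Fraction(1, 2), Fraction(1, 1)]
```

Using `Fraction` keeps the test f(p/q) == 0 exact, so no floating-point tolerance is needed.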

### Solving Linear Systems, Continued and The Inverse of a Matrix

Solving Linear Systems, Continued, and The Inverse of a Matrix. Calculus III, Summer 2013, Session II. Monday, July 15, 2013. Agenda: 1. The rank of a matrix. 2. The inverse of a square matrix. Gaussian elimination. Gaussian elimination solves a linear system by reducing

### 5 Homogeneous systems

5 Homogeneous systems. Definition: A homogeneous (ho-mo-jeen'-i-us) system of linear algebraic equations is one in which all the numbers on the right hand side are equal to 0: a_{11}x_1 + ... + a_{1n}x_n = 0, ..., a_m

### Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

### Definition 1 Let a and b be positive integers. A linear combination of a and b is any number n = ax + by, (1) where x and y are whole numbers.

Greatest Common Divisors and Linear Combinations. Let a and b be positive integers. The greatest common divisor of a and b (gcd(a, b)) has a close and very useful connection to things called linear combinations
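The connection the excerpt alludes to is Bézout's identity: gcd(a, b) = ax + by for some integers x and y, and the extended Euclidean algorithm produces those coefficients. A minimal sketch (an illustration, not the note's own development):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # Recursion gives g = b*x + (a % b)*y; since a % b = a - (a // b)*b,
    # regrouping yields g = a*y + b*(x - (a // b)*y).
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(252, 198)
print(g, x, y)  # g = 18, and 252*x + 198*y == 18
```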

### Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS

Sensitivity Analysis 3 We have already been introduced to sensitivity analysis in Chapter via the geometry of a simple example. We saw that the values of the decision variables and those of the slack and

### Induction Problems. Tom Davis November 7, 2005

Induction Problems. Tom Davis. tomrdavis@earthlink.net. http://www.geometer.org/mathcircles. November 7, 2005. All of the following problems should be proved by mathematical induction. The problems are not necessarily

### Linear Programming I

Linear Programming I November 30, 2003 1 Introduction In the VCR/guns/nuclear bombs/napkins/star wars/professors/butter/mice problem, the benevolent dictator, Bigus Piguinus, of south Antarctica penguins

### a_{11}x_1 + a_{12}x_2 + ... + a_{1n}x_n = b_1, a_{21}x_1 + a_{22}x_2 + ... + a_{2n}x_n = b_2.

Chapter 1. LINEAR EQUATIONS. 1.1 Introduction to linear equations. A linear equation in n unknowns x_1, x_2, ..., x_n is an equation of the form a_1x_1 + a_2x_2 + ... + a_nx_n = b, where a_1, a_2, ..., a_n, b are given

### 7 Gaussian Elimination and LU Factorization

7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method
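As a companion to the excerpt, a minimal pure-Python Gaussian elimination with partial pivoting (a sketch, not the text's own implementation):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    A is a list of rows, b a list of right-hand sides; assumes A is
    square and nonsingular.
    """
    n = len(A)
    # Build the augmented matrix [A | b] so row operations act on both.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # Partial pivoting: bring the largest pivot into position.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]
```

Pivoting on the largest available entry is what keeps the elimination numerically stable, the concern that motivates the LU discussion in the excerpt.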

### x if x ≥ 0, -x if x < 0.

Chapter 3 Sequences In this chapter, we discuss sequences. We say what it means for a sequence to converge, and define the limit of a convergent sequence. We begin with some preliminary results about the

### Math 240: Linear Systems and Rank of a Matrix

Math 240: Linear Systems and Rank of a Matrix. Ryan Blair. University of Pennsylvania. Thursday, January 20, 2011

### Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

### Operation Research. Module 1. Module 2. Unit 1. Unit 2. Unit 3. Unit 1

Operation Research Module 1 Unit 1 1.1 Origin of Operations Research 1.2 Concept and Definition of OR 1.3 Characteristics of OR 1.4 Applications of OR 1.5 Phases of OR Unit 2 2.1 Introduction to Linear

### DETERMINANTS

DETERMINANTS. 1 Systems of two equations in two unknowns. A system of two equations in two unknowns has the form a_{11}x_1 + a_{12}x_2 = b_1, a_{21}x_1 + a_{22}x_2 = b_2. This can be written more concisely in

### 24. The Branch and Bound Method

24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

### Integer Factorization using the Quadratic Sieve

Integer Factorization using the Quadratic Sieve Chad Seibert* Division of Science and Mathematics University of Minnesota, Morris Morris, MN 56567 seib0060@morris.umn.edu March 16, 2011 Abstract We give

### MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3

MATH2210 Notebook 1, Fall Semester 2016/2017, prepared by Professor Jenny Baglivo. Copyright 2009–2017 by Jenny A. Baglivo. All Rights Reserved. Contents: MATH2210 Notebook 1. 1.1 Solving Systems of Linear Equations

### Some Notes on Taylor Polynomials and Taylor Series

Some Notes on Taylor Polynomials and Taylor Series. Mark MacLean. October 3, 27. UBC's courses MATH /8 and MATH introduce students to the ideas of Taylor polynomials and Taylor series in a fairly limited

### SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89. by Joseph Collison

SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89 by Joseph Collison Copyright 2000 by Joseph Collison All rights reserved Reproduction or translation of any part of this work beyond that permitted by Sections

### The Equivalence of Linear Programs and Zero-Sum Games

The Equivalence of Linear Programs and Zero-Sum Games Ilan Adler, IEOR Dep, UC Berkeley adler@ieor.berkeley.edu Abstract In 1951, Dantzig showed the equivalence of linear programming problems and two-person

### 1. (a) Multiply by negative one to make the problem into a min.

Econ 172A, W2001: Final Examination, Possible Answers There were 480 possible points. The first question was worth 80 points (40,30,10); the second question was worth 60 points (10 points for the first

### Solution to Homework 2

Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

### Lecture 2: August 29. Linear Programming (part I)

10-725: Convex Optimization Fall 2013 Lecture 2: August 29 Lecturer: Barnabás Póczos Scribes: Samrachana Adhikari, Mattia Ciollaro, Fabrizio Lecci Note: LaTeX template courtesy of UC Berkeley EECS dept.

### EQUATIONS and INEQUALITIES

EQUATIONS and INEQUALITIES Linear Equations and Slope 1. Slope a. Calculate the slope of a line given two points b. Calculate the slope of a line parallel to a given line. c. Calculate the slope of a line

### LS.6 Solution Matrices

LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

### Solutions to Linear Algebra Practice Problems 1. form (because the leading 1 in the third row is not to the right of the

Solutions to Linear Algebra Practice Problems. Determine which of the following augmented matrices are in row echelon form, row reduced echelon form or neither. Also determine which variables are free

### Name: Section Registered In:

Name: Section Registered In: Math 125 Exam 3, Version 1, April 24, 2006. 60 total points possible. 1. (5pts) Use Cramer's Rule to solve 3x + 4y = 30, x - 2y = 8. Be sure to show enough detail that shows you are
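Cramer's rule for this 2×2 system can be checked mechanically. The sketch below reads the second equation as x - 2y = 8 (the minus sign is an assumption; the symbol was lost in extraction):

```python
# Cramer's rule for a1*x + b1*y = c1, a2*x + b2*y = c2.
def cramer_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1        # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("no unique solution")
    x = (c1 * b2 - c2 * b1) / det  # numerator: RHS replaces the x-column
    y = (a1 * c2 - a2 * c1) / det  # numerator: RHS replaces the y-column
    return x, y

print(cramer_2x2(3, 4, 30, 1, -2, 8))  # (9.2, 0.6)
```

Substituting back: 3(9.2) + 4(0.6) = 30 and 9.2 - 2(0.6) = 8, as required.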

### Actually Doing It! 6. Prove that the regular unit cube (say 1cm=unit) of sufficiently high dimension can fit inside it the whole city of New York.

1: 1. Compute a random 4-dimensional polytope P as the convex hull of 10 random points using rand sphere(4,10). Run VISUAL to see a Schlegel diagram. How many 3-dimensional polytopes do you see? How many

### Vector and Matrix Norms

Chapter 1. Vector and Matrix Norms. 1.1 Vector Spaces. Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars. A Vector Space, V, over the field F is a non-empty

### Solving Linear Diophantine Matrix Equations Using the Smith Normal Form (More or Less)

Solving Linear Diophantine Matrix Equations Using the Smith Normal Form (More or Less) Raymond N. Greenwell 1 and Stanley Kertzner 2 1 Department of Mathematics, Hofstra University, Hempstead, NY 11549

### This exposition of linear programming

Linear Programming and the Simplex Method David Gale This exposition of linear programming and the simplex method is intended as a companion piece to the article in this issue on the life and work of George

### Chapter 2 Solving Linear Programs

Chapter 2 Solving Linear Programs Companion slides of Applied Mathematical Programming by Bradley, Hax, and Magnanti (Addison-Wesley, 1977) prepared by José Fernando Oliveira Maria Antónia Carravilla A

### Kevin James. MTHSC 412 Section 2.4 Prime Factors and Greatest Comm

MTHSC 412 Section 2.4 Prime Factors and Greatest Common Divisor. Greatest Common Divisor. Definition. Suppose that a, b ∈ Z. Then we say that d ∈ Z is a greatest common divisor (gcd) of a and b if the following

### CHAPTER 9. Integer Programming

CHAPTER 9. Integer Programming. An integer linear program (ILP) is, by definition, a linear program with the additional constraint that all variables take integer values: (9.1) max c^T x s.t. Ax ≤ b and x integral

### Chapter 4. Duality. 4.1 A Graphical Example

Chapter 4 Duality Given any linear program, there is another related linear program called the dual. In this chapter, we will develop an understanding of the dual linear program. This understanding translates

### Solving Systems of Linear Equations. Substitution

Solving Systems of Linear Equations There are two basic methods we will use to solve systems of linear equations: Substitution Elimination We will describe each for a system of two equations in two unknowns,

### Nonlinear Programming Methods.S2 Quadratic Programming

Nonlinear Programming Methods.S2 Quadratic Programming Operations Research Models and Methods Paul A. Jensen and Jonathan F. Bard A linearly constrained optimization problem with a quadratic objective

### Linear Programming Problems

Linear Programming Problems Linear programming problems come up in many applications. In a linear programming problem, we have a function, called the objective function, which depends linearly on a number

### a = bq + r where 0 ≤ r < b.

Lecture 5: Euclid's algorithm. Introduction. The fundamental arithmetic operations are addition, subtraction, multiplication and division. But there is a fifth operation which I would argue is just as fundamental
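That fifth operation — division with remainder, a = bq + r with 0 ≤ r < b — is the engine of Euclid's algorithm: replace (a, b) by (b, r) until the remainder vanishes. A minimal sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b).

    Each step uses division with remainder, a = b*q + r with 0 <= r < b;
    gcd(a, b) = gcd(b, r), and the last nonzero remainder is the answer.
    """
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```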

### Using the Simplex Method to Solve Linear Programming Maximization Problems J. Reeb and S. Leavengood

PERFORMANCE EXCELLENCE IN THE WOOD PRODUCTS INDUSTRY EM 8720-E October 1998 \$3.00 Using the Simplex Method to Solve Linear Programming Maximization Problems J. Reeb and S. Leavengood A key problem faced

### 8 Primes and Modular Arithmetic

8 Primes and Modular Arithmetic 8.1 Primes and Factors Over two millennia ago already, people all over the world were considering the properties of numbers. One of the simplest concepts is prime numbers.
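One of the oldest ways to list the primes the excerpt introduces is the sieve of Eratosthenes, which fits in a few lines (a sketch, not from the excerpt itself):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: cross out every multiple of each prime."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]      # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Multiples below p*p were already crossed out by smaller primes.
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return [p for p in range(2, n + 1) if is_prime[p]]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```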

### Standard Form of a Linear Programming Problem

494 CHAPTER 9 LINEAR PROGRAMMING 9. THE SIMPLEX METHOD: MAXIMIZATION For linear programming problems involving two variables, the graphical solution method introduced in Section 9. is convenient. However,

### MATH10040 Chapter 2: Prime and relatively prime numbers

MATH10040 Chapter 2: Prime and relatively prime numbers Recall the basic definition: 1. Prime numbers Definition 1.1. Recall that a positive integer is said to be prime if it has precisely two positive

### Lecture 6. Inverse of Matrix

Lecture 6. Inverse of Matrix. Recall that any linear system can be written as a matrix equation Ax = b. In the one-dimensional case, i.e., A is 1×1, Ax = b can be easily solved: Ax = b ⇒ A^{-1}Ax = A^{-1}b ⇒ x = A^{-1}b, provided that
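The 2×2 analogue of x = A^{-1}b can be sketched with the adjugate formula inv([[a, b], [c, d]]) = (1/(ad - bc)) [[d, -b], [-c, a]] — an illustration, not taken from the lecture above:

```python
def solve_2x2(A, b):
    """Solve A x = b for a 2x2 matrix A via the explicit inverse."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("A is singular; no inverse exists")
    # Adjugate formula for the 2x2 inverse.
    inv = [[a22 / det, -a12 / det],
           [-a21 / det, a11 / det]]
    # x = A^(-1) b
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

print(solve_2x2([[4.0, 7.0], [2.0, 6.0]], [1.0, 0.0]))  # [0.6, -0.2]
```

The proviso at the end of the excerpt is the same in every dimension: the inverse exists only when the determinant (for a 1×1 matrix, A itself) is nonzero.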