LP Geometry

We now briefly turn to a discussion of LP geometry, extending the geometric ideas developed for 2-dimensional LPs to n dimensions. In this regard, the key geometric idea is the notion of a hyperplane.

Definition. A hyperplane in R^n is any set of the form

    H(a, β) = {x : aᵀx = β},

where a ∈ R^n, β ∈ R, and a ≠ 0.

We have the following important fact, whose proof we leave as an exercise for the reader.

Fact. H ⊂ R^n is a hyperplane if and only if the set

    H − x0 = {x − x0 : x ∈ H},  where x0 ∈ H,

is a subspace of R^n of dimension (n − 1).

Every hyperplane H(a, β) generates two closed half-spaces:

    H+(a, β) = {x ∈ R^n : aᵀx ≥ β}   and   H−(a, β) = {x ∈ R^n : aᵀx ≤ β}.

Note that the constraint region for a linear program is the intersection of finitely many closed half-spaces: setting

    H_j = {x : e_jᵀx ≥ 0}   for j = 1, ..., n

and

    H_{n+i} = {x : Σ_{j=1}^n a_ij x_j ≤ b_i}   for i = 1, ..., m,

we have

    {x : Ax ≤ b, 0 ≤ x} = ∩_{i=1}^{n+m} H_i.

Any set that can be represented in this way is called a convex polyhedron.

Definition. Any subset of R^n that can be represented as the intersection of finitely many closed half-spaces is called a convex polyhedron.

Therefore, linear programming is simply the problem of either maximizing or minimizing a linear function over a convex polyhedron. We now develop some of the underlying geometry of convex polyhedra.
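The half-space description above can be checked mechanically: a point belongs to the polyhedron {x : Ax ≤ b, 0 ≤ x} exactly when it satisfies every one of the closed half-space constraints. A minimal sketch in Python, using exact rational arithmetic; the matrix A and vector b here are made-up illustration data, not taken from the text:

```python
from fractions import Fraction as F

# A point x lies in {x : Ax <= b, 0 <= x} iff it satisfies every
# half-space constraint.  (Illustrative data; A and b are assumptions.)
def in_polyhedron(A, b, x):
    if any(xj < 0 for xj in x):                 # the half-spaces x_j >= 0
        return False
    for row, bi in zip(A, b):                   # the half-spaces a_i . x <= b_i
        if sum(aij * xj for aij, xj in zip(row, x)) > bi:
            return False
    return True

A = [[F(1), F(2)], [F(3), F(-1)]]
b = [F(4), F(3)]
print(in_polyhedron(A, b, [F(1), F(1)]))   # True: 1+2 <= 4 and 3-1 <= 3
print(in_polyhedron(A, b, [F(2), F(2)]))   # False: 2+4 > 4
```

Each constraint is tested independently, mirroring the fact that the polyhedron is an intersection of half-spaces.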
Fact .4. Given any two points x and y in R^n, the line segment connecting them is given by

    [x, y] = {(1 − λ)x + λy : 0 ≤ λ ≤ 1}.

Definition .5. A subset C ⊂ R^n is said to be convex if [x, y] ⊂ C whenever x, y ∈ C.

Fact .6. A convex polyhedron is a convex set.

We now consider the notion of vertex, or corner point, for convex polyhedra in R^2. For this, consider the polyhedron C ⊂ R^2 defined by the constraints

(.6)    c1 : −x1 − x2 ≤ −2
        c2 :  3x1 − 4x2 ≤ 0
        c3 : −x1 + 3x2 ≤ 6.

[Figure: the triangular region C with vertices v1, v2, v3 and bounding lines c1, c2, c3; both axes run from 0 to 5.]

The vertices are v1 = (8/7, 6/7), v2 = (0, 2), and v3 = (24/5, 18/5). One of our goals in this section is to discover an intrinsic geometric property of these vertices that generalizes to n dimensions and simultaneously captures our intuitive notion of what a vertex is. For this we examine our notion of convexity, which is based on line segments. Is there a way to use line segments to make precise our notion of vertex?
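Each vertex of a 2-dimensional polyhedron like C lies at the intersection of two of its constraint lines, so it can be computed by solving a 2×2 linear system with Cramer's rule. The sketch below uses exact rational arithmetic; the constraint coefficients are a best-effort reading of the example's data, so treat them as assumptions:

```python
from fractions import Fraction as F

# Intersect two constraint lines a1*x + a2*y = r (Cramer's rule).
# Constraint data as reconstructed from the example -- an assumption.
def intersect(p, q):
    (a1, a2, r1), (b1, b2, r2) = p, q
    det = a1 * b2 - a2 * b1
    return ((r1 * b2 - a2 * r2) / det, (a1 * r2 - r1 * b1) / det)

c1 = (F(-1), F(-1), F(-2))   # c1 : -x1 -  x2 <= -2
c2 = (F(3), F(-4), F(0))     # c2 :  3x1 - 4x2 <=  0
c3 = (F(-1), F(3), F(6))     # c3 : -x1 + 3x2 <=  6

v1 = intersect(c1, c2)       # c1 and c2 as equations
v2 = intersect(c1, c3)       # c1 and c3 as equations
v3 = intersect(c2, c3)       # c2 and c3 as equations
print(v1, v2, v3)            # exact fractions for the three vertices
```

Under these assumed coefficients the solver reproduces v1 = (8/7, 6/7), v2 = (0, 2), and v3 = (24/5, 18/5).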
Consider any of the vertices in the polyhedron C defined by (.6). Note that any line segment in C that contains one of these vertices must have the vertex as one of its end points. Vertices are the only points that have this property. In addition, this property easily generalizes to convex polyhedra in R^n. This is the rigorous mathematical formulation for our notion of vertex that we seek. It is simple, has intuitive appeal, and yields the correct objects in dimensions 2 and 3.

Definition .7. Let C be a convex polyhedron. We say that x ∈ C is a vertex of C if whenever x ∈ [u, v] for some u, v ∈ C, it must be the case that either x = u or x = v.

This definition says that a point is a vertex if and only if, whenever that point is a member of a line segment contained in the polyhedron, it must be one of the end points of the line segment. In particular, this implies that vertices must lie in the boundary of the set and the set must somehow make a corner at that point. Our next result gives an important and useful characterization of the vertices of convex polyhedra.

Theorem .8 (Fundamental Representation Theorem for Vertices). A point x̄ in the convex polyhedron C = {x ∈ R^n : Tx ≤ g}, where T = (t_ij) is s × n and g ∈ R^s, is a vertex of this polyhedron if and only if there exists an index set I ⊂ {1, ..., s} such that x̄ is the unique solution to the system of equations

(.7)    Σ_{j=1}^n t_ij x_j = g_i,   i ∈ I.

Moreover, if x̄ is a vertex, then one can take |I| = n in (.7), where |I| denotes the number of elements in I.

Proof: We first prove that if there exists an index set I ⊂ {1, ..., s} such that x = x̄ is the unique solution to the system of equations (.7), then x̄ is a vertex of the polyhedron C. We do this by proving the contrapositive; that is, we assume that x̄ ∈ C is not a vertex and show that it cannot be the unique solution to any system of the form (.7) with I ⊂ {1, 2, ..., s}. If x̄ is not a vertex of C, then there exist u, v ∈ C with u ≠ v and 0 < λ < 1 such that x̄ = (1 − λ)u + λv.
Let A(x) denote the set of active indices at x:

    A(x) = { i : Σ_{j=1}^n t_ij x_j = g_i }.

For every i ∈ A(x̄),

(.8)    Σ_j t_ij x̄_j = g_i,   Σ_j t_ij u_j ≤ g_i,   and   Σ_j t_ij v_j ≤ g_i.
Therefore, for each i ∈ A(x̄),

    0 = g_i − Σ_j t_ij x̄_j
      = g_i − Σ_j t_ij ((1 − λ)u_j + λv_j)
      = (1 − λ)[g_i − Σ_j t_ij u_j] + λ[g_i − Σ_j t_ij v_j]
      ≥ 0.

Hence

    0 = (1 − λ)[g_i − Σ_j t_ij u_j] + λ[g_i − Σ_j t_ij v_j],

which implies that

    g_i = Σ_j t_ij u_j   and   g_i = Σ_j t_ij v_j,

since both [g_i − Σ_j t_ij u_j] and [g_i − Σ_j t_ij v_j] are non-negative. That is, A(x̄) ⊂ A(u) ∩ A(v). Now if I ⊂ {1, 2, ..., s} is such that (.7) holds at x = x̄, then I ⊂ A(x̄). But then (.7) must also hold for x = u and x = v, since A(x̄) ⊂ A(u) ∩ A(v). Therefore, x̄ is not the unique solution to (.7) for any choice of I ⊂ {1, 2, ..., s}.

Let x̄ ∈ C. We now show that if x̄ is a vertex of C, then there exists an index set I ⊂ {1, ..., s} such that x = x̄ is the unique solution to the system of equations (.7). Again we establish this by contraposition; that is, we assume that x̄ ∈ C is such that, for every index set I ⊂ {1, 2, ..., s} for which x = x̄ satisfies the system (.7), there exists w ∈ R^n with w ≠ x̄ such that (.7) also holds with x = w; we then show that x̄ cannot be a vertex of C. To this end, take I = A(x̄), let w ∈ R^n with w ≠ x̄ be such that (.7) holds with x = w and I = A(x̄), and set u = w − x̄. Since x̄ ∈ C, we know that

    Σ_j t_ij x̄_j < g_i   for all i ∈ {1, 2, ..., s} \ A(x̄).

Hence, by continuity, there exists τ ∈ (0, 1] such that

(.9)    Σ_j t_ij (x̄_j + t u_j) < g_i   for all i ∈ {1, 2, ..., s} \ A(x̄) and |t| ≤ τ.

Also note that, for every i ∈ A(x̄),

    Σ_j t_ij (x̄_j ± τ u_j) = (Σ_j t_ij x̄_j) ± τ (Σ_j t_ij w_j − Σ_j t_ij x̄_j) = g_i ± τ (g_i − g_i) = g_i.
Combining these relations with (.9), we find that x̄ + τu and x̄ − τu are both in C. Since

    x̄ = (1/2)(x̄ + τu) + (1/2)(x̄ − τu)   and   τu ≠ 0,

x̄ cannot be a vertex of C.

It remains to prove the final statement of the theorem. Let x̄ be a vertex of C and let I ⊂ {1, 2, ..., s} be such that x̄ is the unique solution to the system (.7). First note that since the system (.7) is consistent and its solution unique, we must have |I| ≥ n; otherwise, there would be infinitely many solutions, since the system has a non-trivial null space when n > |I|. So we may as well assume that |I| > n. Let J ⊂ I be such that the vectors t_i = (t_i1, t_i2, ..., t_in)ᵀ, i ∈ J, form a maximal linearly independent subset of the set of vectors t_i, i ∈ I. That is, the vectors t_i, i ∈ J, form a basis for the subspace spanned by the vectors t_i, i ∈ I. Clearly, |J| ≤ n, since these vectors reside in R^n and are linearly independent. Moreover, each of the vectors t_r for r ∈ I \ J can be written as a linear combination of the vectors t_i for i ∈ J:

    t_r = Σ_{i ∈ J} λ_ri t_i,   r ∈ I \ J.

Therefore,

    g_r = t_rᵀ x̄ = Σ_{i ∈ J} λ_ri t_iᵀ x̄ = Σ_{i ∈ J} λ_ri g_i,   r ∈ I \ J,

which implies that any solution to the system

(.0)    t_iᵀ x = g_i,   i ∈ J,

is necessarily a solution to the larger system (.7). But then the smaller system (.0) must have x̄ as its unique solution; otherwise, the system (.7) would have more than one solution. Finally, since the solution to (.0) is unique and |J| ≤ n, we must in fact have |J| = n, which completes the proof. □

We now apply this result to obtain a characterization of the vertices of the constraint region of an LP in standard form.

Corollary. A point x̄ in the convex polyhedron described by the system of inequalities Ax ≤ b and 0 ≤ x, where A = (a_ij) is m × n, is a vertex of this polyhedron if and only if there exist index sets I ⊂ {1, ..., m} and J ⊂ {1, ..., n} with |I| + |J| = n such that x̄ is the unique solution to the system of equations

(.9)    Σ_{j=1}^n a_ij x_j = b_i,  i ∈ I,   and   x_j = 0,  j ∈ J.
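The corollary suggests a very inefficient but instructive way to enumerate all vertices: try every choice of n active hyperplanes drawn from the rows a_i·x = b_i and the coordinate hyperplanes x_j = 0, solve each square system, and keep the feasible points with unique solutions. A toy sketch with made-up data (a single constraint x1 + x2 ≤ 2 in R^2, together with x ≥ 0):

```python
from fractions import Fraction as F
from itertools import combinations

# Toy data (not from the text): {x in R^2 : x1 + x2 <= 2, x >= 0}.
A, b, n = [[F(1), F(1)]], [F(2)], 2

def solve2(rows, rhs):
    (a, c), (d, e) = rows
    det = a * e - c * d
    if det == 0:
        return None                       # no unique solution
    return ((rhs[0] * e - c * rhs[1]) / det,
            (a * rhs[1] - rhs[0] * d) / det)

def feasible(x):
    return all(xj >= 0 for xj in x) and all(
        sum(aij * xj for aij, xj in zip(row, x)) <= bi
        for row, bi in zip(A, b))

# Candidate hyperplanes: a_i . x = b_i (index sets I) and x_j = 0 (sets J).
hyperplanes = [(row, bi) for row, bi in zip(A, b)]
hyperplanes += [([F(1) if k == j else F(0) for k in range(n)], F(0))
                for j in range(n)]

vertices = set()
for pair in combinations(hyperplanes, n):
    x = solve2([p[0] for p in pair], [p[1] for p in pair])
    if x is not None and feasible(x):
        vertices.add(x)
print(len(vertices))   # the triangle's three vertices: (0,0), (0,2), (2,0)
```

Every vertex found this way has |I| + |J| = n active constraints, exactly as the corollary requires.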
Proof: Take

    T = [ A ]           [ b ]
        [ −I ]   and  g = [ 0 ]

in the previous theorem. □

Recall that the symbols |I| and |J| denote the number of elements in the sets I and J, respectively. The constraint hyperplanes associated with these indices are necessarily a subset of the set of active hyperplanes at the solution to (.9). Theorem .8 is an elementary yet powerful result in the study of convex polyhedra. We make strong use of it in our study of the geometric properties of the simplex algorithm. As a first observation, recall from Math 308 that the coefficient matrix for the system (.9) is necessarily non-singular if this n × n system has a unique solution. How do we interpret this system geometrically, and why does Theorem .8 make intuitive sense? To answer these questions, let us return to the convex polyhedron C defined by (.6). In this case, the dimension n is 2. Observe that each vertex is located at the intersection of precisely two of the bounding constraint lines. Thus, each vertex can be represented as the unique solution to a system of equations of the form

    a11 x1 + a12 x2 = b1
    a21 x1 + a22 x2 = b2,

where the coefficient matrix

    [ a11  a12 ]
    [ a21  a22 ]

is non-singular. For the set C above, we have the following:

(a) The vertex v1 = (8/7, 6/7) is given as the solution to the system

    −x1 − x2 = −2
     3x1 − 4x2 = 0,

(b) the vertex v2 = (0, 2) is given as the solution to the system

    −x1 − x2 = −2
    −x1 + 3x2 = 6,

and (c) the vertex v3 = (24/5, 18/5) is given as the solution to the system

     3x1 − 4x2 = 0
    −x1 + 3x2 = 6.
7 Theorem. indicates that any subsystem of the form (.9) for which the associated coefficient matrix is non-singular, has as its solution a vertex of the polyhedron (.0) Ax b, 0 x if this solution is in the polyhedron. We now connect these ideas to the operation of the simplex algorithm. The system (.0) describes the constraint region for an LP in standard form. It can be expressed componentwise by a ij x j b i i =,...,m 0 x j j =,...,n. The associated slack variables are defined by the equations (.) x n+i = b i a ij x j i =,...,m. Let x = ( x,..., x n+m ) be any solution to the system (.) and set x = ( x,..., x n ) ( x gives values for the decision variables associated with the underlying LP). Note that if for some j J {,...,n} we have x j = 0, then the hyperplane H j = {x R n : e T j x = 0} is active at x, i.e., x H j. Similarly, if for some i I {,,..., m} we have x n+i = 0, then the hyperplane H n+i = {x R n : a ij x j = b i } is active at x, i.e., x H n+i. Next suppose that x is a basic feasible solution for the LP (P) maxc T x subject to Ax b, 0 x. Then it must be the case that n of the components x k, k {,,..., n + m} are assigned to the value zero since every dictionary has m basic and n non-basic variables. That is, every basic feasible solution is in the polyhedron defined by (.0) and is the unique solution to a system of the form (.9). But then, by Theorem., basic feasible solutions correspond precisely to the vertices of the polyhedron defining the constraint region for the LP P!! This amazing geometric fact implies that the simplex algorithm proceeds by moving from vertex to adjacent vertex of the polyhedron given by (.0). This is the essential underlying geometry of the simplex algorithm for linear programming! 7
By way of illustration, let us observe this behavior for the LP

(.)    maximize    x1 + 4x2
       subject to  −x1 + x2 ≤ 2
                    2x1 − x2 ≤ 4
                    0 ≤ x1 ≤ 3,   0 ≤ x2 ≤ 4.

The constraint region for this LP is graphed on the next page.

[Figure: the constraint region, a hexagon with vertices V1 through V6.]

The simplex algorithm yields the following pivots:
    vertex V1 = (0, 0)  →  vertex V2 = (0, 2)  →  vertex V3 = (2, 4)  →  vertex V4 = (3, 4)

The Geometry of Degeneracy

We now give a geometric interpretation of degeneracy in linear programming. Recall that a basic feasible solution, or vertex, is said to be degenerate if one or more of the basic variables is assigned the value zero. In the notation of (.), this implies that more than n of the hyperplanes H_k, k = 1, 2, ..., n + m, are active at this vertex. By way of illustration, suppose we add the constraints −(1/2)x1 + x2 ≤ 3 and x1 + x2 ≤ 7 to the system of constraints in the LP (.). The picture of the constraint region now looks as follows:
[Figure: the same region with the two added constraint lines passing through V3 and V4; vertices V1 through V6.]

Notice that there are redundant constraints at both of the vertices V3 and V4. Therefore, as we pivot, we should observe that the tableaus associated with these vertices are degenerate.
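Degeneracy can be confirmed by simply counting active constraints at each vertex: in R^2 a vertex is degenerate exactly when more than 2 constraints hold with equality there. The coefficients below follow the reconstructed example data above, so treat them as assumptions:

```python
from fractions import Fraction as F

# Count active constraints at a point; more than n = 2 means degenerate.
# Constraint data as reconstructed above -- coefficients are assumptions.
constraints = [
    ((F(-1), F(1)), F(2)),        # -x1 +  x2 <= 2
    ((F(2), F(-1)), F(4)),        # 2x1 -  x2 <= 4
    ((F(1), F(0)), F(3)),         #  x1       <= 3
    ((F(0), F(1)), F(4)),         #        x2 <= 4
    ((F(-1, 2), F(1)), F(3)),     # added: -(1/2)x1 + x2 <= 3
    ((F(1), F(1)), F(7)),         # added:       x1 + x2 <= 7
    ((F(-1), F(0)), F(0)),        # x1 >= 0
    ((F(0), F(-1)), F(0)),        # x2 >= 0
]

def n_active(x):
    return sum(a[0] * x[0] + a[1] * x[1] == b for a, b in constraints)

print(n_active((0, 2)))   # 2 -> V2 is non-degenerate
print(n_active((2, 4)))   # 3 -> V3 is degenerate
print(n_active((3, 4)))   # 3 -> V4 is degenerate
```

Three active hyperplanes at V3 and V4 is precisely why the corresponding tableaus come out degenerate.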
    vertex V1 = (0, 0)  →  vertex V2 = (0, 2)  →  vertex V3 = (2, 4) (degenerate)  →  vertex V3 = (2, 4) (degenerate)  →  vertex V4 = (3, 4) (optimal, degenerate)

In this way we see that a degenerate pivot arises when we represent the same vertex as the intersection point of a different subset of n active hyperplanes. Cycling implies that we are cycling between different representations of the same vertex. In the example given above, the third pivot is a degenerate pivot. In the third tableau, we represent the vertex
V3 = (2, 4) as the intersection point of the hyperplanes

    −x1 + x2 = 2          (since x3 = 0)

and

    −(1/2)x1 + x2 = 3     (since x5 = 0).

The third pivot brings us to the 4th tableau, where the vertex V3 = (2, 4) is now represented as the intersection of the hyperplanes

    −(1/2)x1 + x2 = 3     (since x5 = 0)

and

    x2 = 4                (since x8 = 0).

Observe that the final tableau is both optimal and degenerate. Just for the fun of it, let's try pivoting on the only negative entry in the 5th row of this tableau (we choose the 5th row since this is the row that exhibits the degeneracy). Pivoting, we obtain another optimal tableau. Observe that this tableau is also optimal, but it provides us with a different set of optimal dual variables. In general, a degenerate optimal tableau implies that the dual problem has infinitely many optimal solutions.

Fact: If an LP has an optimal tableau that is degenerate, then the dual LP has infinitely many optimal solutions.

We will arrive at an understanding of why this fact is true after we examine the geometry of duality.

The Geometry of Duality

Consider the linear program

(.)    maximize    2x1 + x2
       subject to  (1/2)x1 + 2x2 ≤ 4
                    x1 − x2 ≤ 1/2
                    0 ≤ x1, x2.

This LP is solved graphically below.
[Figure: the constraint region with the solution point, the constraint normals n1 and n2, and the objective normal c.]

The solution is x̄ = (2, 3/2). In the picture, the vector n1 = (1/2, 2) is the normal to the hyperplane (1/2)x1 + 2x2 = 4, the vector n2 = (1, −1) is the normal to the hyperplane x1 − x2 = 1/2, and the vector c = (2, 1) is the objective normal. Geometrically, the vector c lies between the vectors n1 and n2. That is to say, the vector c can be represented as a non-negative linear combination of n1 and n2: there exist y1 ≥ 0 and y2 ≥ 0 such that

    c = y1 n1 + y2 n2,

or equivalently,

    (2, 1)ᵀ = y1 (1/2, 2)ᵀ + y2 (1, −1)ᵀ.

Solving for (y1, y2) we have
    y1 = 6/5,   y2 = 7/5.

I claim that the vector ȳ = (6/5, 7/5) is the optimal solution to the dual! Indeed, this result follows from the Complementary Slackness Theorem and gives another way to recover the solution to the dual from the solution to the primal, or, equivalently, to check whether a point that is feasible for the primal is optimal for the primal.

Theorem .4 (Geometric Duality Theorem). Consider the LP

(P)    maximize cᵀx subject to Ax ≤ b, 0 ≤ x,

where A is an m × n matrix. Given a vector x̄ that is feasible for P, define

    Z(x̄) = {j ∈ {1, 2, ..., n} : x̄_j = 0}   and   E(x̄) = {i ∈ {1, ..., m} : Σ_{j=1}^n a_ij x̄_j = b_i}.

The index sets Z(x̄) and E(x̄) are the active indices at x̄ and correspond to the active hyperplanes at x̄. Then x̄ solves P if and only if there exist non-negative scalars r_j, j ∈ Z(x̄), and y_i, i ∈ E(x̄), such that

(.4)    c = −Σ_{j ∈ Z(x̄)} r_j e_j + Σ_{i ∈ E(x̄)} y_i a_i,

where, for each i = 1, ..., m, a_i = (a_i1, a_i2, ..., a_in)ᵀ is the ith column of the matrix Aᵀ, and, for each j = 1, ..., n, e_j is the jth unit coordinate vector. Moreover, the vector ȳ ∈ R^m given by

(.5)    ȳ_i = y_i for i ∈ E(x̄),   ȳ_i = 0 otherwise,

solves the dual problem

(D)    minimize bᵀy subject to Aᵀy ≥ c, 0 ≤ y.
Proof: Let us first suppose that x̄ solves P. Then, by the Strong Duality Theorem, there is a ȳ ∈ R^m solving the dual D with cᵀx̄ = ȳᵀAx̄ = bᵀȳ. We need only show that there exist r_j, j ∈ Z(x̄), such that (.4) and (.5) hold. The Complementary Slackness Theorem implies that

(.6)    ȳ_i = 0   for i ∈ {1, 2, ..., m} \ E(x̄)

and

(.7)    Σ_{i=1}^m ȳ_i a_ij = c_j   for j ∈ {1, ..., n} \ Z(x̄).

Note that (.6) implies that ȳ satisfies (.5). Define r = Aᵀȳ − c. Since ȳ is dual feasible, we have both r ≥ 0 and ȳ ≥ 0. Moreover, by (.7), r_j = 0 for j ∈ {1, ..., n} \ Z(x̄), while

    r_j = Σ_{i=1}^m ȳ_i a_ij − c_j ≥ 0   for j ∈ Z(x̄),

or equivalently,

(.8)    c_j = −r_j + Σ_{i=1}^m ȳ_i a_ij   for j ∈ Z(x̄).

Combining (.8) with (.7) and (.6) gives

    c = −Σ_{j ∈ Z(x̄)} r_j e_j + Aᵀȳ = −Σ_{j ∈ Z(x̄)} r_j e_j + Σ_{i ∈ E(x̄)} ȳ_i a_i,

so that (.4) and (.5) are satisfied with ȳ solving D.

Next suppose that x̄ is feasible for P with r_j, j ∈ Z(x̄), and ȳ_i, i ∈ E(x̄), non-negative and satisfying (.4). We must show that x̄ solves P and ȳ solves D. Let ȳ ∈ R^m be such that its components are given by the ȳ_i's for i ∈ E(x̄) and by (.5) otherwise. Then the non-negativity of the r_j's in (.4) implies that

    Aᵀȳ = Σ_{i ∈ E(x̄)} ȳ_i a_i ≥ −Σ_{j ∈ Z(x̄)} r_j e_j + Σ_{i ∈ E(x̄)} ȳ_i a_i = c,

so that ȳ is feasible for D. Moreover,

    cᵀx̄ = −Σ_{j ∈ Z(x̄)} r_j e_jᵀx̄ + Σ_{i ∈ E(x̄)} ȳ_i a_iᵀx̄ = ȳᵀAx̄ = ȳᵀb,

where the second equality uses e_jᵀx̄ = x̄_j = 0 for j ∈ Z(x̄), and the final equality follows from the definition of the vector ȳ and the index set E(x̄). Hence, by the Weak Duality Theorem, x̄ solves P and ȳ solves D, as required. □
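The optimality test the theorem provides can be exercised numerically: exhibit y ≥ 0 supported on the active constraints with Aᵀy ≥ c and bᵀy = cᵀx̄, and both points are certified optimal by weak duality. The sketch below uses the two-variable example from the Geometry of Duality discussion, with coefficients as reconstructed there (treat them as assumptions):

```python
from fractions import Fraction as F

# Certificate check in the spirit of the theorem: a feasible x is optimal
# once we exhibit y >= 0 on the active rows with A^T y >= c and b.y = c.x.
# Data: the 2-variable duality example (reconstructed coefficients).
A = [[F(1, 2), F(2)], [F(1), F(-1)]]
b = [F(4), F(1, 2)]
c = [F(2), F(1)]
x = [F(2), F(3, 2)]                      # candidate primal solution
y = [F(6, 5), F(7, 5)]                   # multipliers on the active rows

dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
assert all(dot(row, x) == bi for row, bi in zip(A, b))   # both rows active
assert all(yi >= 0 for yi in y)                          # y >= 0
AT = list(zip(*A))
assert all(dot(col, y) >= cj for col, cj in zip(AT, c))  # dual feasible
assert dot(b, y) == dot(c, x)            # equal objectives => both optimal
print(dot(c, x))                         # 11/2
```

All four checks passing is exactly the certificate (.4)–(.5) for this pair (x̄, ȳ).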
Remark: As is apparent from the proof, the Geometric Duality Theorem is nearly equivalent to the Complementary Slackness Theorem, even though it provides a superficially different test for optimality.

We now illustrate how to apply this result with an example. Consider the LP

(.5)    maximize    (3/2)x1 + x2 − x3 + x4
        subject to    x1 + x2 − x3 + 4x4 ≤ 1
                     4x1 − x2 + x3 − x4 ≤ 11
                     2x1 + x2 − x3 − x4 ≤ 3
                      x1 − x2 + x3 − x4 ≤ 4
                     0 ≤ x1, x2, x3, x4.

Does the vector x̄ = (2, 0, 1, 0)ᵀ solve this LP? If it does, then according to Theorem .4 we must be able to construct the solution to the dual of (.5) by representing the objective vector c = (3/2, 1, −1, 1)ᵀ as a non-negative linear combination of the outer normals to the active hyperplanes at x̄. The active hyperplanes are

    x1 + x2 − x3 + 4x4 = 1
    2x1 + x2 − x3 − x4 = 3
    x2 = 0
    x4 = 0.

This means that y2 = y4 = 0, and y1 and y3 (together with the multipliers r2 and r4 on −e2 and −e4) are obtained by solving

    [  1    2    0    0 ] [y1]   [ 3/2 ]
    [  1    1   −1    0 ] [y3] = [  1  ]
    [ −1   −1    0    0 ] [r2]   [ −1  ]
    [  4   −1    0   −1 ] [r4]   [  1  ]

Row reducing, we get y1 = 1/2, y3 = 1/2, r2 = 0, and r4 = 1/2. We now check whether the vector ȳ = (1/2, 0, 1/2, 0) does indeed solve the dual of (.5):

(.6)    minimize    y1 + 11y2 + 3y3 + 4y4
        subject to   y1 + 4y2 + 2y3 + y4 ≥ 3/2
                     y1 − y2 + y3 − y4 ≥ 1
                    −y1 + y2 − y3 + y4 ≥ −1
                     4y1 − y2 − y3 − y4 ≥ 1
                     0 ≤ y1, y2, y3, y4.
Clearly, ȳ is feasible for (.6). In addition, bᵀȳ = 2 = cᵀx̄. Therefore, ȳ solves (.6) and x̄ solves (.5) by the Weak Duality Theorem.
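The multiplier computation from the Geometry of Duality discussion can be replayed exactly: writing c as a non-negative combination of the active normals amounts to a single 2×2 linear solve. The normals and objective below follow the reconstructed example data (an assumption):

```python
from fractions import Fraction as F

# Solve c = y1*n1 + y2*n2 by Cramer's rule (columns n1, n2).
# Data as reconstructed in the duality example -- an assumption.
n1, n2, c = (F(1, 2), F(2)), (F(1), F(-1)), (F(2), F(1))

det = n1[0] * n2[1] - n2[0] * n1[1]
y1 = (c[0] * n2[1] - n2[0] * c[1]) / det
y2 = (n1[0] * c[1] - c[0] * n1[1]) / det
print(y1, y2)                      # 6/5 7/5
assert y1 >= 0 and y2 >= 0         # c lies between the normals n1 and n2
```

Non-negativity of (y1, y2) is the geometric statement that c lies in the cone spanned by n1 and n2, which is what certifies optimality.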
More informationMax-Min Representation of Piecewise Linear Functions
Beiträge zur Algebra und Geometrie Contributions to Algebra and Geometry Volume 43 (2002), No. 1, 297-302. Max-Min Representation of Piecewise Linear Functions Sergei Ovchinnikov Mathematics Department,
More informationT ( a i x i ) = a i T (x i ).
Chapter 2 Defn 1. (p. 65) Let V and W be vector spaces (over F ). We call a function T : V W a linear transformation form V to W if, for all x, y V and c F, we have (a) T (x + y) = T (x) + T (y) and (b)
More informationInner Product Spaces
Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and
More informationLinear Programming Notes VII Sensitivity Analysis
Linear Programming Notes VII Sensitivity Analysis 1 Introduction When you use a mathematical model to describe reality you must make approximations. The world is more complicated than the kinds of optimization
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More information1 if 1 x 0 1 if 0 x 1
Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or
More informationMAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A =
MAT 200, Midterm Exam Solution. (0 points total) a. (5 points) Compute the determinant of the matrix 2 2 0 A = 0 3 0 3 0 Answer: det A = 3. The most efficient way is to develop the determinant along the
More informationMath 312 Homework 1 Solutions
Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please
More informationSeveral Views of Support Vector Machines
Several Views of Support Vector Machines Ryan M. Rifkin Honda Research Institute USA, Inc. Human Intention Understanding Group 2007 Tikhonov Regularization We are considering algorithms of the form min
More informationModule1. x 1000. y 800.
Module1 1 Welcome to the first module of the course. It is indeed an exciting event to share with you the subject that has lot to offer both from theoretical side and practical aspects. To begin with,
More informationRow Echelon Form and Reduced Row Echelon Form
These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (3rd edition) These notes are intended primarily for in-class presentation
More information3. Evaluate the objective function at each vertex. Put the vertices into a table: Vertex P=3x+2y (0, 0) 0 min (0, 5) 10 (15, 0) 45 (12, 2) 40 Max
SOLUTION OF LINEAR PROGRAMMING PROBLEMS THEOREM 1 If a linear programming problem has a solution, then it must occur at a vertex, or corner point, of the feasible set, S, associated with the problem. Furthermore,
More informationSolving Linear Programs
Solving Linear Programs 2 In this chapter, we present a systematic procedure for solving linear programs. This procedure, called the simplex method, proceeds by moving from one feasible solution to another,
More informationMATH1231 Algebra, 2015 Chapter 7: Linear maps
MATH1231 Algebra, 2015 Chapter 7: Linear maps A/Prof. Daniel Chan School of Mathematics and Statistics University of New South Wales danielc@unsw.edu.au Daniel Chan (UNSW) MATH1231 Algebra 1 / 43 Chapter
More informationMathematics Course 111: Algebra I Part IV: Vector Spaces
Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are
More informationMA106 Linear Algebra lecture notes
MA106 Linear Algebra lecture notes Lecturers: Martin Bright and Daan Krammer Warwick, January 2011 Contents 1 Number systems and fields 3 1.1 Axioms for number systems......................... 3 2 Vector
More informationLinear Programming: Theory and Applications
Linear Programming: Theory and Applications Catherine Lewis May 11, 2008 1 Contents 1 Introduction to Linear Programming 3 1.1 What is a linear program?...................... 3 1.2 Assumptions.............................
More informationDegeneracy in Linear Programming
Degeneracy in Linear Programming I heard that today s tutorial is all about Ellen DeGeneres Sorry, Stan. But the topic is just as interesting. It s about degeneracy in Linear Programming. Degeneracy? Students
More informationSimplex method summary
Simplex method summary Problem: optimize a linear objective, subject to linear constraints 1. Step 1: Convert to standard form: variables on right-hand side, positive constant on left slack variables for
More informationLinear Programming. April 12, 2005
Linear Programming April 1, 005 Parts of this were adapted from Chapter 9 of i Introduction to Algorithms (Second Edition) /i by Cormen, Leiserson, Rivest and Stein. 1 What is linear programming? The first
More information4: SINGLE-PERIOD MARKET MODELS
4: SINGLE-PERIOD MARKET MODELS Ben Goldys and Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2015 B. Goldys and M. Rutkowski (USydney) Slides 4: Single-Period Market
More informationLectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain
Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal
More informationINCIDENCE-BETWEENNESS GEOMETRY
INCIDENCE-BETWEENNESS GEOMETRY MATH 410, CSUSM. SPRING 2008. PROFESSOR AITKEN This document covers the geometry that can be developed with just the axioms related to incidence and betweenness. The full
More informationChapter 6. Linear Programming: The Simplex Method. Introduction to the Big M Method. Section 4 Maximization and Minimization with Problem Constraints
Chapter 6 Linear Programming: The Simplex Method Introduction to the Big M Method In this section, we will present a generalized version of the simplex method that t will solve both maximization i and
More informationMath 4310 Handout - Quotient Vector Spaces
Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable
More information9.4 THE SIMPLEX METHOD: MINIMIZATION
SECTION 9 THE SIMPLEX METHOD: MINIMIZATION 59 The accounting firm in Exercise raises its charge for an audit to $5 What number of audits and tax returns will bring in a maximum revenue? In the simplex
More informationOPRE 6201 : 2. Simplex Method
OPRE 6201 : 2. Simplex Method 1 The Graphical Method: An Example Consider the following linear program: Max 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2
More informationOperation Research. Module 1. Module 2. Unit 1. Unit 2. Unit 3. Unit 1
Operation Research Module 1 Unit 1 1.1 Origin of Operations Research 1.2 Concept and Definition of OR 1.3 Characteristics of OR 1.4 Applications of OR 1.5 Phases of OR Unit 2 2.1 Introduction to Linear
More informationTHREE DIMENSIONAL GEOMETRY
Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,
More informationLEARNING OBJECTIVES FOR THIS CHAPTER
CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. Finite-Dimensional
More informationInner Product Spaces and Orthogonality
Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,
More informationSolving Linear Systems, Continued and The Inverse of a Matrix
, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing
More informationVector Spaces 4.4 Spanning and Independence
Vector Spaces 4.4 and Independence October 18 Goals Discuss two important basic concepts: Define linear combination of vectors. Define Span(S) of a set S of vectors. Define linear Independence of a set
More informationIEOR 4404 Homework #2 Intro OR: Deterministic Models February 14, 2011 Prof. Jay Sethuraman Page 1 of 5. Homework #2
IEOR 4404 Homework # Intro OR: Deterministic Models February 14, 011 Prof. Jay Sethuraman Page 1 of 5 Homework #.1 (a) What is the optimal solution of this problem? Let us consider that x 1, x and x 3
More informationLinear Programming. Solving LP Models Using MS Excel, 18
SUPPLEMENT TO CHAPTER SIX Linear Programming SUPPLEMENT OUTLINE Introduction, 2 Linear Programming Models, 2 Model Formulation, 4 Graphical Linear Programming, 5 Outline of Graphical Procedure, 5 Plotting
More information1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most
More information2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system
1. Systems of linear equations We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x 5y + 2z + w = 3. The key thing is that we don t multiply the variables
More informationMATH 551 - APPLIED MATRIX THEORY
MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points
More informationNonlinear Programming Methods.S2 Quadratic Programming
Nonlinear Programming Methods.S2 Quadratic Programming Operations Research Models and Methods Paul A. Jensen and Jonathan F. Bard A linearly constrained optimization problem with a quadratic objective
More informationCHAPTER 9. Integer Programming
CHAPTER 9 Integer Programming An integer linear program (ILP) is, by definition, a linear program with the additional constraint that all variables take integer values: (9.1) max c T x s t Ax b and x integral
More informationThese axioms must hold for all vectors ū, v, and w in V and all scalars c and d.
DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms
More informationAn Introduction to Linear Programming
An Introduction to Linear Programming Steven J. Miller March 31, 2007 Mathematics Department Brown University 151 Thayer Street Providence, RI 02912 Abstract We describe Linear Programming, an important
More information