
LP Geometry

We now turn briefly to a discussion of LP geometry, extending the geometric ideas developed earlier for 2-dimensional LPs to $n$ dimensions. In this regard, the key geometric idea is the notion of a hyperplane.

Definition 1. A hyperplane in $\mathbb{R}^n$ is any set of the form
$$H(a, \beta) = \{x : a^T x = \beta\},$$
where $a \in \mathbb{R}^n$, $\beta \in \mathbb{R}$, and $a \ne 0$.

We have the following important fact whose proof we leave as an exercise for the reader.

Fact 2. $H \subset \mathbb{R}^n$ is a hyperplane if and only if the set
$$H - x_0 = \{x - x_0 : x \in H\},$$
where $x_0 \in H$, is a subspace of $\mathbb{R}^n$ of dimension $n - 1$.

Every hyperplane $H(a, \beta)$ generates two closed half spaces:
$$H_+(a, \beta) = \{x \in \mathbb{R}^n : a^T x \ge \beta\} \qquad\text{and}\qquad H_-(a, \beta) = \{x \in \mathbb{R}^n : a^T x \le \beta\}.$$
Note that the constraint region for a linear program is the intersection of finitely many closed half spaces: setting $H_j = \{x : -e_j^T x \le 0\}$ for $j = 1, \dots, n$ and
$$H_{n+i} = \Big\{x : \sum_{j=1}^n a_{ij} x_j \le b_i\Big\} \qquad\text{for } i = 1, \dots, m,$$
we have
$$\{x : Ax \le b,\ 0 \le x\} = \bigcap_{i=1}^{n+m} H_i.$$
Any set that can be represented in this way is called a convex polyhedron.

Definition 3. Any subset of $\mathbb{R}^n$ that can be represented as the intersection of finitely many closed half spaces is called a convex polyhedron.

Therefore, a linear program is simply the problem of either maximizing or minimizing a linear function over a convex polyhedron. We now develop some of the underlying geometry of convex polyhedra.
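To make the half-space construction concrete, here is a small numerical sketch (an illustration added to these notes, assuming Python with NumPy is available) that tests membership in a polyhedron of the form $\{x : Ax \le b,\ 0 \le x\}$ by stacking all $n + m$ half-space normals exactly as above; the data are made up for illustration.

```python
import numpy as np

def in_polyhedron(A, b, x, tol=1e-9):
    """Test x in {x : Ax <= b, 0 <= x} by checking every closed half space.

    The region is the intersection of H_j = {x : -e_j^T x <= 0}, j = 1..n,
    with H_{n+i} = {x : a_i^T x <= b_i}, i = 1..m, as in the text.
    """
    A, b, x = np.asarray(A, float), np.asarray(b, float), np.asarray(x, float)
    T = np.vstack([-np.eye(len(x)), A])            # all half-space normals
    g = np.concatenate([np.zeros(len(x)), b])      # corresponding right sides
    return bool(np.all(T @ x <= g + tol))

# Illustrative data: two constraints in R^2.
A = np.array([[1.0, 2.0], [3.0, -1.0]])
b = np.array([4.0, 3.0])
print(in_polyhedron(A, b, [1.0, 1.0]))   # True: every half space is satisfied
print(in_polyhedron(A, b, [5.0, 0.0]))   # False: both rows of A are violated
```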

Fact 4. Given any two points $x$ and $y$ in $\mathbb{R}^n$, the line segment connecting them is given by
$$[x, y] = \{(1 - \lambda)x + \lambda y : 0 \le \lambda \le 1\}.$$

Definition 5. A subset $C \subset \mathbb{R}^n$ is said to be convex if $[x, y] \subset C$ whenever $x, y \in C$.

Fact 6. A convex polyhedron is a convex set.

We now consider the notion of vertex, or corner point, for convex polyhedra in $\mathbb{R}^2$. For this, consider the polyhedron $C \subset \mathbb{R}^2$ defined by the constraints

(1)
$$c_1 :\ x_1 - x_2 \le -2, \qquad c_2 :\ -3x_1 - 4x_2 \le 0, \qquad c_3 :\ x_1 + 3x_2 \le 6.$$

[Figure: the triangular region $C$ bounded by the lines $c_1$, $c_2$, $c_3$, with its vertices $v_1$, $v_2$, $v_3$ marked.]

The vertices are $v_1 = \left(-\tfrac{8}{7}, \tfrac{6}{7}\right)$, $v_2 = (0, 2)$, and $v_3 = \left(-\tfrac{24}{5}, \tfrac{18}{5}\right)$.
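These three vertices can be recovered numerically. The sketch below (an added illustration, again assuming NumPy) intersects each pair of bounding lines of (1) and keeps the intersection points that satisfy all three constraints; it reproduces $v_1$, $v_2$, and $v_3$.

```python
import numpy as np
from itertools import combinations

# The constraints (1), written as the rows of T x <= g.
T = np.array([[ 1.0, -1.0],   # c1:   x1 -  x2 <= -2
              [-3.0, -4.0],   # c2: -3x1 - 4x2 <=  0
              [ 1.0,  3.0]])  # c3:   x1 + 3x2 <=  6
g = np.array([-2.0, 0.0, 6.0])

vertices = []
for i, j in combinations(range(3), 2):
    M = T[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                        # parallel lines never meet
    x = np.linalg.solve(M, g[[i, j]])   # intersection of lines i and j
    if np.all(T @ x <= g + 1e-9):       # keep it only if it lies in C
        vertices.append(x)

print(np.round(vertices, 4))
# [[-1.1429  0.8571]    v1 = (-8/7, 6/7)
#  [ 0.      2.    ]    v2 = (0, 2)
#  [-4.8     3.6   ]]   v3 = (-24/5, 18/5)
```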

One of our goals in this section is to discover an intrinsic geometric property of these vertices that generalizes to $n$ dimensions and simultaneously captures our intuitive notion of what a vertex is. For this we examine our notion of convexity, which is based on line segments. Is there a way to use line segments to make precise our notion of vertex?

Consider any of the vertices of the polyhedron $C$ defined by (1). Note that any line segment in $C$ that contains one of these vertices must have that vertex as one of its end points. Vertices are the only points of $C$ with this property. In addition, this property easily generalizes to convex polyhedra in $\mathbb{R}^n$. This is the rigorous mathematical formulation of the notion of vertex that we seek. It is simple, has intuitive appeal, and yields the correct objects in dimensions 2 and 3.

Definition 7. Let $C$ be a convex polyhedron. We say that $x \in C$ is a vertex of $C$ if whenever $x \in [u, v]$ for some $u, v \in C$, it must be the case that either $x = u$ or $x = v$.

This definition says that a point is a vertex if and only if whenever that point is a member of a line segment contained in the polyhedron, it must be an end point of that line segment. In particular, this implies that vertices must lie in the boundary of the set, and the set must somehow make a corner at that point. Our next result gives an important and useful characterization of the vertices of convex polyhedra.

Theorem 8 (Fundamental Representation Theorem for Vertices). A point $\bar{x}$ in the convex polyhedron $C = \{x \in \mathbb{R}^n : Tx \le g\}$, where $T = (t_{ij}) \in \mathbb{R}^{s \times n}$ and $g \in \mathbb{R}^s$, is a vertex of this polyhedron if and only if there exists an index set $I \subset \{1, \dots, s\}$ such that $\bar{x}$ is the unique solution to the system of equations

(2)
$$\sum_{j=1}^n t_{ij} x_j = g_i, \qquad i \in I.$$

Moreover, if $\bar{x}$ is a vertex, then one can take $|I| = n$ in (2), where $|I|$ denotes the number of elements in $I$.

Proof: We first prove that if there exists an index set $I \subset \{1, \dots, s\}$ such that $x = \bar{x}$ is the unique solution to the system (2), then $\bar{x}$ is a vertex of the polyhedron $C$. We do this by proving the contrapositive; that is, we assume that $\bar{x} \in C$ is not a vertex and show that it cannot be the unique solution to any system of the form (2) with $I \subset \{1, 2, \dots, s\}$. If $\bar{x}$ is not a vertex of $C$, then there exist $u, v \in C$ with $u \ne v$ and $0 < \lambda < 1$ such that $\bar{x} = (1 - \lambda)u + \lambda v$. Let $A(x)$ denote the set of active indices at $x$:
$$A(x) = \Big\{ i : \sum_{j=1}^n t_{ij} x_j = g_i \Big\}.$$
For every $i \in A(\bar{x})$,

(3)
$$\sum_{j=1}^n t_{ij} \bar{x}_j = g_i, \qquad \sum_{j=1}^n t_{ij} u_j \le g_i, \qquad\text{and}\qquad \sum_{j=1}^n t_{ij} v_j \le g_i.$$

Therefore, for each $i \in A(\bar{x})$,
$$0 = g_i - \sum_{j=1}^n t_{ij} \bar{x}_j = (1-\lambda)g_i + \lambda g_i - \sum_{j=1}^n t_{ij}\big((1-\lambda)u_j + \lambda v_j\big) = (1 - \lambda)\Big[g_i - \sum_{j=1}^n t_{ij} u_j\Big] + \lambda\Big[g_i - \sum_{j=1}^n t_{ij} v_j\Big] \ge 0.$$
Hence
$$0 = (1 - \lambda)\Big[g_i - \sum_{j=1}^n t_{ij} u_j\Big] + \lambda\Big[g_i - \sum_{j=1}^n t_{ij} v_j\Big],$$
which implies that
$$g_i = \sum_{j=1}^n t_{ij} u_j \qquad\text{and}\qquad g_i = \sum_{j=1}^n t_{ij} v_j,$$
since both $g_i - \sum_{j=1}^n t_{ij} u_j$ and $g_i - \sum_{j=1}^n t_{ij} v_j$ are non-negative. That is, $A(\bar{x}) \subset A(u) \cap A(v)$. Now if $I \subset \{1, 2, \dots, s\}$ is such that (2) holds at $x = \bar{x}$, then $I \subset A(\bar{x})$. But then (2) must also hold for $x = u$ and $x = v$, since $A(\bar{x}) \subset A(u) \cap A(v)$. Therefore, $\bar{x}$ is not the unique solution to (2) for any choice of $I \subset \{1, 2, \dots, s\}$.

Let $\bar{x} \in C$. We now show that if $\bar{x}$ is a vertex of $C$, then there exists an index set $I \subset \{1, \dots, s\}$ such that $x = \bar{x}$ is the unique solution to the system (2). Again we establish this by contraposition; that is, we assume that $\bar{x} \in C$ is such that, for every index set $I \subset \{1, 2, \dots, s\}$ for which $x = \bar{x}$ satisfies the system (2), there exists $w \in \mathbb{R}^n$ with $w \ne \bar{x}$ such that (2) also holds with $x = w$, and we show that $\bar{x}$ then cannot be a vertex of $C$. To this end, take $I = A(\bar{x})$, let $w \in \mathbb{R}^n$ with $w \ne \bar{x}$ be such that (2) holds with $x = w$ and $I = A(\bar{x})$, and set $u = w - \bar{x}$. Since $\bar{x} \in C$, we know that
$$\sum_{j=1}^n t_{ij} \bar{x}_j < g_i \qquad \text{for all } i \in \{1, 2, \dots, s\} \setminus A(\bar{x}).$$
Hence, by continuity, there exists $\tau \in (0, 1]$ such that

(4)
$$\sum_{j=1}^n t_{ij}(\bar{x}_j + t u_j) < g_i \qquad \text{for all } i \in \{1, 2, \dots, s\} \setminus A(\bar{x}) \text{ and } |t| \le \tau.$$

Also note that, for $i \in A(\bar{x})$,
$$\sum_{j=1}^n t_{ij}(\bar{x}_j \pm \tau u_j) = \sum_{j=1}^n t_{ij}\bar{x}_j \pm \tau\Big(\sum_{j=1}^n t_{ij} w_j - \sum_{j=1}^n t_{ij}\bar{x}_j\Big) = g_i \pm \tau(g_i - g_i) = g_i.$$

Combining these relations with (4), we find that $\bar{x} + \tau u$ and $\bar{x} - \tau u$ are both in $C$. Since
$$\bar{x} = \tfrac{1}{2}(\bar{x} + \tau u) + \tfrac{1}{2}(\bar{x} - \tau u)$$
and $\tau u \ne 0$, $\bar{x}$ cannot be a vertex of $C$.

It remains to prove the final statement of the theorem. Let $\bar{x}$ be a vertex of $C$ and let $I \subset \{1, 2, \dots, s\}$ be such that $\bar{x}$ is the unique solution to the system (2). First note that since the system (2) is consistent and its solution unique, we must have $|I| \ge n$; otherwise, there are infinitely many solutions, since the system has a non-trivial null space when $n > |I|$. So we may as well assume that $|I| > n$. Let $J \subset I$ be such that the vectors $t_i = (t_{i1}, t_{i2}, \dots, t_{in})^T$, $i \in J$, form a maximally linearly independent subset of the set of vectors $t_i$, $i \in I$; that is, the vectors $t_i$, $i \in J$, form a basis for the subspace spanned by the vectors $t_i$, $i \in I$. Clearly, $|J| \le n$, since these vectors reside in $\mathbb{R}^n$ and are linearly independent. Moreover, each of the vectors $t_r$ for $r \in I \setminus J$ can be written as a linear combination of the vectors $t_i$, $i \in J$:
$$t_r = \sum_{i \in J} \lambda_{ri} t_i, \qquad r \in I \setminus J.$$
Therefore,
$$g_r = t_r^T \bar{x} = \sum_{i \in J} \lambda_{ri} t_i^T \bar{x} = \sum_{i \in J} \lambda_{ri} g_i, \qquad r \in I \setminus J,$$
which implies that any solution to the system

(5)
$$t_i^T x = g_i, \qquad i \in J,$$

is necessarily a solution to the larger system (2). But then the smaller system (5) must have $\bar{x}$ as its unique solution; otherwise, the system (2) would have more than one solution. Finally, since the solution to (5) is unique and $|J| \le n$, we must in fact have $|J| = n$, which completes the proof.

We now apply this result to obtain a characterization of the vertices of the constraint region of an LP in standard form.

Corollary 9. A point $\bar{x}$ in the convex polyhedron described by the system of inequalities $Ax \le b$ and $0 \le x$, where $A = (a_{ij}) \in \mathbb{R}^{m \times n}$, is a vertex of this polyhedron if and only if there exist index sets $I \subset \{1, \dots, m\}$ and $J \subset \{1, \dots, n\}$ with $|I| + |J| = n$ such that $\bar{x}$ is the unique solution to the system of equations

(6)
$$\sum_{j=1}^n a_{ij} x_j = b_i, \quad i \in I, \qquad\text{and}\qquad x_j = 0, \quad j \in J.$$

Proof: Take
$$T = \begin{bmatrix} A \\ -I \end{bmatrix} \qquad\text{and}\qquad g = \begin{bmatrix} b \\ 0 \end{bmatrix}$$
in the previous theorem.

Recall that the symbols $|I|$ and $|J|$ denote the number of elements in the sets $I$ and $J$, respectively. The constraint hyperplanes associated with these indices are necessarily a subset of the set of active hyperplanes at the solution to (6).

Theorem 8 is an elementary yet powerful result in the study of convex polyhedra. We make strong use of it in our study of the geometric properties of the simplex algorithm. As a first observation, recall from Math 308 that the coefficient matrix for the system (6) is necessarily non-singular if this $n \times n$ system has a unique solution. How do we interpret this system geometrically, and why does Theorem 8 make intuitive sense? To answer these questions, let us return to the convex polyhedron $C$ defined by (1). In this case, the dimension $n$ is 2. Observe that each vertex is located at the intersection of precisely two of the bounding constraint lines. Thus, each vertex can be represented as the unique solution to a system of equations of the form
$$\begin{aligned} a_{11} x_1 + a_{12} x_2 &= b_1 \\ a_{21} x_1 + a_{22} x_2 &= b_2, \end{aligned}$$
where the coefficient matrix
$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$$
is non-singular. For the set $C$ above, we have the following:

(a) The vertex $v_1 = \left(-\tfrac{8}{7}, \tfrac{6}{7}\right)$ is given as the solution to the system
$$x_1 - x_2 = -2, \qquad -3x_1 - 4x_2 = 0,$$

(b) the vertex $v_2 = (0, 2)$ is given as the solution to the system
$$x_1 - x_2 = -2, \qquad x_1 + 3x_2 = 6,$$

and (c) the vertex $v_3 = \left(-\tfrac{24}{5}, \tfrac{18}{5}\right)$ is given as the solution to the system
$$-3x_1 - 4x_2 = 0, \qquad x_1 + 3x_2 = 6.$$
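Corollary 9 translates directly into a numerical vertex test: gather the active constraints at a feasible point and ask whether their normals span $\mathbb{R}^n$, which is exactly the condition for the active system (6) to determine the point uniquely. A sketch (NumPy assumed; the usage lines borrow the constraint region of the LP (9) appearing in the next illustration):

```python
import numpy as np

def is_vertex(A, b, x, tol=1e-9):
    """Corollary 9 as a test: a feasible x for Ax <= b, 0 <= x is a vertex
    iff the normals of its active hyperplanes have rank n, i.e. iff x is
    the unique solution of the active system (6)."""
    A, b, x = np.asarray(A, float), np.asarray(b, float), np.asarray(x, float)
    I = np.where(np.abs(A @ x - b) <= tol)[0]    # active rows of Ax <= b
    J = np.where(np.abs(x) <= tol)[0]            # active bounds x_j = 0
    M = np.vstack([A[I], np.eye(len(x))[J]])     # coefficient matrix of (6)
    return np.linalg.matrix_rank(M, tol) == len(x)

A = np.array([[-2.0, 1.0], [2.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 4.0, 3.0, 4.0])
print(is_vertex(A, b, [3.0, 4.0]))   # True: x1 <= 3 and x2 <= 4 are active
print(is_vertex(A, b, [1.0, 1.0]))   # False: no active constraints at all
```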

7 Theorem. indicates that any subsystem of the form (.9) for which the associated coefficient matrix is non-singular, has as its solution a vertex of the polyhedron (.0) Ax b, 0 x if this solution is in the polyhedron. We now connect these ideas to the operation of the simplex algorithm. The system (.0) describes the constraint region for an LP in standard form. It can be expressed componentwise by a ij x j b i i =,...,m 0 x j j =,...,n. The associated slack variables are defined by the equations (.) x n+i = b i a ij x j i =,...,m. Let x = ( x,..., x n+m ) be any solution to the system (.) and set x = ( x,..., x n ) ( x gives values for the decision variables associated with the underlying LP). Note that if for some j J {,...,n} we have x j = 0, then the hyperplane H j = {x R n : e T j x = 0} is active at x, i.e., x H j. Similarly, if for some i I {,,..., m} we have x n+i = 0, then the hyperplane H n+i = {x R n : a ij x j = b i } is active at x, i.e., x H n+i. Next suppose that x is a basic feasible solution for the LP (P) maxc T x subject to Ax b, 0 x. Then it must be the case that n of the components x k, k {,,..., n + m} are assigned to the value zero since every dictionary has m basic and n non-basic variables. That is, every basic feasible solution is in the polyhedron defined by (.0) and is the unique solution to a system of the form (.9). But then, by Theorem., basic feasible solutions correspond precisely to the vertices of the polyhedron defining the constraint region for the LP P!! This amazing geometric fact implies that the simplex algorithm proceeds by moving from vertex to adjacent vertex of the polyhedron given by (.0). This is the essential underlying geometry of the simplex algorithm for linear programming! 7

By way of illustration, let us observe this behavior for the LP

(9)
$$\begin{aligned} \text{maximize}\quad & x_1 + 4x_2 \\ \text{subject to}\quad & -2x_1 + x_2 \le 2 \\ & 2x_1 - x_2 \le 4 \\ & 0 \le x_1 \le 3, \quad 0 \le x_2 \le 4. \end{aligned}$$

The constraint region for this LP is graphed below.

[Figure: the constraint region of (9), a hexagon with vertices $V_1 = (0, 0)$, $V_2 = (0, 2)$, $V_3 = (1, 4)$, $V_4 = (3, 4)$, $V_5 = (3, 2)$, and $V_6 = (2, 0)$.]

The simplex algorithm yields the following pivots (the intermediate tableaus are not reproduced here):
$$\text{vertex } V_1 = (0, 0) \longrightarrow \text{vertex } V_2 = (0, 2) \longrightarrow \text{vertex } V_3 = (1, 4) \longrightarrow \text{vertex } V_4 = (3, 4).$$
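As a sanity check, (9) can also be handed to an off-the-shelf solver. A sketch assuming SciPy is available (`linprog` minimizes, so we negate the objective); it confirms that the optimal vertex is $V_4 = (3, 4)$:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 4.0])                       # objective of (9)
A_ub = np.array([[-2.0, 1.0], [2.0, -1.0]])    # the two general constraints
b_ub = np.array([2.0, 4.0])
res = linprog(-c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 3), (0, 4)], method="highs")
print(res.x, -res.fun)   # [3. 4.] 19.0  -> the vertex V4 = (3, 4)
```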

The Geometry of Degeneracy

We now give a geometric interpretation of degeneracy in linear programming. Recall that a basic feasible solution, or vertex, is said to be degenerate if one or more of the basic variables is assigned the value zero. In the notation of (8), this implies that more than $n$ of the hyperplanes $H_k$, $k = 1, 2, \dots, n+m$, are active at this vertex. By way of illustration, suppose we add the constraints
$$-x_1 + x_2 \le 3 \qquad\text{and}\qquad x_1 + x_2 \le 7$$
to the system of constraints in the LP (9). The picture of the constraint region now looks as follows:

[Figure: the same hexagon with vertices $V_1, \dots, V_6$; the two added constraint lines touch the region only at $V_3 = (1, 4)$ and $V_4 = (3, 4)$.]

Notice that the added constraints are redundant: they do not change the constraint region, but they are active at the vertices $V_3$ and $V_4$. Therefore, as we pivot, we should observe that the tableaus associated with these vertices are degenerate.

The pivots now proceed as follows (again the tableaus are not reproduced here):
$$\text{vertex } V_1 = (0, 0) \longrightarrow \text{vertex } V_2 = (0, 2) \longrightarrow \text{vertex } V_3 = (1, 4)\ (\text{degenerate}) \longrightarrow \text{vertex } V_3 = (1, 4)\ (\text{degenerate}) \longrightarrow \text{vertex } V_4 = (3, 4)\ (\text{optimal, degenerate}).$$

In this way we see that a degenerate pivot arises when we represent the same vertex as the intersection point of a different subset of $n$ active hyperplanes. If the simplex algorithm cycles, it is cycling between different representations of the same vertex. In the example given above, the third pivot is a degenerate pivot.
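Degeneracy is easy to detect numerically: a vertex is degenerate precisely when more than $n$ of the hyperplanes $H_k$ are active there. A sketch (NumPy assumed) for the augmented constraint list, where the returned indices $k$ match the indices of the variables $x_k$ that vanish in the dictionaries:

```python
import numpy as np

def active_hyperplanes(A, b, x, tol=1e-9):
    """Indices k of the active hyperplanes H_k at x: k = 1..n for x_j = 0,
    and k = n + i when row i of Ax <= b is tight. More than n active
    hyperplanes means the vertex is degenerate."""
    x, n = np.asarray(x, float), len(x)
    act = [j + 1 for j in range(n) if abs(x[j]) <= tol]
    act += [n + i + 1 for i, (a, bi) in enumerate(zip(A, b))
            if abs(a @ x - bi) <= tol]
    return act

# Rows 3 and 4 are the two added (redundant) constraints.
A = np.array([[-2.0, 1.0], [2.0, -1.0], [-1.0, 1.0],
              [1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 4.0, 3.0, 7.0, 3.0, 4.0])
print(active_hyperplanes(A, b, [1.0, 4.0]))  # [3, 5, 8]: three active -> degenerate
print(active_hyperplanes(A, b, [3.0, 4.0]))  # [6, 7, 8]: three active -> degenerate
```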

In the third tableau, we represent the vertex $V_3 = (1, 4)$ as the intersection point of the hyperplanes
$$-2x_1 + x_2 = 2 \quad (\text{since } x_3 = 0) \qquad\text{and}\qquad -x_1 + x_2 = 3 \quad (\text{since } x_5 = 0).$$
The third pivot brings us to the 4th tableau, where the vertex $V_3 = (1, 4)$ is now represented as the intersection of the hyperplanes
$$-x_1 + x_2 = 3 \quad (\text{since } x_5 = 0) \qquad\text{and}\qquad x_2 = 4 \quad (\text{since } x_8 = 0).$$
Observe that the final tableau is both optimal and degenerate. Just for the fun of it, let us try pivoting on the only negative entry in the 5th row of this tableau (we choose the 5th row since this is the row that exhibits the degeneracy). Pivoting, we obtain another tableau (not reproduced here) that is also optimal, but it provides us with a different set of optimal dual variables. In general, a degenerate optimal tableau indicates that the dual problem has infinitely many optimal solutions.

Fact: If an LP has an optimal tableau that is degenerate, then the dual LP has infinitely many optimal solutions.

We will arrive at an understanding of why this fact is true after we examine the geometry of duality.

The Geometry of Duality

Consider the linear program

(10)
$$\begin{aligned} \text{maximize}\quad & 3x_1 + x_2 \\ \text{subject to}\quad & -x_1 + 2x_2 \le 4 \\ & 3x_1 - x_2 \le 3 \\ & 0 \le x_1, x_2. \end{aligned}$$

This LP is solved graphically below.

[Figure: the feasible region of (10); at the optimal vertex, the outer normals $n_1 = (-1, 2)^T$ and $n_2 = (3, -1)^T$ of the two active constraints are drawn, with the objective normal $c = (3, 1)^T$ lying between them.]

The solution is $\bar{x} = (2, 3)$. In the picture, the vector $n_1 = (-1, 2)^T$ is the normal to the hyperplane $-x_1 + 2x_2 = 4$, the vector $n_2 = (3, -1)^T$ is the normal to the hyperplane $3x_1 - x_2 = 3$, and the vector $c = (3, 1)^T$ is the objective normal. Geometrically, the vector $c$ lies between the vectors $n_1$ and $n_2$. That is to say, the vector $c$ can be represented as a non-negative linear combination of $n_1$ and $n_2$: there exist $y_1 \ge 0$ and $y_2 \ge 0$ such that
$$c = y_1 n_1 + y_2 n_2,$$
or equivalently,
$$\begin{pmatrix} 3 \\ 1 \end{pmatrix} = y_1 \begin{pmatrix} -1 \\ 2 \end{pmatrix} + y_2 \begin{pmatrix} 3 \\ -1 \end{pmatrix}, \qquad\text{i.e.,}\qquad \begin{bmatrix} -1 & 3 \\ 2 & -1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 3 \\ 1 \end{bmatrix}.$$
Solving for $(y_1, y_2)$ we have

$$y_1 = \tfrac{6}{5}, \qquad y_2 = \tfrac{7}{5}.$$
I claim that the vector $\bar{y} = \left(\tfrac{6}{5}, \tfrac{7}{5}\right)$ is the optimal solution to the dual! Indeed, this result follows from the Complementary Slackness Theorem, and it gives another way to recover the solution to the dual from the solution to the primal, or, equivalently, to check whether a point that is feasible for the primal is in fact optimal for the primal.

Theorem 10 (Geometric Duality Theorem). Consider the LP
$$\text{(P)} \qquad \text{maximize } c^T x \quad\text{subject to}\quad Ax \le b,\ 0 \le x,$$
where $A$ is an $m \times n$ matrix. Given a vector $\bar{x}$ that is feasible for (P), define
$$Z(\bar{x}) = \{j \in \{1, 2, \dots, n\} : \bar{x}_j = 0\} \qquad\text{and}\qquad E(\bar{x}) = \Big\{i \in \{1, \dots, m\} : \sum_{j=1}^n a_{ij} \bar{x}_j = b_i\Big\}.$$
The index sets $Z(\bar{x})$ and $E(\bar{x})$ are the active indices at $\bar{x}$ and correspond to the active hyperplanes at $\bar{x}$. Then $\bar{x}$ solves (P) if and only if there exist non-negative scalars $r_j$, $j \in Z(\bar{x})$, and $y_i$, $i \in E(\bar{x})$, such that

(11)
$$c = -\sum_{j \in Z(\bar{x})} r_j e_j + \sum_{i \in E(\bar{x})} y_i a_i,$$

where for each $i = 1, \dots, m$, $a_i = (a_{i1}, a_{i2}, \dots, a_{in})^T$ is the $i$th column of the matrix $A^T$, and for each $j = 1, \dots, n$, $e_j$ is the $j$th unit coordinate vector (so $-e_j$ and $a_i$ are the outer normals to the active hyperplanes at $\bar{x}$). Moreover, the vector $\bar{y} \in \mathbb{R}^m$ given by

(12)
$$\bar{y}_i = \begin{cases} y_i & \text{for } i \in E(\bar{x}) \\ 0 & \text{otherwise} \end{cases}$$

solves the dual problem
$$\text{(D)} \qquad \text{minimize } b^T y \quad\text{subject to}\quad A^T y \ge c,\ 0 \le y.$$

Proof: Let us first suppose that $\bar{x}$ solves (P). Then there is a $\bar{y} \in \mathbb{R}^m$ solving the dual (D) with $c^T \bar{x} = \bar{y}^T A \bar{x} = b^T \bar{y}$ by the Strong Duality Theorem. We need only show that there exist $r_j \ge 0$, $j \in Z(\bar{x})$, such that (11) and (12) hold. The Complementary Slackness Theorem implies that

(13)
$$\bar{y}_i = 0 \qquad \text{for } i \in \{1, 2, \dots, m\} \setminus E(\bar{x})$$

and

(14)
$$\sum_{i=1}^m \bar{y}_i a_{ij} = c_j \qquad \text{for } j \in \{1, \dots, n\} \setminus Z(\bar{x}).$$

Note that (13) implies that $\bar{y}$ satisfies (12). Define $r = A^T \bar{y} - c$. Since $\bar{y}$ is dual feasible, we have both $r \ge 0$ and $\bar{y} \ge 0$. Moreover, by (14), $r_j = 0$ for $j \in \{1, \dots, n\} \setminus Z(\bar{x})$, while
$$r_j = \sum_{i=1}^m \bar{y}_i a_{ij} - c_j \ge 0 \qquad \text{for } j \in Z(\bar{x}),$$
or equivalently,

(15)
$$c_j = -r_j + \sum_{i=1}^m \bar{y}_i a_{ij} \qquad \text{for } j \in Z(\bar{x}).$$

Combining (15) with (14) and (13) gives
$$c = -\sum_{j \in Z(\bar{x})} r_j e_j + A^T \bar{y} = -\sum_{j \in Z(\bar{x})} r_j e_j + \sum_{i \in E(\bar{x})} \bar{y}_i a_i,$$
so that (11) and (12) are satisfied with $\bar{y}$ solving (D).

Next suppose that $\bar{x}$ is feasible for (P) with $r_j$, $j \in Z(\bar{x})$, and $\bar{y}_i$, $i \in E(\bar{x})$, non-negative and satisfying (11). We must show that $\bar{x}$ solves (P) and $\bar{y}$ solves (D), where $\bar{y} \in \mathbb{R}^m$ is the vector whose components are given by the $\bar{y}_i$'s for $i \in E(\bar{x})$ and by (12) otherwise. The non-negativity of the $r_j$'s and (11) imply that
$$A^T \bar{y} = \sum_{i \in E(\bar{x})} \bar{y}_i a_i \ \ge\ -\sum_{j \in Z(\bar{x})} r_j e_j + \sum_{i \in E(\bar{x})} \bar{y}_i a_i = c,$$
so that $\bar{y}$ is feasible for (D). Moreover,
$$c^T \bar{x} = -\sum_{j \in Z(\bar{x})} r_j e_j^T \bar{x} + \sum_{i \in E(\bar{x})} \bar{y}_i a_i^T \bar{x} = \bar{y}^T A \bar{x} = \bar{y}^T b,$$
where the second equality uses $e_j^T \bar{x} = \bar{x}_j = 0$ for $j \in Z(\bar{x})$, and the final equality follows from the definition of the vector $\bar{y}$ and the index set $E(\bar{x})$. Hence, by the Weak Duality Theorem, $\bar{x}$ solves (P) and $\bar{y}$ solves (D), as required.
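On the example (10), the theorem can be checked in a few lines (NumPy assumed). Both constraints are active at $\bar{x} = (2, 3)$ and both components of $\bar{x}$ are positive, so $Z(\bar{x})$ is empty and (11) reduces to $c = y_1 n_1 + y_2 n_2$:

```python
import numpy as np

n1, n2 = np.array([-1.0, 2.0]), np.array([3.0, -1.0])   # active outer normals
c = np.array([3.0, 1.0])                                # objective of (10)
y = np.linalg.solve(np.column_stack([n1, n2]), c)       # c = y1*n1 + y2*n2
print(y)                                 # [1.2 1.4] = (6/5, 7/5), both >= 0
b, xbar = np.array([4.0, 3.0]), np.array([2.0, 3.0])
print(b @ y, c @ xbar)                   # 9.0 9.0 -> equal objective values
```

Since $\bar{y} = (6/5, 7/5) \ge 0$ and $b^T \bar{y} = c^T \bar{x}$, weak duality confirms that $\bar{x}$ and $\bar{y}$ are optimal for (10) and its dual.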

Remark: As is apparent from the proof, the Geometric Duality Theorem is nearly equivalent to the Complementary Slackness Theorem, even though it provides a superficially different test for optimality.

We now illustrate how to apply this result with an example. Consider the LP

(16)
$$\begin{aligned} \text{maximize}\quad & 2x_1 + x_2 - x_3 + 2x_4 \\ \text{subject to}\quad & x_1 + x_2 - x_3 + 4x_4 \le 1 \\ & 4x_2 - x_3 + x_4 \le 2 \\ & x_1 + 2x_2 - x_4 \le 2 \\ & x_1 - x_2 + x_3 - x_4 \le 4 \\ & 0 \le x_1, x_2, x_3, x_4. \end{aligned}$$

Does the vector $\bar{x} = (2, 0, 1, 0)^T$ solve this LP? If it does, then according to Theorem 10 we must be able to construct the solution to the dual of (16) by representing the objective vector $c = (2, 1, -1, 2)^T$ as a non-negative linear combination of the outer normals to the active hyperplanes at $\bar{x}$. The active hyperplanes are
$$x_1 + x_2 - x_3 + 4x_4 = 1, \qquad x_1 + 2x_2 - x_4 = 2, \qquad x_2 = 0, \qquad x_4 = 0.$$
This means that $y_2 = y_4 = 0$, and $y_1$ and $y_3$ are obtained by solving
$$\begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & 2 & -1 & 0 \\ -1 & 0 & 0 & 0 \\ 4 & -1 & 0 & -1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_3 \\ r_2 \\ r_4 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ -1 \\ 2 \end{bmatrix}.$$
Row reducing, we get $y_1 = 1$, $y_3 = 1$, $r_2 = 2$, and $r_4 = 1$, all non-negative. We now check that the vector $\bar{y} = (1, 0, 1, 0)$ does indeed solve the dual of (16):

(17)
$$\begin{aligned} \text{minimize}\quad & y_1 + 2y_2 + 2y_3 + 4y_4 \\ \text{subject to}\quad & y_1 + y_3 + y_4 \ge 2 \\ & y_1 + 4y_2 + 2y_3 - y_4 \ge 1 \\ & -y_1 - y_2 + y_4 \ge -1 \\ & 4y_1 + y_2 - y_3 - y_4 \ge 2 \\ & 0 \le y_1, y_2, y_3, y_4. \end{aligned}$$
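The representation computed above is readily verified numerically (NumPy assumed): solve for $(y_1, y_3, r_2, r_4)$ using the active outer normals $a_1$, $a_3$, $-e_2$, $-e_4$ as columns, then assemble $\bar{y}$ as in (12).

```python
import numpy as np

a1 = np.array([1.0, 1.0, -1.0, 4.0])   # normal of the first active row
a3 = np.array([1.0, 2.0, 0.0, -1.0])   # normal of the third active row
e2, e4 = np.eye(4)[1], np.eye(4)[3]    # unit vectors for x2 = 0 and x4 = 0
c = np.array([2.0, 1.0, -1.0, 2.0])

M = np.column_stack([a1, a3, -e2, -e4])
y1, y3, r2, r4 = np.linalg.solve(M, c)   # c = y1*a1 + y3*a3 - r2*e2 - r4*e4
print(y1, y3, r2, r4)                    # 1.0 1.0 2.0 1.0, all non-negative
ybar = np.array([y1, 0.0, y3, 0.0])      # assemble ybar as in (12)
print(ybar)                              # [1. 0. 1. 0.]
```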

Clearly, $\bar{y}$ is feasible for (17). In addition, $b^T \bar{y} = 3 = c^T \bar{x}$. Therefore, $\bar{y}$ solves (17) and $\bar{x}$ solves (16) by the Weak Duality Theorem.
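Finally, the whole example can be cross-checked with a solver (SciPy assumed): solving (16) and its dual (17) should produce matching optimal values.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 1.0, -1.0, 2.0])
A = np.array([[1.0, 1.0, -1.0, 4.0],
              [0.0, 4.0, -1.0, 1.0],
              [1.0, 2.0, 0.0, -1.0],
              [1.0, -1.0, 1.0, -1.0]])
b = np.array([1.0, 2.0, 2.0, 4.0])

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 4, method="highs")
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 4, method="highs")
print(-primal.fun, dual.fun)   # 3.0 3.0 -> b^T ybar = c^T xbar = 3
```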
