THEORY OF SIMPLEX METHOD

1. Mathematical Programming Problems

A mathematical programming problem is an optimization problem of finding the values of the unknown variables x_1, x_2, ..., x_n that maximize (or minimize) f(x_1, x_2, ..., x_n) subject to

    g_i(x_1, x_2, ..., x_n) (≤, =, ≥) b_i,    i = 1, 2, ..., m,    (1)

where the b_i are real constants and the functions f and g_i are real-valued. The function f(x_1, x_2, ..., x_n) is called the objective function of the problem (1), while the functions g_i(x_1, x_2, ..., x_n) are called the constraints of (1). In vector notation, (1) can be written as

    max (or min) f(x^T)    subject to    g_i(x^T) (≤, =, ≥) b_i,    i = 1, 2, ..., m,

where x^T = (x_1, x_2, ..., x_n) is the solution vector.

Example 1. Consider the following problem:

    max f(x, y) = xy    subject to    x^2 + y^2 = 1.

A classical method for solving this problem is the Lagrange multiplier method. Let

    L(x, y, λ) = xy - λ(x^2 + y^2 - 1).

Then, differentiating L with respect to x, y and λ and setting the partial derivatives to zero, we get

    ∂L/∂x = y - 2λx = 0,
    ∂L/∂y = x - 2λy = 0,
    ∂L/∂λ = -(x^2 + y^2 - 1) = 0.

The third equation is redundant here. The first two equations give

    y/x = 2λ = x/y,

which gives x^2 = y^2, or x = ±y. We find that the extrema of xy are attained at x = ±y. Since x^2 + y^2 = 1, we then have x = ±1/√2 and y = ±1/√2. It is then easy to verify that the maximum occurs at x = y = 1/√2 and at x = y = -1/√2, giving f(x, y) = 1/2.
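The stationary points found above are easy to confirm numerically. The following sketch (an illustration added here, not part of the original notes) parametrizes the constraint circle as (x, y) = (cos t, sin t) and scans the objective; it recovers the maximum value 1/2 at x = y = ±1/√2.

```python
import numpy as np

# Example 1, checked numerically: maximize f(x, y) = x*y on x^2 + y^2 = 1.
# On the circle, f = cos(t)*sin(t) = sin(2t)/2, so we simply scan t.
t = np.linspace(0.0, 2.0 * np.pi, 100001)
f = np.cos(t) * np.sin(t)

k = np.argmax(f)
print("max f =", f[k])                                   # ~0.5
print("argmax (x, y) =", (np.cos(t[k]), np.sin(t[k])))   # ~(0.707, 0.707), i.e. (1/sqrt 2, 1/sqrt 2)
```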

A linear programming problem (LPP) is a mathematical programming problem having a linear objective function and linear constraints. Thus the general form of an LP problem is

    max (or min)    z = c_1 x_1 + c_2 x_2 + ... + c_n x_n
    subject to      a_11 x_1 + ... + a_1n x_n (≤, =, ≥) b_1,
                    ...
                    a_m1 x_1 + ... + a_mn x_n (≤, =, ≥) b_m.    (2)

Here the constants a_ij, b_i and c_j are assumed to be real. The constants c_j are called the cost or price coefficients of the unknowns x_j, and the vector (c_1, ..., c_n)^T is called the cost or price vector.

If in problem (2) all the constraints are inequalities with the ≤ sign and the unknowns x_i are restricted to nonnegative values, then the form is called canonical. Thus the canonical form of an LP problem can be written as

    max (or min)    z = c_1 x_1 + c_2 x_2 + ... + c_n x_n
    subject to      a_11 x_1 + ... + a_1n x_n ≤ b_1,
                    ...
                    a_m1 x_1 + ... + a_mn x_n ≤ b_m,
                    x_i ≥ 0,    i = 1, 2, ..., n.    (3)

If all b_i ≥ 0, then the form is called a feasible canonical form.

Before the simplex method can be applied to an LPP, we must first convert it into what is known as the standard form:

    max         z = c_1 x_1 + ... + c_n x_n
    subject to  a_i1 x_1 + ... + a_in x_n = b_i,    i = 1, 2, ..., m,
                x_j ≥ 0,    j = 1, 2, ..., n.    (4)

Here the b_i are assumed to be nonnegative. We note that the number of variables may or may not be the same as before.
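In practice, an LP in any of these forms can be handed to a solver directly. The sketch below is an illustration with invented data (not from the notes): it solves a small problem in canonical form (3) with scipy.optimize.linprog. Note that linprog minimizes, so the maximization is handled by negating the cost vector, which is exactly procedure (i) of the list that follows.

```python
from scipy.optimize import linprog

# Illustrative LP in canonical form (3); the data are invented for this sketch:
#   max z = 3 x1 + 2 x2
#   s.t.  x1 +   x2 <= 4
#         x1 + 3 x2 <= 6,    x1, x2 >= 0.
c = [-3.0, -2.0]                      # linprog minimizes, so we minimize -z
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal x =", res.x)           # [4. 0.]
print("optimal z =", -res.fun)        # 12.0
```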

One can always change an LP problem into the canonical form or into the standard form by the following procedures.

(i) If the LP as originally formulated calls for the minimization of the functional z = c_1 x_1 + c_2 x_2 + ... + c_n x_n, we can instead substitute the equivalent objective function

    maximize    z' = (-c_1) x_1 + (-c_2) x_2 + ... + (-c_n) x_n = -z.

(ii) If any variable x_j is free, i.e. not restricted to non-negative values, then it can be replaced by x_j = x_j^+ - x_j^-, where x_j^+ = max(0, x_j) and x_j^- = max(0, -x_j) are now non-negative. We substitute x_j^+ - x_j^- for x_j in the constraints and in the objective function of (2). The problem then has (n+1) nonnegative variables x_1, ..., x_j^+, x_j^-, ..., x_n.

(iii) If b_i ≤ 0, we can multiply the i-th constraint by -1.

(iv) An equality of the form Σ_j a_ij x_j = b_i can be replaced by the pair of inequalities Σ_j a_ij x_j ≤ b_i and Σ_j (-a_ij) x_j ≤ (-b_i).

(v) Finally, any inequality constraint in the original formulation can be converted to an equation by the addition of non-negative variables called slack and surplus variables. For example, the constraint a_i1 x_1 + ... + a_ip x_p ≤ b_i can be written as

    a_i1 x_1 + ... + a_ip x_p + x_{p+1} = b_i,

where x_{p+1} ≥ 0 is a slack variable. Similarly, the constraint a_j1 x_1 + ... + a_jp x_p ≥ b_j can be written as

    a_j1 x_1 + ... + a_jp x_p - x_{p+1} = b_j,

where x_{p+1} ≥ 0 is a surplus variable. The new variables are assigned zero cost coefficients in the objective function, i.e. c_{p+i} = 0. (These conversions are illustrated numerically in the sketch at the end of this section.)

In matrix notation, the standard form of an LPP is

    max z = c^T x    subject to    Ax = b    (5)
    and    x ≥ 0,    (6)

where A is m × n, b is m × 1, x is n × 1 and rank(A) = m.

Definition. A feasible solution (FS) to an LPP is a vector x which satisfies constraints (5) and (6). The set of all feasible solutions is called the feasible region. A feasible solution to an LPP is said to be an optimal solution if it maximizes the objective function of the LPP. A feasible solution to an LPP is said to be a basic feasible solution (BFS) if it is a basic solution with respect to the linear system (5). If a basic feasible solution is non-degenerate, then we call it a non-degenerate basic feasible solution.

We note that the optimal solution may not be unique, but the optimum value of the problem is unique. For an LPP in feasible canonical form, the zero vector is always a feasible solution. Hence the feasible region is always non-empty.
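As a concrete illustration of procedures (i)-(v), the sketch below converts a small invented problem (a minimization with one free variable, one ≥ constraint and one ≤ constraint) into the standard form (4) and verifies one feasible point. The variable names and data are my own, not taken from the notes.

```python
import numpy as np

# Convert (illustrative data):  min z = x1 - x2
#   s.t.  x1 + x2 >= 2,   x1 - x2 <= 4,   x1 >= 0, x2 free,
# into standard form (4).
#   (i)  min z    ->  max -z, i.e. negate the costs;
#   (ii) free x2  ->  x2 = x2p - x2m with x2p, x2m >= 0;
#   (v)  surplus s1 for the >= row, slack s2 for the <= row (zero costs).
# Variable order: [x1, x2p, x2m, s1, s2]
c = np.array([-1.0, 1.0, -1.0, 0.0, 0.0])
A = np.array([[1.0,  1.0, -1.0, -1.0, 0.0],    # x1 + x2p - x2m - s1 = 2
              [1.0, -1.0,  1.0,  0.0, 1.0]])   # x1 - x2p + x2m + s2 = 4
b = np.array([2.0, 4.0])                       # already >= 0, so (iii) is not needed

# Check a feasible point: x1 = 2, x2 = 0 (so x2p = x2m = 0), s1 = 0, s2 = 2.
x = np.array([2.0, 0.0, 0.0, 0.0, 2.0])
print(np.allclose(A @ x, b))                   # True
```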

2. Basic Feasible Solutions and Extreme Points

In this section, we discuss the relationship between basic feasible solutions to an LPP and extreme points of the corresponding feasible region. We will show that they are indeed the same. We assume the LPP is given in its feasible canonical form.

Theorem 1. The feasible region of an LPP is convex, closed and bounded from below.

Proof. That the feasible region is convex follows from the lemma in Chapter 1. The closedness follows from the facts that sets satisfying equalities or inequalities of the types ≤ and ≥ are closed, and that an intersection of closed sets is closed. Finally, by (6), the feasible region is a subset of R^n_+, where R^n_+ is given in (1.3) and is a set bounded from below. Hence the feasible region itself must also be bounded from below. ∎

Theorem 2. If there is a feasible solution, then there is a basic feasible solution.

Proof. Assume that there is a feasible solution x with p positive variables, where p ≤ n. Let us reorder the variables so that the first p variables are positive. Then the feasible solution can be written as x^T = (x_1, x_2, x_3, ..., x_p, 0, ..., 0), and we have

    Σ_{j=1}^p x_j a_j = b.    (7)

Let us first consider the case where {a_j}_{j=1}^p is linearly independent. Then p ≤ m, for m is the largest number of linearly independent columns in A. If p = m, then x is basic according to the definition, and in fact it is also non-degenerate. If p < m, then there exist columns a_{p+1}, ..., a_m such that the set {a_1, a_2, ..., a_m} is linearly independent. Since x_{p+1}, x_{p+2}, ..., x_m are all zero, it follows that

    Σ_{j=1}^p x_j a_j = Σ_{j=1}^m x_j a_j = b,

and x is a degenerate basic feasible solution.

Suppose now that {a_1, a_2, ..., a_p} is linearly dependent and, without loss of generality, assume that none of the a_j is the zero vector; for if some a_j were zero, we could set x_j to zero and hence reduce p by 1. With this assumption, there exist {α_j}_{j=1}^p, not all zero, such that

    Σ_{j=1}^p α_j a_j = 0.

Let α_r ≠ 0. Then we have

    a_r = - Σ_{j≠r} (α_j/α_r) a_j.

Substituting this into (7), we have

    Σ_{j≠r} ( x_j - x_r (α_j/α_r) ) a_j = b.

Hence we have a new solution of (5), namely

    [ x_1 - x_r (α_1/α_r), ..., x_{r-1} - x_r (α_{r-1}/α_r), 0, x_{r+1} - x_r (α_{r+1}/α_r), ..., x_p - x_r (α_p/α_r), 0, ..., 0 ]^T,

which has no more than p - 1 non-zero variables. We now claim that, by choosing α_r suitably, the new solution above is still a feasible solution. In order that this be true, we must choose α_r such that

    x_j - x_r (α_j/α_r) ≥ 0,    j = 1, 2, ..., p.    (8)

For those j with α_j = 0, the inequality obviously holds, as x_j ≥ 0 for all j = 1, 2, ..., p. For those j with α_j ≠ 0, the inequality becomes

    x_j/α_j - x_r/α_r ≥ 0,    for α_j > 0,    (9)
    x_j/α_j - x_r/α_r ≤ 0,    for α_j < 0.    (10)

If we choose α_r > 0, then (10) automatically holds. Moreover, if α_r is chosen so that

    x_r/α_r = min_j { x_j/α_j : α_j > 0 },    (11)

then (9) is also satisfied. Thus by choosing α_r as in (11), (8) holds and the new x is a feasible solution with p - 1 non-zero variables. We note that we can also choose α_r < 0 such that

    x_r/α_r = max_j { x_j/α_j : α_j < 0 };    (12)

then (8) is also satisfied, though clearly the two choices of α_r will in general not give the same new solution with (p - 1) non-zero entries.

Assume that a suitable α_r has been chosen and that the new feasible solution is given by

    x̂ = [ x̂_1, x̂_2, ..., x̂_{r-1}, 0, x̂_{r+1}, ..., x̂_p, 0, 0, ..., 0 ]^T,

which has no more than p - 1 non-zero variables. We can now check whether the corresponding column vectors of A are linearly independent or not. If they are, then we have a basic feasible solution. If they are not, we can repeat the above process to reduce the number of non-zero variables to p - 2. Since p is finite, the process must stop after at most p - 1 operations, at which point we have only one nonzero variable; the corresponding column of A is clearly linearly independent. (If that column were a zero column, then the corresponding α could be chosen arbitrarily, and hence it could always be eliminated first.) The x so obtained is then a basic feasible solution. ∎

Example 2. Consider the linear system

    [ 2  1  4 ] [x_1]       [11]
    [ 3  1  5 ] [x_2]   =   [14],
                [x_3]

for which x = (1, 1, 2)^T is a feasible solution. Since a_1 + 2a_2 - a_3 = 0, we have α_1 = 1, α_2 = 2, α_3 = -1. By (11),

    x_2/α_2 = 1/2 = min_{j=1,2,3} { x_j/α_j : α_j > 0 }.

Hence r = 2, and

    x̂_1 = x_1 - x_2 (α_1/α_2) = 1 - 1·(1/2) = 1/2,
    x̂_2 = 0,
    x̂_3 = x_3 - x_2 (α_3/α_2) = 2 - 1·(-1/2) = 5/2.

Thus the new feasible solution is x^T = (1/2, 0, 5/2). Since

    a_1 = [2]    and    a_3 = [4]
          [3]                 [5]

are linearly independent, the solution is also basic. Similarly, if we use (12), we get

    x_3/α_3 = -2 = max_{j=1,2,3} { x_j/α_j : α_j < 0 }.

Hence r = 3, and we have

    x̂_1 = x_1 - x_3 (α_1/α_3) = 1 - 2·(1/(-1)) = 3,
    x̂_2 = x_2 - x_3 (α_2/α_3) = 1 - 2·(2/(-1)) = 5.

Thus x^T = (3, 5, 0), which is also a basic feasible solution. Suppose that we eliminate a_1 instead; then we have

    x̂_1 = 0,
    x̂_2 = x_2 - x_1 (α_2/α_1) = 1 - 1·2 = -1,
    x̂_3 = x_3 - x_1 (α_3/α_1) = 2 - 1·(-1) = 3.

Thus the new solution is x^T = (0, -1, 3). We note that it is a basic solution, but it is no longer feasible.
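The reduction used in the proof of Theorem 2 is mechanical, so it can be scripted. The sketch below is my own illustration: the data are those of Example 2 as reconstructed above, and the function name is invented. It chooses r by (11) or (12) and forms the new solution x - (x_r/α_r)α.

```python
import numpy as np

def reduce_solution(x, alpha, use_min=True):
    """One reduction step of Theorem 2: return x - (x_r/alpha_r)*alpha,
    with r chosen by rule (11) (use_min=True) or rule (12) (use_min=False)."""
    ratios = {j: x[j] / alpha[j] for j in range(len(x))
              if (alpha[j] > 0 if use_min else alpha[j] < 0)}
    r = min(ratios, key=ratios.get) if use_min else max(ratios, key=ratios.get)
    return x - (x[r] / alpha[r]) * alpha

A = np.array([[2.0, 1.0, 4.0], [3.0, 1.0, 5.0]])
b = np.array([11.0, 14.0])
x = np.array([1.0, 1.0, 2.0])            # feasible: A @ x = b
alpha = np.array([1.0, 2.0, -1.0])       # a1 + 2*a2 - a3 = 0

print(reduce_solution(x, alpha, use_min=True))   # [0.5 0.  2.5]
print(reduce_solution(x, alpha, use_min=False))  # [3. 5. 0.]
print(A @ reduce_solution(x, alpha, False))      # still [11. 14.]
```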

Theorem 3. The basic feasible solutions of an LPP are extreme points of the corresponding feasible region.

Proof. Suppose that x is a basic feasible solution and, without loss of generality, assume that x has the form

    x = [x_B]
        [ 0 ],

where x_B = B^{-1} b is an m-vector. Suppose on the contrary that there exist two feasible solutions x_1 and x_2, different from x, such that x = λx_1 + (1-λ)x_2 for some λ ∈ (0, 1). We write

    x_1 = [u_1],    x_2 = [u_2],
          [v_1]           [v_2]

where u_1, u_2 are m-vectors and v_1, v_2 are (n-m)-vectors. Then we have

    0 = λ v_1 + (1-λ) v_2.

As x_1, x_2 are feasible, v_1, v_2 ≥ 0. Since λ > 0 and (1-λ) > 0, we must have v_1 = 0 and v_2 = 0. Thus

    b = A x_1 = [B, R] [u_1] = B u_1,
                       [v_1]

and similarly b = A x_2 = B u_2. Thus we have

    B u_1 = B u_2 = b = B x_B.

Since B is non-singular, this implies u_1 = u_2 = x_B. Hence

    x_1 = [u_1] = [x_B] = x = [u_2] = x_2,
          [ 0 ]   [ 0 ]       [ 0 ]

and that is a contradiction. Hence x must be an extreme point of the feasible region. ∎

The next theorem is the converse of Theorem 3.

Theorem 4. The extreme points of a feasible region are basic feasible solutions of the corresponding LPP.

Proof. Suppose x_0 = (x_1, ..., x_n)^T is an extreme point of the feasible region. Assume that there are r components of x_0 which are non-zero. Without loss of generality, let x_i > 0 for i = 1, 2, ..., r and x_i = 0 for i = r+1, r+2, ..., n. Then we have

    Σ_{i=1}^r x_i a_i = b.

We claim that {a_1, ..., a_r} is linearly independent. Suppose on the contrary that there exist α_i, i = 1, 2, ..., r, not all zero, such that

    Σ_{i=1}^r α_i a_i = 0.    (13)

Let ε be such that

    0 < ε < min_{α_i ≠ 0} x_i/|α_i|;

then

    x_i ± ε α_i > 0,    i = 1, 2, ..., r.    (14)

Put x_1 = x_0 + εα and x_2 = x_0 - εα, where α^T = (α_1, α_2, ..., α_r, 0, ..., 0). We claim that x_1 and x_2 are feasible solutions. Clearly by (14), x_1 ≥ 0 and x_2 ≥ 0. Moreover, by (13),

    A x_1 = A x_0 + ε A α = A x_0 = b.

Therefore x_1 is a feasible solution. Similarly,

    A x_2 = A x_0 - ε A α = A x_0 = b,

and x_2 is also a feasible solution. Since clearly x_0 = (x_1 + x_2)/2, x_0 is not an extreme point, and we have a contradiction. Therefore the set {a_1, ..., a_r} must be linearly independent, and x_0 is a basic feasible solution. ∎

The significance of the extreme points of feasible regions is given by the following theorem.

Theorem 5. The optimal solution to an LPP occurs at an extreme point of the feasible region.

Proof. We first claim that no point on the hyperplane that corresponds to the optimal value of the objective function can be an interior point of the feasible region. Suppose on the contrary that z = c^T x is an optimal hyperplane and that the optimal value is attained at a point x_0 in the interior of the feasible region. Then there exists an ε > 0 such that the sphere X = { x : ‖x - x_0‖ < ε } lies in the feasible region. Then the point

    x_1 = x_0 + (ε/2)·(c/‖c‖) ∈ X

is a feasible solution. But

    c^T x_1 = c^T x_0 + (ε/2)·(c^T c/‖c‖) = z + (ε/2)‖c‖ > z,

a contradiction to the optimality of z. Thus x_0 has to be a boundary point. Now since c^T x ≤ z for all feasible solutions x, we see that the optimal hyperplane is a supporting hyperplane of the feasible region at the point x_0. By Theorem 1, the feasible region is bounded from below. Therefore, by Theorem 1.5, the supporting hyperplane z = c^T x contains at least one extreme point of the feasible region. Clearly that extreme point must also be an optimal solution to the LPP. ∎

Summarizing what we have proved so far, we see that the optimal value of an LPP is attained at basic feasible solutions, or equivalently, at extreme points, of the feasible region. The simplex method is a method that systematically searches through the basic feasible solutions for the optimal one.

Example 3. Consider the constraint set in R^2 defined by

    -x_1 + (8/3) x_2 ≤ 4,    (15)
     x_1 + x_2 ≤ 7/2,    (16)
     x_1 ≤ 3,    (17)
     x_1, x_2 ≥ 0.    (18)

Adding slack variables x_3, x_4 and x_5 to convert it into standard form gives

    -x_1 + (8/3) x_2 + x_3 = 4,    (19)
     x_1 + x_2 + x_4 = 7/2,    (20)
     x_1 + x_5 = 3,    (21)
     x_1, x_2, x_3, x_4, x_5 ≥ 0.    (22)

A basic solution is obtained by setting any two of the variables x_1, x_2, x_3, x_4, x_5 to zero and solving for the remaining three. For example, let us set x_1 = 0 and x_3 = 0 and solve for x_2, x_4 and x_5 in

    (8/3) x_2 = 4,
    x_2 + x_4 = 7/2,
    x_5 = 3.

We get the point [0, 3/2, 0, 2, 3]. From Figure 1, we see that this point corresponds to the extreme point a of the convex polyhedron K defined by (19), (20), (21) and (22). The equations x_1 = 0 and x_3 = 0 are called the binding equations of the extreme point a.

[Figure 1: the region K bounded by the lines -x_1 + (8/3)x_2 = 4, x_1 + x_2 = 7/2 and x_1 = 3. There are 5 extreme points by inspection: {a, b, c, d, e}; the point f outside K is the infeasible basic solution discussed below.]

We note that not all basic solutions are feasible. In fact there is a maximum total of C(5,3) = C(5,2) = 10 basic solutions, and here we have only five extreme points. The extreme points of the region K and their corresponding binding equations are given in the following table:

    extreme point:   a           b           c           d           e
    set to zero:     x_1, x_3    x_3, x_4    x_4, x_5    x_2, x_5    x_1, x_2

If we set x_3 and x_5 to zero, the basic solution we get is [3, 21/8, 0, -17/8, 0]. Hence it is not a basic feasible solution, and therefore it does not correspond to any one of the extreme points of K. In fact, it is given by the point f in the figure.
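The counting argument in Example 3 can be replayed by brute force: pick every pair of variables to set to zero, solve for the rest, and test feasibility. The sketch below does this with the coefficients of (19)-(21) as reconstructed above, so the specific numbers it prints are only as reliable as that reconstruction.

```python
import itertools
import numpy as np

# Enumerate all C(5,2) = 10 choices of two variables set to zero in (19)-(21).
A = np.array([[-1.0, 8.0/3.0, 1.0, 0.0, 0.0],
              [ 1.0, 1.0,     0.0, 1.0, 0.0],
              [ 1.0, 0.0,     0.0, 0.0, 1.0]])
b = np.array([4.0, 3.5, 3.0])

for zeros in itertools.combinations(range(5), 2):
    keep = [j for j in range(5) if j not in zeros]
    B = A[:, keep]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                               # no basic solution for this pair
    x = np.zeros(5)
    x[keep] = np.linalg.solve(B, b)
    tag = "feasible" if (x >= -1e-12).all() else "infeasible"
    print(f"x_{zeros[0]+1} = x_{zeros[1]+1} = 0 -> {np.round(x, 3)} ({tag})")
```

Only the choices with a non-singular 3×3 block yield basic solutions, which is why 10 is an upper bound on their number.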

3. Improving a Basic Feasible Solution

Let the LPP be

    max z = c^T x    subject to    Ax = b,  x ≥ 0.

Here we assume that b ≥ 0 and rank(A) = m. Let the columns of A be given by a_j, i.e. A = [a_1, a_2, ..., a_n]. Let B be an m × m non-singular matrix whose columns are linearly independent columns of A, and denote B by [b_1, b_2, ..., b_m]. We learn from §1.5 that for any choice of basic matrix B, there corresponds a basic solution to Ax = b. The basic solution is given by the m-vector x_B = [x_B1, ..., x_Bm]^T, where x_B = B^{-1} b. Corresponding to any such x_B, we define an m-vector c_B, called the reduced cost vector, containing the prices of the basic variables, i.e.

    c_B = [c_B1, ..., c_Bm]^T.

Note that for any BFS x_B, the value of the objective function is given by z = c_B^T x_B.

To improve z, we must be able to locate other BFS easily, or equivalently, we must be able to replace the basic matrix B by others easily. In order to do this, we need to express the columns of A in terms of the columns of B. Since A is an m × n matrix and rank(A) = m, the column space of A is m-dimensional. Thus the columns of B form a basis for the column space of A. Let

    a_j = Σ_{i=1}^m y_ij b_i,    for all j = 1, 2, ..., n.    (23)

Put

    y_j = [y_1j, y_2j, ..., y_mj]^T,    for all j = 1, 2, ..., n;

then we have a_j = B y_j. Hence

    y_j = B^{-1} a_j,    for all j = 1, 2, ..., n.    (24)

The matrix B can be considered as the change-of-coordinates matrix from {y_1, ..., y_n} to {a_1, ..., a_n}. We remark that if a_j appears as b_i, i.e. x_j is a basic variable with x_j = x_Bi, then y_j = e_i, the i-th unit vector.

Given a BFS x_B = B^{-1} b corresponding to the basic matrix B, we would like to improve the value of the objective function, which is currently given by z = c_B^T x_B. For simplicity, we will confine ourselves to those BFS in which only one column of B is changed. We claim that if y_rj ≠ 0 for some r, then b_r can be replaced by a_j and the new set of vectors still forms a basis; we can then form the corresponding new basic solution. In fact, we note that if y_rj ≠ 0, then by (23),

    b_r = (1/y_rj) ( a_j - Σ_{i≠r} y_ij b_i ).

Using this, we can replace b_r in

    B x_B = Σ_{i=1}^m x_Bi b_i = b

by a_j, and we get

    Σ_{i≠r} ( x_Bi - y_ij (x_Br/y_rj) ) b_i + (x_Br/y_rj) a_j = b.

Let

    x̂_Bi = x_Bi - y_ij (x_Br/y_rj),    i = 1, 2, ..., m,  i ≠ r,
    x̂_Bj = x_Br/y_rj    (the value of the new basic variable x_j);

then we see that the vector

    x̂_B = [x̂_B1, ..., x̂_B,r-1, 0, x̂_B,r+1, ..., x̂_Bm, 0, ..., 0, x̂_Bj, 0, ..., 0]^T

is a basic solution with x̂_Br = 0. Thus the old basic variable x_r is replaced by the new basic variable x_j. Now we have to make sure that the new basic solution is a basic feasible solution with a larger objective value. We will see that we can ensure the feasibility of the new solution by choosing a suitable b_r to be replaced, and that we can improve the objective value by choosing a suitable a_j to be inserted into B.

3.1 Feasibility: restriction on b_r

We require that x̂_Bi ≥ 0 for all i. Since x_Bi = 0 for all m < i ≤ n, we only need

    x_Bi - y_ij (x_Br/y_rj) ≥ 0,    i = 1, 2, ..., m.

Let r be chosen such that

    x_Br/y_rj = min_{i=1,...,m} { x_Bi/y_ij : y_ij > 0 }.    (25)

Then it is easy to check that x̂_Bi ≥ 0 for all i. Thus the corresponding column b_r is the column to be removed from B. We call b_r the leaving column.

3.2 Optimality: restriction on a_j

Originally, the objective value is given by

    z = c_B^T x_B = Σ_{i=1}^m c_Bi x_Bi.

After the removal and insertion of columns of B, the new objective value becomes

    ẑ = ĉ_B^T x̂_B = Σ_{i=1}^m ĉ_Bi x̂_Bi,

where ĉ_Bi = c_Bi for all i = 1, 2, ..., m with i ≠ r, and ĉ_Br = c_j. Thus we have

    ẑ = Σ_{i≠r} c_Bi ( x_Bi - y_ij (x_Br/y_rj) ) + c_j (x_Br/y_rj)
      = Σ_{i=1}^m c_Bi x_Bi - (x_Br/y_rj) Σ_{i=1}^m c_Bi y_ij + (x_Br/y_rj) c_j
      = z + (x_Br/y_rj)( c_j - z_j ),

where

    z_j = c_B^T y_j = c_B^T B^{-1} a_j.    (26)

Obviously, by our choice, if c_j > z_j, then ẑ ≥ z. Hence we choose a_j such that c_j - z_j > 0 and y_ij > 0 for some i; for if y_ij ≤ 0 for all i, then the new solution will not be feasible. The a_j that we have chosen is called the entering column. We note that if x_B is non-degenerate, then x_Br > 0, and hence ẑ > z.

Let us summarize the process of improving a basic feasible solution in the following theorem.

Theorem 6. Let x_B be a BFS to an LPP with corresponding basic matrix B and objective value z. If (i) there exists a column a_j in A but not in B such that the condition c_j - z_j > 0 holds, and (ii) at least one y_ij > 0, then it is possible to obtain a new BFS by replacing one column in B by a_j, and the new value of the objective function ẑ is larger than or equal to z. Furthermore, if x_B is non-degenerate, then ẑ > z.

The numbers c_j - z_j are called the reduced cost coefficients with respect to the matrix B. We remark that if a_j already appears as b_i, i.e. x_j is a basic variable with x_j = x_Bi, then c_j - z_j = 0. In fact,

    z_j = c_B^T B^{-1} a_j = c_B^T y_j = c_B^T e_i = c_Bi = c_j.

By Theorem 6, it is natural to expect that when all z_j - c_j ≥ 0, we have reached the optimal solution; this is confirmed by Theorem 7 below.
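The entering rule c_j - z_j > 0 together with the ratio test (25) makes up one full improving step. The following sketch packages it as a function; the name improve_step, the largest-coefficient entering rule, and the tolerances are my own choices, not prescribed by the notes.

```python
import numpy as np

def improve_step(A, b, c, basis):
    """One improving step in the sense of Theorem 6.
    basis holds the column indices of B; returns (new basis, x_B, optimal?)."""
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)              # current BFS
    Y = np.linalg.solve(B, A)                # column j of Y is y_j = B^{-1} a_j, as in (24)
    reduced = c - c[basis] @ Y               # reduced costs c_j - z_j, using (26)
    j = int(np.argmax(reduced))              # entering column: most positive c_j - z_j
    if reduced[j] <= 1e-12:
        return basis, x_B, True              # all z_j - c_j >= 0: optimal by Theorem 7
    ratios = np.full(len(b), np.inf)
    mask = Y[:, j] > 1e-12
    ratios[mask] = x_B[mask] / Y[mask, j]
    r = int(np.argmin(ratios))               # leaving row by the ratio test (25)
    basis = basis.copy()
    basis[r] = j
    return basis, x_B, False
```

A production simplex would also guard against cycling (e.g. with Bland's rule) and detect unboundedness (no positive y_ij in the entering column); both are omitted here to keep the step readable.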

Theorem 7. Let x_B be a BFS to an LPP with corresponding objective value z_0 = c_B^T x_B. If z_j - c_j ≥ 0 for every column a_j in A, then x_B is optimal.

Proof. Given any feasible solution x, by the assumption that z_j - c_j ≥ 0 for all j = 1, 2, ..., n, and by (26), we have

    z = Σ_{j=1}^n c_j x_j ≤ Σ_{j=1}^n z_j x_j = Σ_{j=1}^n (c_B^T y_j) x_j = Σ_{j=1}^n ( Σ_{i=1}^m c_Bi y_ij ) x_j.

Thus

    z ≤ Σ_{i=1}^m ( Σ_{j=1}^n y_ij x_j ) c_Bi.    (27)

Now we claim that

    Σ_{j=1}^n y_ij x_j = x_Bi,    i = 1, 2, ..., m.

Since x is a feasible solution, Ax = b. Thus by (24),

    b = Σ_{j=1}^n x_j a_j = Σ_{j=1}^n x_j (B y_j) = Σ_{i=1}^m ( Σ_{j=1}^n y_ij x_j ) b_i = B x̃,

where x̃ is the m-vector with entries x̃_i = Σ_{j=1}^n y_ij x_j. Since B is non-singular and we already have B x_B = b, it follows that x̃ = x_B. Thus by (27), for all x in the feasible region,

    z ≤ Σ_{i=1}^m x_Bi c_Bi = z_0. ∎

Example 4. Let us consider the LPP with

    A = [ 1  2  3  4 ],    c^T = [2, 5, 6, 8]    and    b = [10]
        [ 2  0  0  5 ]                                      [14].

Let us choose our starting B as

    B = [a_1, a_4] = [b_1, b_2] = [ 1  4 ]
                                  [ 2  5 ].

Then it is easily checked that the corresponding basic solution is x_B = [2, 2]^T, which is clearly feasible, with objective value

    z = c_B^T x_B = [2, 8] [2, 2]^T = 20.

Since by (24) the y_ij are given by

    B^{-1} = (1/3) [ -5   4 ],    y_1 = [1],  y_2 = [-10/3],  y_3 = [-5],  y_4 = [0],
                   [  2  -1 ]           [0]         [  4/3]         [ 2]         [1]

we hence obtain

    z_2 = [2, 8] [-10/3, 4/3]^T = 4    and    z_3 = [2, 8] [-5, 2]^T = 6.

Since z_2 - c_2 = -1 < 0 and z_3 - c_3 = 0, we see that a_2 is the entering column. As remarked above, z_1 - c_1 = z_4 - c_4 = 0, because x_1 and x_4 are basic variables. Looking at the column entries of y_2, we find that y_22 = 4/3 is the only positive entry. Hence b_2 = a_4 is the leaving column. Thus

    B̂ = [a_1, a_2] = [ 1  2 ]
                      [ 2  0 ],

and the corresponding basic solution is found to be x̂_B = [7, 3/2]^T, which is clearly feasible, as expected. The new objective value is given by

    ẑ = [2, 5] [7, 3/2]^T = 43/2 > z.

Since by (24) the ŷ_ij are given by

    B̂^{-1} = [  0    1/2 ],    ŷ_1 = [1],  ŷ_2 = [0],  ŷ_3 = [  0],  ŷ_4 = [5/2],
              [ 1/2  -1/4 ]            [0]         [1]         [3/2]         [3/4]

we hence obtain

    ẑ_3 - c_3 = [2, 5] [0, 3/2]^T - 6 = 15/2 - 6 = 3/2,
    ẑ_4 - c_4 = [2, 5] [5/2, 3/4]^T - 8 = 35/4 - 8 = 3/4.

Since all z_j - c_j ≥ 0 for 1 ≤ j ≤ 4, we see that the point x = (7, 3/2, 0, 0)^T is an optimal solution.

The last example illustrates how one can find the optimal solution by searching through the basic feasible solutions. That is exactly what the simplex method does. However, the simplex method uses tableaus to minimize the book-keeping work that we encountered in the last example.
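Driving the improve_step sketch of §3.2 with the data of Example 4 (as reconstructed above) reproduces the whole iteration: starting from {a_1, a_4}, the column a_2 enters, a_4 leaves, and the second pass certifies optimality.

```python
import numpy as np

# Requires the improve_step function sketched after Theorem 6 above.
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 0.0, 0.0, 5.0]])
b = np.array([10.0, 14.0])
c = np.array([2.0, 5.0, 6.0, 8.0])

basis = np.array([0, 3])                     # start with B = [a_1, a_4]
while True:
    new_basis, x_B, optimal = improve_step(A, b, c, basis)
    print("basis:", basis + 1, " x_B:", x_B, " z =", c[basis] @ x_B)
    if optimal:
        break
    basis = new_basis
# basis: [1 4]  x_B: [2. 2.]   z = 20.0
# basis: [1 2]  x_B: [7. 1.5]  z = 21.5
```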
