# Geometry of Linear Programming


The intent of this chapter is to provide a geometric interpretation of linear programming problems. Convexity theory is the key to understanding the fundamental concepts and the validity of the different algorithms encountered in optimization. The last section covers the graphical method of solving linear programming problems.

## 2.1 Geometric Interpretation

Let $\mathbb{R}^n$ denote the $n$-dimensional Euclidean vector space defined over the field of reals. Suppose $X, Y \in \mathbb{R}^n$. For $X = (x_1, x_2, \ldots, x_n)^T$ and $Y = (y_1, y_2, \ldots, y_n)^T$ we define the distance between $X$ and $Y$ as

$$\|X - Y\| = \left[(x_1 - y_1)^2 + (x_2 - y_2)^2 + \cdots + (x_n - y_n)^2\right]^{1/2}.$$

**Neighbourhood.** Let $X_0$ be a point in $\mathbb{R}^n$. The $\delta$-neighbourhood of $X_0$, denoted by $N_\delta(X_0)$, is defined as the set of points satisfying

$$N_\delta(X_0) = \{X \in \mathbb{R}^n : \|X - X_0\| < \delta\}, \quad \delta > 0.$$

But

$$N_\delta(X_0) \setminus \{X_0\} = \{X \in \mathbb{R}^n : 0 < \|X - X_0\| < \delta\}$$

will be termed the deleted neighbourhood of $X_0$.
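These definitions translate directly into code; a minimal sketch (the function names are my own, not from the text):

```python
from math import sqrt

def distance(X, Y):
    """Euclidean distance ||X - Y|| between two points of R^n."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(X, Y)))

def in_neighbourhood(X, X0, delta):
    """Is X in the delta-neighbourhood N_delta(X0)?"""
    return distance(X, X0) < delta

def in_deleted_neighbourhood(X, X0, delta):
    """Is X in N_delta(X0) with X0 itself removed, i.e. 0 < ||X - X0|| < delta?"""
    return 0 < distance(X, X0) < delta

# distance((0, 0), (3, 4)) is 5.0; the centre X0 itself is excluded
# from its own deleted neighbourhood.
```

The only difference between the two membership tests is the strict lower bound $0 < \|X - X_0\|$, which removes the centre point.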

In $\mathbb{R}^2$, $N_\delta(X_0)$ is a disc without its bounding circle; in $\mathbb{R}^3$, $N_\delta(X_0)$ is a ball without its boundary sphere; and in $\mathbb{R}$ it is an open interval on the real line. For $n > 3$ such figures cannot be drawn.

Let $S \subseteq \mathbb{R}^n$. We give a few elementary definitions.

**Boundary point.** A point $X_0$ is called a boundary point of $S$ if each deleted neighbourhood of $X_0$ intersects both $S$ and its complement $S^c$.

**Interior point.** A point $X_0 \in S$ is said to be an interior point of $S$ if there exists a neighbourhood of $X_0$ which is contained in $S$.

**Open set.** A set $S$ is said to be open if for each $X \in S$ there exists a neighbourhood of $X$ which is contained in $S$. For example, $S = \{X \in \mathbb{R}^n : \|X - X_0\| < 2\}$ is an open set. The well-known results (i) a set is open if and only if each of its points is an interior point, and (ii) the union of any number of open sets is an open set, are left as exercises for the reader.

**Closed set.** A set $S$ is closed if its complement $S^c$ is open. For example, $S = \{X \in \mathbb{R}^n : \|X - X_0\| \le 3\}$ is a closed set. Again, a useful result arises: the intersection of any number of closed sets is closed.

A set $S$ in $\mathbb{R}^n$ is bounded if there exists a constant $M > 0$ such that $\|X\| \le M$ for all $X$ in $S$.

**Definition 1.** The line joining $X_1$ and $X_2$ in $\mathbb{R}^n$ is the set of points given by the linear combination

$$L = \{X \in \mathbb{R}^n : X = \alpha_1 X_1 + \alpha_2 X_2,\ \alpha_1 + \alpha_2 = 1\}.$$

Obviously,

$$L^+ = \{X : X = \alpha_1 X_1 + \alpha_2 X_2,\ \alpha_1 + \alpha_2 = 1,\ \alpha_2 \ge 0\}$$

is a half-line originating from $X_1$ in the direction of $X_2$, since $\alpha_2 = 0$ gives $X = X_1$ and $\alpha_2 = 1$ gives $X = X_2$. Similarly,

$$L^- = \{X : X = \alpha_1 X_1 + \alpha_2 X_2,\ \alpha_1 + \alpha_2 = 1,\ \alpha_1 \ge 0\}$$

is a half-line emanating from $X_2$ in the direction of $X_1$, since $\alpha_1 = 0$ gives $X = X_2$ and $\alpha_1 = 1$ gives $X = X_1$.

**Definition 2.** A point $X \in \mathbb{R}^n$ is called a convex linear combination (clc) of two points $X_1$ and $X_2$ if it can be expressed as

$$X = \alpha_1 X_1 + \alpha_2 X_2, \quad \alpha_1, \alpha_2 \ge 0, \quad \alpha_1 + \alpha_2 = 1.$$

Geometrically speaking, the set of convex linear combinations of two points $X_1$ and $X_2$ is the line segment joining $X_1$ and $X_2$. For example, let $X_1 = (1, 2)$ and $X_2 = (3, 7)$. Then

$$X = \frac{1}{3}(1, 2) + \frac{2}{3}(3, 7) = \left(\frac{1}{3}, \frac{2}{3}\right) + \left(2, \frac{14}{3}\right) = \left(\frac{7}{3}, \frac{16}{3}\right)$$

is a point lying on the line segment joining $X_1$ and $X_2$.

**Convex set.** A set $S$ is said to be convex if every clc of any two points of $S$ belongs to $S$, i.e.,

$$X_1, X_2 \in S \implies X = \alpha_1 X_1 + \alpha_2 X_2 \in S \quad \text{for all } \alpha_1, \alpha_2 \ge 0,\ \alpha_1 + \alpha_2 = 1.$$

Geometrically, this definition may be interpreted as: the line segment joining every pair of points $X_1, X_2$ of $S$ lies entirely in $S$. For illustration, see Fig. 2.1.

Figure 2.1: examples of convex and nonconvex sets.

By convention the empty set is convex. Every singleton set is convex. A straight line is a convex set, and a plane in $\mathbb{R}^3$ is also a convex set. Convex sets have many pleasant properties that give a strong mathematical background to optimization theory.

**Theorem 1.** The intersection of two convex sets is a convex set.

Proof. Let $S_1$ and $S_2$ be two convex sets. We have to show that $S_1 \cap S_2$ is a convex set. If this intersection is empty or a singleton, there is nothing to prove. Let $X_1$ and $X_2$ be two arbitrary points in $S_1 \cap S_2$. Then $X_1, X_2 \in S_1$ and $X_1, X_2 \in S_2$. Since $S_1$ and $S_2$ are convex, we have, for all $\alpha_1, \alpha_2 \ge 0$ with $\alpha_1 + \alpha_2 = 1$,

$$\alpha_1 X_1 + \alpha_2 X_2 \in S_1 \quad \text{and} \quad \alpha_1 X_1 + \alpha_2 X_2 \in S_2.$$

Thus $\alpha_1 X_1 + \alpha_2 X_2 \in S_1 \cap S_2$, and hence $S_1 \cap S_2$ is convex.
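The worked example above can be checked with exact rational arithmetic; a small sketch (the helper `clc` is my own, not from the text):

```python
from fractions import Fraction as F

def clc(alpha1, X1, alpha2, X2):
    """Convex linear combination a1*X1 + a2*X2 with a1 + a2 = 1, a1, a2 >= 0."""
    assert alpha1 + alpha2 == 1 and alpha1 >= 0 and alpha2 >= 0
    return tuple(alpha1 * u + alpha2 * v for u, v in zip(X1, X2))

# The point (1/3)*(1, 2) + (2/3)*(3, 7) from the example:
X = clc(F(1, 3), (1, 2), F(2, 3), (3, 7))
# X == (Fraction(7, 3), Fraction(16, 3)), a point of the segment X1X2
```

Using `Fraction` instead of floats keeps the arithmetic exact, so the computed point matches $(7/3, 16/3)$ with no rounding.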

**Remarks.** 1. Moreover, it can be shown that the intersection of any number of convex sets is a convex set (see the problem set).

2. The union of two or more convex sets may not be convex. As an example, the sets $S_1 = \{(x_1, 0) : x_1 \in \mathbb{R}\}$ and $S_2 = \{(0, x_2) : x_2 \in \mathbb{R}\}$ are convex in the $x_1x_2$-plane, but their union $S_1 \cup S_2 = \{(x_1, 0), (0, x_2) : x_1, x_2 \in \mathbb{R}\}$ is not convex, since $(2, 0), (0, 2) \in S_1 \cup S_2$, but their clc

$$\frac{1}{2}(2, 0) + \frac{1}{2}(0, 2) = (1, 1) \notin S_1 \cup S_2.$$

**Hyperplanes and half-spaces.** A plane in $\mathbb{R}^3$ is termed a hyperplane. A hyperplane in $\mathbb{R}^3$ is the set of points $(x_1, x_2, x_3)^T$ satisfying $a_1 x_1 + a_2 x_2 + a_3 x_3 = \beta$. Extending this idea, a hyperplane in $\mathbb{R}^n$ is the set of points $(x_1, x_2, \ldots, x_n)^T$ satisfying the linear equation

$$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = \beta, \quad \text{or} \quad a^T X = \beta,$$

where $a = (a_1, a_2, \ldots, a_n)^T$. Thus, a hyperplane in $\mathbb{R}^n$ is the set

$$H = \{X \in \mathbb{R}^n : a^T X = \beta\}. \tag{2.1}$$

A hyperplane separates the whole space into two closed half-spaces

$$H_L = \{X \in \mathbb{R}^n : a^T X \le \beta\}, \qquad H_U = \{X \in \mathbb{R}^n : a^T X \ge \beta\}.$$

Removing $H$ results in the two disjoint open half-spaces

$$\{X \in \mathbb{R}^n : a^T X < \beta\}, \qquad \{X \in \mathbb{R}^n : a^T X > \beta\}.$$

From (2.1), it is clear that the defining vector $a$ of the hyperplane $H$ is orthogonal to $H$, since for any two vectors $X_1, X_2 \in H$,

$$a^T(X_1 - X_2) = a^T X_1 - a^T X_2 = \beta - \beta = 0.$$

Moreover, for each vector $X \in H$ and each $W$ in the open lower half-space ($a^T W < \beta$),

$$a^T(W - X) = a^T W - a^T X < \beta - \beta = 0.$$

This shows that the normal vector $a$ makes an obtuse angle with any vector that points from the hyperplane toward the interior of $H_L$. In other words, $a$ is directed toward the exterior of $H_L$. Fig. 2.2 illustrates the geometry.
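The two orthogonality computations can be verified numerically on a concrete hyperplane in $\mathbb{R}^2$ (the example data below are my own, not from the text):

```python
def dot(a, x):
    """Inner product a^T x."""
    return sum(ai * xi for ai, xi in zip(a, x))

# Hyperplane H = {X : x1 + 2*x2 = 4} in R^2, with normal a = (1, 2).
a, beta = (1, 2), 4
X1, X2 = (4, 0), (0, 2)    # both lie on H: a^T X1 = a^T X2 = 4
W = (0, 0)                 # a^T W = 0 < 4, so W is in the open lower half-space

# a is orthogonal to the direction X1 - X2 lying inside H,
# and makes an obtuse angle with W - X1, which points into H_L:
diff_in_H = tuple(u - v for u, v in zip(X1, X2))   # (4, -2)
toward_W = tuple(u - v for u, v in zip(W, X1))     # (-4, 0)
```

Here `dot(a, diff_in_H)` is $0$ (orthogonality) and `dot(a, toward_W)` is negative (obtuse angle), matching the two displayed identities.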

Figure 2.2: the hyperplane $\{X \in \mathbb{R}^n : a^T X = \beta\}$ with normal vector $a$, separating the half-spaces $H_L$ and $H_U$.

**Theorem 2.** A hyperplane in $\mathbb{R}^n$ is a closed convex set.

Proof. Consider the hyperplane $S = \{X \in \mathbb{R}^n : a^T X = \alpha\}$. We prove that $S$ is a closed convex set. First, we show that $S$ is closed. To do this we prove $S^c$ is open, where

$$S^c = \{X \in \mathbb{R}^n : a^T X < \alpha\} \cup \{X \in \mathbb{R}^n : a^T X > \alpha\} = S_1 \cup S_2.$$

Let $X_0 \in S^c$. Then $X_0 \notin S$. This implies $a^T X_0 < \alpha$ or $a^T X_0 > \alpha$. Suppose $a^T X_0 < \alpha$, and let $a^T X_0 = \beta < \alpha$. Define

$$N_\delta(X_0) = \{X \in \mathbb{R}^n : \|X - X_0\| < \delta\}, \quad \delta = \frac{\alpha - \beta}{\|a\|}. \tag{2.2}$$

If $X_1 \in N_\delta(X_0)$, then in view of (2.2) and the Cauchy-Schwarz inequality,

$$|a^T X_1 - a^T X_0| = |a^T(X_1 - X_0)| \le \|a\| \, \|X_1 - X_0\| < \alpha - \beta.$$

But $a^T X_0 = \beta$. This implies $a^T X_1 < \alpha$, and hence $X_1 \in S_1$. Since $X_1$ is arbitrary, we conclude that $N_\delta(X_0) \subseteq S_1$. This implies $S_1$ is open. Similarly, it can be shown that $S_2 = \{X : a^T X > \alpha\}$ is open. Now $S^c = S_1 \cup S_2$ is open (being a union of open sets), which proves that $S$ is closed.

For convexity, let $X_1, X_2 \in S$. Then $a^T X_1 = \alpha$ and $a^T X_2 = \alpha$. Consider

$$X = \beta_1 X_1 + \beta_2 X_2, \quad \beta_1, \beta_2 \ge 0, \quad \beta_1 + \beta_2 = 1,$$

and applying $a^T$, note that

$$a^T X = \beta_1 a^T X_1 + \beta_2 a^T X_2 = \beta_1 \alpha + \beta_2 \alpha = \alpha(\beta_1 + \beta_2) = \alpha.$$

Thus $X \in S$, and hence $S$ is convex.

**Theorem 3.** A half-space $S = \{X \in \mathbb{R}^n : a^T X \le \alpha\}$ is a closed convex set.

Proof. Let $S = \{X \in \mathbb{R}^n : a^T X \le \alpha\}$. Suppose $X_0 \in S^c$. Then $a^T X_0 > \alpha$; write $a^T X_0 = \beta > \alpha$. Consider the neighbourhood

$$N_\delta(X_0) = \{X \in \mathbb{R}^n : \|X - X_0\| < \delta\}, \quad \delta = \frac{\beta - \alpha}{\|a\|}.$$

Let $X_1$ be an arbitrary point in $N_\delta(X_0)$. Then

$$|a^T X_1 - a^T X_0| \le \|a\| \, \|X_1 - X_0\| < \beta - \alpha.$$

Since $a^T X_0 = \beta$, we have $a^T X_1 > \beta - (\beta - \alpha) = \alpha$, so $X_1 \in S^c$, and hence $N_\delta(X_0) \subseteq S^c$. This implies $S^c$ is open, and hence $S$ is closed.

For convexity, take $X_1, X_2 \in S$, so that $a^T X_1 \le \alpha$ and $a^T X_2 \le \alpha$. For

$$X = \alpha_1 X_1 + \alpha_2 X_2, \quad \alpha_1, \alpha_2 \ge 0, \quad \alpha_1 + \alpha_2 = 1,$$

note that

$$a^T X = a^T(\alpha_1 X_1 + \alpha_2 X_2) = \alpha_1 a^T X_1 + \alpha_2 a^T X_2 \le \alpha_1 \alpha + \alpha_2 \alpha = \alpha(\alpha_1 + \alpha_2) = \alpha.$$

This implies $X \in S$, and hence $S$ is convex.

**Polyhedral set.** A set formed by the intersection of a finite number of closed half-spaces is termed a polyhedron (polyhedral set). If the intersection is nonempty and bounded, it is called a polytope. For a linear programme in standard form, we have the $m$ hyperplanes

$$H_i = \{X \in \mathbb{R}^n : a_i^T X = b_i,\ X \ge 0,\ b_i \ge 0\}, \quad i = 1, 2, \ldots, m,$$

where $a_i^T = (a_{i1}, a_{i2}, \ldots, a_{in})$ is the $i$th row of the constraint matrix $A$ and $b_i$ is the $i$th element of the right-hand vector $b$. Moreover, for a linear programme in standard form, the hyperplanes

$$H = \{X \in \mathbb{R}^n : C^T X = \beta\}, \quad \beta \in \mathbb{R},$$

depict the contours of the linear objective function, and the cost vector $C^T$ becomes the normal of these contour hyperplanes.

**Set of feasible solutions.** The set of all feasible solutions forms the feasible region, generally denoted by $P_F$; this is the intersection of the hyperplanes $H_i$, $i = 1, 2, \ldots, m$, and the first orthant of $\mathbb{R}^n$. Note that each hyperplane is the intersection of two closed half-spaces $H_L$ and $H_U$, and the first orthant of $\mathbb{R}^n$ is the intersection of the $n$ closed half-spaces $\{x_i \in \mathbb{R} : x_i \ge 0\}$. Hence the feasible region is a polyhedral set, given by

$$P_F = \{X \in \mathbb{R}^n : AX = b,\ X \ge 0,\ b \ge 0\}.$$

When $P_F$ is not empty, the linear programme is said to be consistent. For a consistent linear programme with a feasible solution $X^* \in P_F$, if $C^T X^*$ attains the minimum or maximum value of the objective function $C^T X$ over the feasible region $P_F$, then we say $X^*$ is an optimal solution of the linear programme. Moreover, we say a linear programme has a bounded feasible region if there exists a positive constant $M$ such that $\|X\| \le M$ for every $X \in P_F$. On the other hand, for a minimization problem, if there exists a constant $K$ such that $C^T X \ge K$ for all $X \in P_F$, then we say the linear programme is bounded below. Similarly, we can define a bounded linear programme for a maximization problem.

**Remarks.** 1. In this context, it is worth mentioning that a linear programme with a bounded feasible region is bounded, but the converse may not be true, i.e., a bounded LPP need not have a bounded feasible region (see the problem set).

2. In $\mathbb{R}^3$, a polytope has a prism-like shape.

**Converting equalities to inequalities.** To study the geometric properties of an LPP we consider the LPP in the form where the constraints are of the type $\le$ or $\ge$, i.e.,

$$\text{opt } z = C^T X \quad \text{s.t.} \quad g_i(X) \le 0 \ \text{or} \ \ge 0,\ i = 1, 2, \ldots, m, \quad X \ge 0.$$

In case there is an equality constraint like $x_1 + x_2 - 2x_3 = 5$, we can write it equivalently as

$$x_1 + x_2 - 2x_3 \le 5 \quad \text{and} \quad x_1 + x_2 - 2x_3 \ge 5.$$

This tells us that $m$ equality constraints will give rise to $2m$ inequality constraints. However, we can reduce a system with $m$ equality constraints to an equivalent system which has only $m + 1$ inequality constraints.

**Example 1.** Show that the system of $m$ equality constraints

$$\sum_{j=1}^n a_{ij} x_j = b_i, \quad i = 1, 2, \ldots, m$$

is equivalent to a system with $m + 1$ inequality constraints.

To motivate the idea, note that $x = 1$ is equivalent to the combination $x \le 1$ and $x \ge 1$. As we can check graphically, the equations $x = 1$ and $y = 2$ are equivalent to the combination

$$x \le 1, \quad y \le 2, \quad x + y \ge 3.$$

The other way to write an equivalent system is

$$x \ge 1, \quad y \ge 2, \quad x + y \le 3.$$

This idea can be generalized to $m$ equations. Consider the system of $m$ equations

$$\sum_{j=1}^n a_{ij} x_j = b_i, \quad i = 1, 2, \ldots, m.$$

This system has the equivalent form

$$\sum_{j=1}^n a_{ij} x_j \le b_i,\ i = 1, \ldots, m, \quad \text{and} \quad \sum_{i=1}^m \left(\sum_{j=1}^n a_{ij} x_j\right) \ge \sum_{i=1}^m b_i,$$

or

$$\sum_{j=1}^n a_{ij} x_j \ge b_i,\ i = 1, \ldots, m, \quad \text{and} \quad \sum_{i=1}^m \left(\sum_{j=1}^n a_{ij} x_j\right) \le \sum_{i=1}^m b_i.$$

Consider now the constraints and nonnegativity restrictions of an LPP in standard form:

$$x_1 + x_2 + s_1 = 1$$
$$-x_1 + 2x_2 + s_2 = 1$$
$$x_1, x_2, s_1, s_2 \ge 0.$$
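Example 1's equivalence can be spot-checked computationally on the small $x = 1$, $y = 2$ system from the text; a sketch (the helper names are my own):

```python
def satisfies_equalities(A, b, x):
    """Does x solve every equation of A x = b?"""
    return all(sum(aij * xj for aij, xj in zip(row, x)) == bi
               for row, bi in zip(A, b))

def satisfies_m_plus_1(A, b, x):
    """The m '<=' constraints plus the single aggregate '>=' constraint."""
    row_vals = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
    return all(v <= bi for v, bi in zip(row_vals, b)) and sum(row_vals) >= sum(b)

A = [[1, 0],   # x = 1
     [0, 1]]   # y = 2
b = [1, 2]

# On sample points, the two formulations agree: only (1, 2) satisfies both.
for x in [(1, 2), (0, 2), (1, 1), (0.5, 2.5)]:
    assert satisfies_equalities(A, b, x) == satisfies_m_plus_1(A, b, x)
```

The aggregate constraint works because once every row value is at most $b_i$, their sum can reach $\sum b_i$ only when each row holds with equality.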

Although it has four variables, the feasible region $P_F$ can be represented as a two-dimensional graph. Write the basic variables $s_1$ and $s_2$ in terms of the nonbasic variables (see Problem 28) and use the conditions $s_1 \ge 0$, $s_2 \ge 0$ to obtain

$$x_1 + x_2 \le 1, \quad -x_1 + 2x_2 \le 1, \quad x_1, x_2 \ge 0,$$

as shown in Fig. 2.3.

Figure 2.3: the region $P_F$ bounded by the lines $x_1 + x_2 = 1$ and $-x_1 + 2x_2 = 1$, with vertices $(0, 0)$, $(1, 0)$, $(1/3, 2/3)$ and $(0, 1/2)$; the point $(2/3, 1/3)$ lies on the boundary.

**Remark.** If a linear programming problem in its standard form has $n$ variables and $n - 2$ nonredundant constraints, then the LPP has a two-dimensional representation. Why? See Problem 28.

**Theorem 4.** The set of all feasible solutions of an LPP (the feasible region $P_F$) is a closed convex set.

Proof. By definition $P_F = \{X : AX = b,\ X \ge 0\}$. Let $X_1$ and $X_2$ be two points of $P_F$. This means that $AX_1 = b$, $AX_2 = b$, $X_1 \ge 0$, $X_2 \ge 0$. Consider

$$Z = \alpha X_1 + (1 - \alpha)X_2, \quad 0 \le \alpha \le 1.$$

Clearly $Z \ge 0$, and

$$AZ = \alpha AX_1 + (1 - \alpha)AX_2 = \alpha b + (1 - \alpha)b = b.$$

Thus $Z \in P_F$, and hence $P_F$ is convex.

**Remark.** Note that in the above theorem $b$ may be negative, i.e., the LPP may not be in standard form; the only requirement is that the constraints are equalities. If some $b_i$ is negative, we multiply the corresponding constraint by $-1$.

Alternative proof. Each constraint set $\{X : a_i^T X = b_i\}$, $i = 1, 2, \ldots, m$, is closed (being a hyperplane), and hence the intersection of these $m$ hyperplanes ($AX = b$) is closed. Further, each nonnegativity restriction $x_i \ge 0$

defines a closed set (being a closed half-space), and hence their intersection $X \ge 0$ is closed. Again, the intersection $P_F = \{X : AX = b,\ X \ge 0\}$ of these closed sets is closed. This concludes that $P_F$ is a closed convex set.

**Clc's of two or more points.** A convex linear combination of two points gives a line segment. The study of different regions needs the clc of more than two points. This motivates the idea of extending the concept.

**Definition 3.** A point $X$ is called a clc of the $m$ points $X_1, X_2, \ldots, X_m$ in $\mathbb{R}^n$ if there exist scalars $\alpha_i$, $i = 1, 2, \ldots, m$, such that

$$X = \alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_m X_m, \quad \alpha_i \ge 0, \quad \alpha_1 + \alpha_2 + \cdots + \alpha_m = 1.$$

**Remark.** This definition includes the clc of two points as a special case. Henceforth, whenever we speak of a clc, we mean a clc of two or more points.

**Theorem 5.** A set $S$ is convex if and only if every clc of points in $S$ belongs to $S$.

Proof. ($\Leftarrow$) The hypothesis that every clc of points in $S$ belongs to $S$ includes, in particular, the assertion that every clc of two points belongs to $S$. Hence $S$ is convex.

($\Rightarrow$) Suppose $S$ is convex. We prove the result by induction. Since $S$ is convex, every clc of two points in $S$ belongs to $S$; hence the theorem is true for clc's of two points. Assume the theorem is true for clc's of $n$ points. We must show that it is true for $n + 1$ points. Consider

$$X = \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_n X_n + \beta_{n+1} X_{n+1}$$

such that $\beta_1 + \beta_2 + \cdots + \beta_{n+1} = 1$, $\beta_i \ge 0$. If $\beta_{n+1} = 0$, then $X$, being a clc of $n$ points, belongs to $S$ by the induction hypothesis. If $\beta_{n+1} = 1$, then $\beta_1 = \beta_2 = \cdots = \beta_n = 0$ and $X = 1 \cdot X_{n+1}$, so the theorem is trivially true. Assume then that $\beta_{n+1} \ne 0, 1$, i.e., $0 < \beta_{n+1} < 1$. Now $\beta_1 + \beta_2 + \cdots + \beta_n \ne 0$, and

$$X = (\beta_1 + \cdots + \beta_n)\,\frac{\beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_n X_n}{\beta_1 + \cdots + \beta_n} + \beta_{n+1} X_{n+1},$$

or

$$X = (\beta_1 + \cdots + \beta_n)(\alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_n X_n) + \beta_{n+1} X_{n+1},$$

where $\alpha_i = \beta_i/(\beta_1 + \beta_2 + \cdots + \beta_n)$, $i = 1, 2, \ldots, n$. Clearly, $\alpha_i \ge 0$ and $\alpha_1 + \alpha_2 + \cdots + \alpha_n = 1$. Hence, by the induction hypothesis, $\alpha_1 X_1 + \alpha_2 X_2 + \cdots +$

$\alpha_n X_n = Y$ (say) belongs to $S$. Again,

$$X = (\beta_1 + \beta_2 + \cdots + \beta_n)Y + \beta_{n+1} X_{n+1}$$

such that $\beta_1 + \beta_2 + \cdots + \beta_n \ge 0$, $\beta_{n+1} \ge 0$, $\sum_{i=1}^{n+1} \beta_i = 1$. Thus $X$ is a clc of two points of $S$ and hence belongs to $S$.

**Convex hull.** Let $S$ be a nonempty set. The convex hull of $S$, denoted by $[S]$, is defined as the set of all clc's of points of $S$:

$$[S] = \{X \in \mathbb{R}^n : X \text{ is a clc of points in } S\}.$$

**Remarks.** 1. By convention $[\varnothing] = \{0\}$.

2. The above discussion reveals that the convex hull of a finite number of points $X_1, X_2, \ldots, X_m$ is the set of all convex combinations of the $m$ points. This is a convex set having at most $m$ vertices; "at most" because some of the points may be interior points. Moreover, a convex hull generated in this way is a closed convex set.

3. The convex hull of $m$ points is given a special name: a convex polyhedron.

**Theorem 6.** Let $S$ be a nonempty set. Then the convex hull $[S]$ is the smallest convex set containing $S$.

Proof. Let $X, Y \in [S]$. Then

$$X = \alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_n X_n, \quad \alpha_i \ge 0, \quad \sum_{i=1}^n \alpha_i = 1,$$

$$Y = \beta_1 Y_1 + \beta_2 Y_2 + \cdots + \beta_m Y_m, \quad \beta_j \ge 0, \quad \sum_{j=1}^m \beta_j = 1.$$

Consider the linear combination $\alpha X + \beta Y$, $\alpha, \beta \ge 0$, $\alpha + \beta = 1$, and note that

$$\alpha X + \beta Y = \alpha(\alpha_1 X_1 + \cdots + \alpha_n X_n) + \beta(\beta_1 Y_1 + \cdots + \beta_m Y_m) = (\alpha\alpha_1)X_1 + \cdots + (\alpha\alpha_n)X_n + (\beta\beta_1)Y_1 + \cdots + (\beta\beta_m)Y_m.$$

Now each $\alpha\alpha_i \ge 0$, $\beta\beta_j \ge 0$, and

$$\sum_{i=1}^n (\alpha\alpha_i) + \sum_{j=1}^m (\beta\beta_j) = \alpha\sum_{i=1}^n \alpha_i + \beta\sum_{j=1}^m \beta_j = \alpha + \beta = 1,$$

i.e., $\alpha X + \beta Y$ is a clc of the points $X_1, X_2, \ldots, X_n, Y_1, Y_2, \ldots, Y_m$. This implies that $\alpha X + \beta Y \in [S]$, and hence $[S]$ is convex.

Clearly, $[S]$ contains $S$, because each $X \in S$ can be written as $X = 1 \cdot X$, a clc of itself.

To prove that $[S]$ is the smallest convex set containing $S$, we show that if there exists another convex set $T$ containing $S$, then $[S] \subseteq T$. Suppose $T$ is a convex set which contains $S$, and take any element $X \in [S]$. Then

$$X = \alpha_1 X_1 + \cdots + \alpha_n X_n, \quad \alpha_i \ge 0, \quad \sum_{i=1}^n \alpha_i = 1, \quad X_1, X_2, \ldots, X_n \in S.$$

Since $S \subseteq T$, it follows that $X_1, X_2, \ldots, X_n \in T$, and moreover the convexity of $T$ ensures that

$$X = \alpha_1 X_1 + \alpha_2 X_2 + \cdots + \alpha_n X_n \in T.$$

Hence $[S] \subseteq T$.

**Remark.** If $S$ is convex, then $S = [S]$.

For a convex set $S \subseteq \mathbb{R}^n$, a key geometric fact is the following separation theorem. The proof is beyond the scope of the book.

**Theorem 7 (Separation theorem).** Let $S \subseteq \mathbb{R}^n$ and let $X$ be a boundary point of $S$. Then there is a hyperplane $H$ containing $X$ with $S$ contained either in the lower half-space or in the upper half-space.

Based on Theorem 7 we can define a supporting hyperplane $H$ to be a hyperplane such that (i) the intersection of $H$ and $S$ is nonempty, and (ii) the lower half-space contains $S$; see Fig. 2.4.

Figure 2.4: a supporting hyperplane $H$ with normal $a$, touching $S$ at a boundary point $X$, with $S$ contained in the half-space $H_L$.
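The convex-hull construction can be illustrated numerically: every clc of a finite point set stays inside the hull. A sketch taking $S$ to be the corners of the unit square (my own example, not from the text):

```python
from fractions import Fraction as F

def clc_points(weights, points):
    """Convex linear combination of finitely many points of R^2."""
    assert all(w >= 0 for w in weights) and sum(weights) == 1
    return tuple(sum(w * p[i] for w, p in zip(weights, points)) for i in range(2))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # S: corners of the unit square

# Equal weights give the centre of the square, a point of [S]:
X = clc_points([F(1, 4)] * 4, square)
# X == (Fraction(1, 2), Fraction(1, 2)); any admissible weights
# produce a point with both coordinates in [0, 1], i.e. inside [S].
```

Here $[S]$ is the full square $[0,1]^2$: the four corners are its vertices, and every choice of nonnegative weights summing to 1 lands inside it, in line with Theorems 5 and 6.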

One very important fact to point out here is that the intersection of the polyhedral feasible set with a supporting hyperplane having the cost vector $C^T$ as its normal provides an optimal solution of an LPP. This is the key idea of solving linear programming problems by the graphical method. To verify this fact, let us take

$$\min x_0 = -x_1 - 2x_2$$

as the objective function for the LPP whose feasible region is shown in the figure. Note that $-x_1 - 2x_2 = -80$ is the hyperplane passing through $(0, 40)$, and the vector $C^T = (-1, -2)$ is normal to this plane. This is a supporting hyperplane passing through $(0, 40)$, since

$$H_L = \{(x_1, x_2) : x_1 + 2x_2 \le 80\}$$

contains $P_F$ and is satisfied by the points $(20, 20)$ and $(30, 0)$. However, the hyperplane passing through $(20, 20)$ which is normal to $C^T = (-1, -2)$ is given by $-x_1 - 2x_2 = -60$. This is not a supporting hyperplane, as the point $(0, 40)$ is not in $\{(x_1, x_2) : x_1 + 2x_2 \le 60\}$. Similarly, it can be shown that the hyperplane through $(30, 0)$ which is normal to $C^T$ is also not a supporting hyperplane. This implies that $x_1 = 0$, $x_2 = 40$ is the optimal solution.

## 2.2 Vertices and Basic Feasible Solutions

**Definition 4.** A point $X$ of a convex set $S$ is said to be a vertex (extreme point) of $S$ if $X$ is not a clc of any two other distinct points of $S$, i.e., $X$ cannot be expressed as

$$X = \alpha_1 X_1 + \alpha_2 X_2, \quad \alpha_1, \alpha_2 > 0, \quad \alpha_1 + \alpha_2 = 1,$$

with $X_1, X_2 \in S$ distinct from $X$. In other words, a vertex is a point that does not lie strictly within the line segment connecting two other points of the convex set.

From the pictures of convex polyhedral sets, especially in lower-dimensional spaces, it is easy to see the vertices of a convex polyhedron; analyze the set $P_F$ as depicted in Fig. 2.3. Further, we note the following observations:

(a) $A(0, 0)$, $B(1, 0)$, $C(1/3, 2/3)$ and $D(0, 1/2)$ are vertices, and moreover these are boundary points also. But every boundary point need not be a vertex. In Fig. 2.3, $E(2/3, 1/3)$ is a boundary point of the

feasible region $P_F$ but not a vertex, since it can be written as a clc of distinct points of the set $P_F$ as

$$\left(\frac{2}{3}, \frac{1}{3}\right) = \frac{1}{2}(1, 0) + \frac{1}{2}\left(\frac{1}{3}, \frac{2}{3}\right).$$

(b) All boundary points of $\{(x, y) : x^2 + y^2 \le 9\}$ are vertices. Hence the vertices of a bounded closed set may be infinite in number. However, in an LPP, if $P_F$ is bounded and closed, then it contains a finite number of vertices; see Theorem 9.

(c) Needless to mention, whenever we talk about a vertex of a set $S$, it is implied that $S$ is convex.

(d) If $S$ is unbounded, then it may have no vertex, e.g., $S = \mathbb{R}^2$.

(e) If $S$ is not closed, then it may have no vertex, e.g., $S = \{(x, y) : 0 < x < 1,\ 2 < y < 3\}$ has no vertex.

**Remark.** Here extreme points are in reference to convex sets. However, extreme points of a function will be defined in Chapter 13.

To characterize the vertices of the feasible region $P_F = \{X \in \mathbb{R}^n : AX = b,\ X \ge 0\}$ of a given LPP in standard form, we may assume $A$ is an $m \times n$ matrix with $m < n$, and denote the $j$th column of the coefficient matrix $A$ by $A_j$, $j = 1, 2, \ldots, n$. Then, for each $X = (x_1, x_2, \ldots, x_n)^T \in P_F$, we have

$$x_1 A_1 + x_2 A_2 + \cdots + x_n A_n = b.$$

Therefore, $A_j$ is the column of $A$ corresponding to the $j$th component $x_j$ of $X$.

**Theorem 8.** A point $X$ of the feasible region $P_F$ is a vertex of $P_F$ if and only if the columns of $A$ corresponding to the positive components of $X$ are linearly independent.

Proof. Without loss of generality, we may assume that the components of $X$ are zero except for the first $p$ components, namely

$$X = \begin{bmatrix} \overline{X} \\ 0 \end{bmatrix}, \quad \overline{X} = (x_1, x_2, \ldots, x_p)^T > 0.$$

We also denote the submatrix formed by the first $p$ columns of $A$ by $\overline{A}$. Hence $AX = \overline{A}\,\overline{X} = b$.

($\Leftarrow$) Suppose that the columns of $\overline{A}$ are not linearly independent. Then there exists a nonzero vector $\overline{w}$ (at least one of its $p$ components is nonzero) such that $\overline{A}\,\overline{w} = 0$. Now define

$$\overline{Y} = \overline{X} + \delta\overline{w} \quad \text{and} \quad \overline{Z} = \overline{X} - \delta\overline{w}.$$

For sufficiently small $\delta > 0$, we note that $\overline{Y}, \overline{Z} \ge 0$ and

$$\overline{A}\,\overline{Y} = \overline{A}\,\overline{Z} = \overline{A}\,\overline{X} = b.$$

We further define

$$Y_1 = \begin{bmatrix} \overline{Y} \\ 0 \end{bmatrix} \quad \text{and} \quad Z_1 = \begin{bmatrix} \overline{Z} \\ 0 \end{bmatrix}.$$

Note that $Y_1, Z_1 \in P_F$, $Y_1 \ne Z_1$, and $X = \frac{1}{2}Y_1 + \frac{1}{2}Z_1$. In other words, $X$ is not a vertex of $P_F$.

($\Rightarrow$) Suppose that $X$ is not a vertex of $P_F$. Then

$$X = \alpha Y_1 + (1 - \alpha)Z_1, \quad 0 < \alpha < 1,$$

for some distinct $Y_1, Z_1 \in P_F$. Since $Y_1, Z_1 \ge 0$ and $0 < \alpha < 1$, the last $n - p$ components of $Y_1$ and $Z_1$ must be zero, as

$$0 = \alpha y_j + (1 - \alpha)z_j, \quad j = p + 1, p + 2, \ldots, n$$

forces $y_j = z_j = 0$. Consequently, we have a nonzero vector $w = X - Y_1$, whose nonzero entries occur only among the first $p$ components, such that

$$\overline{A}\,\overline{w} = Aw = AX - AY_1 = b - b = 0.$$

This shows that the columns of $\overline{A}$ are linearly dependent.

Consider an LPP in standard form, $AX = b$, and suppose $n$ is the number of unknowns and $m$ the number of equations. Assume $m < n$ (otherwise the problem is over-specified). After introducing the slack and surplus variables, this assumption generally remains valid. Let $r(A)$ and $r(A, b)$ be the ranks of the matrix $A$ and the augmented matrix $(A, b)$, respectively.

(i) $r(A) = r(A, b)$ guarantees consistency, i.e., $AX = b$ has at least one solution.

(ii) $r(A) \ne r(A, b)$ means the system is inconsistent, i.e., $AX = b$ has no solution. For example, the system

$$x_1 + x_2 + x_3 = 1$$
$$4x_1 + 2x_2 - x_3 = 5$$
$$9x_1 + 5x_2 - x_3 = 12$$

has no solution and hence is inconsistent: the left side of the third equation is the first plus twice the second, which would force the right side to be $1 + 2 \cdot 5 = 11 \ne 12$.
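The rank test for consistency can be carried out with exact Gaussian elimination over the rationals; a sketch (applied to a $3 \times 3$ system whose third left side is row one plus twice row two, but whose right sides do not match, so $r(A) = 2 < r(A, b) = 3$):

```python
from fractions import Fraction as F

def rank(M):
    """Rank of a matrix, by Gauss-Jordan elimination with exact Fractions."""
    M = [[F(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                       # no pivot in this column
        M[r], M[piv] = M[piv], M[r]        # bring pivot row up
        for i in range(len(M)):
            if i != r and M[i][c] != 0:    # clear the column elsewhere
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 1, 1], [4, 2, -1], [9, 5, -1]]
Ab = [row + [rhs] for row, rhs in zip(A, [1, 5, 12])]
# rank(A) == 2 while rank(Ab) == 3: the system is inconsistent
```

Since $r(A) \ne r(A, b)$, case (ii) applies and the system has no solution.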

For consistent systems we have a difficulty when $r(A) = r(A, b) < m$, the number of equations. It means that the $m$ rows are not all linearly independent: some row may be written as a linear combination of the other rows, and we consider this row (constraint) redundant. For example, in

$$x_1 - x_2 + 2x_3 = 4$$
$$2x_1 + x_2 - x_3 = 3$$
$$5x_1 + x_2 = 10$$
$$x_1, x_2, x_3 \ge 0,$$

we have $r(A) = r(A, b) = 2$, which is less than the number of equations. The third constraint is the sum of the first constraint and two times the second constraint; hence the third constraint is redundant. In general, if the system is consistent, then the rows in the echelon form of the matrix which contain pivot elements are the ones to keep; all nonpivot rows are redundant.

Another type of redundancy happens when $r(A) = r(A, b)$ but some constraint contributes nothing toward finding the optimal solution. Geometrically, a constraint is redundant if its removal leaves the feasible solution space unchanged. However, in this case $r(A) = r(A, b) = m$, the number of equations. Such cases we shall deal with in Chapter 3. The following simple example illustrates the fact:

$$x_1 + x_2 + s_1 = 1$$
$$x_1 + 2x_2 + s_2 = 2$$
$$x_1, x_2, s_1, s_2 \ge 0.$$

Here $r(A) = r(A, b) = 2$. The vertices of the feasible region are $(0, 0)$, $(1, 0)$ and $(0, 1)$. The second constraint is redundant, as it contributes only the vertex $(0, 1)$, which is already given by the first constraint. It is advisable to delete redundant constraints, if any, before an LPP is solved for its optimal solution; otherwise computational difficulty may arise.

**Basic solutions.** Let $AX = b$ be a system of $m$ simultaneous linear equations in $n$ unknowns ($m < n$) with $r(A) = m$. This means that there exist $m$ linearly independent column vectors. In this case, group these $m$ linearly independent columns to form a basis matrix $B$, and leave the remaining $n - m$ columns as the nonbasis matrix $N$. In other words, we can rearrange

$$A = [B \mid N].$$

We can also rearrange the components of any solution vector $X$ in the corresponding order, namely

$$X = \begin{bmatrix} X_B \\ X_N \end{bmatrix}.$$

For a component in $X_B$, its corresponding column is in the basis $B$; similarly, components in $X_N$ correspond to the nonbasis matrix $N$. If all $n - m$ variables $X_N$ which are not associated with columns of $B$ are equated to zero, then the solution of the resulting system $BX_B = b$ is called a basic solution of $AX = b$. Out of $n$ columns, $m$ columns can be selected in at most $n!/[m!\,(n-m)!]$ ways. The $m$ variables (left after putting the $n - m$ variables equal to zero) are called basic variables, and the remaining $n - m$ variables are nonbasic variables. The matrix corresponding to the basic variables is termed the basis matrix.

**Basic feasible solution (BFS).** A basic solution of the system $AX = b$ which also satisfies $X \ge 0$ is called a basic feasible solution.

**Nondegenerate BFS.** If all the $m$ basic variables in a BFS are positive, then it is called a nondegenerate basic feasible solution.

The following result is a direct consequence of Theorem 8.

**Corollary 1.** A point $X \in P_F$ is a vertex of $P_F$ if and only if $X$ is a basic feasible solution corresponding to some basis $B$.

Proof. By Theorem 8, $X \in P_F$ is a vertex $\iff$ the columns $A_i$ corresponding to the components $x_i > 0$ are linearly independent $\iff$ these columns extend to a nonsingular basis matrix $B$ $\iff$ $X$ is a basic feasible solution.

**Degenerate BFS.** A BFS which is not nondegenerate is called a degenerate basic feasible solution, i.e., at least one of the basic variables is at zero level in the BFS.

**Remarks.** 1. Corollary 1 reveals that there exists a one-one correspondence between the set of basic feasible solutions and the set of vertices of $P_F$ only in the absence of degeneracy. In the case of degeneracy, a vertex may correspond to many degenerate basic feasible solutions; Example 3 will make this remark clear and justified.

2. When we select $m$ variables out of $n$ variables to define a basic solution, the matrix $B$ formed with the coefficients of

the $m$ variables must be nonsingular; otherwise we may get no solution or an infinity of solutions (not basic), see Problem 12, Problem Set 2.

3. Is every basic solution of the system $AX = b$ a basic feasible solution? Why or why not?

**Example 2.** Without sketching $P_F$, find the vertices of the system

$$-x_1 + x_2 \le 1$$
$$2x_1 + x_2 \le 2$$
$$x_1, x_2 \ge 0.$$

First, write the system in standard form:

$$-x_1 + x_2 + s_1 = 1$$
$$2x_1 + x_2 + s_2 = 2.$$

Here $n = 4$ and $m = 2$, and $4!/(2!\,2!) = 6$. Hence the system has at most 6 basic solutions. To find all the basic solutions we take each pair of variables from the set $\{x_1, x_2, s_1, s_2\}$ as basic variables, obtaining

$$\left(\frac{1}{3}, \frac{4}{3}, 0, 0\right),\ (1, 0, 2, 0),\ (-1, 0, 0, 4),\ (0, 2, -1, 0),\ (0, 1, 0, 1),\ (0, 0, 1, 2).$$

Of these, 4 are basic feasible solutions. The solution set is nondegenerate, and hence there exists a one-one correspondence between the BFS's and the vertices, i.e., the feasible region $P_F$ has 4 vertices. Note that $(-1, 0, 0, 4)$ and $(0, 2, -1, 0)$ are basic solutions but not feasible.

**Example 3.** The system $AX = b$ is given by

$$x_1 + x_2 - 8x_3 + 3x_4 = 2$$
$$-x_1 + x_2 + x_3 - 3x_4 = 2.$$

Determine the following:

(i) a nonbasic feasible solution;

(ii) a basic solution with $x_1, x_4$ as basic variables;

(iii) a vertex which corresponds to two different basic feasible solutions;

(iv) all nondegenerate basic feasible solutions.

Here $X = (x_1, x_2, x_3, x_4)^T$, $b = (2, 2)^T$, and the coefficient matrix $A$ and the augmented matrix $(A, b)$ are

$$A = \begin{bmatrix} 1 & 1 & -8 & 3 \\ -1 & 1 & 1 & -3 \end{bmatrix}; \qquad (A, b) = \begin{bmatrix} 1 & 1 & -8 & 3 & 2 \\ -1 & 1 & 1 & -3 & 2 \end{bmatrix}.$$

We make the following observations:

(i) Since $r(A) = r(A, b) = 2 <$ the number of unknowns, the system is consistent and has an infinity of solutions with two degrees of freedom. The row-reduced echelon form of $(A, b)$ is

$$\begin{bmatrix} 1 & 0 & -9/2 & 3 & 0 \\ 0 & 1 & -7/2 & 0 & 2 \end{bmatrix}.$$

This gives

$$x_1 = \frac{9}{2}x_3 - 3x_4, \qquad x_2 = 2 + \frac{7}{2}x_3. \tag{2.3}$$

Let us assign $x_3 = x_4 = 2$, i.e., nonzero values to $x_3$ and $x_4$, to get $x_1 = 3$ and $x_2 = 9$. Thus $(3, 9, 2, 2)$ is a feasible solution but not a basic solution, since in a basic solution at least two variables must be at zero level. Moreover, $(-3, 2, 0, 1)$ is a nonbasic feasible solution of the system.

(ii) If $x_1$ and $x_4$ are chosen as basic variables, then with $x_2 = x_3 = 0$ the system becomes

$$x_1 + 3x_4 = 2$$
$$-x_1 - 3x_4 = 2.$$

This system has no solution, so there is no basic solution with $x_1, x_4$ as basic variables.

(iii) Further, for $x_1, x_2$ as basic variables, $(0, 2, 0, 0)$ is a BFS with basis matrix

$$B = \begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix},$$

while for $x_2, x_4$ as basic variables, $(0, 2, 0, 0)$ is a BFS with basis matrix

$$B = \begin{bmatrix} 1 & 3 \\ 1 & -3 \end{bmatrix}.$$

Both BFS's, given by $(0, 2, 0, 0)$, seem to be the same, but they are different as basic feasible solutions because their basis matrices are different. However, both BFS's correspond to the same vertex $(0, 2)$. The vertex $(0, 2)$ can be identified by writing the equivalent form of the LPP as a two-dimensional graph in the $x_1x_2$-plane.

(iv) All nondegenerate basic feasible solutions of the system are

$$\left(-\frac{18}{7}, 0, -\frac{4}{7}, 0\right), \qquad \left(0, 0, -\frac{4}{7}, -\frac{6}{7}\right).$$

Note that these vectors are BFS's even though some entries are negative, because here all variables are unrestricted in sign.

**Remarks.** 1. Note that system (2.3) can be written as

$$1x_1 + 0x_2 - \frac{9}{2}x_3 + 3x_4 = 0$$
$$0x_1 + 1x_2 - \frac{7}{2}x_3 + 0x_4 = 2.$$

This is the canonical form of the constraint equations. Further, note that

$$A_3 = \begin{bmatrix} -8 \\ 1 \end{bmatrix} = -\frac{9}{2}\begin{bmatrix} 1 \\ -1 \end{bmatrix} - \frac{7}{2}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = -\frac{9}{2}A_1 - \frac{7}{2}A_2.$$

This extracts a good inference: if $x_1$ and $x_2$ are basic variables and are pivoted, then the coefficients of $x_3$ and $x_4$ in the canonical form are the coordinates of the columns $A_3$ and $A_4$ of $A$ with respect to the basic columns $A_1$ and $A_2$. This phenomenon will be used in the simplex method, to be discussed in the next chapter.

2. All basic solutions can also be obtained by pivoting at any two variables and assigning zero value to the remaining variables. This can be done in 6 ways and hence will give at most six different basic solutions.

3. For the existence of all basic solutions it is necessary that the corresponding column sets of the coefficient matrix $A$ be linearly independent. It is also possible that after keeping the requisite number of variables at zero level, the remaining system has an infinity of solutions (see Problem 12(c)). This happens when at least two of the chosen columns are not linearly independent; in Problem 12, $A_2$ and $A_5$ are not linearly independent. We do not term these infinitely many solutions basic solutions, because for a basic solution the matrix formed with the coefficients of the basic variables (in order) must be nonsingular.

**Theorem 9.** The set of all feasible solutions $P_F$ of an LPP has a finite number of vertices.

Proof. Let $P_F$ be the set of all feasible solutions of the LPP, where the constraints are written in standard form, $AX = b$, $b \ge 0$. If $A$ is of full rank $m$ and every $m \times m$ submatrix of $A$ is nonsingular, then the system has

$$\binom{n}{m} = \frac{n!}{m!\,(n-m)!}$$

basic solutions. Adding the condition $X \ge 0$, we can say that $AX = b$, $X \ge 0$ has at most $n!/[m!(n-m)!]$ basic feasible solutions. As there exists a one-to-one correspondence between the BFS set and the set of all vertices (provided nondegeneracy persists in the problem), we conclude that the set of vertices has at most $n!/[m!(n-m)!]$ elements, and hence the number of vertices is finite. In the case of degeneracy, more than one BFS may correspond to the same vertex. Hence, in this situation the number of vertices is less than $n!/[m!(n-m)!]$, and again the vertices are finite in number.

Note that two basic feasible solutions (vertices) are adjacent if they use $m - 1$ basic variables in common to form their bases. For example, in Fig. 2.3 it is easy to verify that $(0, 1/2)$ is adjacent to $(0, 0)$ but not adjacent to $(1, 0)$, since $(0, 1/2)$ takes $x_2$ and $s_1$ as basic variables, while $(0, 0)$ takes $s_1$ and $s_2$, and $(1, 0)$ takes $x_1$ and $s_2$.
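The counting and adjacency claims can be checked by enumerating all $\binom{4}{2} = 6$ basis choices for the Fig. 2.3 system; a sketch (assuming the reconstructed constraints $x_1 + x_2 + s_1 = 1$, $-x_1 + 2x_2 + s_2 = 1$):

```python
from fractions import Fraction as F
from itertools import combinations

A = [[1, 1, 1, 0],    # x1 +  x2 + s1      = 1
     [-1, 2, 0, 1]]   # -x1 + 2*x2    + s2 = 1
b = [1, 1]

bfs = {}  # vertex (x1, x2) -> its set of basic-variable indices
for c0, c1 in combinations(range(4), 2):
    p, q, r, s = A[0][c0], A[0][c1], A[1][c0], A[1][c1]
    det = p * s - q * r
    if det == 0:
        continue                            # singular basis: no basic solution
    v0 = F(b[0] * s - q * b[1], det)        # Cramer's rule on the 2x2 system
    v1 = F(p * b[1] - b[0] * r, det)
    if v0 >= 0 and v1 >= 0:                 # keep only basic *feasible* solutions
        X = [F(0)] * 4
        X[c0], X[c1] = v0, v1
        bfs[(X[0], X[1])] = {c0, c1}        # nondegenerate here, so one basis each

def adjacent(u, v):
    """Adjacent vertices share m - 1 = 1 basic variable."""
    return len(bfs[u] & bfs[v]) == 1

# 4 of the 6 basic solutions are feasible, i.e. 4 vertices;
# (0, 1/2) is adjacent to (0, 0) but not to (1, 0).
```

The enumeration recovers exactly the four vertices of Fig. 2.3 and reproduces the adjacency statement in the text.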
Under the nondegeneracy assumption, since each of the $n - m$ nonbasic variables could replace one current basic variable in a given basic feasible solution, every BFS (vertex) has $n - m$ neighbours. Each neighbouring BFS can be reached by increasing the value of one nonbasic variable from zero to positive and decreasing the value of one basic variable from positive to zero. This is the basic concept of pivoting in the simplex method, to be discussed in the next chapter.

Suppose that the feasible region $P_F$ is bounded; in other words, it

22 54 CHAPTER 2. GEOMETRY OF LINEAR PROGRAMMING is a polytope as shown in Fig PSfrag replacements X 1 X 5 X X 2 X 4 X 6 Figure 2.5 X 3 From this figure, it is easy to observe that each point of P F can be represented as a convex combination of finite number of vertices of P F, in particular X is the clc of the vertices X 1, X 3, X 4. This idea of convex resolution can be verified for a general polyhedron (may be unbounded) with the help of following definition: Definition 5. An extremal direction of a polyhedron set is a nonzero vector d R n such that for each X 0 P F, the ray is contained in P F. {X R n : X = X 0 + αd, α 0} Remark. From the definition of feasible region, we see that a nonzero vector d R n is an extremal direction of P F Ad = 0 and d 0. Also, P F is unbounded P F has an extremal direction. Considering the vertices and extremal directions every point in P F can be represented by the following useful result known as the resolution theorem. Theorem 10 (Resolution theorem). Let B = {V i R n : i Z} be the set of all vertices of P F with a finite index set Z. Then, for each X P F, we have X = α i V i + d, α i = 1, α i 0, (2.4) i Z where d is either the zero vector or an extremal direction of P F. Proof. To prove the theorem by the induction, we let p be the number of positive components of X P F. When p = 0, X = (0, 0,..., 0) is obviously a vertex. Assume that the theorem holds i Z

for p = 0, 1, ..., k, and let X have k + 1 positive components. If X is a vertex, then there is nothing to prove. If X is not a vertex, we write X^T = (x_1, x_2, ..., x_{k+1}, 0, ..., 0) ∈ R^n with (x_1, x_2, ..., x_{k+1}) > 0, and partition A = [Ā N], where Ā is the submatrix corresponding to the positive components of X. Then, since X is not a vertex, by Theorem 8 the columns of Ā are linearly dependent; in other words, there exists a nonzero vector w̄ ∈ R^{k+1} such that Āw̄ = 0. Define w = (w̄, 0, ..., 0) ∈ R^n; then w ≠ 0 and Aw = Āw̄ = 0. There are three possibilities: w ≥ 0, w ≤ 0, or w has both positive and negative components. For w ≤ 0, consider X(θ) = X + θw and pick θ* to be the largest value of θ such that X* = X(θ*) remains nonnegative; X* then has at least one more zero component than X. Then follow the induction hypothesis to show that the theorem holds. Similarly, show that in the remaining two cases the theorem still holds.

The direct consequences of the resolution theorem are:

Corollary 2. If P_F is a bounded feasible region (a polytope), then each point X ∈ P_F is a convex linear combination of its vertices.

Proof. Since P_F is bounded, by the remark following Definition 5 the extremal direction d in (2.4) is zero, and (2.4) then expresses X as a clc of vertices.

Corollary 3. If P_F is nonempty, then it has at least one vertex.

Example 4. Consider the set

P_F = {(x_1, x_2) : x_1 + x_2 ≤ 1, −x_1 + 2x_2 ≤ 1, x_1, x_2 ≥ 0}.

Show that a point of P_F may be a clc of different sets of vertices.

The vertices of P_F are (0, 0), (1, 0), (1/3, 2/3) and (0, 1/2). Take the point (1/3, 1/6) ∈ P_F. Now

(1/3, 1/6) = (1/3)(0, 0) + (1/3)(1, 0) + (1/3)(0, 1/2)

or

(1/3, 1/6) = (1/2)(0, 0) + (1/4)(1, 0) + (1/4)(1/3, 2/3).

Thus we gather the additional information that a point of P_F may have different clc representations in terms of its vertices.
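The two representations in Example 4 can be checked with exact rational arithmetic. A sketch (the sign −x_1 + 2x_2 ≤ 1 in the second constraint is an assumption recovered from the vertex (1/3, 2/3); the printed text is ambiguous there):

```python
from fractions import Fraction as F

# Vertices of P_F = {(x1, x2) : x1 + x2 <= 1, -x1 + 2*x2 <= 1, x1, x2 >= 0},
# with the sign of the second constraint assumed as noted above.
V = {"O": (F(0), F(0)), "A": (F(1), F(0)),
     "B": (F(1, 3), F(2, 3)), "C": (F(0), F(1, 2))}

def clc(weights):
    """Convex linear combination sum(alpha_i * V_i) with sum(alpha_i) = 1."""
    assert sum(weights.values()) == 1 and all(a >= 0 for a in weights.values())
    x1 = sum(a * V[k][0] for k, a in weights.items())
    x2 = sum(a * V[k][1] for k, a in weights.items())
    return (x1, x2)

# Two different convex combinations producing the same point (1/3, 1/6):
p1 = clc({"O": F(1, 3), "A": F(1, 3), "C": F(1, 3)})
p2 = clc({"O": F(1, 2), "A": F(1, 4), "B": F(1, 4)})
print(p1 == p2 == (F(1, 3), F(1, 6)))   # True
```

Using fractions.Fraction avoids any floating-point doubt: both weightings reproduce (1/3, 1/6) exactly.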

2.3 Basic Theorem of Linear Programming

Theorem 11. The maximum of the objective function f(X) of an LPP occurs at at least one vertex of P_F, provided P_F is bounded.

Proof. Given that the LPP is a maximization problem, suppose the maximum of f(X) occurs at some point X_0 in the feasible region P_F. Thus f(X) ≤ f(X_0) for all X ∈ P_F. We show that this value f(X_0) also occurs at some vertex of P_F.

Since P_F is bounded, closed and convex and the problem is an LPP, P_F contains a finite number of vertices X_1, X_2, ..., X_n. Hence

f(X_i) ≤ f(X_0),   i = 1, 2, ..., n.   (2.5)

By Corollary 2, X_0 ∈ P_F can be written as a clc of the vertices, i.e.,

X_0 = α_1 X_1 + α_2 X_2 + ... + α_n X_n,   α_i ≥ 0,   Σ_{i=1}^{n} α_i = 1.

Using the linearity of f, we have

f(X_0) = α_1 f(X_1) + α_2 f(X_2) + ... + α_n f(X_n).

Let

f(X_k) = max{f(X_1), f(X_2), ..., f(X_n)},

where f(X_k) is one of the values f(X_1), f(X_2), ..., f(X_n). Then

f(X_0) ≤ α_1 f(X_k) + α_2 f(X_k) + ... + α_n f(X_k) = f(X_k).   (2.6)

Combining (2.5) and (2.6), we have f(X_0) = f(X_k). This implies that the maximum value f(X_0) is attained at the vertex X_k, and hence the result.

The minimization case can be treated along parallel lines by reversing the inequalities; note that in the minimization case we define f(X_k) = min{f(X_1), f(X_2), ..., f(X_n)}. Thus we have proved that the optimum of an LPP occurs at some vertex of P_F, provided P_F is bounded.

Remark. Theorem 11 does not rule out the possibility of an optimal solution at a point which is not a vertex. It simply says that among all optimal solutions to an LPP, at least one of them is a vertex. The following theorem further strengthens Theorem 11.

Theorem 12. In an LPP, if the objective function f(X) attains its maximum at an interior point of P_F, then f is constant on P_F, provided P_F is bounded.

Proof. Given that the problem is maximization, let X_0 be an interior point of P_F where the maximum occurs, i.e., f(X) ≤ f(X_0) for all X ∈ P_F. Assume, on the contrary, that f is not constant. Then there exists X_1 ∈ P_F such that f(X_1) ≠ f(X_0), i.e., f(X_1) < f(X_0). Since P_F is a nonempty bounded closed convex set and X_0 is an interior point, the segment from X_1 through X_0 can be extended slightly to a point X_2 ∈ P_F, so that

X_0 = αX_1 + (1 − α)X_2,   0 < α < 1.

Using the linearity of f, we get

f(X_0) = αf(X_1) + (1 − α)f(X_2) < αf(X_0) + (1 − α)f(X_2),

which gives (1 − α)f(X_0) < (1 − α)f(X_2), i.e., f(X_0) < f(X_2). This contradicts the maximality of f at X_0, and hence the theorem.

2.4 Graphical Method

This method is convenient for problems in two variables. By Theorem 11, the optimum value of the objective function occurs at one of the vertices of P_F. We exploit this result to find an optimal solution of such an LPP. First, sketch the feasible region and identify its vertices. Then compute the value of the objective function at each vertex; the largest of these values is the optimal value of the objective function, and the vertex at which this largest value occurs is an optimal solution. For a minimization problem we take the smallest value instead.

Example 5. Solve the following LPP by the graphical method:

max z = x_1 + 5x_2
s.t. −x_1 + 3x_2 ≤ 10
     x_1 + x_2 ≤ 6
     x_1 − x_2 ≤ 2
     x_1, x_2 ≥ 0.
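The vertex hunt behind the graphical method can also be done mechanically: intersect the constraint boundaries pairwise, discard infeasible intersection points, and evaluate z at the survivors. A sketch for Example 5 (the sign −x_1 + 3x_2 ≤ 10 in the first constraint is an assumed reading, chosen to be consistent with the vertices (2, 4) and (0, 10/3) of the worked solution):

```python
from itertools import combinations

# Boundary lines of Example 5 written as a*x1 + b*x2 = c.
lines = [(-1.0, 3.0, 10.0),   # -x1 + 3*x2 = 10
         ( 1.0, 1.0,  6.0),   #  x1 +   x2 = 6
         ( 1.0,-1.0,  2.0),   #  x1 -   x2 = 2
         ( 1.0, 0.0,  0.0),   #  x1 = 0
         ( 0.0, 1.0,  0.0)]   #  x2 = 0
EPS = 1e-9

def feasible(x1, x2):
    return (-x1 + 3*x2 <= 10 + EPS and x1 + x2 <= 6 + EPS
            and x1 - x2 <= 2 + EPS and x1 >= -EPS and x2 >= -EPS)

vertices = set()
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < EPS:
        continue                          # parallel boundaries: no intersection
    x1 = (c1 * b2 - c2 * b1) / det        # Cramer's rule for the 2x2 system
    x2 = (a1 * c2 - a2 * c1) / det
    if feasible(x1, x2):
        vertices.add((round(x1, 9), round(x2, 9)))

z = {v: v[0] + 5 * v[1] for v in vertices}    # objective z = x1 + 5*x2
best = max(z, key=z.get)
print(sorted(vertices))
print(best, z[best])                           # (2.0, 4.0) 22.0
```

The five feasible intersection points agree with the vertices listed in the worked solution, and the largest objective value, 22, occurs at (2, 4).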

Rewrite each constraint in intercept form by dividing through by its right-hand side:

−x_1/10 + x_2/(10/3) ≤ 1,   x_1/6 + x_2/6 ≤ 1,   x_1/2 − x_2/2 ≤ 1,   x_1, x_2 ≥ 0.

Draw each constraint first by treating it as a linear equation, then use the inequality to decide on which side the feasible region lies. The feasible region and its vertices are shown in Fig. 2.6.

[Figure 2.6: the feasible region P_F bounded by the lines −x_1 + 3x_2 = 10, x_1 + x_2 = 6 and x_1 − x_2 = 2, with vertices (0, 0), (2, 0), (4, 2), (2, 4) and (0, 10/3).]

The vertices are (0, 0), (2, 0), (4, 2), (2, 4) and (0, 10/3). The values of the objective function computed at these points are

z = 0 at (0, 0)
z = 2 at (2, 0)
z = 14 at (4, 2)
z = 22 at (2, 4)
z = 50/3 at (0, 10/3)

Obviously, the maximum occurs at the vertex (2, 4) with maximum value 22. Hence the optimal solution is x_1 = 2, x_2 = 4, z = 22.

Example 6. A machine component requires a drill machine operation followed by welding and assembly into a larger subassembly. Two

versions of the component are produced: one for ordinary service and the other for heavy-duty operation. A single unit of the ordinary design requires 10 min of drill machine time, 5 min of seam welding, and 15 min of assembly; the profit per unit is $100. Each heavy-duty unit requires 5 min of drill machine time, 15 min of welding, and 5 min of assembly; the profit per unit is $150. The total capacity of the machine shop is 1500 min, that of the welding shop 1000 min, and that of assembly 2000 min. What is the optimum mix between ordinary service and heavy-duty components that maximizes the total profit?

Let x_1 and x_2 be the numbers of ordinary service and heavy-duty components, respectively. The LPP formulation is

max z = 100x_1 + 150x_2
s.t. 10x_1 + 5x_2 ≤ 1500
     5x_1 + 15x_2 ≤ 1000
     15x_1 + 5x_2 ≤ 2000
     x_1, x_2 ≥ 0 and integers.

Draw the feasible region by treating the constraints as in Example 5 and determine all the vertices. The vertices are (0, 0), (400/3, 0), (125, 25) and (0, 200/3). The optimal solution is at the vertex x_1 = 125, x_2 = 25, with maximum value z = 16250.

Problem Set 2

1. Which of the following sets are convex?
(a) {(x_1, x_2) : x_1 x_2 ≥ 1};
(b) {(x_1, x_2) : x_1^2 + x_2^2 < 1};
(c) {(x_1, x_2) : x_1^2 + x_2^2 ≤ 3};
(d) {(x_1, x_2) : 4x_1 ≥ x_2^2};
(e) {(x_1, x_2) : 0 < x_1^2 + x_2^2 ≤ 4};
(f) {(x_1, x_2) : x_2 = 5}.

2. Prove that a linear program with a bounded feasible region must have a bounded optimal value, and give a counterexample to show that the converse need not be true.

3. Prove that an arbitrary intersection of convex sets is convex.

4. Prove that the half-space {X ∈ R^n : a^T X ≤ α} is a closed convex set.

5. Show that convex sets in R^n satisfy the following properties.
(a) If S is a convex set and β is a real number, then the set βS = {βX : X ∈ S} is convex;
(b) If S_1 and S_2 are convex sets in R^n, then the set S_1 ± S_2 = {X_1 ± X_2 : X_1 ∈ S_1, X_2 ∈ S_2} is convex.

6. Prove that a point X_v in S is a vertex of S if and only if S \ {X_v} is convex.

7. Write the system x_1 + x_2 = 1, 2x_1 − 4x_3 = 5 as an equivalent system containing only three inequality constraints.

8. Define the convex hull of a set S as [S] = ⋂_{i∈Λ} A_i, where {A_i : i ∈ Λ} is the collection of all convex sets containing S. Show that this definition and the definition of the convex hull in Section 2.1 are equivalent.

9. Using the definition of the convex hull in Problem 8, show that [S] is the smallest convex set containing S.

10. Find the convex hull of each of the following sets:
(a) {(1, 1), (1, 2), (2, 0), (0, 1)};
(b) {(x_1, x_2) : x_1^2 + x_2^2 > 3};
(c) {(x_1, x_2) : x_1^2 + x_2^2 = 1};
(d) {(0, 0), (1, 0), (0, 1)}.

11. Prove that the set of convex linear combinations of a finite number of points is a closed convex set.
Suggestion. For convexity, see Theorem 5.

12. Consider the following constraints of an LPP:

x_1 + 2x_2 + 2x_3 + x_4 + x_5 = 6
x_2 − 2x_3 + x_4 + x_5 = 3

Identify (a) all nondegenerate basic feasible solutions; (b) all degenerate basic infeasible solutions; (c) an infinity of solutions.
Suggestion. Here x_1, x_2, x_3, x_4, x_5 are unrestricted, and hence every basic solution will be a basic feasible solution.

13. Use the resolution theorem to prove the following generalization of Theorem 11: for a consistent LP in its standard form with a feasible region P_F, the maximum objective value of z = C^T X over P_F is either unbounded or is achieved at at least one vertex of P_F.

14. Prove Theorems 11 and 12 for the minimization case.

15. Prove that if the optimal value of an LPP occurs at more than one vertex of P_F, then it also occurs at every clc of these vertices.

16. In the preceding problem, state whether a point other than a vertex at which an optimal solution exists is a basic solution of the LPP.

17. Show that the set of all optimal solutions of an LPP is a closed convex set.

18. Consider the system AX = b, X ≥ 0, b ≥ 0 (with m equations and n unknowns). Let X be a basic feasible solution with p < m positive components. How many different bases can correspond to X owing to degeneracy in the system?

19. In view of Problem 18, the BFS (0, 2, 0, 0) of Example 3 has one more different basis. Find this basis.

20. Write down a solution of the constraint equations in Example 3 which is neither basic nor feasible.

21. Let X_0 be an optimal solution of the LPP min z = C^T X, subject to AX = b, X ≥ 0 in standard form, and let X̄ be any optimal solution when C is replaced by C̄. Prove that (C̄ − C)^T (X̄ − X_0) ≤ 0.

22. To justify the graphical method, prove that the intersection of the feasible domain P_F with the supporting hyperplane whose normal is given by the negative cost vector −C provides the optimal solution to a given linear programming problem.

23. Find the solutions of the following linear programming problems using the graphical method:

(a) min z = x_1 + 2x_2
    s.t. −x_1 + 3x_2 ≤ 10
         x_1 + x_2 ≤ 6
         x_1 − x_2 ≤ 2
         x_1, x_2 ≥ 0

(b) max z = 3x_1 + 4x_2
    s.t. x_1 − 2x_2 ≤ 1
         −x_1 + 2x_2 ≤ 0
         x_1, x_2 ≥ 0

24. Prove that a feasible region given by n variables and n − 2 nonredundant equality constraints can be represented by a two-dimensional graph.

25. Consider the problem

max z = 3x_1 − x_2 + x_3 + x_4 + x_5
s.t. 3x_1 + x_2 + x_3 + x_4 + x_5 = 10
     x_1 + x_3 + 2x_4 + x_5 = 15
     2x_1 + 3x_2 + x_3 + x_4 + 2x_5 = 12
     x_1, x_2, x_3, x_4, x_5 ≥ 0

Using Problem 24, write this as a two-dimensional LPP and then find its optimal solution by the graphical method.
Suggestion. Pivot at x_3, x_4, x_5.

26. Consider the LPP:

max z = x_1 + 3x_2
s.t. x_1 + x_2 + x_3 = 2
     x_1 + x_2 + x_4 = 4
     x_2, x_3, x_4 ≥ 0

(a) Determine all the basic feasible solutions of the problem;
(b) Express the problem in the two-dimensional plane;

(c) Show that the optimal solutions obtained using (a) and (b) are the same;
(d) Is there any basic infeasible solution? If yes, find it.

27. What difficulty arises if all the constraints are taken as strict inequalities?

28. Show that by properly choosing the c_i's in the objective function of an LPP, every vertex can be made optimal.

29. Let X be a basic solution of the system AX = b having both positive and negative variables. How can X be reduced to a BFS?


### Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

### Lecture 1: Systems of Linear Equations

MTH Elementary Matrix Algebra Professor Chao Huang Department of Mathematics and Statistics Wright State University Lecture 1 Systems of Linear Equations ² Systems of two linear equations with two variables

### MAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A =

MAT 200, Midterm Exam Solution. (0 points total) a. (5 points) Compute the determinant of the matrix 2 2 0 A = 0 3 0 3 0 Answer: det A = 3. The most efficient way is to develop the determinant along the

### MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

### TOPIC 4: DERIVATIVES

TOPIC 4: DERIVATIVES 1. The derivative of a function. Differentiation rules 1.1. The slope of a curve. The slope of a curve at a point P is a measure of the steepness of the curve. If Q is a point on the

### Vector and Matrix Norms

Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

### THE DIMENSION OF A VECTOR SPACE

THE DIMENSION OF A VECTOR SPACE KEITH CONRAD This handout is a supplementary discussion leading up to the definition of dimension and some of its basic properties. Let V be a vector space over a field

### LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

### Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS

Sensitivity Analysis 3 We have already been introduced to sensitivity analysis in Chapter via the geometry of a simple example. We saw that the values of the decision variables and those of the slack and

### Math 215 HW #1 Solutions

Math 25 HW # Solutions. Problem.2.3. Describe the intersection of the three planes u+v+w+z = 6 and u+w+z = 4 and u + w = 2 (all in four-dimensional space). Is it a line or a point or an empty set? What

### Similarity and Diagonalization. Similar Matrices

MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

### 2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system

1. Systems of linear equations We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x 5y + 2z + w = 3. The key thing is that we don t multiply the variables

### 1 Sets and Set Notation.

LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

### Approximation Algorithms

Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NP-Completeness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms

### Notes V General Equilibrium: Positive Theory. 1 Walrasian Equilibrium and Excess Demand

Notes V General Equilibrium: Positive Theory In this lecture we go on considering a general equilibrium model of a private ownership economy. In contrast to the Notes IV, we focus on positive issues such

### 1. Prove that the empty set is a subset of every set.

1. Prove that the empty set is a subset of every set. Basic Topology Written by Men-Gen Tsai email: b89902089@ntu.edu.tw Proof: For any element x of the empty set, x is also an element of every set since

### Lecture 6. Inverse of Matrix

Lecture 6 Inverse of Matrix Recall that any linear system can be written as a matrix equation In one dimension case, ie, A is 1 1, then can be easily solved as A x b Ax b x b A 1 A b A 1 b provided that