MATTHIAS GERDTS. LINEAR PROGRAMMING MSM 2Da (G05a,M09a)



Address of the Author:

Matthias Gerdts
Management Mathematics Group
School of Mathematics
University of Birmingham
Edgbaston
Birmingham B15 2TT
WWW: web.mat.bham.ac.uk/m.gerdts

Preliminary Version: December 4, 2008
Copyright © 2008 by Matthias Gerdts

Contents

1 Introduction
  1.1 Examples
  1.2 Transformation to canonical and standard form
  1.3 Graphical Solution of LPs
  1.4 The Fundamental Theorem of Linear Programming
  1.5 Software for LPs
2 The Simplex Method
  2.1 Vertices and Basic Solutions
  2.2 The primal Simplex Method
    Computing a neighbouring feasible basic solution
    The Algorithm
    The Simplex Table
  2.3 Phase 1 of the Simplex Method
  2.4 Computational Issues
    Finite Termination of the Simplex Method
    Complexity of the Simplex Method
  2.5 The Revised Simplex Method
3 Duality and Sensitivity
  3.1 The Dual Linear Program
  3.2 Weak and Strong Duality
  3.3 Dual Simplex Method
  3.4 Sensitivities and Shadow Prices
Bibliography 81

Lecture plan

Lectures: twenty-two sessions, 8:00-9:50, with the two class tests (CT) held in two of these slots.
Examples classes: five sessions, 13:00-14:50.
Class tests: two sessions, 8:00-9:50.
[The date, hours, and pages/questions columns of the original timetable are not recoverable from this transcription.]

Chapter 1
Introduction

This course is about linear optimisation, which is also known as linear programming. Linear programming is an important field of optimisation. Many practical problems in operations research can be expressed as linear programming problems, and a number of algorithms for other types of optimisation problems work by solving linear programming problems as sub-problems. Historically, ideas from linear programming have inspired many of the central concepts of optimisation theory, such as duality, decomposition, and the importance of convexity and its generalisations. Linear programming is also heavily used in business management, either to maximise income or to minimise the costs of a production scheme.

What is optimisation about? Optimisation plays a major role in many applications from economics, operations research, industry, engineering sciences, and natural sciences. In the famous book of Nocedal and Wright [NW99] we find the following statement:

People optimise: Airline companies schedule crews and aircraft to minimise cost. Investors seek to create portfolios that avoid risks while achieving a high rate of return. Manufacturers aim for maximising efficiency in the design and operation of their production processes.

Nature optimises: Physical systems tend to a state of minimum energy. The molecules in an isolated chemical system react with each other until the total potential energy of their electrons is minimised. Rays of light follow paths that minimise their travel time.

General optimisation problem: For a given function f : ℝⁿ → ℝ and a given set M ⊆ ℝⁿ, find x̂ ∈ M which minimises f on M, that is, x̂ satisfies

  f(x̂) ≤ f(x) for every x ∈ M.

Terminology: f is called objective function.

M is called feasible set or admissible set. x̂ is called minimiser of f on M. Any x ∈ M is called feasible or admissible. The components x_i, i = 1, …, n, of the vector x are called optimisation variables. For brevity we likewise call the vector x the optimisation variable or simply the variable.

Remarks: Without loss of generality it suffices to consider minimisation problems, for if the task were to maximise f, an equivalent minimisation problem is given by minimising −f. [Exercise: Define what "equivalent" means in this context!] The function f is often associated with costs. The set M is often associated with resources, e.g. available budget, available material in a production, etc. The above general optimisation problem is much too general to solve, and one needs to specify f and M. In this course we exclusively deal with linear optimisation problems.

What is a linear function?

Definition 1.0.1 (linear function) A function f : ℝⁿ → ℝᵐ is called linear if and only if

  f(x + y) = f(x) + f(y),   f(cx) = c f(x)

hold for every x, y ∈ ℝⁿ and every c ∈ ℝ.

Linear vs. nonlinear functions:

Example 1.0.2 The functions

  f_1(x) = x,
  f_2(x) = 3x_1 − 5x_2 for x = (x_1, x_2)ᵀ,
  f_3(x) = cᵀx, x, c ∈ ℝⁿ,
  f_4(x) = Ax, x ∈ ℝⁿ, A ∈ ℝ^(m×n),

are linear [exercise: prove it!]. The functions

  f_5(x) = 1,
  f_6(x) = x + 1, x ∈ ℝ,
  f_7(x) = Ax + b, x ∈ ℝⁿ, A ∈ ℝ^(m×n), b ∈ ℝᵐ, b ≠ 0,
  f_8(x) = x²,
  f_9(x) = sin(x),
  f_10(x) = a_n xⁿ + a_(n−1) x^(n−1) + … + a_1 x + a_0, a_i ∈ ℝ, where not all of a_n, …, a_2, a_0 vanish,

are not linear [exercise: prove it!].

Fact: Every linear function f : ℝⁿ → ℝᵐ can be expressed in the form f(x) = Ax with some matrix A ∈ ℝ^(m×n).

Notation: Throughout this course we use the following notation: x ∈ ℝⁿ is the column vector with components x_1, x_2, …, x_n ∈ ℝ. By xᵀ we mean the row vector (x_1, x_2, …, x_n). This is called the transposed vector of x. A ∈ ℝ^(m×n) is the matrix with entries a_ij, 1 ≤ i ≤ m, 1 ≤ j ≤ n. By Aᵀ ∈ ℝ^(n×m) we mean the transposed matrix of A, i.e. the matrix with entries (Aᵀ)_ji = a_ij.

This notation allows us to characterise linear optimisation problems. A linear optimisation problem is a general optimisation problem in which f is a linear function and the set M is defined by the intersection of finitely many linear equality or inequality constraints. A

linear program in its most general form has the following structure:

Definition 1.0.3 (Linear Program (LP)) Minimise

  cᵀx = c_1 x_1 + … + c_n x_n = Σ_(i=1..n) c_i x_i

subject to the constraints

  a_i1 x_1 + a_i2 x_2 + … + a_in x_n { ≤ / = / ≥ } b_i,  i = 1, …, m.

In addition, some or all components x_i, i ∈ I, where I ⊆ {1, …, n} is an index set, may be restricted by sign conditions according to

  x_i { ≥ 0 / ≤ 0 },  i ∈ I.

Herein, the quantities

  c = (c_1, …, c_n)ᵀ ∈ ℝⁿ,  b = (b_1, …, b_m)ᵀ ∈ ℝᵐ,  A = (a_ij) ∈ ℝ^(m×n)

are given data.

Remark 1.0.4 Without loss of generality we restrict the discussion to minimisation problems. Notice that every maximisation problem can be transformed easily into an equivalent minimisation problem by multiplying the objective function by −1, i.e. maximisation of cᵀx is equivalent to minimisation of −cᵀx and vice versa. The relations ≤, ≥, and = are applied component-wise to vectors, i.e. given two vectors x = (x_1, …, x_n)ᵀ and y = (y_1, …, y_n)ᵀ we have

  x ≤ y  ⟺  x_i ≤ y_i, i = 1, …, n.

Variables x_i, i ∉ I, are called free variables, as these variables may assume any real value.
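The component-wise reading of ≤, ≥, and = for vectors matches how numerical libraries compare arrays; a small sketch (NumPy assumed):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 5.0, 2.0])

# x <= y is evaluated component-wise and yields a boolean vector.
print(x <= y)                 # [ True  True False]
# The vector relation "x <= y" holds only if every component satisfies it.
print(bool(np.all(x <= y)))   # False
```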

Example 1.0.5 The LP

  Minimise 5x_1 + x_2 + 2x_3 + 2x_4 subject to

  x_2 − 4x_3 ≤ 5
  2x_1 + x_2 − x_3 − x_4 ≥ 3
  x_1 + 3x_2 − 4x_3 − x_4 = 1
  x_1, x_2, x_4 ≥ 0

is a general LP with I = {1, 2, 4} and

  x = (x_1, x_2, x_3, x_4)ᵀ,  c = (5, 1, 2, 2)ᵀ,  b = (5, 3, 1)ᵀ,  A = [[0, 1, −4, 0], [2, 1, −1, −1], [1, 3, −4, −1]].

The variable x_3 is free.

The following questions arising in the context of linear programming will be investigated throughout this course: Existence of feasible points? Existence of optimal solutions? Uniqueness of an optimal solution? How does an optimal solution depend on the problem data? Which algorithms can be used to compute an optimal solution? Which properties do these algorithms possess (finiteness, complexity)?

Beyond linear programming...

[Diagram: optimisation and its subfields: linear programming, integer programming, nonlinear programming, games, optimal control]

Typical applications (there are many more!):

nonlinear programming: portfolio optimisation, shape optimisation, parameter identification, topology optimisation, classification
linear programming: allocation of resources, transportation problems, transshipment problems, maximum flows
integer programming: assignment problems, travelling salesman problems, VLSI design, matchings, scheduling, shortest paths, telecommunication networks, public transportation networks
games: economical behaviour, equilibrium problems, electricity markets
optimal control: path planning for robots, cars, flight systems, crystal growth, flows, cooling of steel, economical strategies, inventory problems

1.1 Examples

Before defining linear programs precisely, we will discuss some typical examples.

Example 1.1.1 (Maximisation of Profit) A farmer intends to plant 40 acres with sugar beets and wheat. He can use 2400 pounds and 312 working days to achieve this. For each acre his cultivation costs amount to 40 pounds for sugar beets and to 120 pounds for wheat. For sugar beets he needs 6 working days per acre and for wheat 12 working days per acre. The profit amounts to 100 pounds per acre for sugar beets and to 250 pounds per acre for wheat. Of course, the farmer wants to maximise his profit.

Mathematical formulation: Let x_1 denote the acreage which is used for sugar beets and x_2 that for wheat. Then the profit amounts to

  f(x_1, x_2) = 100 x_1 + 250 x_2.

The following restrictions apply:

  maximum size: x_1 + x_2 ≤ 40
  money: 40 x_1 + 120 x_2 ≤ 2400
  working days: 6 x_1 + 12 x_2 ≤ 312
  no negative acreage: x_1, x_2 ≥ 0

In matrix notation we obtain

  max cᵀx s.t. Ax ≤ b, x ≥ 0,

where

  x = (x_1, x_2)ᵀ,  c = (100, 250)ᵀ,  b = (40, 2400, 312)ᵀ,  A = [[1, 1], [40, 120], [6, 12]].

Example 1.1.2 (The Diet Problem, by G. J. Stigler, 1940s) A human being needs vitamins S1, S2, and S3 for healthy living. Currently, only 4 medications A1, A2, A3, A4 contain these substances:

        S1   S2   S3   cost
  A1     3    1    5    15
  A2     5    0    3    20
  A3     2    1    5    12
  A4     1    2    3     9
  need
  per
  day   10    5   5.5

Task: Find a combination of medications that satisfies the need at minimal cost! Let x_i denote the amount of medication A_i, i = 1, 2, 3, 4. The following linear program solves the task:

  Minimise 15x_1 + 20x_2 + 12x_3 + 9x_4 subject to
  3x_1 + 5x_2 + 2x_3 + 1x_4 ≥ 10
  1x_1 + 1x_3 + 2x_4 ≥ 5
  5x_1 + 3x_2 + 5x_3 + 3x_4 ≥ 5.5
  x_1, x_2, x_3, x_4 ≥ 0

In matrix notation, we obtain

  min cᵀx s.t. Ax ≥ b, x ≥ 0,

where

  x = (x_1, x_2, x_3, x_4)ᵀ,  c = (15, 20, 12, 9)ᵀ,  b = (10, 5, 5.5)ᵀ,  A = [[3, 5, 2, 1], [1, 0, 1, 2], [5, 3, 5, 3]].

In the 1940s, nine people spent in total 120 days (!) on the numerical solution of a diet problem with 9 inequalities and 77 variables.

Example 1.1.3 (Transportation problem) A transport company has m stores and wants to deliver a product from these stores to n consumers. The delivery of one item of the product from store i to consumer j costs c_ij pounds. Store i has stored a_i items of the product. Consumer j has a demand of b_j items of the product. Of course, the company wants to satisfy the demand of all consumers. On the other hand, the company aims at minimising the delivery costs.

[Diagram: stores with stocks a_1, …, a_m delivering to consumers with demands b_1, …, b_n]

Let x_ij denote the amount of product which is delivered from store i to consumer j. In order to find the optimal transport plan, the company has to solve the following linear program:

  Minimise Σ_(i=1..m) Σ_(j=1..n) c_ij x_ij   (minimise delivery costs)
  s.t. Σ_(j=1..n) x_ij ≤ a_i, i = 1, …, m,   (can't deliver more than is in store)
       Σ_(i=1..m) x_ij ≥ b_j, j = 1, …, n,   (satisfy demand)
       x_ij ≥ 0, i = 1, …, m, j = 1, …, n.   (can't deliver a negative amount)
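To feed the transportation problem to a solver, the matrix of variables x_ij is flattened into one vector and the two families of constraints become rows of a single inequality system; a sketch with invented data (NumPy assumed):

```python
import numpy as np

# Hypothetical instance: m = 2 stores, n = 3 consumers (all numbers invented).
m, n = 2, 3
cost = np.array([[4.0, 6.0, 9.0],
                 [5.0, 3.0, 2.0]])      # c_ij
supply = np.array([30.0, 40.0])         # a_i
demand = np.array([20.0, 25.0, 15.0])   # b_j

# Flatten x_ij row-wise into a vector of length m*n and build A_ub x <= b_ub:
# supply rows:   sum_j x_ij <= a_i
# demand rows:  -sum_i x_ij <= -b_j   (">=" rewritten as "<=")
A_ub = np.zeros((m + n, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0    # supply constraint of store i
for j in range(n):
    A_ub[m + j, j::n] = -1.0            # demand constraint of consumer j
b_ub = np.concatenate([supply, -demand])
c = cost.ravel()

# Check a hand-made transport plan for feasibility and compute its cost.
x = np.array([[20.0, 10.0, 0.0],
              [0.0, 15.0, 15.0]]).ravel()
feasible = bool(np.all(A_ub @ x <= b_ub + 1e-9) and np.all(x >= 0))
print(feasible, float(c @ x))           # True 215.0
```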

Remark: In practical problems x_ij is often restricted in addition by the constraint x_ij ∈ ℕ₀ = {0, 1, 2, …}, i.e. x_ij may only assume integer values. This restriction makes the problem much more complicated, and standard methods like the simplex method cannot be applied anymore.

Example 1.1.4 (Network problem) A company intends to transport as many goods as possible from city A to city D on the road network below. The figure next to each edge in the network denotes the maximum capacity of the edge.

[Diagram: road network with nodes A, B, C, D and capacity labels on the edges]

How can we model this problem as a linear program? Let V = {A, B, C, D} denote the nodes of the network (they correspond to cities). Let E = {(A,B), (A,C), (B,C), (B,D), (C,D)} denote the edges of the network (they correspond to roads connecting two cities). For each edge (i,j) ∈ E let x_ij denote the actual amount of goods transported on edge (i,j) and u_ij the maximum capacity. The capacity constraints hold:

  x_ij ≤ u_ij, (i,j) ∈ E.

Moreover, it is reasonable to assume that no goods are lost and no goods are added in cities B and C, i.e. the conservation equations (outflow − inflow = 0) hold:

  x_BD + x_BC − x_AB = 0,
  x_CD − x_AC − x_BC = 0.

The task is to maximise the amount of goods leaving city A (note that this is the same amount that enters city D):

  x_AB + x_AC → max.

All examples are special cases of the general LP of Definition 1.0.3.

1.2 Transformation to canonical and standard form

The LP in its most general form is rather tedious for the construction of algorithms and the derivation of theoretical properties. Therefore, it is desirable to restrict the discussion to certain standard forms of LP into which any arbitrarily structured LP can be transformed.

In the sequel we will restrict the discussion to two types of LP: the canonical LP and the standard LP. We start with the canonical LP.

Definition 1.2.1 (Canonical Linear Program (LP_C)) Let c = (c_1, …, c_n)ᵀ ∈ ℝⁿ, b = (b_1, …, b_m)ᵀ ∈ ℝᵐ, and A = (a_ij) ∈ ℝ^(m×n) be given. The canonical linear program reads as follows: Find x = (x_1, …, x_n)ᵀ ∈ ℝⁿ such that the objective function

  Σ_(i=1..n) c_i x_i

becomes minimal subject to the constraints

  Σ_(j=1..n) a_ij x_j ≤ b_i,  i = 1, …, m,   (1.1)
  x_j ≥ 0,  j = 1, …, n.   (1.2)

In matrix notation:

  Minimise cᵀx subject to Ax ≤ b, x ≥ 0.   (1.3)

We need some notation.

Definition 1.2.2 (objective function, feasible set, feasible points, optimality)
(i) The function f(x) = cᵀx is called objective function.
(ii) The set M_C := {x ∈ ℝⁿ | Ax ≤ b, x ≥ 0} is called feasible or admissible set of LP_C.
(iii) A vector x ∈ M_C is called feasible or admissible for LP_C.
(iv) x̂ ∈ M_C is called optimal for LP_C if cᵀx̂ ≤ cᵀx for all x ∈ M_C.
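Membership in the feasible set M_C amounts to two component-wise checks; a small sketch with invented data (NumPy assumed):

```python
import numpy as np

def is_feasible(A, b, x, tol=1e-9):
    """Check x in M_C = {x | Ax <= b, x >= 0}, component-wise."""
    x = np.asarray(x, dtype=float)
    return bool(np.all(A @ x <= b + tol) and np.all(x >= -tol))

# Invented data: two constraints in two variables.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])

print(is_feasible(A, b, [1.0, 1.0]))   # True:  1+2 <= 4 and 3+1 <= 6
print(is_feasible(A, b, [2.0, 2.0]))   # False: 2+4 > 4
```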

We now discuss how an arbitrarily structured LP can be transformed into LP_C. The following cases may occur:

(i) Inequality constraints: The constraint

  Σ_(j=1..n) a_ij x_j ≥ b_i

can be transformed into a constraint of type (1.1) by multiplication with −1:

  Σ_(j=1..n) (−a_ij) x_j ≤ −b_i.

(ii) Equality constraints: The constraint

  Σ_(j=1..n) a_ij x_j = b_i

can be written equivalently as

  b_i ≤ Σ_(j=1..n) a_ij x_j ≤ b_i.

Hence, instead of one equality constraint we obtain two inequality constraints. Using (i) for the inequality on the left we finally obtain two inequality constraints of type (1.1):

  Σ_(j=1..n) a_ij x_j ≤ b_i,   Σ_(j=1..n) (−a_ij) x_j ≤ −b_i.

(iii) Free variables: Any real number x_i can be decomposed into x_i = x_i⁺ − x_i⁻ with nonnegative numbers x_i⁺ and x_i⁻. Notice that this decomposition is not unique, as for instance 6 = 6 − 0 = 13 − 7 = … . It would be unique if we postulated that either x_i⁺ or x_i⁻ be zero, but this postulation is not a linear constraint anymore, so we don't postulate it. In order to transform a given LP into canonical form, every occurrence of the free variable x_i in the LP has to be replaced by x_i⁺ − x_i⁻. The constraints x_i⁺ ≥ 0 and x_i⁻ ≥ 0 have to be added. Notice that instead of one variable x_i we now have two variables x_i⁺ and x_i⁻.

(iv) Maximisation problems: Maximising cᵀx is equivalent to minimising −cᵀx.

Example 1.2.3 Consider the linear program of maximising 1.2x_1 − 1.8x_2 − x_3 subject to three inequality constraints [their coefficients are garbled in this transcription], the equality constraint x_1 + x_2 + x_3 = 1, and the sign conditions x_2, x_3 ≥ 0. This linear program is to be transformed into canonical form (1.3). The variable x_1 is a free variable. Therefore, according to (iii) we replace it by

  x_1 := x_1⁺ − x_1⁻,  x_1⁺, x_1⁻ ≥ 0.

According to (ii) the equality constraint x_1 + x_2 + x_3 = 1 is replaced by

  x_1⁺ − x_1⁻ + x_2 + x_3 ≤ 1  and  −x_1⁺ + x_1⁻ − x_2 − x_3 ≤ −1.

According to (i), each inequality constraint of "≥" type is multiplied by −1, with x_1 replaced by x_1⁺ − x_1⁻ throughout. Finally, we obtain a minimisation problem by multiplying the objective function by −1. Summarising, we obtain a canonical linear program in the variables (x_1⁺, x_1⁻, x_2, x_3)ᵀ:

  Minimise (−1.2, 1.2, 1.8, 1)(x_1⁺, x_1⁻, x_2, x_3)ᵀ s.t. Â(x_1⁺, x_1⁻, x_2, x_3)ᵀ ≤ b̂, (x_1⁺, x_1⁻, x_2, x_3)ᵀ ≥ 0,

where Â and b̂ collect the transformed constraints [the numerical entries are garbled in this transcription].
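The transformation rules (i)-(iii) are mechanical and can be scripted; a sketch on a tiny invented instance (NumPy assumed; the function names are mine):

```python
import numpy as np

def equality_to_canonical(A_eq, b_eq):
    """Rule (ii): replace A x = b by the pair A x <= b and -A x <= -b."""
    return np.vstack([A_eq, -A_eq]), np.concatenate([b_eq, -b_eq])

def split_free_variables(A, c):
    """Rule (iii): replace every free x_i by x_i+ - x_i- (columns doubled)."""
    return np.hstack([A, -A]), np.concatenate([c, -c])

# Invented instance: minimise x_1 - x_2 s.t. x_1 + x_2 = 1, both variables free.
A_eq = np.array([[1.0, 1.0]])
b_eq = np.array([1.0])
c = np.array([1.0, -1.0])

A_in, b_in = equality_to_canonical(A_eq, b_eq)
A_can, c_can = split_free_variables(A_in, c)
print(A_can.shape)     # (2, 4): two inequalities in (x1+, x2+, x1-, x2-)

# A feasible point of the original problem maps to nonnegative split variables
# with the same objective value: x = (0.25, 0.75) -> (0.25, 0.75, 0, 0).
x = np.array([0.25, 0.75, 0.0, 0.0])
print(bool(np.all(A_can @ x <= b_in + 1e-9)), float(c_can @ x))   # True -0.5
```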

Theorem 1.2.4 Using the transformation techniques (i)-(iv), we can transform any LP into a canonical linear program (1.3).

The canonical form is particularly useful for visualising the constraints and solving the problem graphically. The feasible set M_C is an intersection of finitely many halfspaces and can be visualised easily for n = 2 and maybe n = 3. This will be done in the next section. But for the construction of solution methods, in particular for the simplex method, another notation is preferred: the standard linear program.

Definition 1.2.5 (Standard Linear Program (LP_S)) Let c = (c_1, …, c_n)ᵀ ∈ ℝⁿ, b = (b_1, …, b_m)ᵀ ∈ ℝᵐ, and A = (a_ij) ∈ ℝ^(m×n) be given. Let Rank(A) = m. The standard linear program reads as follows: Find x = (x_1, …, x_n)ᵀ ∈ ℝⁿ such that the objective function

  Σ_(i=1..n) c_i x_i

becomes minimal subject to the constraints

  Σ_(j=1..n) a_ij x_j = b_i,  i = 1, …, m,   (1.4)
  x_j ≥ 0,  j = 1, …, n.   (1.5)

In matrix notation:

  Minimise cᵀx subject to Ax = b, x ≥ 0.   (1.6)

Remark 1.2.6 The sole difference between LP_C and LP_S are the constraints (1.1) and (1.4), respectively. Don't be confused by the same notation: the quantities c, b, and A are not the same when comparing LP_C and LP_S.
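The rank assumption Rank(A) = m can be checked numerically before running any method; a quick sketch with an invented redundant system (NumPy assumed):

```python
import numpy as np

# Invented equality system whose third row is the sum of the first two,
# i.e. a linearly dependent (redundant) constraint.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])
b = np.array([4.0, 2.0, 6.0])

m = A.shape[0]
print(int(np.linalg.matrix_rank(A)), m)   # 2 3: rank < m, dependent rows

# One possible preprocessing step: keep a maximal independent subset of rows
# (found here by inspection; a QR or LU factorisation would do this in general).
keep = [0, 1]
A_red, b_red = A[keep], b[keep]
print(int(np.linalg.matrix_rank(A_red)) == A_red.shape[0])   # True
```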

The rank assumption Rank(A) = m is useful because it avoids linearly dependent constraints. Linearly dependent constraints can always be detected and eliminated in a preprocessing procedure. LP_S is only meaningful if m < n holds, because in the case m = n with A nonsingular, the equality constraint yields x = A⁻¹b. Hence x is completely defined and no degrees of freedom remain for optimising x. If m > n and Rank(A) = n, the linear equation Ax = b possesses at most one solution, and again no degrees of freedom are left for optimisation. Without stating it explicitly, we will assume m < n in the sequel whenever we discuss LP_S.

As in Definition 1.2.2 we define the feasible set of LP_S to be

  M_S := {x ∈ ℝⁿ | Ax = b, x ≥ 0}

and call x̂ ∈ M_S optimal for LP_S if

  cᵀx̂ ≤ cᵀx for all x ∈ M_S.

The feasible set M_S is the intersection of the affine subspace {x ∈ ℝⁿ | Ax = b} (e.g. a line or a plane) with the nonnegative orthant {x ∈ ℝⁿ | x ≥ 0}. The visualisation is more difficult than for the canonical linear program, and a graphical solution is rather impossible except for very simple cases.

Example 1.2.7 Consider the linear program

  max 9x_1 − 15x_2 s.t. x_1 + x_2 + x_3 = 3, x_1, x_2, x_3 ≥ 0.

The feasible set

  M_S = {(x_1, x_2, x_3)ᵀ ∈ ℝ³ | x_1 + x_2 + x_3 = 3, x_1, x_2, x_3 ≥ 0}

is a triangle in ℝ³ with vertices (3, 0, 0)ᵀ, (0, 3, 0)ᵀ, and (0, 0, 3)ᵀ.

[Figure: the triangle M_S in (x_1, x_2, x_3)-space]
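For the triangle example, membership in M_S and the candidate vertices are easy to verify numerically; a small sketch (NumPy assumed):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])

def in_MS(x, tol=1e-9):
    """x in M_S = {x | Ax = b, x >= 0}, checked component-wise."""
    x = np.asarray(x, dtype=float)
    return bool(np.all(np.abs(A @ x - b) <= tol) and np.all(x >= -tol))

vertices = [(3, 0, 0), (0, 3, 0), (0, 0, 3)]
print([in_MS(v) for v in vertices])        # [True, True, True]
print(in_MS((1, 1, 1)), in_MS((2, 2, 0)))  # True (interior point), False (sum is 4)
```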

We can transform a canonical linear program into an equivalent standard linear program and vice versa.

(i) Transformation of a canonical problem into a standard problem: Let a canonical linear program (1.3) be given. Define slack variables y = (y_1, …, y_m)ᵀ ∈ ℝᵐ by

  y_i := b_i − Σ_(j=1..n) a_ij x_j,  i = 1, …, m,

resp. y = b − Ax. It holds Ax ≤ b if and only if y ≥ 0. With

  x̂ := (x, y)ᵀ ∈ ℝ^(n+m),  ĉ := (c, 0)ᵀ ∈ ℝ^(n+m),  Â := (A | I) ∈ ℝ^(m×(n+m))

we obtain a standard linear program of type (1.6):

  Minimise ĉᵀx̂ s.t. Âx̂ = b, x̂ ≥ 0.   (1.7)

Notice that Rank(Â) = m holds automatically.

(ii) Transformation of a standard problem into a canonical problem: Let a standard linear program (1.6) be given. Rewrite the equality Ax = b as two inequalities Ax ≤ b and −Ax ≤ −b. With

  Â := (A; −A) ∈ ℝ^(2m×n),  b̂ := (b; −b) ∈ ℝ^(2m)

a canonical linear program of type (1.3) arises:

  Minimise cᵀx s.t. Âx ≤ b̂, x ≥ 0.   (1.8)

We obtain

Theorem 1.2.8 Canonical linear programs and standard linear programs are equivalent in the sense that
(i) x solves (1.3) if and only if x̂ = (x, b − Ax)ᵀ solves (1.7);
(ii) x solves (1.6) if and only if x solves (1.8).

Proof: We only show (i), as (ii) is obvious.

(⇒) Let x solve (1.3), i.e. it holds cᵀx ≤ cᵀz for all z with Az ≤ b, z ≥ 0. Let ẑ = (z, y)ᵀ be feasible for (1.7). Then it holds y = b − Az and y ≥ 0, and hence z satisfies Az ≤ b and z ≥ 0. Then

  ĉᵀx̂ = cᵀx ≤ cᵀz = ĉᵀẑ.

The assertion follows as ẑ was an arbitrary feasible point.

(⇐) Let x̂ = (x, b − Ax)ᵀ solve (1.7), i.e. ĉᵀx̂ ≤ ĉᵀẑ for all ẑ with Âẑ = b, ẑ ≥ 0. Let z be feasible for (1.3). Then ẑ = (z, b − Az)ᵀ is feasible for (1.7) and

  cᵀx = ĉᵀx̂ ≤ ĉᵀẑ = cᵀz.

The assertion follows as z was an arbitrary feasible point.

1.3 Graphical Solution of LP

We demonstrate the graphical solution of a canonical linear program in the 2-dimensional space.

Example 1.3.1 (Example 1.1.1 revisited) The farmer in Example 1.1.1 aims at solving the following canonical linear program:

  Minimise f(x_1, x_2) = −100x_1 − 250x_2 subject to
  x_1 + x_2 ≤ 40
  40x_1 + 120x_2 ≤ 2400
  6x_1 + 12x_2 ≤ 312
  x_1, x_2 ≥ 0

Define the linear functions

  g_1(x_1, x_2) := x_1 + x_2,  g_2(x_1, x_2) := 40x_1 + 120x_2,  g_3(x_1, x_2) := 6x_1 + 12x_2.

Let us investigate the first constraint g_1(x_1, x_2) = x_1 + x_2 ≤ 40, cf. Figure 1.1. The equation g_1(x_1, x_2) = x_1 + x_2 = 40 defines a straight line in ℝ², a so-called hyperplane

  H := {(x_1, x_2)ᵀ ∈ ℝ² | x_1 + x_2 = 40}.

The vector of coefficients n_g1 = (1, 1)ᵀ is the normal vector on the hyperplane H, and it points in the direction of increasing values of the function g_1. That is, moving a point on H in the direction of n_g1 leads to a point (x_1, x_2)ᵀ with g_1(x_1, x_2) > 40, while moving a point on H in the opposite direction −n_g1 leads to a point with g_1(x_1, x_2) < 40. Hence, H separates ℝ² into two halfspaces defined by

  H⁻ := {(x_1, x_2)ᵀ ∈ ℝ² | x_1 + x_2 ≤ 40},
  H⁺ := {(x_1, x_2)ᵀ ∈ ℝ² | x_1 + x_2 > 40}.

Obviously, points in H⁺ are not feasible, while points in H⁻ are feasible w.r.t. the constraint x_1 + x_2 ≤ 40.

[Figure 1.1: Geometry of the constraint g_1(x_1, x_2) = x_1 + x_2 ≤ 40: normal vector n_g1 = (1, 1)ᵀ (the length has been scaled), hyperplane H, and halfspaces H⁺ and H⁻. All points in H⁻ are feasible w.r.t. this constraint.]

The same discussion can be performed for the remaining constraints (don't forget the constraints x_1 ≥ 0 and x_2 ≥ 0!). The feasible set is the dark area in Figure 1.2, given by the intersection of the respective halfspaces H⁻ of the constraints.

[Figure 1.2: Feasible set of the linear program: intersection of the halfspaces for the constraints g_1 : x_1 + x_2 = 40, g_2 : 40x_1 + 120x_2 = 2400, g_3 : 6x_1 + 12x_2 = 312, and x_1 ≥ 0, x_2 ≥ 0, with normal vectors n_g1 = (1, 1)ᵀ, n_g2 = (40, 120)ᵀ, n_g3 = (6, 12)ᵀ (lengths scaled).]

We haven't considered the objective function so far. The red line in Figure 1.3 corresponds to the line f(x_1, x_2) = −100x_1 − 250x_2 = −3500, i.e. for all points on this line the objective function assumes the value −3500. Herein, the value −3500 is just an arbitrary guess of the optimal value. The green line corresponds to the line f(x_1, x_2) = −100x_1 − 250x_2 = −5500, i.e. for all points on this line the objective function assumes the value −5500. Obviously, the function values of f increase in the direction of the normal vector n_f = (−100, −250)ᵀ, which is nothing else than the gradient of f. As we intend to minimise f, we have to move the objective function line in the opposite direction −n_f (direction of descent!) as long as it does not completely leave the feasible set (the intersection of this line and the feasible set has to be nonempty). The feasible points on this extremal objective function line are then optimal. The green line is the optimal line, as there would be no feasible points on it if we moved it any further in the direction −n_f. According to the figure, we graphically determine the optimal solution to be x_1 = 30, x_2 = 10 with objective function value f(x_1, x_2) = −5500. The solution is attained in a vertex of the feasible set. In this example, the feasible vertices are given by the points (0, 0)ᵀ, (0, 20)ᵀ, (30, 10)ᵀ, and

(40, 0)ᵀ.

[Figure 1.3: Solving the LP graphically: move the objective function line in the opposite direction of n_f = (−100, −250)ᵀ as long as the objective function line intersects with the feasible set (dark area). The red line −100x_1 − 250x_2 = −3500 is a first guess; the green line −100x_1 − 250x_2 = −5500 is optimal. The optimal solution is (x_1, x_2)ᵀ = (30, 10)ᵀ.]

In general, each row

  a_i1 x_1 + a_i2 x_2 + … + a_in x_n ≤ b_i,  i ∈ {1, …, m},

of the constraint Ax ≤ b of the canonical linear program defines a hyperplane

  H = {x ∈ ℝⁿ | a_i1 x_1 + a_i2 x_2 + … + a_in x_n = b_i}

with normal vector (a_i1, …, a_in)ᵀ, a positive halfspace

  H⁺ = {x ∈ ℝⁿ | a_i1 x_1 + a_i2 x_2 + … + a_in x_n ≥ b_i},

and a negative halfspace

  H⁻ = {x ∈ ℝⁿ | a_i1 x_1 + a_i2 x_2 + … + a_in x_n ≤ b_i}.

Recall that the normal vector n is perpendicular to H and points into the positive halfspace H⁺, cf. Figure 1.4.

[Figure 1.4: Geometry of the inequality aᵀx ≤ b: normal vector a, hyperplane H = {x | aᵀx = b}, and halfspaces H⁺ = {x | aᵀx ≥ b} and H⁻ = {x | aᵀx ≤ b}.]

The feasible set M_C of the canonical linear program is given by the intersection of the respective negative halfspaces for the constraints Ax ≤ b and x ≥ 0. The intersection of a finite number of halfspaces is called a polyhedral set or polyhedron. A bounded polyhedron is called a polytope. Figure 1.5 depicts the different constellations that may occur in linear programming.

[Figure 1.5: Different geometries of the feasible set M_C (grey), objective function line (red), optimal solution (blue).]

Summarising, the graphical solution of an LP works as follows:

1. Sketch the feasible set.
2. Make a guess w ∈ ℝ of the optimal objective function value, or alternatively take an arbitrary feasible point x̂, introduce this point x̂ into the objective function, and compute w = cᵀx̂. Plot the straight line given by cᵀx = w.
3. Move the straight line in 2. in the direction −c as long as the intersection of the line with the feasible set is non-empty.
4. The most extreme line in 3. is optimal. All feasible points on this line are optimal. Their values can be found in the picture.

Observations: The feasible set M_C can be bounded (pictures on top) or unbounded (pictures at bottom). If the feasible set is bounded, there always exists an optimal solution by the Theorem of Weierstrass (M_C is compact and the objective function is continuous and thus assumes its minimum and maximum value on M_C).

The optimal solution can be unique (pictures on left) or non-unique (top-right picture). The bottom-left figure shows that an optimal solution may exist even if the feasible set is unbounded. The bottom-right figure shows that the LP might not have an optimal solution.

The following observation is the main motivation for the simplex method. We will see that this observation is always true!

Main observation: If an optimal solution exists, there is always a vertex of the feasible set among the optimal solutions.

1.4 The Fundamental Theorem of Linear Programming

The main observation suggests that vertices play an outstanding role in linear programming. While it is easy to identify vertices in a picture, it is more difficult to characterise a vertex mathematically. We will define a vertex for convex sets. We note

Theorem 1.4.1 The feasible set of a linear program is convex.

Herein, a set M ⊆ ℝⁿ is called convex if the entire line segment connecting two points in M belongs to M, i.e.

  x, y ∈ M  ⟹  λx + (1 − λ)y ∈ M for all 0 ≤ λ ≤ 1.

[Figure: a convex set and a non-convex set]

Proof: Without loss of generality the discussion is restricted to the standard LP. Let x, y ∈ M_S, 0 ≤ λ ≤ 1, and z = λx + (1 − λ)y. We have to show that z ∈ M_S holds. Since λ ∈ [0, 1] and x, y ≥ 0 it holds z ≥ 0. Moreover, it holds

  Az = λAx + (1 − λ)Ay = λb + (1 − λ)b = b.

Hence, z ∈ M_S.

Now we define a vertex of a convex set.

Definition 1.4.2 (Vertex) Let M be a convex set. x ∈ M is called a vertex of M if the representation x = λx_1 + (1 − λ)x_2 with 0 < λ < 1 and x_1, x_2 ∈ M implies x_1 = x_2 = x.

Remark 1.4.3 The set {λx_1 + (1 − λ)x_2 | 0 < λ < 1} is the straight line interconnecting the points x_1 and x_2 (x_1 and x_2 are not included). Hence, if x is a vertex it cannot be on this line unless x = x_1 = x_2.

According to the following theorem it suffices to visit only the vertices of the feasible set in order to find one (not every!) optimal solution. As any LP can be transformed into standard form, it suffices to restrict the discussion to standard LPs. Of course, the following result holds for any LP.

Theorem 1.4.4 (Fundamental Theorem of Linear Programming) Let a standard LP (1.6) be given with M_S ≠ ∅. Then:
(a) Either the objective function is unbounded from below on M_S, or the problem has an optimal solution and at least one vertex of M_S is among the optimal solutions.
(b) If M_S is bounded, then an optimal solution exists, and x ∈ M_S is optimal if and only if x is a convex combination of optimal vertices.

Proof: The proof exploits the following facts about polyhedra, which we don't prove here:
(i) If M_S ≠ ∅, then at least one vertex exists.
(ii) The number of vertices of M_S is finite.
(iii) Let x_1, …, x_N with N ∈ ℕ denote the vertices of M_S. For every x ∈ M_S there exist a vector d ∈ ℝⁿ and scalars λ_i, i = 1, …, N, such that

  x = Σ_(i=1..N) λ_i x_i + d  and  λ_i ≥ 0, i = 1, …, N,  Σ_(i=1..N) λ_i = 1,  d ≥ 0,  Ad = 0.

(iv) If M_S is bounded, then every x ∈ M_S can be expressed as

  x = Σ_(i=1..N) λ_i x_i,  λ_i ≥ 0, i = 1, …, N,  Σ_(i=1..N) λ_i = 1.

With these facts the assertions can be proven as follows.

(a) Case 1: If in (iii) there exists a d ∈ ℝⁿ with d ≥ 0, Ad = 0, and cᵀd < 0, then the objective function is unbounded from below, since for an x ∈ M_S the points x + td with t ≥ 0 are feasible as well, because A(x + td) = Ax + tAd = Ax = b and x + td ≥ 0. But cᵀ(x + td) = cᵀx + t cᵀd → −∞ for t → ∞, and thus the objective function is unbounded from below on M_S.

Case 2: Let cᵀd ≥ 0 hold for every d ∈ ℝⁿ with d ≥ 0 and Ad = 0. According to (i), (ii), and (iii), every point x ∈ M_S can be expressed as

  x = Σ_(i=1..N) λ_i x_i + d

with suitable λ_i ≥ 0, Σ_(i=1..N) λ_i = 1, d ≥ 0, Ad = 0. Let x be an optimal point, expressed in this way. Let x_j, j ∈ {1, …, N}, denote the vertex with the minimal objective function value, i.e. it holds cᵀx_j ≤ cᵀx_i for all i = 1, …, N. Then

  cᵀx = Σ_(i=1..N) λ_i cᵀx_i + cᵀd ≥ Σ_(i=1..N) λ_i cᵀx_i ≥ cᵀx_j Σ_(i=1..N) λ_i = cᵀx_j.

As x was supposed to be optimal, we find that x_j is optimal as well, which shows the assertion.

(b) Let M_S be bounded. Then the objective function is bounded from below on M_S according to the famous Theorem of Weierstrass. Hence the LP has at least one solution according to (a). Let x be an arbitrary optimal point. According to (iv), x can be expressed as a convex combination of the vertices x_i, i = 1, …, N, i.e.

  x = Σ_(i=1..N) λ_i x_i,  λ_i ≥ 0,  Σ_(i=1..N) λ_i = 1.

Let x_j, j ∈ {1, …, N}, be an optimal vertex, which exists according to (a), i.e.

  cᵀx = cᵀx_j = min_(i=1..N) cᵀx_i.

Then, using the expression for x, it follows

  cᵀx = Σ_(i=1..N) λ_i cᵀx_i = cᵀx_j = min_(i=1..N) cᵀx_i.

Now let x_k, k ∈ {1, …, N}, be a non-optimal vertex with cᵀx_k = cᵀx + ε, ε > 0. Then

  cᵀx = Σ_(i=1..N) λ_i cᵀx_i = Σ_(i≠k) λ_i cᵀx_i + λ_k cᵀx_k ≥ cᵀx Σ_(i≠k) λ_i + λ_k (cᵀx + ε) = cᵀx + λ_k ε.

As ε > 0, this implies λ_k = 0, which shows the first part of the assertion. If, conversely, x is a convex combination of optimal vertices, i.e.

  x = Σ_(i=1..N) λ_i x_i,  λ_i ≥ 0,  Σ_(i=1..N) λ_i = 1,

then due to the linearity of the objective function and the convexity of the feasible set it is easy to show that x is optimal. This completes the proof.

1.5 Software for LPs

Matlab provides the command linprog for the solution of linear programs. Scilab is a powerful platform for scientific computing. It provides the command linpro for the solution of linear programs. There are many commercial and non-commercial implementations of algorithms for linear programming on the web. For instance, the company Lindo Systems Inc. develops software for the solution of linear, integer, nonlinear and quadratic programs. On their webpage it is possible to download a free trial version of the software LINDO. The program GeoGebra is a nice geometry program that can be used to visualise feasible sets.
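Python's SciPy offers functionality analogous to Matlab's linprog; a minimal sketch on a small invented canonical LP (SciPy assumed to be installed):

```python
import numpy as np
from scipy.optimize import linprog

# SciPy's linprog solves min c^T x  s.t.  A_ub x <= b_ub, with x >= 0 as the
# default variable bounds, i.e. essentially the canonical form (1.3).
# Invented instance: maximise x1 + 2 x2, hence minimise -x1 - 2 x2.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)   # optimum at the vertex (3, 1) with value -5
```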

Chapter 2
The Simplex Method

A real breakthrough in linear programming was the invention of the simplex method by G. B. Dantzig in 1947.

[Photo: George Bernard Dantzig. Born 1914 in Portland, Oregon; died 2005 in Palo Alto, California.]

The simplex method is one of the most important and popular algorithms for solving linear programs on a computer. The very basic idea of the simplex method is to move from a feasible vertex to a neighbouring feasible vertex and to repeat this procedure until an optimal vertex is reached. According to Theorem 1.4.4 it is sufficient to consider only the vertices of the feasible set. In order to construct the simplex algorithm, we need to answer the following questions: How can feasible vertices be computed? Given a feasible vertex, how can we compute a neighbouring feasible vertex? How do we check for optimality?

In this chapter we will restrict the discussion to LPs in standard form 1.2.5, i.e.

  min cᵀx s.t. Ax = b, x ≥ 0.

Throughout the rest of the chapter we assume Rank(A) = m. This assumption excludes linearly dependent constraints from the problem formulation.

Notation:

The rows of $A$ are denoted by $a_i := (a_{i1},\ldots,a_{in})^\top \in \mathbb{R}^n$, $i = 1,\ldots,m$, i.e.
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} = \begin{pmatrix} a_1^\top \\ a_2^\top \\ \vdots \\ a_m^\top \end{pmatrix}.$$
The columns of $A$ are denoted by $a^j := (a_{1j},\ldots,a_{mj})^\top \in \mathbb{R}^m$, $j = 1,\ldots,n$, i.e.
$$A = \begin{pmatrix} a^1 & a^2 & \cdots & a^n \end{pmatrix}.$$

2.1 Vertices and Basic Solutions

The geometric definition of a vertex is not useful if a vertex has to be computed explicitly within an algorithm. Hence, we are looking for an alternative characterisation of a vertex which allows us to actually compute a vertex. For this purpose the standard LP is particularly well suited. Consider the feasible set of the standard LP 1.2.5:
$$M_S = \{x \in \mathbb{R}^n \mid Ax = b,\ x \ge 0\}.$$
Let $x = (x_1,\ldots,x_n)^\top$ be an arbitrary point in $M_S$. Let
$$B := \{i \in \{1,\ldots,n\} \mid x_i > 0\}$$
be the index set of positive components of $x$, and
$$N := \{1,\ldots,n\} \setminus B = \{i \in \{1,\ldots,n\} \mid x_i = 0\}$$
the index set of vanishing components of $x$. Because of $x \in M_S$ it holds
$$b = Ax = \sum_{j=1}^n a^j x_j = \sum_{j \in B} a^j x_j.$$
This is a linear equation for the components $x_j$, $j \in B$. This linear equation has a unique solution if the column vectors $a^j$, $j \in B$, are linearly independent. In this case, we call $x$ a feasible basic solution.

Definition (feasible basic solution)
Let $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, and $M_S = \{x \in \mathbb{R}^n \mid Ax = b,\ x \ge 0\}$. A point $x \in M_S$ is called a feasible basic solution for the standard LP if the column vectors $\{a^j \mid j \in B\}$ are linearly independent, where $B = \{i \in \{1,\ldots,n\} \mid x_i > 0\}$.

The following theorem states that a feasible basic solution is a vertex of the feasible set and vice versa.

Theorem
$x \in M_S$ is a vertex of $M_S$ if and only if $x$ is a feasible basic solution.

Proof: Recall that the feasible set $M_S$ is convex.

$(\Rightarrow)$ Let $x$ be a vertex. Without loss of generality let $x = (x_1,\ldots,x_r,0,\ldots,0)^\top \in \mathbb{R}^n$ with $x_i > 0$ for $i = 1,\ldots,r$. We assume that $a^j$, $j = 1,\ldots,r$, are linearly dependent. Then there exist $\alpha_j$, $j = 1,\ldots,r$, not all of them zero, with $\sum_{j=1}^r \alpha_j a^j = 0$. Define
$$y^+ = (x_1 + \varepsilon\alpha_1,\ldots,x_r + \varepsilon\alpha_r,0,\ldots,0)^\top, \qquad y^- = (x_1 - \varepsilon\alpha_1,\ldots,x_r - \varepsilon\alpha_r,0,\ldots,0)^\top$$
with
$$0 < \varepsilon < \min\left\{ \frac{x_j}{|\alpha_j|} \,\middle|\, \alpha_j \ne 0,\ 1 \le j \le r \right\}.$$
According to this choice of $\varepsilon$ it follows $y^+ \ge 0$, $y^- \ge 0$. Moreover, $y^+$ and $y^-$ are feasible because
$$Ay^{\pm} = \sum_{j=1}^r (x_j \pm \varepsilon\alpha_j)\, a^j = \sum_{j=1}^r x_j a^j \pm \varepsilon \underbrace{\sum_{j=1}^r \alpha_j a^j}_{=0 \text{ (lin. dep.)}} = b.$$
Hence, $y^+, y^- \in M_S$, but $(y^+ + y^-)/2 = x$. This contradicts the assumption that $x$ is a vertex.

$(\Leftarrow)$ Without loss of generality let $x = (x_1,\ldots,x_r,0,\ldots,0)^\top \in M_S$ with $x_j > 0$ and $a^j$ linearly independent for $j = 1,\ldots,r$. Let $y, z \in M_S$ and $x = \lambda y + (1-\lambda)z$, $0 < \lambda < 1$. Then,

(i) $x_j > 0$ for $1 \le j \le r$ $\Rightarrow$ $y_j > 0$ or $z_j > 0$ for all $1 \le j \le r$.

(ii) $x_j = 0$ for $j = r+1,\ldots,n$ $\Rightarrow$ $y_j = z_j = 0$ for $r+1 \le j \le n$.

Now, $b = Ay = Az$ implies
$$0 = A(y - z) = \sum_{j=1}^r (y_j - z_j)\, a^j.$$
Due to the linear independence of the $a^j$ it follows $y_j - z_j = 0$ for $j = 1,\ldots,r$. Since $y_j = z_j = 0$ for $j = r+1,\ldots,n$, it holds $y = z$. Hence, $x$ is a vertex. $\Box$

2.2 The primal Simplex Method

The equivalence theorem of Section 2.1 states that a vertex can be characterised by linearly independent columns of $A$. By the fundamental theorem of linear programming it is sufficient to compute only the vertices (feasible basic solutions) of the feasible set in order to obtain at least one optimal solution, provided such a solution exists at all. This is the basic idea of the simplex method.

Notation:
- Let $B \subseteq \{1,\ldots,n\}$ be an index set and let $x = (x_1,\ldots,x_n)^\top$ be a vector. Then $x_B$ is defined to be the vector with components $x_i$, $i \in B$.
- Let $A$ be an $m \times n$-matrix with columns $a^j$, $j = 1,\ldots,n$. Then $A_B$ is defined to be the matrix with columns $a^j$, $j \in B$.

Example: For $B = \{2, 4\}$ we obtain $x_B = (x_2, x_4)^\top$ and $A_B = (a^2\ a^4)$.

The equivalence theorem is of fundamental importance for the simplex method, because we can try to find vertices by solving linear equations as follows:

Algorithm (Computation of a vertex)
(1) Choose $m$ linearly independent columns $a^j$, $j \in B$, $B \subseteq \{1,\ldots,n\}$, of $A$ and set $N = \{1,\ldots,n\} \setminus B$.
(2) Set $x_N = 0$ and solve the linear equation $A_B x_B = b$.

(3) If $x_B \ge 0$, then $x$ is a feasible basic solution and thus a vertex. STOP. If there exists an index $i \in B$ with $x_i < 0$, then $x$ is infeasible. Go to (1) and repeat the procedure with a different choice of linearly independent columns.

Recall $\operatorname{rank}(A) = m$, which guarantees the existence of $m$ linearly independent columns of $A$ in step (1) of the algorithm. A naive approach to solve a linear program would be to compute all vertices of the feasible set by the above algorithm. As there are at most $\binom{n}{m}$ ways to choose $m$ linearly independent columns from a total of $n$ columns, there exist at most
$$\binom{n}{m} = \frac{n!}{m!\,(n-m)!}$$
vertices resp. feasible basic solutions. Unfortunately, this is a potentially large number, especially if $m$ and $n$ are large ($10^6$, say). Hence, it is not efficient to compute every vertex.

Caution: Computing all vertices and choosing the one with the smallest objective function value may not be sufficient to solve the problem, because it may happen that the feasible set and the objective function are unbounded! In this case, no solution exists.

Example
Consider constraints $Ax = b$, $x \ge 0$, with a matrix $A \in \mathbb{R}^{2 \times 4}$ and $b \in \mathbb{R}^2$. There are at most $\binom{4}{2} = 6$ possible combinations of two columns of $A$:

- $B_1 = \{1, 2\}$: $x_{B_1} = A_{B_1}^{-1} b$, yielding the point $x^1$;
- $B_2 = \{1, 3\}$: $x_{B_2} = A_{B_2}^{-1} b$, yielding the point $x^2$;
- $B_3 = \{1, 4\}$: $x_{B_3} = A_{B_3}^{-1} b$, yielding the point $x^3$;
- $B_4 = \{2, 3\}$: $A_{B_4}$ is singular;
- $B_5 = \{2, 4\}$: $x_{B_5} = A_{B_5}^{-1} b$, yielding the point $x^5$;
- $B_6 = \{3, 4\}$: $x_{B_6} = A_{B_6}^{-1} b$, yielding the point $x^6$.

Herein, those components $x_j$ with $j \notin B$ are set to $0$. The point $x^3$ is not feasible. Hence, the vertices are given by $x^1, x^2, x^5, x^6$.

Remark
It is possible to define basic solutions which are not feasible. Let $B \subseteq \{1,\ldots,n\}$ be an index set with $|B| = m$, $N = \{1,\ldots,n\} \setminus B$, and let the columns $a^i$, $i \in B$, be linearly independent. Then $x$ is called a basic solution if
$$A_B x_B = b, \qquad x_N = 0.$$
Notice that such an $x_B = A_B^{-1} b$ does not necessarily satisfy the sign condition $x_B \ge 0$, i.e. the above $x$ may be infeasible. Those infeasible basic solutions are not of interest for the linear program.

We need some more definitions.

Definition (Basis)
Let $\operatorname{rank}(A) = m$ and let $x$ be a feasible basic solution of the standard LP. Every system $\{a^j \mid j \in B\}$ of $m$ linearly independent columns of $A$, which includes those columns $a^j$ with $x_j > 0$, is called a basis of $x$.

(Basis index set, non-basis index set, basis matrix, non-basis matrix, basic variable, non-basic variable)
Let $\{a^j \mid j \in B\}$ be a basis of $x$. The index set $B$ is called basis index set, the index set $N := \{1,\ldots,n\} \setminus B$ is called non-basis index set, the matrix $A_B := (a^j)_{j \in B}$ is called basis matrix, the matrix $A_N := (a^j)_{j \in N}$ is called non-basis matrix, the vector $x_B := (x_j)_{j \in B}$ is called basic variable and the vector $x_N := (x_j)_{j \in N}$ is called non-basic variable.

Let's consider an example.

Example
Consider the inequality constraints
$$x_1 + 4x_2 \le 24, \qquad 3x_1 + x_2 \le 21, \qquad x_1 + x_2 \le 9, \qquad x_1, x_2 \ge 0.$$
Introduction of slack variables $x_3, x_4, x_5$ leads to standard constraints

$Ax = b$, $x \ge 0$, with
$$A = \begin{pmatrix} 1 & 4 & 1 & 0 & 0 \\ 3 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 \end{pmatrix}, \qquad b = \begin{pmatrix} 24 \\ 21 \\ 9 \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix}.$$

[Figure: projection of the feasible set $M_S = \{x \in \mathbb{R}^5 \mid Ax = b,\ x \ge 0\}$ into the $(x_1, x_2)$-plane: a polygon with vertices $(0,0)$, $(0,6)$, $(4,5)$, $(6,3)$, $(7,0)$, bounded by the lines $x_1 + 4x_2 = 24$, $3x_1 + x_2 = 21$, and $x_1 + x_2 = 9$.]

(i) Consider $x = (6, 3, 6, 0, 0)^\top \in M_S$. The first three components are positive and the corresponding columns of $A$, i.e. $(1,3,1)^\top$, $(4,1,1)^\top$ and $(1,0,0)^\top$, are linearly independent. Hence $x$ is actually a feasible basic solution and, according to the theorem in Section 2.1, a vertex. We obtain the basis
$$\left\{ \begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix}, \begin{pmatrix} 4 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \right\}$$
with basis index set $B = \{1, 2, 3\}$, non-basis index set $N = \{4, 5\}$, basic variable $x_B = (6, 3, 6)^\top$, non-basic variable $x_N = (0, 0)^\top$, and basis matrix resp. non-basis

matrix
$$A_B := \begin{pmatrix} 1 & 4 & 1 \\ 3 & 1 & 0 \\ 1 & 1 & 0 \end{pmatrix}, \qquad A_N := \begin{pmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}.$$

(ii) Exercise: Consider, as in (i), the remaining vertices, which are given by $(x_1, x_2) = (0,0)$, $(x_1, x_2) = (0,6)$, $(x_1, x_2) = (7,0)$, and $(x_1, x_2) = (4,5)$.

With these definitions, we are ready to formulate a draft version of the simplex method:

Phase 0: Transform the linear program into standard form 1.2.5, if necessary at all.

(1) Phase 1: Determine a feasible basic solution (feasible vertex) $x$ with basis index set $B$, non-basis index set $N$, basic variable $x_B$ and non-basic variable $x_N = 0$.

(2) Phase 2: Compute a neighbouring feasible basic solution $x^+$ with basis index set $B^+$ and non-basis index set $N^+$ until either an optimum is computed or it can be decided that no such solution exists.

2.2.1 Computing a neighbouring feasible basic solution

In this section we will discuss how a neighbouring feasible basic solution $x^+$ can be computed in phase two of the simplex algorithm. In the sequel, we assume that a feasible basic solution $x$ with basis index set $B$ and non-basis index set $N$ is given. We will construct a new feasible basic solution $x^+$ with index sets
$$B^+ = (B \setminus \{p\}) \cup \{q\}, \qquad N^+ = (N \setminus \{q\}) \cup \{p\}$$
by interchanging suitably chosen indices $p \in B$ and $q \in N$. This procedure is called a basis change. The indices $p$ and $q$ are not chosen arbitrarily but in such a way that the following requirements are satisfied:

(i) Feasibility: $x^+$ has to remain feasible, i.e.
$$Ax = b,\ x \ge 0 \quad \Longrightarrow \quad Ax^+ = b,\ x^+ \ge 0.$$

(ii) Descent property: The objective function value decreases monotonically, i.e.
$$c^\top x^+ \le c^\top x.$$

Let a basis with basis index set $B$ be given and let $x$ be a feasible basic solution. Then $A_B = (a^i)_{i \in B}$ is non-singular and $x_N = 0$. Hence, the constraint $Ax = b$ can be solved w.r.t. the basic variable $x_B$ according to
$$Ax = A_B x_B + A_N x_N = b \quad \Longleftrightarrow \quad x_B = \underbrace{A_B^{-1} b}_{=:\,\beta} - \underbrace{A_B^{-1} A_N}_{=:\,\Gamma}\, x_N =: \beta - \Gamma x_N. \tag{2.1}$$
Introducing this expression into the objective function yields
$$c^\top x = c_B^\top x_B + c_N^\top x_N = \underbrace{c_B^\top \beta}_{=:\,d} - \underbrace{\left(c_B^\top \Gamma - c_N^\top\right)}_{=:\,\zeta^\top} x_N =: d - \zeta^\top x_N. \tag{2.2}$$
A feasible basic solution $x$ satisfies $x_N = 0$ in (2.1) and (2.2) and thus,
$$x_B = \beta, \qquad x_N = 0, \qquad c^\top x = d. \tag{2.3}$$
Herein, we used the notation
$$\beta = (\beta_i)_{i \in B}, \qquad \zeta = (\zeta_j)_{j \in N}, \qquad \Gamma = (\gamma_{ij})_{i \in B,\, j \in N}.$$
Now we intend to change the basis in order to get to a neighbouring basis. Therefore, we consider the ray
$$z(t) = \begin{pmatrix} z_1(t) \\ \vdots \\ z_n(t) \end{pmatrix} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} + t \begin{pmatrix} s_1 \\ \vdots \\ s_n \end{pmatrix} = x + ts, \qquad t \ge 0, \tag{2.4}$$
emanating from the current basic solution $x$ in direction $s$ with step length $t$, cf. Figure 2.1.

[Figure 2.1: Idea of the basis change in the simplex method. Find a search direction $s$ such that the objective function decreases monotonically along this direction.]
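The quantities $\beta$, $\Gamma$, $\zeta$ and $d$ from (2.1)-(2.3) can be computed directly for the example above (vertex $x = (6,3,6,0,0)^\top$, $B = \{1,2,3\}$). A sketch with numpy; note that the objective vector c below is hypothetical, since that example does not fix one:

```python
import numpy as np

A = np.array([[1.0, 4, 1, 0, 0],
              [3.0, 1, 0, 1, 0],
              [1.0, 1, 0, 0, 1]])
b = np.array([24.0, 21, 9])
c = np.array([-2.0, -3, 0, 0, 0])          # hypothetical objective vector
B, N = [0, 1, 2], [3, 4]                   # 0-based version of B = {1,2,3}, N = {4,5}

beta = np.linalg.solve(A[:, B], b)         # beta  = A_B^{-1} b            (2.1)
Gamma = np.linalg.solve(A[:, B], A[:, N])  # Gamma = A_B^{-1} A_N          (2.1)
zeta = c[B] @ Gamma - c[N]                 # zeta^T = c_B^T Gamma - c_N^T  (2.2)
d = c[B] @ beta                            # objective value at the vertex (2.3)

x = np.zeros(5)
x[B] = beta                                # x_B = beta, x_N = 0
```

The computed beta reproduces the basic variable $x_B = (6,3,6)^\top$ of the vertex, and $d = c^\top x$ as claimed in (2.3).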

We choose the search direction $s$ in such a way that only the non-basic variable $x_q$ with a suitable index $q \in N$ will be changed, while the remaining components $x_j = 0$, $j \in N$, $j \ne q$, remain unchanged. For a suitable index $q \in N$ we define
$$s_q := 1, \qquad s_j := 0 \ \text{ for } j \in N,\ j \ne q, \tag{2.5}$$
and thus,
$$z_q(t) = t, \qquad z_j(t) = 0 \ \text{ for } j \in N,\ j \ne q. \tag{2.6}$$
Of course, the point $z(t)$ ought to be feasible, i.e. as in (2.1) the following condition has to hold:
$$b = Az(t) = \underbrace{Ax}_{=b} + tAs \quad \Longrightarrow \quad 0 = As = A_B s_B + A_N s_N.$$
Solving this equation leads to
$$s_B = -A_B^{-1} A_N s_N = -\Gamma s_N. \tag{2.7}$$
Thus, the search direction $s$ is completely defined by (2.5) and (2.7). Along the ray $z(t)$ the objective function value computes to
$$c^\top z(t) \overset{(2.4)}{=} c^\top x + t\, c^\top s \overset{(2.3)}{=} d + t \left( c_B^\top s_B + c_N^\top s_N \right) \overset{(2.7)}{=} d - t \left( c_B^\top \Gamma - c_N^\top \right) s_N = d - t\, \zeta^\top s_N \overset{(2.5)}{=} d - t\, \zeta_q. \tag{2.8}$$
This representation immediately reveals that the objective function value decreases along $s$ for $t \ge 0$ if $\zeta_q > 0$ holds. If, on the other hand, $\zeta_j \le 0$ holds for all $j \in N$, then a descent along $s$ in the objective function is impossible. It remains to be investigated whether there are other points, which are not on the ray, but possibly lead to a smaller objective function value. This is not the case, as for an arbitrary $\hat{x} \in M_S$ it holds $\hat{x} \ge 0$ and $A_B \hat{x}_B + A_N \hat{x}_N = b$, resp. $\hat{x}_B = \beta - \Gamma \hat{x}_N$. With $\zeta_j \le 0$ and $\hat{x}_N \ge 0$ it follows
$$c^\top \hat{x} = c_B^\top \hat{x}_B + c_N^\top \hat{x}_N = c_B^\top \beta - \left( c_B^\top \Gamma - c_N^\top \right) \hat{x}_N = d - \zeta^\top \hat{x}_N \ge d = c^\top x.$$
Hence, if $\zeta_j \le 0$ holds for all $j \in N$, then the current basic solution $x$ is optimal! We summarise:

Choice of the pivot column q:
In order to fulfil the descent property (ii), the index $q \in N$ has to be chosen such that $\zeta_q > 0$. If $\zeta_j \le 0$ for all $j \in N$, then the current basic solution $x$ is optimal.

Now we aim at satisfying the feasibility constraint (i). The following has to hold:
$$z_B(t) = x_B + t s_B \overset{(2.7)}{=} \beta - t\, \Gamma s_N \ge 0,$$
respectively
$$z_i(t) = \beta_i - t \sum_{j \in N} \gamma_{ij} s_j = \beta_i - t\, \gamma_{iq} \underbrace{s_q}_{=1} - t \sum_{j \in N, j \ne q} \gamma_{ij} \underbrace{s_j}_{=0} \overset{(2.5)}{=} \beta_i - \gamma_{iq} t \ge 0, \qquad i \in B.$$
The conditions $\beta_i - \gamma_{iq} t \ge 0$, $i \in B$, restrict the step size $t$. Two cases may occur:

(a) Case 1: It holds $\gamma_{iq} \le 0$ for all $i \in B$. Due to $\beta_i \ge 0$ it holds $z_i(t) = \beta_i - \gamma_{iq} t \ge 0$ for every $t \ge 0$ and every $i \in B$. Hence, $z(t)$ is feasible for every $t \ge 0$. If in addition $\zeta_q > 0$ holds, then the objective function is not bounded from below for $t \to \infty$ according to (2.8). Thus, the linear program does not have a solution!

Unsolvability of LP:
If for some $q \in N$ it holds $\zeta_q > 0$ and $\gamma_{iq} \le 0$ for every $i \in B$, then the LP does not have a solution. The objective function is unbounded from below.

(b) Case 2: It holds $\gamma_{iq} > 0$ for at least one $i \in B$. Then $\beta_i - \gamma_{iq} t \ge 0$ implies
$$t \le \frac{\beta_i}{\gamma_{iq}} \qquad \text{for every } i \in B \text{ with } \gamma_{iq} > 0.$$
This postulation restricts the step length $t$.
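For the running example (vertex $x = (6,3,6,0,0)^\top$, $B = \{1,2,3\}$, $N = \{4,5\}$), the pivot-column choice and the two cases above can be checked numerically. As before, the objective vector c is a hypothetical choice for illustration; $\Gamma$ and $\zeta$ are computed as in (2.1)-(2.2):

```python
import numpy as np

A = np.array([[1.0, 4, 1, 0, 0],
              [3.0, 1, 0, 1, 0],
              [1.0, 1, 0, 0, 1]])
b = np.array([24.0, 21, 9])
c = np.array([-2.0, -3, 0, 0, 0])           # hypothetical objective (not from the notes)
B, N = [0, 1, 2], [3, 4]                    # 0-based version of B = {1,2,3}, N = {4,5}

beta = np.linalg.solve(A[:, B], b)          # beta = A_B^{-1} b = (6, 3, 6)
Gamma = np.linalg.solve(A[:, B], A[:, N])   # Gamma = A_B^{-1} A_N
zeta = c[B] @ Gamma - c[N]                  # zeta^T = c_B^T Gamma - c_N^T

k = int(np.argmax(zeta))                    # pivot column: an index with zeta_q > 0
assert zeta[k] > 0                          # otherwise the vertex would be optimal
col = Gamma[:, k]                           # column q of Gamma
if np.all(col <= 0):                        # Case 1: LP unbounded from below
    t_max = np.inf
else:                                       # Case 2: step length is restricted
    t_max = min(beta[i] / col[i] for i in range(len(B)) if col[i] > 0)
```

Here Case 2 applies: the largest step permitted by the bounds $t \le \beta_i / \gamma_{iq}$ turns out to be $t = 4$, with $q = 4$ entering.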

Choice of the pivot row p:
The feasibility in (i) will be satisfied by choosing an index $p \in B$ with
$$t_{\min} := \frac{\beta_p}{\gamma_{pq}} := \min\left\{ \frac{\beta_i}{\gamma_{iq}} \,\middle|\, \gamma_{iq} > 0,\ i \in B \right\}.$$
It holds
$$z_p(t_{\min}) = \beta_p - \gamma_{pq} t_{\min} = \beta_p - \gamma_{pq} \frac{\beta_p}{\gamma_{pq}} = 0,$$
and, for $i \in B$ with $\gamma_{iq} \le 0$,
$$z_i(t_{\min}) = \underbrace{\beta_i}_{\ge 0} - \underbrace{\gamma_{iq}}_{\le 0} t_{\min} \ge 0,$$
while, for $i \in B$ with $\gamma_{iq} > 0$,
$$z_i(t_{\min}) = \beta_i - \gamma_{iq} t_{\min} \ge \beta_i - \gamma_{iq} \frac{\beta_i}{\gamma_{iq}} = 0.$$
Hence, the point $x^+ := z(t_{\min})$ is feasible and satisfies $x_p^+ = 0$: the variable $x_p$ leaves the basic variables and enters the non-basic variables.

The following theorem states that $x^+$ is actually a feasible basic solution.

Theorem
Let $x$ be a feasible basic solution with basis index set $B$. Let a pivot column $q \in N$ with $\zeta_q > 0$ and a pivot row $p \in B$ with $\gamma_{pq} > 0$ exist. Then, $x^+ = z(t_{\min})$ is a feasible basic solution and $c^\top x^+ \le c^\top x$. In particular, $\{a^j \mid j \in B^+\}$ with $B^+ = (B \setminus \{p\}) \cup \{q\}$ is a basis and $A_{B^+}$ is non-singular.

Proof: By construction, $x^+$ is feasible. It remains to show that $\{a^j \mid j \in B^+\}$ with $B^+ = (B \setminus \{p\}) \cup \{q\}$ is a basis. We note that $x_j^+ = 0$ for all $j \notin B^+$. The definition of $\Gamma = A_B^{-1} A_N$ implies $A_N = A_B \Gamma$. Column $q$ of this matrix equation reads as
$$a^q = \sum_{i \in B} a^i \gamma_{iq} = a^p \gamma_{pq} + \sum_{i \in B, i \ne p} a^i \gamma_{iq}, \tag{2.9}$$
where the $\gamma_{iq}$ are the components of column $q$ of $\Gamma$ and $\gamma_{pq} \ne 0$ according to the assumption of this theorem. We will now show that the vectors $a^i$, $i \in B$, $i \ne p$, and $a^q$ are linearly independent. Therefore, we consider the equation
$$\alpha a^q + \sum_{i \in B, i \ne p} \alpha_i a^i = 0 \tag{2.10}$$
with coefficients $\alpha \in \mathbb{R}$ and $\alpha_i \in \mathbb{R}$, $i \in B$, $i \ne p$. We introduce $a^q$ from (2.9) into (2.10) and obtain
$$\alpha \gamma_{pq} a^p + \sum_{i \in B, i \ne p} \left( \alpha_i + \alpha \gamma_{iq} \right) a^i = 0.$$

As $\{a^i \mid i \in B\}$ is a basis, we immediately obtain that all coefficients vanish, i.e.
$$\alpha \gamma_{pq} = 0, \qquad \alpha_i + \alpha \gamma_{iq} = 0, \quad i \in B,\ i \ne p.$$
Since $\gamma_{pq} \ne 0$ it follows $\alpha = 0$, and this in turn implies $\alpha_i = 0$ for all $i \in B$, $i \ne p$. Hence, all coefficients vanish. This shows the linear independence of the set $\{a^i \mid i \in B^+\}$. $\Box$

2.2.2 The Algorithm

We summarise our findings in an algorithm:

Algorithm (Primal Simplex Method)

Phase 0: Transform the linear program into standard form 1.2.5, if necessary at all.

(1) Phase 1: Determine a feasible basic solution (feasible vertex) $x$ for the standard LP with basis index set $B$, non-basis index set $N$, basis matrix $A_B$, non-basis matrix $A_N$, basic variable $x_B$ and non-basic variable $x_N = 0$. If no feasible solution exists, STOP: the problem is infeasible.

(2) Phase 2:

(i) Compute $\Gamma = (\gamma_{ij})_{i \in B, j \in N}$, $\beta = (\beta_i)_{i \in B}$, and $\zeta = (\zeta_j)_{j \in N}$ according to
$$\Gamma = A_B^{-1} A_N, \qquad \beta = A_B^{-1} b, \qquad \zeta^\top = c_B^\top \Gamma - c_N^\top.$$

(ii) Check for optimality: If $\zeta_j \le 0$ for every $j \in N$, then STOP. The current feasible basic solution $x_B = \beta$, $x_N = 0$ is optimal. The objective function value is $d = c_B^\top \beta$.

(iii) Check for unboundedness: If there exists an index $q$ with $\zeta_q > 0$ and $\gamma_{iq} \le 0$ for every $i \in B$, then the linear program does not have a solution and the objective function is unbounded from below. STOP.

(iv) Determine pivot element: Choose an index $q$ with $\zeta_q > 0$; $q$ defines the pivot column. Choose an index $p$ with
$$\frac{\beta_p}{\gamma_{pq}} = \min\left\{ \frac{\beta_i}{\gamma_{iq}} \,\middle|\, \gamma_{iq} > 0,\ i \in B \right\};$$
$p$ defines the pivot row.

(v) Perform basis change: Set $B := (B \setminus \{p\}) \cup \{q\}$ and $N := (N \setminus \{q\}) \cup \{p\}$.

(vi) Go to (i).
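Phase 2 of the algorithm above can be sketched compactly in numpy. Two caveats: step (iv) leaves the choice of $q$ open, so this sketch uses Dantzig's rule (largest $\zeta_j$), and no anti-cycling rule is included. It is tried on the slack-variable example with a hypothetical objective vector c (the notes do not fix one for that example):

```python
import numpy as np

def primal_simplex(c, A, b, B):
    """Phase 2 for min c^T x s.t. Ax = b, x >= 0, started from a feasible
    basis index set B (0-based). No anti-cycling safeguard (sketch only)."""
    n = A.shape[1]
    B = list(B)
    while True:
        N = [j for j in range(n) if j not in B]
        beta = np.linalg.solve(A[:, B], b)            # (i) beta  = A_B^{-1} b
        Gamma = np.linalg.solve(A[:, B], A[:, N])     #     Gamma = A_B^{-1} A_N
        zeta = c[B] @ Gamma - c[N]                    #     zeta^T = c_B^T Gamma - c_N^T
        if np.all(zeta <= 1e-12):                     # (ii) optimality check
            x = np.zeros(n)
            x[B] = beta
            return x, c @ x
        k = int(np.argmax(zeta))                      # (iv) pivot column, Dantzig's rule
        if np.all(Gamma[:, k] <= 1e-12):              # (iii) unboundedness check
            raise ValueError("objective unbounded from below")
        ratios = [beta[i] / Gamma[i, k] if Gamma[i, k] > 1e-12 else np.inf
                  for i in range(len(B))]             # (iv) ratio test for the pivot row
        B[int(np.argmin(ratios))] = N[k]              # (v) basis change

# Slack-variable example with a hypothetical objective min -2 x1 - 3 x2:
A = np.array([[1.0, 4, 1, 0, 0],
              [3.0, 1, 0, 1, 0],
              [1.0, 1, 0, 0, 1]])
b = np.array([24.0, 21, 9])
c = np.array([-2.0, -3, 0, 0, 0])
x_opt, val = primal_simplex(c, A, b, [2, 3, 4])       # start at the slack basis {3,4,5}
```

Starting from the slack basis, the iteration moves along vertices of the polygon until all $\zeta_j \le 0$; for this hypothetical objective it stops at the vertex $(x_1, x_2) = (4, 5)$.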

Remark
It is very important to recognise that the indexing of the elements of the matrix $\Gamma$ and the vectors $\beta$ and $\zeta$ in (i) depends on the current entries and their order in the index sets $B$ and $N$. Notice that $B$ and $N$ are altered in each step (v) of the algorithm, so $\Gamma$, $\beta$, and $\zeta$ are altered as well in each iteration. For instance, if $B = \{2, 4, 5\}$ and $N = \{1, 3\}$, the entries of the matrix $\Gamma$ and the vectors $\beta$ and $\zeta$ are indexed as follows:
$$\Gamma = \begin{pmatrix} \gamma_{21} & \gamma_{23} \\ \gamma_{41} & \gamma_{43} \\ \gamma_{51} & \gamma_{53} \end{pmatrix}, \qquad \beta = \begin{pmatrix} \beta_2 \\ \beta_4 \\ \beta_5 \end{pmatrix}, \qquad \zeta = \begin{pmatrix} \zeta_1 \\ \zeta_3 \end{pmatrix}.$$
More generally, if $B = \{i_1,\ldots,i_m\}$, $i_k \in \{1,\ldots,n\}$, and $N = \{j_1,\ldots,j_{n-m}\}$, $j_k \in \{1,\ldots,n\} \setminus B$, then
$$\Gamma = \begin{pmatrix} \gamma_{i_1 j_1} & \gamma_{i_1 j_2} & \cdots & \gamma_{i_1 j_{n-m}} \\ \gamma_{i_2 j_1} & \gamma_{i_2 j_2} & \cdots & \gamma_{i_2 j_{n-m}} \\ \vdots & & & \vdots \\ \gamma_{i_m j_1} & \gamma_{i_m j_2} & \cdots & \gamma_{i_m j_{n-m}} \end{pmatrix}, \qquad \beta = \begin{pmatrix} \beta_{i_1} \\ \beta_{i_2} \\ \vdots \\ \beta_{i_m} \end{pmatrix}, \qquad \zeta = \begin{pmatrix} \zeta_{j_1} \\ \zeta_{j_2} \\ \vdots \\ \zeta_{j_{n-m}} \end{pmatrix}.$$

Example (compare the example in Section 2.2)
Consider the standard LP: Minimise $c^\top x$ subject to $Ax = b$, $x \ge 0$, with the data ($x_3, x_4, x_5$ are slack variables)
$$A = \begin{pmatrix} 1 & 4 & 1 & 0 & 0 \\ 3 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 1 \end{pmatrix}, \qquad b = \begin{pmatrix} 24 \\ 21 \\ 9 \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix}.$$

Phase 1: A feasible basic solution is given by the basis index set $B = \{3, 4, 5\}$, because the columns $3, 4, 5$ of $A$ are obviously linearly independent and
$$x_B = \begin{pmatrix} x_3 \\ x_4 \\ x_5 \end{pmatrix} = \begin{pmatrix} 24 \\ 21 \\ 9 \end{pmatrix} \ge 0, \qquad x_N = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
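The phase-1 claim, together with the Algorithm (Computation of a vertex) from Section 2.2, can be verified numerically for this data. A sketch; the objective vector plays no role here:

```python
import numpy as np
from itertools import combinations

A = np.array([[1.0, 4, 1, 0, 0],
              [3.0, 1, 0, 1, 0],
              [1.0, 1, 0, 0, 1]])
b = np.array([24.0, 21, 9])

def basic_solution(Bset):
    """Set x_N = 0 and solve A_B x_B = b; return x, or None if A_B is singular."""
    A_B = A[:, Bset]
    if abs(np.linalg.det(A_B)) < 1e-12:
        return None
    x = np.zeros(A.shape[1])
    x[list(Bset)] = np.linalg.solve(A_B, b)
    return x

# Phase 1 for B = {3,4,5} (the slack variables, 0-based indices 2, 3, 4):
x0 = basic_solution((2, 3, 4))

# All feasible basic solutions, i.e. all vertices of M_S, by enumerating
# every choice of m = 3 out of n = 5 columns:
vertices = {tuple(np.round(x, 8)) for Bset in combinations(range(5), 3)
            if (x := basic_solution(Bset)) is not None and np.all(x >= -1e-9)}
```

The slack basis reproduces $x_B = (24, 21, 9)^\top \ge 0$, and the enumeration recovers exactly the five vertices visible in the earlier projection figure: $(0,0)$, $(0,6)$, $(7,0)$, $(6,3)$ and $(4,5)$, each extended by its slack values.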


More information

Duality in General Programs. Ryan Tibshirani Convex Optimization 10-725/36-725

Duality in General Programs. Ryan Tibshirani Convex Optimization 10-725/36-725 Duality in General Programs Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: duality in linear programs Given c R n, A R m n, b R m, G R r n, h R r : min x R n c T x max u R m, v R r b T

More information

This exposition of linear programming

This exposition of linear programming Linear Programming and the Simplex Method David Gale This exposition of linear programming and the simplex method is intended as a companion piece to the article in this issue on the life and work of George

More information

International Doctoral School Algorithmic Decision Theory: MCDA and MOO

International Doctoral School Algorithmic Decision Theory: MCDA and MOO International Doctoral School Algorithmic Decision Theory: MCDA and MOO Lecture 2: Multiobjective Linear Programming Department of Engineering Science, The University of Auckland, New Zealand Laboratoire

More information

CHAPTER 9. Integer Programming

CHAPTER 9. Integer Programming CHAPTER 9 Integer Programming An integer linear program (ILP) is, by definition, a linear program with the additional constraint that all variables take integer values: (9.1) max c T x s t Ax b and x integral

More information

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

No: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics

No: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics No: 10 04 Bilkent University Monotonic Extension Farhad Husseinov Discussion Papers Department of Economics The Discussion Papers of the Department of Economics are intended to make the initial results

More information

Nonlinear Programming Methods.S2 Quadratic Programming

Nonlinear Programming Methods.S2 Quadratic Programming Nonlinear Programming Methods.S2 Quadratic Programming Operations Research Models and Methods Paul A. Jensen and Jonathan F. Bard A linearly constrained optimization problem with a quadratic objective

More information

Chapter 6. Linear Programming: The Simplex Method. Introduction to the Big M Method. Section 4 Maximization and Minimization with Problem Constraints

Chapter 6. Linear Programming: The Simplex Method. Introduction to the Big M Method. Section 4 Maximization and Minimization with Problem Constraints Chapter 6 Linear Programming: The Simplex Method Introduction to the Big M Method In this section, we will present a generalized version of the simplex method that t will solve both maximization i and

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

Solving Linear Programs

Solving Linear Programs Solving Linear Programs 2 In this chapter, we present a systematic procedure for solving linear programs. This procedure, called the simplex method, proceeds by moving from one feasible solution to another,

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

5.1 Bipartite Matching

5.1 Bipartite Matching CS787: Advanced Algorithms Lecture 5: Applications of Network Flow In the last lecture, we looked at the problem of finding the maximum flow in a graph, and how it can be efficiently solved using the Ford-Fulkerson

More information

ALMOST COMMON PRIORS 1. INTRODUCTION

ALMOST COMMON PRIORS 1. INTRODUCTION ALMOST COMMON PRIORS ZIV HELLMAN ABSTRACT. What happens when priors are not common? We introduce a measure for how far a type space is from having a common prior, which we term prior distance. If a type

More information

Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1

Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1 Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1 J. Zhang Institute of Applied Mathematics, Chongqing University of Posts and Telecommunications, Chongqing

More information

Using the Simplex Method to Solve Linear Programming Maximization Problems J. Reeb and S. Leavengood

Using the Simplex Method to Solve Linear Programming Maximization Problems J. Reeb and S. Leavengood PERFORMANCE EXCELLENCE IN THE WOOD PRODUCTS INDUSTRY EM 8720-E October 1998 $3.00 Using the Simplex Method to Solve Linear Programming Maximization Problems J. Reeb and S. Leavengood A key problem faced

More information

Linear Programming. Solving LP Models Using MS Excel, 18

Linear Programming. Solving LP Models Using MS Excel, 18 SUPPLEMENT TO CHAPTER SIX Linear Programming SUPPLEMENT OUTLINE Introduction, 2 Linear Programming Models, 2 Model Formulation, 4 Graphical Linear Programming, 5 Outline of Graphical Procedure, 5 Plotting

More information

An Introduction to Linear Programming

An Introduction to Linear Programming An Introduction to Linear Programming Steven J. Miller March 31, 2007 Mathematics Department Brown University 151 Thayer Street Providence, RI 02912 Abstract We describe Linear Programming, an important

More information

(Basic definitions and properties; Separation theorems; Characterizations) 1.1 Definition, examples, inner description, algebraic properties

(Basic definitions and properties; Separation theorems; Characterizations) 1.1 Definition, examples, inner description, algebraic properties Lecture 1 Convex Sets (Basic definitions and properties; Separation theorems; Characterizations) 1.1 Definition, examples, inner description, algebraic properties 1.1.1 A convex set In the school geometry

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

The Graphical Method: An Example

The Graphical Method: An Example The Graphical Method: An Example Consider the following linear program: Maximize 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2 0, where, for ease of reference,

More information

Transportation Polytopes: a Twenty year Update

Transportation Polytopes: a Twenty year Update Transportation Polytopes: a Twenty year Update Jesús Antonio De Loera University of California, Davis Based on various papers joint with R. Hemmecke, E.Kim, F. Liu, U. Rothblum, F. Santos, S. Onn, R. Yoshida,

More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

Can linear programs solve NP-hard problems?

Can linear programs solve NP-hard problems? Can linear programs solve NP-hard problems? p. 1/9 Can linear programs solve NP-hard problems? Ronald de Wolf Linear programs Can linear programs solve NP-hard problems? p. 2/9 Can linear programs solve

More information

Chapter 19. General Matrices. An n m matrix is an array. a 11 a 12 a 1m a 21 a 22 a 2m A = a n1 a n2 a nm. The matrix A has n row vectors

Chapter 19. General Matrices. An n m matrix is an array. a 11 a 12 a 1m a 21 a 22 a 2m A = a n1 a n2 a nm. The matrix A has n row vectors Chapter 9. General Matrices An n m matrix is an array a a a m a a a m... = [a ij]. a n a n a nm The matrix A has n row vectors and m column vectors row i (A) = [a i, a i,..., a im ] R m a j a j a nj col

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

Solving Linear Systems, Continued and The Inverse of a Matrix

Solving Linear Systems, Continued and The Inverse of a Matrix , Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

More information

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in

More information

1 Linear Programming. 1.1 Introduction. Problem description: motivate by min-cost flow. bit of history. everything is LP. NP and conp. P breakthrough.

1 Linear Programming. 1.1 Introduction. Problem description: motivate by min-cost flow. bit of history. everything is LP. NP and conp. P breakthrough. 1 Linear Programming 1.1 Introduction Problem description: motivate by min-cost flow bit of history everything is LP NP and conp. P breakthrough. general form: variables constraints: linear equalities

More information

INCIDENCE-BETWEENNESS GEOMETRY

INCIDENCE-BETWEENNESS GEOMETRY INCIDENCE-BETWEENNESS GEOMETRY MATH 410, CSUSM. SPRING 2008. PROFESSOR AITKEN This document covers the geometry that can be developed with just the axioms related to incidence and betweenness. The full

More information

Minimally Infeasible Set Partitioning Problems with Balanced Constraints

Minimally Infeasible Set Partitioning Problems with Balanced Constraints Minimally Infeasible Set Partitioning Problems with alanced Constraints Michele Conforti, Marco Di Summa, Giacomo Zambelli January, 2005 Revised February, 2006 Abstract We study properties of systems of

More information

Proximal mapping via network optimization

Proximal mapping via network optimization L. Vandenberghe EE236C (Spring 23-4) Proximal mapping via network optimization minimum cut and maximum flow problems parametric minimum cut problem application to proximal mapping Introduction this lecture:

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

Solutions Of Some Non-Linear Programming Problems BIJAN KUMAR PATEL. Master of Science in Mathematics. Prof. ANIL KUMAR

Solutions Of Some Non-Linear Programming Problems BIJAN KUMAR PATEL. Master of Science in Mathematics. Prof. ANIL KUMAR Solutions Of Some Non-Linear Programming Problems A PROJECT REPORT submitted by BIJAN KUMAR PATEL for the partial fulfilment for the award of the degree of Master of Science in Mathematics under the supervision

More information

Nonlinear Optimization: Algorithms 3: Interior-point methods

Nonlinear Optimization: Algorithms 3: Interior-point methods Nonlinear Optimization: Algorithms 3: Interior-point methods INSEAD, Spring 2006 Jean-Philippe Vert Ecole des Mines de Paris Jean-Philippe.Vert@mines.org Nonlinear optimization c 2006 Jean-Philippe Vert,

More information

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called

More information

Several Views of Support Vector Machines

Several Views of Support Vector Machines Several Views of Support Vector Machines Ryan M. Rifkin Honda Research Institute USA, Inc. Human Intention Understanding Group 2007 Tikhonov Regularization We are considering algorithms of the form min

More information

Section 1.1. Introduction to R n

Section 1.1. Introduction to R n The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to

More information

Notes V General Equilibrium: Positive Theory. 1 Walrasian Equilibrium and Excess Demand

Notes V General Equilibrium: Positive Theory. 1 Walrasian Equilibrium and Excess Demand Notes V General Equilibrium: Positive Theory In this lecture we go on considering a general equilibrium model of a private ownership economy. In contrast to the Notes IV, we focus on positive issues such

More information

Chapter 6. Cuboids. and. vol(conv(p ))

Chapter 6. Cuboids. and. vol(conv(p )) Chapter 6 Cuboids We have already seen that we can efficiently find the bounding box Q(P ) and an arbitrarily good approximation to the smallest enclosing ball B(P ) of a set P R d. Unfortunately, both

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information

Some representability and duality results for convex mixed-integer programs.

Some representability and duality results for convex mixed-integer programs. Some representability and duality results for convex mixed-integer programs. Santanu S. Dey Joint work with Diego Morán and Juan Pablo Vielma December 17, 2012. Introduction About Motivation Mixed integer

More information

by the matrix A results in a vector which is a reflection of the given

by the matrix A results in a vector which is a reflection of the given Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

More information

4.1 Learning algorithms for neural networks

4.1 Learning algorithms for neural networks 4 Perceptron Learning 4.1 Learning algorithms for neural networks In the two preceding chapters we discussed two closely related models, McCulloch Pitts units and perceptrons, but the question of how to

More information

3. Reaction Diffusion Equations Consider the following ODE model for population growth

3. Reaction Diffusion Equations Consider the following ODE model for population growth 3. Reaction Diffusion Equations Consider the following ODE model for population growth u t a u t u t, u 0 u 0 where u t denotes the population size at time t, and a u plays the role of the population dependent

More information