# What is Linear Programming?


An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to be optimized, and a set of constraints to be satisfied by x. A linear program is an optimization problem in which all the functions involved are linear in x; in particular, all the constraints are linear inequalities and equalities. Linear programming is the subject of studying and solving linear programs. Linear programming was born during the Second World War out of the necessity of solving military logistics problems. It remains one of the most widely used mathematical techniques in modern societies.

## 1.1 A Toy Problem

A local furniture shop makes chairs and tables. The projected profits for the two products are, respectively, \$20 per chair and \$30 per table. The projected demands for chairs and tables are 400 and 100, respectively. Each chair requires 2 cubic feet of wood, while each table requires 4 cubic feet. The shop has a total of 1,000 cubic feet of wood in store. How many chairs and tables should the shop make in order to maximize its profit?

Let x_1 be the number of chairs and x_2 the number of tables to be made. These are the two variables, or unknowns, of this problem. The shop wants to maximize its total profit, 20x_1 + 30x_2, subject to the constraints that (a) the total amount of wood used to make the two products cannot exceed the 1,000 cubic feet available, and (b) the numbers of chairs and tables to be made

should not exceed the demands. In addition, we should not forget that the numbers of chairs and tables made need to be nonnegative. Putting all these together, we have an optimization problem:

    max  20x_1 + 30x_2
    s.t. 2x_1 + 4x_2 ≤ 1000
         x_1 ≤ 400
         x_2 ≤ 100
         x_1, x_2 ≥ 0                                  (1.1)

where 20x_1 + 30x_2 is the objective function and "s.t." is shorthand for "subject to", which is followed by the constraints of the problem. This optimization problem is clearly a linear program: all the functions involved, both in the objective and in the constraints, are linear functions of x_1 and x_2.

## 1.2 From Concrete to Abstract

Let us look at an abstract production model. A company produces n products using m kinds of materials. For the next month, the unit profits for the n products are projected to be, respectively, c_1, c_2, ..., c_n. The amounts of materials available to the company in the next month are, respectively, b_1, b_2, ..., b_m. The amount of material i consumed by a unit of product j is given by a_ij ≥ 0 for i = 1, 2, ..., m and j = 1, 2, ..., n (some a_ij could be zero if product j does not consume material i). The question facing the company is: given the limited availability of materials, what quantity of each product should the company produce in the next month in order to achieve the maximum total profit?

The decision variables are obviously the amounts of production for the n products in the next month. Let us call them x_1, x_2, ..., x_n. The optimization model is to maximize the total profit, subject to the material availability constraints for all m materials and the nonnegativity constraints on the n variables. In mathematical terms, the model is the following linear program:

    max  c_1 x_1 + c_2 x_2 + ... + c_n x_n
    s.t. a_11 x_1 + a_12 x_2 + ... + a_1n x_n ≤ b_1
         a_21 x_1 + a_22 x_2 + ... + a_2n x_n ≤ b_2
         ...
         a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n ≤ b_m
         x_1, x_2, ..., x_n ≥ 0                        (1.2)

The nonnegativity constraints can be vital, but they are often forgotten by beginners. Why is nonnegativity important here? First, in the above context, it does not make sense to expect the company to produce a negative amount of a product. Moreover, if one product, say product k, is not profitable, corresponding to c_k < 0, then without the nonnegativity constraints the model would produce a solution with x_k < 0 and generate a "profit" c_k x_k > 0. In fact, since a negative amount of product k would not consume any material but instead generate materials, one could drive the profit to infinity by forcing x_k to go to negative infinity. Hence, the model would be totally wrong had one forgotten nonnegativity.

The linear program (1.2) is tedious to write. One can shorten the expressions using summation notation. For example, the total profit can be represented by the left-hand side instead of the right-hand side of the identity

    Σ_{i=1}^{n} c_i x_i = c_1 x_1 + c_2 x_2 + ... + c_n x_n.

However, a much more concise way is to use matrices and vectors. Let c = (c_1, c_2, ..., c_n)^T and x = (x_1, x_2, ..., x_n)^T. Then the total profit becomes c^T x. In matrix-vector notation, the linear program (1.2) becomes

    max  c^T x
    s.t. Ax ≤ b
         x ≥ 0          (1.3)

where A ∈ R^{m×n} and b ∈ R^m. The inequalities are always understood as component-wise comparisons.
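As a concrete instance of (1.3), the toy problem has c = (20, 30)^T, a 3-by-2 matrix A, and b = (1000, 400, 100)^T. The sketch below (helper names such as `is_feasible` are ours, not from the text) reads the inequalities component-wise and, since this feasible region is bounded, finds the optimum by brute force over the intersection points of pairs of constraint boundaries:

```python
from itertools import combinations

# Toy-problem data in the matrix form (1.3): maximize c^T x
# subject to A x <= b (component-wise) and x >= 0.
c = [20, 30]
A = [[2, 4],    # wood
     [1, 0],    # chair demand
     [0, 1]]    # table demand
b = [1000, 400, 100]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def is_feasible(x, tol=1e-9):
    # A x <= b and x >= 0, both read component by component.
    return (all(dot(row, x) <= bi + tol for row, bi in zip(A, b))
            and all(xi >= -tol for xi in x))

# Every boundary line of the feasible region comes from one constraint
# held at equality (including x1 = 0 and x2 = 0); intersect pairs of
# lines and keep the feasible intersection points.
lines = [(A[i][0], A[i][1], b[i]) for i in range(3)] + [(-1, 0, 0), (0, -1, 0)]
best = None
for (a1, a2, r1), (a3, a4, r2) in combinations(lines, 2):
    det = a1 * a4 - a2 * a3
    if det == 0:
        continue  # parallel boundary lines: no intersection point
    # Solve the 2x2 system where both constraints hold with equality.
    x = [(r1 * a4 - a2 * r2) / det, (a1 * r2 - r1 * a3) / det]
    if is_feasible(x) and (best is None or dot(c, x) > dot(c, best)):
        best = x

print(best, dot(c, best))  # most profitable corner point and its profit
```

For this data the best corner is x = (400, 50) with profit 9500: make 400 chairs and use the remaining wood for 50 tables.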

## 1.3 A Standard Form

A linear program can have an objective of either minimization or maximization, while its constraints can be any combination of linear inequalities and equalities. It is impractical to study linear programs and to design algorithms for them without introducing a so-called standard form: a unifying framework encompassing most, if not all, individual forms of linear programs. Different standard forms exist in the literature and may offer different advantages under different circumstances. Here we will mostly use the following standard form:

    min  c^T x
    s.t. Ax = b
         x ≥ 0          (1.4)

where the matrix A is assumed to be m × n with m < n, b ∈ R^m, and c, x ∈ R^n. The triple (A, b, c) represents the problem data that need to be specified, and x is the variable to be determined. In fact, once the size of A is given, the sizes of all the other quantities follow from the rules of matrix multiplication. In plain English, a standard linear program is a minimization problem with a set of equality constraints and no inequality constraints except nonnegativity on all variables.

We will always assume that A has full rank; in other words, the rows of A are linearly independent, which ensures that the equations in Ax = b are consistent for any right-hand side b ∈ R^m. We make this assumption to simplify matters without loss of generality, because redundant or inconsistent linear equations can always be detected, and removed if so desired, through standard linear algebra techniques. Moreover, the requirement m < n ensures that in general there are infinitely many solutions to the equation Ax = b, leaving degrees of freedom for nonnegative and optimal solutions.

A linear program, say LP-A, is said to be equivalent to a linear program LP-B if an optimal solution of LP-A, whenever one exists, can be obtained from an optimal solution of LP-B through simple algebraic operations. This equivalence allows one to solve LP-B and obtain a solution to LP-A. We claim that every linear program is equivalent to a standard-form linear program through a transformation, which usually requires adding extra variables and constraints. Obviously, maximizing a function is equivalent to minimizing the negative of the function. A common trick to transform an inequality a^T x ≤ β into an equivalent equality is to add a so-called slack

variable η so that

    a^T x ≤ β   becomes   a^T x + η = β,   η ≥ 0.

Let us consider the toy problem (1.1). With the addition of slack variables x_3, x_4, x_5 (we can name them any way we want), we transform the linear program

    max  20x_1 + 30x_2
    s.t. 2x_1 + 4x_2 ≤ 1000
         x_1 ≤ 400
         x_2 ≤ 100
         x_1, x_2 ≥ 0

into the equivalent program

    max  20x_1 + 30x_2
    s.t. 2x_1 + 4x_2 + x_3 = 1000
         x_1 + x_4 = 400
         x_2 + x_5 = 100
         x_1, x_2, x_3, x_4, x_5 ≥ 0

After switching to minimization, we turn the latter into an equivalent linear program of the standard form:

    min  −20x_1 − 30x_2 + 0x_3 + 0x_4 + 0x_5
    s.t.  2x_1 + 4x_2 + x_3 + 0x_4 + 0x_5 = 1000
          x_1 + 0x_2 + 0x_3 + x_4 + 0x_5 = 400
          0x_1 + x_2 + 0x_3 + 0x_4 + x_5 = 100
          x_1, x_2, x_3, x_4, x_5 ≥ 0

where c^T = (−20, −30, 0, 0, 0), b is unchanged, and

    A = [ 2  4  1  0  0 ]
        [ 1  0  0  1  0 ]          (1.5)
        [ 0  1  0  0  1 ]

This new coefficient matrix is obtained by padding the 3-by-3 identity matrix to the right of the original coefficient matrix. Similarly, the general linear program (1.3) can be transformed into

    min  −c^T x
    s.t. Ax + s = b
         x, s ≥ 0          (1.6)

where s ∈ R^m is the vector of slack variables. This linear program is in the standard form because it is a minimization problem with only equality constraints and nonnegativity on all variables. The variable in (1.6) consists of x ∈ R^n and s ∈ R^m, and the new data triple is obtained by the construction

    A ← [A  I],   b ← b,   c ← (−c; 0),

where (−c; 0) stacks −c on top of the zero vector 0 ∈ R^m and I is the m-by-m identity matrix.
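The mechanical part of this transformation — negate the objective and pad an identity block onto A, as in (1.5) and (1.6) — can be sketched as a small routine (the function name `to_standard_form` is ours, chosen for illustration):

```python
def to_standard_form(c, A, b):
    """Turn max{c^T x : Ax <= b, x >= 0} into the standard form
    min{c'^T x' : A' x' = b, x' >= 0}: negate c, append one slack
    variable per inequality, and pad an identity block onto A."""
    m = len(A)
    # Pad the m-by-m identity matrix to the right of A.
    A_std = [row + [1 if i == j else 0 for j in range(m)]
             for i, row in enumerate(A)]
    # Negate c and extend it with m zeros for the slack variables.
    c_std = [-cj for cj in c] + [0] * m
    return c_std, A_std, list(b)

c_std, A_std, b_std = to_standard_form(
    [20, 30], [[2, 4], [1, 0], [0, 1]], [1000, 400, 100])
print(c_std)   # [-20, -30, 0, 0, 0]
print(A_std)   # the coefficient matrix A of (1.5)
```

Running it on the toy problem's data reproduces exactly the vector c and matrix A displayed above.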

## 1.4 Feasibility and Solution Sets

Let us call the set

    F = {x ∈ R^n : Ax = b, x ≥ 0} ⊆ R^n          (1.7)

the feasibility set of the linear program (1.4). Points in the feasibility set are called feasible points, among which we seek an optimal solution x* that minimizes the objective function c^T x. With the help of the feasibility-set notation, we can write our standard linear program in the concise form

    min {c^T x : x ∈ F}.          (1.8)

A polyhedron in R^n is a subset of R^n consisting of the points that satisfy a collection of linear inequalities and/or equalities. For example, a hyperplane {x ∈ R^n : a^T x = β} is a polyhedron, where a ∈ R^n and β ∈ R, and a half-space {x ∈ R^n : a^T x ≤ β} is a polyhedron as well. In geometric terms, a polyhedron is nothing but the intersection of a collection of hyperplanes and half-spaces. In particular, the empty set and a singleton set (one containing only a single point) are polyhedra. Clearly, the feasibility set F in (1.7) for the linear program (1.4) is a polyhedron in R^n. It may be empty if some constraints are contradictory (say, x_1 ≥ 1 and x_1 ≤ −1), or it may be an unbounded set, say,

    F = {x ∈ R^2 : x_1 − x_2 = 0, x ≥ 0} ⊆ R^2,          (1.9)

which is the half diagonal line emanating from the origin towards infinity. Most of the time, F will be a bounded, nonempty set. A bounded polyhedron is also called a polytope.

The optimal solution set, or just solution set for short, of a linear program consists of all feasible points that optimize its objective function. In particular, we denote the solution set of the standard linear program (1.4), or equivalently (1.8), by S. Since S ⊆ F, S is empty whenever F is empty. However, S can be empty even if F is not.
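Membership in F from (1.7) is easy to test numerically: verify each equation of Ax = b up to a rounding tolerance and check nonnegativity component by component. A minimal sketch on the toy problem's standard-form data (the helper name `in_feasibility_set` is ours):

```python
def in_feasibility_set(A, b, x, tol=1e-9):
    # x lies in F = {x : Ax = b, x >= 0} iff every equation holds
    # (up to a rounding tolerance) and every component is nonnegative.
    rows_ok = all(abs(sum(a * xi for a, xi in zip(row, x)) - bi) <= tol
                  for row, bi in zip(A, b))
    return rows_ok and all(xi >= -tol for xi in x)

# Standard-form data of the toy problem, from (1.5).
A = [[2, 4, 1, 0, 0],
     [1, 0, 0, 1, 0],
     [0, 1, 0, 0, 1]]
b = [1000, 400, 100]

print(in_feasibility_set(A, b, [400, 50, 0, 0, 50]))   # plan (400, 50) plus its slacks
print(in_feasibility_set(A, b, [500, 100, 0, 0, 0]))   # violates the wood equation
```

The first point is the toy plan (400, 50) extended with its slack values, so it lies in F; the second fails the wood equation and lies outside F.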

The purpose of a linear programming algorithm is to determine whether S is empty or not and, in the latter case, to find a member x* of S. In case such an x* exists, we can write

    S = {x ∈ F : c^T x = c^T x*}   or   S = {x ∈ R^n : c^T x = c^T x*} ∩ F.          (1.10)

That is, S is the intersection of a hyperplane with the polyhedron F. Hence, S itself is a polyhedron. If the intersection is a singleton S = {x*}, then x* is the unique optimal solution; otherwise, there must exist infinitely many optimal solutions to the linear program.

## 1.5 Three Possibilities

A linear program is infeasible if its feasibility set is empty; otherwise, it is feasible. A linear program is unbounded if it is feasible but its objective value can be made arbitrarily good. For example, if a minimization problem is unbounded, then its objective value can be made arbitrarily small while maintaining feasibility; in other words, we can drive the objective value to negative infinity within the feasibility set. The situation is similar for an unbounded maximization problem, where we can drive the objective value to positive infinity.

Clearly, a linear program is unbounded only if its feasibility set is an unbounded set. However, an unbounded feasibility set does not necessarily imply that the linear program itself is unbounded. To make this clear, let us formally define the term unbounded for a set and for a linear program. We say a set F is unbounded if there exists a sequence {x^k} ⊆ F such that ‖x^k‖ → ∞ as k → ∞. On the other hand, we say a linear program min{c^T x : x ∈ F} is unbounded if there exists a sequence {x^k} ⊆ F such that c^T x^k → −∞ as k → ∞. Hence, for a linear program the term unbounded means objective unbounded. When the feasibility set F is unbounded, whether or not the corresponding linear program is unbounded depends entirely on the objective function. For example, consider F given by (1.9). The linear program

    max {x_1 + x_2 : (x_1, x_2) ∈ F} = max {2x_1 : x_1 ≥ 0}

is unbounded. However, when the objective is minimized instead, the resulting linear program has an optimal solution at the origin.
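These definitions can be exercised numerically on example (1.9). The sketch below (the helper name `feasible_19` is ours) walks the feasible sequence x^k = (k, k), along which the maximization objective x_1 + x_2 = 2k grows without bound:

```python
# F = {x in R^2 : x_1 - x_2 = 0, x >= 0} from (1.9).
def feasible_19(x):
    return x[0] - x[1] == 0 and min(x) >= 0

# A feasible sequence x^k = (k, k) with unbounded norm...
values = []
for k in [1, 10, 100, 1000]:
    x = (k, k)
    assert feasible_19(x)        # every x^k lies in F
    values.append(x[0] + x[1])   # ...and unbounded objective 2k

print(values)  # [2, 20, 200, 2000]

# Minimizing the same objective over F is a different story: every
# feasible point satisfies x_1 + x_2 = 2*x_1 >= 0, and the origin,
# which is feasible, attains the value 0.
assert feasible_19((0, 0))
```

So the set F is unbounded, the maximization program over it is unbounded, and yet the minimization program over the very same set has an optimal solution.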

If a linear program is feasible but not (objective) unbounded, then it must achieve a finite optimal value within its feasibility set; in other words, it has an optimal solution x* ∈ S ⊆ F. To sum up, for any given linear program there are three possibilities:

1. The linear program is infeasible, i.e., F = ∅. In this case, S = ∅.

2. The linear program is feasible but (objective) unbounded. In this case, F must be an unbounded set and S = ∅.

3. The linear program is feasible and has an optimal solution x* ∈ S ⊆ F. In this case, the feasibility set F can be either bounded or unbounded.

These three possibilities imply that if F is both nonempty and bounded, then the corresponding linear program must have an optimal solution x* ∈ S ⊆ F, regardless of its objective function.
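The trichotomy can be made concrete in the simplest possible setting, a single variable with bounds, min {c·x : lo ≤ x ≤ hi}. The classifier below is an illustrative sketch of the three outcomes, not an algorithm from the text:

```python
import math

def solve_1d(c, lo, hi):
    """Classify min{c*x : lo <= x <= hi} for one variable; lo may be
    -inf and hi may be +inf when a bound is absent. Returns
    ('infeasible',), ('unbounded',), or ('optimal', minimizer)."""
    if lo > hi:
        return ('infeasible',)                   # F is empty
    if (c > 0 and lo == -math.inf) or (c < 0 and hi == math.inf):
        return ('unbounded',)                    # c*x -> -inf within F
    if c == 0:
        # Every feasible point is optimal; pick one representative.
        return ('optimal', min(max(0, lo), hi))
    # The minimum sits at the finite endpoint the objective pushes toward.
    return ('optimal', lo if c > 0 else hi)

print(solve_1d(1, 2, 1))            # empty feasibility set
print(solve_1d(-1, 0, math.inf))    # unbounded set AND unbounded program
print(solve_1d(1, 0, math.inf))     # unbounded set, yet bounded program
```

The last two calls share the same unbounded feasibility set; only the sign of the objective decides whether the program itself is unbounded, mirroring the discussion of example (1.9).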


An Introduction to Linear Programming Steven J. Miller March 31, 2007 Mathematics Department Brown University 151 Thayer Street Providence, RI 02912 Abstract We describe Linear Programming, an important

### Date: April 12, 2001. Contents

2 Lagrange Multipliers Date: April 12, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 12 2.3. Informative Lagrange Multipliers...........

### Chapter 5. Linear Inequalities and Linear Programming. Linear Programming in Two Dimensions: A Geometric Approach

Chapter 5 Linear Programming in Two Dimensions: A Geometric Approach Linear Inequalities and Linear Programming Section 3 Linear Programming gin Two Dimensions: A Geometric Approach In this section, we

### Chapter 2: Linear Equations and Inequalities Lecture notes Math 1010

Section 2.1: Linear Equations Definition of equation An equation is a statement that equates two algebraic expressions. Solving an equation involving a variable means finding all values of the variable

### EQUATIONS and INEQUALITIES

EQUATIONS and INEQUALITIES Linear Equations and Slope 1. Slope a. Calculate the slope of a line given two points b. Calculate the slope of a line parallel to a given line. c. Calculate the slope of a line

### God created the integers and the rest is the work of man. (Leopold Kronecker, in an after-dinner speech at a conference, Berlin, 1886)

Chapter 2 Numbers God created the integers and the rest is the work of man. (Leopold Kronecker, in an after-dinner speech at a conference, Berlin, 1886) God created the integers and the rest is the work

### 5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1

5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 General Integer Linear Program: (ILP) min c T x Ax b x 0 integer Assumption: A, b integer The integrality condition

### Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

### Lecture 7: Finding Lyapunov Functions 1

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 7: Finding Lyapunov Functions 1

### Linear Programming Supplement E

Linear Programming Supplement E Linear Programming Linear programming: A technique that is useful for allocating scarce resources among competing demands. Objective function: An expression in linear programming

### A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form

Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...

### Notes V General Equilibrium: Positive Theory. 1 Walrasian Equilibrium and Excess Demand

Notes V General Equilibrium: Positive Theory In this lecture we go on considering a general equilibrium model of a private ownership economy. In contrast to the Notes IV, we focus on positive issues such

### Minimally Infeasible Set Partitioning Problems with Balanced Constraints

Minimally Infeasible Set Partitioning Problems with alanced Constraints Michele Conforti, Marco Di Summa, Giacomo Zambelli January, 2005 Revised February, 2006 Abstract We study properties of systems of

### MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

### 3. Mathematical Induction

3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)

### 1.2 Solving a System of Linear Equations

1.. SOLVING A SYSTEM OF LINEAR EQUATIONS 1. Solving a System of Linear Equations 1..1 Simple Systems - Basic De nitions As noticed above, the general form of a linear system of m equations in n variables

### 4.3-4.4 Systems of Equations

4.3-4.4 Systems of Equations A linear equation in 2 variables is an equation of the form ax + by = c. A linear equation in 3 variables is an equation of the form ax + by + cz = d. To solve a system of

### Similarity and Diagonalization. Similar Matrices

MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

### ALMOST COMMON PRIORS 1. INTRODUCTION

ALMOST COMMON PRIORS ZIV HELLMAN ABSTRACT. What happens when priors are not common? We introduce a measure for how far a type space is from having a common prior, which we term prior distance. If a type

### Notes on Determinant

ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
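The toy problem (1.1) is small enough to solve by hand, but it also illustrates how such models are handed to a solver. The sketch below, which assumes SciPy is installed, solves it with `scipy.optimize.linprog`; since `linprog` minimizes, the profit coefficients are negated. The wood budget is taken as the 1,000 cubic feet stated in the problem data (the "500" appearing later in the text seems to be a typo).

```python
# A sketch of solving the furniture LP (1.1) with SciPy (assumed installed).
# linprog minimizes c @ x, so we negate the profits to maximize 20*x1 + 30*x2.
from scipy.optimize import linprog

c = [-20, -30]        # negated unit profits for chairs (x1) and tables (x2)
A_ub = [[2, 4],       # wood usage: 2*x1 + 4*x2 <= 1000
        [1, 0],       # chair demand: x1 <= 400
        [0, 1]]       # table demand: x2 <= 100
b_ub = [1000, 400, 100]

# bounds of (0, None) encode the nonnegativity constraints x1, x2 >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print("production plan:", res.x)   # optimal numbers of chairs and tables
print("maximum profit:", -res.fun) # undo the negation of the objective
```

Checking the answer against the constraints: making 400 chairs uses 800 cubic feet of wood, and the remaining 200 cubic feet allow 50 tables, for a profit of 20(400) + 30(50) = 9,500 dollars.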