Definition of a Linear Program



Definition of a Linear Program

Definition: A function f(x_1, x_2, ..., x_n) of x_1, x_2, ..., x_n is a linear function if and only if for some set of constants c_1, c_2, ..., c_n,

    f(x_1, x_2, ..., x_n) = c_1 x_1 + c_2 x_2 + ... + c_n x_n.

Examples: x_1;  5x_1 + 6x_2;  x_2 + 13.

Non-examples: x_1^2;  x_1 + 3x_2 x_3;  x_1 x_2.

Linear Inequalities

Definition: For any linear function f(x_1, x_2, ..., x_n) and any number b, the inequalities f(x_1, x_2, ..., x_n) ≤ b and f(x_1, x_2, ..., x_n) ≥ b are linear inequalities.

Examples: x_1 + x_2 ≤ 5;  x_1 ≥ 0.

Note: If an inequality can be rewritten as a linear inequality then it is one. Thus x_1 + x_2 ≤ 3x_3 is a linear inequality because it can be rewritten as x_1 + x_2 − 3x_3 ≤ 0. Even x_1/2 ≤ x_2 is a linear inequality because it can be rewritten as x_1 − 2x_2 ≤ 0. Note that x_1/x_2 + x_3 ≤ 2 is not a linear inequality, however.

Definition: For any linear function f(x_1, x_2, ..., x_n) and any number b, the equality f(x_1, x_2, ..., x_n) = b is a linear equality.

LPs

Definition: A linear programming problem (LP) is an optimization problem for which:

1. We attempt to maximize (or minimize) a linear function of the decision variables (the objective function).
2. The values of the decision variables must satisfy a set of constraints, each of which must be a linear inequality or linear equality.
3. There is a sign restriction on each variable. For each variable x_i the sign restriction can either say (a) x_i ≥ 0, (b) x_i ≤ 0, or (c) x_i unrestricted (urs).

Definition: A solution to a linear program is a setting of the variables.

Definition: A feasible solution to a linear program is a solution that satisfies all constraints.

Definition: The feasible region of a linear program is the set of all feasible solutions.
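
To make the definition concrete, here is a minimal sketch of a small hypothetical LP written in exactly this form and solved numerically (it assumes SciPy is available; the LP data and names are illustrative and not part of the original notes). Note that scipy.optimize.linprog minimizes, so the objective coefficients are negated.

    from scipy.optimize import linprog

    # Hypothetical LP:
    #   maximize   3 x1 + 2 x2          <- linear objective function
    #   subject to   x1 +   x2 <= 4     <- linear inequality constraints
    #                x1 + 3 x2 <= 6
    #                x1, x2 >= 0        <- sign restrictions
    res = linprog(c=[-3, -2],                       # negated because linprog minimizes
                  A_ub=[[1, 1], [1, 3]],
                  b_ub=[4, 6],
                  bounds=[(0, None), (0, None)],    # x1 >= 0, x2 >= 0
                  method="highs")
    print(res.x, -res.fun)    # optimal solution [4. 0.] with value 12

Any setting of the variables that satisfies all constraints and sign restrictions (for example x_1 = 1, x_2 = 1) is a feasible solution; the solver returns the feasible solution with the largest objective value.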

Geometry Definitions

Definition: A set of points S is a convex set if the line segment joining any two points in S is wholly contained in S.

Fact: The set of feasible solutions to an LP (the feasible region) forms a (possibly unbounded) convex set.

Definition: A point p of a convex set S is an extreme point if each line segment that lies completely in S and contains p has p as an endpoint. An extreme point is also called a corner point.

Fact: Every linear program that has an optimal solution has an extreme point that is an optimal solution.

Corollary: An algorithm to solve a linear program only needs to consider extreme points.

Definition: A constraint of a linear program is binding at a point p if the inequality is met with equality at p. It is nonbinding otherwise. (Recall that a point is the same as a solution.)

Solutions to Linear Programs

Definition: An optimal solution to a linear program is a feasible solution with the largest objective function value (for a maximization problem). The value of the objective function at an optimal solution is said to be the value of the linear program. A linear program may have multiple optimal solutions, but only one optimal solution value.

Definition: A linear program is infeasible if it has no feasible solutions, i.e. the feasible region is empty.

Definition: A linear program is unbounded if the objective value can be made arbitrarily good, i.e. it can be driven to +∞ for a maximization problem (or to −∞ for a minimization problem). Note that the feasible region may be unbounded, but this is not the same as the linear program being unbounded.

Theorem: Every linear program either
1. is infeasible,
2. is unbounded, or
3. has a unique optimal solution value.
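
Both failure modes can be observed numerically. In the following sketch (again assuming SciPy; both LPs are made-up illustrations), the first problem has contradictory constraints and the second can improve its objective forever:

    from scipy.optimize import linprog

    # Infeasible: x >= 0 together with x <= -1 has no solution.
    infeasible = linprog(c=[1], A_ub=[[1]], b_ub=[-1],
                         bounds=[(0, None)], method="highs")

    # Unbounded: minimize -x (i.e. maximize x) subject only to -x <= 0.
    unbounded = linprog(c=[-1], A_ub=[[-1]], b_ub=[0],
                        bounds=[(None, None)], method="highs")

    print(infeasible.status, infeasible.message)   # expected status 2: infeasible
    print(unbounded.status, unbounded.message)     # expected status 3: unbounded

The second problem's feasible region {x : x ≥ 0} is unbounded and the LP itself is unbounded; by contrast, minimizing x over the same unbounded region would have the perfectly finite optimal value 0.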

Basic and nonbasic variables and solutions

Consider a linear programming problem in which all the constraints are equalities (the conversion can be accomplished with slack and excess variables).

Definition: If there are n variables and m constraints, a solution with at most m non-zero values is a basic solution.

Definition: In a basic solution, n − m of the zero-valued variables are considered non-basic variables and the remaining m variables are considered basic variables.

Fact: Any linear program that is not infeasible or unbounded has an optimal solution that is basic. Basic feasible solutions correspond to extreme points of the feasible region.

Fact: Let B be formed from m linearly independent columns of the A matrix (a basis), and let x_B be the corresponding variables; then B x_B = b.
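
The last fact can be illustrated directly on a small hypothetical equality-form system (a NumPy sketch; the numbers are made up and not from the notes): pick m columns of A as the basis B, solve B x_B = b, and set the non-basic variables to zero.

    import numpy as np

    # Hypothetical system with n = 4 variables and m = 2 equality constraints.
    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [2.0, 1.0, 0.0, 1.0]])
    b = np.array([4.0, 5.0])

    for cols in [(0, 1), (0, 3)]:        # two choices of basis columns
        B = A[:, cols]                   # m linearly independent columns
        x_B = np.linalg.solve(B, b)      # basic variables: solve B x_B = b
        x = np.zeros(A.shape[1])
        x[list(cols)] = x_B              # non-basic variables stay at 0
        print(cols, x, np.all(x >= 0))

    # Expected output (roughly):
    #   (0, 1) [1. 3. 0. 0.] True     <- a basic feasible solution
    #   (0, 3) [4. 0. 0. -3.] False   <- basic, but violates x >= 0

Both settings are basic solutions (at most m = 2 non-zero values); only the first one is a basic feasible solution.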

Some facts to prove on the board:

- The feasible region of an LP is convex.
- If an LP, max_x c^T x s.t. Ax = b, x ≥ 0, has a feasible solution, then it has a basic feasible solution.
- x is an extreme point of S = {x : Ax = b, x ≥ 0} iff x is a basic feasible solution.
- If an LP has a finite optimal solution, then it has an optimal solution at an extreme point of the feasible set.
- If there is an optimal solution to an LP problem, then there is an optimal basic feasible solution.
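
The last two facts can be checked by brute force on a small instance (a NumPy sketch; the LP is the same hypothetical one used earlier, rewritten in equality form with slack variables x_3 and x_4, and is not from the notes): enumerate every basis, keep the basic feasible solutions, and observe that the best objective value is attained at one of them.

    import numpy as np
    from itertools import combinations

    # maximize 3x1 + 2x2  s.t.  x1 + x2 + x3 = 4,  x1 + 3x2 + x4 = 6,  x >= 0
    c = np.array([3.0, 2.0, 0.0, 0.0])
    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [1.0, 3.0, 0.0, 1.0]])
    b = np.array([4.0, 6.0])
    m, n = A.shape

    best = None
    for cols in combinations(range(n), m):        # every candidate basis
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:         # skip singular column choices
            continue
        x = np.zeros(n)
        x[list(cols)] = np.linalg.solve(B, b)     # basic solution for this basis
        if np.all(x >= -1e-9):                    # keep only basic feasible solutions
            if best is None or c @ x > c @ best:
                best = x

    print(best, c @ best)   # expected: [4. 0. 0. 2.] with value 12.0

Since basic feasible solutions correspond to extreme points, the best one found ((4, 0) in the original variables) is the optimal extreme point, matching the earlier solver output.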

Simplex

The simplex algorithm moves from basic feasible solution to basic feasible solution; at each iteration it increases (or at least does not decrease) the objective function value. Each step is a pivot: it chooses one variable to enter the basis and another to leave the basis. The entering variable is a non-basic variable with a positive objective function coefficient. The leaving variable is chosen via a ratio test, and is the basic variable in the constraint that most limits the increase of the entering variable. When no non-basic variable has a positive coefficient in the objective function, we can stop.
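
Below is a compact sketch of this procedure in tableau form (a teaching illustration assuming NumPy; simplex_max is a made-up name, the entering rule is the most-negative-entry rule, and there are no anti-cycling safeguards). The tableau's objective row stores negated coefficients, so a variable with a positive coefficient in the dictionary form above shows up as a negative entry here.

    import numpy as np

    def simplex_max(c, A, b):
        # Maximize c @ x subject to A @ x <= b, x >= 0, assuming b >= 0.
        m, n = A.shape
        # Tableau: constraint rows [A | I | b], objective row [-c | 0 | 0].
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n] = A
        T[:m, n:n + m] = np.eye(m)
        T[:m, -1] = b
        T[-1, :n] = -np.asarray(c, dtype=float)
        basis = list(range(n, n + m))            # the slack variables start out basic

        while True:
            j = int(np.argmin(T[-1, :-1]))       # entering column (most negative entry)
            if T[-1, j] >= 0:
                break                            # nothing can improve the objective: stop
            ratios = np.full(m, np.inf)          # ratio test for the leaving variable
            pos = T[:m, j] > 1e-12
            ratios[pos] = T[:m, -1][pos] / T[:m, j][pos]
            i = int(np.argmin(ratios))
            if not np.isfinite(ratios[i]):
                raise ValueError("LP is unbounded")
            T[i] /= T[i, j]                      # pivot: scale the pivot row ...
            for r in range(m + 1):               # ... and eliminate column j elsewhere
                if r != i:
                    T[r] -= T[r, j] * T[i]
            basis[i] = j

        x = np.zeros(n + m)
        x[basis] = T[:m, -1]
        return x[:n], T[-1, -1]                  # optimal point and objective value

    # Same hypothetical LP as before: maximize 3x1 + 2x2, x1 + x2 <= 4, x1 + 3x2 <= 6.
    x_opt, z_opt = simplex_max([3, 2], np.array([[1.0, 1.0], [1.0, 3.0]]), [4, 6])
    print(x_opt, z_opt)    # expected: [4. 0.] 12.0

Applied to the worked example below, the same function returns x = (8, 4, 0) with value 28, matching the hand pivots.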

Simplex Example

Original LP:

    maximize    3x_1 +  x_2 + 2x_3              (1)
    subject to   x_1 +  x_2 + 3x_3 ≤ 30         (2)
                2x_1 + 2x_2 + 5x_3 ≤ 24         (3)
                4x_1 +  x_2 + 2x_3 ≤ 36         (4)
                 x_1,  x_2,  x_3 ≥ 0.           (5)

Standard form (with slack variables x_4, x_5, x_6):

    z   = 3x_1 + x_2 + 2x_3                     (6)
    x_4 = 30 −  x_1 −  x_2 − 3x_3               (7)
    x_5 = 24 − 2x_1 − 2x_2 − 5x_3               (8)
    x_6 = 36 − 4x_1 −  x_2 − 2x_3.              (9)

Iteration 1

    z   = 3x_1 + x_2 + 2x_3                     (10)
    x_4 = 30 −  x_1 −  x_2 − 3x_3               (11)
    x_5 = 24 − 2x_1 − 2x_2 − 5x_3               (12)
    x_6 = 36 − 4x_1 −  x_2 − 2x_3.              (13)

Pivot in x_1. Remove x_6 from the basis.

    z   = 27 +  x_2/4 +  x_3/2 − 3x_6/4         (14)
    x_1 =  9 −  x_2/4 −  x_3/2 −  x_6/4         (15)
    x_4 = 21 − 3x_2/4 − 5x_3/2 +  x_6/4         (16)
    x_5 =  6 − 3x_2/2 − 4x_3   +  x_6/2.        (17)

Iteration 2

    z   = 27 +  x_2/4 +  x_3/2 − 3x_6/4         (18)
    x_1 =  9 −  x_2/4 −  x_3/2 −  x_6/4         (19)
    x_4 = 21 − 3x_2/4 − 5x_3/2 +  x_6/4         (20)
    x_5 =  6 − 3x_2/2 − 4x_3   +  x_6/2.        (21)

Pivot in x_3. Remove x_5 from the basis.

    z   = 111/4 +  x_2/16 −  x_5/8 − 11x_6/16   (22)
    x_1 =  33/4 −  x_2/16 +  x_5/8 −  5x_6/16   (23)
    x_3 =   3/2 − 3x_2/8  −  x_5/4 +   x_6/8    (24)
    x_4 =  69/4 + 3x_2/16 + 5x_5/8 −   x_6/16.  (25)

Iteration 3

    z   = 111/4 +  x_2/16 −  x_5/8 − 11x_6/16   (26)
    x_1 =  33/4 −  x_2/16 +  x_5/8 −  5x_6/16   (27)
    x_3 =   3/2 − 3x_2/8  −  x_5/4 +   x_6/8    (28)
    x_4 =  69/4 + 3x_2/16 + 5x_5/8 −   x_6/16.  (29)

Pivot in x_2. Remove x_3 from the basis.

    z   = 28 −  x_3/6  −  x_5/6  − 2x_6/3       (30)
    x_1 =  8 +  x_3/6  +  x_5/6  −  x_6/3       (31)
    x_2 =  4 − 8x_3/3  − 2x_5/3  +  x_6/3       (32)
    x_4 = 18 −  x_3/2  +  x_5/2.                (33)
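
No non-basic variable now has a positive coefficient in the objective row, so the algorithm stops: the basic feasible solution x_1 = 8, x_2 = 4, x_3 = 0 with value z = 28 is optimal. A quick numerical cross-check of the whole example (a sketch assuming SciPy is available; not part of the original notes):

    from scipy.optimize import linprog

    # The example LP above: maximize 3x1 + x2 + 2x3 subject to the three constraints.
    res = linprog(c=[-3, -1, -2],                        # negated because linprog minimizes
                  A_ub=[[1, 1, 3], [2, 2, 5], [4, 1, 2]],
                  b_ub=[30, 24, 36],
                  bounds=[(0, None)] * 3,
                  method="highs")
    print(res.x, -res.fun)    # expected: approximately [8. 4. 0.] and 28.0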

Duality

Given a linear program (the primal)

    max  c^T x
    s.t. Ax ≤ b
         x ≥ 0

the dual is

    min  b^T y
    s.t. A^T y ≥ c
         y ≥ 0

where x is an n-dimensional vector, y is an m-dimensional vector, c is an n-dimensional vector, b is an m-dimensional vector, and A is an m × n matrix.
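
As a numerical illustration (SciPy assumed; not part of the original notes), we can form the dual of the example LP from the simplex section and solve both problems. The dual's ≥ constraints are negated into ≤ constraints because linprog only accepts upper-bound inequalities; the two optimal values agree, which foreshadows the Dual Theorem below.

    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1, 3], [2, 2, 5], [4, 1, 2]])   # data of the example LP
    b = np.array([30.0, 24, 36])
    c = np.array([3.0, 1, 2])

    # Primal: max c^T x, Ax <= b, x >= 0   (minimize -c^T x).
    primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3, method="highs")
    # Dual:   min b^T y, A^T y >= c, y >= 0   (rewrite A^T y >= c as -A^T y <= -c).
    dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3, method="highs")

    print(-primal.fun, dual.fun)   # both should be 28.0
    print(dual.x)                  # expected: approximately [0, 1/6, 2/3]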

Dual Theorem

Theorem: Given a primal and dual linear program, let x* be the optimal solution to the primal and y* be the optimal solution to the dual. Then c^T x* = b^T y*. Furthermore, the optimal dual solution can be read off as the negative of the coefficients of the corresponding slack/excess variables in the final simplex tableau.

Proof: Write the constraints of the original LP as

    x_{n+i} = b_i − Σ_{j=1}^{n} a_{ij} x_j,    i = 1, ..., m,

where the x_{n+i} are the original slack variables. Solve using simplex. The final objective function is of the form

    Z = Z* + Σ_{k=1}^{n+m} c̄_k x_k,

where the c̄_k are the final coefficients of the non-basic variables and are zero for the basic variables. Thus c̄_k ≤ 0, because the condition for stopping is that the coefficients of the non-basic variables (including the slack variables) be non-positive, while the coefficients of the basic variables equal zero.

Proof continued

So we have

    Z = Z* + Σ_{k=1}^{n+m} c̄_k x_k.

From this equation it is easy to find the dual solution by reading off the coefficients of the slack variables. Let y*_i = −c̄_{n+i}. The claim is that y* satisfies Σ_{j=1}^{n} c_j x*_j = Σ_{i=1}^{m} b_i y*_i and is feasible for the dual. To see this, write

    Σ_{j=1}^{n} c_j x_j = Z
                        = Z* + Σ_{k=1}^{n+m} c̄_k x_k
                        = Z* + Σ_{j=1}^{n} c̄_j x_j + Σ_{i=1}^{m} c̄_{n+i} x_{n+i}
                        = Z* + Σ_{j=1}^{n} c̄_j x_j − Σ_{i=1}^{m} y*_i ( b_i − Σ_{j=1}^{n} a_{ij} x_j ).

The last line follows from the definition of y*_i and the definitions of the slack variables in the original LP.

Proof continued

So we have that the final objective function can be written as

    Z* + Σ_{j=1}^{n} c̄_j x_j − Σ_{i=1}^{m} y*_i ( b_i − Σ_{j=1}^{n} a_{ij} x_j ).

Reordering terms, this is

    ( Z* − Σ_{i=1}^{m} b_i y*_i ) + Σ_{j=1}^{n} ( c̄_j + Σ_{i=1}^{m} a_{ij} y*_i ) x_j.

Now, recall that pivoting is just rewriting in an equivalent way, so we actually have that, for any feasible x:

    Σ_{j=1}^{n} c_j x_j = ( Z* − Σ_{i=1}^{m} b_i y*_i ) + Σ_{j=1}^{n} ( c̄_j + Σ_{i=1}^{m} a_{ij} y*_i ) x_j.

Because this equation must be true for infinitely many values of x, the constant terms and the coefficients of each x_j must match on the two sides:

    Z* − Σ_{i=1}^{m} b_i y*_i = 0    and    c_j = c̄_j + Σ_{i=1}^{m} a_{ij} y*_i.

Proof continued

    Z* − Σ_{i=1}^{m} b_i y*_i = 0    and    c_j = c̄_j + Σ_{i=1}^{m} a_{ij} y*_i.

The first of these says that Z* = Σ_{i=1}^{m} b_i y*_i; in other words, y* attains the primal optimal value, so it is optimal once we know it is feasible. The second says that

    Σ_{i=1}^{m} a_{ij} y*_i − c_j = −c̄_j.

But recall that we have said that c̄_j ≤ 0, because these are the coefficients of the objective function in the final tableau. Therefore

    Σ_{i=1}^{m} a_{ij} y*_i ≥ c_j,

and, since also y*_i = −c̄_{n+i} ≥ 0, y* is feasible.

Complementary Slackness

Theorem: Let x be feasible in P and let y be feasible in D. Necessary and sufficient conditions for optimality of x and y are:

    Σ_{i=1}^{m} a_{ij} y_i = c_j   OR   x_j = 0,    for all j = 1, ..., n,

AND

    Σ_{j=1}^{n} a_{ij} x_j = b_i   OR   y_i = 0,    for all i = 1, ..., m.

In words: for each pair, either the constraint is tight or the corresponding variable of the other problem is zero.
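
A quick numerical check of these conditions on the worked example (a NumPy sketch; the primal optimum comes from the simplex iterations above, the dual optimum is read off the final tableau as in the Dual Theorem, and this illustration is not part of the original notes):

    import numpy as np

    A = np.array([[1.0, 1, 3], [2, 2, 5], [4, 1, 2]])
    b = np.array([30.0, 24, 36])
    c = np.array([3.0, 1, 2])
    x = np.array([8.0, 4, 0])        # primal optimal solution
    y = np.array([0.0, 1/6, 2/3])    # dual optimal solution (negated slack coefficients)

    primal_slack = b - A @ x         # b_i - sum_j a_ij x_j  (>= 0 since x is feasible)
    dual_slack = A.T @ y - c         # sum_i a_ij y_i - c_j  (>= 0 since y is feasible)

    print(primal_slack * y)          # each product should be 0
    print(dual_slack * x)            # each product should be 0

Here the first primal constraint is slack (by 18) but y_1 = 0, the second and third constraints are tight, x_3 = 0 while its dual constraint is slack, and x_1, x_2 > 0 with their dual constraints tight: exactly the pattern the theorem requires.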