Two-Stage Stochastic Linear Programs

Two-Stage Stochastic Linear Programs
Operations Research
Anthony Papavasiliou

Two-Stage Stochastic Linear Programs
1 Short Reviews
  Probability Spaces and Random Variables
  Convex Analysis
2 Deterministic Linear Programs
3 Two-Stage Stochastic Linear Programs

Probability Spaces
A probability space is a triplet (Ω, A, P), where
Ω: the set of all random outcomes (example: the six possible results of throwing a die)
A: a collection of random events, where events are subsets of Ω (examples: the odd outcomes; the singleton {4})
P: a mapping P : A → [0, 1] with P(∅) = 0, P(Ω) = 1, and P(A1 ∪ A2) = P(A1) + P(A2) whenever A1 ∩ A2 = ∅

Distributions
The cumulative distribution function of a random variable ξ is defined as F_ξ(x) = P(ξ ≤ x)
For discrete random variables, the probability mass function f is defined as f(ξ_k) = P(ξ = ξ_k), k ∈ K, with Σ_{k ∈ K} f(ξ_k) = 1
For continuous random variables, the density function f is defined by P(a ≤ ξ ≤ b) = ∫_a^b f(ξ) dξ = ∫_a^b dF(ξ), with ∫ dF(ξ) = 1

Expectation, Variance, Quantiles
The expectation of a random variable is μ = Σ_{k ∈ K} ξ_k f(ξ_k) (discrete) or ∫ ξ dF(ξ) (continuous)
The variance of a random variable is E[(ξ − μ)²]
The r-th moment of ξ is ξ^(r) = E[ξ^r]
For 0 < α < 1, the α-quantile of ξ is the point η = min{x : F(x) ≥ α}
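As a quick numeric illustration of the quantile definition, a minimal sketch using NumPy (the `quantile` helper and the fair-die example are illustrative, not from the lecture):

```python
import numpy as np

def quantile(values, probs, alpha):
    """alpha-quantile of a discrete random variable: min{x : F(x) >= alpha}."""
    order = np.argsort(values)
    v = np.asarray(values, dtype=float)[order]
    F = np.cumsum(np.asarray(probs, dtype=float)[order])  # CDF at the support points
    return v[np.searchsorted(F, alpha)]  # first support point with F(x) >= alpha

# Fair die: support {1, ..., 6}, each outcome with probability 1/6
die = list(range(1, 7))
p = [1 / 6] * 6
quantile(die, p, 0.25)  # 2.0: F(1) = 1/6 < 0.25 but F(2) = 1/3 >= 0.25
```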

Convex Set, Extreme Point, Convex Hull
A convex combination of points x_i, i = 1, ..., n, is a point Σ_{i=1}^n λ_i x_i where Σ_{i=1}^n λ_i = 1 and λ_i ≥ 0, i = 1, ..., n
X is a convex set if it contains every convex combination of points x_i ∈ X
An extreme point of a convex set is a point that cannot be expressed as a convex combination of two distinct points of the set
The convex hull of a set of points is the set of all convex combinations of these points
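The convex-hull definition can be checked computationally: deciding whether a point is a convex combination of given points is a linear feasibility problem. A minimal sketch using scipy.optimize.linprog (the helper name and the unit-square data are invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, x):
    """Is x a convex combination of the given points? I.e. does
    lambda >= 0, sum lambda_i = 1, sum lambda_i p_i = x have a solution?"""
    P = np.asarray(points, dtype=float).T        # columns are the points p_i
    n = P.shape[1]
    A_eq = np.vstack([P, np.ones((1, n))])       # sum lambda_i p_i = x, sum lambda_i = 1
    b_eq = np.concatenate([np.asarray(x, dtype=float), [1.0]])
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.status == 0                       # status 0: a feasible optimum was found

square = [(0, 0), (1, 0), (0, 1), (1, 1)]
in_convex_hull(square, (0.5, 0.5))   # True: centre of the unit square
in_convex_hull(square, (2.0, 0.0))   # False: outside the square
```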

Affine Sets and Hyperplanes
h(x) = 0 is an affine constraint if it can be expressed with h(x) = Ax − b
The affine space is the set {x : h(x) = 0}
Each set {x : h_i(x) = A_i x − b_i = 0} is a hyperplane, where A_i is the i-th row of A
{x : Ax = 0} is the parallel subspace
The dimension of an affine space is the dimension of its parallel subspace

Convex Functions
f is a convex function if for all 0 ≤ λ ≤ 1 and any x1, x2 we have f(λx1 + (1 − λ)x2) ≤ λf(x1) + (1 − λ)f(x2)
The domain of f, dom f, is the set where f is finite
f is concave if −f is convex

Epigraph
The epigraph of f is the set epi(f) = {(x, β) : β ≥ f(x)}
f is convex if and only if its epigraph is a convex set

Piecewise Linearity and Separability
A continuous function f is piecewise linear if it is linear over regions defined by linear inequalities
A function is separable when it can be written as f(x) = Σ_{i=1}^n f_i(x_i)
Both of these properties can be exploited in computation

Deterministic Linear Programs
min z = c^T x
s.t. Ax = b
     x ≥ 0
where x ∈ R^n, c ∈ R^n, A ∈ R^{m×n}, b ∈ R^m
A solution is a vector x such that Ax = b
A feasible solution is a solution that also satisfies x ≥ 0
An optimal solution is a feasible solution x* such that c^T x* ≤ c^T x for all feasible solutions x
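For concreteness, a standard-form LP of exactly this shape can be solved numerically. A small sketch with made-up data (two decision variables plus two slacks), using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# min c^T x  s.t.  Ax = b, x >= 0   (x3, x4 are slack variables)
# Underlying problem: max x1 + 2*x2  s.t.  x1 + x2 <= 4, x2 <= 3
c = np.array([-1.0, -2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4)
# Optimal solution: x = (1, 3, 0, 0) with z = -7
```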

Linear Programming Definitions
A basis is a choice of m linearly independent columns of A, giving A = [B, N] after rearrangement
Associated with a basis is the basic solution x_B = B^{-1} b, x_N = 0, with z = c_B^T B^{-1} b
Basic feasible solutions correspond to extreme points of {x : Ax = b, x ≥ 0}
A basis is
  feasible if B^{-1} b ≥ 0
  optimal if c_N^T − c_B^T B^{-1} N ≥ 0 (can you prove this?)
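The basic-solution formulas above can be exercised directly with NumPy on a small invented instance (same standard form; the chosen basis {1, 2} happens to be both feasible and optimal here):

```python
import numpy as np

# min c^T x  s.t.  Ax = b, x >= 0  (slacks already added)
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

basic = [0, 1]                       # columns of x1, x2 form the basis B
nonbasic = [2, 3]
B, N = A[:, basic], A[:, nonbasic]

x_B = np.linalg.solve(B, b)          # basic solution: x_B = B^{-1} b, x_N = 0
feasible = bool(np.all(x_B >= 0))    # feasibility test: B^{-1} b >= 0
# Optimality test (minimization): reduced costs c_N^T - c_B^T B^{-1} N >= 0
reduced = c[nonbasic] - c[basic] @ np.linalg.solve(B, N)
optimal = feasible and bool(np.all(reduced >= 0))
z = c[basic] @ x_B                   # objective value at this basic solution
```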

Dual Problem
max π^T b
s.t. π^T A ≤ c^T
An unbounded dual implies an infeasible primal
An unbounded primal implies an infeasible dual
The primal and the dual can both be infeasible
Weak duality: if x is primal feasible and π is dual feasible, then c^T x ≥ π^T b
A primal optimal solution x* exists if and only if a dual optimal solution π* exists
Strong duality: c^T x* = (π*)^T b
Complementary slackness: (c^T − (π*)^T A) x* = 0
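Strong duality and complementary slackness can be verified numerically on a small made-up instance by solving the primal and the dual as separate LPs (scipy.optimize.linprog minimizes, so the dual objective is negated):

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

# Primal: min c^T x  s.t.  Ax = b, x >= 0
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 4)
# Dual: max pi^T b  s.t.  pi^T A <= c^T, pi free  (negate b to minimize)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2)

gap = primal.fun - (-dual.fun)            # strong duality: gap = 0
slack_times_x = (c - A.T @ dual.x) * primal.x  # complementary slackness: all zeros
```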

Linear Programming Duality Mnemonic Table

  Primal (minimize)      Dual (maximize)
  constraint ≥ b_i       variable ≥ 0
  constraint ≤ b_i       variable ≤ 0
  constraint = b_i       variable free
  variable ≥ 0           constraint ≤ c_j
  variable ≤ 0           constraint ≥ c_j
  variable free          constraint = c_j

2-Stage Stochastic Linear Programs
Stochastic linear programs: linear programs with uncertain data
Recourse programs: some decisions (recourse actions) can be taken after uncertainty is disclosed
First-stage decisions: decisions taken before uncertainty is revealed
Second-stage decisions: decisions taken after uncertainty is revealed
Sequence of events: x → ξ(ω) → y(ω, x)
The first and second stage may each contain sequences of decisions

Sequence of Events (figure)

Mathematical Formulation
min z = c^T x + E_ξ[min q(ω)^T y(ω)]
s.t. Ax = b
     T(ω) x + W y(ω) = h(ω)
     x ≥ 0, y(ω) ≥ 0
First-stage data and decisions: x ∈ R^{n_1}, c ∈ R^{n_1}, b ∈ R^{m_1}, A ∈ R^{m_1 × n_1}
For a given realization ω, the second-stage data are q(ω) ∈ R^{n_2}, h(ω) ∈ R^{m_2}, T(ω) ∈ R^{m_2 × n_1}
All random elements of the problem are assembled in a single random vector ξ(ω)^T = (q(ω)^T, h(ω)^T, T_1(ω), ..., T_{m_2}(ω)), where T_i(ω) is the i-th row of T(ω)

Example: Capacity Expansion Planning
min_{x, y ≥ 0} Σ_{i=1}^n I_i x_i + E_ξ[Σ_{i=1}^n Σ_{j=1}^m C_i T_j y_ij(ω)]
s.t. Σ_{i=1}^n y_ij(ω) = D_j(ω), j = 1, ..., m
     Σ_{j=1}^m y_ij(ω) ≤ x_i, i = 1, ..., n
I_i, C_i: fixed/variable cost of technology i
D_j(ω), T_j: height/width of load block j
y_ij(ω): capacity of technology i allocated to load block j
x_i: capacity of technology i
Why is T_j not dependent on ω?
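When the random demand takes finitely many values, this example can be written out as one large LP over all scenarios (the extensive form). A tiny hypothetical instance (2 technologies, 1 load block of width T_1 = 1, two equally likely demand scenarios; all cost and demand numbers are invented), using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

I = np.array([4.0, 1.0])    # investment cost I_i per unit of capacity
C = np.array([1.0, 5.0])    # operating cost C_i per unit served (T_1 = 1)
D = [1.0, 2.0]              # demand D_1(omega) in scenarios 1 and 2
p = [0.5, 0.5]              # scenario probabilities

# Variables: [x_1, x_2, y_11^s1, y_21^s1, y_11^s2, y_21^s2]
c = np.concatenate([I, p[0] * C, p[1] * C])

# Demand balance in each scenario: sum_i y_i1(omega) = D_1(omega)
A_eq = np.array([[0., 0., 1., 1., 0., 0.],
                 [0., 0., 0., 0., 1., 1.]])
b_eq = np.array(D)
# Capacity limits in each scenario: y_i1(omega) - x_i <= 0
A_ub = np.array([[-1., 0., 1., 0., 0., 0.],
                 [0., -1., 0., 1., 0., 0.],
                 [-1., 0., 0., 0., 1., 0.],
                 [0., -1., 0., 0., 0., 1.]])
b_ub = np.zeros(4)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6)
x_first_stage = res.x[:2]   # capacities chosen before demand is revealed
```

Note how the first-stage variables x are shared by both scenarios while each scenario gets its own copy of the dispatch variables y.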

Example: Procrastination
max_{x, y ≥ 0} c_1 x + E_ξ[c_2 y(ω)]
s.t. x + y(ω) ≤ T
     x + y(ω) ≥ T/2
     y(ω) ≤ N(ω)
x, y(ω): amount of work done before/after the deadline T
c_1, c_2: quality increase per day of work done prior to/after the deadline
T: deadline
N(ω): deadline extension

Deterministic Equivalent Program
Let Q(x, ξ(ω)) = min_y {q(ω)^T y : Wy = h(ω) − T(ω)x, y ≥ 0} be the second-stage value function
Let V(x) = E_ξ[Q(x, ξ(ω))] be the expected second-stage value function
The deterministic equivalent program is
min z = c^T x + V(x)
s.t. Ax = b
     x ≥ 0
If V(x) is given, the problem is just a linear program (we will see why soon)
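The definitions of Q and V can be made concrete with a tiny recourse problem (entirely invented: scalar first stage, shortage/surplus second stage with W = [1, −1], so the recourse LP is always feasible):

```python
import numpy as np
from scipy.optimize import linprog

# Second stage: y = (shortage, surplus), Wy = h(omega) - Tx, y >= 0,
# so Q(x, h) = 2*max(h - x, 0) + 0.5*max(x - h, 0).
q = np.array([2.0, 0.5])
W = np.array([[1.0, -1.0]])
T = np.array([[1.0]])
scenarios = [(0.5, np.array([2.0])), (0.5, np.array([4.0]))]  # (prob, h(omega))

def Q(x, h):
    """Second-stage value function: min q^T y s.t. Wy = h - Tx, y >= 0."""
    res = linprog(q, A_eq=W, b_eq=h - T @ x, bounds=[(0, None)] * 2)
    return res.fun

def V(x):
    """Expected second-stage value function V(x) = E[Q(x, xi)]."""
    return sum(prob * Q(x, h) for prob, h in scenarios)

# Q is piecewise linear in x for each scenario; V averages over scenarios:
# V(3) = 0.5*Q(3, 2) + 0.5*Q(3, 4) = 0.5*0.5 + 0.5*2 = 1.25
```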

Convexity of Q(x, ξ(ω))
Q(x, ξ(ω)) is a convex function of x
Proof sketch: if y1 is an optimal reaction to x1 and y2 is an optimal reaction to x2, then λy1 + (1 − λ)y2 is a feasible reaction to λx1 + (1 − λ)x2, so Q(λx1 + (1 − λ)x2, ξ(ω)) ≤ λQ(x1, ξ(ω)) + (1 − λ)Q(x2, ξ(ω))
What can we say about V(x)?

Making Constraints Implicit
Q(x, ξ) can be represented as Q(x, ξ) = min_y {q(ω)^T y + δ(y | Y(x, ξ))}, where Y(x, ξ) = {y ≥ 0 : T(ω)x + Wy = h(ω)} and δ(· | ·) is the indicator function
δ(y | Y) = 0 if y ∈ Y, +∞ otherwise

Some Notation and Terminology
K_1 = {x : Ax = b, x ≥ 0}
K_2(ξ) = {x : there exists y(ω) ≥ 0 with T(ω)x + Wy(ω) = h(ω)}
K_2 = {x : V(x) < +∞}
Relatively complete recourse: K_1 ⊆ K_2
Complete recourse: pos W = R^{m_2}, where pos W = {Wy : y ≥ 0}
Fixed recourse: W does not depend on ω
Does Example 1 have complete recourse? Fixed recourse?
Does Example 2 have complete recourse? Fixed recourse?
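The condition pos W = R^{m_2} can be tested numerically: since pos W is a convex cone, it equals the whole space exactly when it contains every signed unit vector ±e_i, and each membership test is an LP feasibility problem. A minimal sketch (the helper name is invented):

```python
import numpy as np
from scipy.optimize import linprog

def complete_recourse(W):
    """Check pos W = R^m by testing that Wy = +-e_i, y >= 0 is feasible
    for every signed unit vector (sufficient, since pos W is a convex cone)."""
    m, n = W.shape
    for i in range(m):
        for sign in (1.0, -1.0):
            e = np.zeros(m)
            e[i] = sign
            res = linprog(np.zeros(n), A_eq=W, b_eq=e, bounds=[(0, None)] * n)
            if res.status != 0:   # infeasible: +-e_i is not in pos W
                return False
    return True

complete_recourse(np.array([[1.0, -1.0]]))   # True: pos W = R
complete_recourse(np.array([[1.0, 2.0]]))    # False: pos W = [0, +inf)
```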

Further Reading (BL: Birge and Louveaux, Introduction to Stochastic Programming)
2.1 BL: probability spaces and random variables
2.2 BL: deterministic linear programs
2.3 BL: decisions and stages
2.4 BL: 2-stage stochastic program with fixed recourse
2.6 BL: implicit representation of the second stage
2.11 BL: short reviews