Two-Stage Stochastic Linear Programs
Operations Research
Anthony Papavasiliou
Outline
1. Short Reviews
   - Probability Spaces and Random Variables
   - Convex Analysis
2. Deterministic Linear Programs
3. Two-Stage Stochastic Linear Programs
Probability Spaces
A probability space is the triplet (Ω, A, P), where
- Ω: set of all random outcomes (example: the six possible results of throwing a die)
- A: collection of random events, where events are subsets of Ω (example: odd numbers, {4})
- P: A → [0, 1], a mapping with P(∅) = 0, P(Ω) = 1, and P(A_1 ∪ A_2) = P(A_1) + P(A_2) if A_1 ∩ A_2 = ∅
Distributions
The cumulative distribution function of a random variable ξ is defined as F_ξ(x) = P(ξ ≤ x)
For discrete random variables, the probability mass function f is defined as f(ξ_k) = P(ξ = ξ_k), k ∈ K, with ∑_{k ∈ K} f(ξ_k) = 1
For continuous random variables, the density function f is defined by P(a ≤ ξ ≤ b) = ∫_a^b f(ξ) dξ = ∫_a^b dF(ξ), with ∫ dF(ξ) = 1
Expectation, Variance, Quantiles
The expectation of a random variable is μ = ∑_{k ∈ K} ξ_k f(ξ_k) (discrete) or ∫ ξ dF(ξ) (continuous)
The variance of a random variable is E[(ξ − μ)^2]
The r-th moment of ξ is ξ^(r) = E[ξ^r]
The α-quantile of ξ, for 0 < α < 1, is the point η = min{x : F(x) ≥ α}
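As a concrete illustration, these quantities can be computed for a small discrete random variable. The sketch below uses a fair die (as in the earlier probability-space example); the data and the numpy-based code are illustrative, not part of the slides:

```python
import numpy as np

# Fair die: outcomes 1..6, each with probability 1/6 (illustrative data)
outcomes = np.arange(1, 7, dtype=float)
probs = np.full(6, 1 / 6)

mu = np.sum(outcomes * probs)               # expectation: sum_k xi_k f(xi_k)
var = np.sum((outcomes - mu) ** 2 * probs)  # variance: E[(xi - mu)^2]

def quantile(alpha):
    """alpha-quantile: min{x : F(x) >= alpha}, for 0 < alpha < 1."""
    F = np.cumsum(probs)                    # cumulative distribution at each outcome
    return outcomes[np.searchsorted(F, alpha)]

print(mu, var, quantile(0.9))  # approximately 3.5, 2.9167, 6.0
```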
Convex Set, Extreme Point, Convex Hull
A convex combination of points x_i, i = 1, ..., n, is a point ∑_{i=1}^n λ_i x_i where ∑_{i=1}^n λ_i = 1 and λ_i ≥ 0, i = 1, ..., n
X is a convex set if it contains every convex combination of points x_i ∈ X
An extreme point of a convex set is a point that cannot be expressed as a convex combination of two distinct points in the set
The convex hull of a set of points is the set of all convex combinations of these points
Affine Sets and Hyperplanes
h(x) = 0 is an affine constraint if it can be expressed as h(x) = Ax − b
Affine space: the set {x : h(x) = 0}
Each set {x : h_i(x) = A_i x − b_i = 0} is a hyperplane (where A_i is the i-th row of A)
{x : Ax = 0} is the parallel subspace
The dimension of an affine space is the dimension of its parallel subspace
Convex Functions
f is a convex function if for all 0 ≤ λ ≤ 1 and any x_1, x_2 we have f(λ x_1 + (1 − λ) x_2) ≤ λ f(x_1) + (1 − λ) f(x_2)
The domain of f, dom f, is the set where f is finite
f is concave if −f is convex
Epigraph
The epigraph of f is the set epi(f) = {(x, β) : β ≥ f(x)}
f is convex if and only if its epigraph is convex
Piecewise Linearity and Separability
A continuous function f is piecewise linear if it is linear over regions defined by linear inequalities
A function is separable when it can be written as f(x) = ∑_{i=1}^n f_i(x_i)
Both of these properties can be exploited in computation
Deterministic Linear Programs
min z = c^T x
s.t. Ax = b
     x ≥ 0
where x ∈ R^n, c ∈ R^n, A ∈ R^{m×n}, b ∈ R^m
A solution is a vector x such that Ax = b
A feasible solution is a solution with x ≥ 0
An optimal solution is a feasible solution x* such that c^T x* ≤ c^T x for all feasible solutions x
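A small instance in this standard form can be solved numerically. The sketch below assumes scipy is available and uses made-up data:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative standard-form LP (made-up data):
#   min  c^T x   s.t.  Ax = b,  x >= 0
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([4.0, 1.0])

# Equalities go in A_eq/b_eq; the bounds encode x >= 0
res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)
print(res.x, res.fun)  # optimal solution and optimal objective value
```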
Linear Programming Definitions
A basis is a choice of m linearly independent columns of A, with A = [B, N] (after rearrangement)
Associated with a basis, we have a basic solution x_B = B^{-1} b, x_N = 0, z = c_B^T B^{-1} b
Basic feasible solutions correspond to extreme points of {x : Ax = b, x ≥ 0}
Conditions for a basis to be
- feasible: B^{-1} b ≥ 0
- optimal: c_N^T − c_B^T B^{-1} N ≥ 0 (can you prove this?)
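These formulas can be checked directly on a toy instance (made-up data; the basis choice below is one hypothetical pick of m = 2 columns):

```python
import numpy as np

# Standard-form data with m = 2 rows: a basis is 2 linearly independent columns
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([4.0, 1.0])

basis, nonbasis = [0, 2], [1]        # columns forming B and N
B, N = A[:, basis], A[:, nonbasis]

x_B = np.linalg.solve(B, b)          # basic solution: x_B = B^{-1} b, x_N = 0
z = c[basis] @ x_B                   # objective value: c_B^T B^{-1} b

# Feasibility test: B^{-1} b >= 0
# Optimality test: reduced costs c_N^T - c_B^T B^{-1} N >= 0
reduced = c[nonbasis] - c[basis] @ np.linalg.solve(B, N)
print(x_B, z, reduced)
```

Here both tests pass, so this basic solution is feasible and optimal.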
Dual Problem
max π^T b
s.t. π^T A ≤ c^T
- Unbounded dual ⇒ infeasible primal
- Unbounded primal ⇒ infeasible dual
- Primal and dual can both be infeasible
- Weak duality: if x is primal feasible and π is dual feasible, then c^T x ≥ π^T b
- There exists a primal optimal solution x* if and only if there exists a dual optimal solution π*
- Strong duality: c^T x* = (π*)^T b
- Complementary slackness: ((π*)^T A − c^T) x* = 0
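Weak and strong duality can be verified numerically on a small primal/dual pair (made-up data; scipy assumed). Since linprog minimizes, the dual max π^T b is passed as min −b^T π with free π:

```python
import numpy as np
from scipy.optimize import linprog

# Primal: min c^T x  s.t. Ax = b, x >= 0   (illustrative data)
c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([4.0, 1.0])
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)

# Dual: max pi^T b  s.t. pi^T A <= c^T, pi free
# (minimize -b^T pi; A is transposed so each row is one dual constraint)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2)

# Strong duality: the two optimal objective values coincide
print(primal.fun, -dual.fun)
```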
Linear Programming Duality Mnemonic Table
Primal (minimize)        Dual (maximize)
constraint ≥ b_i         variable π_i ≥ 0
constraint ≤ b_i         variable π_i ≤ 0
constraint = b_i         variable π_i free
variable x_j ≥ 0         constraint ≤ c_j
variable x_j ≤ 0         constraint ≥ c_j
variable x_j free        constraint = c_j
2-Stage Stochastic Linear Programs
- Stochastic linear programs: linear programs with uncertain data
- Recourse programs: some decisions (recourse actions) can be taken after uncertainty is disclosed
- First-stage decisions: decisions taken before uncertainty is revealed
- Second-stage decisions: decisions taken after uncertainty is revealed
- Sequence of events: x → ξ(ω) → y(ω, x)
- First and second stage may each contain sequences of decisions
Sequence of Events
[figure]
Mathematical Formulation
min z = c^T x + E_ξ[min_y q(ω)^T y(ω)]
s.t. Ax = b
     T(ω) x + W y(ω) = h(ω)
     x ≥ 0, y(ω) ≥ 0
First-stage decisions: x ∈ R^{n_1}, with c ∈ R^{n_1}, b ∈ R^{m_1}, A ∈ R^{m_1 × n_1}
For a given realization ω, the second-stage data are q(ω) ∈ R^{n_2}, h(ω) ∈ R^{m_2}, T(ω) ∈ R^{m_2 × n_1}
All random variables of the problem are assembled in a single random vector ξ^T(ω) = (q(ω)^T, h(ω)^T, T_1(ω), ..., T_{m_2}(ω)), where T_i(ω) is the i-th row of T(ω)
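When ξ takes finitely many values, the expectation becomes a probability-weighted sum and the whole model collapses into one large LP (the extensive form). A minimal sketch with made-up data: one first-stage variable x (unit cost 1) and one recourse variable y_s per scenario (unit cost 1.5) covering demand d_s:

```python
from scipy.optimize import linprog

# Two equally likely demand scenarios (made-up data)
probs, demands = [0.5, 0.5], [1.0, 3.0]
q = 1.5                                   # second-stage (recourse) unit cost

# Variables: [x, y_1, y_2]; objective: x + sum_s p_s * q * y_s
cost = [1.0] + [p * q for p in probs]

# Coupling constraints x + y_s >= d_s, written as -x - y_s <= -d_s
A_ub = [[-1.0, -1.0, 0.0],
        [-1.0, 0.0, -1.0]]
b_ub = [-d for d in demands]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)  # x = 1, y = (0, 2), expected total cost 2.5
```

Note that x is a single vector shared by all scenarios, while each y_s may adapt to its scenario: this is exactly the first-stage/second-stage split above.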
Example: Capacity Expansion Planning
min_{x, y ≥ 0}  ∑_{i=1}^n (I_i x_i + E_ξ ∑_{j=1}^m C_i T_j y_ij(ω))
s.t. ∑_{i=1}^n y_ij(ω) = D_j(ω), j = 1, ..., m
     ∑_{j=1}^m y_ij(ω) ≤ x_i, i = 1, ..., n
- I_i, C_i: fixed/variable cost of technology i
- D_j(ω), T_j: height/width of load block j
- y_ij(ω): capacity of i allocated to j
- x_i: capacity of i
Why is T_j not dependent on ω?
Example: Procrastination
max_{x, y ≥ 0}  c_1 x + E_ξ[c_2 y(ω)]
s.t. x + y(ω) ≤ T
     x + y(ω) ≥ T/2
     y(ω) ≤ N(ω)
- x, y(ω): amount of work before/after T
- c_1, c_2: quality increase per day for work prior to/after the deadline
- T: deadline
- N(ω): deadline extension
Deterministic Equivalent Program
Let Q(x, ξ(ω)) = min_y {q(ω)^T y : W y = h(ω) − T(ω) x, y ≥ 0} be the second-stage value function
Let V(x) = E_ξ Q(x, ξ(ω)) be the expected second-stage value function
The deterministic equivalent program is
min z = c^T x + V(x)
s.t. Ax = b
     x ≥ 0
If V(x) is given, the problem is just a linear program (we will see why soon)
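With a finite scenario set, Q can be evaluated by solving one small LP per scenario, and V by taking the probability-weighted sum. A sketch with made-up data (scalar x, W = [1], recourse cost 1.5, and h(ω), T(ω) reduced to a scalar demand d and the identity):

```python
from scipy.optimize import linprog

def Q(x, d):
    """Second-stage value function: min {1.5 y : y >= d - x, y >= 0}."""
    res = linprog([1.5], A_ub=[[-1.0]], b_ub=[x - d], bounds=[(0, None)])
    return res.fun

scenarios = [(0.5, 1.0), (0.5, 3.0)]      # (probability p, demand d)

def V(x):
    """Expected second-stage value function E_xi Q(x, xi)."""
    return sum(p * Q(x, d) for p, d in scenarios)

print(Q(0.0, 3.0), V(1.0))  # Q has closed form 1.5*max(0, d - x): 4.5 and 1.5
```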
Convexity of Q(x, ξ(ω))
Q(x, ξ(ω)) is a convex function of x
Proof sketch: if y_1 is an optimal reaction to x_1 and y_2 is an optimal reaction to x_2, then λ y_1 + (1 − λ) y_2 is a feasible reaction to λ x_1 + (1 − λ) x_2, so Q(λ x_1 + (1 − λ) x_2, ξ) ≤ q^T(λ y_1 + (1 − λ) y_2) = λ Q(x_1, ξ) + (1 − λ) Q(x_2, ξ)
What can we say about V(x)?
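The convexity inequality can be spot-checked numerically on a toy value function Q(x) = min{1.5 y : y ≥ 3 − x, y ≥ 0} = 1.5·max(0, 3 − x) (made-up data; the closed form follows because the cheapest feasible y is max(0, 3 − x)):

```python
import itertools

def Q(x):
    # Closed form of the toy second-stage LP
    return 1.5 * max(0.0, 3.0 - x)

lams = [0.0, 0.25, 0.5, 0.75, 1.0]
points = [-1.0, 0.0, 2.0, 3.0, 5.0]

# Convexity: Q(l*x1 + (1-l)*x2) <= l*Q(x1) + (1-l)*Q(x2) at every sampled pair
convex = all(
    Q(l * x1 + (1 - l) * x2) <= l * Q(x1) + (1 - l) * Q(x2) + 1e-12
    for l in lams
    for x1, x2 in itertools.product(points, points)
)
print(convex)  # True
```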
Making Constraints Implicit
Q(x, ξ) can be represented as Q(x, ξ) = min_y {q(ω)^T y + δ(y | Y(x, ξ))}, where Y(x, ξ) = {y ≥ 0 : T(ω) x + W y = h(ω)} and δ(· | ·) is an indicator function:
δ(y | Y) = 0 if y ∈ Y, +∞ otherwise
Some Notation and Terminology
K_1 = {x : Ax = b, x ≥ 0}
K_2(ξ) = {x : ∃ y(ω) ≥ 0 such that T(ω) x + W y(ω) = h(ω)}
K_2 = {x : V(x) < +∞}
- Relatively complete recourse: K_1 ⊆ K_2
- Complete recourse: pos W = R^{m_2}, where pos W = {W y : y ≥ 0}
- Fixed recourse: W does not depend on ω
Does example 1 have complete recourse? fixed recourse?
Does example 2 have complete recourse? fixed recourse?
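Complete recourse can be probed numerically: pos W = R^{m_2} means the feasibility problem "find y ≥ 0 with W y = t" succeeds for every right-hand side t. A sketch for a made-up one-row W (scipy assumed):

```python
import numpy as np
from scipy.optimize import linprog

# With W = [1, -1], pos W = {y1 - y2 : y >= 0} is all of R: complete recourse
W = np.array([[1.0, -1.0]])

def second_stage_feasible(t):
    # Feasibility LP with zero objective: does some y >= 0 satisfy W y = t?
    res = linprog(np.zeros(2), A_eq=W, b_eq=[t], bounds=[(0, None)] * 2)
    return res.success

print(second_stage_feasible(5.0), second_stage_feasible(-5.0))  # True True
```

Dropping the second column (W = [1]) would leave pos W = R_+, and the same check would fail for negative t.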
Further Reading
- 2.1 BL: probability spaces and random variables
- 2.2 BL: deterministic linear programs
- 2.3 BL: decisions and stages
- 2.4 BL: 2-stage stochastic programs with fixed recourse
- 2.6 BL: implicit representation of the second stage
- 2.11 BL: short reviews