Optimization, Separation, and Inverse Optimization


Optimization, Separation, and Inverse Optimization
Ted Ralphs, Aykut Bulut
INFORMS Annual Meeting, San Francisco, CA, 12 November 2014

What is an Inverse Problem? Given a function, an inverse problem is that of determining an input that would produce a given output. The input may be only partially specified, or we may want an answer as close as possible to a given target. This is precisely the mathematical notion of the inverse of a function. A value function is a function whose value is the optimal value of an optimization problem defined by the given input. The inverse problem with respect to an optimization problem is to evaluate the inverse of its value function.

Why is Inverse Optimization Useful? Inverse optimization is useful when we can observe the result of solving an optimization problem and we want to know what the input was. Example: Consumer preferences. Let's assume consumers are rational and are making decisions by solving an underlying optimization problem. By observing their choices, we try to ascertain their utility function. Example: Analyzing seismic waves. We know that seismic waves travel along paths that are optimal with respect to some physical model of the earth. By observing how these waves travel during an earthquake, we can infer things about the composition of the earth.

Formal Setting We consider the inverse of the value function z_IP, where, for d in R^n,

z_IP(d) = min_{x in S} d^T x = min_{x in conv(S)} d^T x, (1)

S = {x in R^n : Ax = b, x >= 0} ∩ (Z^r x R^(n-r)), (2)

for A in R^(m x n), b in R^m. With respect to a given target c in R^n and a given x^0 in S, the inverse problem is defined as

min { ||c - d|| : d^T x^0 = z_IP(d) } (INV)

Assumption: S is bounded (for simplicity of presentation).
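For a bounded pure-integer set, the value function can be evaluated by enumeration. The following minimal sketch uses a hypothetical two-variable feasible set (an illustrative assumption, not an instance from the talk) and shows z_IP being evaluated for two objective vectors; since S is finite, minimizing over S and over conv(S) coincide.

```python
from itertools import product

# Hypothetical pure-integer feasible set (illustrative only):
#   S = {x in Z^2 : x1 + x2 <= 3, x >= 0}
S = [x for x in product(range(4), repeat=2) if x[0] + x[1] <= 3]

def z_IP(d):
    """Value function: optimal value of min_{x in S} d^T x."""
    return min(d[0] * x[0] + d[1] * x[1] for x in S)

print(z_IP((1, 1)))    # 0: minimized at the origin
print(z_IP((-1, -2)))  # -6: minimized at the vertex (0, 3)
```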

Formulating as a Mathematical Program To formulate (INV) as a mathematical program, we need to represent its implicit constraints explicitly. This means describing the cone of feasible objective vectors, which can be written as

D = {d in R^n : d^T x^0 <= d^T x for all x in S}. (3)

In the pure integer case, this is a finite number of inequalities, and the above is thus a linear program (LP). Even in the mixed case, we need only the inequalities corresponding to extreme points of conv(S). This set of constraints is exponential in size, but we can generate its members dynamically, as we will see.
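Over a polytope, membership in D reduces to finitely many inequalities, one per extreme point of conv(S). A minimal sketch of this membership test, on a toy instance (the extreme-point set E and the point x0 below are illustrative assumptions, not data from the talk):

```python
# Extreme points of conv(S) for the toy set S = {x in Z^2_{>=0} : x1 + x2 <= 3}:
E = [(0, 0), (3, 0), (0, 3)]
x0 = (0, 3)  # the solution we want to make optimal

def in_D(d):
    """d in D  <=>  d^T x0 <= d^T x for every extreme point x of conv(S)."""
    dx0 = d[0] * x0[0] + d[1] * x0[1]
    return all(dx0 <= d[0] * x[0] + d[1] * x[1] for x in E)

print(in_D((-1, -2)))  # True: x0 = (0, 3) minimizes -x1 - 2x2 over S
print(in_D((1, 1)))    # False: the origin is strictly better for this d
```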

A Small Example The feasible set of the inverse problem is the set of objective vectors that make x 0 optimal. This is precisely the polar of the cone of facet-defining inequalities of conv(s) that are binding at x 0. Figure: conv(s) and cone D of feasible objectives

The Separation Problem For reasons that will become clear, we now consider the separation problem for a polyhedron. Separation Problem: Given a polyhedron P ⊆ R^n and x* in R^n, determine whether x* in P and, if not, determine (π, π_0), a valid inequality for P such that π x* > π_0. The separation problem can be formulated as

max { π x* - π_0 : π x <= π_0 for all x in P, (π, π_0) in R^(n+1) } (4)

along with some appropriate normalization. When P is a polytope, we can reformulate this problem as the LP

max { π x* - π_0 : π x <= π_0 for all x in E, (π, π_0) in R^(n+1) } (5)

where E is the set of extreme points of P.
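For a fixed π, the best choice of π_0 in (5) is max_{x in E} π^T x, so the objective of (5) measures how much the candidate inequality is violated at x*. A small illustration of this inner structure with toy data (E, x_star, and the candidate directions are assumptions, not from the talk):

```python
E = [(0, 0), (3, 0), (0, 3)]   # extreme points of a toy polytope P
x_star = (3, 3)                # point to be separated

def violation(pi):
    """Objective of (5) for fixed pi, with the optimal pi_0 plugged in."""
    pi0 = max(pi[0] * x[0] + pi[1] * x[1] for x in E)
    return pi[0] * x_star[0] + pi[1] * x_star[1] - pi0

print(violation((1, 1)))   # 3: x1 + x2 <= 3 is valid for P and cuts off x_star
print(violation((1, 0)))   # 0: no violation in this direction
```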

The Polar Assuming 0 is in the interior of P, the set of all inequalities valid for P is

P* = {π in R^n : π^T x <= 1 for all x in P} (6)

and is called its polar. Properties of the Polar: P* is a polyhedron; (P*)* = P; x in P if and only if π^T x <= 1 for all π in P*; if E and R are the sets of extreme points and extreme rays of P, respectively, then

P* = {π in R^n : π^T x <= 1 for all x in E, π^T r <= 0 for all r in R}.

A converse of the last result also holds. Separation can be interpreted as optimization over the polar.
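The extreme-point description of the polar can be checked numerically on a small example: for P = [-1, 1]^2 (so that 0 is interior), P* is the cross-polytope |π_1| + |π_2| <= 1. The instance below is an illustrative assumption, not from the talk.

```python
# Extreme points of P = [-1, 1]^2; since P is a polytope, R is empty and
# P* = {pi : pi^T x <= 1 for all extreme points x of P}.
E = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def in_polar(pi):
    return all(pi[0] * x[0] + pi[1] * x[1] <= 1 for x in E)

print(in_polar((1, 0)))        # True: a facet normal of P
print(in_polar((0.5, 0.5)))    # True: |0.5| + |0.5| <= 1
print(in_polar((1, 1)))        # False: pi^T x = 2 at x = (1, 1)
```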

Separation Using an Optimization Oracle We can solve the separation problem (5) using a cutting plane algorithm: the separation problem for the polar of a polyhedron P is precisely a linear optimization problem over P. Example Figure: x* (to be separated) and P
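The cutting-plane loop can be sketched as follows: grow a set E of points of P returned by the oracle, propose an inequality valid for conv(E) and violated by x_star, and ask the oracle to either certify that inequality for all of P or return a violating point to add to E. This is a simplified 2D sketch with toy data; the edge enumeration in `separate_over_points` stands in for solving LP (5), and all names and instances are illustrative assumptions, not from the talk.

```python
def separate_over_points(E, x_star):
    """Separate x_star from conv(E) in 2D by enumerating candidate
    edge inequalities (a stand-in for solving LP (5))."""
    for i in range(len(E)):
        for j in range(i + 1, len(E)):
            (ax, ay), (bx, by) = E[i], E[j]
            pi = (by - ay, ax - bx)              # normal of the line E[i]E[j]
            pi0 = pi[0] * ax + pi[1] * ay
            for p, p0 in ((pi, pi0), ((-pi[0], -pi[1]), -pi0)):
                if all(p[0] * x + p[1] * y <= p0 for x, y in E) and \
                   p[0] * x_star[0] + p[1] * x_star[1] > p0:
                    return p, p0                 # valid for conv(E), cuts x_star
    return None

def optimize(pi, P_vertices):
    """Optimization oracle for P: a maximizer of pi over P."""
    return max(P_vertices, key=lambda v: pi[0] * v[0] + pi[1] * v[1])

def separate_with_oracle(P_vertices, x_star):
    # Seed E with two points of P obtained from arbitrary directions.
    E = [optimize((1, 0), P_vertices), optimize((0, 1), P_vertices)]
    while True:
        cut = separate_over_points(E, x_star)
        if cut is None:
            return None, E                       # x_star lies in conv(E), so in P
        v = optimize(cut[0], P_vertices)         # check the cut against all of P
        if cut[0][0] * v[0] + cut[0][1] * v[1] > cut[1]:
            E.append(v)                          # cut invalid for P: refine E
        else:
            return cut, E                        # valid for P and cuts off x_star

P = [(0, 0), (3, 0), (0, 3)]                     # vertices of a toy polytope
print(separate_with_oracle(P, (3, 3))[0])        # a cut separating (3, 3)
print(separate_with_oracle(P, (1, 1))[0])        # None: (1, 1) is inside P
```

Each refinement of E corresponds to one iteration in the figures that follow.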

Separation Example: Iteration 1 Figure: Separating x from P (Iteration 1)

Separation Example: Iteration 2 Figure: Separating x from P (Iteration 2)

Separation Example: Iteration 3 Figure: Separating x from P (Iteration 3)

Separation Example: Iteration 4 Figure: Separating x from P (Iteration 4)

Separation Example: Iteration 5 Figure: Separating x from P (Iteration 5)

Solving the Inverse Problem Using an Optimization Oracle The feasible solutions to the inverse problem are essentially valid inequalities that are satisfied at equality by x^0. What happens if we use the same algorithm to separate x^0? Figure: x^0 and P

Inverse Example: Iteration 1 Figure: Solving the inverse problem for P (Iteration 1)

Inverse Example: Iteration 2 Figure: Solving the inverse problem for P (Iteration 2)

Inverse Example: Iteration 3 Figure: Solving the inverse problem for P (Iteration 3)

Inverse Example: Iteration 4 Figure: Solving the inverse problem for P (Iteration 4)

Inverse Example: Iteration 5 Figure: Solving the inverse problem for P (Iteration 5)

Separation Problems and Inverse Problems The separation problem produces a feasible solution to the inverse problem, but it may not be the one we want. Since x^0 is in S and hence cannot be strictly separated, we know the optimal value of the separation problem will be zero, so we don't actually need to optimize. We can exchange the objective function for one that produces the inequality we want.

Back to the Inverse Problem General Formulation

min ||c - d||
s.t. d^T x^0 <= d^T x for all x in S (7)

The model can be linearized for the l_1 and l_inf norms. The convex hull of S is a polytope, so the last constraint set can be represented using the set of extreme points of conv(S). Let E be the set of extreme points of conv(S); E is finite.

Inverse MILP with the l_1 Norm Linearized l_1 Formulation

min sum_{i=1}^n θ_i
s.t. c_i - d_i <= θ_i, i in {1, 2, ..., n} (8)
     d_i - c_i <= θ_i, i in {1, 2, ..., n} (9)
     d^T x^0 <= d^T x for all x in E.

Inverse MILP with the l_inf Norm Linearized l_inf Formulation

min y
s.t. c_i - d_i <= y, i in {1, 2, ..., n}
     d_i - c_i <= y, i in {1, 2, ..., n} (10)
     d^T x^0 <= d^T x for all x in E.

For the remainder of the presentation, we deal with the case of the l_inf norm.
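On a small instance, the linearized l_inf model can be sanity-checked by brute force in place of an LP solver. The grid search and the toy data below (E, x0, and the target c) are illustrative assumptions, not from the talk; note that c itself does not make x0 optimal, so the inverse problem must move the objective.

```python
E = [(0, 0), (3, 0), (0, 3)]   # extreme points of conv(S) for a toy S
x0, c = (0, 3), (1, 1)         # target solution and target objective

def feasible(d):
    """d^T x0 <= d^T x for every extreme point x (i.e., d is in D)."""
    dx0 = d[0] * x0[0] + d[1] * x0[1]
    return all(dx0 <= d[0] * x[0] + d[1] * x[1] for x in E)

# Grid search over d in {-2, -1.75, ..., 2}^2 as a stand-in for the LP.
grid = [i / 4 for i in range(-8, 9)]
best = min(max(abs(c[0] - d0), abs(c[1] - d1))
           for d0 in grid for d1 in grid if feasible((d0, d1)))
print(best)   # 1.0: the nearest feasible d (e.g. d = (1, 0)) is at
              # l_inf distance 1 from c
```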

Inverse Example Figure: x 0 and P

Inverse Example: Iteration 1 Figure: Solving the inverse problem for P (Iteration 1)

Inverse Example: Iteration 2 Figure: Solving the inverse problem for P (Iteration 2)

Inverse Example: Iteration 3 Figure: Solving the inverse problem for P (Iteration 3)

Observations Note the similarity to the separation problem. One difference is that x^0 is implicitly included among the extreme points found so far. In each iteration, we generate a valid inequality for the convex hull of all points found and x^0. We also require x^0 to be on the associated face. The optimization guides the process to produce the optimal objective vector.

Solvability of Inverse MILP Theorem 1 (Bulut, R. 14) The inverse MILP optimization problem under the l_1/l_inf norm is solvable in time polynomial in the size of the problem input, given an oracle for the MILP decision problem. This is a direct consequence of the framework of Grötschel, Lovász, and Schrijver (GLS): separation over the polar is optimization over the original polyhedron.

Formal Complexity of Inverse MILP Sets:

K(γ) = {d in R^n : ||c - d|| <= γ},
X(γ) = {x in S : there exists d in K(γ) s.t. d^T (x - x^0) > 0},
K*(γ) = {x in R^n : d^T (x - x^0) <= 0 for all d in K(γ)}.

Inverse MILP decision problem (INV) Inputs: γ, c, x^0 in S, and the MILP feasible set S. Problem: Decide whether K(γ) ∩ D is empty. Theorem 2 (Bulut, R. 14) INV is coNP-complete.

Proof of Formal Complexity We prove this result by showing the existence of a short certificate for the negative answer. When the answer is NO, conv(X(γ)) and int(K*(γ)) intersect, so we need short certificates for membership in conv(X(γ)) and in int(K*(γ)). These are both polyhedral sets for which we have explicit descriptions.

Proof of Formal Complexity The figure shows the relevant sets for a maximization problem under the l_inf norm in the pure integer case. In this case, the answer is negative.

Conclusions and Generalization This entire framework can be generalized to algorithms on Turing machines, and inverse problems have a natural interpretation in this setting: a separation problem can be defined in terms of the distance from a given target instance to a solution to the inverse problem. Is this a useful generalization? It seems to be an interesting extension of the usual complexity framework, but it remains to be seen. We have developed a framework that interprets the solution of inverse problems in terms of problems we already know how to solve, and we expect to be able to improve the solution of inverse problems using this insight.