Proximal mapping via network optimization
1 L. Vandenberghe EE236C (Spring 2013-14)

Proximal mapping via network optimization

- minimum cut and maximum flow problems
- parametric minimum cut problem
- application to proximal mapping
2 Introduction

this lecture: a network flow algorithm for evaluating the prox-operator of

h(x) = \sum_{i>j} A_{ij} |x_i - x_j| = \sum_{i,j=1}^n A_{ij} \max\{0, x_i - x_j\}

- coefficients A_{ij} = A_{ji} are nonnegative
- the associated undirected graph has n nodes, with an edge (i,j) wherever A_{ij} > 0
- applications in image processing and machine learning
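[Editor's note: a quick illustration, not part of the lecture, of evaluating h directly in NumPy; the weight matrix A and the point x are made up for the example.]

```python
import numpy as np

def h(x, A):
    """h(x) = sum_{i,j} A_ij * max{0, x_i - x_j} for symmetric nonnegative A."""
    D = np.maximum(0.0, x[:, None] - x[None, :])   # D_ij = max{0, x_i - x_j}
    return float(np.sum(A * D))

A = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
x = np.array([0.5, -1.0, 2.0])
print(h(x, A))   # equals sum_{i>j} A_ij |x_i - x_j| since A is symmetric
```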
3 Outline

- minimum cut and maximum flow problems
- parametric minimum cut problem
- application to proximal mapping
4 Minimum cut problem

find the subset I \subseteq \{1,2,...,n\} that minimizes

C(I) = \sum_{i \in I, j \notin I} A_{ij} + \sum_{i \in I} b_i

- A \in S^n with A_{ij} = A_{ji}; no sign restrictions on b \in R^n
- the cost can be expressed as C(I) = x^T A (1 - x) + b^T x, where x is the incidence vector of I: x_k = 1 if k \in I, x_k = 0 if k \notin I

graph interpretation
- optimal two-way partition of the n nodes of an undirected graph
- the first term in C(I) is the cost of the cut (the edges removed by the partitioning)
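[Editor's note: a numerical check, not from the lecture, of the identity C(I) = x^T A(1-x) + b^T x; the instance (A, b) is made up.]

```python
import numpy as np

A = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
b = np.array([1., -2., 0.5])
n = len(b)

I = {0, 2}                           # a subset of the nodes {0, ..., n-1}
x = np.zeros(n)
x[list(I)] = 1.0                     # incidence vector of I

cut_cost = sum(A[i, j] for i in I for j in set(range(n)) - I) + sum(b[i] for i in I)
algebraic = x @ A @ (1 - x) + b @ x
print(cut_cost, algebraic)           # the two expressions agree
```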
5 Discrete optimization formulations

binary quadratic maximization

minimize   -x^T A x + (A 1 + b)^T x
subject to x \in \{0,1\}^n

- the cost function is equal to C(I) if x is the incidence vector of I (equivalently, a binary maximization of x^T A x - (A 1 + b)^T x)

binary piecewise-linear minimization

minimize   \sum_{i>j} A_{ij} |x_i - x_j| + b^T x
subject to x \in \{0,1\}^n

- the cost function is equal to C(I) if x is the incidence vector of I
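[Editor's note: continuing the sketch above, both discrete objectives agree with C(I) at every binary point, which is easy to confirm by enumeration for small n; the instance is again made up.]

```python
import itertools
import numpy as np

A = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
b = np.array([1., -2., 0.5])
n = len(b)
ones = np.ones(n)

for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits, float)
    C = x @ A @ (ones - x) + b @ x                    # cut cost C(I)
    quad = -x @ A @ x + (A @ ones + b) @ x            # quadratic objective
    pwl = sum(A[i, j] * abs(x[i] - x[j])
              for i in range(n) for j in range(i)) + b @ x
    assert np.isclose(C, quad) and np.isclose(C, pwl)
print("all binary points agree")
```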
6 Convex relaxation

relaxation: replace x \in \{0,1\}^n with 0 \le x \le 1 (componentwise)

minimize   \sum_{i>j} A_{ij} |x_i - x_j| + b^T x
subject to 0 \le x \le 1

we will use LP duality to show that the relaxation is exact:
- the relaxation has an optimal solution x \in \{0,1\}^n
- if x \in [0,1]^n is optimal for the relaxation, then rounding x as

  x̂_i = 1 if x_i > 1/2,   x̂_i = 0 if x_i \le 1/2

  gives an integer optimal solution x̂ \in \{0,1\}^n
7 Linear program formulation

the relaxed problem as an LP: introduce a matrix variable Y \in R^{n \times n}

minimize   tr(AY) + b^T x
subject to Y \ge x 1^T - 1 x^T,  Y \ge 0,  0 \le x \le 1 (componentwise)

- the inequalities on Y are equivalent to Y_{ij} \ge \max\{0, x_i - x_j\}, i,j = 1,...,n
- at the optimum, Y_{ij} = \max\{0, x_i - x_j\} because A_{ij} \ge 0; therefore

  tr(AY) + b^T x = \sum_{i>j} A_{ij} |x_i - x_j| + b^T x
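[Editor's note: a sketch, not the lecture's code, that solves this LP with scipy.optimize.linprog on the made-up instance from above and applies the rounding rule of page 6. The extra bound Y \le 1 is added only to keep the LP bounded; it is valid because every optimum has Y_{ij} = max{0, x_i - x_j} \le 1.]

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
b = np.array([1., -2., 0.5])
n = len(b)

# variable order z = [x, vec(Y)]; objective b^T x + sum_ij A_ij Y_ij
cost = np.concatenate([b, A.flatten()])
# constraints x_i - x_j - Y_ij <= 0 for all i, j
A_ub, b_ub = [], []
for i in range(n):
    for j in range(n):
        row = np.zeros(n + n * n)
        row[i] += 1.0
        row[j] -= 1.0
        row[n + i * n + j] = -1.0
        A_ub.append(row)
        b_ub.append(0.0)
bounds = [(0, 1)] * (n + n * n)
res = linprog(cost, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)

x = res.x[:n]
x_hat = (x > 0.5).astype(float)                        # rounding rule from page 6
print(res.fun, x_hat @ A @ (1 - x_hat) + b @ x_hat)    # equal: the relaxation is exact
```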
8 Dual problem

dual linear program (variables U \in R^{n \times n}, v, w \in R^n)

maximize   -1^T w
subject to 0 \le U \le A
           (U - U^T) 1 - v + w + b = 0
           v \ge 0,  w \ge 0

complementary slackness conditions: if x, Y, U, v, w are optimal, then

(Y - x 1^T + 1 x^T) \circ U = 0,   Y \circ (A - U) = 0,   x \circ v = 0,   (1 - x) \circ w = 0

(\circ denotes the Hadamard (componentwise) matrix product)
9 Exactness of relaxation

let x be optimal for the relaxation; round x to x̂ \in \{0,1\}^n as

x̂_i = 1 if x_i > 1/2,   x̂_i = 0 if x_i \le 1/2

by complementary slackness, if x̂_i = 1 and x̂_j = 0, then x_i > x_j; hence

U_{ij} = A_{ij},   U_{ji} = 0,   v_i = 0,   w_j = 0

this implies that x̂^T A (1 - x̂) + b^T x̂ is equal to the lower bound from the relaxation:

x̂^T A (1 - x̂) + b^T x̂ = x̂^T (U - U^T)(1 - x̂) - x̂^T v - (1 - x̂)^T w + b^T x̂
                        = x̂^T ((U - U^T) 1 - v + w + b) - x̂^T (U - U^T) x̂ - 1^T w
                        = -1^T w

(the first step uses the complementary slackness relations above; the last step uses the dual equality constraint and the skew-symmetry of U - U^T)
10 Network flow interpretation of dual LP

change of variables (with b_{+,k} = \max\{b_k, 0\} and b_{-,k} = \max\{-b_k, 0\}):

Z = U - U^T,   z_s = b_- - w,   z_t = b_+ - v

reformulated dual problem

maximize   1^T z_s - 1^T b_-
subject to Z 1 - z_s + z_t = 0
           -A \le Z \le A,   z_s \le b_-,   z_t \le b_+

- Z_{ij} = -Z_{ji} is the flow from node i to node j
- z_s is the vector of flows from an added source node to nodes 1,...,n
- z_t is the vector of flows from nodes 1,...,n to an added sink node
- maximize the flow 1^T z_s from source to sink, subject to the capacity constraints
- the exactness of the relaxation is known as the max-flow min-cut theorem
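[Editor's note: a sketch of this construction using networkx, not from the lecture, on the same made-up instance. My reading of the construction: arc capacities A_{ij} between internal nodes, b_{-,i} on the source arcs, b_{+,i} on the sink arcs, so that the s-t min-cut value equals min_I C(I) + 1^T b_-.]

```python
import itertools
import networkx as nx
import numpy as np

A = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
b = np.array([1., -2., 0.5])
n = len(b)

G = nx.DiGraph()
for i in range(n):
    G.add_edge('s', i, capacity=max(-b[i], 0.0))    # b_-,i
    G.add_edge(i, 't', capacity=max(b[i], 0.0))     # b_+,i
    for j in range(n):
        if i != j and A[i, j] > 0:
            G.add_edge(i, j, capacity=A[i, j])

cut_value, (S, T) = nx.minimum_cut(G, 's', 't')
I = sorted(S - {'s'})                               # source side of the cut gives I
print(I, cut_value - np.sum(np.maximum(-b, 0.0)))

# brute-force check over all subsets I (small n only)
def C(I):
    x = np.zeros(n)
    x[list(I)] = 1.0
    return x @ A @ (1 - x) + b @ x

print(min(C(I) for k in range(n + 1)
          for I in itertools.combinations(range(n), k)))
```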
11 Outline

- minimum cut and maximum flow problems
- parametric minimum cut problem
- application to proximal mapping
12 Parametric min-cut problem

min-cut problem: take b = α 1 - c, with α a scalar parameter

minimize   \sum_{i,j=1}^n A_{ij} \max\{0, y_i - y_j\} + (α 1 - c)^T y
subject to y \in \{0,1\}^n

equivalent LP and its dual:

min.  tr(AY) + (α 1 - c)^T y
s.t.  Y \ge y 1^T - 1 y^T,  Y \ge 0,  0 \le y \le 1

max.  -1^T w
s.t.  0 \le U \le A,  (U - U^T) 1 - v + w + α 1 = c,  v \ge 0,  w \ge 0

primal variables y \in R^n, Y \in R^{n \times n}; dual variables U \in R^{n \times n}, v, w \in R^n
13 Optimal value function

p*(α) = \inf_{y \in \{0,1\}^n} \Big( \sum_{i,j=1}^n A_{ij} \max\{0, y_i - y_j\} + (α 1 - c)^T y \Big)

immediate properties
- p*(α) is piecewise-linear and concave
- p*(α) = nα - 1^T c for α \le \min_k c_k, with optimal solution y = 1
- p*(α) = 0 for α \ge \max_k c_k, with optimal solution y = 0

less obvious: p*(α) has at most n + 1 linear segments
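[Editor's note: a sketch, not from the lecture, that evaluates p*(α) by brute force on the small made-up instance used above. The slope of p* at α is 1^T y for an optimal y, so counting distinct slopes over a grid illustrates the bound of n + 1 segments.]

```python
import itertools
import numpy as np

A = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
c = np.array([1., -2., 0.5])
n = len(c)
ys = [np.array(y, float) for y in itertools.product([0, 1], repeat=n)]

def p_star(alpha):
    return min(y @ A @ (1 - y) + (alpha - c) @ y for y in ys)

def slope(alpha):          # 1^T y at an optimal y = local slope of p*
    return min((y @ A @ (1 - y) + (alpha - c) @ y, y.sum()) for y in ys)[1]

grid = np.linspace(c.min() - 1, c.max() + 1, 400)
print(sorted({slope(a) for a in grid}))                    # at most n + 1 distinct slopes
print(p_star(c.min() - 1), n * (c.min() - 1) - c.sum())    # p*(α) = nα - 1^T c on the far left
```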
14 Monotonicity

parametric min-cut problem

minimize   f_α(y) = y^T A (1 - y) + (α 1 - c)^T y
subject to y \in \{0,1\}^n

monotonicity of solutions
- if y_β is optimal for α = β, and y_γ is optimal for α = γ > β, then y_γ \le y_β
- y_γ is zero in all positions where y_β is zero
- this implies that the optimal value function p*(α) has at most n + 1 segments
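[Editor's note: a numerical check of the monotonicity property, again editorial and on the same made-up instance; ties are broken toward the largest optimal solution so the selected minimizer is well defined.]

```python
import itertools
import numpy as np

A = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
c = np.array([1., -2., 0.5])
ys = [np.array(y, float) for y in itertools.product([0, 1], repeat=3)]

def y_opt(alpha):
    # among optimal solutions, pick the largest one (by total weight)
    return min(ys, key=lambda y: (y @ A @ (1 - y) + (alpha - c) @ y, -y.sum()))

prev = np.ones(3)
for a in np.linspace(c.min() - 1, c.max() + 1, 200):
    y = y_opt(a)
    assert np.all(y <= prev)      # entries only switch from 1 to 0 as α grows
    prev = y
print("solutions are monotone in α")
```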
15 proof of monotonicity property

denote the componentwise maximum and minimum of y_β and y_γ by

y_min = \min\{y_β, y_γ\},   y_max = \max\{y_β, y_γ\}

(submodularity) it is readily verified that

y_max^T A (1 - y_max) + y_min^T A (1 - y_min) \le y_β^T A (1 - y_β) + y_γ^T A (1 - y_γ)

from submodularity and optimality of y_β, y_γ:

f_β(y_β) + f_β(y_γ) = f_β(y_β) + f_γ(y_γ) - (γ - β) 1^T y_γ
                    \le f_β(y_max) + f_γ(y_min) - (γ - β) 1^T y_γ
                    = f_β(y_max) + f_β(y_min) - (γ - β) 1^T (y_γ - y_min)
                    \le f_β(y_β) + f_β(y_γ) - (γ - β) 1^T (y_γ - y_min)

therefore (γ - β) 1^T (y_γ - y_min) \le 0; since γ > β and y_γ \ge y_min, this forces y_γ = y_min
16 Example

[figure: an example weight matrix A and vector c, with a plot of the piecewise-linear concave optimal value function p*(α) against α]
17 Outline

- minimum cut and maximum flow problems
- parametric minimum cut problem
- proximal mapping via parametric flow maximization
18 Proximal mapping

piecewise-linear convex function

h(x) = \sum_{i>j} A_{ij} |x_i - x_j| = \sum_{i,j=1}^n A_{ij} \max\{0, x_i - x_j\}

proximal mapping: x = prox_h(c) is the solution of

minimize   h(x) + (1/2) ||x - c||_2^2

- equivalent to a quadratic program
- efficiently computed from the solution of a parametric minimum cut problem
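[Editor's note: for reference, a sketch that evaluates prox_h(c) directly as a convex program with CVXPY; this is not the lecture's method, and the instance is made up. The parametric min-cut route below should reproduce this answer.]

```python
import cvxpy as cp
import numpy as np

A = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
c = np.array([1., -2., 0.5])
n = len(c)

x = cp.Variable(n)
h = sum(A[i, j] * cp.pos(x[i] - x[j]) for i in range(n) for j in range(n))
prob = cp.Problem(cp.Minimize(h + 0.5 * cp.sum_squares(x - c)))
prob.solve()
print(x.value)    # prox_h(c)
```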
19 Quadratic program formulation

minimize   tr(AY) + (1/2) ||x - c||_2^2
subject to Y \ge x 1^T - 1 x^T,  Y \ge 0

at the optimum, x = prox_h(c) and Y_{ij} = \max\{0, x_i - x_j\}

optimality conditions: there exists a U \in R^{n \times n} that satisfies

0 \le U \le A,   x + (U - U^T) 1 = c

and the complementary slackness conditions

(Y - x 1^T + 1 x^T) \circ U = 0,   Y \circ (A - U) = 0

in particular, if x_i > x_j, then U_{ij} = A_{ij} and U_{ji} = 0
20 Relation with parametric min-cut problem

parametric min-cut problem

minimize   f_α(y) = \sum_{i,j=1}^n A_{ij} \max\{0, y_i - y_j\} + (α 1 - c)^T y
subject to y \in \{0,1\}^n

parametric min-cut solution from proximal mapping: if x = prox_h(c), then a solution of the parametric min-cut problem is

y_i = 1 if x_i \ge α,   y_i = 0 if x_i < α

proximal mapping from parametric min-cut solution:

x_i = \sup\{α \mid the parametric min-cut problem has a solution with y_i = 1\}

(follows from the monotonicity of the min-cut solution)
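[Editor's note: a hedged sketch of this recovery, not the lecture's algorithm: each x_i is found by bisection on α, with every parametric min-cut solved by enumeration, so it only scales to tiny n; a real implementation would use a parametric max-flow solver instead.]

```python
import itertools
import numpy as np

A = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
c = np.array([1., -2., 0.5])
n = len(c)
ys = [np.array(y, float) for y in itertools.product([0, 1], repeat=n)]

def has_solution_with_yi_one(alpha, i):
    # does some optimal y of the parametric min-cut problem have y_i = 1?
    vals = [y @ A @ (1 - y) + (alpha - c) @ y for y in ys]
    best = min(vals)
    return any(y[i] == 1 for y, v in zip(ys, vals) if v <= best + 1e-12)

x = np.empty(n)
for i in range(n):
    lo, hi = c.min() - 1, c.max() + 1       # prox values lie in [min c, max c]
    for _ in range(60):                      # bisection on α
        mid = 0.5 * (lo + hi)
        if has_solution_with_yi_one(mid, i):
            lo = mid                         # the sup is to the right of mid
        else:
            hi = mid
    x[i] = lo
print(x)    # matches the CVXPY solution above
```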
21 proof: let x = prox_h(c) and U the optimal multiplier (page 19); define

w = (x - α 1)_+,   v = (α 1 - x)_+

U, v, w are dual feasible for the parametric LP on page 12; therefore

f_α(ŷ) \ge -1^T (x - α 1)_+   for all ŷ \in \{0,1\}^n

equality holds for the y defined on page 20:

f_α(y) = y^T (U - U^T)(1 - y) + (α 1 - c)^T y
       = y^T ((U - U^T) 1 + x - c) - y^T (U - U^T) y - (x - α 1)^T y
       = -1^T (x - α 1)_+

- the first line follows from U_{ij} = A_{ij}, U_{ji} = 0 if y_i = 1, y_j = 0 (see page 19)
- the last line follows from the definition of U (page 19) and the construction of y
22 Example

[figure: plot of p*(α) with its breakpoints for the example instance]

prox_h(c) = (·, ·, ·, 3, 2)   (the first three entries are illegible in the transcription)

prox_h(c)_k is the value of α at the breakpoint where y_k switches from 1 to 0
23 Summary

- the proximal mapping of h(x) = \sum_{i>j} A_{ij} |x_i - x_j| can be computed by solving a parametric min-cut/max-flow problem
- this problem is solved very efficiently by algorithms from network optimization
- complexity O(mn \log(n^2/m)) for general graphs (m is the number of edges)
- faster algorithms exist for graphs with special structure
24 References

- D. Goldfarb and W. Yin, Parametric maximum flow algorithms for fast total variation minimization, SIAM Journal on Scientific Computing (2009)
- A. Chambolle and J. Darbon, On total variation minimization and surface evolution using parametric maximum flows, International Journal of Computer Vision (2009)
- J. Mairal, R. Jenatton, G. Obozinski, F. Bach, Network flow algorithms for structured sparsity, arXiv:1008.5209 (2010)