# INTERIOR POINT POLYNOMIAL TIME METHODS IN CONVEX PROGRAMMING


GEORGIA INSTITUTE OF TECHNOLOGY
SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING

LECTURE NOTES

INTERIOR POINT POLYNOMIAL TIME METHODS IN CONVEX PROGRAMMING

ISYE 8813

Arkadi Nemirovski
(on sabbatical leave from the Technion, Israel Institute of Technology)

Spring Semester 2004

## Interior Point Polynomial Methods in Convex Programming

Goals. During the last decade the area of interior point polynomial methods (started in 1984, when N. Karmarkar invented his famous algorithm for Linear Programming) has become one of the dominating fields, or even the dominating field, of theoretical and computational activity in Convex Optimization. The goal of the course is to present a general theory of interior point polynomial algorithms in Convex Programming. The theory makes it possible to explain all known methods of this type and to extend them from the initial area of the interior point technique, Linear and Quadratic Programming, to a wide variety of essentially nonlinear classes of convex programs. We present in a self-contained manner the basic theory along with its applications to several important classes of convex programs (LP, QP, Quadratically Constrained Quadratic Programming, Geometrical Programming, eigenvalue problems, etc.).

The course follows the recent book

Yu. Nesterov, A. Nemirovski, *Interior-Point Polynomial Algorithms in Convex Programming*, SIAM Studies in Applied Mathematics, 1994.

Prerequisites for the course are the standard Calculus and the most elementary parts of Convex Analysis.

Duration: one semester, 2 hours weekly.

Contents:
- Introduction: what the course is about
- Developing Tools, I: self-concordant functions, self-concordant barriers and the Newton method
- Interior Point Polynomial Methods, I: the path-following scheme
- Developing Tools, II: Conic Duality
- Interior Point Polynomial Methods, II: the potential reduction scheme
- Developing Tools, III: how to construct self-concordant barriers
- Applications: Linear and Quadratic Programming; Quadratically Constrained Quadratic Problems; Geometrical Programming; Semidefinite Programming

## About Exercises

The majority of the lectures are accompanied by Exercise sections. In several cases the exercises relate to the lecture where they are placed; sometimes they prepare the reader for the next lecture. A hint mark at the word "Exercise", or at an item of an exercise, means that you may use the hints given in the Appendix "Hints". A hint, in turn, may refer you to the solution of the exercise given in the Appendix "Solutions"; this is denoted by the mark +. Some exercises are marked by + rather than by the hint mark; this refers you directly to the solution of the exercise. Exercises marked by # are closely related to the lecture where they are placed; it would be a good thing to solve such an exercise, or at least to become acquainted with its solution (if one is given). Exercises which I find difficult are marked with >.

The exercises usually are not that simple. They are in no sense obligatory, and the reader is not expected to solve all or even the majority of them. Those who would like to work on the solutions should take into account that the order of the exercises is important: a problem which could cause serious difficulties on its own becomes much simpler in context (at least I hope so).


## Contents

1. Introduction to the Course
   - Some history
   - The goal: polynomial time methods
   - The path-following scheme
   - What is inside: self-concordance
   - Structure of the course
2. Self-concordant functions
   - Examples and elementary combination rules
   - Properties of self-concordant functions
   - Exercises: Around Symmetric Forms
3. Self-concordant barriers
   - Definition, examples and combination rules
   - Properties of self-concordant barriers
   - Exercises: Self-concordant barriers
4. Basic path-following method
   - Situation
   - F-generated path-following method
   - Basic path-following scheme
   - Convergence and complexity
   - Initialization and two-phase path-following method
   - Concluding remarks
   - Exercises: Basic path-following method
5. Conic problems and Conic Duality
   - Conic problems
   - Conic duality: Fenchel dual to (P); duality relations
   - Logarithmically homogeneous barriers
   - Exercises: Conic problems (basic properties of cones; more on conic duality; complementary slackness: what does it mean?; conic duality: equivalent form)
   - Exercises: Truss Topology Design via Conic duality

6. The method of Karmarkar
   - Problem setting and assumptions
   - Homogeneous form of the problem
   - The Karmarkar potential function
   - The Karmarkar updating scheme
   - Overall complexity of the method
   - How to implement the method of Karmarkar
   - Exercises on the method of Karmarkar
7. The Primal-Dual potential reduction method
   - The idea
   - Primal-dual potential
   - The primal-dual updating
   - Overall complexity analysis
   - Large step strategy
   - Exercises: Primal-Dual method (example: Lyapunov Stability Analysis)
8. Long-Step Path-Following Methods
   - The predictor-corrector scheme
   - Dual bounds and Dual search line
   - Acceptable steps
   - Summary
   - Exercises: Long-Step Path-Following methods
9. How to construct self-concordant barriers
   - Appropriate mappings and Main Theorem
   - Barriers for epigraphs of functions of one variable
   - Fractional-Quadratic Substitution
   - Proof of Theorem; Proof of Proposition
   - Exercises on constructing self-concordant barriers (epigraphs of functions of Euclidean norm; how to guess that -ln Det x is a self-concordant barrier; fractional-quadratic cone and Truss Topology Design; geometrical mean)
10. Applications in Convex Programming
    - Linear Programming
    - Quadratically Constrained Quadratic Programming
    - Approximation in L_p norm
    - Geometrical Programming
    - Exercises on applications of interior point methods ((Inner) and (Outer) as convex programs; problem (Inner), polyhedral case; problem (Outer), polyhedral case; problem (Outer), ellipsoidal case)

11. Semidefinite Programming
    - A Semidefinite program
    - Semidefinite Programming: examples (Linear Programming; Quadratically Constrained Quadratic Programming; minimization of Largest Eigenvalue and Lovasz Capacity of a graph; dual bounds in Boolean Programming; problems arising in Control)
    - Interior point methods for Semidefinite Programming
    - Exercises on Semidefinite Programming (sums of eigenvalues and singular values)

Hints to exercises

Solutions to exercises


## Chapter 1. Introduction to the Course

What we are about to study this semester are the theory and the applications of interior point polynomial time methods in Convex Programming. Today, in the introductory lecture, I am not going to prove theorems or present algorithms. My goal is to explain what the course is about, what interior point methods are, and why so many researchers and practitioners are now deeply involved in this new area.

### 1.1 Some history

The modern theory of polynomial time interior point methods takes its origin in the seminal paper of Narendra Karmarkar published in 1984. Now, after 10 years, there are hundreds of researchers working in the area, and thousands of papers and preprints on the subject. The electronic bibliography on interior point methods collected and maintained by Dr. Eberhard Kranich, although far from complete, now contains over 1,500 entries. For the Optimization community, which is not so large, this is a tremendous concentration of effort in a single area, surely incomparable with anything that happened in previous years. Although at the moment the majority of papers on interior point methods deal with theoretical issues, the practical yield is also very remarkable. It suffices to say that the Karmarkar algorithm for Linear Programming was used as the workhorse for US Army logistic planning (i.e., planning of all kinds of supplies) in the Gulf War. Another interior point method for Linear Programming, the so-called primal-dual one, forms the nucleus of the extremely efficient and now very popular software package OSL. Let me present a quotation from G. Dantzig: "At the present time (1990), interior algorithms are in open competition with variants of the simplex methods" 1). It means something when newborn methods can be competitive with the Simplex method, an extremely powerful algorithm polished for almost 50 years by thousands of people.
Now let me switch from the style of advertisements to the normal one. What actually happened in 1984 was the appearance of a new iterative polynomial-time algorithm for Linear Programming. We already know what a polynomial time algorithm for LP means: recall the lecture about the Ellipsoid method and the Khachiyan theorem on the polynomial solvability of LP. As we remember, Khachiyan proved in 1979 that Linear Programming is polynomially solvable, namely, that an LP problem with rational coefficients, m inequality constraints and n variables can be solved exactly in O(n^3 (n + m) L) arithmetic operations, L being the input length of the problem, i.e., the total binary length of the numerical data

1) History of Mathematical Programming, J.K. Lenstra, A.H.G. Rinnooy Kan, A. Schrijver, Eds., CWI, North-Holland.

specifying the problem instance. The new method of Karmarkar possessed a complexity bound of O(m^(3/2) n^2 L) operations. In the case, standard for complexity analysis, of more or less square problems, m = O(n), the former estimate becomes O(n^4 L), the latter O(n^3.5 L). Thus, there was some progress in complexity. And it can be said for sure that neither this moderate progress nor the remarkable elegance of the new algorithm could ever have caused a revolution in Optimization. What indeed was a sensation, what inspired extremely intensive activity in the new area and in a few years resulted in significant theoretical and computational progress, was the claim that in real-world computations the new algorithm was orders of magnitude more efficient than the Simplex method. Let me explain why this was a sensation. It is known that the Simplex method is not polynomial: there exist bad problem instances where the number of pivotings grows exponentially with the dimension of the instance. Thus, any polynomial time algorithm for LP, the Ellipsoid one, the method of Karmarkar or whatever else, is for sure incomparably better in its worst-case behaviour than the Simplex method. But this is the theoretical worst-case behaviour which, as demonstrated by almost 50 years of practice, never occurs in real-world applications; from the practical viewpoint, the Simplex method is an extremely efficient algorithm with fairly low empirical complexity; this is why the method is able to solve very large-scale real-world LP problems in reasonable time. In contrast to this, the Ellipsoid method works more or less in accordance with its theoretical worst-case complexity bound, so that in practical computations this theoretically good method is by far dominated by the Simplex method even on very small problems with tens of variables and constraints.
If the method of Karmarkar also behaved according to its theoretical complexity bound, it would be only slightly better than the Ellipsoid method and still incomparably worse than the Simplex method. The point, however, is that the actual behaviour of the method of Karmarkar turned out to be much better than predicted by the worst-case theoretical complexity bound. This phenomenon, combined with the theoretical advantages of a polynomial time algorithm, not the latter advantages alone (same as, I believe, not the empirical behaviour of the method alone), inspired an actual revolution in optimization which continues today and will hardly terminate in the near future.

I have said something about the birth of the interior point science. As often happens in our field, it later turned out that this was the second birth; the first one took place in 1967 in Russia, where Ilya Dikin, then a Ph.D. student of Leonid Kantorovich, invented what is now called the affine scaling algorithm for LP. This algorithm, which is hardly polynomial in theory, is a certain simplification of the method of Karmarkar which shares all the practical advantages of the basic Karmarkar algorithm; thus, as a computational tool, interior point methods have existed at least since 1967. A good question is why this computational tool, which is now so much in fashion, was completely overlooked in the West, as it was in Russia. I think this happened for two reasons: first, Dikin came too early, when there was no interest in iterative procedures for LP; a newborn iterative procedure, even one of great potential, could hardly overcome, as a practical tool, the perfectly polished Simplex method, and theoretical complexity issues in those years did not bother optimization people (even today we do not know whether the theoretical complexity of the Dikin algorithm is better than that of the Simplex method; and in 1967 the question itself could hardly have occurred).
Second, the Dikin algorithm appeared in Russia, where there was neither a hardware base for Dikin to perform large-scale tests of his algorithm, nor a social demand for solving large-scale LP problems, so it was almost impossible to realize the practical potential of the new algorithm and to convince people in Russia, not to speak of the West, that this was something worth attention. Thus, although the prehistory of the interior point technique for LP started in 1967, the actual history of the subject started only in 1984. It would be impossible to outline the numerous significant contributions to the field made since then; it would require mentioning tens, if not hundreds, of authors. There is, however, one contribution which must be indicated explicitly.

I mean the second cornerstone of the subject, the paper of James Renegar (1986), where the first path-following polynomial time interior point method for LP was developed. The efficiency estimate of this method was better than that of the method of Karmarkar, namely, O(n^3 L) 2), cubic in the dimension, the same as for classical methods of solving systems of linear equations; up to now this is the best known theoretical complexity bound for LP. Besides this remarkable theoretical advantage, the method of Renegar possesses an important advantage in, let me say, the human dimension: the method belongs to a quite classical scheme, well known in Optimization, in contrast to the rather unusual Ellipsoid and Karmarkar algorithms. The paper of Renegar was extremely important for the understanding of the new methods, and it, same as the slightly later independent paper of Clovis Gonzaga with a close result, brought the area into a position very favourable for future developments.

Up to this moment I was speaking about interior point methods for Linear Programming, and this reflects the actual history of the subject: not only were the first interior point methods developed for this case, but until the very last years the main activity in the field, both theoretical and computational, was focused on Linear Programming and the very close Linearly Constrained Quadratic Programming. To extend the approach to more general classes of problems was actually a challenge: the original constructions and proofs heavily exploited the polyhedral structure of the feasible domain of an LP problem, and in order to pass to the nonlinear case it was necessary to realize what the deep intrinsic nature of the methods is.
This latter problem was solved in a series of papers of Yurii Nesterov in 1988; the ideas of these papers form the basis of the theory this course is devoted to, a theory which has now become a kind of standard for the unified explanation and development of polynomial time interior point algorithms for convex problems, both linear and nonlinear. To present this theory and its applications is the goal of my course. In the remaining part of this introductory lecture I am going to explain what we are looking for and what our general strategy will be.

### 1.2 The goal: polynomial time methods

I have declared that the purpose of the theory to be presented is the development of polynomial time algorithms for convex problems. Let me start by explaining what a polynomial time method is. Consider a family of convex problems

(p): minimize f(x) s.t. g_j(x) ≤ 0, j = 1,..., m, x ∈ G

of a given analytical structure, like the family of LP problems, or Linearly Constrained Quadratic problems, or Quadratically Constrained Quadratic ones, etc. The only formal assumption on the family is that a problem instance p from it is identified by a finite-dimensional data vector D(p); normally you can understand this vector as the collection of the numeric coefficients in the analytical expressions for the objective and the constraints; these expressions themselves are fixed by the description of the family. The dimension of the data vector is called the size l(p) of the problem instance. A numerical method for solving problems from the family is a routine which, given on input the data vector, generates a sequence of approximate solutions to the problem in such a way that each of these solutions is obtained in finitely many operations of precise real arithmetic, like the four arithmetic operations, taking square roots, exponents, logarithms and other elementary functions; each operand in an operation is either an entry of the data vector or the result of one of the preceding operations.
We call a numerical method convergent if, for any positive ε and for any problem instance p from the family, the approximate solutions x_i

2) Recall that we are speaking about almost square problems, with the number m of inequalities of order of the number n of variables.

generated by the method, starting with a certain i = i*(ε, p), are ε-solutions to the problem, i.e., they belong to G and satisfy the relations

f(x_i) − f* ≤ ε,   g_j(x_i) ≤ ε,   j = 1,..., m

(f* is the optimal value of the problem). We call a method polynomial if it is convergent and the arithmetic cost C(ε, p) of an ε-solution, i.e., the total number of arithmetic operations at the first i*(ε, p) steps of the method as applied to p, admits an upper bound of the form

C(ε, p) ≤ π(l(p)) ln(V(p)/ε),

where π is a certain polynomial independent of the data and V(p) is a certain data-dependent scale factor. The ratio V(p)/ε can be interpreted as the relative accuracy which corresponds to the absolute accuracy ε, and the quantity ln(V(p)/ε) can be thought of as the number of accuracy digits in an ε-solution. With this interpretation, the polynomiality of a method means that for this method the arithmetic cost of an accuracy digit is bounded from above by a polynomial of the problem size, and this polynomial can be thought of as the characteristic of the complexity of the method.

It is reasonable to compare this approach with the information-based approach we dealt with in the previous course. In information-based complexity theory the problem was assumed to be represented by an oracle, a black box, so that a method, starting its work, had no information on the instance; this information was accumulated via sequential calls to the oracle, and the number of these calls sufficient to find an ε-solution was thought of as the complexity of the method; we included in this complexity neither the computational effort of the oracle nor the arithmetic cost of processing the answers of the oracle by the method. In contrast to this, in our present approach the data specifying the problem instance form the input to the method, so that the method from the very beginning possesses complete global information on the problem instance.
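The notion of an ε-solution is easy to state in code. Below is a minimal Python sketch of my own (the toy instance, function names and tolerance are illustrative, not part of the course): it checks the two displayed conditions f(x) − f* ≤ ε and g_j(x) ≤ ε for a one-dimensional LP instance.

```python
def is_eps_solution(x, f, gs, f_star, eps):
    """Check the eps-solution conditions f(x) - f* <= eps and g_j(x) <= eps.

    f      -- objective callable
    gs     -- list of constraint callables g_j (feasibility means g_j(x) <= 0)
    f_star -- optimal value of the instance
    """
    return f(x) - f_star <= eps and all(g(x) <= eps for g in gs)

# Toy LP instance: minimize x subject to x >= 1, i.e. g(x) = 1 - x <= 0;
# the optimal value is f* = 1.
f = lambda x: x
gs = [lambda x: 1.0 - x]
print(is_eps_solution(1.001, f, gs, f_star=1.0, eps=1e-2))  # True
print(is_eps_solution(2.0, f, gs, f_star=1.0, eps=1e-2))    # False
```

Note that for a point belonging to G the constraint conditions hold automatically (g_j(x) ≤ 0 ≤ ε); the relaxed check above merely mirrors the displayed relations.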
What the method should do is transform this input information into an ε-solution to the problem, and the complexity of the method (which now might be called algorithmic or combinatorial complexity) is defined by the arithmetic cost of this transformation. It is clear that our new approach is not as general as the information-based one, since now we can speak only about families of problems with a reasonable analytic structure (otherwise the notion of the data vector becomes senseless). As compensation, combinatorial complexity is a much more adequate measure of the actual computational effort than information-based complexity.

Now that I have outlined our final goal, let me give you an idea of how this goal will be achieved. In what follows we will develop methods of two different types: path-following and potential reduction methods; the LP prototypes of these methods are, respectively, the methods of Renegar and Gonzaga, which are path-following routines, and the method of Karmarkar, which is a potential reduction one. In contrast to the actual historical order, we shall start with the quite traditional path-following scheme, since otherwise we would be unprepared to understand what in fact happens in the methods of the Karmarkar type.

### 1.3 The path-following scheme

The, let me say, classical stage in developing the scheme is summarized in the seminal monograph of Fiacco and McCormick (1967). Assume we intend to solve a convex program

(P): minimize f(x) s.t. g_i(x) ≤ 0, i = 1,..., m

associated with smooth (at least twice continuously differentiable) convex functions f, g_i on R^n. Let

G = {x ∈ R^n : g_i(x) ≤ 0, i = 1,..., m}

be the feasible domain of the problem; assume for the sake of simplicity that this domain is bounded, and let the constraints {g_i} satisfy the Slater condition:

∃ x: g_i(x) < 0, i = 1,..., m.

Under these assumptions the feasible domain G is a solid: a closed and bounded convex set in R^n with a nonempty interior. In the 1960s people believed that it was not difficult to solve unconstrained smooth convex problems, and it was very natural to try to reduce the constrained problem (P) to a series of unconstrained problems. To this end it was suggested to associate with the feasible domain G of problem (P) a barrier, an interior penalty function F(x): a smooth convex function F defined on the interior of G and tending to ∞ as we approach the boundary of G from inside:

lim_{i→∞} F(x_i) = ∞ for any sequence {x_i ∈ int G} with lim_{i→∞} x_i ∈ ∂G.

It is also reasonable to assume that F is nondegenerate, i.e.,

F''(x) > 0 for all x ∈ int G

(here "> 0" stands for "positive definite"). Given such a barrier, one can associate with it and with the objective f of (P) the barrier-generated family comprised of the problems

(P_t): minimize F_t(x) ≡ t f(x) + F(x).

Here the penalty parameter t is positive. Of course, x in (P_t) is subject to the induced restriction x ∈ int G, since F_t is +∞ outside the latter set. From our assumptions on G it immediately follows that

a) each of the problems (P_t) has a unique solution x*(t); this solution is, of course, in the interior of G;

b) the path x*(t) of solutions to (P_t) is a continuous function of t ∈ [0, ∞), and all its limiting points as t → ∞ belong to the set of optimal solutions of (P).
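To see the path x*(t) concretely, one can take a toy instance: minimize f(x) = x over G = [−1, 1] with the logarithmic barrier F(x) = −ln(1 − x) − ln(1 + x). This instance and the Python sketch below are my own illustration, not part of the course; it locates the minimizer of F_t(x) = t x + F(x) by bisection on F_t' and shows the path drifting toward the true optimum x = −1 as t grows.

```python
import math

def f(x):      # objective of (P): minimize x over G = [-1, 1]
    return x

def F(x):      # logarithmic barrier for G = [-1, 1]
    return -math.log(1.0 - x) - math.log(1.0 + x)

def Ft(x, t):  # penalized objective F_t(x) = t*f(x) + F(x)
    return t * f(x) + F(x)

def x_star(t):
    """Minimizer of F_t on int G, via bisection on F_t'(x) = t + 1/(1-x) - 1/(1+x)."""
    def dFt(x):
        return t + 1.0 / (1.0 - x) - 1.0 / (1.0 + x)
    lo, hi = -1.0 + 1e-12, 1.0 - 1e-12   # F_t' goes from -inf to +inf on int G
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dFt(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, x_star(t))   # the path x*(t) approaches the optimum x = -1
```

Running the loop shows x*(t) moving monotonically from the interior of G toward the boundary point x = −1, exactly the behaviour stated in a) and b) above.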
It immediately follows that if we are able to follow the path x*(t) along a certain sequence {t_i} of values of the penalty parameter, i.e., know how to form good enough approximations x_i ∈ int G to the points x*(t_i), say, such that

x_i − x*(t_i) → 0 as i → ∞,   (1.1)

then we know how to solve (P): b) and (1.1) imply that all limiting points of the sequence of our iterates {x_i} belong to the optimal set of (P).

Now, to be able to meet requirement (1.1) is basically the same as being able to solve each of the penalized problems (P_t) to a prescribed accuracy. What are our abilities in this respect? (P_t) is a minimization problem with a smooth and nondegenerate (i.e., with nonsingular Hessian) objective. Of course, this objective is defined on a proper open convex subset of R^n rather than on the whole of R^n, so that the problem, rigorously speaking, is a constrained one, same as the initial problem (P). The constrained nature of (P_t) is, however, nothing but an illusion: the solution to the problem is unique and belongs to the interior of G, and any converging minimization method of relaxation type (i.e., one monotonically decreasing the

value of the objective along the sequence of iterates) started at an interior point of G would automatically keep the iterates away from the boundary of G (since F_t, together with F, tends to ∞ as the argument approaches the boundary from inside); thus, qualitatively speaking, the behaviour of the method as applied to (P_t) would be the same as if the objective F_t were defined everywhere. In other words, we have basically the same possibilities for solving (P_t) as if it were an unconstrained problem with smooth and nondegenerate objective. Thus, the outlined path-following scheme indeed achieves our goal: it reduces the constrained problem (P) to a series of in fact unconstrained problems (P_t).

We have outlined our abilities to solve each particular problem (P_t) to a prescribed accuracy: to this end we can apply to the problem any relaxation iterative routine for smooth unconstrained minimization, starting the routine from an interior point of G. What we need, however, is to solve not a single problem from the family, but a sequence of these problems associated with a certain sequence of values of the penalty parameter tending to ∞. Of course, in principle we could choose an arbitrary sequence {t_i} and solve each of the problems (P_{t_i}) independently, but everybody understands that this would be senseless. What makes sense is to use the approximate solution x_i to the previous problem (P_{t_i}) as the starting point when solving the new problem (P_{t_{i+1}}). Since x*(t), as we have just mentioned, is a continuous function of t, a good approximate solution to the previous problem will be a good initial point for solving the new one, provided that t_{i+1} − t_i is not too large; this latter assumption can be ensured by a proper policy of updating the penalty parameter.
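The warm-start idea is easy to demonstrate on a toy instance: minimize x over G = [−1, 1] with the logarithmic barrier F(x) = −ln(1 − x) − ln(1 + x), so that F_t(x) = t x + F(x). The damping rule and the updating factor 1.5 in the Python sketch below are ad hoc choices of mine; the point is only that each Newton solve starts from the approximate solution of the previous one.

```python
# Warm-started path-following on: minimize x over G = [-1, 1],
# penalized objective F_t(x) = t*x - ln(1-x) - ln(1+x).
def dFt(x, t):   # F_t'(x)
    return t + 1.0 / (1.0 - x) - 1.0 / (1.0 + x)

def d2Ft(x):     # F_t''(x) = F''(x), independent of t
    return 1.0 / (1.0 - x) ** 2 + 1.0 / (1.0 + x) ** 2

def newton(x, t, tol=1e-10):
    """Damped Newton minimization of F_t, started at the previous iterate."""
    for _ in range(100):
        step = dFt(x, t) / d2Ft(x)
        # damp the step so the next iterate stays strictly inside G
        while abs(x - step) >= 1.0:
            step *= 0.5
        x -= step
        if abs(dFt(x, t)) < tol:
            break
    return x

t, x = 1.0, 0.0            # start at the center of G
for _ in range(30):
    x = newton(x, t)       # warm start: previous x_i initializes the next solve
    t *= 1.5               # a moderate penalty-updating policy
print(x)                   # close to the optimum x = -1
```

Each auxiliary problem here takes only a handful of Newton steps precisely because its starting point x_i is already near x*(t_i), while t_{i+1}/t_i = 1.5 keeps consecutive targets on the path close to each other.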
To implement the aforementioned scheme, one should specify its main blocks, namely, choose somehow:

1) the barrier F;

2) the workhorse, i.e., the unconstrained minimization method for solving the problems (P_t), along with the stopping criterion for the method;

3) the policy for updating the penalty parameter.

The traditional recommendations here were rather diffuse. The qualitative theory insisted on at least C^2-smoothness and nondegeneracy of the barrier, and this was basically all; within this class of barriers there were no clear theoretical priorities. What people were advised to do was:

for 1): to choose F as a certain smoothness-preserving aggregate of the g_i, e.g.,

F(x) = Σ_{i=1}^m (1 / (−g_i(x)))^α   (1.2)

with some α > 0, or

F(x) = −Σ_{i=1}^m ln(−g_i(x)),   (1.3)

or something else of this type; the idea was that the local information on this barrier required by the workhorse should be easily computable via similar information on the constraints g_i;

for 2): to choose the Newton method as the workhorse; this recommendation came from computational experience and had no serious theoretical justification;

for 3): qualitatively, updating the penalty at a high rate, we reduce the number of auxiliary unconstrained problems at the cost of making each of them harder (since for large t_{i+1} − t_i a good approximation of x*(t_i) may be a bad starting point for solving the updated problem); a low rate of updating the penalty simplifies the auxiliary problems but increases the number of problems to be solved before a prescribed value of the penalty (which corresponds to the required accuracy of solving (P)) is achieved. The traditional theory was unable to offer


### Mathematics (MAT) MAT 061 Basic Euclidean Geometry 3 Hours. MAT 051 Pre-Algebra 4 Hours

MAT 051 Pre-Algebra Mathematics (MAT) MAT 051 is designed as a review of the basic operations of arithmetic and an introduction to algebra. The student must earn a grade of C or in order to enroll in MAT

### LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

### 1 Polyhedra and Linear Programming

CS 598CSC: Combinatorial Optimization Lecture date: January 21, 2009 Instructor: Chandra Chekuri Scribe: Sungjin Im 1 Polyhedra and Linear Programming In this lecture, we will cover some basic material

### Section 1.1. Introduction to R n

The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to

Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650

### CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e.

CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. This chapter contains the beginnings of the most important, and probably the most subtle, notion in mathematical analysis, i.e.,

### 1 Solving LPs: The Simplex Algorithm of George Dantzig

Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.

### Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 10

Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 10 Boundary Value Problems for Ordinary Differential Equations Copyright c 2001. Reproduction

### Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

### 10. Proximal point method

L. Vandenberghe EE236C Spring 2013-14) 10. Proximal point method proximal point method augmented Lagrangian method Moreau-Yosida smoothing 10-1 Proximal point method a conceptual algorithm for minimizing

### The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method

The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method Robert M. Freund February, 004 004 Massachusetts Institute of Technology. 1 1 The Algorithm The problem

### Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

### Interior Point Methods and Linear Programming

Interior Point Methods and Linear Programming Robert Robere University of Toronto December 13, 2012 Abstract The linear programming problem is usually solved through the use of one of two algorithms: either

### Advanced Higher Mathematics Course Assessment Specification (C747 77)

Advanced Higher Mathematics Course Assessment Specification (C747 77) Valid from August 2015 This edition: April 2016, version 2.4 This specification may be reproduced in whole or in part for educational

### Course 221: Analysis Academic year , First Semester

Course 221: Analysis Academic year 2007-08, First Semester David R. Wilkins Copyright c David R. Wilkins 1989 2007 Contents 1 Basic Theorems of Real Analysis 1 1.1 The Least Upper Bound Principle................

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

### Identify examples of field properties: commutative, associative, identity, inverse, and distributive.

Topic: Expressions and Operations ALGEBRA II - STANDARD AII.1 The student will identify field properties, axioms of equality and inequality, and properties of order that are valid for the set of real numbers

### I. Solving Linear Programs by the Simplex Method

Optimization Methods Draft of August 26, 2005 I. Solving Linear Programs by the Simplex Method Robert Fourer Department of Industrial Engineering and Management Sciences Northwestern University Evanston,

### Diploma Plus in Certificate in Advanced Engineering

Diploma Plus in Certificate in Advanced Engineering Mathematics New Syllabus from April 2011 Ngee Ann Polytechnic / School of Interdisciplinary Studies 1 I. SYNOPSIS APPENDIX A This course of advanced

### Overview of Math Standards

Algebra 2 Welcome to math curriculum design maps for Manhattan- Ogden USD 383, striving to produce learners who are: Effective Communicators who clearly express ideas and effectively communicate with diverse

### 1 if 1 x 0 1 if 0 x 1

Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

### Mathematics for Economics (Part I) Note 5: Convex Sets and Concave Functions

Natalia Lazzati Mathematics for Economics (Part I) Note 5: Convex Sets and Concave Functions Note 5 is based on Madden (1986, Ch. 1, 2, 4 and 7) and Simon and Blume (1994, Ch. 13 and 21). Concave functions

### Linear Programming: Introduction

Linear Programming: Introduction Frédéric Giroire F. Giroire LP - Introduction 1/28 Course Schedule Session 1: Introduction to optimization. Modelling and Solving simple problems. Modelling combinatorial

### Some Notes on Taylor Polynomials and Taylor Series

Some Notes on Taylor Polynomials and Taylor Series Mark MacLean October 3, 27 UBC s courses MATH /8 and MATH introduce students to the ideas of Taylor polynomials and Taylor series in a fairly limited

### Quotient Rings and Field Extensions

Chapter 5 Quotient Rings and Field Extensions In this chapter we describe a method for producing field extension of a given field. If F is a field, then a field extension is a field K that contains F.

### Numerisches Rechnen. (für Informatiker) M. Grepl J. Berger & J.T. Frings. Institut für Geometrie und Praktische Mathematik RWTH Aachen

(für Informatiker) M. Grepl J. Berger & J.T. Frings Institut für Geometrie und Praktische Mathematik RWTH Aachen Wintersemester 2010/11 Problem Statement Unconstrained Optimality Conditions Constrained

### Optimization of Design. Lecturer:Dung-An Wang Lecture 12

Optimization of Design Lecturer:Dung-An Wang Lecture 12 Lecture outline Reading: Ch12 of text Today s lecture 2 Constrained nonlinear programming problem Find x=(x1,..., xn), a design variable vector of

### Geometry of Linear Programming

Chapter 2 Geometry of Linear Programming The intent of this chapter is to provide a geometric interpretation of linear programming problems. To conceive fundamental concepts and validity of different algorithms

### 24. The Branch and Bound Method

24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

### 3.7 Non-autonomous linear systems of ODE. General theory

3.7 Non-autonomous linear systems of ODE. General theory Now I will study the ODE in the form ẋ = A(t)x + g(t), x(t) R k, A, g C(I), (3.1) where now the matrix A is time dependent and continuous on some

### We call this set an n-dimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P.

Volumes of parallelograms 1 Chapter 8 Volumes of parallelograms In the present short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to

### Applied Algorithm Design Lecture 5

Applied Algorithm Design Lecture 5 Pietro Michiardi Eurecom Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 1 / 86 Approximation Algorithms Pietro Michiardi (Eurecom) Applied Algorithm Design

### Chapter 6. Cuboids. and. vol(conv(p ))

Chapter 6 Cuboids We have already seen that we can efficiently find the bounding box Q(P ) and an arbitrarily good approximation to the smallest enclosing ball B(P ) of a set P R d. Unfortunately, both

### The Mathematics of Origami

The Mathematics of Origami Sheri Yin June 3, 2009 1 Contents 1 Introduction 3 2 Some Basics in Abstract Algebra 4 2.1 Groups................................. 4 2.2 Ring..................................

### Notes V General Equilibrium: Positive Theory. 1 Walrasian Equilibrium and Excess Demand

Notes V General Equilibrium: Positive Theory In this lecture we go on considering a general equilibrium model of a private ownership economy. In contrast to the Notes IV, we focus on positive issues such

### Math Department Student Learning Objectives Updated April, 2014

Math Department Student Learning Objectives Updated April, 2014 Institutional Level Outcomes: Victor Valley College has adopted the following institutional outcomes to define the learning that all students

### Date: April 12, 2001. Contents

2 Lagrange Multipliers Date: April 12, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 12 2.3. Informative Lagrange Multipliers...........

### 4. Factor polynomials over complex numbers, describe geometrically, and apply to real-world situations. 5. Determine and apply relationships among syn

I The Real and Complex Number Systems 1. Identify subsets of complex numbers, and compare their structural characteristics. 2. Compare and contrast the properties of real numbers with the properties of

### Mathematics Review for MS Finance Students

Mathematics Review for MS Finance Students Anthony M. Marino Department of Finance and Business Economics Marshall School of Business Lecture 1: Introductory Material Sets The Real Number System Functions,

### Chap 4 The Simplex Method

The Essence of the Simplex Method Recall the Wyndor problem Max Z = 3x 1 + 5x 2 S.T. x 1 4 2x 2 12 3x 1 + 2x 2 18 x 1, x 2 0 Chap 4 The Simplex Method 8 corner point solutions. 5 out of them are CPF solutions.

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### 1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

### Linear Programming for Optimization. Mark A. Schulze, Ph.D. Perceptive Scientific Instruments, Inc.

1. Introduction Linear Programming for Optimization Mark A. Schulze, Ph.D. Perceptive Scientific Instruments, Inc. 1.1 Definition Linear programming is the name of a branch of applied mathematics that

### FIRST YEAR CALCULUS. Chapter 7 CONTINUITY. It is a parabola, and we can draw this parabola without lifting our pencil from the paper.

FIRST YEAR CALCULUS WWLCHENW L c WWWL W L Chen, 1982, 2008. 2006. This chapter originates from material used by the author at Imperial College, University of London, between 1981 and 1990. It It is is

### Lecture 3: Linear Programming Relaxations and Rounding

Lecture 3: Linear Programming Relaxations and Rounding 1 Approximation Algorithms and Linear Relaxations For the time being, suppose we have a minimization problem. Many times, the problem at hand can

### MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

### What is Linear Programming?

Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to

### THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS

THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS KEITH CONRAD 1. Introduction The Fundamental Theorem of Algebra says every nonconstant polynomial with complex coefficients can be factored into linear

### Advanced Algebra 2. I. Equations and Inequalities

Advanced Algebra 2 I. Equations and Inequalities A. Real Numbers and Number Operations 6.A.5, 6.B.5, 7.C.5 1) Graph numbers on a number line 2) Order real numbers 3) Identify properties of real numbers

### Prime Numbers and Irreducible Polynomials

Prime Numbers and Irreducible Polynomials M. Ram Murty The similarity between prime numbers and irreducible polynomials has been a dominant theme in the development of number theory and algebraic geometry.

### The Dirichlet Unit Theorem

Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if

### Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Chapter 10 Boundary Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

### Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities

Algebra 1, Quarter 2, Unit 2.1 Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities Overview Number of instructional days: 15 (1 day = 45 60 minutes) Content to be learned

### OPRE 6201 : 2. Simplex Method

OPRE 6201 : 2. Simplex Method 1 The Graphical Method: An Example Consider the following linear program: Max 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2

### (Quasi-)Newton methods

(Quasi-)Newton methods 1 Introduction 1.1 Newton method Newton method is a method to find the zeros of a differentiable non-linear function g, x such that g(x) = 0, where g : R n R n. Given a starting

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

### Polynomial Operations and Factoring

Algebra 1, Quarter 4, Unit 4.1 Polynomial Operations and Factoring Overview Number of instructional days: 15 (1 day = 45 60 minutes) Content to be learned Identify terms, coefficients, and degree of polynomials.

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

### LECTURE 6: INTERIOR POINT METHOD. 1. Motivation 2. Basic concepts 3. Primal affine scaling algorithm 4. Dual affine scaling algorithm

LECTURE 6: INTERIOR POINT METHOD 1. Motivation 2. Basic concepts 3. Primal affine scaling algorithm 4. Dual affine scaling algorithm Motivation Simplex method works well in general, but suffers from exponential-time

### (Refer Slide Time: 01:11-01:27)

Digital Signal Processing Prof. S. C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 6 Digital systems (contd.); inverse systems, stability, FIR and IIR,

### NOTES ON MEASURE THEORY. M. Papadimitrakis Department of Mathematics University of Crete. Autumn of 2004

NOTES ON MEASURE THEORY M. Papadimitrakis Department of Mathematics University of Crete Autumn of 2004 2 Contents 1 σ-algebras 7 1.1 σ-algebras............................... 7 1.2 Generated σ-algebras.........................

### Introduction to time series analysis

Introduction to time series analysis Margherita Gerolimetto November 3, 2010 1 What is a time series? A time series is a collection of observations ordered following a parameter that for us is time. Examples

### Minimize subject to. x S R

Chapter 12 Lagrangian Relaxation This chapter is mostly inspired by Chapter 16 of [1]. In the previous chapters, we have succeeded to find efficient algorithms to solve several important problems such

### 2.5 Gaussian Elimination

page 150 150 CHAPTER 2 Matrices and Systems of Linear Equations 37 10 the linear algebra package of Maple, the three elementary 20 23 1 row operations are 12 1 swaprow(a,i,j): permute rows i and j 3 3

### THREE DIMENSIONAL GEOMETRY

Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

### Vector and Matrix Norms

Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

### Algebra II Pacing Guide First Nine Weeks

First Nine Weeks SOL Topic Blocks.4 Place the following sets of numbers in a hierarchy of subsets: complex, pure imaginary, real, rational, irrational, integers, whole and natural. 7. Recognize that the

### Solution of Linear Systems

Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

### MAT2400 Analysis I. A brief introduction to proofs, sets, and functions

MAT2400 Analysis I A brief introduction to proofs, sets, and functions In Analysis I there is a lot of manipulations with sets and functions. It is probably also the first course where you have to take

### Absolute Value Programming

Computational Optimization and Aplications,, 1 11 (2006) c 2006 Springer Verlag, Boston. Manufactured in The Netherlands. Absolute Value Programming O. L. MANGASARIAN olvi@cs.wisc.edu Computer Sciences

### MOP 2007 Black Group Integer Polynomials Yufei Zhao. Integer Polynomials. June 29, 2007 Yufei Zhao yufeiz@mit.edu

Integer Polynomials June 9, 007 Yufei Zhao yufeiz@mit.edu We will use Z[x] to denote the ring of polynomials with integer coefficients. We begin by summarizing some of the common approaches used in dealing

### Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

### Lectures 5-6: Taylor Series

Math 1d Instructor: Padraic Bartlett Lectures 5-: Taylor Series Weeks 5- Caltech 213 1 Taylor Polynomials and Series As we saw in week 4, power series are remarkably nice objects to work with. In particular,

### Solving Systems of Linear Equations

LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how