# Absolute Value Programming


Computational Optimization and Applications, 1–11 (2006). © 2006 Springer Verlag, Boston. Manufactured in The Netherlands.

O. L. MANGASARIAN
Computer Sciences Department, University of Wisconsin, Madison, WI

Editor: Gilbert Strang

**Abstract.** We investigate equations, inequalities and mathematical programs involving absolute values of variables, such as the equation $Ax + B|x| = b$, where $A$ and $B$ are arbitrary $m \times n$ real matrices. We show that this absolute value equation is NP-hard to solve, and that solving it with $B = -I$ solves the general linear complementarity problem. We give sufficient optimality conditions and duality results for absolute value programs, as well as theorems of the alternative for absolute value inequalities. We also propose concave minimization formulations for absolute value equations that are solved by a finite succession of linear programs. These algorithms terminate at a local minimum that solves the absolute value equation in almost all solvable random problems tried.

**Keywords:** absolute value (AV) equations, AV algorithm, AV theorems of alternative, AV duality

## 1. Introduction

We consider problems involving absolute values of variables, such as:

$$Ax + B|x| = b, \tag{1}$$

where $A \in R^{m \times n}$, $B \in R^{m \times n}$ and $b \in R^m$. As will be shown, the general linear complementarity problem [2, 3], which subsumes many mathematical programming problems, can be formulated as an absolute value (AV) equation such as (1). Even though these problems involving absolute values are NP-hard, as will be shown in Section 2, they share some very interesting properties with those of linear systems. For example, in Section 4 we formulate optimization problems with AV constraints and give optimality and duality results similar to those of linear programming, even though the problems are inherently nonconvex. Other results that the AV equation (1) shares with linear inequalities are the theorems of the alternative that we give in Section 5.
In Section 3 of the paper we propose a finite successive linearization algorithm for solving AV equations that terminates at a necessary optimality condition. This algorithm has solved all solvable random test problems given to it, for which mostly $m \ge 2n$ or $n \ge 2m$, up to size $(m,n)$ of $(2000,100)$ and $(100,2000)$. When $m = n$ and $B$ is invertible, which is the case for the linear complementarity problem formulation, we give a simpler concave minimization formulation that is also solvable by a finite succession of linear programs. Problems with $m = n$ and $n$ between 50 and 1000 were solved by this approach. Section 6 concludes the paper and poses some open questions. This work is motivated in part by the recent interesting paper of Rohn [12], where a theorem of the alternative is given for a special case of (1) with square matrices $A$

and $B$, and a linear complementarity equivalent to (1) is also given. However, both of these results are somewhat unusual in that both alternatives are given in the primal space $R^n$ of the variable $x$, instead of the primal and dual spaces as is usual [5], while the linear complementarity equivalent to (1) is the following nonstandard complementarity problem:

$$x_+ = (A+B)^{-1}(A-B)(-x)_+ + (A+B)^{-1}b. \tag{2}$$

Here $x_+$ denotes suppression of negative components, as defined below. In contrast, our result in Section 2 gives an explicit AV equation (1) in terms of the classical linear complementarity problem (3) below, while our theorems of the alternative of Section 5 are in the primal and dual spaces $R^n$ and $R^m$.

We now describe our notation. All vectors are column vectors unless transposed to a row vector by a prime $'$. The scalar (inner) product of two vectors $x$ and $y$ in the $n$-dimensional real space $R^n$ is then $x'y$, and orthogonality $x'y = 0$ is denoted by $x \perp y$. For $x \in R^n$, the 1-norm is denoted by $\|x\|_1$ and the 2-norm by $\|x\|$, while $|x|$ denotes the vector of absolute values of the components of $x$, and $x_+$ denotes the vector resulting from setting all negative components of $x$ to zero. The notation $A \in R^{m \times n}$ signifies a real $m \times n$ matrix with $i$th row $A_i$ and transpose $A'$. A vector of zeros in a real space of arbitrary dimension is denoted by $0$, and the vector of ones by $e$. The notation $\arg\min_{x \in X} f(x)$ denotes the solution set of $\min_{x \in X} f(x)$.

## 2. Problem Sources and Hardness

We will first show how to reduce any linear complementarity problem (LCP) to the AV equation (1):

$$\text{(LCP)} \qquad 0 \le z \perp Mz + q \ge 0, \tag{3}$$

where $M \in R^{n \times n}$ and $q \in R^n$. Then we show that solving (1) is NP-hard.

**Proposition 1 (LCP as an AVE)** Given the LCP (3), without loss of generality, rescale $M$ and $q$ by multiplying by a positive constant if needed, so that no eigenvalue of the rescaled $M$ is 1 and hence $(I - M)$ is nonsingular.
The rescaled LCP can be solved by solving the following AV equation and computing $z$ as indicated:

$$(I+M)(I-M)^{-1}x - |x| = -\big((I+M)(I-M)^{-1} + I\big)q, \qquad z = (I-M)^{-1}(x+q). \tag{4}$$

**Proof:** From the simple fact that for any two real numbers $a$ and $b$:

$$a + b = |a - b| \iff a \ge 0,\ b \ge 0,\ ab = 0, \tag{5}$$

it follows that the LCP (3) is equivalent to:

$$z + Mz + q = |z - Mz - q|. \tag{6}$$

Defining $x$ from (4) as $x = (I-M)z - q$, equation (6) becomes:

$$(I+M)(I-M)^{-1}(x+q) + q = |x|, \tag{7}$$

which is the AV equation of (4). Hence, solving the AV equation (4) gives us a solution of the LCP (3) through the definition of $z$ in (4).

Conversely, it can also be shown, under the somewhat strong assumption that $B$ is invertible, and by rescaling so that $(I + B^{-1}A)$ is also invertible, that the AV equation (1) can be solved by solving an LCP (3). We will not give details of that reduction here, because it is not essential to the present work; we merely state the LCP to be solved in terms of $A$, $B$ and $b$ of the AV equation (1):

$$0 \le z \perp \big(I - 2(I + B^{-1}A)^{-1}\big)z + (I + B^{-1}A)^{-1}B^{-1}b \ge 0. \tag{8}$$

A solution $x$ to the AV equation (1) can be computed from a solution $z$ of (8) as follows:

$$x = (I + B^{-1}A)^{-1}(-2z + B^{-1}b). \tag{9}$$

There are other linear complementarity formulations of the AV equation (1), which we shall not go into here. We now show that solving the AV equation (1) is NP-hard, by reducing the NP-hard knapsack feasibility problem to an AV equation (1).

**Proposition 2** Solving the AV equation (1) is NP-hard.

**Proof:** The NP-hard knapsack feasibility problem consists of finding an $n$-dimensional binary variable $z$ such that $a'z = d$, where $a$ is an $n$-dimensional integer vector and $d$ is an integer. It has been shown [1, 6] that this problem is equivalent to the LCP (3) with the following values of $M$ and $q$:

$$M = \begin{bmatrix} -I & 0 & 0 \\ e' & n & 0 \\ -e' & 0 & -n \end{bmatrix}, \qquad q = \begin{bmatrix} a \\ -d \\ d \end{bmatrix}. \tag{10}$$

Since the eigenvalues of $M$ are $\{-n, n, -1, \ldots, -1\}$, Proposition 1 applies without rescaling, and the NP-hard knapsack problem can be reduced to the AV equation (4), which is a special case of (1). Hence solving (1) is NP-hard.

Other sources for AV equations are interval linear equations [11], as well as constraints in AV programming problems, where absolute values appear in the constraints and the objective function; we discuss these in Section 4. Note that as a consequence of Proposition 2, AV programs are NP-hard as well.
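The reduction of Proposition 1 is easy to exercise numerically. The following sketch is not part of the paper; it assumes only NumPy, and builds an LCP with a known solution by construction in order to check that the AV equation (4) holds for $x = (I-M)z - q$ and that $z$ is recovered as indicated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Build an LCP (3) with a known solution: choose complementary z >= 0 and
# w = Mz + q >= 0 with z'w = 0, pick a random M, and back out q = w - Mz.
M = rng.standard_normal((n, n))
z = np.array([1.0, 0.0, 2.0, 0.0, 3.0])
w = np.array([0.0, 4.0, 0.0, 5.0, 0.0])
q = w - M @ z

# Proposition 1 requires that no eigenvalue of M equals 1 (rescale otherwise).
assert not np.any(np.isclose(np.linalg.eigvals(M), 1.0))

# AV equation (4): (I+M)(I-M)^{-1} x - |x| = -((I+M)(I-M)^{-1} + I) q,
# solved by x = (I-M)z - q, with z recovered as z = (I-M)^{-1}(x + q).
I = np.eye(n)
C = (I + M) @ np.linalg.inv(I - M)
x = (I - M) @ z - q

ave_residual = np.max(np.abs(C @ x - np.abs(x) + (C + I) @ q))
z_recovered = np.linalg.solve(I - M, x + q)
print(ave_residual, np.max(np.abs(z_recovered - z)))
```

The check works because, by the fact (5) applied componentwise, $|x| = |z - (Mz+q)| = z + (Mz+q)$ whenever $z$ solves the LCP, which is exactly what makes (4) hold.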
We turn now to a method for solving AV equations.

## 3. Successive Linearization Algorithm via Concave Minimization

We propose here a method for solving AV equations based on the minimization of a concave function on a polyhedral set by solving a finite succession of linear programs. We shall give two distinct algorithms based on this method, one for $m \ne n$

and a simpler one for $m = n$. Before stating the concave minimization problem, we prove a lemma which does not seem to have been given before and which extends the linear programming perturbation results of [8] to nondifferentiable concave perturbations.

**Lemma 1** Consider the concave minimization problem:

$$\min_{z \in Z}\ d'z + \epsilon f(z), \qquad Z = \{z \mid Hz \ge h\}, \tag{11}$$

where $d \in R^l$, $f(z)$ is a concave function on $R^l$, $H \in R^{k \times l}$ and $h \in R^k$. Suppose that $d'z + \epsilon f(z)$ is bounded below on $Z$ for $\epsilon \in (0, \bar\epsilon]$ for some $\bar\epsilon > 0$. Suppose also that $Z$ contains no lines going to infinity in both directions. Then there exists an $\hat\epsilon \le \bar\epsilon$ such that for all $\epsilon \in (0, \hat\epsilon]$ the concave minimization problem (11) is solvable by a vertex solution $\bar z$ that minimizes $f(z)$ over the nonempty solution set $\tilde Z$ of the linear program $\min_{z \in Z} d'z$.

**Proof:** By [10, Corollary ] the concave minimization problem (11) has a vertex solution for each $\epsilon \in (0, \bar\epsilon]$. Since $Z$ has a finite number of vertices, we can construct a decreasing sequence $\{\epsilon_i\}_{i=0}^{\infty} \downarrow 0$ such that the same vertex $\bar z$ of $Z$ occurs as the solution of (11) for every $\epsilon_i$. Since for $i \ne j$:

$$d'\bar z + \epsilon_i f(\bar z) \le d'z + \epsilon_i f(z),\ \forall z \in Z, \qquad d'\bar z + \epsilon_j f(\bar z) \le d'z + \epsilon_j f(z),\ \forall z \in Z,$$

it follows that for $\lambda \in [0,1]$:

$$d'\bar z + \big((1-\lambda)\epsilon_i + \lambda\epsilon_j\big) f(\bar z) \le d'z + \big((1-\lambda)\epsilon_i + \lambda\epsilon_j\big) f(z),\ \forall z \in Z.$$

Hence:

$$d'\bar z + \epsilon f(\bar z) \le d'z + \epsilon f(z),\ \forall z \in Z,\ \forall \epsilon \in (0, \hat\epsilon = \epsilon_0].$$

Letting $\epsilon \to 0$ gives:

$$d'\bar z \le d'z,\ \forall z \in Z.$$

Hence the solution set $\tilde Z$ of $\min_{z \in Z} d'z$ is nonempty, and for $\epsilon \in (0, \hat\epsilon = \epsilon_0]$:

$$\epsilon f(\bar z) \le (d'z - d'\bar z) + \epsilon f(z) = \epsilon f(z),\ \forall z \in \tilde Z \subset Z.$$

We formulate now the concave minimization problem that solves the AV equation (1).

**Proposition 3 (AV Equation as Concave Minimization)** Consider the concave minimization problem:

$$\min_{(x,t,s) \in R^{n+n+m}}\ \epsilon(-e'|x| + e't) + e's \quad \text{such that} \quad -s \le Ax + Bt - b \le s, \quad -t \le x \le t. \tag{12}$$

Without loss of generality, the feasible region is assumed not to contain straight lines going to infinity in both directions. For each $\epsilon > 0$, the concave minimization problem (12) has a vertex solution. For any $\epsilon \in (0, \hat\epsilon]$ for some $\hat\epsilon > 0$, any solution $(\bar x, \bar t, \bar s)$ of (12) satisfies:

$$(\bar x, \bar t) \in \bar X\bar T = \arg\min_{(x,t) \in R^{n+n}} \|Ax + Bt - b\|_1, \qquad \|\bar t\|_1 - \|\bar x\|_1 = \min_{(x,t) \in \bar X\bar T} \|t\|_1 - \|x\|_1. \tag{13}$$

In particular, if the AV equation (1) is solvable, then:

$$|\bar x| = \bar t, \qquad A\bar x + B|\bar x| = b. \tag{14}$$

**Proof:** Note that the feasible region of (12) is nonempty and its concave objective function is bounded below by zero, since $-e'|x| + e't \ge 0$ follows from the second constraint $|x| \le t$. Hence by [10, Corollary ] the concave minimization problem (12) has a vertex solution, provided it has no lines going to infinity in both directions. The latter fact can easily be ensured by making the standard transformation $x = x^1 - x^2$, $(x^1, x^2) \ge 0$, since $(s,t) \ge 0$ follows from the problem constraints. For simplicity we shall not make this transformation, and assume that (12) has a vertex solution. (That this transformation is not necessary is borne out by our numerical tests.) The results of (13) follow from Lemma 1, while those of (14) hold because the minimum in (13) is zero and $|\bar x| = \bar t$ renders the perturbation $\epsilon(-e'|x| + e't)$ in the minimization problem (12) a minimum.

We state now a successive linearization algorithm for solving (12) that terminates at a point satisfying a necessary optimality condition. Before doing that, we define a supergradient of the concave function $-e'|x|$ as:

$$\partial(-e'|x|)_i = -\operatorname{sign}(x_i) = \begin{cases} 1 & \text{if } x_i < 0 \\ 0 & \text{if } x_i = 0 \\ -1 & \text{if } x_i > 0 \end{cases}, \qquad i = 1, \ldots, n. \tag{15}$$

**Algorithm 1 (Successive Linearization Algorithm (SLA1) for (1))** Let $z = (x; t; s)$. Denote the feasible region of (12) by $Z$ and its objective function by $f(z)$. Start with a random $z^0 \in R^{n+n+m}$. From $z^i$ determine $z^{i+1}$ as follows:

$$z^{i+1} \in \arg\operatorname{vertex}\min_{z \in Z}\ \partial f(z^i)'(z - z^i). \tag{16}$$

Stop if $z^i \in Z$ and $\partial f(z^i)'(z^{i+1} - z^i) = 0$.

We now state a finite termination result for the SLA1 algorithm.
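Before that, note that each SLA1 step (16) is a single linear program. A minimal sketch of the iteration follows; it is not from the paper, assumes NumPy and SciPy's `linprog` as the LP solver (the paper used CPLEX from MATLAB), and generates a small solvable instance by setting $b = Ax^* + B|x^*|$:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, eps = 12, 4, 0.1

# Random solvable AV equation (1): b = A x* + B|x*|.
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + B @ np.abs(x_true)

# Feasible region Z of (12) for v = [x; t; s], written as Aub v <= bub:
#   A x + B t - s <= b,  -(A x + B t) - s <= -b,  x - t <= 0,  -x - t <= 0.
In, Im, Z0 = np.eye(n), np.eye(m), np.zeros((n, m))
Aub = np.block([[A, B, -Im], [-A, -B, -Im], [In, -In, Z0], [-In, -In, Z0]])
bub = np.concatenate([b, -b, np.zeros(2 * n)])
bounds = [(None, None)] * (2 * n + m)

def f(v):  # concave objective of (12)
    return eps * (v[n:2 * n].sum() - np.abs(v[:n]).sum()) + v[2 * n:].sum()

# Feasible start: t = |x|, s = |A x + B t - b|.
x0 = rng.standard_normal(n)
t0 = np.abs(x0)
v = np.concatenate([x0, t0, np.abs(A @ x0 + B @ t0 - b)])
vals = [f(v)]

for _ in range(50):
    # Supergradient of f at v; the x-part comes from (15).
    grad = np.concatenate([-eps * np.sign(v[:n]), eps * np.ones(n), np.ones(m)])
    lp = linprog(grad, A_ub=Aub, b_ub=bub, bounds=bounds, method="highs")
    if grad @ (lp.x - v) > -1e-9:  # stopping rule of Algorithm 1
        break
    v = lp.x
    vals.append(f(v))

x = v[:n]
print("final objective:", vals[-1])
print("AVE 1-norm residual:", np.linalg.norm(A @ x + B @ np.abs(x) - b, 1))
```

By concavity of $f$, each accepted step cannot increase the objective, so `vals` is nonincreasing; termination at a stationary point does not by itself guarantee a zero residual, although the paper reports that random solvable instances are almost always solved.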
**Proposition 4 (SLA1 Finite Termination)** The SLA1 Algorithm 1 generates a finite sequence of feasible iterates $\{z^1, z^2, \ldots, z^{\bar i}\}$ of strictly decreasing objective function values: $f(z^1) > f(z^2) > \ldots > f(z^{\bar i})$, such that $z^{\bar i}$ satisfies the minimum principle necessary optimality condition:

$$\partial f(z^{\bar i})'(z - z^{\bar i}) \ge 0, \qquad \forall z \in Z. \tag{17}$$

**Proof:** See [7, Theorem 3].

Preliminary numerical tests on randomly generated solvable AV equations were encouraging; they are depicted in Table 1 for 100 runs of Algorithm 1 for cases with $m \ne n$. All runs were carried out on a Pentium 4 3GHz machine with 1GB RAM. The CPLEX linear programming package [4] was used within a MATLAB [9] code.

Since Algorithm 1 could not effectively solve the case $m = n$, we developed a simplified concave minimization algorithm for that case when $B$ is invertible. This case is still NP-hard and covers the linear complementarity problem, as shown in Proposition 1. The AV equation (1) degenerates to the following when $m = n$ and $B^{-1}$ exists:

$$|x| = Gx + g, \tag{18}$$

where $G = -B^{-1}A$ and $g = B^{-1}b$. We note that since:

$$|x| \le Gx + g \iff -Gx - g \le x \le Gx + g, \tag{19}$$

we immediately have that the following concave minimization problem will solve (18) if it has a solution:

$$\min_{x \in R^n}\ e'Gx - e'|x| \quad \text{such that} \quad -(I+G)x \le g, \quad (I-G)x \le g. \tag{20}$$

The successive linearization Algorithm 1 and its finite termination result are applicable to (20), and lead to the following simple linear program at iteration $i$:

$$\min_{x \in R^n}\ \big(e'G - \operatorname{sign}(x^i)'\big)(x - x^i) \quad \text{such that} \quad -(I+G)x \le g, \quad (I-G)x \le g. \tag{21}$$

We shall assume that $\{x \mid Gx + g \ge 0\} \ne \emptyset$, which is the case if (18) is solvable. We also note that the concave objective function of (20) is bounded below by $-e'g$ on the feasible region of (20), on account of (19). Hence the supporting plane of that function at $x^i$, namely $e'Gx^i - e'|x^i| + (e'G - \operatorname{sign}(x^i)')(x - x^i)$, is also bounded below by $-e'g$ on the feasible region of (21). Consequently, the objective function of (21) is bounded below by $-e'g - e'Gx^i + e'|x^i|$ on its feasible region, and thus (21) is solvable. We state now the algorithm for solving (18).

**Algorithm 2 (Successive Linearization Algorithm (SLA2) for (18))** Let $\{x \mid Gx + g \ge 0\} \ne \emptyset$. Start with a random $x^0 \in R^n$.
From $x^i$ determine $x^{i+1}$ by the solvable linear program (21). Stop if $x^i$ is feasible and $(e'G - \operatorname{sign}(x^i)')(x^{i+1} - x^i) = 0$.
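In the same hedged spirit as before, Algorithm 2 can be sketched with one LP per iteration. This is not the paper's code; it assumes SciPy's `linprog`, and the instance is generated to be solvable by setting $g = |x^*| - Gx^*$:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n = 6

# Random solvable instance of (18): pick x*, a random G, set g = |x*| - G x*.
G = 0.5 * rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
g = np.abs(x_true) - G @ x_true

# Feasible region of (20)-(21): -(I+G)x <= g, (I-G)x <= g, i.e. |x| <= Gx + g.
I = np.eye(n)
Aub = np.vstack([-(I + G), I - G])
bub = np.concatenate([g, g])
bounds = [(None, None)] * n

x = rng.standard_normal(n)
for _ in range(50):
    # LP (21): minimize (e'G - sign(x^i)')(x - x^i) over the region above.
    c = G.T @ np.ones(n) - np.sign(x)
    lp = linprog(c, A_ub=Aub, b_ub=bub, bounds=bounds, method="highs")
    feasible = np.all(Aub @ x <= bub + 1e-9)
    if feasible and c @ (lp.x - x) > -1e-9:  # stopping rule of Algorithm 2
        break
    x = lp.x

print("residual of (18):", np.linalg.norm(np.abs(x) - G @ x - g, np.inf))
```

Each LP is bounded below (the paper's $-e'g$ argument), so `linprog` always returns a vertex; the final iterate is feasible, i.e. $|x| \le Gx + g$, and a zero residual means (18) is solved, which the paper reports happens in almost all random trials.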

**Table 1.** Results of 100 runs of Algorithm 1 for the AV equation (1). Each case is the average of ten random runs for that case. The error is $\|A\bar x + B|\bar x| - b\|$, and $\|\bar t - |\bar x|\|$ refers to problem (12). Columns: $m$, $n$, No. Iter. $\bar i$, Time (sec.), Error, $\|\bar t - |\bar x|\|$. *(Numerical entries are not recoverable from this transcription.)*

**Table 2.** Results of 100 runs of Algorithm 2 for the AV equation (18). Each case is the average of ten random runs for that case. The error is $\|Gx - |x| + g\|$; NNZ(Error) is the average number of nonzero components of $Gx - |x| + g$ over ten runs. Columns: $n$, No. Iter. $\bar i$, Time (sec.), Error, NNZ(Error). *(Numerical entries are not recoverable from this transcription.)*

Table 2 gives results for Algorithm 2 for a hundred randomly generated solvable problems (18) with $n$ between 50 and 1000. Of these 100 problems, only 5 were not solved by this algorithm. We turn now to duality and optimality results for AV optimization.

## 4. Duality and Optimality for Absolute Value Programs

We derive in this section a weak duality theorem and sufficient optimality conditions for AV programs. These results are somewhat curious in the sense that they hold for nonconvex problems. However, we note that we are missing strong duality and necessary optimality conditions for these AV programs. We begin by defining our primal and dual AV programs as follows.

$$\text{Primal AVP:} \quad \min_{x \in X}\ c'x + d'|x|, \qquad X = \{x \in R^n \mid Ax + B|x| = b,\ Hx + K|x| \ge p\}. \tag{22}$$

$$\text{Dual AVP:} \quad \max_{(u,v) \in U}\ b'u + p'v, \qquad U = \{(u,v) \in R^{m+k} \mid |A'u + H'v - c| + |B'u + K'v| \le d,\ v \ge 0\}. \tag{23}$$

**Proposition 5 (Weak Duality Theorem)**

$$x \in X,\ (u,v) \in U \implies c'x + d'|x| \ge b'u + p'v. \tag{24}$$

**Proof:**

$$d'|x| \ge |x|'|A'u + H'v - c| + |x|'(B'u + K'v) \ge x'(A'u + H'v - c) + |x|'(B'u + K'v) = u'Ax + v'Hx - c'x + u'B|x| + v'K|x| \ge b'u + p'v - c'x.$$

Our next result gives sufficient conditions for primal and dual optimality.

**Proposition 6 (Sufficient Optimality Conditions)** Let $\bar x$ be feasible for the primal AVP (22) and $(\bar u, \bar v)$ be feasible for the dual AVP (23), with equal primal and dual objective functions, that is:

$$c'\bar x + d'|\bar x| = b'\bar u + p'\bar v. \tag{25}$$

Then $\bar x$ is primal optimal and $(\bar u, \bar v)$ is dual optimal.

**Proof:** Let $x \in X$. Then:

$$\begin{aligned} c'x + d'|x| - c'\bar x - d'|\bar x| &\ge c'x + |x|'|A'\bar u + H'\bar v - c| + |x|'(B'\bar u + K'\bar v) - c'\bar x - d'|\bar x| \\ &\ge c'x + \bar u'Ax + \bar v'Hx - c'x + \bar u'B|x| + \bar v'K|x| - c'\bar x - d'|\bar x| \\ &\ge b'\bar u + p'\bar v - c'\bar x - d'|\bar x| = 0. \end{aligned}$$

Hence $\bar x$ is primal optimal. Now let $(u,v) \in U$. Then:

$$\begin{aligned} b'\bar u + p'\bar v - b'u - p'v &\ge c'\bar x + d'|\bar x| - u'A\bar x - u'B|\bar x| - v'H\bar x - v'K|\bar x| \\ &\ge c'\bar x + |\bar x|'|A'u + H'v - c| + |\bar x|'(B'u + K'v) - u'A\bar x - u'B|\bar x| - v'H\bar x - v'K|\bar x| \\ &\ge c'\bar x + u'A\bar x + v'H\bar x - c'\bar x + u'B|\bar x| + v'K|\bar x| - u'A\bar x - u'B|\bar x| - v'H\bar x - v'K|\bar x| = 0. \end{aligned}$$

Hence $(\bar u, \bar v)$ is dual optimal.

We turn now to our final results: theorems of the alternative.

## 5. Theorems of the Alternative for Absolute Value Equations and Inequalities

We shall give two propositions of the alternative here. The first one applies under no assumptions, and the second under one assumption.

**Proposition 7 (First AV Theorem of the Alternative)** Exactly one of the two following alternatives must hold:

I. $Ax + Bt = b$, $Hx + Kt \ge p$, $t \ge |x|$ has a solution $(x,t) \in R^{n+n}$.

II. $|A'u + H'v| + B'u + K'v \le 0$, $b'u + p'v > 0$, $v \ge 0$ has a solution $(u,v) \in R^{m+k}$.

**Proof:** Let $\bar{\text{I}}$ and $\bar{\text{II}}$ denote the negations of I and II respectively.

(I $\implies$ $\bar{\text{II}}$) If both I and II hold, then we have the contradiction:

$$0 \ge t'|A'u + H'v| + t'(B'u + K'v) \ge x'(A'u + H'v) + t'(B'u + K'v) = u'Ax + v'Hx + u'Bt + v'Kt \ge b'u + p'v > 0.$$

($\bar{\text{II}} \implies$ I) If II does not hold, then:

$$0 = \max_{u,v}\ \{b'u + p'v \mid |A'u + H'v| + B'u + K'v \le 0,\ v \ge 0\}.$$

This implies that:

$$0 = \max_{u,v,s}\ \{b'u + p'v \mid s + B'u + K'v \le 0,\ |A'u + H'v| \le s,\ v \ge 0\}.$$

This implication is true because when $(u,v,s)$ is feasible for the last problem, then $(u,v)$ is feasible for the previous problem. By linear programming duality, after replacing $|A'u + H'v| \le s$ by the equivalent constraint $-s \le A'u + H'v \le s$, we have that:

$$0 = \min_{t,y,z}\ \{0 \mid t - y - z = 0,\ Bt + A(y-z) = b,\ Kt + H(y-z) \ge p,\ (t,y,z) \ge 0\}.$$

Setting $x = y - z$ and noting that $t = y + z \ge \pm(y - z) = \pm x$, we have that feasibility of the last problem implies that the following system has a solution:

$$Ax + Bt = b, \qquad Hx + Kt \ge p, \qquad t \ge |x|,$$

which is exactly I.

We state and prove now our last proposition.

**Proposition 8 (Second AV Theorem of the Alternative)** Exactly one of the two following alternatives must hold:

I. $Ax + B|x| = b$, $Hx + K|x| \ge p$ has a solution $x \in R^n$,

II. $|A'u + H'v| + B'u + K'v \le 0$, $b'u + p'v > 0$, $v \ge 0$ has a solution $(u,v) \in R^{m+k}$,

under the assumption that:

$$0 = \max_{x,t}\ \{e'|x| - e't \mid Ax + Bt = b,\ Hx + Kt \ge p,\ t \ge |x|\}. \tag{26}$$

**Note** The assumption (26) is needed only in establishing that $\bar{\text{II}} \implies$ I.

**Proof:** (I $\implies$ $\bar{\text{II}}$) If both I and II hold, then we have the contradiction:

$$0 < b'u + p'v \le u'Ax + u'B|x| + v'Hx + v'K|x| \le |x|'\big(|A'u + H'v| + B'u + K'v\big) \le 0.$$

($\bar{\text{II}} \implies$ I) By Proposition 7 above, we have that $\bar{\text{II}}$ implies I of Proposition 7, which in turn implies I of this proposition under the assumption (26).

We conclude with some remarks and some possible future extensions.

## 6. Conclusion and Outlook

We have investigated equations, inequalities and mathematical programs involving absolute values of variables. It is very interesting that these inherently nonconvex and difficult problems are amenable to duality results and theorems of the alternative that are typically associated with linear programs and linear inequalities. Furthermore, successive linearization algorithms appear to be effective in solving absolute value equations for which $m$ and $n$ are substantially different, or for which $m = n$. Even though we give sufficient optimality conditions for our nonconvex absolute value programs, we are missing necessary optimality conditions. This is somewhat in the spirit of the sufficient saddlepoint optimality condition of mathematical programming [5, Theorem 5.3.1], which holds without any convexity. However, necessity of a saddlepoint condition requires convexity as well as a constraint qualification [5, Theorem 5.4.7], neither of which we have here.

An interesting topic of future research might be one that deals with absolute value programs for which necessary optimality conditions can be obtained, as well as strong duality results and globally convergent algorithms. Also of interest would be further classes of problems that can be cast as absolute value equations, inequalities or optimization problems and solved by the algorithms proposed here or their variants.

**Acknowledgments**

The research described in this Data Mining Institute Report 05-04, September 2005, was supported by National Science Foundation Grants CCR and IIS, the Microsoft Corporation and ExxonMobil.

**References**

1. S.-J. Chung and K. G. Murty. Polynomially bounded ellipsoid algorithms for convex quadratic programming. In O. L. Mangasarian, R. R. Meyer, and S. M. Robinson, editors, *Nonlinear Programming 4*. Academic Press, New York.
2. R. W. Cottle and G. Dantzig. Complementary pivot theory of mathematical programming. *Linear Algebra and its Applications*, 1.
3. R. W. Cottle, J.-S. Pang, and R. E. Stone. *The Linear Complementarity Problem*. Academic Press, New York.
4. CPLEX Optimization Inc., Incline Village, Nevada. *Using the CPLEX(TM) Linear Optimizer and CPLEX(TM) Mixed Integer Optimizer (Version 2.0)*.
5. O. L. Mangasarian. *Nonlinear Programming*. McGraw-Hill, New York. Reprint: SIAM Classics in Applied Mathematics 10, 1994, Philadelphia.
6. O. L. Mangasarian. The linear complementarity problem as a separable bilinear program. *Journal of Global Optimization*, 6.
7. O. L. Mangasarian. Solution of general linear complementarity problems via nondifferentiable concave minimization. *Acta Mathematica Vietnamica*, 22(1). ftp://ftp.cs.wisc.edu/math-prog/tech-reports/96-10.ps.
8. O. L. Mangasarian and R. R. Meyer. Nonlinear perturbation of linear programs. *SIAM Journal on Control and Optimization*, 17(6), November.
9. MATLAB. *User's Guide*. The MathWorks, Inc., Natick, MA 01760.
10. R. T. Rockafellar. *Convex Analysis*. Princeton University Press, Princeton, New Jersey.
11. J. Rohn. Systems of linear interval equations. *Linear Algebra and Its Applications*, 126:39–78. rohn/publist/47.doc.
12. J. Rohn. A theorem of the alternatives for the equation $Ax + B|x| = b$. *Linear and Multilinear Algebra*, 52(6). rohn/publist/alternatives.pdf.


### FACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z

FACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z DANIEL BIRMAJER, JUAN B GIL, AND MICHAEL WEINER Abstract We consider polynomials with integer coefficients and discuss their factorization

### Similarity and Diagonalization. Similar Matrices

MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

### Section Notes 4. Duality, Sensitivity, Dual Simplex, Complementary Slackness. Applied Math 121. Week of February 28, 2011

Section Notes 4 Duality, Sensitivity, Dual Simplex, Complementary Slackness Applied Math 121 Week of February 28, 2011 Goals for the week understand the relationship between primal and dual programs. know

### MATH 304 Linear Algebra Lecture 4: Matrix multiplication. Diagonal matrices. Inverse matrix.

MATH 304 Linear Algebra Lecture 4: Matrix multiplication. Diagonal matrices. Inverse matrix. Matrices Definition. An m-by-n matrix is a rectangular array of numbers that has m rows and n columns: a 11

### Matrix Norms. Tom Lyche. September 28, Centre of Mathematics for Applications, Department of Informatics, University of Oslo

Matrix Norms Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 28, 2009 Matrix Norms We consider matrix norms on (C m,n, C). All results holds for

### Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

### MATH 304 Linear Algebra Lecture 8: Inverse matrix (continued). Elementary matrices. Transpose of a matrix.

MATH 304 Linear Algebra Lecture 8: Inverse matrix (continued). Elementary matrices. Transpose of a matrix. Inverse matrix Definition. Let A be an n n matrix. The inverse of A is an n n matrix, denoted

### v 1. v n R n we have for each 1 j n that v j v n max 1 j n v j. i=1

1. Limits and Continuity It is often the case that a non-linear function of n-variables x = (x 1,..., x n ) is not really defined on all of R n. For instance f(x 1, x 2 ) = x 1x 2 is not defined when x

### Introduction and message of the book

1 Introduction and message of the book 1.1 Why polynomial optimization? Consider the global optimization problem: P : for some feasible set f := inf x { f(x) : x K } (1.1) K := { x R n : g j (x) 0, j =

### Applied Algorithm Design Lecture 5

Applied Algorithm Design Lecture 5 Pietro Michiardi Eurecom Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 1 / 86 Approximation Algorithms Pietro Michiardi (Eurecom) Applied Algorithm Design

### Nonlinear Programming Methods.S2 Quadratic Programming

Nonlinear Programming Methods.S2 Quadratic Programming Operations Research Models and Methods Paul A. Jensen and Jonathan F. Bard A linearly constrained optimization problem with a quadratic objective

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

### Linear Algebra Test 2 Review by JC McNamara

Linear Algebra Test 2 Review by JC McNamara 2.3 Properties of determinants: det(a T ) = det(a) det(ka) = k n det(a) det(a + B) det(a) + det(b) (In some cases this is true but not always) A is invertible

### 4.6 Linear Programming duality

4.6 Linear Programming duality To any minimization (maximization) LP we can associate a closely related maximization (minimization) LP. Different spaces and objective functions but in general same optimal

### Numerical Linear Algebra Chap. 4: Perturbation and Regularisation

Numerical Linear Algebra Chap. 4: Perturbation and Regularisation Heinrich Voss voss@tu-harburg.de Hamburg University of Technology Institute of Numerical Simulation TUHH Heinrich Voss Numerical Linear

### Factorization Theorems

Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization

### The Power Method for Eigenvalues and Eigenvectors

Numerical Analysis Massoud Malek The Power Method for Eigenvalues and Eigenvectors The spectrum of a square matrix A, denoted by σ(a) is the set of all eigenvalues of A. The spectral radius of A, denoted

### ON THE FIBONACCI NUMBERS

ON THE FIBONACCI NUMBERS Prepared by Kei Nakamura The Fibonacci numbers are terms of the sequence defined in a quite simple recursive fashion. However, despite its simplicity, they have some curious properties

### MATH 240 Fall, Chapter 1: Linear Equations and Matrices

MATH 240 Fall, 2007 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 9th Ed. written by Prof. J. Beachy Sections 1.1 1.5, 2.1 2.3, 4.2 4.9, 3.1 3.5, 5.3 5.5, 6.1 6.3, 6.5, 7.1 7.3 DEFINITIONS

### Vector and Matrix Norms

Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

### Approximation Algorithms

Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NP-Completeness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms

### MATH 2030: SYSTEMS OF LINEAR EQUATIONS. ax + by + cz = d. )z = e. while these equations are not linear: xy z = 2, x x = 0,

MATH 23: SYSTEMS OF LINEAR EQUATIONS Systems of Linear Equations In the plane R 2 the general form of the equation of a line is ax + by = c and that the general equation of a plane in R 3 will be we call

### Inner products on R n, and more

Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +

### Using the Simplex Method in Mixed Integer Linear Programming

Integer Using the Simplex Method in Mixed Integer UTFSM Nancy, 17 december 2015 Using the Simplex Method in Mixed Integer Outline Mathematical Programming Integer 1 Mathematical Programming Optimisation

### The Canonical and Dictionary Forms of Linear Programs

The Canonical and Dictionary Forms of Linear Programs Recall the general optimization problem: Minimize f(x) subect to x D (1) The real valued function f is called the obective function, and D the constraint

### Good luck, veel succes!

Final exam Advanced Linear Programming, May 7, 13.00-16.00 Switch off your mobile phone, PDA and any other mobile device and put it far away. No books or other reading materials are allowed. This exam

### 1. R In this and the next section we are going to study the properties of sequences of real numbers.

+a 1. R In this and the next section we are going to study the properties of sequences of real numbers. Definition 1.1. (Sequence) A sequence is a function with domain N. Example 1.2. A sequence of real

### Date: April 12, 2001. Contents

2 Lagrange Multipliers Date: April 12, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 12 2.3. Informative Lagrange Multipliers...........

### Prime Numbers and Irreducible Polynomials

Prime Numbers and Irreducible Polynomials M. Ram Murty The similarity between prime numbers and irreducible polynomials has been a dominant theme in the development of number theory and algebraic geometry.

### MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

### Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

### Notes on Linear Recurrence Sequences

Notes on Linear Recurrence Sequences April 8, 005 As far as preparing for the final exam, I only hold you responsible for knowing sections,,, 6 and 7 Definitions and Basic Examples An example of a linear

### Notes on Determinant

ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

### Continuity of the Perron Root

Linear and Multilinear Algebra http://dx.doi.org/10.1080/03081087.2014.934233 ArXiv: 1407.7564 (http://arxiv.org/abs/1407.7564) Continuity of the Perron Root Carl D. Meyer Department of Mathematics, North

### Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Pearson Education, Inc.

2 Matrix Algebra 2.3 CHARACTERIZATIONS OF INVERTIBLE MATRICES Theorem 8: Let A be a square matrix. Then the following statements are equivalent. That is, for a given A, the statements are either all true

### Continuous functions

CHAPTER 3 Continuous functions In this chapter I will always denote a non-empty subset of R. This includes more general sets, but the most common examples of I are intervals. 3.1. The ǫ-δ definition of

### 1111: Linear Algebra I

1111: Linear Algebra I Dr. Vladimir Dotsenko (Vlad) Lecture 3 Dr. Vladimir Dotsenko (Vlad) 1111: Linear Algebra I Lecture 3 1 / 12 Vector product and volumes Theorem. For three 3D vectors u, v, and w,

### Linear Dependence Tests

Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

### Diagonalisation. Chapter 3. Introduction. Eigenvalues and eigenvectors. Reading. Definitions

Chapter 3 Diagonalisation Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation fo symmetric matrices Reading As in the previous chapter, there is no specific essential

### THE FUNDAMENTAL THEOREM OF ALGEBRA VIA LINEAR ALGEBRA

THE FUNDAMENTAL THEOREM OF ALGEBRA VIA LINEAR ALGEBRA KEITH CONRAD Our goal is to use abstract linear algebra to prove the following result, which is called the fundamental theorem of algebra. Theorem

### Computational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science

Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Definition: A N N matrix A has an eigenvector x (non-zero) with

### Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

### Linear Programming, Lagrange Multipliers, and Duality Geoff Gordon

lp.nb 1 Linear Programming, Lagrange Multipliers, and Duality Geoff Gordon lp.nb 2 Overview This is a tutorial about some interesting math and geometry connected with constrained optimization. It is not

### 10.3 POWER METHOD FOR APPROXIMATING EIGENVALUES

55 CHAPTER NUMERICAL METHODS. POWER METHOD FOR APPROXIMATING EIGENVALUES In Chapter 7 we saw that the eigenvalues of an n n matrix A are obtained by solving its characteristic equation n c n n c n n...

Quadratic Functions, Optimization, and Quadratic Forms Robert M. Freund February, 2004 2004 Massachusetts Institute of echnology. 1 2 1 Quadratic Optimization A quadratic optimization problem is an optimization

### Week 5 Integral Polyhedra

Week 5 Integral Polyhedra We have seen some examples 1 of linear programming formulation that are integral, meaning that every basic feasible solution is an integral vector. This week we develop a theory

### 4.9 Markov matrices. DEFINITION 4.3 A real n n matrix A = [a ij ] is called a Markov matrix, or row stochastic matrix if. (i) a ij 0 for 1 i, j n;

49 Markov matrices DEFINITION 43 A real n n matrix A = [a ij ] is called a Markov matrix, or row stochastic matrix if (i) a ij 0 for 1 i, j n; (ii) a ij = 1 for 1 i n Remark: (ii) is equivalent to AJ n

### These axioms must hold for all vectors ū, v, and w in V and all scalars c and d.

DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms

### Sec 4.1 Vector Spaces and Subspaces

Sec 4. Vector Spaces and Subspaces Motivation Let S be the set of all solutions to the differential equation y + y =. Let T be the set of all 2 3 matrices with real entries. These two sets share many common