Constrained Optimization Using Lagrange Multipliers, CEE 201L. Uncertainty, Design, and Optimization


Department of Civil and Environmental Engineering
Duke University
Henri P. Gavin and Jeffrey T. Scruggs
Spring 2016

In optimal design problems, values for a set of $n$ design variables, $(x_1, x_2, \ldots, x_n)$, are to be found that minimize a scalar-valued objective function of the design variables, such that a set of $m$ inequality constraints is satisfied. Constrained optimization problems are generally expressed as

$$\min_{x_1, x_2, \ldots, x_n} J = f(x_1, x_2, \ldots, x_n) \quad \text{such that} \quad \begin{array}{c} g_1(x_1, x_2, \ldots, x_n) \le 0 \\ g_2(x_1, x_2, \ldots, x_n) \le 0 \\ \vdots \\ g_m(x_1, x_2, \ldots, x_n) \le 0 \end{array} \tag{1}$$

If the objective function is quadratic in the design variables and the constraint equations are linear in the design variables, the optimization problem usually has a unique solution.

Consider the simplest constrained minimization problem:

$$\min_x \; \tfrac{1}{2} k x^2 \quad \text{such that} \quad x \ge b. \tag{2}$$

This problem has a single design variable, the objective function is quadratic ($J = \frac{1}{2}kx^2$), there is a single constraint inequality, and it is linear in $x$ ($g(x) = b - x \le 0$). If $b > 0$, the inequality constrains the optimum, and the optimal solution is $x^* = b$. If $b \le 0$, the inequality does not constrain the optimum, and the optimal solution is $x^* = 0$. Not all optimization problems are so easy; most optimization problems require more advanced methods. The method of Lagrange multipliers is one such method, and will be applied to this simple problem.

Lagrange multiplier methods involve the modification of the objective function through the addition of terms that describe the constraints. The objective function $J = f(x)$ is augmented by the constraint equations through a set of non-negative multiplicative Lagrange multipliers, $\lambda_j \ge 0$. The augmented objective function, $J_A(x, \lambda)$, is a function of the $n$ design variables and the $m$ Lagrange multipliers,

$$J_A(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m) = f(x_1, \ldots, x_n) + \sum_{j=1}^{m} \lambda_j \, g_j(x_1, \ldots, x_n). \tag{3}$$

For the problem of equation (2), $n = 1$ and $m = 1$, so

$$J_A(x, \lambda) = \tfrac{1}{2} k x^2 + \lambda \, (b - x). \tag{4}$$

The Lagrange multiplier, $\lambda$, serves the purpose of modifying (augmenting) the objective function from one quadratic ($\frac{1}{2}kx^2$) to another quadratic ($\frac{1}{2}kx^2 - \lambda x + \lambda b$) so that the minimum of the modified quadratic satisfies the constraint ($x \ge b$).
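The saddle-point structure of equation (4) can be explored numerically. The following is a minimal sketch (assuming $k = 1$ and $b = 1$, the values used in Case 1 below): for each trial value of $\lambda$, minimize $J_A$ over $x$, then maximize the result over $\lambda$.

```python
import numpy as np

k, b = 1.0, 1.0  # spring stiffness and constraint level (Case 1 below)

def J_A(x, lam):
    """Augmented objective of equation (4): 0.5*k*x**2 + lam*(b - x)."""
    return 0.5 * k * x**2 + lam * (b - x)

# For a fixed lam, the inner minimization over x gives x = lam/k.  Substituting
# back in gives the concave function d(lam) = lam*b - lam**2/(2*k).
lams = np.linspace(0.0, 2.0, 201)
d = lams * b - lams**2 / (2.0 * k)

lam_star = lams[np.argmax(d)]   # outer maximization over lam >= 0
x_star = lam_star / k
print(x_star, lam_star)         # 1.0 1.0: the saddle point of J_A
print(J_A(x_star, lam_star))    # constrained optimal cost 0.5*k*b**2 = 0.5
```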

Case 1: b = 1

If $b = 1$ then the minimum of $\frac{1}{2}kx^2$ is constrained by the inequality $x \ge b$, and the optimal value of $\lambda$ should minimize $J_A(x, \lambda)$ at $x = b$. Figure 1(a) plots $J_A(x, \lambda)$ for a few non-negative values of $\lambda$ and Figure 1(b) plots contours of $J_A(x, \lambda)$.

Figure 1. $J_A(x, \lambda) = \frac{1}{2}kx^2 - \lambda x + \lambda b$ for $b = 1$ and $k = 1$; $(x^*, \lambda^*) = (1, 1)$. (a): the augmented cost $J_A$ versus the design variable $x$ for several values of the Lagrange multiplier $\lambda$; (b): contours of $J_A$ in the plane of the design variable $x$ and the Lagrange multiplier $\lambda$.

Figure 1 shows that:

- $J_A(x, \lambda)$ is independent of $\lambda$ at $x = b$,
- $J_A(x, \lambda)$ is minimized at $x^* = b$ for $\lambda^* = kb$,
- the surface $J_A(x, \lambda)$ is a saddle shape,
- the point $(x^*, \lambda^*) = (1, 1)$ is a saddle point,
- $J_A(x^*, \lambda) \le J_A(x^*, \lambda^*) \le J_A(x, \lambda^*)$,
- $\min_x J_A(x, \lambda^*) = \max_\lambda J_A(x^*, \lambda) = J_A(x^*, \lambda^*)$.

Saddle points have no slope:

$$\left. \frac{\partial J_A(x, \lambda)}{\partial x} \right|_{x = x^*,\, \lambda = \lambda^*} = 0 \tag{5a}$$

$$\left. \frac{\partial J_A(x, \lambda)}{\partial \lambda} \right|_{x = x^*,\, \lambda = \lambda^*} = 0 \tag{5b}$$

For this problem,

$$\left. \frac{\partial J_A}{\partial x} \right|_{x^*, \lambda^*} = k x^* - \lambda^* = 0 \quad \Rightarrow \quad \lambda^* = k x^* \tag{6a}$$

$$\left. \frac{\partial J_A}{\partial \lambda} \right|_{x^*, \lambda^*} = -x^* + b = 0 \quad \Rightarrow \quad x^* = b \tag{6b}$$

This example has a physical interpretation. The objective function $J = \frac{1}{2}kx^2$ represents the potential energy in a spring. The minimum potential energy in a spring corresponds to a stretch of zero ($x^* = 0$). The constrained problem

$$\min_x \; \tfrac{1}{2} k x^2 \quad \text{such that} \quad x \ge 1$$

means "minimize the potential energy in the spring such that the stretch in the spring is greater than or equal to 1." The solution to this problem is to set the stretch in the spring equal to the smallest allowable value ($x^* = 1$). The force applied to the spring in order to achieve this stretch is $f = kx^*$. This force is the Lagrange multiplier for this problem ($\lambda^* = kx^*$). The Lagrange multiplier is the force required to enforce the constraint.
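Conditions (5) and (6) can also be verified symbolically. A minimal sketch using sympy, with $k$ and $b$ left symbolic:

```python
import sympy as sp

x, lam, k, b = sp.symbols('x lambda k b')
J_A = sp.Rational(1, 2) * k * x**2 + lam * (b - x)  # equation (4)

# Zero slope in both x and lambda at the saddle point: equations (5a), (5b)
stationary = sp.solve([sp.diff(J_A, x), sp.diff(J_A, lam)], [x, lam], dict=True)[0]
print(stationary)  # {x: b, lambda: b*k}, i.e. x* = b and lambda* = k x*
```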

Case 2: b = −1

If $b = -1$ then the minimum of $\frac{1}{2}kx^2$ is not constrained by the inequality $x \ge b$. The derivation above would give $x^* = b = -1$, with $\lambda^* = kx^* = -k$. The negative value of $\lambda^*$ indicates that the constraint does not affect the optimal solution, and $\lambda$ should therefore be set to zero. Setting $\lambda^* = 0$, $J_A(x, \lambda^*)$ is minimized at $x^* = 0$. Figure 2(a) plots $J_A(x, \lambda)$ for a few negative values of $\lambda$ and Figure 2(b) plots contours of $J_A(x, \lambda)$.

Figure 2. $J_A(x, \lambda) = \frac{1}{2}kx^2 - \lambda x + \lambda b$ for $b = -1$ and $k = 1$; $(x^*, \lambda^*) = (0, 0)$.

Figure 2 shows that:

- $J_A(x, \lambda)$ is independent of $\lambda$ at $x = b$,
- the saddle point of $J_A(x, \lambda)$ occurs at a negative value of $\lambda$, so $\partial J_A / \partial \lambda \le 0$ for any $\lambda \ge 0$.

These examples illustrate the properties of Lagrange-multiplier problems. If the inequality $g(x) \le 0$ constrains the minimum of $f(x)$, then the optimum point of the augmented objective $J_A(x, \lambda) = f(x) + \lambda g(x)$ is minimized with respect to $x$ and maximized with respect to $\lambda$. The optimization problem of equation (1) may be written in terms of the augmented objective function (equation (3)),

$$\max_{\lambda_1, \ldots, \lambda_m} \; \min_{x_1, \ldots, x_n} \; J_A(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m) \quad \text{such that} \quad \lambda_j \ge 0 \;\; \forall \, j. \tag{7}$$

The conditions

$$\left. \frac{\partial J_A}{\partial x_k} \right|_{x_i = x_i^*,\; \lambda_j = \lambda_j^*} = 0 \tag{8a}$$

$$\left. \frac{\partial J_A}{\partial \lambda_k} \right|_{x_i = x_i^*,\; \lambda_j = \lambda_j^*} = 0 \tag{8b}$$

define the optimal design variables $x_i^*$ such that $g_j(x_1^*, \ldots, x_n^*) \le 0$. If an inequality $g_j(x_1, \ldots, x_n) \le 0$ constrains the optimum point, the corresponding Lagrange multiplier, $\lambda_j^*$, is positive. Lagrange multipliers for non-active constraints would be negative if they were not constrained to be non-negative.

The value of the Lagrange multiplier at the optimum point is the sensitivity of the objective function $J = f(x)$ to changes in the constraint level $b$: changing the constraint level from $b$ to $b + \delta b$ changes the constrained objective function from $J^* = f(x^*)$ to $J^* = f(x^*) + \lambda^* \delta b$.

Figure 3. If increasing the constraint function $g(x)$ results in an improved objective (a reduced cost, $\delta f = \lambda \, \delta g < 0$), then $\lambda < 0$, and the constraint $g(x)$ should not bind the optimum point. If increasing the constraint results in an increased cost ($\delta f = \lambda \, \delta g > 0$), then $\lambda > 0$, and the constraint $g(x)$ must be enforced at the optimum point.
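This sensitivity interpretation is easy to check on the spring problem of equation (2). A small numerical sketch (assuming $k = 1$ and an active constraint, $b = 1$): perturb the constraint level and compare the change in optimal cost to $\lambda^*$.

```python
k, b, db = 1.0, 1.0, 1e-6

# With the constraint x >= b active (b > 0): x* = b, lambda* = k*b, and the
# constrained optimal cost is J*(b) = 0.5*k*b**2.
J_star = lambda bb: 0.5 * k * bb**2
lam_star = k * b

dJ_db = (J_star(b + db) - J_star(b)) / db  # finite-difference sensitivity dJ*/db
print(dJ_db, lam_star)                     # both approximately 1.0
```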

Multivariable Quadratic Programming

Quadratic optimization problems with a single design variable and a single linear inequality constraint are easy enough. In problems with many design variables ($n \gg 1$) and many inequality constraints ($m \gg 1$), determining which inequality constraints are enforced at the optimum point can be difficult. Numerical methods used in solving such problems involve iterative trial-and-error approaches to find the set of active inequality constraints.

Consider a second example with $n = 2$ and $m = 2$,

$$\min_{x_1, x_2} \; f(x_1, x_2) \quad \text{such that} \quad g_1(x_1, x_2) \le 0 \;\; \text{and} \;\; g_2(x_1, x_2) \le 0, \tag{9}$$

in which the objective function is again quadratic in the design variables and both inequality constraints are linear in the design variables. Contours of the objective function and the two inequality constraints are plotted in Figure 4. The feasible region of these two inequality constraints lies to the left of the lines in the figure, labeled "$g_1$ ok" and "$g_2$ ok". This figure shows that the inequality $g_1(x_1, x_2) \le 0$ constrains the solution and the inequality $g_2(x_1, x_2) \le 0$ does not. This is visible in Figure 4 with $n = 2$, but for more complicated problems it may not be immediately clear which inequality constraints are active.

Using the method of Lagrange multipliers, the augmented objective function is

$$J_A(x_1, x_2, \lambda_1, \lambda_2) = f(x_1, x_2) + \lambda_1 \, g_1(x_1, x_2) + \lambda_2 \, g_2(x_1, x_2). \tag{10}$$

Unlike the first examples with $n = 1$ and $m = 1$, we cannot plot contours of $J_A(x_1, x_2, \lambda_1, \lambda_2)$, since this would be a plot in four-dimensional space. Nonetheless, the same optimality conditions hold at the solution of $\max_{\lambda_1, \lambda_2} \min_{x_1, x_2} J_A$:

$$\frac{\partial J_A}{\partial x_1} = \frac{\partial f}{\partial x_1} + \lambda_1 \frac{\partial g_1}{\partial x_1} + \lambda_2 \frac{\partial g_2}{\partial x_1} = 0 \tag{11a}$$

$$\frac{\partial J_A}{\partial x_2} = \frac{\partial f}{\partial x_2} + \lambda_1 \frac{\partial g_1}{\partial x_2} + \lambda_2 \frac{\partial g_2}{\partial x_2} = 0 \tag{11b}$$

$$\frac{\partial J_A}{\partial \lambda_1} = g_1(x_1, x_2) = 0 \tag{11c}$$

$$\frac{\partial J_A}{\partial \lambda_2} = g_2(x_1, x_2) = 0 \tag{11d}$$

If the objective function is quadratic in the design variables and the constraints are linear in the design variables, the optimality conditions are simply a set of linear equations in the design variables and the Lagrange multipliers; in this example, the optimality conditions (11) are four linear equations with four unknowns. In general we may not know which inequality constraints are active. If there are only a few constraint equations, it is not too hard to try all combinations of any number of constraints, fixing the Lagrange multipliers for the other inequality constraints equal to zero.
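With a quadratic objective $f(x) = \frac{1}{2}x^{\mathsf{T}}Hx + c^{\mathsf{T}}x$ and linear constraints $g(x) = Ax - b \le 0$ (the matrix notation introduced in the Primal and Dual section below), a trial set of active constraints turns conditions (11) into a single linear system. A minimal sketch, with placeholder values for $H$, $c$, $A$, and $b$ rather than the coefficients of equation (9):

```python
import numpy as np

# Illustrative QP data (placeholders, not the example of equation (9)):
# minimize 0.5 x^T H x + c^T x  subject to  A x - b <= 0
H = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])     # one linear constraint: x1 + x2 <= 1
b = np.array([1.0])

# Assume the constraint is active and solve the optimality (KKT) equations
#   [H  A^T] [x  ]   [-c]
#   [A   0 ] [lam] = [ b]
n, m = H.shape[0], A.shape[0]
K = np.block([[H, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]
print(x, lam)   # x = [0, 1], lam = [2] > 0: the constraint is truly active
```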

Let's try this now! First, let's find the unconstrained minimum by assuming that neither constraint $g_1(x_1, x_2) \le 0$ nor $g_2(x_1, x_2) \le 0$ is active, setting $\lambda_1 = 0$ and $\lambda_2 = 0$. Equations (11a) and (11b) then reduce to a $2 \times 2$ linear system for $x_1$ and $x_2$; its solution (12) is the unconstrained minimum shown in Figure 4 as a *. Plugging this solution into the constraint equations gives $g_1(x_1, x_2) = 0.93$ and a negative value for $g_2(x_1, x_2)$, so the unconstrained minimum is not feasible with respect to constraint $g_1$, since $g_1(x_1, x_2) > 0$.

Next, assuming both constraints $g_1(x_1, x_2)$ and $g_2(x_1, x_2)$ are active, optimal values for $x_1$, $x_2$, $\lambda_1$, and $\lambda_2$ are sought, and all four equations (11a)-(11d) must be solved together as a $4 \times 4$ linear system. Its solution (13) is the constrained minimum shown as a o at the intersection of the $g_1$ line with the $g_2$ line in Figure 4. Note that $\lambda_2 < 0$, indicating that constraint $g_2$ is not active; $g_2(x_1, x_2) = 0$ (ok). Enforcing the constraint $g_2$ needlessly compromises the optimality of this solution. So, while this solution is feasible (both $g_1$ and $g_2$ evaluate to zero), the solution could be improved by letting go of the $g_2$ constraint and moving along the $g_1$ line.

So, assuming only constraint $g_1$ is active and $g_2$ is not active, we set $\lambda_2 = 0$ and solve equations (11a), (11b), and (11c) as a $3 \times 3$ linear system for $x_1$, $x_2$, and $\lambda_1$. Its solution (14) is the constrained minimum shown as a o on the $g_1$ line in Figure 4. Note that $\lambda_1 > 0$, which indicates that this constraint is active. Plugging $x_1$ and $x_2$ into $g_2(x_1, x_2)$ gives a value of $-3.78$, so this constrained minimum is feasible with respect to both constraints. This is the solution we're looking for.

As a final check, assuming only constraint $g_2$ is active and $g_1$ is not active, we set $\lambda_1 = 0$ and solve equations (11a), (11b), and (11d) as a $3 \times 3$ linear system for $x_1$, $x_2$, and $\lambda_2$. Its solution (15) is the constrained minimum shown as a o on the $g_2$ line in Figure 4. Note that $\lambda_2 < 0$, which indicates that this constraint is not active, contradicting our assumption. Further, plugging $x_1$ and $x_2$ into $g_1(x_1, x_2)$ gives a positive value, so this constrained minimum is not feasible with respect to $g_1$, since $g_1(x_1, x_2) > 0$.
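This trial-and-error search over active sets can be automated. A sketch under the same placeholder data as the previous listing (again, not the coefficients of equation (9)): enumerate every combination of active constraints, solve the corresponding linear system, and keep the solution that is feasible with non-negative multipliers.

```python
import itertools
import numpy as np

def solve_qp_active_set(H, c, A, b, tol=1e-9):
    """Try every combination of active constraints for
    min 0.5 x^T H x + c^T x  such that  A x - b <= 0."""
    n, m = H.shape[0], A.shape[0]
    for k in range(m + 1):
        for act in itertools.combinations(range(m), k):
            Aa, ba = A[list(act)], b[list(act)]
            K = np.block([[H, Aa.T], [Aa, np.zeros((k, k))]])
            try:
                sol = np.linalg.solve(K, np.concatenate([-c, ba]))
            except np.linalg.LinAlgError:
                continue
            x, lam = sol[:n], sol[n:]
            feasible = np.all(A @ x - b <= tol)  # all constraints satisfied
            optimal = np.all(lam >= -tol)        # active multipliers nonnegative
            if feasible and optimal:
                return x, act, lam
    return None

H = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0], [1.0, -1.0]])  # x1 + x2 <= 1  and  x1 - x2 <= 0
b = np.array([1.0, 0.0])
print(solve_qp_active_set(H, c, A, b))   # x = [0, 1], active set (0,), lam = [2]
```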

Figure 4. Contours of the objective function and the constraint equations for the example of equation (9). (a): $J = f(x_1, x_2)$; (b): $J_A = f(x_1, x_2) + \lambda_1 \, g_1(x_1, x_2)$. Note that the contours of $J_A$ are shifted so that the minimum of $J_A$ lies at the optimal point along the $g_1$ line.

Primal and Dual

If it is possible to solve for the design variables in terms of the Lagrange multipliers, then the design variables can be eliminated from the problem, and the optimization is simply a maximization over the set of Lagrange multipliers. Consider the minimization of a quadratic objective function subject to a set of linear inequality constraints,

$$\min_{x_1, \ldots, x_n} \; \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} x_i H_{ij} x_j \quad \text{such that} \quad \begin{array}{c} \sum_{j=1}^{n} A_{1j} x_j \le b_1 \\ \sum_{j=1}^{n} A_{2j} x_j \le b_2 \\ \vdots \\ \sum_{j=1}^{n} A_{mj} x_j \le b_m \end{array} \tag{16}$$

where $H_{ij} = H_{ji}$ and $\sum_i \sum_j x_i H_{ij} x_j > 0$ for any set of design variables ($H$ is symmetric and positive definite). In matrix-vector notation, equation (16) is written

$$\min_x \; \tfrac{1}{2} x^{\mathsf{T}} H x \quad \text{such that} \quad Ax \le b. \tag{17}$$

The augmented optimization problem is

$$\max_{\lambda_1, \ldots, \lambda_m} \; \min_{x_1, \ldots, x_n} \; \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} x_i H_{ij} x_j + \sum_{k=1}^{m} \lambda_k \left( \sum_{j=1}^{n} A_{kj} x_j - b_k \right), \tag{18}$$

such that $\lambda_k \ge 0$. Or, in matrix-vector notation,

$$\max_{\lambda} \; \min_{x} \; \left[ \tfrac{1}{2} x^{\mathsf{T}} H x + \lambda^{\mathsf{T}} (Ax - b) \right] \quad \text{such that} \quad \lambda \ge 0. \tag{19}$$

For this kind of problem, the condition

$$\frac{\partial J_A}{\partial x} = 0^{\mathsf{T}} \tag{20}$$

results in $Hx + A^{\mathsf{T}} \lambda = 0$, from which $x = -H^{-1} A^{\mathsf{T}} \lambda$. Substituting this solution into $J_A$ results in

$$\max_{\lambda} \; \left[ -\tfrac{1}{2} \lambda^{\mathsf{T}} A H^{-1} A^{\mathsf{T}} \lambda - \lambda^{\mathsf{T}} b \right] \quad \text{such that} \quad \lambda \ge 0, \tag{21}$$

or

$$\min_{\lambda} \; \left[ \tfrac{1}{2} \lambda^{\mathsf{T}} A H^{-1} A^{\mathsf{T}} \lambda + b^{\mathsf{T}} \lambda \right] \quad \text{such that} \quad \lambda \ge 0, \tag{22}$$

which is independent of $x$. Equation (19) is called the primal quadratic programming problem and equation (22) is called the dual quadratic programming problem. The primal problem has $n + m$ unknown variables ($x$ and $\lambda$) whereas the dual problem has only $m$ unknown variables ($\lambda$) and is therefore easier to solve.
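The primal-dual relationship is easy to check numerically. A minimal sketch (placeholder $H$, $A$, $b$, with $b < 0$ so that the single constraint is active and its multiplier is positive): solve the stationarity condition of the dual (22) for $\lambda$, then recover the design variables from $x = -H^{-1}A^{\mathsf{T}}\lambda$.

```python
import numpy as np

# Primal (equation (17)): min 0.5 x^T H x  such that  A x <= b.
# Placeholder data; with b < 0 the single constraint must be active.
H = np.array([[2.0, 0.0], [0.0, 2.0]])
A = np.array([[1.0, 1.0]])
b = np.array([-1.0])

# Dual (equation (22)): min 0.5 lam^T (A H^-1 A^T) lam + b^T lam, lam >= 0.
# With the one constraint active, the dual stationarity condition is
#   (A H^-1 A^T) lam = -b
D = A @ np.linalg.solve(H, A.T)   # A H^-1 A^T (here a 1x1 matrix)
lam = np.linalg.solve(D, -b)
print(lam)                        # [1.0] > 0: the constraint is active

# Recover the design variables from x = -H^-1 A^T lam
x = -np.linalg.solve(H, A.T @ lam)
print(x, A @ x - b)               # x = [-0.5, -0.5], and A x - b = [0]
```

Note that the dual system here is $m \times m$ (1 unknown) while the primal optimality conditions are $(n + m) \times (n + m)$ (3 unknowns), illustrating why the dual is the easier problem when $m < n$.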
