
# An Analysis of Newton's Method for Equivalent Karush–Kuhn–Tucker Systems

Luís N. Vicente

January 25, 1999

**Abstract.** In this paper we analyze the application of Newton's method to the solution of systems of nonlinear equations arising from equivalent forms of the first-order Karush–Kuhn–Tucker necessary conditions for constrained optimization. The analysis is carried out by using an abstract model for the original system of nonlinear equations and for an equivalent form of this system obtained by a reformulation that appears often when dealing with first-order Karush–Kuhn–Tucker necessary conditions. The model is used to determine the quantities that bound the difference between the Newton steps corresponding to the two equivalent systems of equations. The model is sufficiently abstract to include the cases of equality-constrained optimization, minimization with simple bounds, and also a class of discretized optimal control problems.

**Keywords.** Nonlinear programming, Newton's method, first-order Karush–Kuhn–Tucker necessary conditions.

**AMS subject classifications.** 49M37, 90C06, 90C30

## 1 Introduction

A popular technique to solve constrained optimization problems is to apply Newton's method to the system of nonlinear equations arising from the first-order necessary conditions. For instance, the system

$$\nabla_x \ell(x,y) \;=\; \nabla f(x) + \nabla g(x)\, y \;=\; 0, \qquad g(x) \;=\; 0 \tag{1.1}$$

corresponds to the first-order necessary conditions of the equality-constrained optimization problem

$$\text{minimize } f(x) \quad \text{subject to} \quad g(x) = \begin{pmatrix} g_1(x) \\ \vdots \\ g_m(x) \end{pmatrix} = 0.$$

*Departamento de Matemática, Universidade de Coimbra, 3000 Coimbra, Portugal. lvicente@mat.uc.pt. Support for this author has been provided by Instituto de Telecomunicações, JNICT, and Praxis XXI 2/2.1/MAT/346/94.*

(We assume that $x \in \mathbb{R}^n$ with $n > m$ and that $f$ and $g$ are twice continuously differentiable functions with Lipschitz second derivatives. The Lagrangian function $\ell(x,y)$ is defined as $\ell(x,y) = f(x) + y^\top g(x)$.) The Newton step associated with the system (1.1) is given by

$$\begin{pmatrix} \nabla^2_{xx}\ell(x,y) & \nabla g(x) \\ \nabla g(x)^\top & 0 \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = - \begin{pmatrix} \nabla_x \ell(x,y) \\ g(x) \end{pmatrix}, \tag{1.2}$$

where $\nabla g(x)^\top$ represents the transpose of the Jacobian matrix $g_x(x)$ of $g$, and $\nabla_x\ell(x,y)$ and $\nabla^2_{xx}\ell(x,y)$ are the gradient and the Hessian of the Lagrangian with respect to $x$, respectively.

There are cases in constrained optimization where the system of first-order necessary conditions is reformulated by eliminating variables and/or equations. For the example given above, we know that (1.1) is equivalent (with $\bar x = x$) to

$$Z(\bar x)^\top \nabla f(\bar x) = 0, \qquad g(\bar x) = 0, \tag{1.3}$$

where the columns of the orthogonal matrix $Z(\bar x)$ form a basis for the null space of $\nabla g(\bar x)^\top$. The matrix $Z(\bar x)$ should be computed as described in [7] so that it can be extended smoothly in a neighborhood of $\bar x$ (see [7, Lemma 2.1]). The Newton step associated with (1.3) is defined by

$$\begin{pmatrix} Z(\bar x)^\top \nabla^2_{xx}\ell(\bar x, y(\bar x)) \\ \nabla g(\bar x)^\top \end{pmatrix} \Delta\bar x = - \begin{pmatrix} Z(\bar x)^\top \nabla f(\bar x) \\ g(\bar x) \end{pmatrix}, \tag{1.4}$$

where $y(\bar x)$ is the vector of least-squares multipliers obtained by solving

$$\text{minimize } \|\nabla g(\bar x)\, y + \nabla f(\bar x)\|_2^2$$

with respect to $y$. See [7].

Two equivalent forms of the necessary conditions gave rise to two different Newton methods (computational issues related to these methods are described, e.g., in the books [6] and [8]). It is natural to ask how these two methods compare, in other words how close $\Delta x$ and $\Delta\bar x$ are to each other. The goal of this work is to provide the answer not only for this particular reformulation: we propose a model where the answer can be given for the equality-constrained case but also for other cases.

One case in which we are also interested is minimization with simple bounds,

$$\text{minimize } f(x) \quad \text{subject to} \quad x \ge 0.$$

The first-order necessary conditions are

$$\nabla f(x) - y = 0, \tag{1.5}$$

$$XYe = 0, \tag{1.6}$$

$$x \ge 0, \quad y \ge 0. \tag{1.7}$$

Since this problem involves inequality constraints, we will rather call conditions (1.5)–(1.7) first-order Karush–Kuhn–Tucker (KKT) necessary conditions.
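As a concrete illustration, the two steps (1.2) and (1.4) can be assembled in a few lines of NumPy. This is a minimal sketch: the quadratic objective and linear constraints below are hypothetical stand-ins for $f$ and $g$, and $Z(\bar x)$ is taken from a QR factorization, one admissible choice of null-space basis.

```python
import numpy as np

# Hypothetical smooth problem: f(x) = 0.5 x'Qx - b'x, g(x) = Ax - c with m < n.
n, m = 4, 2
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)                  # SPD Hessian of f
b, A, c = rng.standard_normal(n), rng.standard_normal((m, n)), rng.standard_normal(m)
grad_f = lambda x: Q @ x - b                 # gradient of f
hess_L = Q                                   # Hessian of the Lagrangian (g is linear)
x, y = rng.standard_normal(n), rng.standard_normal(m)

# Step (1.2): Newton on the full KKT system (1.1).
K = np.block([[hess_L, A.T], [A, np.zeros((m, m))]])
rhs = -np.concatenate([grad_f(x) + A.T @ y, A @ x - c])
dx, dy = np.split(np.linalg.solve(K, rhs), [n])

# Step (1.4): Newton on the reduced system (1.3), with Z spanning null(A)
# and y(x) the least-squares multiplier minimizing ||grad_g y + grad_f||^2.
Z = np.linalg.qr(A.T, mode='complete')[0][:, m:]
y_x = np.linalg.lstsq(A.T, -grad_f(x), rcond=None)[0]
R = np.vstack([Z.T @ hess_L, A])
dx_bar = np.linalg.solve(R, -np.concatenate([Z.T @ grad_f(x), A @ x - c]))

print(np.linalg.norm(dx_bar - dx))
```

Because the constraints here are linear, $\nabla^2_{xx}\ell$ does not depend on $y$ and the printed norm is zero up to rounding; with nonlinear constraints the two steps differ, which is exactly what the analysis below quantifies.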
A step of Newton's method applied to (1.5)–(1.6) is the solution of

$$\begin{pmatrix} \nabla^2 f(x) & -I \\ Y & X \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = - \begin{pmatrix} \nabla f(x) - y \\ XYe \end{pmatrix}, \tag{1.8}$$

where $X = \mathrm{diag}(x)$ and $Y = \mathrm{diag}(y)$. (Given a vector $u$ in $\mathbb{R}^n$, $\mathrm{diag}(u)$ represents the diagonal matrix of order $n$ whose $i$-th diagonal element is $u_i$. Also, $e$ represents a vector of ones. We omit the dimension of $e$ since that will be determined from the context.) The application of Newton's method is made by using an interior-point approach, where $x$ and $y$ are positive and $\Delta x$ and $\Delta y$ are scaled by positive steplengths so that the new iterates $x$ and $y$ are also positive.

The equivalent form of (1.5)–(1.7) that we would like to consider (with $\bar x = x$) is given by

$$D(\bar x)^2 \nabla f(\bar x) = 0, \qquad \bar x \ge 0, \tag{1.9}$$

where $D(\bar x)$ is the diagonal matrix of order $n$ with $i$-th diagonal element given by

$$D(\bar x)_{ii} = \begin{cases} (\bar x_i)^{1/2} & \text{if } (\nabla f(\bar x))_i \ge 0, \\ 1 & \text{if } (\nabla f(\bar x))_i < 0. \end{cases}$$

This simple fact is proved in [1]. The diagonal element $D(\bar x)^2_{ii}$ might not be differentiable, or even continuous, but if $D(\bar x)^2_{ii}$ is differentiable for all $i$, then

$$\frac{\partial}{\partial \bar x}\left( D(\bar x)^2 \nabla f(\bar x) \right) = D(\bar x)^2 \nabla^2 f(\bar x) + \mathrm{diag}(\nabla f(\bar x))\, \frac{\partial}{\partial \bar x}\left( D(\bar x)^2 e \right).$$

The diagonal element $D(\bar x)^2_{ii}$ is not differentiable if $(\nabla f(\bar x))_i = 0$, in which case the $i$-th component of the (diagonal) Jacobian matrix $\frac{\partial}{\partial \bar x}(D(\bar x)^2 e)$ is artificially set to zero. So, it makes sense to define $E(\bar x)$ as the diagonal matrix of order $n$ whose $i$-th diagonal element is given by

$$E(\bar x)_{ii} = \begin{cases} (\nabla f(\bar x))_i & \text{if } (\nabla f(\bar x))_i > 0, \\ 0 & \text{otherwise}, \end{cases}$$

and to define the Newton step corresponding to the equation (1.9) as

$$\left( D(\bar x)^2 \nabla^2 f(\bar x) + E(\bar x) \right) \Delta\bar x = - D(\bar x)^2 \nabla f(\bar x). \tag{1.10}$$

See [1] and [2] for more details.

For these and other examples, we are interested in comparing the two alternative Newton approaches by looking at the distance between the two alternative Newton steps $\Delta x$ and $\Delta\bar x$. In the next section, we derive an abstract model to provide a general answer that can be particularized for the different examples. The analysis will consider a relating condition $U(x, \bar x, y) = 0$, and will establish that the norm of the difference of the steps is bounded by a constant times $\|U(x, \bar x, y)\|$. We will observe in the equality-constrained example (Section 3) that $\|U(x, \bar x, y)\|$ converges to zero if both Newton sequences are converging to a point satisfying the first-order necessary conditions.
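A minimal sketch of the two steps (1.8) and (1.10), using the definitions of $X$, $Y$, $D(\bar x)^2$, and $E(\bar x)$ above; the quadratic $f$ and the interior iterates are hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical strictly convex quadratic: f(x) = 0.5 x'Qx - b'x, minimized over x >= 0.
Q = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 2.0]])
b = np.array([1.0, -2.0, 0.5])
grad = lambda x: Q @ x - b

x = np.array([0.5, 0.2, 1.0])     # interior primal iterate, x > 0
y = np.array([0.3, 1.5, 0.1])     # interior dual iterate, y > 0

# Step (1.8): Newton on [grad f(x) - y; XYe] = 0.
n = x.size
J = np.block([[Q, -np.eye(n)], [np.diag(y), np.diag(x)]])
dx, dy = np.split(np.linalg.solve(J, -np.concatenate([grad(x) - y, x * y])), [n])

# Step (1.10): Newton on D(x)^2 grad f(x) = 0, with
# D(x)^2_ii = x_i if (grad f)_i >= 0, else 1, and E_ii = max((grad f)_i, 0).
g = grad(x)
D2 = np.where(g >= 0, x, 1.0)
E = np.where(g > 0, g, 0.0)
dx_bar = np.linalg.solve(np.diag(D2) @ Q + np.diag(E), -D2 * g)

print(dx, dx_bar)
```

The two steps generally differ; the analysis of the following sections bounds $\|\Delta\bar x - \Delta x\|$ by a relating condition built from quantities such as $(X - D(\bar x)^2)e$ and $(Y - E(\bar x))e$.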
As a consequence of our analysis, the alternative Newton steps tend to be the same. However, this is not true in the second type of examples (Sections 4 and 5) because $\|U(x, \bar x, y)\|$ does not necessarily converge to zero even if both Newton sequences are converging to the same stationary point.

## 2 The abstract reformulation

The original system of nonlinear equations is defined in the variables $x$ and $y$:

$$F(x,y) = 0. \tag{2.1}$$

The equivalent form of this system that we consider is based on the equation

$$G(\bar x) H(\bar x) = 0, \tag{2.2}$$

where $G$ satisfies at a solution $(x_*, y_*)$ a defining condition of the form

$$G(x_*) F(x_*, y_*) = G(x_*) H(x_*). \tag{2.3}$$

The equivalence relates the variables $x$, $\bar x$, and $y$ through the relating condition $U(x, \bar x, y) = 0$, so that

$$F(x,y) = 0 \quad \text{is equivalent to} \quad G(\bar x)H(\bar x) = 0 \;\;\text{and}\;\; U(x, \bar x, y) = 0.$$

The entries of $F$, $G$, and $H$ are assumed to have Lipschitz first derivatives. Their meaning is clear for the examples we have given before, but we postpone this for a while to analyze Newton's method when applied to (2.1) and (2.2). A step of Newton's method when applied to (2.1) is the solution of

$$\frac{\partial F}{\partial x}(x,y)\,\Delta x + \frac{\partial F}{\partial y}(x,y)\,\Delta y = -F(x,y), \tag{2.4}$$

whereas a Newton step for system (2.2) is given by

$$\frac{\partial}{\partial \bar x}\left( G(\bar x)H(\bar x) \right) \Delta\bar x = -G(\bar x)H(\bar x). \tag{2.5}$$

Our goal is to bound $\|\Delta\bar x - \Delta x\|$ in terms of $\|U(x,\bar x,y)\|$. First, we multiply (2.4) by $G(\bar x)$, obtaining

$$G(\bar x)\frac{\partial F}{\partial x}(x,y)\,\Delta x + G(\bar x)\frac{\partial F}{\partial y}(x,y)\,\Delta y = -G(\bar x)F(x,y). \tag{2.6}$$

Then, subtracting (2.5) from (2.6) and adding $\frac{\partial}{\partial \bar x}\left( G(\bar x)H(\bar x) \right) \Delta x$ to both sides, we get

$$\left[ \frac{\partial}{\partial \bar x}\left( G(\bar x)H(\bar x) \right) \right](\Delta\bar x - \Delta x) = -\left[ \frac{\partial}{\partial \bar x}\left( G(\bar x)H(\bar x) \right) - G(\bar x)\frac{\partial F}{\partial x}(x,y) \right]\Delta x + G(\bar x)\frac{\partial F}{\partial y}(x,y)\,\Delta y - \left[ G(\bar x)H(\bar x) - G(\bar x)F(x,y) \right].$$

Using the definitions

$$R(\bar x) = \frac{\partial}{\partial \bar x}\left( G(\bar x)H(\bar x) \right), \qquad S(x,y) = G(\bar x)\frac{\partial F}{\partial x}(x,y),$$

we derive an upper bound for $\|\Delta\bar x - \Delta x\|$:

$$\|\Delta\bar x - \Delta x\| \;\le\; \|R(\bar x)^{-1}\| \left( \|R(\bar x) - S(x,y)\| + \left\| G(\bar x)\tfrac{\partial F}{\partial y}(x,y) \right\| \right) \|\nabla F(x,y)^{-\top}\|\, \|F(x,y)\| \;+\; \|R(\bar x)^{-1}\|\, \|G(\bar x)H(\bar x) - G(\bar x)F(x,y)\|,$$

since $\|\Delta x\|$ and $\|\Delta y\|$ are bounded by $\|\nabla F(x,y)^{-\top}\|\,\|F(x,y)\|$ from (2.4). If $\kappa_{GH}$, $\kappa_F$, and $\beta_F$ are positive constants such that

$$\|R(\bar x)^{-1}\| \le \kappa_{GH}, \qquad \|\nabla F(x,y)^{-\top}\| \le \kappa_F, \qquad \|F(x,y)\| \le \beta_F,$$

then

$$\|\Delta\bar x - \Delta x\| \;\le\; \kappa_F\,\beta_F\,\kappa_{GH} \left( \|R(\bar x) - S(x,y)\| + \left\| G(\bar x)\tfrac{\partial F}{\partial y}(x,y) \right\| \right) + \kappa_{GH}\, \|G(\bar x)H(\bar x) - G(\bar x)F(x,y)\|.$$

We observe from this inequality that $\|\Delta\bar x - \Delta x\|$ is bounded above by three important terms. First, $\|\Delta\bar x - \Delta x\|$ depends on how close the values of the functions $G(\bar x)H(\bar x)$ and $G(\bar x)F(x,y)$ are to each other. It does not matter how small the residuals $G(\bar x)H(\bar x)$ and $F(x,y)$ are, but rather how close the function value $G(\bar x)H(\bar x)$ is to the value of $F(x,y)$ reduced by $G(\bar x)$. The dependence on $R(\bar x) - S(x,y)$ is about the consistency of the derivatives $\frac{\partial}{\partial \bar x}(G(\bar x)H(\bar x))$ and $G(\bar x)\frac{\partial F}{\partial x}(x,y)$, the former being the derivative of $G(\bar x)H(\bar x)$ and the latter the derivative of $F(x,y)$ with respect to $x$ reduced by the operator $G(\bar x)$. We conclude also that the norm of $\Delta\bar x - \Delta x$ depends on the norm of $G(\bar x)\frac{\partial F}{\partial y}(x,y)$, which is a quite interesting aspect of the analysis. In the examples given later, the term $\|G(\bar x)\frac{\partial F}{\partial y}(x,y)\|$ is either zero or bounded by $\|U(x,\bar x,y)\|$ with $\bar x = x$. One can see that $G(\bar x)\frac{\partial F}{\partial y}(x,y)$ is the derivative of $F(x,y)$ with respect to $y$ reduced by the operator $G(\bar x)$, and its norm influences the difference between $\Delta x$ and $\Delta\bar x$.

From the inequality given above we can easily prove the following theorem.

**Theorem 2.1.** Consider a Newton step (2.4) for $F(x,y) = 0$, where $\nabla F(x,y)$ is nonsingular. Consider a Newton step (2.5) for $G(\bar x)H(\bar x) = 0$, where $R(\bar x)$ is nonsingular. If there exist positive constants $C_1$, $C_2$, and $C_3$ such that

$$\|R(\bar x) - S(x,y)\| \le C_1 \|U(x,\bar x,y)\|, \qquad \left\| G(\bar x)\tfrac{\partial F}{\partial y}(x,y) \right\| \le C_2 \|U(x,\bar x,y)\|, \qquad \|G(\bar x)H(\bar x) - G(\bar x)F(x,y)\| \le C_3 \|U(x,\bar x,y)\|,$$

then

$$\|\Delta\bar x - \Delta x\| \le C\, \|U(x,\bar x,y)\|$$

for some positive constant $C$.

The constant $C$ in Theorem 2.1 depends on $x$, $\bar x$, and $y$ since $C_1$, $C_2$, and $C_3$ depend also on these points. We can assume that the standard Newton assumptions [4] hold for the Newton methods defined by (2.4) and (2.5) at the points $(x_*, y_*)$ and $\bar x_*$, respectively.
Then, if $x$ and $y$ are sufficiently close to $x_*$ and $y_*$, and $\bar x$ is sufficiently close to $\bar x_*$, the constants $C_1$, $C_2$, $C_3$, and $C$ do not depend on the points $x$, $\bar x$, and $y$. We move rapidly to the examples to illustrate our analysis.
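The exact algebraic identity behind the bound, $R(\bar x)(\Delta\bar x - \Delta x) = -(R - S)\Delta x + G\,\frac{\partial F}{\partial y}\Delta y - (GH - GF)$, can be checked numerically; everything below (dimensions, seed, matrices) is an arbitrary random instantiation of the abstract quantities, not data from the paper.

```python
import numpy as np

# Numerical check of the identity behind the bound of Section 2:
#   R (dx_bar - dx) = -(R - S) dx + (G F_y) dy - (GH - G F),
# where (dx, dy) solves (2.4) and dx_bar solves (2.5).
rng = np.random.default_rng(1)
n, m = 5, 3
Fx = rng.standard_normal((n + m, n))      # dF/dx
Fy = rng.standard_normal((n + m, m))      # dF/dy
Fval = rng.standard_normal(n + m)         # F(x, y)
G = rng.standard_normal((n, n + m))       # G(x_bar)
GH = rng.standard_normal(n)               # G(x_bar) H(x_bar)
R = rng.standard_normal((n, n))           # derivative of G(x_bar) H(x_bar)
S = G @ Fx                                # S(x, y) = G(x_bar) dF/dx

step = np.linalg.solve(np.hstack([Fx, Fy]), -Fval)   # Newton step (2.4)
dx, dy = step[:n], step[n:]
dx_bar = np.linalg.solve(R, -GH)                     # Newton step (2.5)

lhs = R @ (dx_bar - dx)
rhs = -(R - S) @ dx + (G @ Fy) @ dy - (GH - G @ Fval)
print(np.allclose(lhs, rhs))  # True
```

The identity holds for any data once the two steps are defined by (2.4) and (2.5); the bound then follows by taking norms and using the nonsingularity of $R(\bar x)$ and $\nabla F(x,y)$.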

## 3 Equality-constrained optimization

In this case

$$F(x,y) = \begin{pmatrix} \nabla f(x) + \nabla g(x)\, y \\ g(x) \end{pmatrix}, \qquad G(\bar x)H(\bar x) = \begin{pmatrix} Z(\bar x)^\top \nabla f(\bar x) \\ g(\bar x) \end{pmatrix}.$$

The choices for $G(\bar x)$ and $H(\bar x)$ are

$$G(\bar x) = \begin{pmatrix} Z(\bar x)^\top & 0 \\ 0 & I \end{pmatrix}, \qquad H(\bar x) = \begin{pmatrix} \nabla f(\bar x) \\ g(\bar x) \end{pmatrix}.$$

Since

$$G(\bar x)H(\bar x) - G(\bar x)F(\bar x,y) = \begin{pmatrix} Z(\bar x)^\top \nabla f(\bar x) - Z(\bar x)^\top\left( \nabla f(\bar x) + \nabla g(\bar x)\, y \right) \\ 0 \end{pmatrix} = 0,$$

the defining condition (2.3) is satisfied for any pair $(\bar x, y)$, even if it does not verify the first-order necessary conditions. From the theory presented in [7],

$$R(\bar x) = \frac{\partial}{\partial \bar x}\left( G(\bar x)H(\bar x) \right) = \begin{pmatrix} Z(\bar x)^\top \nabla^2_{xx}\ell(\bar x, y(\bar x)) \\ \nabla g(\bar x)^\top \end{pmatrix}.$$

Also,

$$S(x,y) = \begin{pmatrix} Z(\bar x)^\top \nabla^2_{xx}\ell(x,y) \\ \nabla g(x)^\top \end{pmatrix},$$

and the Lipschitz continuity of the second derivatives of $\ell(x,y)$ implies, with $\bar x = x$,

$$\|R(\bar x) - S(x,y)\| \le C_1\, \|y - y(x)\|$$

for some positive constant $C_1$. Moreover, with $\bar x = x$,

$$G(\bar x)\frac{\partial F}{\partial y}(x,y) = \begin{pmatrix} Z(x)^\top \nabla g(x) \\ 0 \end{pmatrix} = 0.$$

It is natural to define the relating condition as

$$U(x, \bar x, y) = y - y(x) = 0.$$

Theorem 2.1 assures the existence of a positive constant $C$ such that

$$\|\Delta\bar x - \Delta x\| \le C\, \|y - y(x)\|,$$

where $\Delta x$ and $\Delta\bar x$ are given by (1.2) and (1.4). It is obvious that $F(x,y) = 0$ is equivalent to $G(\bar x)H(\bar x) = 0$ and $U(x,\bar x,y) = 0$. Also, if $(x_*, y_*)$ is a stationary point, then

$$\lim_{(x,y)\to(x_*,y_*)} \|U(x,\bar x,y)\| = 0, \tag{3.1}$$

and the Newton steps $\Delta x$ and $\Delta\bar x$ tend in this situation to be the same step.
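A small numerical companion to this section, with a hypothetical $f$ and a nonlinear $g$ chosen for illustration: since $\nabla^2_{xx}\ell$ now depends on $y$, the steps (1.2) and (1.4) coincide exactly when $y = y(x)$ and drift apart once $y \ne y(x)$.

```python
import numpy as np

# Hypothetical problem: f(x) = exp(x1) + x2^2, g(x) = x1^2 + x2^2 - 1 (n = 2, m = 1).
def data(x):
    grad_f = np.array([np.exp(x[0]), 2 * x[1]])
    A = np.array([[2 * x[0], 2 * x[1]]])           # Jacobian of g (grad g transposed)
    return grad_f, A

def hess_l(x, y):                                  # Hessian of the Lagrangian
    return np.diag([np.exp(x[0]), 2.0]) + 2.0 * y * np.eye(2)

x = np.array([0.8, 0.7])
grad_f, A = data(x)
gval = np.array([x[0]**2 + x[1]**2 - 1.0])

# Least-squares multiplier y(x) and null-space basis Z(x).
y_x = np.linalg.lstsq(A.T, -grad_f, rcond=None)[0]
Z = np.linalg.qr(A.T, mode='complete')[0][:, 1:]

# Step (1.4), which always uses y(x).
Rmat = np.vstack([Z.T @ hess_l(x, y_x[0]), A])
dx_bar = np.linalg.solve(Rmat, -np.concatenate([Z.T @ grad_f, gval]))

def dx_full(y):                                    # step (1.2) at multiplier y
    K = np.block([[hess_l(x, y), A.T], [A, np.zeros((1, 1))]])
    rhs = -np.concatenate([grad_f + A.T @ np.array([y]), gval])
    return np.linalg.solve(K, rhs)[:2]

print(np.linalg.norm(dx_full(y_x[0]) - dx_bar))        # zero up to rounding
print(np.linalg.norm(dx_full(y_x[0] + 0.5) - dx_bar))  # nonzero once y != y(x)
```

This matches the bound $\|\Delta\bar x - \Delta x\| \le C\|y - y(x)\|$: projecting the first block row of (1.2) onto the null space of the constraint Jacobian reproduces (1.4) exactly when the Hessians agree.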

## 4 Minimization with simple bounds

In this case, the equivalent KKT systems are

$$F(x,y) = \begin{pmatrix} \nabla f(x) - y \\ XYe \end{pmatrix} = 0, \qquad G(\bar x)H(\bar x) = D(\bar x)^2 \nabla f(\bar x) = 0.$$

Of course, we have excluded from the KKT systems the nonnegativity of $x$, $\bar x$, and $y$. The choices for $G(\bar x)$ and $H(\bar x)$ are

$$G(\bar x) = \begin{pmatrix} D(\bar x)^2 & I \end{pmatrix}, \qquad H(\bar x) = \begin{pmatrix} \nabla f(\bar x) \\ 0 \end{pmatrix}.$$

In this case the defining condition (2.3) is not satisfied unless we are at a point that verifies the first-order KKT necessary conditions. In fact, we have

$$G(\bar x)H(\bar x) - G(\bar x)F(x,y) = D(\bar x)^2 \nabla f(\bar x) - D(\bar x)^2 \nabla f(x) - \left( X - D(\bar x)^2 \right) Ye.$$

$R(\bar x)$ and $S(x,y)$ are given by

$$R(\bar x) = D(\bar x)^2 \nabla^2 f(\bar x) + E(\bar x), \qquad S(x,y) = D(\bar x)^2 \nabla^2 f(x) + Y.$$

Moreover,

$$G(\bar x)\frac{\partial F}{\partial y}(x,y) = -D(\bar x)^2 + X.$$

Thus, if the relating condition is defined as

$$U(x,\bar x,y) = \begin{pmatrix} \left( X - D(\bar x)^2 \right) e \\ \left( \bar X - D(\bar x)^2 \right) e \\ \left( Y - E(\bar x) \right) e \end{pmatrix} = 0,$$

with $\bar X = \mathrm{diag}(\bar x)$, then there exist positive constants $C_1$, $C_2$, and $C_3$ satisfying the assumptions of Theorem 2.1. This theorem assures the existence of a positive constant $C$ such that

$$\|\Delta\bar x - \Delta x\| \le C \left\| \begin{pmatrix} \left( X - D(\bar x)^2 \right) e \\ \left( \bar X - D(\bar x)^2 \right) e \\ \left( Y - E(\bar x) \right) e \end{pmatrix} \right\|,$$

where $\Delta x$ and $\Delta\bar x$ are given by (1.8) and (1.10). Note that $F(x,y) = 0$ is equivalent to $G(\bar x)H(\bar x) = 0$ and $U(x,\bar x,y) = 0$ provided $\nabla f(\bar x) \ge 0$. However, in this example, if $(x_*, y_*)$ is a point that satisfies the first-order KKT necessary conditions, the limit (3.1) might not hold because

$$\lim_{x \to x_*} \left( X - D(x)^2 \right) e \qquad \text{and} \qquad \lim_{\bar x \to x_*} \left( \bar X - D(\bar x)^2 \right) e$$

do not necessarily exist or equal zero.
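The failure of the limit (3.1) here can be seen already with $n = 1$; the objective below is a hypothetical example with an interior stationary point.

```python
# For f(x) = 0.5 (x - 2)^2 minimized subject to x >= 0, the stationary point is
# x* = 2 with grad f(x*) = 0. The entry of (X - D(x)^2)e jumps between the two
# branches of the definition of D(x)^2, so it has no limit as x -> x*.
def u_component(x):
    g = x - 2.0                   # grad f(x)
    D2 = x if g >= 0 else 1.0     # D(x)^2 as defined in Section 1
    return x - D2                 # the single entry of (X - D(x)^2)e

print(u_component(2.001))   # grad f >= 0 branch: 0.0
print(u_component(1.999))   # grad f < 0 branch: about 0.999
```

Approaching $x_* = 2$ from above gives the value $0$, while approaching from below gives values near $x_* - 1 = 1$, so $\|U\|$ does not converge to zero along every sequence.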

## 5 Discretized optimal control problems with bounds on the control variables

In this section, we consider the class of nonlinear programming problems analyzed in [3]. See also [5]. A nonlinear programming problem of this class is formulated as

$$\text{minimize } f(x_1, x_2) \quad \text{subject to} \quad g(x_1, x_2) = 0, \quad x_2 \ge 0,$$

where $x_1$ is in $\mathbb{R}^m$ and $x_2$ is in $\mathbb{R}^{n-m}$. In this class of problems, $\nabla g$ is partitioned as

$$\nabla g = \begin{pmatrix} \nabla_{x_1} g \\ \nabla_{x_2} g \end{pmatrix},$$

where $\nabla_{x_1} g$ is nonsingular. The first-order KKT necessary conditions are

$$\nabla f(x) + \nabla g(x)\, y - \begin{pmatrix} 0 \\ y_2 \end{pmatrix} = 0, \qquad g(x) = 0, \qquad X_2 Y_2 e = 0, \qquad x_2 \ge 0, \quad y_2 \ge 0.$$

A basis for the null space of $\nabla g(x)^\top$ is given by the columns of

$$W(x) = \begin{pmatrix} -\nabla_{x_1} g(x)^{-\top}\, \nabla_{x_2} g(x)^\top \\ I \end{pmatrix}.$$

The equivalent KKT system that we consider is

$$D(\bar x)^2\, W(\bar x)^\top \nabla f(\bar x) = 0, \tag{5.1}$$

$$g(\bar x) = 0, \qquad \bar x_2 \ge 0, \tag{5.2}$$

where $D(\bar x)$ is the diagonal matrix of order $n-m$ with $i$-th diagonal element given by

$$D(\bar x)_{ii} = \begin{cases} \left( (\bar x_2)_i \right)^{1/2} & \text{if } \left( W(\bar x)^\top \nabla f(\bar x) \right)_i \ge 0, \\ 1 & \text{if } \left( W(\bar x)^\top \nabla f(\bar x) \right)_i < 0. \end{cases}$$

The Newton step associated with (5.1)–(5.2) is the solution of

$$\begin{pmatrix} D(\bar x)^2\, W(\bar x)^\top \nabla^2_{xx}\ell(\bar x, y(\bar x)) + \begin{pmatrix} 0 & E(\bar x) \end{pmatrix} \\ \nabla g(\bar x)^\top \end{pmatrix} \Delta\bar x = - \begin{pmatrix} D(\bar x)^2\, W(\bar x)^\top \nabla f(\bar x) \\ g(\bar x) \end{pmatrix},$$

where $y(\bar x) = -\nabla_{x_1} g(\bar x)^{-1}\, \nabla_{x_1} f(\bar x)$ and $E(\bar x)$ is a diagonal matrix of order $n-m$ with $i$-th diagonal element given by

$$E(\bar x)_{ii} = \begin{cases} \left( W(\bar x)^\top \nabla f(\bar x) \right)_i & \text{if } \left( W(\bar x)^\top \nabla f(\bar x) \right)_i > 0, \\ 0 & \text{otherwise}. \end{cases}$$

See [3] for more details. In this case, the KKT systems are represented by

$$F(x,y) = \begin{pmatrix} \nabla f(x) + \nabla g(x)\, y - \begin{pmatrix} 0 \\ y_2 \end{pmatrix} \\ g(x) \\ X_2 Y_2 e \end{pmatrix}, \qquad G(\bar x)H(\bar x) = \begin{pmatrix} D(\bar x)^2\, W(\bar x)^\top \nabla f(\bar x) \\ g(\bar x) \end{pmatrix}.$$

(We have excluded the nonnegativity of $x_2$, $\bar x_2$, and $y_2$.) The choices for $G(\bar x)$ and $H(\bar x)$ are

$$G(\bar x) = \begin{pmatrix} D(\bar x)^2\, W(\bar x)^\top & 0 & I \\ 0 & I & 0 \end{pmatrix}, \qquad H(\bar x) = \begin{pmatrix} \nabla f(\bar x) \\ g(\bar x) \\ 0 \end{pmatrix}.$$

The defining condition (2.3) is not satisfied unless we are at a point that verifies the first-order KKT necessary conditions:

$$G(\bar x)H(\bar x) - G(\bar x)F(x,y) = \begin{pmatrix} D(\bar x)^2\, W(\bar x)^\top \nabla f(\bar x) - D(\bar x)^2\, W(\bar x)^\top \nabla f(x) - \left( X_2 - D(\bar x)^2 \right) Y_2 e \\ g(\bar x) - g(x) \end{pmatrix}.$$

Also, $R(\bar x)$ and $S(x,y)$ are given by

$$R(\bar x) = \begin{pmatrix} D(\bar x)^2\, W(\bar x)^\top \nabla^2_{xx}\ell(\bar x, y(\bar x)) + \begin{pmatrix} 0 & E(\bar x) \end{pmatrix} \\ \nabla g(\bar x)^\top \end{pmatrix}, \qquad S(x,y) = \begin{pmatrix} D(\bar x)^2\, W(\bar x)^\top \nabla^2_{xx}\ell(x, y) + \begin{pmatrix} 0 & Y_2 \end{pmatrix} \\ \nabla g(x)^\top \end{pmatrix}.$$

Moreover,

$$G(\bar x)\frac{\partial F}{\partial y_2}(x,y) = -D(\bar x)^2 + X_2.$$

So, the relating condition is defined as (see [3])

$$U(x,\bar x,y) = \begin{pmatrix} y - y(x) \\ \left( X_2 - D(\bar x)^2 \right) e \\ \left( \bar X_2 - D(\bar x)^2 \right) e \\ \left( Y_2 - E(\bar x) \right) e \end{pmatrix} = 0,$$

with $\bar X_2 = \mathrm{diag}(\bar x_2)$,

assuring the existence of the positive constants $C_1$, $C_2$, and $C_3$ in Theorem 2.1, which in turn guarantees the existence of a positive constant $C$ such that

$$\|\Delta\bar x - \Delta x\| \le C \left\| \begin{pmatrix} y - y(x) \\ \left( X_2 - D(\bar x)^2 \right) e \\ \left( \bar X_2 - D(\bar x)^2 \right) e \\ \left( Y_2 - E(\bar x) \right) e \end{pmatrix} \right\|.$$

Note that $F(x,y) = 0$ is equivalent to $G(\bar x)H(\bar x) = 0$ and $U(x,\bar x,y) = 0$ provided $W(\bar x)^\top \nabla f(\bar x) \ge 0$. In this example, as in the previous one, even if $(x_*, y_*)$ satisfies the first-order KKT conditions, there is no guarantee that the limit (3.1) is true.

A similar analysis for the general nonlinear programming problem is also of interest. For instance, we could consider the primal-dual, the affine-scaling, and the reduced primal-dual algorithms described in [9].

## References

[1] T. F. Coleman and Y. Li, On the convergence of interior-reflective Newton methods for nonlinear minimization subject to bounds, Math. Programming, 67 (1994), pp. 189–224.

[2] T. F. Coleman and Y. Li, An interior trust region approach for nonlinear minimization subject to bounds, SIAM J. Optim., 6 (1996), pp. 418–445.

[3] J. E. Dennis, M. Heinkenschloss, and L. N. Vicente, Trust-region interior-point SQP algorithms for a class of nonlinear programming problems, SIAM J. Control Optim., 36 (1998), pp. 1750–1794.

[4] J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1983. Republished by SIAM, Philadelphia, 1996.

[5] J. E. Dennis and L. N. Vicente, On the convergence theory of general trust-region-based algorithms for equality-constrained optimization, SIAM J. Optim., 7 (1997), pp. 927–950.

[6] R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, Chichester, second ed., 1987.

[7] J. Goodman, Newton's method for constrained optimization, Math. Programming, 33 (1985), pp. 162–171.

[8] S. G. Nash and A. Sofer, Linear and Nonlinear Programming, McGraw-Hill, New York, 1996.

[9] L. N. Vicente, On interior-point Newton algorithms for discretized optimal control problems with state constraints, Optimization Methods & Software, 8 (1998), pp. 249–275.


Calibration of the broa ban UV Raiometer Marian Morys an Daniel Berger Solar Light Co., Philaelphia, PA 19126 ABSTRACT Mounting concern about the ozone layer epletion an the potential ultraviolet exposure

### 1 High-Dimensional Space

Contents High-Dimensional Space. Properties of High-Dimensional Space..................... 4. The High-Dimensional Sphere......................... 5.. The Sphere an the Cube in Higher Dimensions...........

### Math 407A: Linear Optimization

Math 407A: Linear Optimization Lecture 4: LP Standard Form 1 1 Author: James Burke, University of Washington LPs in Standard Form Minimization maximization Linear equations to linear inequalities Lower

### Linear and quadratic Taylor polynomials for functions of several variables.

ams/econ 11b supplementary notes ucsc Linear quadratic Taylor polynomials for functions of several variables. c 010, Yonatan Katznelson Finding the extreme (minimum or maximum) values of a function, is

### Mathematics Review for Economists

Mathematics Review for Economists by John E. Floy University of Toronto May 9, 2013 This ocument presents a review of very basic mathematics for use by stuents who plan to stuy economics in grauate school

### Limits and Continuity

Math 20C Multivariable Calculus Lecture Limits and Continuity Slide Review of Limit. Side limits and squeeze theorem. Continuous functions of 2,3 variables. Review: Limits Slide 2 Definition Given a function

### Numerical Methods for Solving Systems of Nonlinear Equations

Numerical Methods for Solving Systems of Nonlinear Equations by Courtney Remani A project submitted to the Department of Mathematical Sciences in conformity with the requirements for Math 4301 Honour s

### BV has the bounded approximation property

The Journal of Geometric Analysis volume 15 (2005), number 1, pp. 1-7 [version, April 14, 2005] BV has the boune approximation property G. Alberti, M. Csörnyei, A. Pe lczyński, D. Preiss Abstract: We prove

### Optimal Control Policy of a Production and Inventory System for multi-product in Segmented Market

RATIO MATHEMATICA 25 (2013), 29 46 ISSN:1592-7415 Optimal Control Policy of a Prouction an Inventory System for multi-prouct in Segmente Market Kuleep Chauhary, Yogener Singh, P. C. Jha Department of Operational

### Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1

Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1 J. Zhang Institute of Applied Mathematics, Chongqing University of Posts and Telecommunications, Chongqing

### Week 4 - Linear Demand and Supply Curves

Week 4 - Linear Deman an Supply Curves November 26, 2007 1 Suppose that we have a linear eman curve efine by the expression X D = A bp X an a linear supply curve given by X S = C + P X, where b > 0 an

### THE UNIVERSITY OF AUCKLAND

COMPSCI 369 THE UNIVERSITY OF AUCKLAND FIRST SEMESTER, 2011 MID-SEMESTER TEST Campus: City COMPUTER SCIENCE Computational Science (Time allowed: 50 minutes) NOTE: Attempt all questions Use of calculators

### Date: April 12, 2001. Contents

2 Lagrange Multipliers Date: April 12, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 12 2.3. Informative Lagrange Multipliers...........

### it is easy to see that α = a

21. Polynomial rings Let us now turn out attention to determining the prime elements of a polynomial ring, where the coefficient ring is a field. We already know that such a polynomial ring is a UF. Therefore

### 1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ]

1 Homework 1 (1) Prove the ideal (3,x) is a maximal ideal in Z[x]. SOLUTION: Suppose we expand this ideal by including another generator polynomial, P / (3, x). Write P = n + x Q with n an integer not

### SOME NOTES ON THE HYPERBOLIC TRIG FUNCTIONS SINH AND COSH

SOME NOTES ON THE HYPERBOLIC TRIG FUNCTIONS SINH AND COSH Basic Definitions In homework set # one of the questions involves basic unerstaning of the hyperbolic functions sinh an cosh. We will use this

### 10. Proximal point method

L. Vandenberghe EE236C Spring 2013-14) 10. Proximal point method proximal point method augmented Lagrangian method Moreau-Yosida smoothing 10-1 Proximal point method a conceptual algorithm for minimizing

### Modelling and Calculus

Moelling an Calculus By Collin Phillips This material has been evelope as a joint project between the Mathematics Learning Centre MLC an the Learning Centre LC at the University of Syney. With literary

### Lecture 4: Equality Constrained Optimization. Tianxi Wang

Lecture 4: Equality Constrained Optimization Tianxi Wang wangt@essex.ac.uk 2.1 Lagrange Multiplier Technique (a) Classical Programming max f(x 1, x 2,..., x n ) objective function where x 1, x 2,..., x

### DIFFRACTION AND INTERFERENCE

DIFFRACTION AND INTERFERENCE In this experiment you will emonstrate the wave nature of light by investigating how it bens aroun eges an how it interferes constructively an estructively. You will observe

### 1 Derivatives of Piecewise Defined Functions

MATH 1010E University Matematics Lecture Notes (week 4) Martin Li 1 Derivatives of Piecewise Define Functions For piecewise efine functions, we often ave to be very careful in computing te erivatives.

### 9.5. Interference of Light Waves: Young s double-slit Experiment

Interference of Light Waves: Young s ouble-slit Experiment At the en of the 1600s an into the 1700s, the ebate over the nature of light was in full swing. Newton s theory of light particles face challenges

### Problem 1 (10 pts) Find the radius of convergence and interval of convergence of the series

1 Problem 1 (10 pts) Find the radius of convergence and interval of convergence of the series a n n=1 n(x + 2) n 5 n 1. n(x + 2)n Solution: Do the ratio test for the absolute convergence. Let a n =. Then,

### Studying the Behavior of Active Mass Drivers during an Earthquake Using Discrete Instantaneous Optimal Control Method

Stuying the Behavior of Active Mass Drivers uring an Earthquae Using Discrete Instantaneous Optimal Control Metho O. Bahar 1, M. R. Banan 2, an M. Mahzoon 3 1. Structural Engineering Research Center, International

### MATH 425, PRACTICE FINAL EXAM SOLUTIONS.

MATH 45, PRACTICE FINAL EXAM SOLUTIONS. Exercise. a Is the operator L defined on smooth functions of x, y by L u := u xx + cosu linear? b Does the answer change if we replace the operator L by the operator

### 3. INNER PRODUCT SPACES

. INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

### 2.3 Convex Constrained Optimization Problems

42 CHAPTER 2. FUNDAMENTAL CONCEPTS IN CONVEX OPTIMIZATION Theorem 15 Let f : R n R and h : R R. Consider g(x) = h(f(x)) for all x R n. The function g is convex if either of the following two conditions

### An important class of codes are linear codes in the vector space Fq n, where F q is a finite field of order q.

Chapter 3 Linear Codes An important class of codes are linear codes in the vector space Fq n, where F q is a finite field of order q. Definition 3.1 (Linear code). A linear code C is a code in Fq n for

### The mean-field computation in a supermarket model with server multiple vacations

DOI.7/s66-3-7-5 The mean-fiel computation in a supermaret moel with server multiple vacations Quan-Lin Li Guirong Dai John C. S. Lui Yang Wang Receive: November / Accepte: 8 October 3 SpringerScienceBusinessMeiaNewYor3