14. Nonlinear least-squares




EE103 (Fall 2011-12)

• definition
• Newton's method
• Gauss-Newton method

14-1

Nonlinear least-squares

minimize $\sum_{i=1}^m r_i(x)^2 = \|r(x)\|^2$

• $r_i$ is a nonlinear function of the $n$-vector of variables $x$
• $r(x) = (r_1(x), r_2(x), \ldots, r_m(x))$
• reduces to linear least-squares if $r(x) = Ax - b$
• $g(x) = \|r(x)\|^2$ may have multiple local minima; finding the global minimum is usually very hard
• this is a nonlinear minimization problem; we can apply Newton's method

Nonlinear least-squares 14-2

Interpretation as overdetermined nonlinear equations

a set of $m$ nonlinear equations in $n$ variables:

$r_1(x_1, x_2, \ldots, x_n) = 0, \quad r_2(x_1, x_2, \ldots, x_n) = 0, \quad \ldots, \quad r_m(x_1, x_2, \ldots, x_n) = 0$

• usually there is no $x$ that satisfies $r(x) = 0$
• instead we can calculate the vector $x$ that minimizes $\|r(x)\|^2 = \sum_{i=1}^m r_i(x)^2$

Nonlinear least-squares 14-3

Inductor modeling example

50 nonlinear equations in 5 variables $x_1, \ldots, x_5$:

$\exp(x_1)\, n_i^{x_2} w_i^{x_3} d_i^{x_4} D_i^{x_5} \approx L_i, \quad i = 1, \ldots, 50$

method 1 (exercise 84): suppose we are free to choose the error criterion; if we choose to minimize the sum of squared errors on a logarithmic scale,

minimize $\sum_{i=1}^{50} \left( \log\!\left(\exp(x_1)\, n_i^{x_2} w_i^{x_3} d_i^{x_4} D_i^{x_5}\right) - \log L_i \right)^2$,

we obtain a linear least-squares problem: minimize $\|Ax - b\|^2$ where

$A = \begin{bmatrix} 1 & \log n_1 & \log w_1 & \log d_1 & \log D_1 \\ 1 & \log n_2 & \log w_2 & \log d_2 & \log D_2 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & \log n_{50} & \log w_{50} & \log d_{50} & \log D_{50} \end{bmatrix}, \qquad b = \begin{bmatrix} \log L_1 \\ \log L_2 \\ \vdots \\ \log L_{50} \end{bmatrix}$

Nonlinear least-squares 14-4
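
As an illustration of method 1, here is a minimal NumPy sketch that builds $A$ and $b$ and solves the log-scale linear least-squares problem; the arrays n, w, d, D, L are made-up placeholders standing in for the 50 inductor measurements, which are not reproduced in these notes.

```python
import numpy as np

# Placeholder data standing in for the 50 measurements (n_i, w_i, d_i, D_i, L_i);
# the real values are not given here.
rng = np.random.default_rng(0)
n, w, d, D = (rng.uniform(1.0, 10.0, 50) for _ in range(4))
L = rng.uniform(1.0, 100.0, 50)

# Design matrix and right-hand side of the log-scale model
#   log L_i ~ x1 + x2*log(n_i) + x3*log(w_i) + x4*log(d_i) + x5*log(D_i)
A = np.column_stack([np.ones(50), np.log(n), np.log(w), np.log(d), np.log(D)])
b = np.log(L)

# Linear least-squares solution: minimize ||A x - b||^2
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_ls)
```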

method 2: minimize the sum of squared errors on a linear scale,

minimize $\sum_{i=1}^{50} \left( \exp(x_1)\, n_i^{x_2} w_i^{x_3} d_i^{x_4} D_i^{x_5} - L_i \right)^2$

this is a nonlinear least-squares problem: minimize $\sum_{i=1}^{50} r_i(x)^2$ where

$r_i(x) = \exp(x_1)\, n_i^{x_2} w_i^{x_3} d_i^{x_4} D_i^{x_5} - L_i = \exp(x_1 + x_2 \log n_i + x_3 \log w_i + x_4 \log d_i + x_5 \log D_i) - L_i$

• much harder than linear least-squares (may have multiple local minima)
• requires an iterative method
• can use method 1 to find a starting point

Nonlinear least-squares 14-5
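
Continuing the same sketch, the method 2 residual can be evaluated through the identical design matrix A, since $r_i(x) = \exp(a_i^T x) - L_i$ with $a_i^T$ the $i$-th row of $A$; this also shows why the method 1 solution is a convenient starting point (names as in the sketch above, not the course's reference code).

```python
def r(x):
    # r_i(x) = exp(x1 + x2*log n_i + ... + x5*log D_i) - L_i = exp(a_i^T x) - L_i
    return np.exp(A @ x) - L

def g(x):
    # nonlinear least-squares objective: sum of squared residuals
    return np.sum(r(x) ** 2)

# the linear (log-scale) solution is a reasonable starting point for an
# iterative method applied to g
print(g(x_ls))
```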

Navigation from range measurements

estimate a position $(u, v)$ from distances to $m$ beacons at locations $(p_i, q_i)$

measured distances:

$\rho_i = \sqrt{(u - p_i)^2 + (v - q_i)^2} + w_i, \quad i = 1, \ldots, m$

where $w_i$ is the range error, unknown but small

nonlinear least-squares estimate: choose the estimates $(u, v)$ by minimizing

$g(u, v) = \sum_{i=1}^m r_i(u, v)^2 = \sum_{i=1}^m \left( \sqrt{(u - p_i)^2 + (v - q_i)^2} - \rho_i \right)^2$

Nonlinear least-squares 14-6
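
A small sketch of these residuals, with made-up beacon locations and simulated range errors (the arrays p, q, rho are illustrative, not the data used in the example on the next page):

```python
import numpy as np

# Made-up beacon locations (p_i, q_i) and simulated noisy range measurements
p = np.array([0.5, 3.0, 2.5, 0.0, 4.0])
q = np.array([3.0, 0.5, 3.5, 1.0, 2.0])
true_pos = np.array([1.0, 1.0])
rng = np.random.default_rng(1)
rho = np.hypot(true_pos[0] - p, true_pos[1] - q) + rng.uniform(-0.2, 0.2, p.size)

def r(uv):
    # residuals r_i(u, v) = sqrt((u - p_i)^2 + (v - q_i)^2) - rho_i
    u, v = uv
    return np.hypot(u - p, v - q) - rho

def g(uv):
    # objective g(u, v) = sum_i r_i(u, v)^2
    return np.sum(r(uv) ** 2)
```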

example

• correct position is $(1, 1)$; $m = 5$ beacons at given positions; range measurements accurate to $\pm 0.2$
• [figures: graph of $g(u,v)$ and contour lines of $g(u,v)$ over $0 \le u, v \le 4$]
• (global) minimum at $(1.18, 0.82)$
• local minimum at $(2.99, 2.12)$, local maximum at $(1.94, 1.87)$

Nonlinear least-squares 14-7

Newton's method for nonlinear least-squares

apply the method of page 13-25 to $g(x) = \|r(x)\|^2 = \sum_{i=1}^m r_i(x)^2$

first and second derivatives of $g$:

$\dfrac{\partial g(x)}{\partial x_k} = 2 \sum_{i=1}^m r_i(x)\, \dfrac{\partial r_i(x)}{\partial x_k}$

$\dfrac{\partial^2 g(x)}{\partial x_j \partial x_k} = 2 \sum_{i=1}^m \left( r_i(x)\, \dfrac{\partial^2 r_i(x)}{\partial x_j \partial x_k} + \dfrac{\partial r_i(x)}{\partial x_j}\, \dfrac{\partial r_i(x)}{\partial x_k} \right)$

i.e., the gradient and Hessian of $g$ are

$\nabla g(x) = 2 \sum_{i=1}^m r_i(x)\, \nabla r_i(x)$

$\nabla^2 g(x) = 2 \sum_{i=1}^m \left( r_i(x)\, \nabla^2 r_i(x) + \nabla r_i(x)\, \nabla r_i(x)^T \right)$

Nonlinear least-squares 14-8
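
As a sketch, these formulas translate directly into NumPy, assuming the residual vector, the Jacobian (rows $\nabla r_i(x)^T$), and the individual Hessians $\nabla^2 r_i(x)$ are available; the function name and argument layout are my own.

```python
import numpy as np

def grad_hess_g(res, jac, hessians):
    """Gradient and Hessian of g(x) = sum_i r_i(x)^2.

    res      -- residual vector r(x), shape (m,)
    jac      -- Jacobian with rows grad r_i(x)^T, shape (m, n)
    hessians -- stacked Hessians of the r_i, shape (m, n, n)
    """
    grad = 2 * jac.T @ res                     # 2 * sum_i r_i(x) grad r_i(x)
    hess = 2 * (jac.T @ jac +                  # 2 * sum_i grad r_i grad r_i^T ...
                np.einsum('i,ijk->jk', res, hessians))  # ... + 2 * sum_i r_i(x) hess r_i(x)
    return grad, hess
```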

example (inductor problem of page 14-4)

method 1 (linear least-squares):

$x_1 = 7.25, \quad x_2 = 1.38, \quad x_3 = 0.48, \quad x_4 = 0.28, \quad x_5 = 1.21$

method 2: apply Newton's method to

$g(x) = \sum_{i=1}^{50} \left( \exp(x_1 + x_2 \log n_i + x_3 \log w_i + x_4 \log d_i + x_5 \log D_i) - L_i \right)^2$

using the linear least-squares solution as the starting point; Newton's method converges in four iterations to

$x_1 = 7.14, \quad x_2 = 1.32, \quad x_3 = 0.53, \quad x_4 = 0.22, \quad x_5 = 1.27$

Nonlinear least-squares 14-9

example (navigation problem of page 14-6)

[figure: contour lines of $g(u,v)$ with the minimum (point 1) and the local minimum (point 2) marked]

• from starting points $(0,0)$, $(0,4)$, $(4,0)$, Newton's method converges to point 1 (the minimum)
• from starting points $(4,4)$, $(2,2)$, it converges to point 2 (the local minimum)

Nonlinear least-squares 14-10

convergence from starting point $x^{(0)} = (1.5, 4)$

[figures: $\|x^{(k)} - x^\star\|$ versus iteration $k$, and the iterates $x^{(0)}, x^{(1)}, \ldots$ on the contour plot of $g(u,v)$]

• converges to the minimum at $(1.18, 0.82)$
• the 2nd and 3rd iterations use the negative gradient direction; the other iterations use the Newton direction

Nonlinear least-squares 14-11

Gauss-Newton method for nonlinear least-squares

a simpler alternative to Newton's method for minimizing $g(x) = \sum_{i=1}^m r_i(x)^2$

• start at some initial guess $x^{(0)}$
• at iteration $k$, linearize $r_i(x)$ around the current guess $x^{(k)}$:

$r_i(x) \approx r_i(x^{(k)}) + \nabla r_i(x^{(k)})^T (x - x^{(k)})$

• the new guess $x^{(k+1)}$ is the minimizer of

$\sum_{i=1}^m \left( r_i(x^{(k)}) + \nabla r_i(x^{(k)})^T (x - x^{(k)}) \right)^2$

i.e., of the sum of the squares of the linearized residuals

Nonlinear least-squares 14-12

to find $x^{(k+1)}$ from $x^{(k)}$, solve the linear least-squares problem

minimize $\|A^{(k)} x - b^{(k)}\|^2$

with

$A^{(k)} = \begin{bmatrix} \nabla r_1(x^{(k)})^T \\ \nabla r_2(x^{(k)})^T \\ \vdots \\ \nabla r_m(x^{(k)})^T \end{bmatrix}, \qquad b^{(k)} = \begin{bmatrix} \nabla r_1(x^{(k)})^T x^{(k)} - r_1(x^{(k)}) \\ \nabla r_2(x^{(k)})^T x^{(k)} - r_2(x^{(k)}) \\ \vdots \\ \nabla r_m(x^{(k)})^T x^{(k)} - r_m(x^{(k)}) \end{bmatrix}$

• advantage (over Newton's method): no second derivatives of $r_i$ are needed
• disadvantage: convergence is slower

Nonlinear least-squares 14-13

Summary: Gauss-Newton method

given an initial $x$ and a tolerance $\epsilon > 0$

repeat

1. evaluate $r_i(x)$ and $\nabla r_i(x)$ for $i = 1, \ldots, m$, and calculate

$r := \begin{bmatrix} r_1(x) \\ r_2(x) \\ \vdots \\ r_m(x) \end{bmatrix}, \qquad A := \begin{bmatrix} \nabla r_1(x)^T \\ \nabla r_2(x)^T \\ \vdots \\ \nabla r_m(x)^T \end{bmatrix}, \qquad b := Ax - r$

2. if $\|2A^T r\| \le \epsilon$, return $x$

3. $x := (A^T A)^{-1} A^T b$

until the maximum number of iterations is exceeded

note: in step 2, $2A^T r = \nabla g(x)$

Nonlinear least-squares 14-14
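
A compact Python rendering of this summary (a sketch under the assumption that the callables r and jac return $r(x)$ and the matrix $A$ of stacked gradients; the function and parameter names are mine). Step 3 solves exactly the linear least-squares problem of pages 14-12 and 14-13.

```python
import numpy as np

def gauss_newton(r, jac, x0, tol=1e-8, max_iter=100):
    """Gauss-Newton iteration for minimizing g(x) = ||r(x)||^2.

    r   -- function returning the residual vector r(x), shape (m,)
    jac -- function returning A with rows grad r_i(x)^T, shape (m, n)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        res, A = r(x), jac(x)                 # step 1: residuals and gradients
        if np.linalg.norm(2 * A.T @ res) <= tol:
            return x                          # step 2: 2 A^T r = grad g(x) is small
        b = A @ x - res                       # step 3: solve the linear LS problem
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x                                  # maximum number of iterations exceeded
```

It can be applied, for example, to the navigation residuals of page 14-6 once the corresponding Jacobian is supplied, whose $i$-th row is $\big(u - p_i,\; v - q_i\big) / \sqrt{(u - p_i)^2 + (v - q_i)^2}$.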

Interpretation as modified Newton method

(notation: $x = x^{(k)}$, $x^+ = x^{(k+1)}$)

in the Gauss-Newton method (from the normal equations of the LS problem on page 14-13),

$x^+ = \left( \sum_{i=1}^m \nabla r_i(x) \nabla r_i(x)^T \right)^{-1} \left( \sum_{i=1}^m \nabla r_i(x) \left( \nabla r_i(x)^T x - r_i(x) \right) \right) = x - \left( \sum_{i=1}^m \nabla r_i(x) \nabla r_i(x)^T \right)^{-1} \left( \sum_{i=1}^m r_i(x) \nabla r_i(x) \right)$

interpretation: take a unit step in the direction $v = -H^{-1} \nabla g(x)$, where

$H = 2 \sum_{i=1}^m \nabla r_i(x) \nabla r_i(x)^T$

Nonlinear least-squares 14-15
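
A quick numerical check of this identity, using small random stand-ins for the Jacobian and residual (purely illustrative): the Gauss-Newton update computed from the normal equations coincides with $x - H^{-1} \nabla g(x)$.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n_vars = 6, 3
J = rng.normal(size=(m, n_vars))   # stand-in for the Jacobian (rows grad r_i(x)^T)
res = rng.normal(size=m)           # stand-in for r(x)
x = rng.normal(size=n_vars)

# Gauss-Newton update from the normal equations of the linearized LS problem
x_plus_ls = np.linalg.solve(J.T @ J, J.T @ (J @ x - res))

# The same update written as a modified Newton step x - H^{-1} grad g(x)
H = 2 * J.T @ J
grad = 2 * J.T @ res
x_plus_newton = x - np.linalg.solve(H, grad)

print(np.allclose(x_plus_ls, x_plus_newton))   # True
```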

compare with the Newton direction at $x$:

$v = -\nabla^2 g(x)^{-1} \nabla g(x)$

where (from page 14-8)

$\nabla^2 g(x) = 2 \sum_{i=1}^m \left( \nabla r_i(x) \nabla r_i(x)^T + r_i(x)\, \nabla^2 r_i(x) \right)$

interpretation of the Gauss-Newton method: replace $\nabla^2 g(x)$ in Newton's method by

$H = 2 \sum_{i=1}^m \nabla r_i(x) \nabla r_i(x)^T$

• $H \approx \nabla^2 g(x)$ if the residuals $r_i(x)$ are small
• advantage: no need to evaluate $\nabla^2 r_i(x)$
• $H$ is always positive semidefinite

Nonlinear least-squares 14-16

Gauss-Newton method with backtracking

given an initial $x$, a tolerance $\epsilon > 0$, and a parameter $\alpha \in (0, 1/2)$

repeat

1. evaluate $r_i(x)$ and $\nabla r_i(x)$ for $i = 1, \ldots, m$, and calculate

$r := \begin{bmatrix} r_1(x) \\ \vdots \\ r_m(x) \end{bmatrix}, \qquad A := \begin{bmatrix} \nabla r_1(x)^T \\ \vdots \\ \nabla r_m(x)^T \end{bmatrix}$

2. if $\|2A^T r\| \le \epsilon$, return $x$

3. $v := -(A^T A)^{-1} A^T r$

4. $t := 1$; while $\sum_{i=1}^m r_i(x + tv)^2 > \|r\|^2 + \alpha\, (2 r^T A v)\, t$, set $t := t/2$

5. $x := x + tv$

until the maximum number of iterations is exceeded

Nonlinear least-squares 14-17
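
A sketch of the backtracking variant in the same style as the earlier routine (again assuming user-supplied r and jac; the default value of $\alpha$ is an arbitrary choice within the allowed interval).

```python
import numpy as np

def gauss_newton_backtracking(r, jac, x0, tol=1e-8, alpha=0.01, max_iter=100):
    """Gauss-Newton with a backtracking line search for g(x) = ||r(x)||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        res, A = r(x), jac(x)                          # step 1
        if np.linalg.norm(2 * A.T @ res) <= tol:       # step 2
            return x
        v = np.linalg.lstsq(A, -res, rcond=None)[0]    # step 3: v = -(A^T A)^{-1} A^T r
        t = 1.0                                        # step 4: halve t until the
        while np.sum(r(x + t * v) ** 2) > res @ res + alpha * (2 * res @ A @ v) * t:
            t /= 2                                     # sufficient-decrease test passes
        x = x + t * v                                  # step 5
    return x                                           # maximum iterations exceeded
```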

example (same problem as page 14-6)

[figures: $\|x^{(k)} - x^\star\|$ versus iteration $k$, and the iterates $x^{(0)}, x^{(1)}, \ldots$ on the contour plot of $g(u,v)$]

local convergence is slower than with Newton's method

Nonlinear least-squares 14-18