14. Nonlinear least-squares
Page 14-1: Nonlinear least-squares

EE103 (Fall)

- definition
- Newton's method
- Gauss-Newton method
Page 14-2: Nonlinear least-squares

$$ \text{minimize} \quad g(x) = \sum_{i=1}^m r_i(x)^2 = \|r(x)\|^2 $$

- $r_i$ is a nonlinear function of the $n$-vector of variables $x$
- $r(x) = (r_1(x), r_2(x), \dots, r_m(x))$
- reduces to linear least-squares if $r(x) = Ax - b$
- $g(x) = \|r(x)\|^2$ may have multiple local minima; finding the global minimum is usually very hard
- this is a nonlinear minimization problem; we can apply Newton's method
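To make the definition concrete, here is a minimal sketch (using a made-up two-parameter exponential-fit residual, not one of the lecture's examples) of a residual map $r$ and the objective $g(x) = \|r(x)\|^2$:

```python
import numpy as np

# Hypothetical data for a toy model y ~ x2 * exp(x1 * t); not from the lecture.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.6, 7.1, 20.0])

def r(x):
    # residual vector r(x) = (r_1(x), ..., r_m(x)); each r_i is nonlinear in x
    return x[1] * np.exp(x[0] * t) - y

def g(x):
    # nonlinear least-squares objective g(x) = ||r(x)||^2
    return np.sum(r(x) ** 2)

print(g(np.array([1.0, 1.0])))  # evaluate the objective at a trial point
```

General-purpose solvers exist for exactly this problem class (e.g. `scipy.optimize.least_squares`); the rest of the chapter builds the underlying iterations by hand.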
Page 14-3: Interpretation as overdetermined nonlinear equations

$m$ nonlinear equations in $n$ variables:

$$ r_1(x_1, x_2, \dots, x_n) = 0, \quad r_2(x_1, x_2, \dots, x_n) = 0, \quad \dots, \quad r_m(x_1, x_2, \dots, x_n) = 0 $$

- usually there is no $x$ that satisfies $r(x) = 0$
- instead we can calculate the vector $x$ that minimizes $\|r(x)\|^2 = \sum_{i=1}^m r_i(x)^2$
Page 14-4: Inductor modeling example

50 nonlinear equations in 5 variables $x_1, \dots, x_5$:

$$ \exp(x_1)\, n_i^{x_2} w_i^{x_3} d_i^{x_4} D_i^{x_5} \approx L_i, \qquad i = 1, \dots, 50 $$

method 1 (exercise 8.4): suppose we are free to choose the error criterion; if we choose to minimize the sum of squared errors on a logarithmic scale,

$$ \text{minimize} \quad \sum_{i=1}^{50} \left( \log\!\left( \exp(x_1)\, n_i^{x_2} w_i^{x_3} d_i^{x_4} D_i^{x_5} \right) - \log L_i \right)^2, $$

we obtain a linear least-squares problem: minimize $\|Ax - b\|^2$ where

$$ A = \begin{bmatrix} 1 & \log n_1 & \log w_1 & \log d_1 & \log D_1 \\ 1 & \log n_2 & \log w_2 & \log d_2 & \log D_2 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & \log n_{50} & \log w_{50} & \log d_{50} & \log D_{50} \end{bmatrix}, \qquad b = \begin{bmatrix} \log L_1 \\ \log L_2 \\ \vdots \\ \log L_{50} \end{bmatrix} $$
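A sketch of method 1 in NumPy; the arrays `n`, `w`, `d`, `D`, `L` stand for the 50 data points, which are not reproduced in the notes, so random stand-in values are generated to make the snippet runnable:

```python
import numpy as np

m = 50
rng = np.random.default_rng(0)
# stand-in data; in the actual problem these are the measured inductor parameters
n, w, d, D = (rng.uniform(1.0, 10.0, m) for _ in range(4))
L = rng.uniform(1.0, 10.0, m)

# columns of A: 1, log n_i, log w_i, log d_i, log D_i
A = np.column_stack([np.ones(m), np.log(n), np.log(w), np.log(d), np.log(D)])
b = np.log(L)

# solve the linear least-squares problem: minimize ||Ax - b||^2
x_log, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_log)
```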
Page 14-5: Inductor modeling example (continued)

method 2: minimize the sum of squared errors on a linear scale

$$ \text{minimize} \quad \sum_{i=1}^{50} \left( \exp(x_1)\, n_i^{x_2} w_i^{x_3} d_i^{x_4} D_i^{x_5} - L_i \right)^2 $$

this is a nonlinear least-squares problem: minimize $\sum_{i=1}^{50} r_i(x)^2$ where

$$ r_i(x) = \exp(x_1)\, n_i^{x_2} w_i^{x_3} d_i^{x_4} D_i^{x_5} - L_i = \exp(x_1 + x_2 \log n_i + x_3 \log w_i + x_4 \log d_i + x_5 \log D_i) - L_i $$

- much harder than linear least-squares (may have multiple local minima)
- requires an iterative method
- can use method 1 to find a starting point
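For method 2, the residual vector and its Jacobian (which the iterative methods below will need) follow directly from the formula for $r_i(x)$; this sketch reuses the stand-in arrays `n`, `w`, `d`, `D`, `L` from the previous snippet:

```python
import numpy as np

def res_inductor(x):
    # r_i(x) = exp(x_1 + x_2 log n_i + x_3 log w_i + x_4 log d_i + x_5 log D_i) - L_i
    z = x[0] + x[1] * np.log(n) + x[2] * np.log(w) + x[3] * np.log(d) + x[4] * np.log(D)
    return np.exp(z) - L

def jac_inductor(x):
    # row i of the Jacobian is grad r_i(x)^T = exp(z_i) * (1, log n_i, log w_i, log d_i, log D_i)
    z = x[0] + x[1] * np.log(n) + x[2] * np.log(w) + x[3] * np.log(d) + x[4] * np.log(D)
    e = np.exp(z)
    return np.column_stack([e, e * np.log(n), e * np.log(w), e * np.log(d), e * np.log(D)])
```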
Page 14-6: Navigation from range measurements

estimate a position $(u, v)$ from measured distances to $m$ beacons at known locations $(p_i, q_i)$

measured distances:

$$ \rho_i = \sqrt{(u - p_i)^2 + (v - q_i)^2} + w_i, \qquad i = 1, \dots, m $$

where $w_i$ is the range error, unknown but small

nonlinear least-squares estimate: choose the estimates $(u, v)$ by minimizing

$$ g(u, v) = \sum_{i=1}^m r_i(u, v)^2 = \sum_{i=1}^m \left( \sqrt{(u - p_i)^2 + (v - q_i)^2} - \rho_i \right)^2 $$
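A sketch of the navigation residuals; the beacon positions and measured ranges below are invented stand-ins (the slide's actual values are not reproduced):

```python
import numpy as np

# stand-in beacon positions (p_i, q_i) and measured ranges rho_i
p = np.array([0.0, 4.0, 4.0, 0.0, 2.0])
q = np.array([0.0, 0.0, 4.0, 4.0, 3.5])
rho = np.array([1.4, 3.2, 4.3, 3.1, 2.7])

def res_nav(x):
    # r_i(u, v) = sqrt((u - p_i)^2 + (v - q_i)^2) - rho_i, with x = (u, v)
    u, v = x
    return np.sqrt((u - p) ** 2 + (v - q) ** 2) - rho

def g_nav(x):
    return np.sum(res_nav(x) ** 2)

print(g_nav(np.array([1.0, 1.0])))
```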
Page 14-7: Example

- correct position is $(1, 1)$; $m = 5$ beacons at the positions shown in the figure; range measurements accurate to $\pm 0.2$

[figures: graph of $g(u, v)$ and contour lines of $g(u, v)$ over the $(u, v)$-plane]

- (global) minimum at $(1.18, 0.82)$
- local minimum at $(2.99, 2.12)$; local maximum at $(1.94, 1.87)$
Page 14-8: Newton's method for nonlinear least-squares

apply Newton's method to $g(x) = \|r(x)\|^2 = \sum_{i=1}^m r_i(x)^2$

first and second derivatives of $g$:

$$ \frac{\partial g(x)}{\partial x_k} = 2 \sum_{i=1}^m r_i(x) \frac{\partial r_i(x)}{\partial x_k}, \qquad \frac{\partial^2 g(x)}{\partial x_j \partial x_k} = 2 \sum_{i=1}^m \left( r_i(x) \frac{\partial^2 r_i(x)}{\partial x_j \partial x_k} + \frac{\partial r_i(x)}{\partial x_j} \frac{\partial r_i(x)}{\partial x_k} \right) $$

i.e., the gradient and Hessian of $g$ are

$$ \nabla g(x) = 2 \sum_{i=1}^m r_i(x) \nabla r_i(x), \qquad \nabla^2 g(x) = 2 \sum_{i=1}^m \left( r_i(x) \nabla^2 r_i(x) + \nabla r_i(x) \nabla r_i(x)^T \right) $$
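These formulas translate directly into code; a sketch, assuming the caller supplies the residual values, the Jacobian (rows $\nabla r_i(x)^T$), and the individual residual Hessians:

```python
import numpy as np

def grad_hess(r, J, Hr):
    """Gradient and Hessian of g(x) = ||r(x)||^2.

    r  : (m,)      residual values r_i(x)
    J  : (m, n)    Jacobian; row i is grad r_i(x)^T
    Hr : (m, n, n) Hessians of the residuals; Hr[i] = hess r_i(x)
    """
    grad = 2.0 * (J.T @ r)                             # 2 * sum_i r_i grad r_i
    hess = 2.0 * (J.T @ J + np.einsum('i,ijk->jk', r, Hr))
    return grad, hess

# a (damped) Newton step would then be x - t * np.linalg.solve(hess, grad)
```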
Page 14-9: Example (inductor problem of page 14-4)

method 1 (linear least-squares):

$$ x_1 = 7.25, \quad x_2 = 1.38, \quad x_3 = 0.48, \quad x_4 = 0.28, \quad x_5 = 1.21 $$

method 2: apply Newton's method to

$$ g(x) = \sum_{i=1}^{50} \left( \exp(x_1 + x_2 \log n_i + x_3 \log w_i + x_4 \log d_i + x_5 \log D_i) - L_i \right)^2 $$

using the linear least-squares solution as starting point; converges in four iterations to

$$ x_1 = 7.14, \quad x_2 = 1.32, \quad x_3 = 0.53, \quad x_4 = 0.22, \quad x_5 = 1.27 $$
Page 14-10: Example (navigation problem of page 14-6)

[figure: contour lines of $g(u, v)$ with the two stationary points marked 1 and 2]

- from starting points $(0, 0)$, $(0, 4)$, $(4, 0)$, Newton's method converges to point 1 (the minimum)
- from starting points $(4, 4)$, $(2, 2)$, it converges to point 2 (the local minimum)
Page 14-11: Convergence

from starting point $x^{(0)} = (1.5, 4)$

[figures: iterates $x^{(0)}, x^{(1)}, \dots$ in the $(u, v)$-plane; error $\|x^{(k)} - x^\star\|$ versus iteration $k$]

- converges to the minimum at $(1.18, 0.82)$
- the 2nd and 3rd iterations use the negative gradient direction; the other iterations use the Newton direction
Page 14-12: Gauss-Newton method for nonlinear least-squares

a simpler alternative to Newton's method for minimizing $g(x) = \sum_{i=1}^m r_i(x)^2$

- start at some initial guess $x^{(0)}$
- at iteration $k$, linearize $r_i(x)$ around the current guess $x^{(k)}$:

$$ r_i(x) \approx r_i(x^{(k)}) + \nabla r_i(x^{(k)})^T (x - x^{(k)}) $$

- the new guess $x^{(k+1)}$ is the minimizer of

$$ \sum_{i=1}^m \left( r_i(x^{(k)}) + \nabla r_i(x^{(k)})^T (x - x^{(k)}) \right)^2, $$

  i.e., of the sum of the squares of the linearized residuals
Page 14-13: Gauss-Newton method (continued)

to find $x^{(k+1)}$ from $x^{(k)}$, solve a linear least-squares problem: minimize $\|A^{(k)} x - b^{(k)}\|^2$ with

$$ A^{(k)} = \begin{bmatrix} \nabla r_1(x^{(k)})^T \\ \nabla r_2(x^{(k)})^T \\ \vdots \\ \nabla r_m(x^{(k)})^T \end{bmatrix}, \qquad b^{(k)} = \begin{bmatrix} \nabla r_1(x^{(k)})^T x^{(k)} - r_1(x^{(k)}) \\ \nabla r_2(x^{(k)})^T x^{(k)} - r_2(x^{(k)}) \\ \vdots \\ \nabla r_m(x^{(k)})^T x^{(k)} - r_m(x^{(k)}) \end{bmatrix} $$

- advantage (over Newton's method): no second derivatives of $r_i$ are needed
- disadvantage: convergence is slower
Page 14-14: Summary: Gauss-Newton method

given initial $x$, tolerance $\epsilon > 0$

repeat
1. evaluate $r_i(x)$ and $\nabla r_i(x)$ for $i = 1, \dots, m$, and calculate
   $$ r := \begin{bmatrix} r_1(x) \\ r_2(x) \\ \vdots \\ r_m(x) \end{bmatrix}, \qquad A := \begin{bmatrix} \nabla r_1(x)^T \\ \nabla r_2(x)^T \\ \vdots \\ \nabla r_m(x)^T \end{bmatrix}, \qquad b := Ax - r $$
2. if $\|2A^T r\| \le \epsilon$, return $x$
3. $x := (A^T A)^{-1} A^T b$
until the maximum number of iterations is exceeded

in step 2, note that $2A^T r = \nabla g(x)$
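A direct transcription of this summary into Python; `res` and `jac` are callables returning $r(x)$ and the Jacobian $A$ (for example, `res_inductor` and `jac_inductor` from the page 14-5 sketch):

```python
import numpy as np

def gauss_newton(res, jac, x0, eps=1e-8, max_iter=100):
    # basic Gauss-Newton iteration, following the summary above;
    # no safeguards, so it can diverge (the backtracking variant on
    # page 14-17 addresses this)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = res(x)
        A = jac(x)
        if np.linalg.norm(2.0 * A.T @ r) <= eps:   # 2 A^T r = grad g(x)
            return x
        b = A @ x - r
        # x := argmin ||A y - b||^2 via the normal equations (A^T A) y = A^T b
        x = np.linalg.solve(A.T @ A, A.T @ b)
    return x  # maximum number of iterations exceeded
```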
Page 14-15: Interpretation as modified Newton method

(notation: $x = x^{(k)}$, $x^+ = x^{(k+1)}$)

in the Gauss-Newton method (from the normal equations of the least-squares problem on page 14-13),

$$ x^+ = \left( \sum_{i=1}^m \nabla r_i(x) \nabla r_i(x)^T \right)^{-1} \left( \sum_{i=1}^m \nabla r_i(x) \left( \nabla r_i(x)^T x - r_i(x) \right) \right) = x - \left( \sum_{i=1}^m \nabla r_i(x) \nabla r_i(x)^T \right)^{-1} \left( \sum_{i=1}^m r_i(x) \nabla r_i(x) \right) $$

interpretation: take a unit step in the direction $v = -H^{-1} \nabla g(x)$, where

$$ H = 2 \sum_{i=1}^m \nabla r_i(x) \nabla r_i(x)^T $$
Page 14-16: Interpretation as modified Newton method (continued)

compare with the Newton direction at $x$: $v = -\nabla^2 g(x)^{-1} \nabla g(x)$, where (from page 14-8)

$$ \nabla^2 g(x) = 2 \sum_{i=1}^m \left( \nabla r_i(x) \nabla r_i(x)^T + r_i(x) \nabla^2 r_i(x) \right) $$

interpretation of the GN method: replace $\nabla^2 g(x)$ in Newton's method by

$$ H = 2 \sum_{i=1}^m \nabla r_i(x) \nabla r_i(x)^T $$

- $H \approx \nabla^2 g(x)$ if the residuals $r_i(x)$ are small
- advantage: no need to evaluate $\nabla^2 r_i(x)$
- $H$ is always positive semidefinite
Page 14-17: Gauss-Newton method with backtracking

given initial $x$, tolerance $\epsilon > 0$, parameter $\alpha \in (0, 1/2)$

repeat
1. evaluate $r_i(x)$ and $\nabla r_i(x)$ for $i = 1, \dots, m$, and calculate
   $$ r := \begin{bmatrix} r_1(x) \\ \vdots \\ r_m(x) \end{bmatrix}, \qquad A := \begin{bmatrix} \nabla r_1(x)^T \\ \vdots \\ \nabla r_m(x)^T \end{bmatrix} $$
2. if $\|2A^T r\| \le \epsilon$, return $x$
3. $v := -(A^T A)^{-1} A^T r$
4. $t := 1$; while $\sum_{i=1}^m r_i(x + tv)^2 > \|r\|^2 + \alpha \left( 2 r^T A v \right) t$, set $t := t/2$
5. $x := x + tv$
until the maximum number of iterations is exceeded
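The same loop with the backtracking line search added; a sketch in the same conventions as the plain version on page 14-14:

```python
import numpy as np

def gauss_newton_bt(res, jac, x0, eps=1e-8, alpha=0.01, max_iter=100):
    # Gauss-Newton with backtracking; alpha must lie in (0, 1/2)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = res(x)
        A = jac(x)
        if np.linalg.norm(2.0 * A.T @ r) <= eps:        # 2 A^T r = grad g(x)
            return x
        v = -np.linalg.solve(A.T @ A, A.T @ r)          # Gauss-Newton direction
        t = 1.0
        # halve t until the sufficient-decrease condition holds;
        # grad g(x)^T v = 2 r^T A v, which is negative for a descent direction
        while np.sum(res(x + t * v) ** 2) > r @ r + alpha * (2.0 * r @ (A @ v)) * t:
            t /= 2.0
        x = x + t * v
    return x  # maximum number of iterations exceeded
```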
Page 14-18: Example (same problem as page 14-6)

[figures: iterates $x^{(0)}, x^{(1)}, \dots$ in the $(u, v)$-plane; error $\|x^{(k)} - x^\star\|$ versus iteration $k$]

local convergence is slower than Newton's method