8. Linear least-squares


8. Linear least-squares

EE103 (Fall 2011-12)

- definition
- examples and applications
- solution of a least-squares problem, normal equations

Linear least-squares 8-1

Definition

overdetermined linear equations

    Ax = b    (A is m x n with m > n)

if b is not in range(A), we cannot solve for x

least-squares formulation

    minimize  \|Ax - b\| = \left( \sum_{i=1}^{m} \Big( \sum_{j=1}^{n} a_{ij} x_j - b_i \Big)^2 \right)^{1/2}

- r = Ax - b is called the residual or error
- x with smallest residual norm \|r\| is called the least-squares solution
- equivalent to minimizing \|Ax - b\|^2

Linear least-squares 8-2

Example

    A = \begin{bmatrix} 2 & 0 \\ -1 & 1 \\ 0 & 2 \end{bmatrix}, \qquad b = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}

least-squares solution

- minimize (2x_1 - 1)^2 + (-x_1 + x_2)^2 + (2x_2 + 1)^2
- to find the optimal x_1, x_2, set the derivatives w.r.t. x_1 and x_2 equal to zero:

      10x_1 - 2x_2 - 4 = 0, \qquad -2x_1 + 10x_2 + 4 = 0

- solution: x_1 = 1/3, x_2 = -1/3

(much more on practical algorithms for LS problems later)

Linear least-squares 8-3
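A quick numerical check of this example (a sketch of mine, not part of the slides), using NumPy to form and solve the normal equations:

```python
import numpy as np

# the 3 x 2 example above: residuals (2x1 - 1), (-x1 + x2), (2x2 + 1)
A = np.array([[2.0, 0.0],
              [-1.0, 1.0],
              [0.0, 2.0]])
b = np.array([1.0, 0.0, -1.0])

# normal equations (A^T A) x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)                           # approximately [ 0.3333, -0.3333]
print(np.linalg.norm(A @ x - b))   # residual norm at the minimizer
```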

[figures: plots of r_1^2 = (2x_1 - 1)^2, r_2^2 = (-x_1 + x_2)^2, r_3^2 = (2x_2 + 1)^2, and the sum r_1^2 + r_2^2 + r_3^2 as functions of (x_1, x_2)]

Linear least-squares 8-4

Outline

- definition
- examples and applications
- solution of a least-squares problem, normal equations

Data fitting

fit a function

    g(t) = x_1 g_1(t) + x_2 g_2(t) + \cdots + x_n g_n(t)

to data (t_1, y_1), ..., (t_m, y_m), i.e., choose coefficients x_1, ..., x_n so that

    g(t_1) \approx y_1, \quad g(t_2) \approx y_2, \quad \ldots, \quad g(t_m) \approx y_m

- g_i(t) : R -> R are given functions (basis functions)
- problem variables: the coefficients x_1, x_2, ..., x_n
- usually m >> n, hence no exact solution with g(t_i) = y_i for all i
- applications: developing a simple, approximate model of observed data

Linear least-squares 8-5

Least-squares data fitting

compute x by minimizing

    \sum_{i=1}^{m} (g(t_i) - y_i)^2 = \sum_{i=1}^{m} (x_1 g_1(t_i) + x_2 g_2(t_i) + \cdots + x_n g_n(t_i) - y_i)^2

in matrix notation: minimize \|Ax - b\|^2 where

    A = \begin{bmatrix} g_1(t_1) & g_2(t_1) & g_3(t_1) & \cdots & g_n(t_1) \\ g_1(t_2) & g_2(t_2) & g_3(t_2) & \cdots & g_n(t_2) \\ \vdots & \vdots & \vdots & & \vdots \\ g_1(t_m) & g_2(t_m) & g_3(t_m) & \cdots & g_n(t_m) \end{bmatrix}, \qquad b = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}

Linear least-squares 8-6
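As a sketch of how the matrix A might be assembled in code (my illustration; the basis functions below are arbitrary, not from the slides):

```python
import numpy as np

def design_matrix(basis, t):
    """A[i, j] = g_j(t_i): one column per basis function, one row per data point."""
    return np.column_stack([g(t) for g in basis])

# illustrative basis: constant, linear, and sine terms
basis = [lambda t: np.ones_like(t), lambda t: t, lambda t: np.sin(t)]

t = np.linspace(0.0, 4.0, 30)                              # data points t_1, ..., t_m
y = 1.0 + 0.5 * t + 0.2 * np.sin(t) + 0.05 * np.random.randn(t.size)

A = design_matrix(basis, t)
x, *_ = np.linalg.lstsq(A, y, rcond=None)                  # minimize ||Ax - b||^2
print(x)                                                   # close to (1.0, 0.5, 0.2)
```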

Example: data fitting with polynomials

    g(t) = x_1 + x_2 t + x_3 t^2 + \cdots + x_n t^{n-1}

basis functions are g_k(t) = t^{k-1}, k = 1, ..., n

    A = \begin{bmatrix} 1 & t_1 & t_1^2 & \cdots & t_1^{n-1} \\ 1 & t_2 & t_2^2 & \cdots & t_2^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & t_m & t_m^2 & \cdots & t_m^{n-1} \end{bmatrix}, \qquad b = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}

- interpolation (m = n): can satisfy g(t_i) = y_i exactly by solving Ax = b
- approximation (m > n): make the error small by minimizing \|Ax - b\|

Linear least-squares 8-7

example: fit a polynomial to f(t) = 1/(1 + 25t^2) on [-1, 1]

- pick m = n points t_i in [-1, 1], and calculate y_i = 1/(1 + 25 t_i^2)
- interpolate by solving Ax = b

[figures: interpolating polynomials for n = 5 and for a larger n; dashed line: f; solid line: polynomial g; circles: the points (t_i, y_i)]

increasing n does not improve the overall quality of the fit

Linear least-squares 8-8

same example by approximation

- pick m = 50 points t_i in [-1, 1]
- fit the polynomial by minimizing \|Ax - b\|

[figures: least-squares polynomial fits for n = 5 and for a larger n; dashed line: f; solid line: polynomial g; circles: the points (t_i, y_i)]

much better fit overall

Linear least-squares 8-9
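The two experiments (interpolation with m = n versus least-squares approximation with m > n) can be reproduced with a short script; a sketch assuming the m = 50, n = 5 values read off the slides above:

```python
import numpy as np

f = lambda t: 1.0 / (1.0 + 25.0 * t**2)
n = 5                                            # number of polynomial coefficients

# interpolation: m = n points, solve Ax = b exactly
t_i = np.linspace(-1.0, 1.0, n)
x_int = np.linalg.solve(np.vander(t_i, n, increasing=True), f(t_i))

# approximation: m = 50 points, minimize ||Ax - b||
t_a = np.linspace(-1.0, 1.0, 50)
x_app, *_ = np.linalg.lstsq(np.vander(t_a, n, increasing=True), f(t_a), rcond=None)

# compare the worst-case error of the two polynomials on a dense grid
t = np.linspace(-1.0, 1.0, 1000)
V = np.vander(t, n, increasing=True)             # columns 1, t, t^2, ..., t^(n-1)
print(np.max(np.abs(V @ x_int - f(t))), np.max(np.abs(V @ x_app - f(t))))
```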

Least-squares estimation

    y = Ax + w

- x is what we want to estimate or reconstruct
- y is our measurement(s)
- w is an unknown noise or measurement error (assumed small)
- the ith row of A characterizes the ith sensor or ith measurement

least-squares estimation

choose as estimate the vector x̂ that minimizes \|Ax̂ - y\|

i.e., minimize the deviation between what we actually observed (y), and what we would observe if x = x̂ and there were no noise (w = 0)

Linear least-squares 8-10
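A minimal simulation of this setup (my sketch; the sizes and noise level are arbitrary): generate y = Ax + w with small noise and recover the estimate x̂ by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 4                         # many measurements, few unknowns
A = rng.standard_normal((m, n))       # row i characterizes the ith measurement
x_true = np.array([1.0, -2.0, 0.5, 3.0])
w = 0.01 * rng.standard_normal(m)     # small, unknown measurement noise
y = A @ x_true + w

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)   # minimize ||A xhat - y||
print(np.linalg.norm(x_hat - x_true))           # small estimation error
```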

Navigation by range measurements

find position (u, v) in a plane from distances to beacons at positions (p_i, q_i)

[figure: four beacons at (p_1, q_1), ..., (p_4, q_4), the unknown position (u, v), and the ranges ρ_1, ..., ρ_4]

four nonlinear equations in two variables u, v:

    \sqrt{(u - p_i)^2 + (v - q_i)^2} = \rho_i \qquad \text{for } i = 1, 2, 3, 4

ρ_i is the measured distance from the unknown position (u, v) to beacon i

Linear least-squares 8-11

linearized distance function: assume

    u = u_0 + \Delta u, \qquad v = v_0 + \Delta v

where

- u_0, v_0 are known (e.g., the position a short time ago)
- Δu, Δv are small (compared to the ρ_i's)

    \sqrt{(u_0 + \Delta u - p_i)^2 + (v_0 + \Delta v - q_i)^2} \approx \sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2} + \frac{(u_0 - p_i)\Delta u + (v_0 - q_i)\Delta v}{\sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2}}

gives four linear equations in the variables Δu, Δv:

    \frac{(u_0 - p_i)\Delta u + (v_0 - q_i)\Delta v}{\sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2}} \approx \rho_i - \sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2} \qquad \text{for } i = 1, 2, 3, 4

Linear least-squares 8-12

linearized equations: Ax ≈ b, where x = (Δu, Δv) and A is 4 x 2 with

    a_{i1} = \frac{u_0 - p_i}{\sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2}}, \qquad a_{i2} = \frac{v_0 - q_i}{\sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2}}, \qquad b_i = \rho_i - \sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2}

- due to linearization and measurement error, we do not expect an exact solution (Ax = b)
- we can try to find Δu and Δv that almost satisfy the equations

Linear least-squares 8-13

numerical example

- beacons at positions (10, 0), (-10, 2), (3, 9), (10, 10)
- measured distances ρ = (8.22, 11.9, 7.08, 11.33)
- (unknown) actual position is (2, 2)

linearized range equations (linearized around (u_0, v_0) = (0, 0)):

    A \begin{bmatrix} \Delta u \\ \Delta v \end{bmatrix} \approx b

with A and b evaluated from the expressions on page 8-13

least-squares solution: (Δu, Δv) = (1.97, 1.9) (norm of the position error is 0.1)

Linear least-squares 8-14
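The same numbers can be fed through the formulas of page 8-13 in a few lines (a sketch of mine; the beacon coordinates and ranges are the reconstructed values above, so treat them as approximate):

```python
import numpy as np

beacons = np.array([[10.0, 0.0], [-10.0, 2.0], [3.0, 9.0], [10.0, 10.0]])
rho = np.array([8.22, 11.9, 7.08, 11.33])     # measured distances
u0, v0 = 0.0, 0.0                             # linearization point

d = np.hypot(u0 - beacons[:, 0], v0 - beacons[:, 1])          # distances from (u0, v0)
A = np.column_stack([(u0 - beacons[:, 0]) / d, (v0 - beacons[:, 1]) / d])
b = rho - d

delta, *_ = np.linalg.lstsq(A, b, rcond=None)
print(u0 + delta[0], v0 + delta[1])           # estimated position, close to (2, 2)
```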

Least-squares system identification

measure input u(t) and output y(t) for t = 0, ..., N of an unknown system

    u(t) -> [unknown system] -> y(t)

example (N = 70):

[figures: plots of the measured input u(t) and output y(t) versus t]

system identification problem: find a reasonable model for the system based on the measured I/O data u, y

Linear least-squares 8-15

moving average model

    y_{\mathrm{model}}(t) = h_0 u(t) + h_1 u(t-1) + h_2 u(t-2) + \cdots + h_n u(t-n)

where y_model(t) is the model output

- a simple and widely used model
- the predicted output is a linear combination of the current and n previous inputs
- h_0, ..., h_n are the parameters of the model
- called a moving average (MA) model with n delays

least-squares identification: choose the model that minimizes the error

    E = \left( \sum_{t=n}^{N} (y_{\mathrm{model}}(t) - y(t))^2 \right)^{1/2}

Linear least-squares 8-16

formulation as a linear least-squares problem:

    E = \left( \sum_{t=n}^{N} (h_0 u(t) + h_1 u(t-1) + \cdots + h_n u(t-n) - y(t))^2 \right)^{1/2} = \|Ax - b\|

    A = \begin{bmatrix} u(n) & u(n-1) & u(n-2) & \cdots & u(0) \\ u(n+1) & u(n) & u(n-1) & \cdots & u(1) \\ u(n+2) & u(n+1) & u(n) & \cdots & u(2) \\ \vdots & \vdots & \vdots & & \vdots \\ u(N) & u(N-1) & u(N-2) & \cdots & u(N-n) \end{bmatrix}, \qquad x = \begin{bmatrix} h_0 \\ h_1 \\ h_2 \\ \vdots \\ h_n \end{bmatrix}, \qquad b = \begin{bmatrix} y(n) \\ y(n+1) \\ y(n+2) \\ \vdots \\ y(N) \end{bmatrix}

Linear least-squares 8-17
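A sketch of the identification step in code (mine, not from the course; the synthetic I/O data below merely stand in for the measured data of page 8-15):

```python
import numpy as np

def fit_ma(u, y, n):
    """Least-squares fit of an MA model with n delays to I/O data u(t), y(t), t = 0..N."""
    N = len(u) - 1
    A = np.column_stack([u[n - k : N + 1 - k] for k in range(n + 1)])   # rows t = n..N
    return np.linalg.lstsq(A, y[n : N + 1], rcond=None)[0]

# synthetic data from a known MA system plus a little noise (for illustration only)
rng = np.random.default_rng(1)
u = rng.standard_normal(71)                             # t = 0, ..., 70
h_true = np.array([0.3, 0.5, -0.2, 0.1])
y = np.convolve(u, h_true)[:u.size] + 0.01 * rng.standard_normal(u.size)

print(fit_ma(u, y, n=3))                                # close to h_true
```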

example (I/O data of page 8-15) with n = 7: least-squares solution is

    h_0 = 0.24, h_1 = 0.2819, h_2 = 0.4176, h_3 = 0.3536, h_4 = 0.2425, h_5 = 0.4873, h_6 = 0.284, h_7 = [illegible]

[figure: actual output y(t) (solid) and model output y_model(t) (dashed) versus t]

Linear least-squares 8-18

model order selection: how large should n be?

[figure: relative error E/\|y\| versus model order n]

- suggests using the largest possible n for the smallest error
- much more important question: how good is the model at predicting new data (i.e., data not used to calculate the model)?

Linear least-squares 8-19

model validation: test the model on a new data set (from the same system)

[figures: validation input ū(t) and output ȳ(t) versus t; relative prediction error versus n for the validation data and the modeling data]

- for n too large the predictive ability of the model becomes worse!
- validation data suggest n = 10

Linear least-squares 8-20
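The validation comparison sketched above might be coded as follows (my illustration; the data are synthetic, and the candidate orders n are arbitrary):

```python
import numpy as np

def ma_design(u, n):
    """Matrix of shifted inputs u(t), u(t-1), ..., u(t-n), rows t = n..N."""
    N = len(u) - 1
    return np.column_stack([u[n - k : N + 1 - k] for k in range(n + 1)])

def rel_error(u, y, h):
    n = len(h) - 1
    r = ma_design(u, n) @ h - y[n:]
    return np.linalg.norm(r) / np.linalg.norm(y[n:])

rng = np.random.default_rng(2)
sys = lambda uu: np.convolve(uu, [0.3, 0.5, -0.2, 0.1])[:uu.size] + 0.05 * rng.standard_normal(uu.size)
u, u_val = rng.standard_normal(71), rng.standard_normal(71)     # modeling and validation inputs
y, y_val = sys(u), sys(u_val)

for n in (1, 3, 5, 10, 20):
    h, *_ = np.linalg.lstsq(ma_design(u, n), y[n:], rcond=None)
    print(n, rel_error(u, y, h), rel_error(u_val, y_val, h))    # modeling vs. validation error
```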

for n = 50 the actual and predicted outputs on the system identification and model validation data are:

[figures: left, the I/O set used to compute the model (solid: y(t), dashed: y_model(t)); right, the model validation I/O set (solid: ȳ(t), dashed: ȳ_model(t))]

loss of predictive ability when n is too large is called overfitting or overmodeling

Linear least-squares 8-21

Outline

- definition
- examples and applications
- solution of a least-squares problem, normal equations

Geometric interpretation of a LS problem

    minimize \|Ax - b\|^2

A is m x n with columns a_1, ..., a_n

- \|Ax - b\| is the distance of b to the vector Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n
- the solution x_ls gives the linear combination of the columns of A closest to b
- A x_ls is the projection of b on the range of A

Linear least-squares 8-22
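The projection property is easy to verify numerically (a small sketch of mine): the residual A x_ls - b is orthogonal to every column of A.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3))       # arbitrary tall matrix with independent columns
b = rng.standard_normal(6)

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(A.T @ (A @ x_ls - b))           # numerically zero: A x_ls is the projection of b on range(A)
```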

example

A has columns a_1 and a_2; b is the vector to be approximated

[figure: the vectors a_1, a_2, and b, with A x_ls = 2a_1 + a_2 the projection of b onto span{a_1, a_2}]

least-squares solution

    x_{\mathrm{ls}} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \qquad A x_{\mathrm{ls}} = 2a_1 + a_2

Linear least-squares 8-23

The solution of a least-squares problem

if A is left-invertible, then

    x_{\mathrm{ls}} = (A^T A)^{-1} A^T b

is the unique solution of the least-squares problem

    minimize \|Ax - b\|^2

- in other words, if x ≠ x_ls, then \|Ax - b\|^2 > \|A x_ls - b\|^2
- recall from page 4-25 that A^T A is positive definite and that (A^T A)^{-1} A^T is a left-inverse of A

Linear least-squares 8-24

proof: we show that \|Ax - b\|^2 > \|A x_ls - b\|^2 for x ≠ x_ls:

    \|Ax - b\|^2 = \|A(x - x_{\mathrm{ls}}) + (A x_{\mathrm{ls}} - b)\|^2 = \|A(x - x_{\mathrm{ls}})\|^2 + \|A x_{\mathrm{ls}} - b\|^2 > \|A x_{\mathrm{ls}} - b\|^2

- the 2nd step follows from A(x - x_ls) ⊥ (A x_ls - b):

      (A(x - x_{\mathrm{ls}}))^T (A x_{\mathrm{ls}} - b) = (x - x_{\mathrm{ls}})^T (A^T A x_{\mathrm{ls}} - A^T b) = 0

- the 3rd step follows from the zero nullspace property of A:

      x ≠ x_{\mathrm{ls}} \implies A(x - x_{\mathrm{ls}}) ≠ 0

Linear least-squares 8-25

The normal equations

    (A^T A) x = A^T b

if A is left-invertible:

- the least-squares solution can be found by solving the normal equations
- n equations in n variables with a positive definite coefficient matrix
- can be solved using the Cholesky factorization

Linear least-squares 8-26
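A sketch of this recipe (assuming SciPy is available for the Cholesky solve): form the normal equations, factor A^T A, and compare with a generic least-squares routine.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(4)
A = rng.standard_normal((100, 5))     # left-invertible with probability one
b = rng.standard_normal(100)

# (A^T A) x = A^T b solved via the Cholesky factorization of A^T A
c_and_lower = cho_factor(A.T @ A)
x_chol = cho_solve(c_and_lower, A.T @ b)

x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_chol, x_ref))     # True
```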
