8. Linear least-squares
8. Linear least-squares
EE103 (Fall 2011-12)

- definition
- examples and applications
- solution of a least-squares problem, normal equations

Linear least-squares 8-1
Definition

overdetermined linear equations

    Ax = b    (A is m × n with m > n)

if b is not in range(A), we cannot solve for x

least-squares formulation

\[
\text{minimize } \|Ax - b\| = \left( \sum_{i=1}^{m} \Big( \sum_{j=1}^{n} a_{ij} x_j - b_i \Big)^2 \right)^{1/2}
\]

- r = Ax - b is called the residual or error
- x with smallest residual norm ||r|| is called the least-squares solution
- equivalent to minimizing ||Ax - b||^2

Linear least-squares 8-2
Example

\[
A = \begin{bmatrix} 2 & 0 \\ -1 & 1 \\ 0 & 2 \end{bmatrix}, \qquad
b = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}
\]

least-squares solution

    minimize (2x_1 - 1)^2 + (-x_1 + x_2)^2 + (2x_2 + 1)^2

to find the optimal x_1, x_2, set the derivatives w.r.t. x_1 and x_2 equal to zero:

    10x_1 - 2x_2 - 4 = 0,    -2x_1 + 10x_2 + 4 = 0

solution: x_1 = 1/3, x_2 = -1/3

(much more on practical algorithms for LS problems later)

Linear least-squares 8-3
(figure: contour plots of r_1^2 = (2x_1 - 1)^2, r_2^2 = (-x_1 + x_2)^2, r_3^2 = (2x_2 + 1)^2, and the sum r_1^2 + r_2^2 + r_3^2, as functions of x_1 and x_2)

Linear least-squares 8-4
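As a quick numerical check (a minimal NumPy sketch, not part of the original slides), the example above can be solved with `numpy.linalg.lstsq` or via the normal equations; both recover x = (1/3, -1/3):

```python
import numpy as np

# the 3x2 example: rows give the residuals 2*x1 - 1, -x1 + x2, 2*x2 + 1
A = np.array([[2.0, 0.0],
              [-1.0, 1.0],
              [0.0, 2.0]])
b = np.array([1.0, 0.0, -1.0])

# least-squares solution via numpy (SVD-based solver)
x, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)                     # [ 0.3333... -0.3333...]

# same answer from the normal equations (A^T A) x = A^T b
x_ne = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(x, x_ne))  # True
```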
Outline

- definition
- examples and applications
- solution of a least-squares problem, normal equations
Data fitting

fit a function

    g(t) = x_1 g_1(t) + x_2 g_2(t) + ... + x_n g_n(t)

to data (t_1, y_1), ..., (t_m, y_m), i.e., choose coefficients x_1, ..., x_n so that

    g(t_1) ≈ y_1,  g(t_2) ≈ y_2,  ...,  g(t_m) ≈ y_m

- g_i(t): R → R are given functions (basis functions)
- problem variables: the coefficients x_1, x_2, ..., x_n
- usually m ≫ n, hence no exact solution with g(t_i) = y_i for all i
- applications: developing a simple, approximate model of observed data

Linear least-squares 8-5
Least-squares data fitting

compute x by minimizing

\[
\sum_{i=1}^{m} (g(t_i) - y_i)^2 = \sum_{i=1}^{m} \big( x_1 g_1(t_i) + x_2 g_2(t_i) + \cdots + x_n g_n(t_i) - y_i \big)^2
\]

in matrix notation: minimize ||Ax - b||^2 where

\[
A = \begin{bmatrix}
g_1(t_1) & g_2(t_1) & g_3(t_1) & \cdots & g_n(t_1) \\
g_1(t_2) & g_2(t_2) & g_3(t_2) & \cdots & g_n(t_2) \\
\vdots & \vdots & \vdots & & \vdots \\
g_1(t_m) & g_2(t_m) & g_3(t_m) & \cdots & g_n(t_m)
\end{bmatrix}, \qquad
b = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}
\]

Linear least-squares 8-6
Example: data fitting with polynomials

    g(t) = x_1 + x_2 t + x_3 t^2 + ... + x_n t^{n-1}

basis functions are g_k(t) = t^{k-1}, k = 1, ..., n

\[
A = \begin{bmatrix}
1 & t_1 & t_1^2 & \cdots & t_1^{n-1} \\
1 & t_2 & t_2^2 & \cdots & t_2^{n-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & t_m & t_m^2 & \cdots & t_m^{n-1}
\end{bmatrix}, \qquad
b = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}
\]

- interpolation (m = n): can satisfy g(t_i) = y_i exactly by solving Ax = b
- approximation (m > n): make the error small by minimizing ||Ax - b||

Linear least-squares 8-7
example: fit a polynomial to f(t) = 1/(1 + 25t^2) on [-1, 1]

- pick m = n points t_i in [-1, 1], and calculate y_i = 1/(1 + 25 t_i^2)
- interpolate by solving Ax = b

(figure: interpolating polynomials for n = 5 and a larger n; dashed line: f; solid line: polynomial g; circles: the points (t_i, y_i))

increasing n does not improve the overall quality of the fit

Linear least-squares 8-8
same example by approximation

- pick m = 50 points t_i in [-1, 1]
- fit the polynomial by minimizing ||Ax - b||

(figure: fitted polynomials for n = 5 and a larger n; dashed line: f; solid line: polynomial g; circles: the points (t_i, y_i))

much better fit overall

Linear least-squares 8-9
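The experiment on the last two slides is easy to reproduce; below is a sketch under assumed sample sizes (n = 5 coefficients; m = 50 points in the approximation case), using `numpy.vander` to build the matrix A of page 8-7:

```python
import numpy as np

f = lambda t: 1.0 / (1.0 + 25.0 * t**2)
n = 5  # number of polynomial coefficients (degree n-1)

# interpolation: m = n points, solve Ax = b exactly
t_int = np.linspace(-1, 1, n)
A_int = np.vander(t_int, n, increasing=True)  # columns 1, t, ..., t^(n-1)
x_int = np.linalg.solve(A_int, f(t_int))

# approximation: m = 50 > n points, minimize ||Ax - b||
t_app = np.linspace(-1, 1, 50)
A_app = np.vander(t_app, n, increasing=True)
x_app, *_ = np.linalg.lstsq(A_app, f(t_app), rcond=None)

# compare both fits to f on a fine grid
t = np.linspace(-1, 1, 1000)
g = lambda x, t: np.vander(t, n, increasing=True) @ x
print(np.max(np.abs(g(x_int, t) - f(t))))  # max deviation of the interpolant
print(np.max(np.abs(g(x_app, t) - f(t))))  # max deviation of the LS fit
```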
Least-squares estimation

    y = Ax + w

- x is what we want to estimate or reconstruct
- y is our measurement(s)
- w is an unknown noise or measurement error (assumed small)
- ith row of A characterizes the ith sensor or ith measurement

least-squares estimation: choose as estimate the vector x̂ that minimizes

    ||Ax̂ - y||

i.e., minimize the deviation between what we actually observed (y) and what we would observe if x = x̂ and there were no noise (w = 0)

Linear least-squares 8-10
Navigation by range measurements

find position (u, v) in a plane from distances to beacons at positions (p_i, q_i)

(figure: four beacons at positions (p_1, q_1), ..., (p_4, q_4), with measured distances ρ_1, ..., ρ_4 to the unknown position (u, v))

four nonlinear equations in the two variables u, v:

\[
\sqrt{(u - p_i)^2 + (v - q_i)^2} = \rho_i, \qquad i = 1, 2, 3, 4
\]

ρ_i is the measured distance from the unknown position (u, v) to beacon i

Linear least-squares 8-11
linearized distance function: assume

    u = u_0 + Δu,    v = v_0 + Δv

where

- u_0, v_0 are known (e.g., the position a short time ago)
- Δu, Δv are small (compared to the ρ_i's)

\[
\sqrt{(u_0 + \Delta u - p_i)^2 + (v_0 + \Delta v - q_i)^2} \approx
\sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2}
+ \frac{(u_0 - p_i)\,\Delta u + (v_0 - q_i)\,\Delta v}{\sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2}}
\]

gives four linear equations in the variables Δu, Δv:

\[
\frac{(u_0 - p_i)\,\Delta u + (v_0 - q_i)\,\Delta v}{\sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2}}
\approx \rho_i - \sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2},
\qquad i = 1, 2, 3, 4
\]

Linear least-squares 8-12
linearized equations: Ax ≈ b, where x = (Δu, Δv) and A is 4 × 2 with

\[
a_{i1} = \frac{u_0 - p_i}{\sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2}}, \qquad
a_{i2} = \frac{v_0 - q_i}{\sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2}}, \qquad
b_i = \rho_i - \sqrt{(u_0 - p_i)^2 + (v_0 - q_i)^2}
\]

- due to linearization and measurement error, we do not expect an exact solution (Ax = b)
- we can try to find Δu and Δv that almost satisfy the equations

Linear least-squares 8-13
numerical example

- beacons at positions (10, 0), (-10, 2), (3, 9), (10, 10)
- measured distances ρ = (8.22, 11.9, 7.08, 11.33)
- (unknown) actual position is (2, 2)

linearized range equations Ax ≈ b, linearized around (u_0, v_0) = (0, 0), with x = (Δu, Δv)

least-squares solution: (Δu, Δv) = (1.97, 1.9) (norm of the position error is about 0.1)

Linear least-squares 8-14
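A sketch of this navigation example in NumPy, assuming the beacon data listed above and the linearization point (u_0, v_0) = (0, 0); the entries of A and b follow the formulas on page 8-13:

```python
import numpy as np

# beacon positions and measured distances (data from the example above)
P = np.array([[10.0, 0.0], [-10.0, 2.0], [3.0, 9.0], [10.0, 10.0]])
rho = np.array([8.22, 11.9, 7.08, 11.33])
u0, v0 = 0.0, 0.0  # linearization point

# distances from the linearization point to each beacon
d = np.sqrt((u0 - P[:, 0])**2 + (v0 - P[:, 1])**2)

# linearized equations A @ [du, dv] ~ b (formulas of page 8-13)
A = np.column_stack([(u0 - P[:, 0]) / d, (v0 - P[:, 1]) / d])
b = rho - d

dx, *_ = np.linalg.lstsq(A, b, rcond=None)
print(u0 + dx[0], v0 + dx[1])  # estimate close to the actual position (2, 2)
```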
Least-squares system identification

measure input u(t) and output y(t) for t = 0, ..., N of an unknown system

    u(t) → [unknown system] → y(t)

(figure: example input/output data u(t), y(t) for t = 0, ..., N with N = 70)

system identification problem: find a reasonable model for the system based on the measured I/O data u, y

Linear least-squares 8-15
moving average model

    y_model(t) = h_0 u(t) + h_1 u(t-1) + h_2 u(t-2) + ... + h_n u(t-n)

where y_model(t) is the model output

- a simple and widely used model
- predicted output is a linear combination of the current and n previous inputs
- h_0, ..., h_n are the parameters of the model
- called a moving average (MA) model with n delays

least-squares identification: choose the model that minimizes the error

\[
E = \left( \sum_{t=n}^{N} (y_{\text{model}}(t) - y(t))^2 \right)^{1/2}
\]

Linear least-squares 8-16
formulation as a linear least-squares problem:

\[
E = \left( \sum_{t=n}^{N} \big( h_0 u(t) + h_1 u(t-1) + \cdots + h_n u(t-n) - y(t) \big)^2 \right)^{1/2}
= \|Ax - b\|
\]

\[
A = \begin{bmatrix}
u(n) & u(n-1) & u(n-2) & \cdots & u(0) \\
u(n+1) & u(n) & u(n-1) & \cdots & u(1) \\
u(n+2) & u(n+1) & u(n) & \cdots & u(2) \\
\vdots & \vdots & \vdots & & \vdots \\
u(N) & u(N-1) & u(N-2) & \cdots & u(N-n)
\end{bmatrix}, \qquad
x = \begin{bmatrix} h_0 \\ h_1 \\ h_2 \\ \vdots \\ h_n \end{bmatrix}, \qquad
b = \begin{bmatrix} y(n) \\ y(n+1) \\ y(n+2) \\ \vdots \\ y(N) \end{bmatrix}
\]

Linear least-squares 8-17
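Constructing the matrix A above amounts to stacking sliding windows of the input. A minimal sketch, with a small synthetic check on data generated by a known MA model (the helper name `fit_ma` is mine, not from the slides):

```python
import numpy as np

def fit_ma(u, y, n):
    """Least-squares fit of an MA model with n delays:
    y_model(t) = h0*u(t) + h1*u(t-1) + ... + hn*u(t-n)."""
    N = len(u) - 1
    # row for time t contains u(t), u(t-1), ..., u(t-n), for t = n, ..., N
    A = np.array([u[t::-1][:n + 1] for t in range(n, N + 1)])
    b = y[n:]
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return h

# synthetic check: data generated by a known 2-delay MA model
rng = np.random.default_rng(0)
u = rng.standard_normal(100)
h_true = np.array([0.5, -0.3, 0.8])
y = np.convolve(u, h_true)[:len(u)]  # causal MA output, u(t) = 0 for t < 0
print(fit_ma(u, y, n=2))             # recovers [0.5, -0.3, 0.8]
```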
example (I/O data of page 8-15) with n = 7: the least-squares solution is

    h_0 = 0.24, h_1 = 0.2819, h_2 = 0.4176, h_3 = 0.3536,
    h_4 = 0.2425, h_5 = 0.4873, h_6 = 0.284, h_7 = ...

(figure: solid line: y(t), the actual output; dashed line: y_model(t))

Linear least-squares 8-18
model order selection: how large should n be?

(figure: relative error E/||y|| versus model order n)

- suggests using the largest possible n for the smallest error
- much more important question: how good is the model at predicting new data (i.e., data not used to calculate the model)?

Linear least-squares 8-19
model validation: test the model on a new data set (from the same system)

(figure: validation input ū(t) and output ȳ(t))

(figure: relative prediction error versus n, for the validation data and the modeling data)

- for n too large, the predictive ability of the model becomes worse!
- validation data suggest n = 10

Linear least-squares 8-20
for n = 50, the actual and predicted outputs on the system identification and model validation data are:

(figure, I/O set used to compute the model: solid: y(t); dashed: y_model(t))

(figure, model validation I/O set: solid: ȳ(t); dashed: ȳ_model(t))

loss of predictive ability when n is too large is called overfitting or overmodeling

Linear least-squares 8-21
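Model-order selection by validation can be sketched as follows, reusing the hypothetical `fit_ma` helper above; the data records `u, y` (modeling) and `u_val, y_val` (validation) are assumed to exist:

```python
import numpy as np

def relative_error(u, y, h):
    """Relative prediction error ||y_model - y|| / ||y|| on one data record."""
    n = len(h) - 1
    A = np.array([u[t::-1][:n + 1] for t in range(n, len(u))])
    r = A @ h - y[n:]
    return np.linalg.norm(r) / np.linalg.norm(y[n:])

# assumed records: (u, y) for modeling, (u_val, y_val) for validation
# for n in range(1, 51):
#     h = fit_ma(u, y, n)
#     print(n, relative_error(u, y, h), relative_error(u_val, y_val, h))
# the modeling error keeps decreasing with n; the validation error
# eventually rises again, which signals overfitting
```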
Outline

- definition
- examples and applications
- solution of a least-squares problem, normal equations
Geometric interpretation of a LS problem

    minimize ||Ax - b||^2

- A is m × n with columns a_1, ..., a_n
- ||Ax - b|| is the distance of b to the vector

      Ax = x_1 a_1 + x_2 a_2 + ... + x_n a_n

- the solution x_ls gives the linear combination of the columns of A closest to b
- Ax_ls is the projection of b on the range of A

Linear least-squares 8-22
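The projection interpretation can be verified numerically: the residual b - Ax_ls is orthogonal to every column of A. A small sketch with an arbitrary tall matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))  # tall matrix, independent columns
b = rng.standard_normal(10)

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Ax_ls is the projection of b onto range(A):
# the residual is orthogonal to all columns of A
r = b - A @ x_ls
print(A.T @ r)  # numerically the zero vector
```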
example

(figure: the columns a_1, a_2 of A, the vector b, and the projection Ax_ls = 2a_1 + a_2)

least-squares solution

\[
x_{\text{ls}} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}
\]

Linear least-squares 8-23
The solution of a least-squares problem

if A is left-invertible, then

    x_ls = (A^T A)^{-1} A^T b

is the unique solution of the least-squares problem

    minimize ||Ax - b||^2

- in other words, if x ≠ x_ls, then ||Ax - b||^2 > ||Ax_ls - b||^2
- recall from page 4-25 that A^T A is positive definite and that (A^T A)^{-1} A^T is a left-inverse of A

Linear least-squares 8-24
proof: we show that ||Ax - b||^2 > ||Ax_ls - b||^2 for x ≠ x_ls:

\[
\|Ax - b\|^2 = \|A(x - x_{\text{ls}}) + (Ax_{\text{ls}} - b)\|^2
= \|A(x - x_{\text{ls}})\|^2 + \|Ax_{\text{ls}} - b\|^2
> \|Ax_{\text{ls}} - b\|^2
\]

- the 2nd step follows from A(x - x_ls) ⊥ (Ax_ls - b):

\[
(A(x - x_{\text{ls}}))^T (Ax_{\text{ls}} - b) = (x - x_{\text{ls}})^T (A^T A x_{\text{ls}} - A^T b) = 0
\]

- the 3rd step follows from the zero nullspace property of A:

      x ≠ x_ls  ⟹  A(x - x_ls) ≠ 0

Linear least-squares 8-25
The normal equations

    (A^T A) x = A^T b

if A is left-invertible:

- the least-squares solution can be found by solving the normal equations
- n equations in n variables with a positive definite coefficient matrix
- can be solved using the Cholesky factorization

Linear least-squares 8-26
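A sketch of the normal-equations approach with a Cholesky factorization, using `scipy.linalg.cho_factor`/`cho_solve`; for comparison, `numpy.linalg.lstsq` solves the same problem with an SVD-based method:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 5))  # tall, (almost surely) left-invertible
b = rng.standard_normal(100)

# normal equations (A^T A) x = A^T b, solved by Cholesky factorization
c, low = cho_factor(A.T @ A)       # A^T A is positive definite
x = cho_solve((c, low), A.T @ b)

print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```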