2.29 Numerical Fluid Mechanics Spring 2015 Lecture 7
REVIEW Lecture 6:
- Direct methods for solving linear algebraic equations
  - LU decomposition/factorization: separates the time-consuming elimination of A from the work on b (or B)
  - Derivation, assuming no pivoting is needed
  - Number of operations: same as for Gauss elimination
  - Pivoting: use a pivot-element pointer vector
  - Variations: Doolittle and Crout decompositions; matrix inverse
- Error analysis for linear systems
  - Matrix norms
  - Condition number for perturbed RHS and LHS
- Special matrices: introduction
TODAY (Lecture 7): Systems of Linear Equations III
- Direct methods
  - Gauss elimination; LU decomposition/factorization
  - Error analysis for linear systems
  - Special matrices and their LU decompositions:
    - Tri-diagonal systems: Thomas algorithm
    - General banded matrices: algorithm, pivoting, and modes of storage
    - Sparse and banded matrices
    - Symmetric, positive-definite matrices: definitions and properties, Cholesky decomposition
- Iterative methods
  - Jacobi's method
  - Gauss-Seidel iteration
  - Convergence
Reading Assignment
- Chapter 11 of Chapra and Canale, Numerical Methods for Engineers, 2006/2010/2014 editions.
- Any chapter on solving linear systems of equations in the CFD references provided. For example: chapter 5 of J. H. Ferziger and M. Peric, Computational Methods for Fluid Dynamics, Springer, NY, 3rd edition, 2002.
Special Matrices
Certain matrices have particular structures that can be exploited, i.e., to reduce the number of operations and the memory needs.
- Banded matrices: square matrices whose elements are all zero except for a band around the main diagonal. Frequent in engineering and differential equations:
  - Tri-diagonal matrices
  - Wider bands for higher-order schemes
  - Direct Gauss elimination or LU decomposition is inefficient here because, if pivoting is not necessary, all elements outside of the band remain zero, yet direct GE/LU would manipulate these zero elements anyway.
- Symmetric matrices
- Iterative methods: employ an initial guess, then iterate to refine the solution. Can be subject to round-off errors.
Special Matrices: Tri-diagonal Systems
Example: forced vibration of a string. (Figure: string displacement Y(x,t) under a distributed load f(x,t); example of a travelling pulse.)
Consider the case of a harmonic excitation f(x,t) = f(x) cos(ωt). Applying Newton's law leads to the wave equation
$$Y_{tt} - c^2 Y_{xx} = f(x,t)$$
With separation of variables, $Y(x,t) = \eta(t)\, y(x)$ with $\eta(t) = \cos(\omega t)$, one obtains the differential equation for the modal amplitude y(x), eq. (1):
$$y'' + k^2 y = F(x), \qquad k = \omega/c \qquad (1)$$
where F is proportional to the forcing amplitude f(x), with boundary conditions y(0) = y(L) = 0 for a string fixed at both ends.
Special Matrices: Tri-diagonal Systems
Forced vibration of a string (continued), harmonic excitation f(x,t) = f(x) cos(ωt). Centered finite differences of O(h^2) applied to the amplitude equation (1) give the discrete difference equations
$$y_{i-1} + (k^2 h^2 - 2)\, y_i + y_{i+1} = h^2 F_i, \qquad i = 1, \dots, n$$
which, with the boundary conditions, form a tridiagonal matrix system A y = b.
- If kh ≤ 1 or kh ≥ 3, the matrix is symmetric and negative or positive definite, respectively: no pivoting is needed.
- Note: for 0 < kh ≤ 1, the matrix is negative definite => write A' = -A and b' = -b to render the matrix positive definite.
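As a concrete illustration, here is a minimal Python sketch that assembles this tridiagonal system; the length L, wavenumber k, grid size n, and forcing F(x) are illustrative assumptions, not values from the lecture.

```python
import numpy as np

# Illustrative parameters (assumed, not from the lecture): string length L,
# wavenumber k = w/c, n interior grid points, forcing amplitude F(x).
L, k, n = 1.0, 2.0, 99
h = L / (n + 1)                        # grid spacing
x = np.linspace(h, L - h, n)           # interior nodes
F = np.sin(np.pi * x)                  # assumed forcing amplitude

# Centered O(h^2) difference for y'' + k^2 y = F at node i:
#   y[i-1] + (k^2 h^2 - 2) y[i] + y[i+1] = h^2 F[i]
# With y(0) = y(L) = 0 the end terms drop out and the system is tridiagonal.
lower = np.ones(n - 1)                         # sub-diagonal
diag = ((k * h) ** 2 - 2.0) * np.ones(n)       # main diagonal
upper = np.ones(n - 1)                         # super-diagonal
rhs = h ** 2 * F                               # right-hand side
```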
Special Matrices: Tri-diagonal Systems
General tri-diagonal systems have a bandwidth of 3. The LU scheme proceeds in three steps:
1. Decomposition (GE): A = LU
2. Forward substitution: solve L g = b
3. Backward substitution: solve U x = g
Special Matrices: Tri-diagonal Systems, Thomas Algorithm
By identification with the general LU decomposition, one obtains the Thomas algorithm:
1. Factorization/decomposition
2. Forward substitution
3. Back substitution
Number of operations for the Thomas algorithm:
- LU factorization: 3(n-1) operations
- Forward substitution: 2(n-1) operations
- Back substitution: 3(n-1) + 1 operations
- Total: ~8(n-1), i.e., O(n) operations
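A minimal sketch of the Thomas algorithm as described above, with the three bands stored as vectors and no pivoting; the function name and interface are my own, not from the lecture.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (length n-1), b = diagonal
    (length n), c = super-diagonal (length n-1), d = RHS (length n).
    No pivoting; O(n) operations total."""
    n = len(b)
    b = np.array(b, dtype=float)
    d = np.array(d, dtype=float)
    # 1. Factorization (3(n-1) ops) and 2. forward substitution (2(n-1) ops)
    for i in range(1, n):
        m = a[i - 1] / b[i - 1]      # multiplier (entry of L)
        b[i] -= m * c[i - 1]         # diagonal of U
        d[i] -= m * d[i - 1]         # forward-substituted RHS
    # 3. Back substitution (3(n-1) + 1 ops)
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# e.g., solve the string system assembled above:
# y = thomas(lower, diag, upper, rhs)
```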
Special Matrices: General Banded Matrix
- General banded matrix: p super-diagonals, q sub-diagonals, bandwidth w = p + q + 1.
- Banded symmetric matrix (p = q = b): w = 2b + 1 is called the bandwidth; b is the half-bandwidth.
Special Matrices: General Banded Matrix
LU decomposition via Gaussian elimination. If no pivoting is used, the zeros outside the band are preserved:
$$m_{ij} = \frac{a_{ij}^{(j)}}{a_{jj}^{(j)}} \;\text{ (as in the general case)}, \qquad m_{ij} = 0 \;\text{ if } j > i \text{ or } i - j > q \;\text{ (banded)}$$
$$u_{ij} = a_{ij}^{(i)} \;\text{ (as in the general case)}, \qquad u_{ij} = 0 \;\text{ if } i > j \text{ or } j - i > p \;\text{ (banded)}$$
where the elimination update is $a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)}$.
Special Matrices: General Banded Matrix
LU decomposition via Gaussian elimination with partial pivoting (by rows). Consider pivoting two rows within the band. Then the bandwidth of L remains unchanged,
$$m_{ij} = 0 \;\text{ if } j > i \text{ or } i - j > q$$
but the bandwidth of U grows beyond that of A:
$$u_{ij} = 0 \;\text{ if } i > j \text{ or } j - i > p + q$$
so the total bandwidth becomes w = p + 2q + 1.
Special Matrices: General Banded Matrix, Compact Storage
Instead of storing the full n x n matrix, store only the band diagonally in an n x (p + 2q + 1) array: element (i, j) of the band maps to row i of the compact array with column index j' = j - i + q. The diagonal has length n; the extra q columns are needed for pivoting fill-in only.
- Full matrix size: n^2
- Compact storage size: n (p + 2q + 1)
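A small sketch of this compact storage scheme, using 0-based indices and the column map j' = j - i + q; the helper name is an assumption for illustration.

```python
import numpy as np

def to_band_storage(A, p, q):
    """Pack an n x n banded matrix with p super- and q sub-diagonals into an
    n x (p + 2q + 1) array via the column map j' = j - i + q. The last q
    columns stay zero here; they hold the fill-in created by partial pivoting."""
    n = A.shape[0]
    B = np.zeros((n, p + 2 * q + 1))
    for i in range(n):
        for j in range(max(0, i - q), min(n, i + p + 1)):
            B[i, j - i + q] = A[i, j]   # element (i, j) lands in column j - i + q
    return B
```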
Special Matrices: Sparse and Banded Matrices
Skyline systems (typically for symmetric matrices): store each column from its first nonzero entry down to the diagonal (the "skyline"), together with pointers to the start of each column.
- Skyline storage is applicable when no pivoting is needed, e.g., for banded, symmetric, and positive-definite matrices, as arise in FEM and FD methods.
- Skyline solvers are usually based on Cholesky factorization (which preserves the skyline).
Special Matrices: Symmetric (Positive-Definite) Matrices
Symmetric coefficient matrices: if no pivoting is used, the matrix remains symmetric throughout Gauss elimination/LU decomposition.
Proof: show that if $a_{ij}^{(k)} = a_{ji}^{(k)}$ then $a_{ij}^{(k+1)} = a_{ji}^{(k+1)}$, using the Gauss elimination update (written so that only the upper triangular portion of A is used):
$$a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)}, \qquad m_{ik} = \frac{a_{ki}^{(k)}}{a_{kk}^{(k)}}, \qquad i = k+1, \dots, n; \;\; j = i, i+1, \dots, n$$
This takes about half the total number of operations of full GE.
Special Matrices: Symmetric, Positive-Definite Matrices
1. Sylvester criterion: a symmetric matrix A is positive definite if and only if det(A_k) > 0 for k = 1, 2, ..., n, where A_k is the matrix of the first k rows/columns. Symmetric positive-definite matrices are frequent in engineering.
2. For a symmetric positive-definite A, one thus has the following properties:
  a) The maximum elements of A are on the main diagonal.
  b) No pivoting is needed.
  c) The elimination is stable: $|a_{ij}^{(k+1)}| \le 2 \max_{i,j} |a_{ij}^{(k)}|$. To show this, use $a_{kj}^2 \le a_{kk}\, a_{jj}$ in the update formula $a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}\, a_{kj}^{(k)}$, $m_{ik} = a_{ki}^{(k)} / a_{kk}^{(k)}$, for $i = k+1, \dots, n$ and $j = i, \dots, n$.
Special Matrices: Symmetric, Positive-Definite Matrices
For a symmetric positive-definite A, the general GE update (upper triangle only) becomes the Cholesky factorization: A = L L^T for real symmetric A, or A = L L^H (complex conjugate transpose) in the Hermitian case, where
$$l_{kk} = \sqrt{a_{kk} - \sum_{s=1}^{k-1} l_{ks}^2}, \qquad l_{ik} = \frac{a_{ik} - \sum_{s=1}^{k-1} l_{is}\, l_{ks}}{l_{kk}}, \quad i = k+1, \dots, n$$
No pivoting is needed.
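A minimal sketch of the real (symmetric positive-definite) Cholesky factorization following the formulas above:

```python
import numpy as np

def cholesky(A):
    """Return lower-triangular L with A = L L^T, for symmetric positive
    definite A (no pivoting needed). For Hermitian A, the dot products
    would use complex conjugates and the factorization would be A = L L^H."""
    n = A.shape[0]
    L = np.zeros((n, n))
    for k in range(n):
        # l_kk = sqrt(a_kk - sum_s l_ks^2)
        L[k, k] = np.sqrt(A[k, k] - np.dot(L[k, :k], L[k, :k]))
        for i in range(k + 1, n):
            # l_ik = (a_ik - sum_s l_is l_ks) / l_kk
            L[i, k] = (A[i, k] - np.dot(L[i, :k], L[k, :k])) / L[k, k]
    return L
```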
Linear Systems of Equations: Iterative Methods
Sparse, large, full-bandwidth systems are frequent in practice; iterative methods are then efficient. They are analogous to the iterative methods obtained for roots of equations, i.e., open methods: fixed-point, Newton-Raphson, secant.
Example of an iteration equation: from A x = b,
$$x^{(k+1)} = x^{(k)} + (A x^{(k)} - b) = (A + I)\, x^{(k)} - b$$
General stationary iteration formula:
$$x^{(k+1)} = B\, x^{(k)} + c, \qquad k = 0, 1, 2, \dots$$
(ps: B and c could be functions of k, giving a non-stationary method.)
Compatibility condition for $x = A^{-1} b$ to be the fixed point: write c = C b; then $A^{-1} b = B A^{-1} b + C b$ requires
$$(I - B)\, A^{-1} = C \quad \Longleftrightarrow \quad B = I - C A$$
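As a sketch of such a stationary iteration, here is Jacobi's method (previewed in the lecture outline) written in the form $x^{(k+1)} = B x^{(k)} + c$ with C = D^{-1}, the inverse of the diagonal of A; the function name and tolerance are assumptions.

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, maxit=1000):
    """Jacobi's method as a stationary iteration x_{k+1} = B x_k + c.
    Here C = D^{-1} with D = diag(A), so B = I - D^{-1} A and c = D^{-1} b,
    which satisfies the compatibility condition B = I - C A."""
    D = np.diag(A)
    B = np.eye(len(b)) - A / D[:, None]   # B = I - D^{-1} A
    c = b / D                             # c = D^{-1} b
    x = np.array(x0, dtype=float)
    for _ in range(maxit):
        x_new = B @ x + c
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```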
Linear Systems of Equations: Iterative Methods, Convergence
Convergence analysis in iteration-matrix form: with the error $e^{(k)} = x^{(k)} - x$, the stationary iteration gives
$$e^{(k+1)} = B\, e^{(k)} \quad \Rightarrow \quad e^{(k)} = B^k e^{(0)}$$
Sufficient condition for convergence: $\|B\| < 1$ for a chosen matrix norm.
Matrix norms usable for the condition $\|B\| < 1$ (the infinity norm is often used in practice):
- Maximum column sum: $\|B\|_1 = \max_j \sum_i |b_{ij}|$
- Maximum row sum: $\|B\|_\infty = \max_i \sum_j |b_{ij}|$
- The Frobenius norm (also called the Euclidean norm), $\|B\|_F = \big(\sum_{i,j} |b_{ij}|^2\big)^{1/2}$, which for matrices differs from:
- The l-2 norm (also called the spectral norm), $\|B\|_2 = \sigma_{\max}(B) = \sqrt{\rho(B^T B)}$
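For a quick numerical comparison of these norms, a short sketch with an illustrative matrix and NumPy's built-in norms:

```python
import numpy as np

B = np.array([[0.5, -0.2],
              [0.3,  0.4]])          # illustrative iteration matrix
print(np.linalg.norm(B, 1))          # maximum column sum
print(np.linalg.norm(B, np.inf))     # maximum row sum
print(np.linalg.norm(B, 'fro'))      # Frobenius (Euclidean) norm
print(np.linalg.norm(B, 2))          # l-2 / spectral norm (largest singular value)
```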
Linear Systems of Equations: Iterative Methods
Convergence: necessary and sufficient condition. The iteration converges if and only if the spectral radius of B is smaller than one:
$$\rho(B) = \max_{i=1,\dots,n} |\lambda_i| < 1, \qquad \lambda_i = \text{eigenvalue}(B_{n \times n})$$
(proof: use the eigendecomposition of B). Since $\rho(B) \le \|B\|$ for any induced matrix norm, $\|B\| < 1$ ensures $\rho(B) < 1$.
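A short sketch checking this condition numerically for the Jacobi iteration matrix of an assumed, diagonally dominant A:

```python
import numpy as np

def spectral_radius(B):
    """rho(B) = max_i |lambda_i|; the stationary iteration converges iff rho(B) < 1."""
    return np.max(np.abs(np.linalg.eigvals(B)))

# Illustrative diagonally dominant matrix; its Jacobi iteration matrix
# is B = I - D^{-1} A.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
B = np.eye(3) - A / np.diag(A)[:, None]
print(spectral_radius(B))   # about 0.354 < 1, so Jacobi converges for this A
```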
MIT OpenCourseWare, Spring 2015
For information about citing these materials or our Terms of Use, visit: