THE POLYNOMIAL EIGENVALUE PROBLEM


A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Engineering and Physical Sciences

2005

Michael Berhanu
School of Mathematics

Contents

Abstract
Declaration
Copyright
Statement
Acknowledgements

1 Introduction
  1.1 Applications of PEPs
  1.2 Notations
    1.2.1 General Notations
    1.2.2 Matrix Notation and Special Matrices
  1.3 Mathematical Background
    1.3.1 Linear Algebra
    1.3.2 Normed Linear Vector Spaces
    1.3.3 Scalar Product and Scalar Product Spaces
    1.3.4 Matrices, Vectors and their Norms
    1.3.5 Differential Calculus
  1.4 Special Matrix Subsets
  1.5 (J, J̃)-Orthogonal and (J, J̃)-Unitary Matrices
  1.6 Matrix Operators Properties
  1.7 Condition Number and Backward Error
  1.8 The Polynomial Eigenvalue Problem
    Homogeneous PEPs

2 Condition Numbers for Eigenvalues and Eigenvectors
  Introduction
  A Differential Calculus Approach
  Preliminaries
  Projective Spaces
  Condition Numbers
  Perturbation Analysis
  Link to the Non-Homogeneous Form
  Particular Case: the GEP
  Hermitian Structured Condition Numbers
  Conclusion

3 Backward Errors
  Introduction
  Normwise Backward Error
  Normwise Structured Backward Error for the Symmetric PEP
  Normwise Structured Backward Error for the Symmetric GEP
  Real Eigenpair
  Complex Eigenvalues

4 Matrix Factorizations and their Sensitivity
  Introduction
  Zeroing with (J1, J2)-Orthogonal Matrices
  Unified Rotations
  Householder Reflectors
  Error Analysis
  Zeroing Strategies
  Introduction to Matrix Factorization
  A General Method for Computing the Condition Number
  The HR Factorization
  Perturbation of the HR Factorization
  Numerical Experiments
  The Indefinite Polar Factorization
  Perturbation of the IPF
  The Polar Factorization
  Numerical Experiments
  The Hyperbolic Singular Value Decomposition
  Perturbation of the HSVD
  Numerical Experiments
  Sensitivity of Hyperbolic Eigendecompositions
  Perturbation Analysis of the Diagonalization by Hyperbolic Matrices
  Condition Number
  Theorems

5 Numerical Solutions of PEPs
  Introduction
  QEPs with a Rank-one Damping Matrix
  Preliminaries
  Real Eigenvalues with M > 0, K < 0
  General Case
  Solving PEPs Through Linearization
  Different Linearizations
  Companion Linearization
  Symmetric Linearization
  Influence of the Linearization
  Pseudocode
  Numerical Examples with condpolyeig
  Lack of Numerical Tools
  condpolyeig
  Numerical Examples
  An Overview of Algorithms for Symmetric GEPs
  The Ehrlich-Aberth Method
  LR Algorithm
  HR Algorithm

6 The HZ Algorithm
  Introduction
  Symmetric-Diagonal Reduction
  Tridiagonal-Diagonal Reduction
  HR or HZ Iterations
  Preliminaries
  Practical Implementation of One HZ Step
  Implementing the Bulge Chasing
  Pseudocodes
  Shifting Strategies
  Flops Count and Storage
  Eigenvectors
  Iterative Refinement
  Newton's Method
  Implementation

7 Numerical Experiments with HZ and Comparisons
  The HZ Algorithm
  Standard Numerical Experiment
  Symmetric GEPs and Iterative Refinement
  HZ on Tridiagonal-Diagonal Pairs
  Bessel Matrices
  Liu Matrices
  Clement Matrices
  Symmetric QEPs
  Wave Equation
  Simply Supported Beam

8 Conclusion
  Summary
  Future Projects and Improvements

Bibliography

List of Tables

4.1 Relative errors for c and s
4.2 Perturbation bounds of the HR factorization
4.3 Values of $\|dg_R(A)\|_2 \|A_\epsilon\|_F$ and $2\kappa_2(A_\epsilon)\|A_\epsilon\|_F$ as $\epsilon \to 0$
4.4 Perturbation bounds of the indefinite polar factorization
4.5 Perturbation bounds of the IPF using bounds for the condition numbers $c_H$ and $c_S$
4.6 Perturbation bounds for the singular values from the HSVD
4.7 Perturbation bounds for the orthogonal and hyperbolic factors
5.1 List of eigentools
5.2 Eigenvalues of $P(A_\theta, \alpha, b)$
5.3 Condition number and backward error for $\lambda$ = ...
5.4 Condition number and backward error for $\lambda = 1 + \theta$...
6.1 Average number of iterations for each shifting strategy
6.2 Average number of iterations per eigenvalue for each shifting strategy
6.3 Comparison of the number of floating point operations in the HZ and QZ algorithms
7.1 Numerical results for randomly generated tridiagonal-diagonal pairs
7.2 Numerical results with randomly generated symmetric pairs
7.3 Largest eigenvalue condition number for test matrices 1-10 with n = 100 and n = ...
7.4 Largest relative error of the computed eigenvalues for test matrices 1-10 with n = ...
7.5 Largest relative error of the computed eigenvalues for test matrices 1-10 with n = ...
7.6 Number of HZ iterations and Ehrlich-Aberth iterations, n = ...
7.7 Normwise backward errors for test matrices 1-10 with n = ...
7.8 Largest relative error of the computed eigenvalues of the modified Clement matrices with n = 50 and n = ...
7.9 Largest normwise QEP backward error

List of Figures

1.1 A 2 degree of freedom mass-spring damped system
4.1 Condition number and perturbation bounds of the IPF of Hilbert matrices: $\log_{10}(\|dg_S(A)\|_2)$, $\log_{10}(\|dg_H(A)\|_2)$, $\log_{10}(c_S)$ and $\log_{10}(c_H)$
4.2 Comparison between the condition number and its bounds: $\log_{10}(\|dg_Q(A)\|_2)$, $\log_{10}(\|dg_H(A)\|_2)$, $\log_{10}(c_{Q,1})$, $\log_{10}(c_{H,1})$, $\log_{10}(c_{Q,2})$ and $\log_{10}(c_{H,2})$
5.1 Spectrum computed with the companion linearization
5.2 Spectrum computed with the symmetric linearization
7.1 Normwise unstructured backward errors before and after iterative refinement
7.2 The eigenvalues of tests 1 to 4 in the complex plane for n = ...
7.3 The eigenvalues of tests 5 to 8 in the complex plane for n = ...
7.4 The eigenvalues of tests 9 and 10 in the complex plane for n = ...
7.5 Relative errors of the eigenvalues of the Bessel matrix with n = 18, a = 8.5, computed with HZ, EA and QR
7.6 Eigenvalues of Bessel matrices computed in extended precision and with HZ, EA and QR
7.7 The eigenvalues of Liu's matrix 5 computed with HZ, EA and QR
7.8 The eigenvalues of Liu's matrices 14 and 28 computed with HZ using shifting strategy mix 1, EA and QR
7.9 The eigenvalues of Liu's matrices 14 and 28 computed with HZ using shifting strategy mix 2 and random shifts, EA and QR
7.10 Eigenvalue condition numbers for the Clement matrix for n = 50 and n = ...
7.11 Eigenvalues of the Clement matrix with n = 200 and n = 300 computed with MATLAB's function eig
7.12 The eigenvalues of the modified Clement matrices for n = ...
7.13 The eigenvalues of the modified Clement matrices for n = ...
7.14 Eigenvalues of the wave equation for n = ...
7.15 Backward errors of the approximate eigenpairs (with $\lambda = \alpha/\beta$) of the wave problem computed with HZ and QZ with n = ...
7.16 Eigenvalues of the beam problem with n = 200 computed with HZ and QZ
7.17 Backward errors of the approximate eigenpairs (with $\lambda = \alpha/\beta$) of the beam problem computed with HZ and QZ with n = ...

Abstract

In this thesis, we consider polynomial eigenvalue problems. We extend results on eigenvalue and eigenvector condition numbers of matrix polynomials to condition numbers in which the perturbations are measured with a weighted Frobenius norm. We derive an explicit expression for the backward error of an approximate eigenpair of a matrix polynomial written in homogeneous form. We consider structured eigenvalue condition numbers for which the perturbations have a certain structure, such as symmetry, Hermitian structure or sparsity. We also obtain explicit and/or computable expressions for the structured backward error of an eigenpair.

We present a robust implementation of the HZ (or HR) algorithm for symmetric generalized eigenvalue problems. This algorithm has the advantage of preserving pseudosymmetric tridiagonal forms, but it has been criticized for its numerical instability. We propose an implementation of the HZ algorithm that is stable in most cases and gives results comparable with those of other classical algorithms on ill conditioned problems. The HZ algorithm is based on the HR factorization, an extension of the QR factorization in which the H factor is hyperbolic. This leads us to the sensitivity analysis of hyperbolic factorizations.

Declaration

No portion of the work referred to in this thesis has been submitted in support of an application for another degree or qualification of this or any other university or other institution of learning.

Copyright

Copyright in text of this thesis rests with the Author. Copies (by any process) either in full, or of extracts, may be made only in accordance with instructions given by the Author and lodged in the John Rylands University Library of Manchester. Details may be obtained from the Librarian. This page must form part of any such copies made. Further copies (by any process) of copies made in accordance with such instructions may not be made without the permission (in writing) of the Author.

The ownership of any intellectual property rights which may be described in this thesis is vested in the University of Manchester, subject to any prior agreement to the contrary, and may not be made available for use by third parties without the written permission of the University, which will prescribe the terms and conditions of any such agreement.

Further information on the conditions under which disclosures and exploitation may take place is available from the Head of the Department of Mathematics.

Statement

The material in Chapter 4 is based on the technical report Perturbation Bounds for Hyperbolic Matrix Factorizations, Numerical Analysis Report 469, Manchester Centre for Computational Mathematics, June 2005. This work has been submitted for publication in SIAM J. Matrix Anal. Appl.

The material in Chapter 6 is based on the technical report A Robust Implementation of the HZ Algorithm (with Françoise Tisseur), Numerical Analysis Report, Manchester Centre for Computational Mathematics. In preparation.

Acknowledgements

I am extremely grateful to my supervisor Françoise Tisseur for her help, guidance and for sharing with me her expertise. I would like to express my gratitude to Nick Higham for his many helpful suggestions and constructive remarks.

Many thanks to my fellow students and friends Matthew Smith, Harikrishna Patel, Craig Lucas, Gareth Hargreaves, Anna Mills and Philip Davis for the enjoyable 3... years in Manchester. Ευχαριστώ πολύ Maria Pampaka, Maria Mastorikou, Panagiotis Kallinikos (Dr, elare), muchas gracias to the Spanish crew, Big Hands, ... Mariella Tsopela, thank you for everything, φιλάκια.

Thanks to my father Berhanu H/W who gave me in my childhood the thirst for knowledge. I am extremely grateful to my sisters Bethlam (Koki), Deborah (Lili), Myriam (Poly). Thanks Lili for your patience and help. Finally, a lot of thanks goes to my mother, Fiorenza Vitali, for her encouragement and unconditional love. I dedicate this thesis to her. Merci beaucoup.

Chapter 1

Introduction

We consider the matrix polynomial (or $\lambda$-matrix) of degree m,

$P(A, \lambda) = \lambda^m A_m + \lambda^{m-1} A_{m-1} + \cdots + A_0,$   (1.1)

where $A_k \in \mathbb{C}^{n\times n}$, $k = 0\colon m$. The polynomial eigenvalue problem (PEP) is to find an eigenvalue $\lambda$ and corresponding nonzero eigenvector x satisfying

$P(A, \lambda)x = 0.$

The case m = 1 corresponds to the generalized eigenvalue problem (GEP) $Ax = \lambda Bx$, and if $B = I$ we have the standard eigenvalue problem (SEP)

$Ax = \lambda x.$   (1.2)

Another important case is the quadratic eigenvalue problem (QEP) with m = 2. The importance of PEPs lies in the diverse roles they play in the solution of problems in science and engineering. We briefly outline some examples.

1.1 Applications of PEPs

QEPs, and more generally PEPs, appear in a variety of problems in a wide range of applications. There are numerous examples where PEPs arise naturally. Some physical phenomena are modeled by a second order ordinary differential equation (ODE) with matrix coefficients,

$M\ddot{z} + D\dot{z} + Kz = f(t),$   (1.3)
$z(0) = a,$   (1.4)
$\dot{z}(0) = b.$   (1.5)

The solutions of the homogeneous equation are of the form $e^{\lambda t}u$, with u a constant vector. This leads to the QEP

$(\lambda^2 M + \lambda D + K)u = 0.$   (1.6)

[Figure 1.1: A 2 degree of freedom mass-spring damped system. Diagram (masses $m_1$, $m_2$, springs $k_1, \ldots, k_7$, dampers $d_1, \ldots, d_7$) not reproduced.]

A well known example is the damped mass-spring system. In Figure 1.1, we consider the 2 degree of freedom mass-spring damped system. The dynamics of this system, under some assumptions, are governed by an ODE of the form (1.3)-(1.5). In this case $z = (x_1, y_1, x_2, y_2)$ denotes the coordinates of the masses $m_1$ and $m_2$, $M = \mathrm{diag}(m_1, m_1, m_2, m_2)$ is the mass matrix, $D = \mathrm{diag}(d_1+d_2,\; d_4+d_6,\; d_2+d_3,\; d_5+d_7)$ is the damping matrix and $K = \mathrm{diag}(k_1+k_2,\; k_4+k_6,\; k_2+k_3,\; k_5+k_7)$ is the stiffness matrix, with $d_i > 0$, $k_i > 0$ for $1 \le i \le 7$.

QEPs arise in structural mechanics, control theory and fluid mechanics; we refer to Tisseur and Meerbergen's survey [73] for more specific applications. Interesting practical examples of higher order PEPs are given in [52].

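To make the connection between (1.3) and (1.6) concrete, the following sketch builds a small quadratic matrix polynomial of this form and solves the QEP through a companion-type linearization, the approach discussed in Chapter 5. It is an illustration only: the numerical values of $m_i$, $d_i$, $k_i$ are assumptions, not data used elsewhere in the thesis.

```python
import numpy as np
from scipy.linalg import eig

# Illustrative (assumed) data for the 2-DOF system of Figure 1.1:
# M, D, K are 4x4 diagonal matrices built as in the text.
m1, m2 = 1.0, 2.0
d = [0.1 * i for i in range(1, 8)]   # d_1, ..., d_7 > 0 (assumed values)
k = [1.0 * i for i in range(1, 8)]   # k_1, ..., k_7 > 0 (assumed values)

M = np.diag([m1, m1, m2, m2])
D = np.diag([d[0] + d[1], d[3] + d[5], d[1] + d[2], d[4] + d[6]])
K = np.diag([k[0] + k[1], k[3] + k[5], k[1] + k[2], k[4] + k[6]])

# First companion linearization of P(lambda) = lambda^2 M + lambda D + K:
# an 8x8 pencil A - lambda B whose eigenvectors are y = [lambda*u; u].
n = M.shape[0]
A = np.block([[-D, -K], [np.eye(n), np.zeros((n, n))]])
B = np.block([[M, np.zeros((n, n))], [np.zeros((n, n)), np.eye(n)]])

w, V = eig(A, B)                     # the 2n = 8 eigenvalues of the QEP
res = [np.linalg.norm((lam**2 * M + lam * D + K) @ y[n:]) /
       np.linalg.norm(y[n:]) for lam, y in zip(w, V.T)]
print("eigenvalues:", np.sort_complex(w))
print("max residual ||P(lam)u|| / ||u||:", max(res))
```

Any linearization of P has the same spectrum; the influence of the choice of linearization on accuracy is studied in Chapter 5.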
18 m 2, M = diag(m 1, m 1, m 2, m 2 ) is the mass matrix, D = diag(d 1 +d 2, d 4 +d 6, d 2 + d 3, d 5 + d 7 ) is the damping matrix and K = diag(k 1 + k 2, k 4 + k 6, k 2 + k 3, k 5 + k 7 ) is the stiffness matrix with d i > 0, k i > 0 for 1 i 7. QEPs arise in structural mechanics, control theory, fluid mechanics and we refer to Tisseur and Meerbergen s survey [73] for more specific applications. Interesting practical examples of higher order PEPs are given in [52]. 1.2 Notations General Notations K denotes the field R or C. The colon notation: i = 1: n means the same as i = 1, 2,..., n. ᾱ denotes the conjugate of the complex number α. K m n denotes the set of m n matrices with coefficients in K. M n (K) m denotes the set of m-tuples of n n matrices with coefficients in K. For x K n, x = (x k ) 1 k n = (x k ), x k denotes the kth component of x. e k denotes the vector with the kth component equal to 1 and all the other entries are zero. For A K m n, A = (α ij ) 1 i m, 1 j n = (α ij ), α ij denotes the (i, j) element of A. We often use the tilde notation to denote a perturbed quantity and the hat notation to denote a computed quantity. 18

1.2.2 Matrix Notation and Special Matrices

Let $A \in \mathbb{K}^{m\times n}$, $A = (\alpha_{ij})$.
A is a square matrix if m = n.
$A^T \in \mathbb{K}^{n\times m}$ is the transpose of A, defined by $A^T = (\alpha_{ji})$. A is symmetric if $A^T = A$. A is J-symmetric if JA is symmetric for some $J \in \mathbb{R}^{n\times m}$. A is skew-symmetric if $A^T = -A$.
$A^* \in \mathbb{K}^{n\times m}$ is the conjugate transpose of A, defined by $A^* = (\bar{\alpha}_{ji})$. A is Hermitian if $A^* = A$ and skew-Hermitian if $A^* = -A$.
A is diagonal if $\alpha_{ij} = 0$ for $i \ne j$. The identity matrix of order n, $I_n$ or simply I, is the diagonal matrix that has all its diagonal entries equal to 1.
A permutation matrix is a matrix obtained from the identity matrix by row or column permutation.
$A \in \mathbb{K}^{m\times n}$ with $m \ge n$ is upper trapezoidal if $\alpha_{ij} = 0$ for $i > j$. A square matrix A is upper triangular if $\alpha_{ij} = 0$ for $i > j$ and lower triangular if $\alpha_{ij} = 0$ for $i < j$. If all the diagonal elements of A are equal to 1 then A is called unit upper or lower triangular.
A is an upper Hessenberg matrix if $\alpha_{ij} = 0$ for $i > j + 1$.
A is a tridiagonal matrix if A and $A^T$ are upper Hessenberg matrices.
For a square matrix A, $A^{-1}$ denotes its inverse. It is the unique matrix such that $A^{-1}A = AA^{-1} = I$. A is said to be nonsingular when $A^{-1}$ exists; otherwise A is singular.
For $B = (b_{ij}) \in \mathbb{K}^{m\times n}$ the Schur product is defined by $A \circ B = (\alpha_{ij} b_{ij})$.
For $B = (b_{ij}) \in \mathbb{K}^{p\times q}$ the Kronecker product is defined by $A \otimes B = (\alpha_{ij} B)$.

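The last two definitions are easy to experiment with numerically. A minimal sketch, with assumed example matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # A in K^{2x2} (assumed example)
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # same size as A for the Schur product
C = np.arange(6.0).reshape(2, 3)         # C in K^{2x3} for the Kronecker product

schur = A * B            # Schur product: the entrywise matrix (a_ij * b_ij)
kron = np.kron(A, C)     # Kronecker product: the block matrix (a_ij * C)

print(schur)             # 2x2, same size as A and B
print(kron.shape)        # (4, 6): an (mp) x (nq) matrix
```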
1.3 Mathematical Background

We recall in this section some mathematical properties of norms, linear spaces and differentiable functions. Particular attention is given to the linear vector spaces $\mathbb{K}^n$ and $\mathbb{K}^{m\times n}$. In the rest of this chapter, E denotes a linear vector space over $\mathbb{K}$, $\mathbb{K}^n$ or $\mathbb{K}^{m\times n}$.

1.3.1 Linear Algebra

Let $V = \{v_1, \ldots, v_n\}$ where $v_k \in E$ for $1 \le k \le n$. The linear subspace generated by V is defined by

$\mathrm{span}\,V = \left\{ \sum_{k=1}^n \alpha_k v_k,\ \alpha_k \in \mathbb{K} \right\}.$

A linear combination is a vector of the type $\sum_{k=1}^n \alpha_k v_k$, where $(\alpha_1, \ldots, \alpha_n) \in \mathbb{K}^n$. The vectors in V are said to be linearly independent if

$\sum_{k=1}^n \alpha_k v_k = 0 \implies \alpha_k = 0 \text{ for } k = 1\colon n.$

The number of linearly independent vectors in V is the dimension of span V in $\mathbb{K}$, denoted by $\dim(V) = \dim_{\mathbb{K}}(V)$.

Let $V_1$ and $V_2$ be two linear subspaces of E. If $V_1 \cap V_2 = \{0\}$ and $E = V_1 + V_2$ then E is said to be the direct sum of $V_1$ and $V_2$, and the direct sum decomposition is denoted by $E = V_1 \oplus V_2$.

Let $A : E_1 \to E_2$ be a linear map or a matrix. The range of A is the linear subspace defined by

$\mathrm{range}(A) = \{ y \in E_2 : y = Ax,\ x \in E_1 \} = A(E_1).$

The null space of A is the linear subspace defined by

$\mathrm{null}(A) = \{ x \in E_1 : Ax = 0 \}.$

The rank of A is the dimension of range(A), $\mathrm{rank}(A) = \dim(\mathrm{range}(A))$. With these notations, it follows that

$\dim(E_1) = \mathrm{rank}(A) + \dim(\mathrm{null}(A)).$

$A \in \mathbb{K}^{m\times n}$ is of full rank if $\mathrm{rank}(A) = \min(m, n)$. If $\mathrm{rank}(A) < \min(m, n)$ then A is rank deficient.

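The rank-nullity relation above is easily checked numerically. A minimal sketch, with an assumed example matrix whose rank is 3 by construction:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8)) @ np.diag([1, 1, 1, 0, 0, 0, 0, 0.0])
# A is 5x8 with only its first three columns nonzero, so rank(A) = 3.

r = np.linalg.matrix_rank(A)
N = null_space(A)                     # orthonormal basis of null(A) via the SVD
print(r, N.shape[1])                  # 3 and 5
print(r + N.shape[1] == A.shape[1])   # dim(E_1) = rank(A) + dim(null(A)): True
```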
1.3.2 Normed Linear Vector Spaces

Definition 1.1 Let E be a linear vector space. A norm is a map $\|\cdot\| : E \to \mathbb{R}$ satisfying the following properties:

1. $\|x\| \ge 0$, with equality if and only if $x = 0$,
2. $\forall (\lambda, x) \in \mathbb{K} \times E$, $\|\lambda x\| = |\lambda|\,\|x\|$,
3. $\forall (x, y) \in E^2$, $\|x + y\| \le \|x\| + \|y\|$.

For $x \in E$, $V_x$ denotes an open neighborhood of x. The open ball of radius $\epsilon \ge 0$ centered at x is defined by

$B(x, \epsilon) = \{ y \in E : \|y - x\| < \epsilon \}.$

In this thesis, only $E = \mathbb{K}^n$ and $E = \mathbb{K}^{m\times n}$ are considered. Thus, all norms are equivalent, meaning that for any norms $\|\cdot\|_\alpha$ and $\|\cdot\|_\beta$ on E there exist $\mu_1 > 0$, $\mu_2 > 0$ such that

$\mu_1 \|\cdot\|_\alpha \le \|\cdot\|_\beta \le \mu_2 \|\cdot\|_\alpha.$

1.3.3 Scalar Product and Scalar Product Spaces

In this thesis, $\langle \cdot, \cdot \rangle$ denotes a bilinear form (respectively a sesquilinear form) over $E \times E$ if $\mathbb{K} = \mathbb{R}$ (respectively $\mathbb{K} = \mathbb{C}$). Let $M \in \mathbb{K}^{n\times n}$ be nonsingular. The form $\langle \cdot, \cdot \rangle_M$ is defined by

$\langle x, y \rangle_M = \langle x, My \rangle = y^* M^* x \quad \text{for all } x, y \in \mathbb{K}^n.$

In what follows, we assume that the form $\langle \cdot, \cdot \rangle_M$ is symmetric if $\mathbb{K} = \mathbb{R}$, that is,

$\langle x, y \rangle_M = \langle y, x \rangle_M,$

or Hermitian if $\mathbb{K} = \mathbb{C}$,

$\langle y, x \rangle_M = \overline{\langle x, y \rangle_M}.$

Definition 1.2 In this thesis, we say that the symmetric or Hermitian form $\langle \cdot, \cdot \rangle_M$ is a scalar product if $\langle \cdot, \cdot \rangle_M$ is positive definite, that is,

$\forall x \in E \setminus \{0\}, \quad \langle x, x \rangle_M > 0.$   (1.7)

Otherwise, we refer to $\langle \cdot, \cdot \rangle_M$ as an indefinite scalar product.

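Indefinite scalar products are central to the hyperbolic factorizations studied in Chapter 4. A minimal numerical sketch, with an assumed signature matrix $J = \mathrm{diag}(1, -1)$, shows that the form $\langle \cdot, \cdot \rangle_J$ is symmetric but takes both signs on nonzero vectors:

```python
import numpy as np

J = np.diag([1.0, -1.0])      # assumed signature matrix

def form(x, y, M):
    # <x, y>_M = y^* M x (M is Hermitian here, so M^* = M)
    return y.conj().T @ M @ x

x = np.array([2.0, 1.0])
y = np.array([1.0, 3.0])

print(form(x, x, J))          #  3.0 > 0
print(form(y, y, J))          # -8.0 < 0: the form is indefinite
print(np.isclose(form(x, y, J), np.conj(form(y, x, J))))  # Hermitian: True
```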
In the rest of this paragraph, we only consider positive definite scalar products. The Cauchy-Schwarz inequality

$\forall (x, y) \in E^2, \quad |\langle x, y \rangle| \le \sqrt{\langle x, x \rangle}\,\sqrt{\langle y, y \rangle},$   (1.8)

applies to any positive definite scalar product. Then, following Definition 1.1 and using (1.8), $x \mapsto \sqrt{\langle x, x \rangle}$ defines a norm over E. This norm is known as the 2-norm and is usually denoted by $\|\cdot\|_2$.

Definition 1.3 For a given scalar product, matrices that preserve the scalar product are called orthogonal if $\mathbb{K} = \mathbb{R}$ or unitary if $\mathbb{K} = \mathbb{C}$. $O_n$ (respectively $U_n$) denotes the set of $n\times n$ orthogonal matrices (respectively the set of $n\times n$ unitary matrices).

It follows immediately that

$Q^T Q = I_n \text{ for } Q \in O_n, \qquad Q^* Q = I_n \text{ for } Q \in U_n.$

For $\mathcal{F} \subset E$, $\mathcal{F}^\perp$ denotes the orthogonal complement of $\mathcal{F}$, defined by

$\mathcal{F}^\perp = \{ x \in E : \langle x, y \rangle = 0,\ \forall y \in \mathcal{F} \}.$

If $\mathcal{F}$ is a linear subspace of E then we have the direct sum decomposition $E = \mathcal{F} \oplus \mathcal{F}^\perp$.

1.3.4 Matrices, Vectors and their Norms

$(x, y) \mapsto \langle x, y \rangle = y^* x$ is the usual scalar product over $\mathbb{K}^n$. The induced vector 2-norm is denoted by $\|\cdot\|_2$ and is defined by

$\|x\|_2 = \left( \sum_{k=1}^n |x_k|^2 \right)^{1/2} = \sqrt{x^* x}.$

24 Other useful norms over K n are given by x 1 = n x k, k=1 x = max 1 k n x k. Let A = (a ij ) K m n. The subordinated matrix norm of A is defined by Ax α A α,β = sup, x 0 x β where α is a norm over K m and β is a norm over K n. It follows that A 1 = max 1 j n m a ij, i=1 A 2 = ρ(a A), n A = max 1 i m a ij, j=1 where for X K n n, the spectral radius ρ(x) is ρ(x) = max{ λ, det(x λi) = 0}. The matrix subordinated 2-norm is invariant under orthogonal or unitary transformations, Q 1 XQ 2 2 = X 2, for all X K m n and orthogonal or unitary Q 1, Q 2. The trace of a square matrix is the sum of its diagonal elements and for X K n n, X = (x ij ) it is denoted by trace(x) = n x kk. (X, Y ) trace(y X) is the usual scalar product over K m n. The induced matrix k=1 norm is known as the Frobenius norm and it is defined by ( m ) 1 2 n A F = a ij 2. i=1 24 j=1

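A minimal numerical sketch (with an assumed random matrix) confirms these formulas, including the characterization $\|A\|_2 = \rho(A^*A)^{1/2}$ and the invariance of the 2- and Frobenius norms under orthogonal transformations:

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))

# Subordinate 1-, 2- and infinity-norms via their explicit formulas.
assert np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max())
assert np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max())
rho = np.abs(np.linalg.eigvals(A.T @ A)).max()   # spectral radius of A*A
assert np.isclose(np.linalg.norm(A, 2), np.sqrt(rho))

# Invariance of the 2- and Frobenius norms under orthogonal Q1, Q2.
Q1, _ = qr(rng.standard_normal((4, 4)))
Q2, _ = qr(rng.standard_normal((3, 3)))
for p in (2, 'fro'):
    assert np.isclose(np.linalg.norm(Q1 @ A @ Q2, p), np.linalg.norm(A, p))
print("all norm identities verified")
```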
Definition 1.4 Let $\mu = (\mu_k)_{0\le k\le m}$ with $\mu_k > 0$. The $\mu$-weighted Frobenius norm is induced by the inner product over $M_n(\mathbb{C})^{m+1}$,

$\langle A, B \rangle = \mathrm{trace}\left( \sum_{k=0}^m \frac{1}{\mu_k^2}\, B_k^* A_k \right),$

and is denoted by $\|A\|_{F,\mu} = \sqrt{\langle A, A \rangle}$. The $\mu$-weighted 2-norm is defined by

$\|A\|_{2,\mu} = \left( \sum_{k=0}^m \frac{\|A_k\|_2^2}{\mu_k^2} \right)^{1/2}.$

1.3.5 Differential Calculus

Let $f : E \to F$, where E, F are two normed vector spaces. f is differentiable, or Fréchet differentiable, at $x \in V_x \subset E$, where $V_x$ is an open neighborhood of x, if there exists a linear map $df(x) : E \to F$ such that

$\lim_{h \to 0} \frac{1}{\|h\|} \big( f(x + h) - f(x) - df(x)h \big) = 0.$

In this thesis, we only consider the case where E has finite dimension. Thus, if f is linear, then f is differentiable and $df = f$. All the vector spaces are vector spaces over $\mathbb{R}$, and thus all the functions are considered as functions of real variables and the differentiation is real.

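For matrix functions, the Fréchet derivative can be checked against a finite difference. The sketch below uses an assumed example, the map $q(X) = X^TJX - J$ that reappears in Lemma 1.3, whose derivative at X in the direction $\Delta$ is $dq(X)\Delta = \Delta^TJX + X^TJ\Delta$:

```python
import numpy as np

rng = np.random.default_rng(2)
J = np.diag([1.0, 1.0, -1.0])      # assumed signature matrix

def q(X):
    return X.T @ J @ X - J

def dq(X, Delta):
    # Frechet derivative of q at X in the direction Delta
    return Delta.T @ J @ X + X.T @ J @ Delta

X = rng.standard_normal((3, 3))
Delta = rng.standard_normal((3, 3))
t = 1e-7
fd = (q(X + t * Delta) - q(X)) / t          # first order finite difference
print(np.linalg.norm(fd - dq(X, Delta)))    # O(t): roughly 1e-7
```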
The following theorem is the well-known implicit function theorem [4], [63], which we use several times in this thesis.

Theorem 1.1 Let

$f : E \times F \to G, \quad (x, y) \mapsto f(x, y)$

be differentiable, where E, F and G are normed vector spaces. Assume that $f(x, y) = 0$ and that $\frac{\partial f}{\partial y}(x, y)$ is nonsingular for some $(x, y) \in E \times F$. Then there exist a neighborhood $V_x$ of x, a neighborhood $V_y$ of y and a differentiable function $\varphi : V_x \to V_y$ such that $y = \varphi(x)$ and, for all $\tilde{x} \in V_x$, $f(\tilde{x}, \varphi(\tilde{x})) = 0$. Moreover,

$d\varphi(x) = -\left( \frac{\partial f}{\partial y}(x, y) \right)^{-1} \frac{\partial f}{\partial x}(x, y).$

Definition 1.5 Let $f : \mathbb{R}^n \to \mathbb{R}^p$. Assume that $\mathrm{rank}(df(x)) = p$ whenever $f(x) = 0$. Then $f^{-1}(\{0\})$ is an $(n - p)$-dimensional manifold in $\mathbb{R}^n$.

We now give a fundamental result from optimization, the Lagrange multipliers theorem [4].

Theorem 1.2 Let $g : E \to \mathbb{R}$ be differentiable, where E is a normed vector space of finite dimension n. Let $S \subset E$ be a differentiable manifold of dimension d defined by

$S = \{ y \in E : f_k(y) = 0,\ k = 1\colon n - d \}.$

Assume that $x \in S$ is an extremum of g on S. Then there exist $n - d$ scalars $c_k$, $k = 1\colon n - d$, such that

$dg(x) = \sum_{k=1}^{n-d} c_k\, df_k(x).$

We refer to [4] and [63] for a more detailed presentation of differential calculus and manifolds.

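A classical instance of Theorem 1.2 (a standard example, not taken from this thesis) is the Rayleigh quotient: extremizing $g(x) = x^TAx$, with A symmetric, over the unit sphere $S = \{x \in \mathbb{R}^n : f_1(x) = x^Tx - 1 = 0\}$ gives

$dg(x) = c_1\, df_1(x) \iff 2Ax = 2c_1 x \iff Ax = c_1 x,$

so the extrema are exactly the eigenvectors of A, the multiplier $c_1$ being the corresponding eigenvalue. The differential calculus approach to condition numbers in Chapter 2 relies on arguments of this kind.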
1.4 Special Matrix Subsets

$\mathcal{T}(\mathbb{K})$ denotes the set of upper triangular matrices in $\mathbb{K}^{n\times n}$ with a real diagonal. $\mathrm{Sym}(\mathbb{K})$ and $\mathrm{Skew}(\mathbb{K})$ are the linear subspaces of symmetric and skew-symmetric matrices, respectively, with coefficients in $\mathbb{K}$. Herm and SkewH are the linear subspaces of Hermitian and skew-Hermitian matrices, respectively. dim denotes the dimension of a linear space over $\mathbb{R}$. We recall that

$\dim \mathcal{T}(\mathbb{R}) = \dim \mathrm{Sym}(\mathbb{R}) = \frac{n^2 + n}{2},$   (1.9)
$\dim \mathcal{T}(\mathbb{C}) = \dim \mathrm{Herm} = \dim \mathrm{SkewH} = n^2,$   (1.10)
$\dim \mathrm{Skew}(\mathbb{R}) = \frac{n^2 - n}{2},$   (1.11)
$\dim \mathrm{Sym}(\mathbb{C}) = n^2 + n, \qquad \dim \mathrm{Skew}(\mathbb{C}) = n^2 - n.$   (1.12)

Note that $\mathrm{SkewH} = i\,\mathrm{Herm}$. For $x \in \mathbb{K}^n$, $\mathrm{diag}(x)$ denotes the $n\times n$ diagonal matrix with diagonal x. For $X \in \mathbb{K}^{n\times n}$, we denote by $\Pi_d(X)$ the diagonal part, $\Pi_u(X)$ the strictly upper triangular part and $\Pi_l(X)$ the strictly lower triangular part of X.

1.5 (J, J̃)-Orthogonal and (J, J̃)-Unitary Matrices

We denote by $\mathrm{diag}_n^k(\pm 1)$ the set of all $n\times n$ diagonal matrices with k diagonal elements equal to 1 and $n - k$ equal to $-1$. A matrix $J \in \mathrm{diag}_n^k(\pm 1)$ for some k is called a signature matrix.

A matrix $H \in \mathbb{R}^{n\times n}$ is said to be $(J, \tilde{J})$-orthogonal if $H^T J H = \tilde{J}$, where $J, \tilde{J} \in \mathrm{diag}_n^k(\pm 1)$. We denote by $O_n(J, \tilde{J})$ the set of $n\times n$ $(J, \tilde{J})$-orthogonal matrices. If $\tilde{J} = J$ then we say that H is J-orthogonal or pseudo-orthogonal, and the set of J-orthogonal matrices is denoted by $O_n(J)$. We say that a matrix is hyperbolic if it is $(J, \tilde{J})$-orthogonal or pseudo-orthogonal with $J \ne \pm I$. We recall that if $J = \pm I$, then $O_n(\pm I) = O_n$ is the set of orthogonal matrices.

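A minimal sketch of a hyperbolic matrix, with an assumed parameter t: the $2\times 2$ hyperbolic rotation built from $\cosh t$ and $\sinh t$ is J-orthogonal for $J = \mathrm{diag}(1, -1)$, since $\cosh^2 t - \sinh^2 t = 1$.

```python
import numpy as np

t = 0.7                              # assumed hyperbolic rotation parameter
c, s = np.cosh(t), np.sinh(t)
H = np.array([[c, s], [s, c]])       # hyperbolic rotation
J = np.diag([1.0, -1.0])             # signature matrix

print(np.allclose(H.T @ J @ H, J))   # True: H is J-orthogonal
print(np.linalg.norm(H, 2))          # e^t > 1: unlike orthogonal matrices,
                                     # hyperbolic ones can amplify perturbations
```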
We extend the definition of $(J, \tilde{J})$-orthogonal matrices to rectangular matrices in $\mathbb{R}^{m\times n}$, with $m \ge n$: $H \in \mathbb{R}^{m\times n}$ is $(J, \tilde{J})$-orthogonal if $H^T J H = \tilde{J}$ with $J \in \mathrm{diag}_m^k(\pm 1)$ and $\tilde{J} \in \mathrm{diag}_n^q(\pm 1)$. We denote by $O_{mn}(J, \tilde{J})$ the set of $(J, \tilde{J})$-orthogonal matrices in $\mathbb{R}^{m\times n}$.

The definition of signature matrices can be extended and generalized to complex signature matrices. Let $\mathcal{U} = \{ z \in \mathbb{C} : |z| = 1 \}$ denote the unit circle in $\mathbb{C}$. We define the set of complex signature matrices as the diagonal matrices whose diagonal entries all lie in $\mathcal{U}$, and we denote the set of $n\times n$ complex signature matrices by $\mathrm{diag}_n(\mathcal{U})$. $(J, \tilde{J})$-unitary matrices are the complex counterpart of $(J, \tilde{J})$-orthogonal matrices: we say that a matrix $H \in \mathbb{K}^{n\times n}$ is $(J, \tilde{J})$-unitary if $H^* J H = \tilde{J}$, where J and $\tilde{J}$ are complex signature matrices. We denote by $U_n(J, \tilde{J})$ the set of $n\times n$ $(J, \tilde{J})$-unitary matrices. A similar set is the set of complex $(J, \tilde{J})$-orthogonal matrices, denoted by $O_n(J, \tilde{J}, \mathbb{C})$: we say that a matrix $H \in \mathbb{K}^{n\times n}$ is complex $(J, \tilde{J})$-orthogonal if $H^T J H = \tilde{J}$, where $J, \tilde{J} \in \mathrm{diag}_n(\mathcal{U})$. Similarly, we denote by $U_{mn}(J, \tilde{J})$ the set of $m\times n$ $(J, \tilde{J})$-unitary matrices and by $O_{mn}(J, \tilde{J}, \mathbb{C})$ the set of $m\times n$ complex $(J, \tilde{J})$-orthogonal matrices.

We show that $O_{mn}(J, \tilde{J})$, $U_{mn}(J, \tilde{J})$ and $O_{mn}(J, \tilde{J}, \mathbb{C})$ can respectively be identified with $\mathbb{R}^d$, $\mathbb{R}^{n^2}$ and $\mathbb{R}^{2d}$, with $d = \frac{n^2 - n}{2}$. We show that each of these sets is a manifold and we compute its dimension. The introduction of local coordinate systems then enables us to make the identification mentioned above.

Lemma 1.3 $O_n(J, \tilde{J})$, $U_n(J, \tilde{J})$ and $O_n(J, \tilde{J}, \mathbb{C})$ are manifolds with respective dimensions d, $n^2$ and 2d, with $d = \frac{n^2 - n}{2}$.

Proof. Let $q_1 : \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times n}$ and $q_2, q_3 : \mathbb{C}^{n\times n} \to \mathbb{C}^{n\times n}$ be defined by $q_1(X) = X^T J X - \tilde{J}$, $q_2(X) = X^* J X - \tilde{J}$ and $q_3(X) = X^T J X - \tilde{J}$. We recall that $O_n(J, \tilde{J}) = q_1^{-1}(\{0\})$, $U_n(J, \tilde{J}) = q_2^{-1}(\{0\})$ and $O_n(J, \tilde{J}, \mathbb{C}) = q_3^{-1}(\{0\})$. For $1 \le k \le 3$, $q_k$ is clearly differentiable. We have

$dq_1(H_1)\,\tilde{H}_1 = \tilde{H}_1^T J H_1 + H_1^T J \tilde{H}_1,$
$dq_2(H_2)\,\tilde{H}_2 = \tilde{H}_2^* J H_2 + H_2^* J \tilde{H}_2,$
$dq_3(H_3)\,\tilde{H}_3 = \tilde{H}_3^T J H_3 + H_3^T J \tilde{H}_3.$

To compute the dimension of the three manifolds, we need to determine their tangent spaces, that is, the null space of each $dq_k(H_k)$, $k = 1\colon 3$, with $H_k$ in the corresponding manifold. We have

$\mathrm{null}(dq_1(H)) = J H^{-T}\,\mathrm{Skew}(\mathbb{R}),$
$\mathrm{null}(dq_2(H)) = J H^{-*}\,\mathrm{SkewH},$
$\mathrm{null}(dq_3(H)) = J H^{-T}\,\mathrm{Skew}(\mathbb{C}).$

Thus, following the dimensions given by (1.9)-(1.12), $O_n(J, \tilde{J})$ is an $\frac{n^2-n}{2}$-dimensional manifold, $U_n(J, \tilde{J})$ is an $n^2$-dimensional manifold and $O_n(J, \tilde{J}, \mathbb{C})$ is an $(n^2 - n)$-dimensional manifold.

Let $X \in O_{mn}(J, \tilde{J})$, $Y \in U_{mn}(J, \tilde{J})$ and $Z \in O_{mn}(J, \tilde{J}, \mathbb{C})$. There exist differentiable one-to-one functions $\varphi_k$, $1 \le k \le 3$, and open sets $V_1 \subset \mathbb{R}^d$, $V_2 \subset \mathbb{R}^{n^2}$, $V_3 \subset \mathbb{R}^{2d}$, $V_X \subset \mathbb{R}^{m\times n}$, $V_Y \subset \mathbb{C}^{m\times n}$ and $V_Z \subset \mathbb{C}^{m\times n}$ such that

$\varphi_1(V_1) = V_X \cap O_{mn}(J, \tilde{J}),$   (1.13)
$\varphi_2(V_2) = V_Y \cap U_{mn}(J, \tilde{J}),$   (1.14)
$\varphi_3(V_3) = V_Z \cap O_{mn}(J, \tilde{J}, \mathbb{C}).$   (1.15)

Moreover, the differentials of these maps $\varphi_k$ have full rank over the entire sets where they are defined.

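The tangent space computation in the proof can be sanity-checked numerically: for a J-orthogonal H and any skew-symmetric S, the direction $\Delta = JH^{-T}S$ should be annihilated by $dq_1(H)$. A minimal sketch with assumed $2\times 2$ data:

```python
import numpy as np

J = np.diag([1.0, -1.0])
t = 0.3
H = np.array([[np.cosh(t), np.sinh(t)],
              [np.sinh(t), np.cosh(t)]])   # J-orthogonal: H^T J H = J

S = np.array([[0.0, 1.0], [-1.0, 0.0]])    # skew-symmetric
Delta = J @ np.linalg.inv(H).T @ S         # claimed tangent direction at H

dq = Delta.T @ J @ H + H.T @ J @ Delta     # dq_1(H) applied to Delta
print(np.allclose(dq, 0))                  # True: Delta lies in null(dq_1(H))
```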
1.6 Matrix Operators Properties

For an operator or a linear map T defined on $\mathbb{K}^{n\times n}$, the 2-norm is defined by

$\|T\|_2 = \sup_{\|X\|_F = 1} \|T(X)\|_F.$

Some authors denote this norm by $\|\cdot\|_{F,F}$. The choice of this norm is justified by its differentiability properties and its computational simplicity. We now present some notation and results that are needed throughout this thesis.

Theorem 1.4 Let $A, B, X \in \mathbb{K}^{n\times n}$. Define the operators $T_2X = X \circ A$ and $T_1X = AXB$. Then

$\|T_2\|_2 = \max_{ij} |a_{ij}|,$   (1.16)
$\|T_1\|_2 = \|A \otimes B\|_2 = \|A\|_2 \|B\|_2.$   (1.17)

If A and B are nonsingular then

$\min_{\|X\|_F = 1} \|T_1(X)\|_F = \frac{1}{\|A^{-1}\|_2 \|B^{-1}\|_2}.$   (1.18)

Proof. It is straightforward to show that the right-hand side of (1.16) is an upper bound for $\|T_2\|_2$. Let $|a_{pq}| = \max_{ij} |a_{ij}|$. Then the bound is attained by $X = e_p e_q^T$.

Let $A = Q_1 S_1 Z_1^T$ and $B = Q_2 S_2 Z_2^T$ be the singular value decompositions of A and B. Then

$A \otimes B = (Q_1 \otimes Q_2)(S_1 \otimes S_2)(Z_1^T \otimes Z_2^T),$

so that $\|A \otimes B\|_2 = \|S_1 \otimes S_2\|_2 = \|A\|_2 \|B\|_2$, proving the second part of (1.17). We have

$\|T_1(X)\|_F = \|(A \otimes B)\,\mathrm{vec}(X)\|_2, \qquad \|T_1\|_2 = \|A \otimes B\|_2 = \|A\|_2 \|B\|_2.$

Similarly, for (1.18), we have

$\min_{\|X\|_F = 1} \|T_1(X)\|_F = \min_{\|X\|_F = 1} \|(S_1 \otimes S_2)\,\mathrm{vec}(Z_2 X Z_1^T)\|_F = \frac{1}{\|A^{-1}\|_2 \|B^{-1}\|_2}.$
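A minimal numerical sketch, with assumed random data, verifying (1.16)-(1.18):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# (1.16): ||X -> X o A||_2 = max |a_ij|, attained at X = e_p e_q^T.
p, q = np.unravel_index(np.abs(A).argmax(), A.shape)
X = np.zeros((n, n)); X[p, q] = 1.0
print(np.isclose(np.linalg.norm(X * A, 'fro'), np.abs(A).max()))

# (1.17): ||X -> AXB||_2 = ||A kron B||_2 = ||A||_2 ||B||_2.
print(np.isclose(np.linalg.norm(np.kron(A, B), 2),
                 np.linalg.norm(A, 2) * np.linalg.norm(B, 2)))

# (1.18): the minimum of ||(A kron B)v||_2 over unit v is the smallest
# singular value sigma_min(A) sigma_min(B) = 1/(||A^-1||_2 ||B^-1||_2).
smin = np.linalg.svd(np.kron(A, B), compute_uv=False).min()
print(np.isclose(smin, 1.0 / (np.linalg.norm(np.linalg.inv(A), 2) *
                              np.linalg.norm(np.linalg.inv(B), 2))))
```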
