Derivatives of Matrix Functions and their Norms.


1 Derivatives of Matrix Functions and their Norms. Priyanka Grover, Indian Statistical Institute, Delhi, India. February 24. 1 / 41
2 Notations
- $\mathbb{H}$ : $n$-dimensional complex Hilbert space.
- $\mathcal{L}(\mathbb{H})$ : the space of bounded linear operators on $\mathbb{H}$.
- $A \in \mathcal{L}(\mathbb{H})$ : identified with an $n \times n$ matrix.
- $S_k$ : the group of permutations on $k$ symbols.
- $\mathcal{L}_k(X; Y)$ : the space of continuous $k$-linear mappings of $X \times \cdots \times X$ into $Y$. ($X$, $Y$ are Banach spaces.)
3 Derivative. Let $\varphi : X \to Y$. $\varphi$ is called (Fréchet) differentiable at $u$ if there exists a linear transformation $D\varphi(u) : X \to Y$ such that for all $v$,
$$\|\varphi(u + v) - \varphi(u) - D\varphi(u)(v)\| = o(\|v\|). \qquad (1)$$
The linear operator $D\varphi(u)$ is called the derivative of $\varphi$ at $u$. If $\varphi$ is differentiable at $u$, then for every $v \in X$,
$$D\varphi(u)(v) = \frac{d}{dt}\Big|_{t=0} \varphi(u + tv).$$
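As a numerical illustration (not from the slides): the directional derivative above can be approximated by a central difference. The example map $\varphi(A) = A^2$, whose Fréchet derivative is $D\varphi(A)(V) = AV + VA$, is my own choice of test case.

```python
import numpy as np

# Directional derivative d/dt phi(u + t v) at t = 0 via a central
# difference.  For phi(A) = A^2 the exact Frechet derivative is
# Dphi(A)(V) = AV + VA; the matrices are arbitrary test data.
def directional_derivative(phi, u, v, eps=1e-6):
    return (phi(u + eps * v) - phi(u - eps * v)) / (2 * eps)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
V = rng.standard_normal((3, 3))

numeric = directional_derivative(lambda M: M @ M, A, V)
exact = A @ V + V @ A
assert np.allclose(numeric, exact, atol=1e-6)
```

Since $\varphi$ here is quadratic, the central difference is exact up to rounding error.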
4 $D\varphi : X \to \mathcal{L}(X; Y)$. $\varphi$ is twice differentiable at $u$ if $D\varphi$ is differentiable at $u$. Its derivative is the second derivative of $\varphi$ at $u$, written $D^2\varphi(u)$. $D^2\varphi(u) \in \mathcal{L}(X; \mathcal{L}(X; Y))$, and $\mathcal{L}(X; \mathcal{L}(X; Y))$ is identified with $\mathcal{L}_2(X; Y)$, the space of continuous bilinear mappings of $X \times X$ into $Y$. The action of $D^2\varphi(u)$ on $(v_1, v_2)$ is given by
$$D^2\varphi(u)(v_1, v_2) = \frac{\partial^2}{\partial t_1 \partial t_2}\Big|_{t_1 = t_2 = 0} \varphi(u + t_1 v_1 + t_2 v_2).$$
5 One can similarly define $D^k\varphi$ at any $u$. $D^k\varphi(u) \in \mathcal{L}_k(X; Y)$, and its action on $(v_1, \ldots, v_k)$ is given by
$$D^k\varphi(u)(v_1, \ldots, v_k) = \frac{\partial^k}{\partial t_1 \cdots \partial t_k}\Big|_{t_1 = \cdots = t_k = 0} \varphi(u + t_1 v_1 + \cdots + t_k v_k).$$
6 Taylor's theorem: Let $\varphi : X \to Y$ be a $(p+1)$-times differentiable map. For $u \in X$ and for small $h$,
$$\Big\|\varphi(u + h) - \varphi(u) - \sum_{k=1}^{p} \frac{1}{k!} D^k\varphi(u)(h, \ldots, h)\Big\| = O(\|h\|^{p+1}).$$
From here,
$$\|\varphi(u + h) - \varphi(u)\| \le \sum_{k=1}^{p} \frac{1}{k!} \|D^k\varphi(u)\| \, \|h\|^k + O(\|h\|^{p+1}).$$
7 Mean value theorem: Let $\varphi : X \to Y$ be a differentiable map. Let $u, v \in X$ and let $L$ be the line segment joining them. Then
$$\|\varphi(u) - \varphi(v)\| \le \|u - v\| \sup_{w \in L} \|D\varphi(w)\|.$$
8 Tensor power. Similar to the binomial theorem:
$$\otimes^m (A + X) = \sum_{\substack{j_i \ge 0 \\ j_1 + \cdots + j_k = m}} (\otimes^{j_1} A) \otimes (\otimes^{j_2} X) \otimes (\otimes^{j_3} A) \otimes \cdots \otimes (\otimes^{j_k} X).$$
For every $X \in \mathbb{M}(n)$, $D\otimes^m(A)(X)$ is the coefficient of $t$ in $\otimes^m(A + tX)$:
$$D\otimes^m(A)(X) = X \otimes A \otimes \cdots \otimes A + A \otimes X \otimes \cdots \otimes A + \cdots + A \otimes A \otimes \cdots \otimes X.$$
9 Norm: For all $A \in \mathbb{M}(n)$, $\|D\otimes^m(A)\| = m \|A\|^{m-1}$.
Proof: The triangle inequality and $\|A \otimes X\| = \|A\| \, \|X\|$ give the upper bound; equality is attained at $X = A/\|A\|$.
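A quick numerical check of this norm formula for $m = 2$ (my own sketch, using Kronecker products to represent $\otimes$): $D\otimes^2(A)(X) = X \otimes A + A \otimes X$ should have operator norm $2\|A\|$, attained at $X = A/\|A\|$.

```python
import numpy as np

# Verify ||D tensor^2 (A)|| = 2||A|| numerically for a random A.
# The extremal direction is X = A/||A||; any other unit-norm X
# gives at most 2||A|| by the triangle inequality.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
opnorm = lambda M: np.linalg.norm(M, 2)   # largest singular value

X = A / opnorm(A)                          # extremal direction
D = np.kron(X, A) + np.kron(A, X)          # derivative applied to X
assert np.isclose(opnorm(D), 2 * opnorm(A))

Y = rng.standard_normal((3, 3)); Y /= opnorm(Y)
assert opnorm(np.kron(Y, A) + np.kron(A, Y)) <= 2 * opnorm(A) + 1e-10
```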
10 Antisymmetric tensor power. $\Lambda^m \mathbb{H}$ is the range of the projection $P_m$, defined as
$$P_m(x_1 \otimes \cdots \otimes x_m) = \frac{1}{m!} \sum_{\sigma \in S_m} \operatorname{sgn}(\sigma) \, x_{\sigma(1)} \otimes \cdots \otimes x_{\sigma(m)}.$$
Inner product: with $x_1 \wedge \cdots \wedge x_m = \sqrt{m!} \, P_m(x_1 \otimes \cdots \otimes x_m)$,
$$\langle x_1 \wedge \cdots \wedge x_m, \, y_1 \wedge \cdots \wedge y_m \rangle = \det(\langle x_i, y_j \rangle).$$
$\Lambda^m \mathbb{H}$ is invariant under $\otimes^m A$. The operator $\Lambda^m A$ is the restriction of $\otimes^m A$ to this subspace. It follows that $\|D\Lambda^m(A)\| \le m \|A\|^{m-1}$.
11 Theorem (Bhatia, Friedland; 1981). Let $s_1 \ge s_2 \ge \cdots \ge s_n \ge 0$ be the singular values of $A$. Then, for $1 \le m \le n$,
$$\|D\Lambda^m(A)\| = \sum_{p=1}^{m} \prod_{\substack{j=1 \\ j \ne p}}^{m} s_j = p_{m-1}(s_1, \ldots, s_m),$$
where $p_{m-1}$ denotes the $(m-1)$-th elementary symmetric polynomial in $m$ variables.
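The $m = n$ case can be sanity-checked numerically (my own sketch, not from the slides): then $\Lambda^n A = \det A$, and by Jacobi's formula $\|D\det A\| = \sup_{\|X\|=1} |\operatorname{tr}(\operatorname{adj}(A)X)|$ is the trace norm of the adjugate, which should equal $p_{n-1}(s_1, \ldots, s_n)$. I compute the adjugate as $\det(A) A^{-1}$, valid for invertible $A$.

```python
import numpy as np
from itertools import combinations

# Check ||D det A|| = p_{n-1}(s_1,...,s_n): the trace norm of
# adj(A) = det(A) * inv(A) should equal the (n-1)-th elementary
# symmetric polynomial of the singular values of A.
def elem_sym(vals, k):
    return sum(np.prod(c) for c in combinations(vals, k))

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))      # almost surely invertible
s = np.linalg.svd(A, compute_uv=False)

adj = np.linalg.det(A) * np.linalg.inv(A)
trace_norm = np.linalg.svd(adj, compute_uv=False).sum()
assert np.isclose(trace_norm, elem_sym(s, 3))
```

This works because the singular values of $\operatorname{adj}(A)$ are exactly the products $\prod_{j \ne p} s_j$.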
12 Perturbation bound: By the mean value theorem,
Corollary. For any two elements $A$, $B$ of $\mathcal{L}(\mathbb{H})$,
$$\|\Lambda^m(B) - \Lambda^m(A)\| \le m M^{m-1} \|B - A\|,$$
where $M = \max(\|A\|, \|B\|)$.
13 Application: Determinant. $\det A = \Lambda^n A$. For any two elements $A$, $B$ of $\mathcal{L}(\mathbb{H})$,
$$|\det(B) - \det(A)| \le n M^{n-1} \|B - A\|,$$
where $M = \max(\|A\|, \|B\|)$.
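The determinant bound above is easy to test on random matrices (a numerical sanity check of my own, with the operator norm):

```python
import numpy as np

# Check |det B - det A| <= n * M^{n-1} * ||B - A||,
# M = max(||A||, ||B||), on random perturbations.
rng = np.random.default_rng(3)
opnorm = lambda M: np.linalg.norm(M, 2)
n = 5
for _ in range(100):
    A = rng.standard_normal((n, n))
    B = A + 0.1 * rng.standard_normal((n, n))
    M = max(opnorm(A), opnorm(B))
    lhs = abs(np.linalg.det(B) - np.linalg.det(A))
    rhs = n * M ** (n - 1) * opnorm(B - A)
    assert lhs <= rhs + 1e-9
```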
14 Eigenvalues. Definition (Distance between eigenvalues). For $A, B \in \mathcal{L}(\mathbb{H})$, let $\operatorname{Eig} A = \{\alpha_1, \ldots, \alpha_n\}$ and $\operatorname{Eig} B = \{\beta_1, \ldots, \beta_n\}$ denote their respective eigenvalues counted with multiplicity. A distance between these $n$-tuples can be defined as
$$d(\operatorname{Eig} A, \operatorname{Eig} B) = \min_{\sigma \in S_n} \max_{1 \le i \le n} |\alpha_i - \beta_{\sigma(i)}|.$$
Theorem. For all $A, B \in \mathcal{L}(\mathbb{H})$,
$$d(\operatorname{Eig} A, \operatorname{Eig} B) \le C \, M^{1 - 1/n} \|B - A\|^{1/n},$$
where $M = \max(\|A\|, \|B\|)$.
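The optimal matching distance itself can be evaluated by brute force over permutations (my own sketch; factorial cost, so only sensible for small $n$, and not an algorithm from the slides):

```python
import numpy as np
from itertools import permutations

# d(Eig A, Eig B) = min over sigma of max_i |alpha_i - beta_sigma(i)|,
# computed by exhaustive search over all n! permutations.
def matching_distance(alphas, betas):
    return min(
        max(abs(a - b) for a, b in zip(alphas, perm))
        for perm in permutations(betas)
    )

A = np.diag([1.0, 2.0, 3.0])
B = np.diag([3.1, 1.2, 2.05])
d = matching_distance(np.linalg.eigvals(A), np.linalg.eigvals(B))
assert np.isclose(d, 0.2)   # best matching: 1<->1.2, 2<->2.05, 3<->3.1
```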
15 Symmetric tensor power. $\vee^m \mathbb{H}$, the space of symmetric tensors, is the range of the projection $Q_m$, defined as
$$Q_m(x_1 \otimes \cdots \otimes x_m) = \frac{1}{m!} \sum_{\sigma \in S_m} x_{\sigma(1)} \otimes \cdots \otimes x_{\sigma(m)}.$$
Inner product: with $x_1 \vee \cdots \vee x_m = \sqrt{m!} \, Q_m(x_1 \otimes \cdots \otimes x_m)$,
$$\langle x_1 \vee \cdots \vee x_m, \, y_1 \vee \cdots \vee y_m \rangle = \operatorname{per}(\langle x_i, y_j \rangle).$$
Permanent: The permanent of $A = (a_{ij})$, written $\operatorname{per} A$, is defined by
$$\operatorname{per} A = \sum_{\sigma} a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)},$$
where the summation extends over all permutations $\sigma$ of $1, 2, \ldots, n$.
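The permanent definition translates directly into code (my own sketch; $O(n! \cdot n)$ cost, fine for the small matrices used for illustration here):

```python
import numpy as np
from itertools import permutations

# The permanent straight from the definition: like the determinant,
# but every permutation is summed with a + sign.
def per(A):
    n = A.shape[0]
    return sum(
        np.prod([A[i, s[i]] for i in range(n)])
        for s in permutations(range(n))
    )

A = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.isclose(per(A), 1 * 4 + 2 * 3)   # = 10
```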
16 $\vee^m \mathbb{H}$ is invariant under $\otimes^m A$. The operator $\vee^m A$ is the restriction of $\otimes^m A$ to this subspace. Hence $\|D\vee^m(A)\| \le m \|A\|^{m-1}$. But what is $\|D\vee^m A\|$ exactly?
17 Theorem (Bhatia; 1984). $\|D\vee^m(A)\| = m \|A\|^{m-1}$.
Using the mean value theorem,
Corollary. For every $A, B \in \mathcal{L}(\mathbb{H})$, we have
$$\|\vee^m A - \vee^m B\| \le m M^{m-1} \|A - B\|,$$
where $M = \max(\|A\|, \|B\|)$.
18 Permanent. $\operatorname{per} A$ is one of the diagonal entries of $\vee^n A$: the $(I, I)$-entry for $I = (1, 2, \ldots, n)$. Since each entry of a matrix is dominated by its norm, we get
Theorem. For any $A, B \in \mathbb{M}(n)$,
$$|\operatorname{per} A - \operatorname{per} B| \le n M^{n-1} \|A - B\|,$$
where $M = \max(\|A\|, \|B\|)$.
19 Higher order derivatives? We study higher order derivatives of these functions. $k$-th order perturbation bounds then follow by Taylor's theorem.
20 Tensor power. Higher order derivative: For $1 \le k \le m$,
$$D^k \otimes^m(A)(X_1, \ldots, X_k) = \sum_{\sigma \in S_k} \sum_{\substack{j_i \ge 0 \\ j_1 + \cdots + j_{k+1} = m-k}} (\otimes^{j_1} A) \otimes X_{\sigma(1)} \otimes \cdots \otimes (\otimes^{j_k} A) \otimes X_{\sigma(k)} \otimes (\otimes^{j_{k+1}} A).$$
Norm:
$$\|D^k \otimes^m(A)\| = \frac{m!}{(m-k)!} \|A\|^{m-k}.$$
Perturbation bound:
$$\|\otimes^m(A + X) - \otimes^m A\| \le \sum_{k=1}^{m} \binom{m}{k} \|A\|^{m-k} \|X\|^k = (\|A\| + \|X\|)^m - \|A\|^m.$$
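For $m = 2$ the perturbation bound can be checked directly with Kronecker products (a numerical sanity check of my own):

```python
import numpy as np

# Check ||tensor^2(A+X) - tensor^2(A)|| <= (||A||+||X||)^2 - ||A||^2.
# The left side is ||A (x) X + X (x) A + X (x) X||.
rng = np.random.default_rng(7)
opnorm = lambda M: np.linalg.norm(M, 2)
for _ in range(50):
    A = rng.standard_normal((3, 3))
    X = rng.standard_normal((3, 3))
    lhs = opnorm(np.kron(A + X, A + X) - np.kron(A, A))
    rhs = (opnorm(A) + opnorm(X)) ** 2 - opnorm(A) ** 2
    assert lhs <= rhs + 1e-10
```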
21 Determinant. Derivative: Jacobi's formula:
$$D\det A(X) = \operatorname{tr}(\operatorname{adj}(A) X),$$
where $\operatorname{adj}(A)$ stands for the adjugate (the classical adjoint) of $A$. The following are three equivalent descriptions of Jacobi's formula.
1. $\operatorname{adj}(A)$ can be identified as an operator on $\Lambda^{n-1} \mathbb{H}$; call this operator $\widetilde{\Lambda}^{n-1} A$. Then
$$D\det A(X) = \operatorname{tr}\big((\widetilde{\Lambda}^{n-1} A) X\big).$$
22 
2. For $1 \le i, j \le n$, let $A(i|j)$ be the $(n-1) \times (n-1)$ matrix obtained from $A$ by deleting its $i$-th row and $j$-th column. Then
$$D\det A(X) = \sum_{i,j=1}^{n} (-1)^{i+j} \det A(i|j) \, X_{ij}.$$
3. For $1 \le j \le n$, let $A(j; X)$ be the matrix obtained from $A$ by replacing the $j$-th column of $A$ by the $j$-th column of $X$ and keeping the rest of the columns unchanged. Then
$$D\det A(X) = \sum_{j=1}^{n} \det A(j; X).$$
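The three descriptions of Jacobi's formula can be cross-checked against a finite difference (my own numerical sketch; the adjugate is computed as $\det(A)A^{-1}$, valid for invertible $A$):

```python
import numpy as np

# All three expressions for D det A(X) should agree with the
# central difference of t -> det(A + tX) at t = 0.
rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))

adj = np.linalg.det(A) * np.linalg.inv(A)      # adjugate of invertible A
v1 = np.trace(adj @ X)                          # description 1

v2 = sum(                                       # description 2: cofactors
    (-1) ** (i + j)
    * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
    * X[i, j]
    for i in range(n) for j in range(n)
)

v3 = 0.0                                        # description 3: columns
for j in range(n):
    Aj = A.copy(); Aj[:, j] = X[:, j]           # A(j; X)
    v3 += np.linalg.det(Aj)

eps = 1e-6
fd = (np.linalg.det(A + eps * X) - np.linalg.det(A - eps * X)) / (2 * eps)
assert np.allclose([v1, v2, v3], fd, atol=1e-6)
```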
23 Higher order derivatives: Let $Q_{k,n}$ denote the set of multi-indices $I = (i_1, \ldots, i_k)$ in which $1 \le i_1 < \cdots < i_k \le n$. For $I, J \in Q_{k,n}$:
$A[I|J]$ : the $k \times k$ submatrix obtained from $A$ by picking its entries from the rows $I$ and columns $J$.
$A(I|J)$ : the $(n-k) \times (n-k)$ submatrix obtained from $A$ by deleting rows $I$ and columns $J$.
Given operators $X_1, \ldots, X_k$ on $\mathbb{H}$, consider the operator
$$\frac{1}{k!} \sum_{\sigma \in S_k} X_{\sigma(1)} \otimes X_{\sigma(2)} \otimes \cdots \otimes X_{\sigma(k)}$$
on the space $\otimes^k \mathbb{H}$. This leaves the subspace $\Lambda^k \mathbb{H}$ invariant. The restriction to this subspace is denoted by $X_1 \wedge \cdots \wedge X_k$.
24 $|I|$ denotes the sum $i_1 + \cdots + i_k$. The transpose of the matrix with entries $(-1)^{|I|+|J|} \det A(I|J)$ can be identified as an operator on the space $\Lambda^{n-k} \mathbb{H}$. We call it $\widetilde{\Lambda}^{n-k} A$ (it is unitarily similar to the transpose of $\Lambda^{n-k} A$).
Theorem (Bhatia, Jain; 2009).
1. For $1 \le k \le n$,
$$D^k \det A(X_1, \ldots, X_k) = k! \operatorname{tr}\big[(\widetilde{\Lambda}^{n-k} A)(X_1 \wedge \cdots \wedge X_k)\big].$$
When $k = 1$ this reduces to
$$D\det A(X) = \operatorname{tr}\big((\widetilde{\Lambda}^{n-1} A) X\big).$$
25 Theorem.
2. For $1 \le k \le n$, we have
$$D^k \det A(X_1, \ldots, X_k) = \sum_{I, J \in Q_{k,n}} \sum_{\sigma \in S_k} (-1)^{|I|+|J|} \det A(I|J) \, \det\big((Y_\sigma[J])[I|J]\big),$$
where $Y_\sigma[J]$ is the $n \times n$ matrix whose $j_p$-th column is the $j_p$-th column of $X_{\sigma(p)}$ for $1 \le p \le k$, and whose remaining $n-k$ columns are zero. When $k = 1$ this reduces to
$$D\det A(X) = \sum_{i,j=1}^{n} (-1)^{i+j} \det A(i|j) \, X_{ij}.$$
26 Theorem.
3. Let $A(J; X_1, \ldots, X_k)$ be the matrix obtained from $A$ by replacing the $j_p$-th column of $A$ by the $j_p$-th column of $X_p$ for $1 \le p \le k$, and keeping the rest of the columns unchanged. Then for $1 \le k \le n$, we have
$$D^k \det A(X_1, \ldots, X_k) = \sum_{J \in Q_{k,n}} \sum_{\sigma \in S_k} \det A(J; X_{\sigma(1)}, \ldots, X_{\sigma(k)}).$$
When $k = 1$ this reduces to
$$D\det A(X) = \sum_{j=1}^{n} \det A(j; X).$$
27 Norm:
Theorem. Let $s_1 \ge s_2 \ge \cdots \ge s_n \ge 0$ be the singular values of $A$. Then, for $1 \le k \le n$, we have
$$\|D^k \det A\| = k! \, p_{n-k}(s_1, \ldots, s_n).$$
Perturbation bound:
Corollary. Let $A$ and $B$ be $n \times n$ matrices. Then
$$|\det(A) - \det(B)| \le \sum_{k=1}^{n} p_{n-k}(s_1, \ldots, s_n) \|A - B\|^k,$$
where $s_1 \ge \cdots \ge s_n$ are the singular values of $A$.
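Since $\det$ is a polynomial of degree $n$, its Taylor expansion terminates, so the corollary can be tested exactly on random matrices (my own numerical sketch, with the singular values taken from $A$ and the operator norm for the perturbation):

```python
import numpy as np
from itertools import combinations

# Check |det(A+X) - det A| <= sum_k p_{n-k}(s) ||X||^k,
# s = singular values of A.  np.prod(()) == 1.0 handles p_0 = 1.
def elem_sym(vals, k):
    return sum(np.prod(c) for c in combinations(vals, k))

rng = np.random.default_rng(5)
opnorm = lambda M: np.linalg.norm(M, 2)
n = 4
for _ in range(50):
    A = rng.standard_normal((n, n))
    X = 0.3 * rng.standard_normal((n, n))
    s = np.linalg.svd(A, compute_uv=False)
    lhs = abs(np.linalg.det(A + X) - np.linalg.det(A))
    rhs = sum(elem_sym(s, n - k) * opnorm(X) ** k for k in range(1, n + 1))
    assert lhs <= rhs + 1e-9
```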
28 Antisymmetric tensor power.
Theorem. Let $A \in \mathbb{M}(n)$. Then for $1 \le k \le m \le n$,
$$D^k \Lambda^m(A)(X_1, \ldots, X_k) = k! \sum_{\gamma, \delta \in Q_{m-k,n}} \det A[\gamma|\delta] \, (X_1 \wedge \cdots \wedge X_k)^{(m)}(\gamma, \delta).$$
Here:
$\gamma \subseteq \alpha$ if $\{\gamma_1, \ldots\} \subseteq \{\alpha_1, \ldots, \alpha_m\}$.
$\alpha \setminus \gamma$ denotes the increasingly ordered multi-index whose entries are $\{\alpha_1, \ldots, \alpha_m\} \setminus \{\gamma_1, \ldots\}$.
$\pi_\alpha$ is the permutation on $\{1, 2, \ldots, n\}$ with $\pi_\alpha(\alpha_i) = i$ for all $i = 1, \ldots, m$, and, writing $\alpha' = (1, \ldots, n) \setminus \alpha$, $\pi_\alpha(\alpha'_j) = m + j$ for all $j = 1, \ldots, n - m$.
$(X_1 \wedge \cdots \wedge X_k)^{(m)}(\gamma, \delta)$ : the $\binom{n}{m} \times \binom{n}{m}$ matrix whose indexing set is $Q_{m,n}$ and whose $(\alpha, \beta)$-entry, for $\alpha, \beta \in Q_{m,n}$, is $(-1)^{|\pi_\alpha(\gamma)| + |\pi_\beta(\delta)|}$ times the $(\alpha \setminus \gamma, \beta \setminus \delta)$-entry of $X_1 \wedge \cdots \wedge X_k$ if $\gamma \subseteq \alpha$ and $\delta \subseteq \beta$, and is $0$ otherwise.
29 Norm:
Theorem. Let $A \in \mathbb{M}(n)$. Then for $1 \le k \le m \le n$,
$$\|D^k \Lambda^m A\| = k! \, p_{m-k}(s_1(A), \ldots, s_m(A)),$$
where $s_1(A) \ge \cdots \ge s_n(A)$ are the singular values of $A$.
Perturbation bound:
Corollary. For $n \times n$ matrices $A$ and $X$,
$$\|\Lambda^m(A + X) - \Lambda^m A\| \le \sum_{k=1}^{m} p_{m-k}(s_1(A), \ldots, s_m(A)) \|X\|^k \le \sum_{k=1}^{m} \binom{m}{k} \|A\|^{m-k} \|X\|^k = (\|A\| + \|X\|)^m - \|A\|^m.$$
30 Permanent. We obtain three different expressions for all higher order derivatives of the permanent of a matrix. The permanental adjoint, denoted $\operatorname{padj}(A)$, is the $n \times n$ matrix whose $(i, j)$-entry is $\operatorname{per} A(i|j)$. Similar to the Jacobi formula:
Theorem. For each $X \in \mathbb{M}(n)$,
$$D\operatorname{per}(A)(X) = \operatorname{tr}(\operatorname{padj}(A)^T X).$$
Proof. $D\operatorname{per} A(X)$ is the coefficient of $t$ in $\operatorname{per}(A + tX)$, and $\operatorname{per}$ is a linear function of each of its columns; combine the two.
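The permanental Jacobi formula can be verified with a finite difference (my own numerical sketch; permanents are computed from the definition, so $n$ is kept small):

```python
import numpy as np
from itertools import permutations

# Check D per(A)(X) = tr(padj(A)^T X) against a central difference
# of t -> per(A + tX).  padj(A) has (i,j)-entry per A(i|j).
def per(A):
    n = A.shape[0]
    return sum(
        np.prod([A[i, s[i]] for i in range(n)])
        for s in permutations(range(n))
    )

def padj(A):
    n = A.shape[0]
    return np.array([
        [per(np.delete(np.delete(A, i, 0), j, 1)) for j in range(n)]
        for i in range(n)
    ])

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))

exact = np.trace(padj(A).T @ X)
eps = 1e-6
fd = (per(A + eps * X) - per(A - eps * X)) / (2 * eps)
assert np.isclose(exact, fd, atol=1e-5)
```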
31 Let $G_{k,n}$ denote the set $\{(i_1, \ldots, i_k) : 1 \le i_1 \le \cdots \le i_k \le n\}$. For $k \le n$, $Q_{k,n}$ is a subset of $G_{k,n}$. If $\{e_i\}$ is an orthonormal basis of $\mathbb{H}$, then for $I = (i_1, \ldots, i_k) \in G_{k,n}$ we define $e_I = e_{i_1} \vee \cdots \vee e_{i_k}$. Let $P_k$ be the canonical projection of $\vee^k \mathbb{H}$ onto the subspace generated by $\{e_I : I \in Q_{k,n}\}$. If we vary $I, J$ in $Q_{k,n}$, we get the submatrix $P_k(\vee^k A)P_k$ of $\vee^k A$. The matrix $\operatorname{padj}(A)^T$ can be identified with a submatrix of an operator on the space $\vee^{n-1} \mathbb{H}$; we call this operator $\widetilde{\vee}^{n-1} A$. Then
1. $$D\operatorname{per} A(X) = \operatorname{tr}\big((P_{n-1}(\widetilde{\vee}^{n-1} A)P_{n-1}) X\big).$$
32 Given two elements $I$ and $J$ of $G_{k,n}$, let $A[I|J]$ denote the $k \times k$ matrix whose $(r, s)$-entry is the $(i_r, j_s)$-entry of $A$. In general, $A[I|J]$ is not a submatrix of $A$, unless $I, J \in Q_{k,n}$. The Laplace expansion theorem for the permanent says that for any $I \in Q_{k,n}$,
$$\operatorname{per} A = \sum_{J \in Q_{k,n}} \operatorname{per} A[I|J] \, \operatorname{per} A(I|J).$$
In particular, for any $i$, $1 \le i \le n$,
$$\operatorname{per} A = \sum_{j=1}^{n} a_{ij} \operatorname{per} A(i|j).$$
Using this,
33 
2. $$D\operatorname{per}(A)(X) = \sum_{i=1}^{n} \sum_{j=1}^{n} \operatorname{per} A(i|j) \, X_{ij}.$$
3. $$D\operatorname{per}(A)(X) = \sum_{j=1}^{n} \operatorname{per} A(j; X).$$
34 Higher order derivatives: Again consider the following operator on $\otimes^k \mathbb{H}$:
$$\frac{1}{k!} \sum_{\sigma \in S_k} X_{\sigma(1)} \otimes X_{\sigma(2)} \otimes \cdots \otimes X_{\sigma(k)}.$$
It leaves the space $\vee^k \mathbb{H}$ invariant. We use the notation $X_1 \vee X_2 \vee \cdots \vee X_k$ for the restriction of this operator to the subspace $\vee^k \mathbb{H}$. The transpose of the matrix whose $(I, J)$-entry is $\operatorname{per} A(I|J)$ can be identified as a submatrix of an operator on the space $\vee^{n-k} \mathbb{H}$; we call this operator $\widetilde{\vee}^{n-k} A$.
Theorem.
1. For $1 \le k \le n$,
$$D^k \operatorname{per} A(X_1, \ldots, X_k) = k! \operatorname{tr}\big[(P_{n-k}(\widetilde{\vee}^{n-k} A)P_{n-k})(P_k(X_1 \vee \cdots \vee X_k)P_k)\big].$$
35 Theorem.
2. For $1 \le k \le n$,
$$D^k \operatorname{per} A(X_1, \ldots, X_k) = \sum_{I, J \in Q_{k,n}} \sum_{\sigma \in S_k} \operatorname{per} A(I|J) \, \operatorname{per}\big((Y_\sigma[J])[I|J]\big).$$
In particular,
$$D^k \operatorname{per} A(X, \ldots, X) = k! \sum_{I, J \in Q_{k,n}} \operatorname{per} A(I|J) \, \operatorname{per} X[I|J].$$
Here $Y_\sigma[J]$ is the $n \times n$ matrix whose $j_p$-th column is the $j_p$-th column of $X_{\sigma(p)}$ for $1 \le p \le k$, with the remaining $n-k$ columns zero.
36 Theorem.
3. For $1 \le k \le n$,
$$D^k \operatorname{per} A(X_1, \ldots, X_k) = \sum_{J \in Q_{k,n}} \sum_{\sigma \in S_k} \operatorname{per} A(J; X_{\sigma(1)}, X_{\sigma(2)}, \ldots, X_{\sigma(k)}).$$
In particular,
$$D^k \operatorname{per} A(X, \ldots, X) = k! \sum_{J \in Q_{k,n}} \operatorname{per} A(J; X, \ldots, X).$$
Here $A(J; X_{\sigma(1)}, X_{\sigma(2)}, \ldots, X_{\sigma(k)})$ is the matrix obtained from $A$ by replacing the $j_p$-th column of $A$ by the $j_p$-th column of $X_{\sigma(p)}$ and keeping the rest of the columns unchanged.
37 Theorem. Let $A$ be an $n \times n$ matrix. Then
$$\|D^k \operatorname{per} A\| \le k! \binom{n}{k} \|A\|^{n-k}.$$
Proof.
$$\|D^k \operatorname{per} A\| = k! \sup_{\|X_1\| = \cdots = \|X_k\| = 1} \big|\operatorname{tr}\big[(P_{n-k}(\widetilde{\vee}^{n-k} A)P_{n-k})(P_k(X_1 \vee \cdots \vee X_k)P_k)\big]\big| \le k! \, \|P_{n-k}(\widetilde{\vee}^{n-k} A)P_{n-k}\|_1 \le k! \binom{n}{k} \|P_{n-k}(\widetilde{\vee}^{n-k} A)P_{n-k}\|.$$
Now use $\|\widetilde{\vee}^{n-k} A\| = \|\vee^{n-k} A\| = \|A\|^{n-k}$.
38 As a corollary, we obtain a perturbation bound for $\operatorname{per}$.
Corollary. Let $A$ and $X$ be $n \times n$ matrices. Then
$$|\operatorname{per}(A + X) - \operatorname{per} A| \le \sum_{k=1}^{n} \binom{n}{k} \|A\|^{n-k} \|X\|^k.$$
Consider the simplest commutative case: $A = I$, $X = xI$. Then the expression on both sides of the inequality is
$$\sum_{k=1}^{n} \binom{n}{k} x^k,$$
so no improvement on the bound is possible.
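The sharpness example can be checked in a few lines (my own numerical sketch): for $A = I$, $X = xI$ both sides equal $(1+x)^n - 1$.

```python
import numpy as np
from itertools import permutations
from math import comb

# For A = I, X = xI the permanent bound holds with equality:
# per((1+x)I) - per(I) = (1+x)^n - 1 = sum_k C(n,k) x^k.
def per(A):
    n = A.shape[0]
    return sum(
        np.prod([A[i, s[i]] for i in range(n)])
        for s in permutations(range(n))
    )

n, x = 4, 0.7
lhs = per((1 + x) * np.eye(n)) - per(np.eye(n))
rhs = sum(comb(n, k) * x ** k for k in range(1, n + 1))
assert np.isclose(lhs, rhs)
assert np.isclose(lhs, (1 + x) ** n - 1)
```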
39 Symmetric tensor power. For multi-indices in $G$:
$\gamma \subseteq \alpha$ : if $\alpha_l$ occurs in $\alpha$, say, $d_\alpha$ times, then $\alpha_l$ cannot occur in $\gamma$ more than $d_\alpha$ times.
$\alpha \setminus \gamma$ denotes the element $(\gamma'_1, \ldots, \gamma'_{m-k}) \in G_{m-k,n}$ in which each $\gamma'_l \in \{\alpha_1, \ldots, \alpha_m\}$ occurs exactly $d_\alpha - d_\gamma$ times.
$(X_1 \vee \cdots \vee X_k)^{(m)}(\gamma, \delta)$ : the $\binom{n+m-1}{m} \times \binom{n+m-1}{m}$ matrix whose indexing set is $G_{m,n}$ and whose $(\alpha, \beta)$-entry, for $\alpha, \beta \in G_{m,n}$, is
$$\Big(\frac{m(\alpha \setminus \gamma)\, m(\beta \setminus \delta)}{m(\alpha)\, m(\beta)}\Big)^{1/2}$$
times the $(\alpha \setminus \gamma, \beta \setminus \delta)$-entry of $X_1 \vee \cdots \vee X_k$ if $\gamma \subseteq \alpha$ and $\delta \subseteq \beta$, and zero otherwise.
Theorem. Let $A \in \mathbb{M}(n)$. Then for $1 \le k \le m \le n$,
$$D^k \vee^m(A)(X_1, \ldots, X_k) = k! \sum_{\gamma, \delta \in G_{m-k,n}} \operatorname{per} A[\gamma|\delta] \, (X_1 \vee \cdots \vee X_k)^{(m)}(\gamma, \delta).$$
40 Norm:
Theorem. Let $A \in \mathbb{M}(n)$. Then for $1 \le k \le m \le n$,
$$\|D^k \vee^m A\| = \frac{m!}{(m-k)!} \|A\|^{m-k}.$$
Perturbation bound:
Corollary. For $n \times n$ matrices $A$ and $X$,
$$\|\vee^m(A + X) - \vee^m A\| \le (\|A\| + \|X\|)^m - \|A\|^m.$$
41 THANK YOU! 41 / 41
Practice Math 110 Final Instructions: Work all of problems 1 through 5, and work any 5 of problems 10 through 16. 1. Let A = 3 1 1 3 3 2. 6 6 5 a. Use Gauss elimination to reduce A to an upper triangular
More information4.9 Markov matrices. DEFINITION 4.3 A real n n matrix A = [a ij ] is called a Markov matrix, or row stochastic matrix if. (i) a ij 0 for 1 i, j n;
49 Markov matrices DEFINITION 43 A real n n matrix A = [a ij ] is called a Markov matrix, or row stochastic matrix if (i) a ij 0 for 1 i, j n; (ii) a ij = 1 for 1 i n Remark: (ii) is equivalent to AJ n
More information1 Introduction to Matrices
1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns
More informationWHICH LINEARFRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE?
WHICH LINEARFRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? JOEL H. SHAPIRO Abstract. These notes supplement the discussion of linear fractional mappings presented in a beginning graduate course
More informationLINEAR ALGEBRA. September 23, 2010
LINEAR ALGEBRA September 3, 00 Contents 0. LUdecomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................
More informationCHAPTER III  MARKOV CHAINS
CHAPTER III  MARKOV CHAINS JOSEPH G. CONLON 1. General Theory of Markov Chains We have already discussed the standard random walk on the integers Z. A Markov Chain can be viewed as a generalization of
More informationUndergraduate Matrix Theory. Linear Algebra
Undergraduate Matrix Theory and Linear Algebra a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn John S Alin Linfield College Colin L Starr Willamette University December 15, 2015 ii Contents 1 SYSTEMS OF LINEAR
More informationMath 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.
Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(
More informationNOTES on LINEAR ALGEBRA 1
School of Economics, Management and Statistics University of Bologna Academic Year 205/6 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura
More informationA matrix over a field F is a rectangular array of elements from F. The symbol
Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F) denotes the collection of all m n matrices over F Matrices will usually be denoted
More informationSolutions to Review Problems
Chapter 1 Solutions to Review Problems Chapter 1 Exercise 42 Which of the following equations are not linear and why: (a x 2 1 + 3x 2 2x 3 = 5. (b x 1 + x 1 x 2 + 2x 3 = 1. (c x 1 + 2 x 2 + x 3 = 5. (a
More informationMath 315: Linear Algebra Solutions to Midterm Exam I
Math 35: Linear Algebra s to Midterm Exam I # Consider the following two systems of linear equations (I) ax + by = k cx + dy = l (II) ax + by = 0 cx + dy = 0 (a) Prove: If x = x, y = y and x = x 2, y =
More information1 Scalars, Vectors and Tensors
DEPARTMENT OF PHYSICS INDIAN INSTITUTE OF TECHNOLOGY, MADRAS PH350 Classical Physics Handout 1 8.8.2009 1 Scalars, Vectors and Tensors In physics, we are interested in obtaining laws (in the form of mathematical
More informationNUMERICALLY EFFICIENT METHODS FOR SOLVING LEAST SQUARES PROBLEMS
NUMERICALLY EFFICIENT METHODS FOR SOLVING LEAST SQUARES PROBLEMS DO Q LEE Abstract. Computing the solution to Least Squares Problems is of great importance in a wide range of fields ranging from numerical
More information1.5 Elementary Matrices and a Method for Finding the Inverse
.5 Elementary Matrices and a Method for Finding the Inverse Definition A n n matrix is called an elementary matrix if it can be obtained from I n by performing a single elementary row operation Reminder:
More informationAlgebraic and Combinatorial Circuits
C H A P T E R Algebraic and Combinatorial Circuits Algebraic circuits combine operations drawn from an algebraic system. In this chapter we develop algebraic and combinatorial circuits for a variety of
More informationInner Product Spaces and Orthogonality
Inner Product Spaces and Orthogonality week 34 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,
More informationEigenvalues and Markov Chains
Eigenvalues and Markov Chains Will Perkins April 15, 2013 The Metropolis Algorithm Say we want to sample from a different distribution, not necessarily uniform. Can we change the transition rates in such
More informationMarkov Chains, part I
Markov Chains, part I December 8, 2010 1 Introduction A Markov Chain is a sequence of random variables X 0, X 1,, where each X i S, such that P(X i+1 = s i+1 X i = s i, X i 1 = s i 1,, X 0 = s 0 ) = P(X
More informationAn Advanced Course in Linear Algebra. Jim L. Brown
An Advanced Course in Linear Algebra Jim L. Brown July 20, 2015 Contents 1 Introduction 3 2 Vector spaces 4 2.1 Getting started............................ 4 2.2 Bases and dimension.........................
More information1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each)
Math 33 AH : Solution to the Final Exam Honors Linear Algebra and Applications 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) (1) If A is an invertible
More informationInterpolating Polynomials Handout March 7, 2012
Interpolating Polynomials Handout March 7, 212 Again we work over our favorite field F (such as R, Q, C or F p ) We wish to find a polynomial y = f(x) passing through n specified data points (x 1,y 1 ),
More informationBasics Inversion and related concepts Random vectors Matrix calculus. Matrix algebra. Patrick Breheny. January 20
Matrix algebra January 20 Introduction Basics The mathematics of multiple regression revolves around ordering and keeping track of large arrays of numbers and solving systems of equations The mathematical
More information