Linear Algebra Concepts


University of Central Florida, School of Electrical Engineering and Computer Science
EGN-3420 Engineering Analysis, Fall 2009 - dcm

Vector Spaces

To define the concept of a vector space we first need to introduce two basic algebraic structures, the group and the field.

A group is a set G with one binary operation ∗, called multiplication, which satisfies three conditions:

1. Associative law: a ∗ (b ∗ c) = (a ∗ b) ∗ c for all a, b, c ∈ G.
2. Identity element: there is an identity element e ∈ G such that a ∗ e = e ∗ a = a for all a ∈ G.
3. Inverse element: for every a ∈ G there is an inverse a⁻¹ ∈ G such that a ∗ a⁻¹ = a⁻¹ ∗ a = e.

A group G whose operation satisfies the commutative law (i.e., a ∗ b = b ∗ a) is a commutative, or Abelian, group.

A field is a set F equipped with two binary operations, addition and multiplication, with the following properties:

1. Under addition, F is an Abelian group with identity (or neutral) element 0, such that 0 + a = a for all a ∈ F.
2. Under multiplication, the nonzero elements form an Abelian group with neutral element 1, such that 1 · a = a and 0 · a = 0 for all a ∈ F. The additive and multiplicative identity elements are distinct, 0 ≠ 1.
3. The distributive law holds: a · (b + c) = a · b + a · c.

A vector space A consists of three objects:

1. An Abelian group (V, +) whose elements are called vectors and whose binary operation + is called addition,
2. A field F (usually R, the real numbers, or C, the complex numbers), whose elements are called scalars,
3. An operation called multiplication by scalars, denoted by ·, which associates to any scalar c ∈ F and vector α ∈ V a new vector c · α ∈ V.

Multiplication by scalars has the following properties, where α, β ∈ V and c, c′ ∈ F:

c · (α + β) = c · α + c · β
(c + c′) · α = c · α + c′ · α
(c c′) · α = c · (c′ · α)
1 · α = α
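These axioms are easy to check numerically for the familiar case V = Rⁿ, F = R. A minimal sketch (NumPy, with arbitrarily chosen vectors and scalars):

```python
import numpy as np

# Two vectors in R^3 and two scalars, chosen arbitrarily.
alpha = np.array([1.0, -2.0, 0.5])
beta = np.array([3.0, 0.0, 4.0])
c, c2 = 2.5, -1.5

# The four properties of multiplication by scalars.
assert np.allclose(c * (alpha + beta), c * alpha + c * beta)  # c(α + β) = cα + cβ
assert np.allclose((c + c2) * alpha, c * alpha + c2 * alpha)  # (c + c′)α = cα + c′α
assert np.allclose((c * c2) * alpha, c * (c2 * alpha))        # (cc′)α = c(c′α)
assert np.allclose(1 * alpha, alpha)                          # 1·α = α
```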

Observations:

(a) Often we omit the symbol · and write the product of two scalars as cc′ instead of c · c′, and the product of a scalar with a vector as cα instead of c · α.

(b) Given n scalars {c₁, c₂, ..., cₙ} ⊂ R, the set of n vectors {α₁, α₂, ..., αₙ} ⊂ Rⁿ is linearly independent if

c₁α₁ + c₂α₂ + ... + cₙαₙ = 0  ⟹  c₁ = c₂ = ... = cₙ = 0.

Vectors that are not linearly independent are called linearly dependent.

A subspace S of a vector space A is a subset of A which is closed with respect to the operations of addition and scalar multiplication. This means that the sum of two vectors in S is in S, and that for any vector α ∈ S and any scalar c ∈ R the product cα is also in S.

Examples of subspaces:

1. The set of polynomials of degree at most m is a subspace of the vector space of all polynomials, which in turn is a subspace of the vector space of complex-valued continuous functions on R.
2. The set of all continuous functions f(x) defined for 0 ≤ x ≤ 2 is a subspace of the linear space of all functions defined on the same domain.

The set of all linear combinations of any set of vectors of a vector space A is a subspace of A. Given c, c₁, c₂, ..., cₘ, c′₁, c′₂, ..., c′ₘ ∈ F, the following two identities allow us to prove this statement:

(c₁α₁ + c₂α₂ + ... + cₘαₘ) + (c′₁α₁ + c′₂α₂ + ... + c′ₘαₘ) = (c₁ + c′₁)α₁ + (c₂ + c′₂)α₂ + ... + (cₘ + c′ₘ)αₘ

c(c₁α₁ + c₂α₂ + ... + cₘαₘ) = (cc₁)α₁ + (cc₂)α₂ + ... + (ccₘ)αₘ

The subspace consisting of all linear combinations of a set of vectors of A is the smallest subset (the one with the smallest cardinality) containing all the given vectors; we say that the set of vectors spans the subspace. A linearly independent subset of vectors which spans the whole space is called a basis of the vector space. A vector space A is finite dimensional if and only if it has a finite basis. If {b₁, b₂, ..., bₘ} and {c₁, c₂, ..., cₙ} are two bases for a vector space and m and n are finite, then m = n. The minimum number of vectors in any basis of a finite-dimensional vector space A is called the dimension of the vector space, dim(A). For example, the ordinary space R³ can be spanned by the three vectors (1, 0, 0), (0, 1, 0), (0, 0, 1).
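A practical way to test linear independence numerically is to stack the vectors as the rows of a matrix and compare the rank of that matrix with the number of vectors. A sketch (NumPy; the vectors are arbitrary examples):

```python
import numpy as np

def linearly_independent(vectors):
    """The vectors are independent iff the rank of the stacked
    matrix equals the number of vectors."""
    M = np.vstack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True: a basis of R^3
print(linearly_independent([(1, 2, 3), (2, 4, 6)]))             # False: second = 2 * first
```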

n-dimensional Real Euclidean Vector Space

Consider now an n-dimensional real vector space, Rⁿ. If for every pair of vectors α, β ∈ Rⁿ we have an associated real number (α, β) such that the following four conditions are satisfied:

1. (α, β) = (β, α)
2. (cα, β) = c(α, β), if c ∈ R
3. (α + γ, β) = (α, β) + (γ, β), for all γ ∈ Rⁿ
4. (α, α) ≥ 0, and (α, α) = 0 if and only if α = 0

then we say that we have an n-dimensional Euclidean space and that (α, β) is the inner product of the vectors α and β. All the facts known from Euclidean geometry can be established in a Euclidean space.

The length of a vector α in a Euclidean space is defined to be the real number

‖α‖ = √(α, α).

The angle between two vectors α and β is the real number

θ = arccos[(α, β) / (‖α‖ ‖β‖)], i.e., cos θ = (α, β) / (‖α‖ ‖β‖).

If (α, β) = 0, where α ≠ 0 and β ≠ 0, then cos θ = 0 and θ = π/2. Two vectors α and β in a Euclidean space are orthogonal if (α, β) = 0.

A set of n vectors E = {e₁, e₂, ..., eₙ} is an orthogonal basis in an n-dimensional Euclidean space if the vectors in the set are pairwise orthogonal. If, in addition, each vector has unit length, then the vectors form an orthonormal basis and satisfy the condition

(eᵢ, eⱼ) = δᵢⱼ = { 1 if i = j; 0 if i ≠ j },  1 ≤ i, j ≤ n,

with δᵢⱼ the Kronecker delta function. It is relatively easy to show that every n-dimensional Euclidean space contains orthogonal bases.

Given an orthonormal basis {e₁, e₂, ..., eₙ} of an n-dimensional Euclidean space we can express any two vectors as

α = a₁e₁ + a₂e₂ + ... + aₙeₙ and β = b₁e₁ + b₂e₂ + ... + bₙeₙ.

Then we use the fact that (eᵢ, eⱼ) = δᵢⱼ to show that

(α, eᵢ) = aᵢ and (α, β) = Σᵢ₌₁ⁿ aᵢbᵢ.

The simplest functions defined on vector spaces are the linear transformations. The function A(α) is a linear form (function) if

A(α + β) = A(α) + A(β) and A(cα) = cA(α).
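The sketch below (NumPy, with arbitrary example vectors) computes the length and angle just defined, and checks that in the standard orthonormal basis of R³ the coordinates of a vector are recovered as aᵢ = (α, eᵢ):

```python
import numpy as np

alpha = np.array([1.0, 2.0, 2.0])
beta = np.array([3.0, 0.0, 4.0])

length_a = np.sqrt(alpha @ alpha)   # ‖α‖ = √(α, α)
length_b = np.sqrt(beta @ beta)
theta = np.arccos((alpha @ beta) / (length_a * length_b))  # angle between α and β

# The standard basis of R^3 is orthonormal: (e_i, e_j) = δ_ij.
E = np.eye(3)
coords = np.array([alpha @ E[i] for i in range(3)])  # a_i = (α, e_i)
assert np.allclose(coords, alpha)
assert np.isclose(alpha @ beta, np.sum(coords * beta))  # (α, β) = Σ a_i b_i
```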

Consider two vector spaces over the same field F, called A and B. Let α ∈ A, β ∈ B, c ∈ F, and A(α) ∈ B; that is, we are given a function A which maps vectors in A to vectors in B. The function A(α; β) is said to be a bilinear form (function) of the vectors α and β if

1. for any fixed β, A(α; β) is a linear function of α,
2. for any fixed α, A(α; β) is a linear function of β.

This implies that if A and B are two vector spaces over the same field F and we consider variable vectors α, α′ ∈ A and β, β′ ∈ B and scalars a, b, c, d ∈ F, then the function A(α; β) with values in F is a bilinear function if

A(aα + bα′; β) = aA(α; β) + bA(α′; β)
A(α; cβ + dβ′) = cA(α; β) + dA(α; β′)

A bilinear function is symmetric if

A(α; β) = A(β; α).

The inner product of two vectors in a Euclidean vector space is an example of a symmetric bilinear function.

If A(α; β) is a symmetric bilinear form, then the function A(α; α) is called a quadratic form. A quadratic form A(α; α) is positive definite if A(α; α) > 0 for every vector α ≠ 0. The bilinear form A(α; β) is called the polar form associated with the quadratic form A(α; α). It can be shown that A(α; β) is uniquely determined by its quadratic form.

Now we can provide an alternative definition of a Euclidean vector space as a vector space with a positive definite quadratic form A(α; α). The inner product (α, β) of two vectors is the value of the bilinear form A(α; β) associated with A(α; α).

Linear Operators and Matrices

A rectangular array A of elements of a field F with m rows and n columns is called a matrix:

A = [ a₁₁ a₁₂ ... a₁ₙ
      a₂₁ a₂₂ ... a₂ₙ
      ...
      aₘ₁ aₘ₂ ... aₘₙ ]

Matrix A can be interpreted as a linear map from Fⁿ, the vector space of dimension n, to Fᵐ, the vector space of dimension m, each equipped with the canonical basis. If n = m, then A is a linear map from Fⁿ to itself.

Let α₁ = (a₁₁ a₁₂ ... a₁ₙ), α₂ = (a₂₁ a₂₂ ... a₂ₙ), ..., αₘ = (aₘ₁ aₘ₂ ... aₘₙ) be a set of vectors spanning a subspace of dimension at most m of a vector space Vₙ(F), called the row space of the m × n matrix A.
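As an illustration, a symmetric matrix S defines a symmetric bilinear form A(α; β) = αᵀSβ, and A(α; α) = αᵀSα is the associated quadratic form. A NumPy sketch with an arbitrarily chosen S; positive definiteness is checked here through the eigenvalues of S:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # symmetric matrix, chosen arbitrarily

def bilinear(a, b):
    return a @ S @ b         # A(α; β) = αᵀ S β, linear in each argument

a = np.array([1.0, -1.0])
b = np.array([2.0, 0.5])

assert np.isclose(bilinear(a, b), bilinear(b, a))  # symmetric: A(α; β) = A(β; α)
print(bilinear(a, a))                              # quadratic form A(α; α) = 3.0 > 0
print(np.all(np.linalg.eigvalsh(S) > 0))           # True: S is positive definite
```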

The elementary row operations on a matrix are:

1. the interchange of any two rows,
2. the multiplication of all the elements of a row by a nonzero constant c ∈ F,
3. the addition of any multiple of a row to any other row.

Two matrices are row-equivalent if one is obtained from the other by a finite sequence of row operations.

In the case n = m, the identity matrix I = [aᵢⱼ] is the matrix with aᵢᵢ = 1 and aᵢⱼ = 0 if i ≠ j. For example, the identity matrix in a vector space of dimension 8 is

I₈ = [ 1 0 0 0 0 0 0 0
       0 1 0 0 0 0 0 0
       0 0 1 0 0 0 0 0
       0 0 0 1 0 0 0 0
       0 0 0 0 1 0 0 0
       0 0 0 0 0 1 0 0
       0 0 0 0 0 0 1 0
       0 0 0 0 0 0 0 1 ]

A permutation matrix P is an identity matrix with rows and columns permuted.

The determinant of an n × n matrix A = [aᵢⱼ] is the polynomial

det(A) = |A| = Σφ sgn(φ) a₁,φ(1) a₂,φ(2) ··· aₙ,φ(n),

where the sum is over the n! different permutations φ of the integers 1, 2, ..., n. If φ is an even permutation then sgn(φ) = +1; for an odd permutation, sgn(φ) = −1.

The determinant can also be written as the cofactor expansion

|A| = Aᵢ₁aᵢ₁ + Aᵢ₂aᵢ₂ + ... + Aᵢₙaᵢₙ.

Here the coefficient Aᵢⱼ of aᵢⱼ is called the cofactor of aᵢⱼ. A cofactor is a polynomial in the remaining rows of A and can be described as the partial derivative ∂|A|/∂aᵢⱼ of |A|. The cofactor polynomial contains only entries from the (n − 1) × (n − 1) matrix Mᵢⱼ (also called a minor) obtained from A by eliminating row i and column j.

If we permute two rows of A then the sign of the determinant |A| changes. The determinant of the transpose of a matrix is equal to the determinant of the original matrix:

|Aᵀ| = |A|.

If two rows of A are identical then

|A| = 0.

A square (n × n) matrix is triangular if all entries below the diagonal are zero. The determinant of a triangular matrix is the product of its diagonal elements. To compute the determinant |A| one can perform elementary row operations on matrix A to reduce it to triangular form.

The characteristic polynomial of matrix A is

c(λ) = det(A − λI), or c(λ) = |A − λI|.
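The reduction-to-triangular-form procedure translates directly into code. A minimal Gaussian-elimination sketch: each row interchange flips the sign of the determinant, adding a multiple of one row to another leaves it unchanged, and the result is the signed product of the diagonal of the triangular matrix:

```python
import numpy as np

def det_by_row_reduction(A):
    """Determinant via elementary row operations: reduce A to
    triangular form, tracking sign changes from row interchanges."""
    U = np.array(A, dtype=float)
    n = len(U)
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))     # choose a pivot row (partial pivoting)
        if np.isclose(U[p, k], 0.0):
            return 0.0                          # no pivot in this column: |A| = 0
        if p != k:
            U[[k, p]] = U[[p, k]]               # row interchange flips the sign
            sign = -sign
        for i in range(k + 1, n):
            U[i] -= (U[i, k] / U[k, k]) * U[k]  # add a multiple of row k: |A| unchanged
    return sign * np.prod(np.diag(U))           # product of the diagonal elements

A = [[2, 1, 0], [1, 3, 1], [0, 1, 4]]
print(det_by_row_reduction(A), np.linalg.det(A))  # both ≈ 18.0
```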

The trace of matrix A is the sum of its diagonal elements:

Tr(A) = Σᵢ aᵢᵢ.

The trace of an operator A is the trace of A, the matrix representation of the operator. Given any two matrices A and B over F and a scalar c ∈ F, it is easy to show that the trace has the following properties:

1. Cyclicity: Tr(AB) = Tr(BA).
2. Linearity: Tr(A + B) = Tr(A) + Tr(B) and Tr(cA) = c Tr(A).
3. Invariance under a similarity transformation S: Tr(SAS⁻¹) = Tr(S⁻¹SA) = Tr(A).

The determinant, the trace, and the characteristic polynomial are quantities associated with any linear map from a finite-dimensional vector space into itself.

Now we can express a bilinear form A(β; γ) in terms of the projections b₁, b₂, ..., bₙ and c₁, c₂, ..., cₙ of the two vectors β and γ on the orthonormal basis e₁, e₂, ..., eₙ:

A(β; γ) = A(b₁e₁ + b₂e₂ + ... + bₙeₙ; c₁e₁ + c₂e₂ + ... + cₙeₙ).

But A is a bilinear function, thus

A(β; γ) = Σᵢ,ⱼ₌₁ⁿ A(eᵢ; eⱼ) bᵢcⱼ.

Here A(eᵢ; eⱼ) is a real number that we denote by aᵢⱼ. Then

A(β; γ) = Σᵢ,ⱼ₌₁ⁿ aᵢⱼ bᵢcⱼ.

The matrix with elements aᵢⱼ, A = [aᵢⱼ], 1 ≤ i, j ≤ n, is called the matrix of the bilinear form A(β; γ) relative to the basis e₁, e₂, ..., eₙ.
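A short numerical check of this construction (NumPy sketch; the bilinear form is defined here through an arbitrary matrix M, and sampling it on the standard orthonormal basis recovers M along with the double-sum formula):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [0.0, 3.0]])    # arbitrary matrix defining a bilinear form

def A_form(beta, gamma):
    return beta @ M @ gamma   # A(β; γ) = βᵀ M γ, bilinear in both arguments

E = np.eye(2)                 # standard orthonormal basis e_1, e_2
a = np.array([[A_form(E[i], E[j]) for j in range(2)] for i in range(2)])
assert np.allclose(a, M)      # a_ij = A(e_i; e_j) recovers M

b = np.array([1.0, -2.0])
c = np.array([0.5, 4.0])
double_sum = sum(a[i, j] * b[i] * c[j] for i in range(2) for j in range(2))
assert np.isclose(double_sum, A_form(b, c))  # A(β; γ) = Σ a_ij b_i c_j
```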

Hermitian Operators in a Complex n-dimensional Euclidean Vector Space

It is necessary to consider vector spaces over fields other than R, the field of real numbers. Cⁿ, the n-dimensional vector space over C, the field of complex numbers, is of particular interest. The concepts of linear and bilinear transformations introduced for an n-dimensional Euclidean vector space over the field of real numbers extend to finite-dimensional vector spaces over other fields. For example, the inner product in a vector space Cⁿ over the field C of complex numbers is a map which, for every α, β ∈ Cⁿ, in addition to the bilinearity property satisfies three conditions:

(α, β) = (β, α)*
(α, α) ≥ 0
(α, α) = 0 ⟹ α = 0.

Recall that c = (α, β) is a complex number, c = a + ib with i = √−1, and that its complex conjugate is c* = a − ib.

The inner product in a finite-dimensional vector space induces a norm, but a norm may exist even if an inner product is not defined. A finite-dimensional vector space with a norm is a Banach space. The inner product permits us to measure the length of a vector and the angle between two vectors. If α, β ∈ Cⁿ and c ∈ C, then the norm ‖·‖ is a non-negative function with the following properties:

‖α + β‖ ≤ ‖α‖ + ‖β‖
‖cα‖ = |c| ‖α‖
‖α‖ = 0 if and only if α = 0.

We define orthogonality and orthonormal vector bases in an n-dimensional Euclidean vector space over the field of complex numbers similarly to the ones defined for an n-dimensional Euclidean vector space over the field of real numbers.

We now introduce Hermitian operators, a concept analogous to the symmetric bilinear form in a real n-dimensional Euclidean space. If α, β ∈ Cⁿ then the bilinear form A(α; β) is called Hermitian if

A(α; β) = A(β; α)*.

A bilinear form has a matrix associated with it, and the necessary and sufficient condition for an operator A to be Hermitian is that its matrix A = [aᵢⱼ] relative to some basis e₁, e₂, ..., eₙ satisfies the condition

aᵢⱼ = aⱼᵢ*.

The adjoint operator A* associated with a bilinear form A is defined by A*(α; β) = A(β; α)*. If α ∈ Cⁿ then by definition α* = (ᾱ)ᵀ, the transpose of the complex conjugate of α. If A is the matrix representation of the linear operator A and we want to obtain the adjoint matrix, we first construct the complex conjugate matrix Ā and then take its transpose:

A* = (Ā)ᵀ.
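Before the worked example below, a quick numerical check of these inner product and norm properties (NumPy sketch with arbitrary complex vectors; note that np.vdot conjugates its first argument):

```python
import numpy as np

alpha = np.array([1 + 2j, 3 - 1j])
beta = np.array([0 + 1j, 2 + 2j])

ip = np.vdot(alpha, beta)                 # (α, β); np.vdot conjugates its first argument
assert np.isclose(ip, np.conj(np.vdot(beta, alpha)))  # (α, β) = (β, α)*
assert np.vdot(alpha, alpha).real >= 0                # (α, α) ≥ 0, and it is real

norm = np.sqrt(np.vdot(alpha, alpha).real)            # the induced norm ‖α‖
c = 2 - 3j
assert np.isclose(np.sqrt(np.vdot(c * alpha, c * alpha).real),
                  abs(c) * norm)                      # ‖cα‖ = |c| ‖α‖
```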

For example, the adjoint of the matrix

A = [  1 − 5i   1 + i
      −1 + 3i    7i  ]

is obtained by first forming the complex conjugate

Ā = [  1 + 5i   1 − i
      −1 − 3i   −7i  ]

and then transposing:

A* = (Ā)ᵀ = [ 1 + 5i   −1 − 3i
              1 − i     −7i   ].

The adjoint of the identity matrix

I₃ = [ 1 0 0
       0 1 0
       0 0 1 ]

is I₃ itself, and the adjoint of a matrix A with real elements (a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p) ∈ R,

A = [ a b c d
      e f g h
      i j k l
      m n o p ]

is its transpose,

A* = [ a e i m
       b f j n
       c g k o
       d h l p ].

An operator A is normal if

AA* = A*A.

If A is a Hermitian (self-adjoint) operator, then it is also a normal operator. A matrix U is unitary if

U*U = Iₙ,

where Iₙ is the identity matrix in an n-dimensional vector space:

Iₙ = [ 1 0 0 ... 0
       0 1 0 ... 0
       0 0 1 ... 0
       ...
       0 0 0 ... 1 ]
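These definitions are easy to test numerically. The sketch below (NumPy; it reuses the 2 × 2 example above and an arbitrary plane rotation as the unitary example) checks A* = (Ā)ᵀ, that a Hermitian matrix is normal, and that U*U = I:

```python
import numpy as np

A = np.array([[1 - 5j, 1 + 1j],
              [-1 + 3j, 7j]])
A_adj = A.conj().T                        # A* = (Ā)ᵀ, the conjugate transpose
print(A_adj)                              # [[1+5j, -1-3j], [1-1j, -7j]]

H = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])               # Hermitian: a_ij = a_ji*
assert np.allclose(H, H.conj().T)
assert np.allclose(H @ H.conj().T, H.conj().T @ H)  # Hermitian ⟹ normal: AA* = A*A

t = 0.3                                   # arbitrary angle; plane rotations are unitary
U = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t), np.cos(t)]])
assert np.allclose(U.conj().T @ U, np.eye(2))       # U*U = I_n
```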

Unitary operators preserve the inner product of vectors: if α, β ∈ Cⁿ, then

(Uα, Uβ) = (α, β).

Usually we write the inner product using the bracket symbol ⟨·, ·⟩. With this convention the previous equation becomes

⟨Uα, Uβ⟩ = ⟨α, β⟩.

Given two operators A and B, the commutator of the two operators is

[A, B] = AB − BA.

The operator A commutes with B if

[A, B] = 0.

The anti-commutator of A and B is

{A, B} = AB + BA.

The operator A anti-commutes with B if

{A, B} = 0.
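A final sketch (NumPy; the matrices are the standard Pauli matrices σ_x and σ_y, used here only as a convenient example of an anti-commuting pair):

```python
import numpy as np

def commutator(A, B):
    return A @ B - B @ A        # [A, B] = AB − BA

def anti_commutator(A, B):
    return A @ B + B @ A        # {A, B} = AB + BA

# Pauli matrices: Hermitian, unitary, and pairwise anti-commuting.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

print(commutator(sx, sy))                       # nonzero (= 2i·σ_z): σ_x and σ_y do not commute
assert np.allclose(anti_commutator(sx, sy), 0)  # σ_x anti-commutes with σ_y
```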