Tensor product of vector spaces

Construction

Let V, W be vector spaces over K = ℝ or ℂ. Let F denote the vector space freely generated by the set V × W and let N ⊆ F denote the subspace spanned by the elements of the form

    (αv₁ + v₂, w) − α(v₁, w) − (v₂, w),
    (v, αw₁ + w₂) − α(v, w₁) − (v, w₂),                                   (1)

where v, v₁, v₂ ∈ V, w, w₁, w₂ ∈ W and α ∈ K. By definition, the tensor product of V and W is the quotient vector space

    V ⊗ W := F/N.

Elements

For v ∈ V and w ∈ W, denote v ⊗ w := [(v, w)], the equivalence class of (v, w). These are special elements of V ⊗ W, called pure tensor products. Since the pairs (v, w) span F, the pure tensor products v ⊗ w span V ⊗ W.

1. Pure tensor products fulfil

    (αv₁ + v₂) ⊗ w = α(v₁ ⊗ w) + v₂ ⊗ w,
    v ⊗ (αw₁ + w₂) = α(v ⊗ w₁) + v ⊗ w₂.                                  (2)

Proof.

    (αv₁ + v₂) ⊗ w = [(αv₁ + v₂, w)]           (definition of ⊗)
                   = [α(v₁, w) + (v₂, w)]      (choose another representative of the same class)
                   = α[(v₁, w)] + [(v₂, w)]    (definition of the linear structure on F/N)
                   = α(v₁ ⊗ w) + v₂ ⊗ w.

2. Every element of V ⊗ W is a finite sum of pure tensor products.

Proof. The elements of F are finite sums of the form ∑ᵢ αᵢ(vᵢ, wᵢ) for some αᵢ ∈ K, vᵢ ∈ V and wᵢ ∈ W. Passing to classes, we find that the elements of V ⊗ W are of the form

    [∑ᵢ αᵢ(vᵢ, wᵢ)] = ∑ᵢ αᵢ[(vᵢ, wᵢ)]          (definition of the linear structure on F/N)
                    = ∑ᵢ αᵢ(vᵢ ⊗ wᵢ)           (notation v ⊗ w)
                    = ∑ᵢ (αᵢvᵢ) ⊗ wᵢ.          (property (2))
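In the finite-dimensional case one can make the construction concrete by identifying v ⊗ w with the Kronecker product of coordinate vectors. This identification is only justified by the product-basis result further below, so the following NumPy snippet is merely an illustrative sketch, not part of the formal development:

import numpy as np

# Model pure tensors of coordinate vectors by Kronecker products.
v1 = np.array([1.0, 2.0])
v2 = np.array([0.0, 3.0])
w  = np.array([1.0, -1.0, 4.0])
alpha = 2.5

# Bilinearity rule (2) in the first argument:
lhs = np.kron(alpha * v1 + v2, w)              # (αv₁ + v₂) ⊗ w
rhs = alpha * np.kron(v1, w) + np.kron(v2, w)  # α(v₁ ⊗ w) + v₂ ⊗ w
assert np.allclose(lhs, rhs)

# A generic element of V ⊗ W is a finite sum of pure tensors:
x = 1.5 * np.kron(v1, w) - 2.0 * np.kron(v2, w)
print(x.shape)  # (6,), i.e. dim(V) · dim(W) coordinates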

Universal property

Let X be a further vector space. For every bilinear mapping f : V × W → X, there exists a unique linear mapping f̃ : V ⊗ W → X such that

    f̃(v ⊗ w) = f(v, w)   for all v ∈ V and w ∈ W.                        (3)

Proof. (Existence) Define a linear mapping f̂ : F → X by

    f̂(∑ᵢ αᵢ(vᵢ, wᵢ)) := ∑ᵢ αᵢ f(vᵢ, wᵢ).

(Here f̂ is the linear extension of the mapping defined by f on the basis V × W of F.) Bilinearity of f implies f̂(N) = 0:

    f̂((αv₁ + v₂, w) − α(v₁, w) − (v₂, w)) = f(αv₁ + v₂, w) − αf(v₁, w) − f(v₂, w) = 0,

and similarly for the other type of spanning vectors. Hence, f̂ descends to a linear mapping

    f̃ : F/N = V ⊗ W → X,   f̃([x]) := f̂(x).

By construction,

    f̃(v ⊗ w) = f̃([(v, w)])   (notation v ⊗ w)
              = f̂((v, w))     (definition of f̃)
              = f(v, w).      (definition of f̂)

(Uniqueness) Since V ⊗ W is spanned by the pure tensor products, every linear mapping on it is determined by its values on such elements.

Tensor product of operators

For every pair of linear operators A on V and B on W, there exists a unique linear operator A ⊗ B on V ⊗ W such that

    (A ⊗ B)(v ⊗ w) = (Av) ⊗ (Bw).

Proof. Define a mapping

    f_{A,B} : V × W → V ⊗ W,   f_{A,B}(v, w) := (Av) ⊗ (Bw).

f_{A,B} is bilinear:

    f_{A,B}(αv₁ + v₂, w) = (A(αv₁ + v₂)) ⊗ (Bw)              (definition of f_{A,B})
                         = (αAv₁ + Av₂) ⊗ (Bw)               (A is linear)
                         = α(Av₁) ⊗ (Bw) + (Av₂) ⊗ (Bw)      (property (2))
                         = αf_{A,B}(v₁, w) + f_{A,B}(v₂, w), (definition of f_{A,B})

and similarly for a linear combination in the second argument. Hence, by the universal property of the tensor product, there exists a unique linear mapping f̃_{A,B} : V ⊗ W → V ⊗ W such that

    f̃_{A,B}(v ⊗ w) = f_{A,B}(v, w) = (Av) ⊗ (Bw).

Put A ⊗ B := f̃_{A,B}.

Remark: There are many operators on V ⊗ W which are not of the form A ⊗ B with A and B being operators on V and W, respectively. For example, if W = V, the bilinear mapping V × V → V ⊗ V defined by (v, w) ↦ w ⊗ v induces an operator on V ⊗ V (check this) which is not of that form.

Tensor product of linear functionals

For every pair of linear functionals α ∈ V* and β ∈ W*, there exists a unique linear functional α ⊗ β ∈ (V ⊗ W)* such that

    (α ⊗ β)(v ⊗ w) = α(v)β(w).

Proof. Check that the mapping

    f_{α,β} : V × W → K,   f_{α,β}(v, w) := α(v)β(w)

is bilinear and put α ⊗ β := f̃_{α,β}.

Remark: Consider the mapping

    f : V* × W* → (V ⊗ W)*,   f(α, β) := α ⊗ β.

One can check that it is bilinear. Hence, it induces a linear mapping f̃ : V* ⊗ W* → (V ⊗ W)*, and this mapping is defined by the condition

    f̃(α ⊗ β) = α ⊗ β   for all α ∈ V*, β ∈ W*.

This equation has to be read with care: on the left hand side, the tensor sign has its genuine meaning as the class of the pair (α, β), whereas on the right hand side, it stands for the tensor product of linear functionals defined above. In particular, f̃ is not the identical mapping (it cannot be, because it runs between distinct spaces). However, one can check that f̃ is an isomorphism. Therefore, one may use it to identify V* ⊗ W* with (V ⊗ W)* in a natural way. This justifies (a posteriori) the use of the same notation for the elements of these two spaces.

Product bases

Let {vᵢ : i ∈ I} and {wⱼ : j ∈ J} be bases in V and W, respectively. Then {vᵢ ⊗ wⱼ : i ∈ I, j ∈ J} is a basis in V ⊗ W. In particular, dim(V ⊗ W) = dim(V) · dim(W).
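Before turning to the proof, here is a small numerical check in the Kronecker-product model used in the sketch above (again only an illustration for the finite-dimensional case): both the defining relation for A ⊗ B and the dimension formula can be verified directly.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))   # operator on V ≅ R²
B = rng.standard_normal((3, 3))   # operator on W ≅ R³
v = rng.standard_normal(2)
w = rng.standard_normal(3)

# In coordinates, A ⊗ B corresponds to the Kronecker product of the matrices.
AB = np.kron(A, B)
assert AB.shape == (6, 6)         # dim(V ⊗ W) = dim(V) · dim(W) = 6

# Defining relation: (A ⊗ B)(v ⊗ w) = (Av) ⊗ (Bw).
assert np.allclose(AB @ np.kron(v, w), np.kron(A @ v, B @ w))

# Compare the remark above: the flip v ⊗ w ↦ w ⊗ v on R² ⊗ R² is a 4×4
# permutation matrix which is not of the form A ⊗ B.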

Proof. Denote B := {vᵢ ⊗ wⱼ : i ∈ I, j ∈ J}. It is clear that the linear span of B contains all pure tensor products v ⊗ w. Hence, B spans V ⊗ W. To see that B is linearly independent, consider the dual bases {αᵢ : i ∈ I} in V* and {βⱼ : j ∈ J} in W*, respectively. They are defined by the conditions

    αₖ(vᵢ) = δ_{ki},   βₗ(wⱼ) = δ_{lj}   for all i, k ∈ I and j, l ∈ J.

Given numbers c_{ij} ∈ K, i ∈ I, j ∈ J, with only finitely many c_{ij} ≠ 0, such that ∑_{i,j} c_{ij} vᵢ ⊗ wⱼ = 0, then

    0 = (αₖ ⊗ βₗ)(∑_{i,j} c_{ij} vᵢ ⊗ wⱼ) = ∑_{i,j} c_{ij} (αₖ ⊗ βₗ)(vᵢ ⊗ wⱼ) = ∑_{i,j} c_{ij} αₖ(vᵢ) βₗ(wⱼ) = c_{kl}

for all k ∈ I, l ∈ J. Thus, B is linearly independent, indeed.

Tensor product of Hilbert spaces

Construction

Let H and K be Hilbert spaces over ℂ. (To treat the case of real Hilbert spaces, omit complex conjugation in what follows.) Let H ⊗_alg K denote the tensor product as vector spaces, i.e., H ⊗_alg K = F/N, where F is the vector space freely generated by H × K and N is the subspace spanned by the elements of the form (1). The scalar products ⟨·,·⟩_H on H and ⟨·,·⟩_K on K define a mapping ŝ : F × F → ℂ by

    ŝ((φ, ψ), (φ′, ψ′)) := ⟨φ, φ′⟩_H ⟨ψ, ψ′⟩_K

and extension by antilinearity in the first argument and linearity in the second one, i.e.,

    ŝ(∑ᵢ αᵢ(φᵢ, ψᵢ), ∑ⱼ α′ⱼ(φ′ⱼ, ψ′ⱼ)) = ∑_{i,j} ᾱᵢ α′ⱼ ⟨φᵢ, φ′ⱼ⟩_H ⟨ψᵢ, ψ′ⱼ⟩_K.

Antilinearity of ⟨·,·⟩_H and ⟨·,·⟩_K in the first argument implies ŝ(N × F) = 0:

    ŝ((αφ₁ + φ₂, ψ) − α(φ₁, ψ) − (φ₂, ψ), (φ′, ψ′))
        = ⟨αφ₁ + φ₂, φ′⟩_H ⟨ψ, ψ′⟩_K − ᾱ ⟨φ₁, φ′⟩_H ⟨ψ, ψ′⟩_K − ⟨φ₂, φ′⟩_H ⟨ψ, ψ′⟩_K = 0,

and similarly for the other type of spanning vectors. By an analogous argument, linearity of ⟨·,·⟩_H and ⟨·,·⟩_K in the second argument implies ŝ(F × N) = 0. It follows that ŝ descends to a mapping

    ⟨·,·⟩ : (F/N) × (F/N) = (H ⊗_alg K) × (H ⊗_alg K) → ℂ,   ⟨[x], [y]⟩ := ŝ(x, y),

which is antilinear in the first argument and linear in the second one. We compute

    ⟨φ ⊗ ψ, φ′ ⊗ ψ′⟩ = ⟨[(φ, ψ)], [(φ′, ψ′)]⟩    (notation φ ⊗ ψ)
                     = ŝ((φ, ψ), (φ′, ψ′))       (definition of ⟨·,·⟩)
                     = ⟨φ, φ′⟩_H ⟨ψ, ψ′⟩_K.      (definition of ŝ)        (4)

We claim that ⟨·,·⟩ is a scalar product. It remains to show that:

1. ⟨[x], [y]⟩ is the complex conjugate of ⟨[y], [x]⟩: by (anti)linearity, it suffices to check this for pure tensor products, where

    ⟨φ ⊗ ψ, φ′ ⊗ ψ′⟩ = ⟨φ, φ′⟩_H ⟨ψ, ψ′⟩_K

is indeed the complex conjugate of ⟨φ′, φ⟩_H ⟨ψ′, ψ⟩_K = ⟨φ′ ⊗ ψ′, φ ⊗ ψ⟩;

2. ⟨[x], [x]⟩ = 0 implies [x] = 0: We can write [x] = ∑ᵢ αᵢ φᵢ ⊗ ψᵢ for some αᵢ ∈ ℂ and nonzero φᵢ ∈ H, ψᵢ ∈ K. By means of the orthonormalization procedure we may arrange that the φᵢ form an orthonormal system in H. Then

    ⟨∑ᵢ αᵢ φᵢ ⊗ ψᵢ, ∑ⱼ αⱼ φⱼ ⊗ ψⱼ⟩ = ∑_{i,j} ᾱᵢ αⱼ ⟨φᵢ ⊗ ψᵢ, φⱼ ⊗ ψⱼ⟩        ((anti)linearity)
                                   = ∑_{i,j} ᾱᵢ αⱼ ⟨φᵢ, φⱼ⟩_H ⟨ψᵢ, ψⱼ⟩_K   (formula (4))
                                   = ∑ᵢ |αᵢ|² ‖ψᵢ‖²_K.                     (⟨φᵢ, φⱼ⟩_H = δ_{ij})

It follows that αᵢ = 0 for all i and hence [x] = 0.

Now, having a scalar product on H ⊗_alg K, we can define the tensor product of Hilbert spaces, H ⊗ K, to be the completion of H ⊗_alg K in the corresponding norm.

Remarks.

1. For pure tensor products, the norm square fulfils

    ‖φ ⊗ ψ‖²_{H⊗K} = ⟨φ ⊗ ψ, φ ⊗ ψ⟩ = ⟨φ, φ⟩_H ⟨ψ, ψ⟩_K = ‖φ‖²_H ‖ψ‖²_K,

and hence the norm fulfils

    ‖φ ⊗ ψ‖_{H⊗K} = ‖φ‖_H ‖ψ‖_K.                                          (5)

2. If both H and K have finite dimension, H ⊗_alg K is already complete. Hence, in this situation, H ⊗ K = H ⊗_alg K.
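In the finite-dimensional model (H ≅ ℂ², K ≅ ℂ³) formula (4) and the norm identity (5) can be checked numerically. Note that np.vdot conjugates its first argument, which matches the convention used here (antilinear in the first slot); this is again only an illustrative sketch.

import numpy as np

rng = np.random.default_rng(1)
def rand_c(n):  # random complex vector of length n
    return rng.standard_normal(n) + 1j * rng.standard_normal(n)

phi, phi2 = rand_c(2), rand_c(2)   # elements of H ≅ C²
psi, psi2 = rand_c(3), rand_c(3)   # elements of K ≅ C³

# Formula (4): ⟨φ⊗ψ, φ′⊗ψ′⟩ = ⟨φ,φ′⟩_H ⟨ψ,ψ′⟩_K.
lhs = np.vdot(np.kron(phi, psi), np.kron(phi2, psi2))
rhs = np.vdot(phi, phi2) * np.vdot(psi, psi2)
assert np.isclose(lhs, rhs)

# Norm identity (5): ‖φ⊗ψ‖ = ‖φ‖ ‖ψ‖.
assert np.isclose(np.linalg.norm(np.kron(phi, psi)),
                  np.linalg.norm(phi) * np.linalg.norm(psi))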

Tensor product of operators

For every pair of bounded linear operators A on H and B on K, there exists a unique bounded linear operator A ⊗ B on H ⊗ K such that

    (A ⊗ B)(φ ⊗ ψ) = (Aφ) ⊗ (Bψ)   for all φ ∈ H, ψ ∈ K.                  (6)

Proof. By the universal property of the tensor product of vector spaces, there exists a unique linear operator A ⊗ B on H ⊗_alg K satisfying (6). One can show that it is bounded and hence continuous, see Exercise 1 of the course. As a consequence, it can be extended uniquely to a bounded linear operator on H ⊗ K, denoted by the same symbol. Since pure tensor products belong to H ⊗_alg K, the extension fulfils (6). Since it is uniquely determined by its restriction to H ⊗_alg K and since the latter is uniquely determined by (6), A ⊗ B is uniquely determined by (6).

Orthonormal product bases

Let {φᵢ : i ∈ I} and {ξⱼ : j ∈ J} be orthonormal bases in H and K, respectively. Then {φᵢ ⊗ ξⱼ : i ∈ I, j ∈ J} is an orthonormal basis in H ⊗ K.

Proof. Denote B := {φᵢ ⊗ ξⱼ : i ∈ I, j ∈ J}. B is orthonormal and hence linearly independent:

    ⟨φᵢ ⊗ ξⱼ, φₖ ⊗ ξₗ⟩ = ⟨φᵢ, φₖ⟩ ⟨ξⱼ, ξₗ⟩ = δ_{ik} δ_{jl} = δ_{(i,j),(k,l)}.

The span of B is dense in H ⊗ K: let B̄ denote the closure of the span of B. We have to show B̄ = H ⊗ K. Since pure tensor products span the dense subspace H ⊗_alg K, it suffices to show that B̄ contains φ ⊗ ψ for all φ ∈ H and ψ ∈ K. Given φ and ψ, denote

    φₙ := ∑_{i ≤ n} ⟨φᵢ, φ⟩ φᵢ,   ψₙ := ∑_{j ≤ n} ⟨ξⱼ, ψ⟩ ξⱼ.

Then

    φₙ ⊗ ψₙ = ∑_{i,j ≤ n} ⟨φᵢ, φ⟩ ⟨ξⱼ, ψ⟩ φᵢ ⊗ ξⱼ,

so that φₙ ⊗ ψₙ ∈ B̄ for all n. By the triangle inequality, we have

    ‖φₙ ⊗ ψₙ − φ ⊗ ψ‖ ≤ ‖φₙ ⊗ ψₙ − φₙ ⊗ ψ‖ + ‖φₙ ⊗ ψ − φ ⊗ ψ‖.

Since ‖φₙ‖² converges, it is bounded for large n. Hence, by (5),

    ‖φₙ ⊗ ψₙ − φₙ ⊗ ψ‖² = ‖φₙ ⊗ (ψₙ − ψ)‖² = ‖φₙ‖² ‖ψₙ − ψ‖² → 0   as n → ∞.

Analogously,

    ‖φₙ ⊗ ψ − φ ⊗ ψ‖² = ‖(φₙ − φ) ⊗ ψ‖² = ‖φₙ − φ‖² ‖ψ‖² → 0   as n → ∞.

Hence, φₙ ⊗ ψₙ → φ ⊗ ψ as n → ∞. This shows that φ ⊗ ψ ∈ B̄, as asserted.
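The orthonormality relations ⟨φᵢ ⊗ ξⱼ, φₖ ⊗ ξₗ⟩ = δ_{ik} δ_{jl} can again be tested in the finite-dimensional model. This is only a sketch; the helper `unitary` is an ad-hoc construction of an orthonormal basis via a QR decomposition.

import numpy as np

rng = np.random.default_rng(2)
def unitary(n):  # columns form an orthonormal basis of C^n
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

U, V = unitary(2), unitary(3)   # orthonormal bases {φ_i} of C², {ξ_j} of C³

# Product vectors φ_i ⊗ ξ_j, collected as columns of a 6×6 matrix.
P = np.column_stack([np.kron(U[:, i], V[:, j]) for i in range(2) for j in range(3)])

# Their Gram matrix is the identity, i.e. they form an orthonormal basis of C² ⊗ C³.
assert np.allclose(P.conj().T @ P, np.eye(6))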

Tensor product of L²-spaces

Let C₀(ℝ³) denote the continuous functions on ℝ³ with compact support. The mapping

    F : C₀(ℝ³) × C₀(ℝ³) → C₀(ℝ³ × ℝ³),   (F(φ, ψ))(x⃗, y⃗) := φ(x⃗) ψ(y⃗),

induces a Hilbert space isomorphism from L²(ℝ³) ⊗ L²(ℝ³) onto L²(ℝ³ × ℝ³).

Proof. F is bilinear:

    (F(αφ₁ + φ₂, ψ))(x⃗, y⃗) = (αφ₁ + φ₂)(x⃗) ψ(y⃗)
                            = α φ₁(x⃗) ψ(y⃗) + φ₂(x⃗) ψ(y⃗)
                            = α (F(φ₁, ψ))(x⃗, y⃗) + (F(φ₂, ψ))(x⃗, y⃗)
                            = (αF(φ₁, ψ) + F(φ₂, ψ))(x⃗, y⃗),

and similarly for a linear combination in the second argument. Moreover, for all φ, φ′, ψ, ψ′ ∈ C₀(ℝ³), we have

    ⟨F(φ, ψ), F(φ′, ψ′)⟩ = ∫_{ℝ³×ℝ³} φ̄(x⃗) ψ̄(y⃗) φ′(x⃗) ψ′(y⃗) d³x d³y
                         = ∫_{ℝ³} φ̄(x⃗) φ′(x⃗) d³x · ∫_{ℝ³} ψ̄(y⃗) ψ′(y⃗) d³y
                         = ⟨φ, φ′⟩ ⟨ψ, ψ′⟩.                               (7)

As a consequence of (7), F is bounded and hence continuous. It follows that it extends to a continuous bilinear mapping F : L²(ℝ³) × L²(ℝ³) → L²(ℝ³ × ℝ³). By the universal property of the tensor product of Hilbert spaces, there exists a unique continuous linear mapping F̃ : L²(ℝ³) ⊗ L²(ℝ³) → L²(ℝ³ × ℝ³) such that

    F̃(φ ⊗ ψ) = F(φ, ψ)   for all φ, ψ ∈ L²(ℝ³).

By continuity and by (7), F̃ is isometric. To see that F̃ is an isomorphism of Hilbert spaces, it remains to show that it is surjective. By isometry, the image im(F̃) of F̃ is closed. Hence, for proving surjectivity, it suffices to show that im(F̃) is dense in L²(ℝ³ × ℝ³). To see this, let ξ ∈ C₀(ℝ³ × ℝ³) be orthogonal to im(F̃). Then, for all φ, ψ ∈ C₀(ℝ³),

    0 = ⟨F̃(φ ⊗ ψ), ξ⟩ = ⟨F(φ, ψ), ξ⟩ = ∫_{ℝ³×ℝ³} φ̄(x⃗) ψ̄(y⃗) ξ(x⃗, y⃗) d³x d³y
                                      = ∫_{ℝ³} φ̄(x⃗) ( ∫_{ℝ³} ψ̄(y⃗) ξ(x⃗, y⃗) d³y ) d³x.

We read off that the mapping

    x⃗ ↦ ∫_{ℝ³} ψ̄(y⃗) ξ(x⃗, y⃗) d³y

is orthogonal to all elements of C₀(ℝ³). Being continuous, it must then vanish. Hence, for all x⃗, the mapping y⃗ ↦ ξ(x⃗, y⃗) is orthogonal to all elements of C₀(ℝ³). As before, being continuous, it must then vanish. Thus, ξ = 0. This proves that im(F̃) is dense in L²(ℝ³ × ℝ³).
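A crude numerical analogue of the factorization (7), on a one-dimensional grid instead of ℝ³ and with real-valued sample functions (so conjugation plays no role): the point is only that F(φ, ψ) corresponds to an outer product of sample vectors, and that the double Riemann sum factorizes accordingly. This is a sketch of the idea, not a statement about L² itself.

import numpy as np

# Sample compactly supported functions on a grid and replace integrals by
# Riemann sums; F(φ, ψ) becomes the outer product of the sample vectors.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]

phi  = np.exp(-x**2) * (np.abs(x) < 4)              # stand-ins for C₀ functions
psi  = np.cos(x) * np.exp(-x**2 / 2) * (np.abs(x) < 4)
phi2 = x * np.exp(-x**2) * (np.abs(x) < 4)
psi2 = np.exp(-(x - 1)**2) * (np.abs(x) < 4)

F_phi_psi   = np.outer(phi, psi)                    # (F(φ, ψ))(x, y) = φ(x) ψ(y)
F_phi2_psi2 = np.outer(phi2, psi2)

lhs = np.sum(F_phi_psi * F_phi2_psi2) * dx * dx                  # ⟨F(φ,ψ), F(φ′,ψ′)⟩
rhs = (np.sum(phi * phi2) * dx) * (np.sum(psi * psi2) * dx)      # ⟨φ,φ′⟩ ⟨ψ,ψ′⟩
assert np.isclose(lhs, rhs)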
