Introduction to Hilbert Space Frames


by Robert Crandall
Advised by Prof. William Faris
University of Arizona Program in Applied Mathematics
67 N. Santa Rita, Tucson, AZ 8579

Abstract

We present an introduction to Hilbert space frames. A frame is a generalization of a basis that is useful, for example, in signal processing. It allows us to expand Hilbert space vectors in terms of a set of other vectors that satisfy a certain energy equivalence condition. This condition guarantees that any vector in the Hilbert space can be reconstructed in a numerically stable way from its frame coefficients. We focus on frames in finite-dimensional spaces.

1 Preliminaries

1.1 Hilbert Space

We recall here some basic definitions and facts about Hilbert space. Readers who are familiar with Hilbert spaces may skip this section.

Definition 1. A Hilbert space is a complete normed vector space $H$ over the complex numbers $\mathbb{C}$ whose norm is induced by an inner product. The inner product is a function $\langle \cdot, \cdot \rangle : H \times H \to \mathbb{C}$ that satisfies

1. Linearity in the second argument: for all $a, b \in \mathbb{C}$ and $x, y, z \in H$,
$$\langle x, a y + b z \rangle = a \langle x, y \rangle + b \langle x, z \rangle.$$

2. Conjugate symmetry: for all $x, y \in H$,
$$\langle x, y \rangle = \overline{\langle y, x \rangle},$$
where the overbar denotes complex conjugation.

3. Positivity: for all $x \neq 0$ in $H$, $\langle x, x \rangle > 0$.

The norm of $H$ is induced by its inner product: for $x \in H$, $\|x\| = \sqrt{\langle x, x \rangle}$. The Hilbert space is required to be complete, which means that every sequence that is Cauchy with respect to this norm converges to a point in $H$.

The most common Hilbert spaces, and the only ones we will be concerned with here, are the Euclidean spaces and the square-integrable function spaces.

Example 2. The Euclidean space $\mathbb{C}^n$ is a Hilbert space with an inner product defined by
$$\langle x, y \rangle = \sum_{i=1}^{n} \overline{x_i}\, y_i.$$
The norm induced by this inner product is the standard Euclidean distance; for example, in $\mathbb{C}^2$ we have $\|x\|^2 = |x_1|^2 + |x_2|^2$.

We can generalize these Euclidean spaces to infinite dimensions. Our vectors are then functions instead of $n$-tuples of numbers, and we must introduce the additional requirement that the functions be square-integrable to ensure that the inner product is well defined.
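Before moving on to the function spaces, the Euclidean inner product of Example 2 is easy to experiment with numerically. The helper functions below are a minimal sketch of my own (not part of the original text): they implement $\langle x, y \rangle = \sum_i \overline{x_i}\, y_i$ and the induced norm, and check conjugate symmetry and positivity on random vectors.

```python
import numpy as np

def inner(x, y):
    """Inner product on C^n, conjugate-linear in the first argument."""
    return np.sum(np.conj(x) * y)

def norm(x):
    """Norm induced by the inner product."""
    return np.sqrt(inner(x, x).real)

rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# Conjugate symmetry and positivity.
assert np.isclose(inner(x, y), np.conj(inner(y, x)))
assert inner(x, x).real > 0
# The induced norm agrees with the usual Euclidean norm.
assert np.isclose(norm(x), np.linalg.norm(x))
```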

Example 3. For a measure space $M$ and a measure $\mu$, define $L^2(M, \mu)$ to be the set of measurable functions $f : M \to \mathbb{C}$ such that $\int |f|^2 \, d\mu < \infty$. This is a Hilbert space with the inner product
$$\langle f, g \rangle = \int \overline{f}\, g \, d\mu.$$
We will take $\mu$ to be the Lebesgue measure when $M$ is $\mathbb{R}$ or an interval on $\mathbb{R}$, and counting measure when $M = \mathbb{N}$ or $M = \mathbb{Z}$. In the first case we use the notation $L^2([a, b])$, and the Hilbert space consists of square-integrable functions. In the second case we use a lowercase $\ell$ and write $\ell^2(\mathbb{N})$. Recall that integration with respect to counting measure is just summation, so the inner product on $\ell^2$ is
$$\langle x, y \rangle = \sum_{i} \overline{x_i}\, y_i,$$
and $\ell^2$ is the space of square-summable sequences.

Hilbert spaces are nicer than general Banach spaces because of the additional structure induced by the inner product. The inner product allows us to define angles between vectors, and in particular leads to the concept of orthogonality:

Definition 4. Two vectors $x$ and $y$ in a Hilbert space $H$ are said to be orthogonal if $\langle x, y \rangle = 0$. A set of vectors $\{x_i\}$ is said to be orthogonal if $\langle x_i, x_j \rangle = 0$ for $i \neq j$.

In the Euclidean spaces, the inner product is the standard dot product, and two vectors are orthogonal if their dot product is zero.

1.2 Linear Operators on Hilbert Space

We use the term operator here to refer to a linear map between two Hilbert spaces. We recall some basic terminology regarding operators.

Definition 5. For Hilbert spaces $H_1$ and $H_2$, a mapping $T : H_1 \to H_2$ is called a linear operator if, for every $x, y \in H_1$ and for every $c_1, c_2 \in \mathbb{C}$, we have $T(c_1 x + c_2 y) = c_1 T x + c_2 T y$. A linear operator is bounded if there exists a constant $K > 0$ such that $\|T x\| \leq K \|x\|$ for all nonzero $x \in H_1$. If $T$ is a bounded operator, then we define the operator norm to be the norm induced by the two Hilbert space norms in the following way:
$$\|T\| = \inf \{ K : \|T x\| \leq K \|x\| \text{ for all } x \neq 0 \}.$$
Every bounded linear operator has an adjoint, which is the unique operator $T^*$ satisfying $\langle T x, y \rangle = \langle x, T^* y \rangle$ for all $x \in H_1$ and $y \in H_2$. A linear operator is an injection if $T x = T y \Rightarrow x = y$ (that is, if $T$ maps distinct elements of $H_1$ to distinct elements of $H_2$). A linear operator is a surjection if $\mathrm{range}(T) = H_2$. An operator that is both surjective and injective is called a bijection.

In the finite-dimensional case, linear operators are just matrices; the linear operators from $\mathbb{C}^n$ to $\mathbb{C}^m$ are precisely the $m \times n$ matrices in $\mathbb{C}^{m \times n}$. Infinite-dimensional linear operators are the subject of functional analysis, and are much more difficult to classify in general. We will be working with a special type of linear operator, called a frame operator, whose norm is bounded above and below by two nonzero constants.
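In the finite-dimensional case these notions are completely concrete: an operator $T : \mathbb{C}^n \to \mathbb{C}^m$ is an $m \times n$ matrix, its adjoint is the conjugate transpose, and its operator norm is its largest singular value. The following small numpy illustration is my own, not code from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(4, 3)) + 1j * rng.normal(size=(4, 3))  # an operator from C^3 to C^4
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=4) + 1j * rng.normal(size=4)

T_adj = T.conj().T  # the adjoint T* is the conjugate transpose

# Defining property of the adjoint: <Tx, y> = <x, T*y>.
lhs = np.vdot(T @ x, y)        # np.vdot conjugates its first argument
rhs = np.vdot(x, T_adj @ y)
assert np.isclose(lhs, rhs)

# The operator norm ||T|| equals the largest singular value of T.
op_norm = np.linalg.norm(T, 2)
assert np.isclose(op_norm, np.linalg.svd(T, compute_uv=False)[0])
```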

2 $\ell^2$ Representations of $L^2$ Functions

A common task in applied mathematics is to represent a function $f \in L^2$ in terms of some sequence of coefficients in $\ell^2$. For example, in signal processing applications one often represents an analog signal (an $L^2$ function) in terms of a sequence of coefficients. In theoretical considerations we may take these coefficients to be in $\ell^2$, but in practice we can only store finitely many coefficients. We hope to be able to choose a finite set of coefficients that captures most of the information in the original signal, in the sense that we can use the coefficients to reconstruct the original signal with a small $L^2$ error. The most common way to accomplish this task is to find a set of basis vectors $\{v_n\}$ for $L^2$, and use the inner products $\langle v_n, f \rangle$ as the $\ell^2$ coefficients representing a function $f \in L^2$.

Example 6. Consider the Hilbert space $L^2(0, 1)$. The functions $\{e^{2\pi i n x},\ n \in \mathbb{Z}\}$ form an orthonormal basis for this space, and we can represent any function $f \in L^2$ uniquely as a sequence in $\ell^2(\mathbb{Z})$ defined by
$$c_n = \langle e^{2\pi i n x}, f \rangle, \quad n \in \mathbb{Z}.$$
These $c_n$ are the Fourier coefficients of $f$. A function can be reconstructed from its Fourier coefficients using the inversion formula
$$f = \sum_{n \in \mathbb{Z}} c_n e^{2\pi i n x}.$$
The map $F$ that takes $f$ to its sequence of Fourier coefficients $c$ is unitary. This means that the problem of forming $c$ given $f$ and the inverse problem of forming $f$ given $c$ are both numerically stable, which is especially important in computational applications where we may only have an approximation to $f$ or $c$. The fact that the operator $F$ is bounded above and below (by $1$ in this case) guarantees that, given a sufficiently good approximation for $f$, we can form a good approximation of $c$, and vice versa.

It turns out that requiring the set $\{v_n\}$ to be a basis is overly restrictive for some applications. It is possible to form stable representations of arbitrary elements of $H$ in terms of sets of vectors that are not necessarily linearly independent. The most general set of vectors that allows us to form stable representations of arbitrary vectors is called a frame.

3 Hilbert Space Frames

A frame is a subset $\{\varphi_j\}$ of $H$ that satisfies two very useful conditions: first, every other vector in $H$ can be written as a linear combination of the $\varphi_j$. Second, every $f$ in $H$ can be represented as a sequence of frame coefficients in $\ell^2$, and each $f$ can be reconstructed in a numerically stable way from its frame coefficients. The frame coefficients of a function $f$ are determined by applying the frame operator to $f$. Reconstruction of $f$ from its frame coefficients is performed with a pseudoinverse.

Definition 7. Let $\{\varphi_j\}$ be a subset of $H$ such that there exist $A, B > 0$ with
$$A \|f\|^2 \leq \sum_j |\langle \varphi_j, f \rangle|^2 \leq B \|f\|^2$$
for all $f \in H$. Then $\{\varphi_j\}$ is called a frame of $H$. The supremum of all $A$ and the infimum of all $B$ that satisfy the above inequality are called the frame bounds. The frame operator is the function $F : H \to \ell^2$ defined by
$$(F f)_n = \langle \varphi_n, f \rangle.$$
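In $\mathbb{C}^n$ the frame condition of Definition 7 can be checked directly: stack the conjugated frame vectors as the rows of a matrix, so that $(Ff)_j = \langle \varphi_j, f \rangle$, and the optimal frame bounds are the extreme eigenvalues of $F^*F$. The minimal numpy sketch below is my own illustration (the helper names and the random example are assumptions, not part of the paper).

```python
import numpy as np

def analysis_operator(frame_vectors):
    """Frame operator F with (Ff)_j = <phi_j, f> = sum_i conj(phi_j[i]) f[i]."""
    return np.conj(np.array(frame_vectors))

def frame_bounds(F):
    """Optimal frame bounds: smallest and largest eigenvalues of F*F."""
    eigvals = np.linalg.eigvalsh(F.conj().T @ F)
    return eigvals[0], eigvals[-1]

# Five random vectors in C^3: a finite spanning set, hence a frame.
rng = np.random.default_rng(2)
phis = rng.normal(size=(5, 3)) + 1j * rng.normal(size=(5, 3))
F = analysis_operator(phis)
A, B = frame_bounds(F)

f = rng.normal(size=3) + 1j * rng.normal(size=3)
energy = np.sum(np.abs(F @ f) ** 2)            # sum_j |<phi_j, f>|^2
assert A * np.linalg.norm(f) ** 2 <= energy + 1e-9
assert energy <= B * np.linalg.norm(f) ** 2 + 1e-9
```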

By definition, the frame operator satisfies
$$A \|f\|^2 \leq \|F f\|^2 \leq B \|f\|^2.$$
If $A = B$, then $\{\varphi_j\}$ is called a tight frame.

Some basic facts follow immediately from this definition. A frame must span the Hilbert space: otherwise, $F$ would have a non-trivial nullspace and there would be some $f \neq 0$ such that $\|F f\|^2 = 0 < A \|f\|^2$, contradicting the lower frame bound. The frame operator is an injection onto its range: if $F f = F g$, then by linearity $F(f - g) = 0$ and
$$A \|f - g\|^2 \leq \|F(f - g)\|^2 = 0 \;\Rightarrow\; \|f - g\| = 0 \;\Rightarrow\; f = g.$$
A frame does not have to be orthogonal, or even linearly independent.

Example 8. Let $H = \mathbb{C}^2$, and let
$$F = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix} \in \mathbb{C}^{3 \times 2}.$$
The columns of $F^* = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}$, namely $(1, 0)$, $(0, 1)$, and $(1, 1)$, are a frame of $\mathbb{C}^2$, and $F : \mathbb{C}^2 \to \mathbb{C}^3$ is the associated frame operator. Its range in $\mathbb{C}^3$ is the span of its columns, i.e., all vectors of the form $(a, b, a + b)$, and $F$ is a bijection from $\mathbb{C}^2$ to this two-dimensional subspace of $\mathbb{C}^3$. Its frame bounds will be the squares of the singular values of $F$, which are the square roots of the eigenvalues of $F^* F = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$. Thus, $A = 1$ and $B = 3$.

The obvious question is how a vector $x \in \mathbb{C}^2$ can be reconstructed from its frame coefficients in $\mathbb{C}^3$. It turns out that there is another set of vectors in $\mathbb{C}^2$, called the dual frame, that is used to reconstruct $x$ from its coefficients. Before defining the dual frame, we compute the adjoint of the frame operator.

Proposition 9. Let $\{\varphi_j\} \subset H$ be a frame and let $F$ be the associated frame operator. Then the adjoint of $F$ is the operator $F^* : \ell^2 \to H$ given by
$$F^* c = \sum_j c_j \varphi_j.$$

Proof. By definition, the adjoint satisfies
$$\langle F f, c \rangle = \sum_j \overline{\langle \varphi_j, f \rangle}\, c_j.$$
Using the conjugate symmetry and linearity of the inner product, this is
$$\sum_j \langle f, c_j \varphi_j \rangle = \Big\langle f, \sum_j c_j \varphi_j \Big\rangle = \langle f, F^* c \rangle.$$
The result follows.

So, the adjoint of the frame operator takes a sequence $c$ in $\ell^2$ to a linear combination of the frame vectors weighted by the coefficients $c_j$.

3.1 The Dual Frame

Definition 10 (Dual Frames). Let $\{\varphi_j\}$ be a frame in $H$. Then there is another frame $\{\tilde{\varphi}_j\} \subset H$, called the dual frame, given by
$$\tilde{\varphi}_j = (F^* F)^{-1} \varphi_j.$$
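Before working with the dual frame, the computations of Example 8 and Proposition 9 are easy to check numerically. The minimal numpy sketch below is my own illustration (not code from the paper): it recovers the frame bounds $A = 1$ and $B = 3$ from the eigenvalues of $F^*F$ and verifies the defining property $\langle Ff, c \rangle = \langle f, F^*c \rangle$ of the adjoint.

```python
import numpy as np

# Frame {(1,0), (0,1), (1,1)} of C^2; the rows of F are the (real) frame vectors.
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Frame bounds A = 1, B = 3 from the eigenvalues of F*F = [[2, 1], [1, 2]].
print(np.linalg.eigvalsh(F.T @ F))        # [1. 3.]

# Adjoint (synthesis) property: <Ff, c> = <f, F*c>, with F*c = sum_j c_j phi_j.
rng = np.random.default_rng(3)
f, c = rng.normal(size=2), rng.normal(size=3)
assert np.isclose(np.dot(F @ f, c), np.dot(f, F.T @ c))
```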

It is instructive to look at the equivalent expression $\varphi_j = F^* F \tilde{\varphi}_j$. Here $F \tilde{\varphi}_j$ is the $\ell^2$ sequence of frame coefficients of $\tilde{\varphi}_j$ in terms of the original frame; say $F \tilde{\varphi}_j = c$. The adjoint $F^*$, when applied to this sequence of coefficients, gives $F^* c = \sum_i c_i \varphi_i = \varphi_j$. So we have written each of the original frame vectors $\varphi_j$ as a linear combination of the other $\varphi_i$, and the coefficients of this expansion are the inner products of the $\varphi_i$ with the dual frame vector $\tilde{\varphi}_j$.

We will see below that we can expand any vector $f \in H$ as a linear combination of the $\varphi_j$, and the coefficients of this expansion will be given by the inner products of $f$ with the dual frame vectors $\tilde{\varphi}_j$. On the other hand, we can write any vector $f$ as a linear combination of the $\tilde{\varphi}_j$, and then the coefficients will be given by the inner products of $f$ with the original frame. This is where the terminology "dual frame" comes from; reconstructing a vector from its frame coefficients and writing a vector as a linear combination of frame vectors are in fact dual aspects of the same problem.

Proposition 11. If $F$ is a frame operator (with frame bounds $A$ and $B$), then $F^* F$ is invertible; thus, the dual frame is well-defined.

Proof.

i. Let $f \in H$ with $f \neq 0$. Then
$$\langle F^* F f, f \rangle = \langle F f, F f \rangle = \|F f\|^2 \geq A \|f\|^2 > 0.$$
It follows that $F^* F f \neq 0$, so $F^* F$ is injective.

ii. The range of $F^* F$ is closed. Suppose $g_n$ is some Cauchy sequence in the range of $F^* F$; that is, $\|g_n - g_m\| \to 0$ as $n, m \to \infty$, and there is a sequence $f_n$ such that $F^* F f_n = g_n$ for every $n$. For any $n, m$,
$$\langle F^* F (f_n - f_m), f_n - f_m \rangle = \|F (f_n - f_m)\|^2 \geq A \|f_n - f_m\|^2,$$
since $F$ is a frame operator with lower bound $A > 0$, while by the Cauchy-Schwarz inequality the left-hand side is at most $\|g_n - g_m\| \, \|f_n - f_m\|$. That means $\|f_n - f_m\| \leq \frac{1}{A} \|g_n - g_m\|$, so $\|f_n - f_m\| \to 0$ as $n, m \to \infty$ and $f_n$ is a Cauchy sequence as well. Every Hilbert space is complete by definition, so $f_n$ converges to some $f \in H$. $F^* F$ is a bounded linear map: the norm of an operator and its adjoint are the same, so $\|F^* F\| \leq \|F^*\| \|F\| = \|F\|^2 \leq B$. It follows that $F^* F$ is continuous, and
$$F^* F f_n = g_n \to F^* F f = g \in \mathrm{range}(F^* F).$$
That is, any Cauchy sequence $g_n$ in $\mathrm{range}(F^* F)$ converges to some element $g$ in $\mathrm{range}(F^* F)$, and $\mathrm{range}(F^* F)$ is closed.

iii. $F^* F$ is a surjection, since
$$\mathrm{range}(F^* F) = \overline{\mathrm{range}(F^* F)} = N((F^* F)^*)^{\perp} = N(F^* F)^{\perp} = \{0\}^{\perp} = H,$$
where we have used the fact that $F^* F$ is self-adjoint.

Thus, $F^* F$ is a bijection from $H$ to itself, and for every $g \in H$ there is a unique $f \in H$ such that $F^* F f = g$. The inverse is the unique operator satisfying $(F^* F)^{-1} g = f$.
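For a finite frame the proposition can be sanity-checked directly: $F^*F$ is self-adjoint, its smallest eigenvalue is the (positive) lower frame bound, and it is therefore invertible. The brief numpy sketch below is my own illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
F = rng.normal(size=(5, 2)) + 1j * rng.normal(size=(5, 2))   # frame operator of 5 vectors in C^2

G = F.conj().T @ F                         # F*F is self-adjoint ...
assert np.allclose(G, G.conj().T)

A, B = np.linalg.eigvalsh(G)[[0, -1]]      # ... with optimal frame bounds A, B as its extreme eigenvalues
print(A, B)
assert A > 0                               # the rows span C^2, so A > 0 ...
G_inv = np.linalg.inv(G)                   # ... and F*F is invertible, so the dual frame
assert np.allclose(G @ G_inv, np.eye(2))   # phi~_j = (F*F)^{-1} phi_j is well defined.
```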

We are now in a position to show that the dual frame is indeed a frame of $H$, and to compute its frame operator and frame bounds.

Theorem 12. Suppose $\{\varphi_j\}$ is a frame of a Hilbert space $H$, with associated frame operator $F$ and frame bounds $0 < A \leq B < \infty$. Then the set $\{\tilde{\varphi}_j = (F^* F)^{-1} \varphi_j\}$ is another frame of $H$, with frame operator
$$\tilde{F} = F (F^* F)^{-1}$$
satisfying
$$\frac{1}{B} \|f\|^2 \leq \|\tilde{F} f\|^2 \leq \frac{1}{A} \|f\|^2.$$
The set $\{\tilde{\varphi}_j\}$ is called the dual frame associated with the original frame.

Proof. If $\{\tilde{\varphi}_j\}$ is to be a frame, then its frame operator is some $\tilde{F}$ satisfying
$$(\tilde{F} f)_j = \langle \tilde{\varphi}_j, f \rangle = \langle (F^* F)^{-1} \varphi_j, f \rangle.$$
$(F^* F)^{-1}$ is the bounded inverse of a bounded self-adjoint operator, so it is self-adjoint, and
$$(\tilde{F} f)_j = \langle \varphi_j, (F^* F)^{-1} f \rangle.$$
By definition of the frame operator $F$, this is the $j$th component of $F (F^* F)^{-1} f$. Thus, the dual frame operator is given by $\tilde{F} = F (F^* F)^{-1}$.

To compute the frame bounds, we note that the inverse of a bounded self-adjoint operator is also self-adjoint, so $\tilde{F}^* = (F^* F)^{-1} F^*$, and
$$\|\tilde{F} f\|^2 = \langle \tilde{F}^* \tilde{F} f, f \rangle = \langle (F^* F)^{-1} F^* F (F^* F)^{-1} f, f \rangle = \langle (F^* F)^{-1} f, f \rangle.$$
The frame inequality for $F$ says exactly that $A \langle f, f \rangle \leq \langle F^* F f, f \rangle \leq B \langle f, f \rangle$ for every $f$, i.e. $A I \leq F^* F \leq B I$ as self-adjoint operators. Inverting this operator inequality gives $\frac{1}{B} I \leq (F^* F)^{-1} \leq \frac{1}{A} I$, and therefore
$$\frac{1}{B} \|f\|^2 \leq \|\tilde{F} f\|^2 \leq \frac{1}{A} \|f\|^2.$$
Thus, $\{\tilde{\varphi}_j\}$ is indeed a frame, with bounds $0 < \frac{1}{B} \leq \frac{1}{A} < \infty$.

Example 13. Let us return to Example 8 and compute the dual frame. We have a frame
$$\left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\}$$
of $\mathbb{C}^2$. The frame operator is
$$F = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix},$$
and the frame bounds are $A = 1$, $B = 3$.

The dual frame operator is
$$\tilde{F} = F (F^* F)^{-1} = \begin{pmatrix} 2/3 & -1/3 \\ -1/3 & 2/3 \\ 1/3 & 1/3 \end{pmatrix},$$
so the dual frame is
$$\left\{ \begin{pmatrix} 2/3 \\ -1/3 \end{pmatrix}, \begin{pmatrix} -1/3 \\ 2/3 \end{pmatrix}, \begin{pmatrix} 1/3 \\ 1/3 \end{pmatrix} \right\}.$$
The bounds for the dual frame are $\tilde{A} = \frac{1}{B} = \frac{1}{3}$ and $\tilde{B} = \frac{1}{A} = 1$.

Consider the vector $x = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. Applying the operator $F$ to it gives $F x = (1, 2, 3)$. These are the coefficients of the expansion of $x$ in terms of the dual frame vectors; that is,
$$x = \begin{pmatrix} 1 \\ 2 \end{pmatrix} = 1 \begin{pmatrix} 2/3 \\ -1/3 \end{pmatrix} + 2 \begin{pmatrix} -1/3 \\ 2/3 \end{pmatrix} + 3 \begin{pmatrix} 1/3 \\ 1/3 \end{pmatrix}.$$
If we instead apply the dual frame operator to $x$, we find $\tilde{F} x = (0, 1, 1)$. These are the coefficients of the expansion of $x$ in terms of the original frame vectors:
$$x = \begin{pmatrix} 1 \\ 2 \end{pmatrix} = 0 \begin{pmatrix} 1 \\ 0 \end{pmatrix} + 1 \begin{pmatrix} 0 \\ 1 \end{pmatrix} + 1 \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

In general, the frame operator is not invertible, since it might not be surjective. However, in the example above, we were able to recover a vector $x$ from its frame coefficients by writing it as a linear combination of the dual frame vectors; specifically, $\tilde{F}^* F x = x$ for any $x$, so the frame operator has a left inverse $\tilde{F}^*$ that inverts $F$ on its range. This turns out to be true in general, and an analogous result allows us to recover a vector from its dual frame coefficients.

Theorem 14 (Pseudo-Inverse of the Frame Operator). If $F$ is a frame operator in a Hilbert space $H$ and $\tilde{F}$ is the associated dual frame operator, then
$$\tilde{F}^* F = F^* \tilde{F} = I,$$
where $I$ is the identity operator on $H$. That is, $\tilde{F}^*$ and $F^*$ are left inverses for $F$ and $\tilde{F}$ respectively.

Proof. The dual frame operator is given by
$$\tilde{F} = F (F^* F)^{-1}.$$
$F^* F$ is a bounded, self-adjoint operator, and by Proposition 11 it is invertible with a self-adjoint inverse. Thus,
$$\tilde{F}^* = (F^* F)^{-1} F^*.$$
It follows that
$$\tilde{F}^* F = (F^* F)^{-1} (F^* F) = I.$$
Also,
$$F^* \tilde{F} = F^* F (F^* F)^{-1} = I.$$
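The numbers in Example 13 and the left-inverse identities above can be verified in a few lines of numpy; the following is a sketch of my own, not code from the paper.

```python
import numpy as np

F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                    # frame {(1,0), (0,1), (1,1)} of C^2
F_dual = F @ np.linalg.inv(F.T @ F)           # dual frame operator; its rows are the dual frame vectors

# Dual frame bounds are 1/B = 1/3 and 1/A = 1.
print(np.linalg.eigvalsh(F_dual.T @ F_dual))  # [0.3333... 1.]

x = np.array([1.0, 2.0])
print(F @ x)                                  # frame coefficients (1, 2, 3)
print(F_dual @ x)                             # dual frame coefficients (0, 1, 1)

# x is recovered from either set of coefficients ...
assert np.allclose(F_dual.T @ (F @ x), x)     # expansion over the dual frame
assert np.allclose(F.T @ (F_dual @ x), x)     # expansion over the original frame
# ... because F~* F = F* F~ = I.
assert np.allclose(F_dual.T @ F, np.eye(2))
assert np.allclose(F.T @ F_dual, np.eye(2))
```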

Thus, for any frame $\{\varphi_j\}$ and its associated dual frame $\{\tilde{\varphi}_j\}$, we have for each $f \in H$
$$f = \sum_j \langle \varphi_j, f \rangle \tilde{\varphi}_j = \sum_j \langle \tilde{\varphi}_j, f \rangle \varphi_j.$$

When a frame is redundant (that is, $F$ is not a surjection, or equivalently, the frame vectors are linearly dependent), $\tilde{F}^*$ is not the unique left inverse. We can add any operator $A$ that is zero on $\mathrm{range}(F)$ and still get a left inverse: if $A : \ell^2 \to H$ is a linear operator satisfying $A (F f) = 0$ for all $f \in H$, then clearly $(\tilde{F}^* + A) F f = \tilde{F}^* F f = f$ for any $f \in H$. The pseudoinverse $\tilde{F}^*$ is chosen because it is zero on the orthogonal complement of $\mathrm{range}(F)$, and so it is optimal in the sense that it is the left inverse with the smallest possible norm.

Theorem 15 (Optimality of the Pseudo-Inverse). If $F$ is a frame operator on $H$ and $\tilde{F}$ is the dual frame operator, then $\tilde{F}^*$ is the left inverse of $F$ with minimum induced norm. That is, if $\tilde{F}^* F = T F = I$, then $\|\tilde{F}^*\| \leq \|T\|$.

Proof. First we show that $\mathrm{range}(F)$ is closed. Let $c_n = F f_n$ be a Cauchy sequence in $\mathrm{range}(F)$; that is,
$$\|c_n - c_m\| \to 0 \quad \text{as } n, m \to \infty.$$
From the frame inequality it follows that
$$\|c_n - c_m\|^2 = \|F (f_n - f_m)\|^2 \geq A \|f_n - f_m\|^2,$$
so $\|f_n - f_m\| \to 0$ as $n, m \to \infty$. Thus $f_n$ is a Cauchy sequence and it converges to some $f \in H$. $F$ is bounded, so it is continuous, and $F f_n = c_n \to F f = c$ for some $c \in \mathrm{range}(F)$, and $\mathrm{range}(F)$ is closed.

Since $\mathrm{range}(F)$ is closed, we have $\ell^2 = \mathrm{range}(F) \oplus \mathrm{range}(F)^{\perp}$. Let $c \in \ell^2$ with $c \neq 0$, and write $c = c_1 + c_2$, where $c_1 = F f \in \mathrm{range}(F)$ and $c_2 \in \mathrm{range}(F)^{\perp}$. Let $T$ be an arbitrary left inverse of $F$. Then $T$ and $\tilde{F}^*$ are equal on $\mathrm{range}(F)$, so $\tilde{F}^* c = \tilde{F}^* c_1 = T c_1$ (recall that $\tilde{F}^* = (F^* F)^{-1} F^*$ vanishes on $\mathrm{range}(F)^{\perp} = N(F^*)$). Since $c_1 \perp c_2$ and $c = c_1 + c_2$, we have $\|c_1\| \leq \|c\|$, so
$$\frac{\|\tilde{F}^* c\|}{\|c\|} = \frac{\|T c_1\|}{\|c\|} \leq \frac{\|T c_1\|}{\|c_1\|} \leq \sup_{c' \neq 0} \frac{\|T c'\|}{\|c'\|} = \|T\|.$$
Thus,
$$\|\tilde{F}^*\| = \sup_{c \neq 0} \frac{\|\tilde{F}^* c\|}{\|c\|} \leq \|T\|,$$
and the pseudoinverse $\tilde{F}^*$ is the left inverse of $F$ with minimum norm.

We have shown that the pseudoinverse is the left inverse of minimum norm. If we know the lower frame bound $A$, then this norm is given by $\|\tilde{F}^*\| = \|\tilde{F}\| = \frac{1}{\sqrt{A}}$. Having this bound on the pseudoinverse is important for computational reasons; if $\frac{1}{\sqrt{A}}$ is not too large, then a vector can be reconstructed from its frame coefficients in a numerically stable way. Say we have a vector $f \in H$ whose frame coefficients are given by $F f = c$. In practice, we will not have the exact frame coefficients, but some perturbed $\hat{c} = c + \delta$, where hopefully $\|\delta\|$ is small. Then the reconstructed vector will be $\hat{f} = \tilde{F}^* \hat{c} = f + \tilde{F}^* \delta$, and
$$\|\hat{f} - f\| = \|\tilde{F}^* \delta\| \leq \frac{1}{\sqrt{A}} \|\delta\|.$$
Thus, small perturbations to the frame coefficients result in small perturbations of the reconstructed vector, as long as $\frac{1}{\sqrt{A}}$ is not too large.
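This stability estimate is easy to observe numerically. In the sketch below (my own illustration, using the row convention of the earlier snippets), the dual synthesis operator $\tilde{F}^*$ coincides with numpy's Moore-Penrose pseudoinverse of $F$, and reconstruction from perturbed coefficients has error at most $\|\delta\| / \sqrt{A}$.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 8, 4
F = rng.normal(size=(m, n))                    # a random (real) frame operator
F_dual = F @ np.linalg.inv(F.T @ F)
pinv = F_dual.T                                # dual synthesis operator F~* ...

# ... agrees with the Moore-Penrose pseudoinverse of F.
assert np.allclose(pinv, np.linalg.pinv(F))

A = np.linalg.eigvalsh(F.T @ F)[0]             # lower frame bound

f = rng.normal(size=n)
delta = 1e-3 * rng.normal(size=m)              # perturbation of the frame coefficients
f_hat = pinv @ (F @ f + delta)                 # reconstruct from noisy coefficients

# ||f_hat - f|| = ||F~* delta|| <= ||delta|| / sqrt(A).
assert np.linalg.norm(f_hat - f) <= np.linalg.norm(delta) / np.sqrt(A) + 1e-12
```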

3.2 Tight Frames

We have seen that in order to reconstruct a vector from its frame coefficients, we must have knowledge of the dual frame as well. For a special class of frames, this is no trouble, because the dual frame vectors are just constant multiples of the original frame vectors.

Theorem 16 (The Dual Frame of a Tight Frame). Recall that a tight frame is a frame satisfying $\|F f\|^2 = A \|f\|^2$ for some $A > 0$ and for every $f \in H$. Let $\{\varphi_j\}$ be a frame of $H$. Then $\{\varphi_j\}$ is a tight frame with frame bound $A$ if and only if the dual frame is given by $\{\tilde{\varphi}_j = \frac{1}{A} \varphi_j\}$.

Proof. Suppose $\{\varphi_j\}$ is a tight frame. Then for any $f \in H$,
$$\|F f\|^2 = \langle F^* F f, f \rangle = A \|f\|^2 = A \langle f, f \rangle,$$
and thus
$$F^* F = A I,$$
where $I$ is the identity operator on $H$. It follows that $(F^* F)^{-1} = \frac{1}{A} I$, and so $\tilde{\varphi}_j = \frac{1}{A} \varphi_j$.

Conversely, suppose we know that the dual frame satisfies $\tilde{\varphi}_j = \frac{1}{A} \varphi_j$. Then the associated frame operator satisfies
$$(\tilde{F} f)_j = \langle \tilde{\varphi}_j, f \rangle = \frac{1}{A} \langle \varphi_j, f \rangle = \frac{1}{A} (F f)_j,$$
so $\tilde{F} = \frac{1}{A} F$. It follows from Theorem 14, then, that
$$F^* F = A F^* \tilde{F} = A I,$$
so for any $f \in H$,
$$\|F f\|^2 = \langle F^* F f, f \rangle = A \langle f, f \rangle = A \|f\|^2,$$
and $\{\varphi_j\}$ is a tight frame with frame bound $A$.

Example 17. Any orthonormal basis is a tight frame with $A = B = 1$.

Example 18. In $\mathbb{R}^2$, any three vectors that are equally distributed on the unit circle (meaning the angle between each pair of them is 120 degrees) will form a tight frame. For example, the set
$$\left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} \cos \frac{2\pi}{3} \\ \sin \frac{2\pi}{3} \end{pmatrix}, \begin{pmatrix} \cos \frac{4\pi}{3} \\ \sin \frac{4\pi}{3} \end{pmatrix} \right\} = \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -1/2 \\ \sqrt{3}/2 \end{pmatrix}, \begin{pmatrix} -1/2 \\ -\sqrt{3}/2 \end{pmatrix} \right\}$$
is a tight frame. To see this explicitly, note that for any $v = \begin{pmatrix} x \\ y \end{pmatrix}$ in $\mathbb{R}^2$, we have
$$\sum_{n=1}^{3} |\langle \varphi_n, v \rangle|^2 = x^2 + \Big( -\tfrac{1}{2} x + \tfrac{\sqrt{3}}{2} y \Big)^2 + \Big( -\tfrac{1}{2} x - \tfrac{\sqrt{3}}{2} y \Big)^2 = x^2 + \tfrac{1}{4} x^2 + \tfrac{3}{4} y^2 - \tfrac{\sqrt{3}}{2} x y + \tfrac{1}{4} x^2 + \tfrac{3}{4} y^2 + \tfrac{\sqrt{3}}{2} x y = \tfrac{3}{2} x^2 + \tfrac{3}{2} y^2 = \tfrac{3}{2} \|v\|^2.$$

3.3 Relationship Between Frames and Bases

We have said that a frame is a generalization of a basis. Clearly, in finite dimensions a frame is a basis if and only if it is linearly independent; that is, a frame in $\mathbb{C}^n$ is a basis if and only if it consists of exactly $n$ vectors.
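Returning briefly to the tight frame of Example 18, both the tightness and the form of the dual frame promised by Theorem 16 are quick to confirm numerically; the following is a small sketch of my own.

```python
import numpy as np

angles = 2.0 * np.pi * np.arange(3) / 3                    # 0, 120, 240 degrees
F = np.column_stack([np.cos(angles), np.sin(angles)])      # rows are the three frame vectors

# Tight frame: F^T F = A I with A = 3/2.
assert np.allclose(F.T @ F, 1.5 * np.eye(2))

# Dual frame vectors are (1/A) phi_j = (2/3) phi_j.
F_dual = F @ np.linalg.inv(F.T @ F)
assert np.allclose(F_dual, (2.0 / 3.0) * F)

# Energy identity: sum_n |<phi_n, v>|^2 = (3/2) ||v||^2.
v = np.array([0.3, -1.7])
assert np.isclose(np.sum((F @ v) ** 2), 1.5 * np.linalg.norm(v) ** 2)
```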

Definition 19 (Normalized Frames). Define a normalized frame to be a frame $\{\varphi_j\}$ of a Hilbert space $H$ such that $\|\varphi_j\| = 1$ for all $j$. That is, a normalized frame consists of unit vectors.

Theorem 20 (Conditions for a Normalized Frame to be an Orthonormal Basis). A normalized frame is an orthonormal basis if and only if $A = B = 1$.

Proof. Suppose $\{v_j\}_{j \in J}$ is an orthonormal basis of a Hilbert space $H$. Then for any $u \in H$, we have the basis expansion
$$u = \sum_{j \in J} \langle v_j, u \rangle v_j.$$
Since $\{v_j\}$ is an orthonormal basis, $\langle v_j, v_i \rangle = \delta_{ij}$ for all $i, j \in J$. Thus,
$$\|u\|^2 = \langle u, u \rangle = \sum_{i, j \in J} \overline{\langle v_i, u \rangle} \langle v_j, u \rangle \langle v_i, v_j \rangle = \sum_{j \in J} |\langle v_j, u \rangle|^2,$$
and $\{v_j\}$ is a normalized frame with $A = B = 1$.

Now suppose $\{\varphi_j\}_{j \in J}$ is a normalized frame with $A = B = 1$. Then for any $f \in H$,
$$\|f\|^2 = \sum_{j \in J} |\langle \varphi_j, f \rangle|^2.$$
Then in particular, for any $i$,
$$\|\varphi_i\|^2 = 1 = 1 + \sum_{j \neq i} |\langle \varphi_j, \varphi_i \rangle|^2,$$
which implies $\langle \varphi_j, \varphi_i \rangle = \delta_{ij}$, so the frame vectors are orthonormal. To show that any vector can be expanded in terms of the frame vectors, consider the difference
$$d = f - \sum_{j \in J} \langle \varphi_j, f \rangle \varphi_j.$$
This difference must satisfy
$$\|d\|^2 = \sum_{i \in J} |\langle \varphi_i, d \rangle|^2 = \sum_{i \in J} \Big| \langle \varphi_i, f \rangle - \sum_{j \in J} \langle \varphi_j, f \rangle \langle \varphi_i, \varphi_j \rangle \Big|^2.$$
Since the $\varphi_j$ are orthonormal, this is equal to
$$\sum_{i \in J} |\langle \varphi_i, f \rangle - \langle \varphi_i, f \rangle|^2 = 0.$$
Thus, $d = 0$ and $f = \sum_{j \in J} \langle \varphi_j, f \rangle \varphi_j$. It follows that $\{\varphi_j\}$ constitutes an orthonormal basis.

If a frame is not normalized, then the result of Theorem 20 does not hold. We can show this by constructing a frame of $\mathbb{C}^2$ consisting of three vectors that has bounds $A = B = 1$.

Example 21. To construct a frame of $\mathbb{C}^2$ consisting of three vectors that satisfies $A = B = 1$, we must find a matrix in $\mathbb{C}^{3 \times 2}$ that has singular values $\sigma_1 = \sigma_2 = 1$. Such a matrix can be factored as $U S V^*$, where $U \in \mathbb{C}^{3 \times 3}$ and $V \in \mathbb{C}^{2 \times 2}$ are unitary and
$$S = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}.$$

For example, taking $U$ and $V$ to be rotation matrices gives a set of three vectors in $\mathbb{C}^2$, of different and non-unit lengths, that forms a tight frame of $\mathbb{C}^2$ with $A = B = 1$. However, it is clearly not a basis. Theorem 20 is not violated because the frame vectors do not have unit length.

3.4 Frames in Finite Dimensions

We now consider frames in the finite-dimensional Hilbert spaces $\mathbb{C}^n$. We give first a sufficient condition for a set of vectors to constitute a frame.

Lemma 22. Any finite spanning set in $\mathbb{C}^n$ constitutes a frame.

Proof. Suppose $\{\varphi_j\}_{j=1}^{m} \subset \mathbb{C}^n$ and $\mathrm{span}(\{\varphi_j\}) = \mathbb{C}^n$. Define an operator $F : \mathbb{C}^n \to \mathbb{C}^m$ by $(F x)_j = \langle \varphi_j, x \rangle$. $F$ can be written as a matrix $F \in \mathbb{C}^{m \times n}$. This matrix has rank $n$, since the span of its rows is all of $\mathbb{C}^n$ by assumption; it follows that all the singular values of $F$ are nonzero. We denote the largest singular value by $\sigma_1$ and the smallest by $\sigma_n$. Then for any $x \in \mathbb{C}^n$,
$$\sigma_n^2 \|x\|^2 \leq \|F x\|^2 \leq \sigma_1^2 \|x\|^2.$$
Thus, $F$ is a frame operator, $\{\varphi_j\}_{j=1}^{m}$ is a frame of $\mathbb{C}^n$, and the frame bounds are given by the squares of the smallest and largest singular values of $F$ (i.e., the smallest and largest eigenvalues of $F^* F$).

It is also possible to have an infinite frame in finite dimensions, as long as the lengths of the frame vectors go to zero sufficiently fast.

Lemma 23. A countable spanning set $\{\varphi_j\}_{j=1}^{\infty}$ in $\mathbb{C}^n$ is a frame if and only if $\sum_{j=1}^{\infty} \|\varphi_j\|^2 < \infty$.

Proof. Let $\{\varphi_j\}_{j=1}^{\infty}$ be a set that spans $\mathbb{C}^n$. Suppose $\sum_{j=1}^{\infty} \|\varphi_j\|^2 = B < \infty$. For any $x \in \mathbb{C}^n$, the Cauchy-Schwarz inequality gives
$$\|F x\|^2 = \sum_{j=1}^{\infty} |\langle \varphi_j, x \rangle|^2 \leq \|x\|^2 \sum_{j=1}^{\infty} \|\varphi_j\|^2 = B \|x\|^2,$$
so $F$ is bounded above. Since $\{\varphi_j\}$ spans $\mathbb{C}^n$, we can choose a finite subset that also spans $\mathbb{C}^n$, say $\{\varphi_{j_i}\}_{i=1}^{N} \subset \{\varphi_j\}$. By Lemma 22, this subset is a frame; say its lower frame bound is $A$. Then
$$\|F x\|^2 = \sum_{j=1}^{\infty} |\langle \varphi_j, x \rangle|^2 \geq \sum_{i=1}^{N} |\langle \varphi_{j_i}, x \rangle|^2 \geq A \|x\|^2,$$
and $F$ is bounded below. It follows that $F$ is a frame operator and the $\{\varphi_j\}$ are a frame of $\mathbb{C}^n$.

Now suppose $\sum_{j=1}^{\infty} \|\varphi_j\|^2 = \infty$. For any $0 \leq \delta \leq 1$, define a $\delta$-cone around a nonzero vector $x \in \mathbb{C}^n$ to be the subset
$$C_{\delta}(x) = \left\{ y \neq 0 \in \mathbb{C}^n : \frac{|\langle x, y \rangle|}{\|x\| \, \|y\|} \geq 1 - \delta \right\};$$
that is, $C_{\delta}(x)$ is the set of all $y$ such that the angle between $x$ and $y$ is sufficiently small. $C_0(x)$ is just the one-dimensional subspace spanned by $x$, and $C_1(x)$ is all of $\mathbb{C}^n$. Since $\mathbb{C}^n$ is finite-dimensional, for any $\delta$ we can cover $\mathbb{C}^n$ with a finite number of $\delta$-cones. From the pigeonhole principle, then, we conclude that for any $\delta > 0$ there is a $\delta$-cone containing an infinite subset $\{\varphi_{j_k}\} \subset \{\varphi_j\}$ such that $\sum_k \|\varphi_{j_k}\|^2 = \infty$. Let $C_{\delta}(x)$ be such a cone, with $\delta < 1$. By definition of the cone, for every such $j_k$ we have $|\langle \varphi_{j_k}, x \rangle| \geq (1 - \delta) \|\varphi_{j_k}\| \|x\|$, and
$$\|F x\|^2 = \sum_{j=1}^{\infty} |\langle \varphi_j, x \rangle|^2 \geq \sum_k |\langle \varphi_{j_k}, x \rangle|^2 \geq (1 - \delta)^2 \|x\|^2 \sum_k \|\varphi_{j_k}\|^2 = \infty,$$
so $F$ is not bounded above and $\{\varphi_j\}$ is not a frame.

We now have a complete characterization of frames in finite dimensions.

Theorem 24 (Characterization of Frames in Finite Dimensions). Suppose $\{\varphi_j\}$ is a subset (finite or countably infinite) of the Hilbert space $\mathbb{C}^n$. Then $\{\varphi_j\}$ forms a frame for $\mathbb{C}^n$ if and only if it spans $\mathbb{C}^n$ and satisfies $\sum_j \|\varphi_j\|^2 < \infty$.

Proof. This theorem follows directly from Lemmas 22 and 23.

Corollary 25. A set $\{\varphi_j\}$ of unit vectors in $\mathbb{C}^n$ is a normalized frame if and only if it is finite and spans $\mathbb{C}^n$.

Proof. A finite spanning set of unit vectors is a frame by Lemma 22, and it is normalized by definition. Conversely, every frame spans $\mathbb{C}^n$, and a normalized frame cannot be infinite, because then we would have $\sum_j \|\varphi_j\|^2 = \infty$.

3.4.1 Tight Frames in Finite Dimensions

We now turn our attention to tight frames in finite-dimensional Hilbert spaces. We will only consider tight frames with finitely many frame vectors, although it is possible to have a tight frame in $\mathbb{C}^n$ that contains infinitely many vectors. For example, if we take any sequence $\{\varphi_{1,k}, \varphi_{2,k}, \ldots, \varphi_{n,k}\}_{k \in \mathbb{N}}$ of orthonormal bases of $\mathbb{C}^n$, then the set $\{2^{-k} \varphi_{1,k}, 2^{-k} \varphi_{2,k}, \ldots, 2^{-k} \varphi_{n,k} : k \in \mathbb{N}\}$ is a tight frame, since for any $f \in \mathbb{C}^n$ we have
$$\sum_{k=1}^{\infty} \sum_{j=1}^{n} |\langle 2^{-k} \varphi_{j,k}, f \rangle|^2 = \sum_{k=1}^{\infty} 2^{-2k} \sum_{j=1}^{n} |\langle \varphi_{j,k}, f \rangle|^2 = \|f\|^2 \sum_{k=1}^{\infty} 2^{-2k} < \infty.$$

Finite tight frames in $\mathbb{C}^n$ can be characterized by their frame operators; a set of $m$ vectors in $\mathbb{C}^n$ ($m \geq n$) is a tight frame if and only if the associated frame operator $F \in \mathbb{C}^{m \times n}$ has singular values $\sigma_1 = \cdots = \sigma_n = \sigma > 0$. The frame bound is given by $A = \sigma^2$. Such an operator can be factored using the singular value decomposition as $F = \sigma U I_{m \times n} V^*$, where $I_{m \times n}$ is given by the first $n$ columns of the $m \times m$ identity matrix, $U$ is a unitary matrix in $\mathbb{C}^{m \times m}$, and $V$ is a unitary matrix in $\mathbb{C}^{n \times n}$.

General tight frames which are not normalized do not have any easily recognizable symmetry. For example, a randomly generated tight frame in $\mathbb{R}^2$ can be produced by choosing the frame vectors to be the rows of $U I_{m \times n} V^*$, where $U$ is a randomly generated unitary matrix, $V$ is a random unitary matrix, and $I_{m \times n}$ is the matrix described above; the resulting vectors exhibit no obvious symmetry.
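This recipe is straightforward to implement. The sketch below is my own (the random unitaries are obtained from QR factorizations of Gaussian matrices, an assumption not made in the text): it forms $F = \sigma U I_{m \times n} V^*$, confirms that all singular values equal $\sigma$, and shows that the resulting frame vectors generally do not have unit length.

```python
import numpy as np

def random_unitary(k, rng):
    """A random unitary matrix from the QR factorization of a complex Gaussian matrix."""
    Q, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))
    return Q

rng = np.random.default_rng(8)
m, n, sigma = 5, 2, 1.0
U, V = random_unitary(m, rng), random_unitary(n, rng)
I_mn = np.eye(m, n)                        # first n columns of the m x m identity matrix

F = sigma * U @ I_mn @ V.conj().T          # frame operator of a tight frame

# All n singular values equal sigma, so F*F = sigma^2 I and the frame is tight
# with frame bound A = sigma^2.
assert np.allclose(np.linalg.svd(F, compute_uv=False), sigma)
assert np.allclose(F.conj().T @ F, sigma**2 * np.eye(n))

# The frame vectors (the rows) generally have different, non-unit lengths.
print(np.linalg.norm(F, axis=1))
```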

A tight frame containing 5 vectors can be generated using the same procedure, and it likewise shows no particular geometric pattern.

3.4.2 Finite Normalized Tight Frames

We have seen that a general tight frame may not have clear symmetry. A normalized tight frame, on the other hand, exhibits a great deal of symmetry. For example, in $\mathbb{R}^3$, the vertices of any Platonic solid (the regular tetrahedron, cube, octahedron, dodecahedron, and icosahedron) centered at the origin form a normalized tight frame.

The finite normalized tight frames have only recently been fully classified in the literature for every Euclidean space $\mathbb{C}^n$. We will present only a few illustrative results, and point the reader to reference [3] for a complete description. All results in this section are taken from that paper.

The frame bound for a finite, normalized tight frame measures the redundancy of the frame, in the sense that it is the ratio of the number of frame vectors to the dimension of the space.

Theorem 27 (Frame Bound for a Normalized Tight Frame in $\mathbb{C}^n$). If $\{\varphi_j\}_{j=1}^{m}$ is a normalized, tight frame in $\mathbb{C}^n$, then the frame bound is $A = \frac{m}{n}$. That is, for any $x \in \mathbb{C}^n$,
$$\sum_{j=1}^{m} |\langle \varphi_j, x \rangle|^2 = \frac{m}{n} \|x\|^2.$$
In other words, the frame bound of a normalized, finite tight frame is entirely determined by the number of vectors in the frame and the dimension of the space.

Proof. Let $\{e_j\}_{j=1}^{n}$ be an orthonormal basis for $\mathbb{C}^n$. If $\{\varphi_i\}_{i=1}^{m}$ is a tight frame with bound $A$, then by definition, for any $x \in \mathbb{C}^n$ we have $A \|x\|^2 = \sum_{i=1}^{m} |\langle \varphi_i, x \rangle|^2$. Then
$$A n = \sum_{j=1}^{n} A \|e_j\|^2 = \sum_{j=1}^{n} \sum_{i=1}^{m} |\langle \varphi_i, e_j \rangle|^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} |\langle \varphi_i, e_j \rangle|^2 = \sum_{i=1}^{m} \|\varphi_i\|^2 = m,$$
so $A = \frac{m}{n}$.

Corollary 28. The finite, normalized tight frames in $\mathbb{C}^n$ consisting of $n$ vectors are precisely the orthonormal bases of $\mathbb{C}^n$.

Proof. We have shown that a normalized frame is an orthonormal basis if and only if $A = B = 1$. Combining this with the previous theorem gives the result.

Theorem 26 (Existence of Normalized Tight Frames in $\mathbb{C}^n$). For all integers $k \geq n$, there exist normalized tight frames of $\mathbb{C}^n$ consisting of $k$ nonzero vectors.

So, for example, in $\mathbb{R}^3$ the vertices of a dodecahedron form a normalized tight frame with bound $\frac{20}{3}$, since a dodecahedron has 20 vertices.

For the purpose of illustration we present, without proof, a classification of finite normalized tight frames in $\mathbb{R}^2$.

Theorem 29 (Classification of Finite Normalized Tight Frames in $\mathbb{R}^2$). Identify elements of $\mathbb{R}^2$ with numbers in $\mathbb{C}$ according to the map $\begin{pmatrix} x \\ y \end{pmatrix} \mapsto z = x + i y$. A sequence of unit vectors $\{v_n\} \subset \mathbb{R}^2$ is a finite, normalized tight frame if and only if the corresponding sequence $\{z_n\} \subset \mathbb{C}$ satisfies
$$\sum_n z_n^2 = 0.$$
Note that the $z_n$ must lie on the unit circle in $\mathbb{C}$.
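The classification, together with the $m/n$ frame-bound formula of Theorem 27, can be checked on a concrete example: the five unit vectors sitting at the fifth roots of unity have complex squares summing to zero and form a normalized tight frame with bound $A = m/n = 5/2$. The short numpy sketch below is my own example, not one from the paper.

```python
import numpy as np

m, n = 5, 2
theta = 2.0 * np.pi * np.arange(m) / m
F = np.column_stack([np.cos(theta), np.sin(theta)])   # rows: unit vectors at the 5th roots of unity

# Classification condition in R^2: sum_k z_k^2 = 0 for z_k = x_k + i y_k.
z = F[:, 0] + 1j * F[:, 1]
assert np.isclose(np.sum(z**2), 0.0)

# Normalized tight frame with frame bound A = m/n = 5/2.
assert np.allclose(np.linalg.norm(F, axis=1), 1.0)
assert np.allclose(F.T @ F, (m / n) * np.eye(n))
```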

As a sanity check, we can show quickly that this theorem holds for the orthonormal bases in $\mathbb{R}^2$. If $\{v_1, v_2\}$ is an orthonormal basis of $\mathbb{R}^2$, the corresponding pair in $\mathbb{C}$ is $\{e^{i\theta}, i e^{i\theta}\}$ for some $\theta \in [0, 2\pi)$. Then
$$z_1^2 + z_2^2 = e^{2 i \theta} - e^{2 i \theta} = 0.$$
Conversely, if $z_1, z_2 \in \mathbb{C}$ have unit magnitude, they can be written as $e^{i \theta_1}$ and $e^{i \theta_2}$ for some $\theta_1, \theta_2 \in [0, 2\pi)$. If they satisfy $z_1^2 + z_2^2 = 0$, then we have $e^{2 i \theta_1} = -e^{2 i \theta_2}$, which implies that $\theta_1 = \theta_2 + \frac{\pi}{2}$ or $\theta_1 = \theta_2 + \frac{3\pi}{2}$ (mod $2\pi$). This means that the corresponding unit vectors $v_1$ and $v_2$ in $\mathbb{R}^2$ are at right angles to each other, and they form an orthonormal basis.

4 Conclusions

We have introduced the concept of frames and gone through the basic definitions and important theorems. The real work, of course, is in constructing frames that can be used in applications. References [1] and [2] provide constructions of wavelet frames and windowed Fourier frames, which have already found great use in signal processing applications. There are no doubt other interesting potential applications of frames. They provide a natural way of finding discretizations of integral transforms, and since the choice of transform is highly dependent on the application, there are certainly useful frames which have not yet been investigated.

The results we have presented about finite frames indicate that a normalized tight frame exhibits a great deal of symmetry, and that these FNTFs can be fully classified. The infinite-dimensional analogue has not, to my knowledge, been fully explored; similar classification results for infinite-dimensional normalized tight frames could give some insight into the concept of symmetry and equidistribution in the infinite-dimensional setting.

5 Bibliography

1. A Wavelet Tour of Signal Processing, Stephane Mallat.
2. Ten Lectures on Wavelets, Ingrid Daubechies.
3. Finite Normalized Tight Frames, John J. Benedetto and Matthew Fickus, Advances in Computational Mathematics 18:357-385, 2003.
