
# A GENERALIZATION OF GRAM-SCHMIDT ORTHOGONALIZATION GENERATING ALL PARSEVAL FRAMES

PETER G. CASAZZA AND GITTA KUTYNIOK

**Abstract.** Given an arbitrary finite sequence of vectors in a finite-dimensional Hilbert space, we describe an algorithm which computes a Parseval frame for the subspace generated by the input vectors while preserving redundancy exactly. We further investigate several of its properties. Finally, we apply the algorithm to several numerical examples.

## 1. Introduction

Let $\mathcal{H}$ be a finite-dimensional Hilbert space. A sequence $(f_i)_{i=1}^n \subseteq \mathcal{H}$ forms a *frame* if there exist constants $0 < A \le B < \infty$ such that

$$A\|g\|^2 \le \sum_{i=1}^n |\langle f_i, g\rangle|^2 \le B\|g\|^2 \quad \text{for all } g \in \mathcal{H}. \tag{1}$$

Frames have turned out to be an essential tool for many applications such as, for example, data transmission, due to their robustness not only against noise but also against losses, and due to their freedom in design [4, 6]. Their main advantage lies in the fact that a frame can be designed to be redundant while still providing a reconstruction formula. Since the frame operator $Sg = \sum_{i=1}^n \langle g, f_i\rangle f_i$ is invertible, each vector $g \in \mathcal{H}$ can always be reconstructed from the values $(\langle g, f_i\rangle)_{i=1}^n$ via

$$g = S^{-1}Sg = \sum_{i=1}^n \langle g, f_i\rangle S^{-1}f_i.$$

However, the inverse frame operator is usually very complicated to compute. This difficulty can be avoided by choosing a frame whose frame operator equals the identity. This is one reason why *Parseval frames*, i.e., frames for which $S = \mathrm{Id}$, or equivalently for which $A$ and $B$ in (1) can be chosen as $A = B = 1$, enjoy rapidly increasing attention. Another reason is that quite recently it was shown by Benedetto and Fickus [1] that in $\mathbb{R}^d$ as well as in $\mathbb{C}^d$ finite equal-norm Parseval frames, i.e., finite Parseval frames whose elements all have the same norm, are exactly those sequences which are in equilibrium under the so-called frame force, which parallels a Coulomb potential law in electrostatics.
In fact, they demonstrate that in this setting both orthonormal sets and finite equal-norm Parseval frames arise from the same optimization problem.

*Date:* January.
*Mathematics Subject Classification.* Primary 42C15; Secondary 46C99, 94A12.
*Key words and phrases.* Finite-dimensional Hilbert space, Gram-Schmidt orthogonalization, linear dependence, Parseval frame, redundancy.

Thus, in general, Parseval frames are perceived as the most natural generalization of orthonormal bases. For more details on frame theory we refer to the survey article [3] and the book [5].

Given a finite sequence in a finite-dimensional Hilbert space, our driving motivation is to determine a Parseval frame for the subspace generated by this sequence. We could apply Gram-Schmidt orthogonalization, which would yield an orthonormal basis for this subspace together with a maximal number of zero vectors. But redundancy is the crucial point for many applications. Hence we need to generalize Gram-Schmidt orthogonalization so that it can also produce Parseval frames which are not orthonormal bases, preferably while preserving redundancy exactly. In particular, the algorithm shall be applicable to sequences of linearly dependent vectors.

Our algorithm is designed to be iterative in the sense that in each step one vector is added to an already modified set of vectors, and the new set is then adjusted again. In each iteration it not only computes a Parseval frame for the span of the sequence of vectors already dealt with at this point, but also preserves redundancy in an exact way. Moreover, it reduces to Gram-Schmidt orthogonalization if applied to a sequence of linearly independent vectors, and each time a linearly dependent vector is added, the algorithm computes the Parseval frame which is closest in $\ell^2$-norm to the already modified sequence of vectors.

The paper is organized as follows. In Section 2 we first state the algorithm and show that it in fact generates a special Parseval frame in each iteration. Additional properties of the algorithm, such as the preservation of redundancy, are treated in Section 3. Finally, in Section 4 we first compare the complexity of our algorithm with that of the Gram-Schmidt orthogonalization, and then study the different steps of the algorithm applied to several numerical examples.
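The two facts used in the introduction, reconstruction via the dual frame and the passage from a frame to a Parseval frame via $S^{-1/2}$, can be checked numerically. The following NumPy sketch uses a made-up three-vector frame for $\mathbb{R}^2$ (the data is ours, chosen only for illustration):

```python
import numpy as np

# Hypothetical redundant frame for R^2: the rows of F are the frame vectors f_i.
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Frame operator S g = sum_i <g, f_i> f_i, i.e. S = F^T F.
S = F.T @ F

# Reconstruction from the frame coefficients <g, f_i> via the dual frame (S^{-1} f_i).
g = np.array([2.0, -3.0])
coeffs = F @ g                                  # (<g, f_i>)_{i=1}^n
g_rec = np.linalg.inv(S) @ (F.T @ coeffs)       # sum_i <g, f_i> S^{-1} f_i
assert np.allclose(g_rec, g)

# Applying S^{-1/2} to every frame vector yields a Parseval frame ([3, Theorem 4.2]).
evals, evecs = np.linalg.eigh(S)
S_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
G = F @ S_inv_sqrt                              # rows are S^{-1/2} f_i
assert np.allclose(G.T @ G, np.eye(2))          # frame operator of the new sequence is Id
```

Note that the three rows of `G` are redundant (three vectors in a two-dimensional space) yet still satisfy the Parseval identity, which is exactly the behavior the algorithm below is designed to preserve.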
## 2. The algorithm

Throughout this paper let $\mathcal{H}$ denote a finite-dimensional Hilbert space. We start by describing our iterative algorithm. On input $n \in \mathbb{N}$ and $f = (f_i)_{i=1}^n \subseteq \mathcal{H}$, the procedure GGSP (Generalized Gram-Schmidt orthogonalization to compute Parseval frames) outputs a Parseval frame $g = (g_i)_{i=1}^n \subseteq \mathcal{H}$ for $\mathrm{span}\{(f_i)_{i=1}^n\}$ with special properties (see Theorem 2.2).

```
procedure GGSP(n, f; g)
 1  for k := 1 to n do
    begin
 2    if f_k = 0 then
 3      g_k := 0;
 4    else
 5      begin
 6      g_k := f_k - sum_{j=1}^{k-1} <f_k, g_j> g_j;
 7      if g_k != 0 then
 8        g_k := g_k / ||g_k||;
 9      else
10        begin
11        for i := 1 to k-1 do
            g_i := g_i + (1 / ||f_k||^2) (1 / sqrt(1 + ||f_k||^2) - 1) <g_i, f_k> f_k;
12        g_k := (1 / sqrt(1 + ||f_k||^2)) f_k;
13        end;
14      end;
15    end;
end.
```

In the remainder of this paper the following notation will be used.

**Notation 2.1.** Let $\Phi$ denote the mapping $(f_i)_{i=1}^n \mapsto (g_i)_{i=1}^n$ of a sequence of vectors in $\mathcal{H}$ to another sequence of vectors in $\mathcal{H}$ given by the procedure GGSP. We will also use the notation $((f_i)_{i=1}^n, g) := (f_1, \ldots, f_n, g)$ for $(f_i)_{i=1}^n \subseteq \mathcal{H}$ and $g \in \mathcal{H}$.

The following result shows that the algorithm not only produces a Parseval frame for $\mathrm{span}\{(f_i)_{i=1}^n\}$, but in each iteration also produces a special Parseval frame for $\mathrm{span}\{(f_i)_{i=1}^k\}$, $k = 1, \ldots, n$. It is well known that applying $S^{-1/2}$ to a sequence of vectors $(f_i)_{i=1}^n$ in $\mathcal{H}$ yields a Parseval frame, where $S$ denotes the frame operator for this sequence (see [3, Theorem 4.2]). Moreover, Theorem 3.1 will show that the Parseval frame $(S^{-1/2}f_i)_{i=1}^n$ is the closest in $\ell^2$-norm to the sequence $(f_i)_{i=1}^n$. However, in general the computation of the operator $S^{-1/2}$ is not very efficient. In fact, in our algorithm we do not compute $S^{-1/2}((f_i)_{i=1}^n)$. Instead, in each iteration in which the added vector is linearly dependent on the already modified vectors, we apply $S^{-1/2}$ to those vectors together with the added one, where now $S$ denotes the frame operator for this new set of vectors. This eases the computation in a significant manner, since the set of already computed vectors forms a Parseval frame, and nevertheless we compute the closest Parseval frame in each iteration. When we add a linearly independent vector, we orthogonalize this one vector by a Gram-Schmidt step. Thus this algorithm is also a generalization of Gram-Schmidt orthogonalization.

**Theorem 2.2.** Let $n \in \mathbb{N}$ and $(f_i)_{i=1}^n \subseteq \mathcal{H}$. Then, for each $k \in \{1, \ldots, n\}$, the sequence of vectors $\Phi((f_i)_{i=1}^k)$ is a Parseval frame for $\mathrm{span}\{(f_i)_{i=1}^k\} = \mathrm{span}\{\Phi((f_i)_{i=1}^k)\}$. In particular, for each $k \in \{1, \ldots, n\}$, the following conditions hold.

(i) If $f_k \in \mathrm{span}\{(f_i)_{i=1}^{k-1}\}$, then $\Phi((f_i)_{i=1}^k) = S^{-1/2}(\Phi((f_i)_{i=1}^{k-1}), f_k)$, where $S$ is the frame operator for $(\Phi((f_i)_{i=1}^{k-1}), f_k)$.

(ii) If $f_k \notin \mathrm{span}\{(f_i)_{i=1}^{k-1}\}$, then
$$\Phi((f_i)_{i=1}^k) = (\Phi((f_i)_{i=1}^{k-1}), g_k),$$
where $g_k \in \mathcal{H}$, $\|g_k\| = 1$, and $g_k \perp \Phi((f_i)_{i=1}^{k-1})$.

*Proof.* We will prove the first claim by induction, and meanwhile in each step we show that, in particular, the claims in (i) and (ii) hold. For this, let $l$ denote the smallest number in $\{1, \ldots, n\}$ with $f_l \neq 0$. Obviously, for each $k \in \{1, \ldots, l-1\}$, the generated set of vectors $g_k = 0$ (see line 3 of GGSP) forms a Parseval frame for $\mathrm{span}\{(f_i)_{i=1}^k\} = \{0\}$, and also (i) is fulfilled. The hypothesis in (ii) does not apply here. Next notice that in the case $k = l$ we have $g_k := f_k/\|f_k\|$ (line 8), which certainly is a Parseval frame for $\mathrm{span}\{(f_i)_{i=1}^k\} = \mathrm{span}\{f_k\}$. It is also easy to see that (i) and (ii) are satisfied.

Now fix some $k \in \{l+1, \ldots, n\}$ and assume that the sequence $(\tilde g_i)_{i=1}^{k-1} := \Phi((f_i)_{i=1}^{k-1})$ is a Parseval frame for $\mathrm{span}\{(f_i)_{i=1}^{k-1}\} = \mathrm{span}\{(\tilde g_i)_{i=1}^{k-1}\}$. We have to study two cases.

*Case 1:* The vector $g_k := f_k - \sum_{j=1}^{k-1} \langle f_k, \tilde g_j\rangle \tilde g_j$ computed in line 6 is trivial. This implies that
$$\mathrm{span}\{(f_i)_{i=1}^k\} = \mathrm{span}\{(\tilde g_i)_{i=1}^{k-1}\} = \mathrm{span}\{(f_i)_{i=1}^{k-1}\}, \tag{2}$$
since otherwise the Gram-Schmidt orthogonalization step would yield a non-trivial vector. In particular, only the hypothesis in (i) applies. Now let $P$ denote the orthogonal projection of $\mathcal{H}$ onto $\mathrm{span}\{f_k\}$. In order to compute $S^{-1/2}$, where $S$ denotes the frame operator for $((\tilde g_i)_{i=1}^{k-1}, f_k)$, we first show that each $(I-P)\tilde g_i$, $i = 1, \ldots, k-1$, is an eigenvector for $S$ with respect to the eigenvalue $1$, or the zero vector. This claim follows immediately from
$$S(I-P)\tilde g_i = \sum_{j=1}^{k-1} \langle (I-P)\tilde g_i, \tilde g_j\rangle \tilde g_j + \langle (I-P)\tilde g_i, f_k\rangle f_k = \sum_{j=1}^{k-1} \langle (I-P)\tilde g_i, \tilde g_j\rangle \tilde g_j = (I-P)\tilde g_i,$$
since $(\tilde g_i)_{i=1}^{k-1}$ is a Parseval frame for $\mathrm{span}\{(\tilde g_i)_{i=1}^{k-1}\}$. Also $f_k$ is an eigenvector for $S$, but with respect to the eigenvalue $1 + \|f_k\|^2$, which is proven by the following calculation:
$$Sf_k = \sum_{j=1}^{k-1} \langle f_k, \tilde g_j\rangle \tilde g_j + \langle f_k, f_k\rangle f_k = f_k + \|f_k\|^2 f_k = (1 + \|f_k\|^2)\, f_k.$$
Using $f_k$ as an eigenbasis for $P(\mathrm{span}\{(\tilde g_i)_{i=1}^{k-1}\})$ and an arbitrary eigenbasis for $(I-P)(\mathrm{span}\{(\tilde g_i)_{i=1}^{k-1}\})$, we can diagonalize $S$ to compute $S^{-1/2}$. This together with the fact that each $(I-P)\tilde g_i$, $i = 1, \ldots, k-1$, is an eigenvector for $S$ with respect to the eigenvalue $1$ and that $S(I-P)f_k = 0$ yields
$$S^{-1/2}\tilde g_i = \frac{1}{\sqrt{1 + \|f_k\|^2}}\, P\tilde g_i + (I-P)\tilde g_i \quad \text{for } 1 \le i \le k-1
$$

and
$$S^{-1/2} f_k = \frac{1}{\sqrt{1 + \|f_k\|^2}}\, f_k.$$
Comparing these equalities with lines 11 and 12 of GGSP shows that in fact $\Phi((f_i)_{i=1}^k) = S^{-1/2}((\tilde g_i)_{i=1}^{k-1}, f_k)$, which is (i). By [3, Theorem 4.2] and (2), this immediately implies that the sequence $\Phi((f_i)_{i=1}^k)$ is a Parseval frame for $\mathrm{span}\{\Phi((f_i)_{i=1}^k)\} = \mathrm{span}\{(f_i)_{i=1}^k\}$.

*Case 2:* The condition in line 7 applies, i.e., we have
$$g_k := \Big(f_k - \sum_{j=1}^{k-1} \langle f_k, \tilde g_j\rangle \tilde g_j\Big) \Big/ \Big\|f_k - \sum_{j=1}^{k-1} \langle f_k, \tilde g_j\rangle \tilde g_j\Big\|.$$
Then we set $g_i := \tilde g_i$ for all $i = 1, \ldots, k-1$. Obviously, $\|g_k\| = 1$. Moreover, since by induction hypothesis $(\tilde g_i)_{i=1}^{k-1}$ forms a Parseval frame, for each $i = 1, \ldots, k-1$ we have
$$\Big\langle g_i,\, f_k - \sum_{j=1}^{k-1} \langle f_k, g_j\rangle g_j \Big\rangle = \langle g_i, f_k\rangle - \langle g_i, f_k\rangle = 0.$$
Thus $g_k$ is a normalized vector which is orthogonal to $g_1, \ldots, g_{k-1}$. Hence (ii) is satisfied and, for all $h \in \mathrm{span}\{(g_i)_{i=1}^k\}$, we obtain
$$\sum_{i=1}^k |\langle h, g_i\rangle|^2 = \sum_{i=1}^{k-1} |\langle (I-P)h, g_i\rangle|^2 + |\langle Ph, g_k\rangle|^2 = \|(I-P)h\|^2 + \|Ph\|^2 = \|h\|^2,$$
where $P$ denotes the orthogonal projection of $\mathcal{H}$ onto $\mathrm{span}\{g_k\}$. This proves that $(g_i)_{i=1}^k = \Phi((f_i)_{i=1}^k)$ is a Parseval frame for $\mathrm{span}\{\Phi((f_i)_{i=1}^k)\}$. Moreover, we have
$$\mathrm{span}\{\Phi((f_i)_{i=1}^k)\} = \mathrm{span}\Big\{(f_i)_{i=1}^{k-1},\ f_k - \sum_{j=1}^{k-1}\langle f_k, g_j\rangle g_j\Big\} = \mathrm{span}\{(f_i)_{i=1}^k\}.$$
This finishes the proof, since the hypothesis in (i) does not apply in this case. □

The algorithm can be seen as a Gram-Schmidt procedure run backwards in the sense that in each iteration, if the added vector is linearly dependent on the already computed vectors, not only this vector is modified, but also all the other vectors are rearranged with respect to the new vector so that the collection forms a Parseval frame. This way of computation will be demonstrated by several examples in Subsection 4.2.

## 3. Special properties of the algorithm

In this section we first determine in general which Parseval frame is the closest to the initial sequence, and study which properties of our algorithm this result implies.
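The procedure GGSP is simple enough to transcribe directly into code, which makes Theorem 2.2 easy to experiment with. The following NumPy sketch is our own rendering of lines 1-15 of the procedure (function name, tolerance handling, and test data are ours, not from the paper); it checks that the output is a Parseval frame for the span of the input:

```python
import numpy as np

def ggsp(f, tol=1e-12):
    """Sketch of the procedure GGSP: f is an array whose rows are f_1, ..., f_n;
    returns an array whose rows g_1, ..., g_n form a Parseval frame for span(f)."""
    g = []
    for fk in f:
        if np.linalg.norm(fk) <= tol:                       # lines 2-3: f_k = 0
            g.append(np.zeros_like(fk))
            continue
        # line 6: Gram-Schmidt step against the already computed vectors
        r = fk - sum((np.dot(fk, gj) * gj for gj in g), np.zeros_like(fk))
        if np.linalg.norm(r) > tol:                         # lines 7-8: independent vector
            g.append(r / np.linalg.norm(r))
        else:                                               # lines 11-12: dependent vector
            c = 1.0 / np.sqrt(1.0 + np.dot(fk, fk))         # 1 / sqrt(1 + ||f_k||^2)
            g = [gi + (c - 1.0) * np.dot(gi, fk) / np.dot(fk, fk) * fk for gi in g]
            g.append(c * fk)
    return np.array(g)

# Three vectors in R^2; the third is linearly dependent on the first two,
# so the third iteration takes the rearranging branch (lines 11-12).
f = np.array([[1.0, 0.1], [1.0, 0.2], [1.0, 0.3]])
G = ggsp(f)
assert np.allclose(G.T @ G, np.eye(2))   # Parseval: the frame operator is the identity
```

Running the loop with the assertion moved inside would confirm the stronger statement of Theorem 2.2, that the partial output is a Parseval frame for the partial span after every iteration.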
Then we investigate several additional properties of the procedure GGSP; in particular, we characterize those sequences which lead to orthonormal bases, and we show that $\Phi$, regarded as a map from finite sequences to Parseval frames, is almost bijective. At last, we examine the redundancy of the generated Parseval frame.

Given a sequence $(f_i)_{i=1}^n$ with frame operator $S$, by [3, Theorem 4.2] the sequence $(S^{-1/2}f_i)_{i=1}^n$ always forms a Parseval frame. The following result shows that this sequence can in fact be characterized as the unique Parseval frame which is closest to $(f_i)_{i=1}^n$ in $\ell^2$-norm.

**Theorem 3.1.** If $(f_i)_{i=1}^n \subseteq \mathcal{H}$, $n \in \mathbb{N}$, is any frame for $\mathcal{H}$ with frame operator $S$, then
$$\sum_{i=1}^n \|f_i - S^{-1/2}f_i\|^2 = \inf\Big\{\sum_{i=1}^n \|f_i - g_i\|^2 : (g_i)_{i=1}^n \text{ is a Parseval frame for } \mathcal{H}\Big\}.$$
Moreover, $(S^{-1/2}f_i)_{i=1}^n$ is the unique minimizer.

*Proof.* Let $(e_j)_{j=1}^d$, $d := \dim \mathcal{H}$, be an orthonormal eigenvector basis for $\mathcal{H}$ with respect to $S$, with respective eigenvalues $(\lambda_j)_{j=1}^d$. Then we can rewrite the left-hand side of the claimed equality in the following way:
$$\sum_{i=1}^n \|f_i - S^{-1/2}f_i\|^2 = \sum_{i=1}^n \Big\| \sum_{j=1}^d \langle f_i, e_j\rangle e_j - \sum_{j=1}^d \frac{1}{\sqrt{\lambda_j}}\, \langle f_i, e_j\rangle e_j \Big\|^2 = \sum_{j=1}^d \Big(1 - \frac{1}{\sqrt{\lambda_j}}\Big)^2 \sum_{i=1}^n |\langle f_i, e_j\rangle|^2 = \sum_{j=1}^d \Big(1 - \frac{1}{\sqrt{\lambda_j}}\Big)^2 \lambda_j = \sum_{j=1}^d \big(\lambda_j - 2\sqrt{\lambda_j} + 1\big).$$

Now let $(g_i)_{i=1}^n$ be an arbitrary Parseval frame for $\mathcal{H}$. Using again the eigenbasis and its eigenvalues, together with $\sum_{i=1}^n |\langle f_i, e_j\rangle|^2 = \lambda_j$ and $\sum_{i=1}^n |\langle g_i, e_j\rangle|^2 = 1$, we obtain
$$\sum_{i=1}^n \|f_i - g_i\|^2 = \sum_{j=1}^d \sum_{i=1}^n |\langle f_i, e_j\rangle - \langle g_i, e_j\rangle|^2 = \sum_{j=1}^d \sum_{i=1}^n \Big(|\langle f_i, e_j\rangle|^2 + |\langle g_i, e_j\rangle|^2 - 2\,\mathrm{Re}\big[\langle f_i, e_j\rangle \overline{\langle g_i, e_j\rangle}\big]\Big) = \sum_{j=1}^d \Big(\lambda_j + 1 - 2\sum_{i=1}^n \mathrm{Re}\big[\langle f_i, e_j\rangle \overline{\langle g_i, e_j\rangle}\big]\Big).$$

Moreover, by the Cauchy-Schwarz inequality we have
$$\sum_{i=1}^n \mathrm{Re}\big[\langle f_i, e_j\rangle \overline{\langle g_i, e_j\rangle}\big] \le \sum_{i=1}^n |\langle f_i, e_j\rangle|\,|\langle g_i, e_j\rangle| \le \Big(\sum_{i=1}^n |\langle f_i, e_j\rangle|^2\Big)^{1/2} \Big(\sum_{i=1}^n |\langle g_i, e_j\rangle|^2\Big)^{1/2} = \sqrt{\lambda_j}.$$
Combining this estimate with the computations above yields
$$\sum_{i=1}^n \|f_i - g_i\|^2 \ge \sum_{j=1}^d \big(\lambda_j - 2\sqrt{\lambda_j} + 1\big) = \sum_{i=1}^n \|f_i - S^{-1/2}f_i\|^2.$$
Since $(S^{-1/2}f_i)_{i=1}^n$ is a Parseval frame for $\mathcal{H}$, the first claim follows.

For the moreover part, suppose that $(g_i)_{i=1}^n$ is another minimizer. Then, by the above calculation, for each $k \in \{1, \ldots, n\}$ and $j \in \{1, \ldots, d\}$ we have
$$\mathrm{Re}\big[\langle f_k, e_j\rangle \overline{\langle g_k, e_j\rangle}\big] = |\langle f_k, e_j\rangle|\,|\langle g_k, e_j\rangle| \tag{3}$$
and, for each $j \in \{1, \ldots, d\}$,
$$\sum_{k=1}^n |\langle f_k, e_j\rangle|\,|\langle g_k, e_j\rangle| = \Big(\sum_{k=1}^n |\langle f_k, e_j\rangle|^2\Big)^{1/2} \Big(\sum_{k=1}^n |\langle g_k, e_j\rangle|^2\Big)^{1/2}. \tag{4}$$
Now let $r_{k,j}, s_{k,j} \ge 0$ and $\theta_{k,j}, \psi_{k,j} \in [0, 2\pi)$ be such that
$$\langle f_k, e_j\rangle = r_{k,j}\, e^{i\theta_{k,j}} \quad \text{and} \quad \langle g_k, e_j\rangle = s_{k,j}\, e^{i\psi_{k,j}}.$$
We compute
$$\mathrm{Re}\big[\langle f_k, e_j\rangle \overline{\langle g_k, e_j\rangle}\big] = r_{k,j}\, s_{k,j}\, \mathrm{Re}\big[e^{i(\theta_{k,j} - \psi_{k,j})}\big] = r_{k,j}\, s_{k,j} \cos(\theta_{k,j} - \psi_{k,j}).$$
Hence (3) implies that $r_{k,j}\, s_{k,j} \cos(\theta_{k,j} - \psi_{k,j}) = r_{k,j}\, s_{k,j}$, which in turn yields $\theta_{k,j} = \psi_{k,j}$ whenever $r_{k,j}\, s_{k,j} \neq 0$. Thus $\langle g_k, e_j\rangle = t_{k,j}\, \langle f_k, e_j\rangle$ for some $t_{k,j} \ge 0$, for all $k \in \{1, \ldots, n\}$, $j \in \{1, \ldots, d\}$. By (4), i.e., by equality in the Cauchy-Schwarz inequality, for each $j \in \{1, \ldots, d\}$ there exists some $u_j \ge 0$ such that
$$u_j\, \langle f_k, e_j\rangle = \langle g_k, e_j\rangle = t_{k,j}\, \langle f_k, e_j\rangle.$$
This implies $t_{k,j} = u_j$ for all $k \in \{1, \ldots, n\}$ with $\langle f_k, e_j\rangle \neq 0$. Hence, for each $k \in \{1, \ldots, n\}$ and $j \in \{1, \ldots, d\}$, we obtain the relation
$$\langle g_k, e_j\rangle = u_j\, \langle f_k, e_j\rangle. \tag{5}$$
Since $(g_i)_{i=1}^n$ is a Parseval frame for $\mathcal{H}$, we have
$$\sum_{k=1}^n |\langle g_k, e_j\rangle|^2 = u_j^2 \sum_{k=1}^n |\langle f_k, e_j\rangle|^2 = u_j^2\, \lambda_j = 1.$$

This shows that $u_j = 1/\sqrt{\lambda_j}$. Thus, using (5) and the definition of $(e_j)_{j=1}^d$ and $(\lambda_j)_{j=1}^d$, it follows that $g_k = S^{-1/2}f_k$ for all $k \in \{1, \ldots, n\}$. □

This result together with Theorem 2.2 (i) implies the following property of our algorithm.

**Corollary 3.2.** In each iteration of GGSP in which a linearly dependent vector is added, the algorithm computes the unique Parseval frame which is closest to the frame consisting of the already computed vectors and the added one.

Next we characterize those sequences of vectors for which the algorithm computes an orthonormal basis. The proof will show that this is exactly the case when only the steps of the Gram-Schmidt orthogonalization are carried out.

**Proposition 3.3.** Let $(f_i)_{i=1}^n \subseteq \mathcal{H}$, $n \in \mathbb{N}$. The following conditions are equivalent.

(i) The sequence $\Phi((f_i)_{i=1}^n)$ is an orthonormal basis for $\mathrm{span}\{(f_i)_{i=1}^n\}$.

(ii) The sequence $(f_i)_{i=1}^n$ is linearly independent.

*Proof.* If (ii) holds, only lines 6-8 of GGSP will be performed, and these steps coincide with Gram-Schmidt orthogonalization, hence produce an orthonormal system. Now suppose that (ii) does not hold. This is equivalent to $\dim(\mathrm{span}\{(f_i)_{i=1}^n\}) < n$. By Theorem 2.2, we have $\mathrm{span}\{(f_i)_{i=1}^n\} = \mathrm{span}\{\Phi((f_i)_{i=1}^n)\}$. This in turn implies $\dim(\mathrm{span}\{\Phi((f_i)_{i=1}^n)\}) < n$. Thus $\Phi((f_i)_{i=1}^n)$ cannot form an orthonormal basis for $\mathrm{span}\{(f_i)_{i=1}^n\}$. □

The mapping $\Phi$, given by the procedure GGSP, of a finite sequence in $\mathcal{H}$ to a Parseval frame for a subspace of $\mathcal{H}$ is almost bijective in the following sense.

**Proposition 3.4.** Let $\Phi$ be the mapping defined in the previous paragraph. Then $\Phi$ satisfies the following conditions.

(i) $\Phi$ is surjective.

(ii) For each Parseval frame $(g_i)_{i=1}^n \subseteq \mathcal{H}$, the set $\Phi^{-1}((g_i)_{i=1}^n)$ equals
$$\Big\{(f_i)_{i=1}^n :\ f_i = \tilde f_i \text{ if } \mathrm{span}\{(\tilde f_j)_{j=1}^i\} = \mathrm{span}\{(\tilde f_j)_{j=1}^{i-1}\};\ \ f_i = \lambda \tilde f_i + \varphi,\ \lambda \in \mathbb{R}^+,\ \varphi \in \mathrm{span}\{(\tilde f_j)_{j=1}^{i-1}\}, \text{ otherwise}\Big\} \tag{6}$$
for some $(\tilde f_i)_{i=1}^n \in \Phi^{-1}((g_i)_{i=1}^n)$.

*Proof.* It is easy to see that each step of the procedure GGSP is reversible, which implies (i).
To prove (ii), we first show that the set (6) is contained in $\Phi^{-1}((g_i)_{i=1}^n)$. For this, let $(f_i)_{i=1}^n$ be an element of the set (6). Notice that, by definition of $(f_i)_{i=1}^n$, we have $\mathrm{span}\{(f_i)_{i=1}^k\} = \mathrm{span}\{(\tilde f_i)_{i=1}^k\}$ for all $k \in \{1, \ldots, n\}$. Since $(\tilde f_i)_{i=1}^n \in \Phi^{-1}((g_i)_{i=1}^n)$, we only have to study the case $\mathrm{span}\{(f_i)_{i=1}^{k-1}\} \neq \mathrm{span}\{(f_i)_{i=1}^k\}$ for some $k \in \{1, \ldots, n\}$. But then line 8 of GGSP will be performed. Let $\Phi((f_i)_{i=1}^{k-1})$ be denoted by $(\tilde g_i)_{i=1}^{k-1}$. By

Theorem 2.2, the sequence $(\tilde g_i)_{i=1}^{k-1}$ forms a Parseval frame for $\mathrm{span}\{(f_i)_{i=1}^{k-1}\}$. Hence, writing $f_k = \lambda \tilde f_k + \varphi$,
$$\frac{\lambda \tilde f_k + \varphi - \sum_{j=1}^{k-1} \langle \lambda \tilde f_k + \varphi, \tilde g_j\rangle \tilde g_j}{\big\|\lambda \tilde f_k + \varphi - \sum_{j=1}^{k-1} \langle \lambda \tilde f_k + \varphi, \tilde g_j\rangle \tilde g_j\big\|} = \frac{\lambda \tilde f_k - \sum_{j=1}^{k-1} \langle \lambda \tilde f_k, \tilde g_j\rangle \tilde g_j}{\big\|\lambda \tilde f_k - \sum_{j=1}^{k-1} \langle \lambda \tilde f_k, \tilde g_j\rangle \tilde g_j\big\|} = \frac{\tilde f_k - \sum_{j=1}^{k-1} \langle \tilde f_k, \tilde g_j\rangle \tilde g_j}{\big\|\tilde f_k - \sum_{j=1}^{k-1} \langle \tilde f_k, \tilde g_j\rangle \tilde g_j\big\|},$$
since $\varphi - \sum_{j=1}^{k-1} \langle \varphi, \tilde g_j\rangle \tilde g_j = 0$ and $\lambda > 0$. This proves the first claim.

Secondly, suppose $(f_i)_{i=1}^n \subseteq \mathcal{H}$ is not an element of (6). We claim that $\Phi((f_i)_{i=1}^n) \neq \Phi((\tilde f_i)_{i=1}^n)$, which finishes the proof. Let $k \in \{1, \ldots, n\}$ be the largest number such that $f_k$ does not satisfy the conditions in (6). We have to study two cases.

*Case 1:* Suppose that $f_k \neq \tilde f_k$, but $\mathrm{span}\{(f_i)_{i=1}^k\} = \mathrm{span}\{(f_i)_{i=1}^{k-1}\}$. Then in the $k$th iteration line 12 will be performed, and we obtain
$$h_k := \frac{1}{\sqrt{1 + \|f_k\|^2}}\, f_k \neq \frac{1}{\sqrt{1 + \|\tilde f_k\|^2}}\, \tilde f_k =: \tilde h_k,$$
since $f_k \neq \tilde f_k$. Thus $\Phi((f_i)_{i=1}^k) \neq \Phi((\tilde f_i)_{i=1}^k)$. If in the following iterations the condition in line 7 always applies, we are done, since $h_k$ and $\tilde h_k$ are not changed anymore. Now suppose that there exists $l \in \{1, \ldots, n\}$, $l > k$, with $\mathrm{span}\{(f_i)_{i=1}^l\} = \mathrm{span}\{(f_i)_{i=1}^{l-1}\}$. Then in the $l$th iteration $h_k$ and $\tilde h_k$ are modified in line 11. Since $f_l = \tilde f_l$ by the choice of $k$, using a reformulation of line 11, we still have
$$h_k := \frac{1}{\sqrt{1 + \|f_l\|^2}}\, P h_k + (I-P) h_k \neq \frac{1}{\sqrt{1 + \|f_l\|^2}}\, P \tilde h_k + (I-P) \tilde h_k =: \tilde h_k,$$
where $P$ denotes the orthogonal projection onto $\mathrm{span}\{f_l\}$.

*Case 2:* Suppose that $f_k \neq \lambda \tilde f_k + \varphi$ for each $\lambda \in \mathbb{R}^+$ and $\varphi \in \mathrm{span}\{(f_i)_{i=1}^{k-1}\}$, and also $\mathrm{span}\{(f_i)_{i=1}^k\} \neq \mathrm{span}\{(f_i)_{i=1}^{k-1}\}$. Let $(h_i)_{i=1}^{k-1}$ and $(\tilde h_i)_{i=1}^{k-1}$ denote $\Phi((f_i)_{i=1}^{k-1})$ and $\Phi((\tilde f_i)_{i=1}^{k-1})$, respectively. If $(h_i)_{i=1}^{k-1} = (\tilde h_i)_{i=1}^{k-1}$, the computation in line 8 in the $k$th iteration yields
$$h_k := \frac{f_k - \sum_{j=1}^{k-1} \langle f_k, h_j\rangle h_j}{\big\|f_k - \sum_{j=1}^{k-1} \langle f_k, h_j\rangle h_j\big\|} \neq \frac{\tilde f_k - \sum_{j=1}^{k-1} \langle \tilde f_k, \tilde h_j\rangle \tilde h_j}{\big\|\tilde f_k - \sum_{j=1}^{k-1} \langle \tilde f_k, \tilde h_j\rangle \tilde h_j\big\|} =: \tilde h_k.$$
If $(h_i)_{i=1}^{k-1} \neq (\tilde h_i)_{i=1}^{k-1}$, then there exists some $l \in \{1, \ldots, k-1\}$ with $h_l \neq \tilde h_l$. In both situations these inequalities remain valid, as was shown in the preceding paragraph. □

An important aspect of our algorithm is the redundancy of the computed frame. Hence it is desirable to know in which way redundancy is preserved throughout the algorithm.
For this, we introduce a suitable definition of redundancy for sequences in a finite-dimensional Hilbert space.

**Definition 3.5.** Let $(f_i)_{i=1}^n \subseteq \mathcal{H}$, $n \in \mathbb{N}$. Then the redundancy $\mathrm{red}((f_i)_{i=1}^n)$ of this sequence is defined by
$$\mathrm{red}((f_i)_{i=1}^n) = \frac{n}{\dim(\mathrm{span}\{(f_i)_{i=1}^n\})},$$
where we set $\frac{n}{0} = \infty$.

Indeed, in each iteration our algorithm preserves redundancy in an exact way.

**Proposition 3.6.** Let $(f_i)_{i=1}^n \subseteq \mathcal{H}$, $n \in \mathbb{N}$. Then $\mathrm{red}(\Phi((f_i)_{i=1}^n)) = \mathrm{red}((f_i)_{i=1}^n)$.

*Proof.* By Theorem 2.2, we have $\mathrm{span}\{(f_i)_{i=1}^n\} = \mathrm{span}\{\Phi((f_i)_{i=1}^n)\}$. From this, the claim follows immediately. □

## 4. Implementation of the algorithm

We will first compare the numerical complexities of the Gram-Schmidt orthogonalization and of GGSP. In a second part, the procedure GGSP will be applied to several numerical examples in order to visualize the modifications of the vectors while performing the algorithm.

### 4.1. Numerical complexity

In this subsection it will turn out that the numerical complexity of our algorithm coincides with that of the Gram-Schmidt orthogonalization. In particular, we will compare the complexity of the Gram-Schmidt step in lines 6 and 8 of GGSP with the complexity of the step in lines 6, 11, and 12 of GGSP, which is performed if the added vector is linearly dependent on the already modified vectors. For this, suppose the computation of a square root of a real number with some given precision requires $S$ elementary operations (additions, subtractions, multiplications, and divisions). Let $d$ denote the dimension of $\mathcal{H}$, and let $k \in \{1, \ldots, n\}$. An easy computation shows that the number of elementary operations in lines 6 and 8 of GGSP equals $3dk + S$, and the number of elementary operations in lines 6, 11, and 12 of GGSP equals $dk + (S+4)k - 7d - 3$. Hence in both cases the numerical complexity is $O(dk)$; only the constants are slightly larger in the new step, which is performed in case of linear dependency. Thus both the Gram-Schmidt orthogonalization and GGSP possess the same numerical complexity of $O(dn^2)$.

### 4.2. Graphical examples
In order to give further insight into the algorithm, in this subsection we study the different steps of GGSP for three examples. The single steps of each example are illustrated by a diagram. In each of these, the first image in the uppermost row shows the positions of the vectors of the input sequence. The following images then display the remaining original vectors and the modified vectors after each step of the loop in line 1 of GGSP. The original vectors are always marked by a circle, and the already computed new vectors are indicated by a filled circle. The vector which will be dealt with in the next step is marked by a square. Recall that, by Theorem 2.2, in each step the set of vectors marked with a filled circle forms a Parseval frame for their linear span.

*Figure 1. GGSP applied to the sequence of vectors $((1, .1), (1, .2), (1, .3), (1, .4), (1, -.1), (1, -.2), (1, -.3), (1, -.4))$.*

In the first example we consider the sequence of vectors
$$\big((1, .1),\ (1, .2),\ (1, .3),\ (1, .4),\ (1, -.1),\ (1, -.2),\ (1, -.3),\ (1, -.4)\big).$$
Figure 1 shows the modifications of the vectors while performing GGSP. The Gram-Schmidt orthogonalization, which is performed in lines 6-8 of GGSP, applies twice. In all the following steps the added vector is linearly dependent on the already modified vectors. Therefore we have to go through lines 11 and 12, and the vectors already dealt with are rearranged anew in each step.

Figure 2 shows the same example with a different ordering of the vectors. It is no surprise that the generated Parseval frame is completely different from the one obtained in Figure 1, since already the Gram-Schmidt orthogonalization is sensitive to the ordering of the vectors.
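By Corollary 3.2, the rearrangement performed in each dependent step yields the closest Parseval frame, and by Theorem 3.1 the globally closest Parseval frame to a sequence $(f_i)_i$ is $(S^{-1/2}f_i)_i$. This minimality claim can be probed numerically: since applying any orthogonal map $U$ to a Parseval frame produces another Parseval frame, $(S^{-1/2}f_i)_i$ must beat all of its rotated copies in summed squared distance to $(f_i)_i$. A NumPy sketch (the example frame and the helper `dist2` are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# An example frame for R^2 (rows are the f_i); data chosen only for illustration.
F = np.array([[1.0, 0.1],
              [1.0, -0.1],
              [0.5, 0.5]])

# Closest Parseval frame according to Theorem 3.1: rows of G_star are S^{-1/2} f_i.
S = F.T @ F
evals, evecs = np.linalg.eigh(S)
G_star = F @ evecs @ np.diag(evals ** -0.5) @ evecs.T
assert np.allclose(G_star.T @ G_star, np.eye(2))   # G_star is indeed Parseval

def dist2(A, B):
    """sum_i ||a_i - b_i||^2 for two sequences of vectors given as rows."""
    return float(np.sum((A - B) ** 2))

# If Q is orthogonal, the rows of G_star @ Q.T (i.e. the vectors Q g_i) again form
# a Parseval frame; by Theorem 3.1 none of these can be closer to F than G_star.
for _ in range(100):
    U, _, Vt = np.linalg.svd(rng.standard_normal((2, 2)))
    Q = U @ Vt                                     # random orthogonal 2x2 matrix
    assert dist2(F, G_star) <= dist2(F, G_star @ Q.T) + 1e-9
```

This only samples one family of competing Parseval frames, of course; the theorem asserts minimality over all of them.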

*Figure 2. GGSP applied to the sequence of vectors $((1, .4), (1, .3), (1, .2), (1, .1), (1, -.1), (1, -.2), (1, -.3), (1, -.4))$.*

Both generated Parseval frames have in common that the first components of the vectors are almost all positive. Intuitively this is not astonishing, since already all vectors of the input sequence possess a positive first component. The following example gives further evidence for the claim that the generated Parseval frame inherits the geometry of the input sequence in a particular way. Here the vectors of the input sequence are located on the unit circle; in particular, we consider the sequence of vectors
$$\big((1, 0),\ (\sqrt{.5}, \sqrt{.5}),\ (0, 1),\ (-\sqrt{.5}, \sqrt{.5}),\ (-1, 0),\ (-\sqrt{.5}, -\sqrt{.5}),\ (0, -1),\ (\sqrt{.5}, -\sqrt{.5})\big).$$
While performing GGSP the vectors almost keep the geometry of a circle, and the final Parseval frame is located on a slightly deformed circle (see Figure 3). Notice that in the second step of the algorithm the second vector is moved to the position of the third vector $(0, 1)$. Hence in all the following computations these two vectors remain indistinguishable.

*Figure 3. GGSP applied to the sequence of vectors $((1, 0), (\sqrt{.5}, \sqrt{.5}), (0, 1), (-\sqrt{.5}, \sqrt{.5}), (-1, 0), (-\sqrt{.5}, -\sqrt{.5}), (0, -1), (\sqrt{.5}, -\sqrt{.5}))$.*

The graphical examples seem to indicate that to a certain extent output sequences inherit their geometry from the input sequence. For applications it would be especially important to characterize those input sequences which generate equal-norm Parseval frames or, more generally, almost equal-norm Parseval frames (compare [2, Problem 4.4]).

## Acknowledgments

The majority of the research for this paper was performed while the first author was visiting the Department of Mathematics at the University of Paderborn. This author thanks this department for its hospitality and support during this visit. The first author also acknowledges support from NSF DMS.

The second author acknowledges support from DFG grant KU 1446 and the Forschungspreis der Universität Paderborn 2003. The authors are indebted to the referee for valuable comments and suggestions which improved the paper.

## References

[1] J. J. Benedetto and M. Fickus, Finite normalized tight frames, Adv. Comput. Math. 18 (2003).

[2] P. G. Casazza, Custom building finite frames, in: Wavelets, Frames and Operator Theory, 61-86, Contemp. Math. 345, Amer. Math. Soc., Providence, RI, 2004.

[3] P. G. Casazza, The art of frame theory, Taiwanese J. Math. 4 (2000).

[4] P. G. Casazza and J. Kovačević, Equal-norm tight frames with erasures, Adv. Comput. Math. 18 (2003).

[5] O. Christensen, An Introduction to Frames and Riesz Bases, Birkhäuser, Boston, 2003.

[6] V. K. Goyal, J. Kovačević, and J. A. Kelner, Quantized frame expansions with erasures, Appl. Comput. Harmon. Anal. 10 (2001).

Department of Mathematics, University of Missouri, Columbia, Missouri 65211, USA

Mathematical Institute, Justus Liebig University Giessen, Giessen, Germany


Free groups Contents 1 Free groups 1 1.1 Definitions and notations..................... 1 1.2 Construction of a free group with basis X........... 2 1.3 The universal property of free groups...............

### MATHEMATICAL BACKGROUND

Chapter 1 MATHEMATICAL BACKGROUND This chapter discusses the mathematics that is necessary for the development of the theory of linear programming. We are particularly interested in the solutions of a

### The two smallest minimal blocking sets of

The two smallest minimal blocking sets of Q(2n, 3), n 3 J. De Beule L. Storme Abstract We describe the two smallest minimal blocking sets of Q(2n,3), n 3. To obtain these results, we use the characterization

### α = u v. In other words, Orthogonal Projection

Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

### Markov Chains, part I

Markov Chains, part I December 8, 2010 1 Introduction A Markov Chain is a sequence of random variables X 0, X 1,, where each X i S, such that P(X i+1 = s i+1 X i = s i, X i 1 = s i 1,, X 0 = s 0 ) = P(X

### f(x) is a singleton set for all x A. If f is a function and f(x) = {y}, we normally write

Math 525 Chapter 1 Stuff If A and B are sets, then A B = {(x,y) x A, y B} denotes the product set. If S A B, then S is called a relation from A to B or a relation between A and B. If B = A, S A A is called

### Section 6.1 - Inner Products and Norms

Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,

### An important class of codes are linear codes in the vector space Fq n, where F q is a finite field of order q.

Chapter 3 Linear Codes An important class of codes are linear codes in the vector space Fq n, where F q is a finite field of order q. Definition 3.1 (Linear code). A linear code C is a code in Fq n for

### Markov Chains, Stochastic Processes, and Advanced Matrix Decomposition

Markov Chains, Stochastic Processes, and Advanced Matrix Decomposition Jack Gilbert Copyright (c) 2014 Jack Gilbert. Permission is granted to copy, distribute and/or modify this document under the terms

### A matrix over a field F is a rectangular array of elements from F. The symbol

Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F) denotes the collection of all m n matrices over F Matrices will usually be denoted

### Factorization Theorems

Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization

### MATH 551 - APPLIED MATRIX THEORY

MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points

### d(f(g(p )), f(g(q))) = d(g(p ), g(q)) = d(p, Q).

10.2. (continued) As we did in Example 5, we may compose any two isometries f and g to obtain new mappings f g: E E, (f g)(p ) = f(g(p )), g f: E E, (g f)(p ) = g(f(p )). These are both isometries, the

### LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

### Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

### 2.5 Complex Eigenvalues

1 25 Complex Eigenvalues Real Canonical Form A semisimple matrix with complex conjugate eigenvalues can be diagonalized using the procedure previously described However, the eigenvectors corresponding

### Quick Reference Guide to Linear Algebra in Quantum Mechanics

Quick Reference Guide to Linear Algebra in Quantum Mechanics Scott N. Walck September 2, 2014 Contents 1 Complex Numbers 2 1.1 Introduction............................ 2 1.2 Real Numbers...........................

### Row Ideals and Fibers of Morphisms

Michigan Math. J. 57 (2008) Row Ideals and Fibers of Morphisms David Eisenbud & Bernd Ulrich Affectionately dedicated to Mel Hochster, who has been an inspiration to us for many years, on the occasion

### SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH

31 Kragujevac J. Math. 25 (2003) 31 49. SHARP BOUNDS FOR THE SUM OF THE SQUARES OF THE DEGREES OF A GRAPH Kinkar Ch. Das Department of Mathematics, Indian Institute of Technology, Kharagpur 721302, W.B.,

### Definition: A square matrix A is block diagonal if A has the form A 1 O O O A 2 O A =

The question we want to answer now is the following: If A is not similar to a diagonal matrix, then what is the simplest matrix that A is similar to? Before we can provide the answer, we will have to introduce

### Orthogonal Bases and the QR Algorithm

Orthogonal Bases and the QR Algorithm Orthogonal Bases by Peter J Olver University of Minnesota Throughout, we work in the Euclidean vector space V = R n, the space of column vectors with n real entries

### Linear Systems. Singular and Nonsingular Matrices. Find x 1, x 2, x 3 such that the following three equations hold:

Linear Systems Example: Find x, x, x such that the following three equations hold: x + x + x = 4x + x + x = x + x + x = 6 We can write this using matrix-vector notation as 4 {{ A x x x {{ x = 6 {{ b General

### Chapter 6. Orthogonality

6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

### Section 1.2. Angles and the Dot Product. The Calculus of Functions of Several Variables

The Calculus of Functions of Several Variables Section 1.2 Angles and the Dot Product Suppose x = (x 1, x 2 ) and y = (y 1, y 2 ) are two vectors in R 2, neither of which is the zero vector 0. Let α and

### QUADRATIC ALGEBRAS WITH EXT ALGEBRAS GENERATED IN TWO DEGREES. Thomas Cassidy

QUADRATIC ALGEBRAS WITH EXT ALGEBRAS GENERATED IN TWO DEGREES Thomas Cassidy Department of Mathematics Bucknell University Lewisburg, Pennsylvania 17837 Abstract Green and Marcos [3] call a graded k-algebra

### The determinant of a skew-symmetric matrix is a square. This can be seen in small cases by direct calculation: 0 a. 12 a. a 13 a 24 a 14 a 23 a 14

4 Symplectic groups In this and the next two sections, we begin the study of the groups preserving reflexive sesquilinear forms or quadratic forms. We begin with the symplectic groups, associated with

### Vector Spaces II: Finite Dimensional Linear Algebra 1

John Nachbar September 2, 2014 Vector Spaces II: Finite Dimensional Linear Algebra 1 1 Definitions and Basic Theorems. For basic properties and notation for R N, see the notes Vector Spaces I. Definition

### Angles between Subspaces of an n-inner Product Space

Angles between Subspaces of an n-inner Product Space M. Nur 1, H. Gunawan 2, O. Neswan 3 1,2,3 Analysis Geometry group, Faculty of Mathematics Natural Sciences, Bung Institute of Technology, Jl. Ganesha

### Weak topologies. David Lecomte. May 23, 2006

Weak topologies David Lecomte May 3, 006 1 Preliminaries from general topology In this section, we are given a set X, a collection of topological spaces (Y i ) i I and a collection of maps (f i ) i I such

### Lecture 10: Invertible matrices. Finding the inverse of a matrix

Lecture 10: Invertible matrices. Finding the inverse of a matrix Danny W. Crytser April 11, 2014 Today s lecture Today we will Today s lecture Today we will 1 Single out a class of especially nice matrices

### Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

### HOLOMORPHIC FRAMES FOR WEAKLY CONVERGING HOLOMORPHIC VECTOR BUNDLES

HOLOMORPHIC FRAMES FOR WEAKLY CONVERGING HOLOMORPHIC VECTOR BUNDLES GEORGIOS D. DASKALOPOULOS AND RICHARD A. WENTWORTH Perhaps the most useful analytic tool in gauge theory is the Uhlenbeck compactness

### Iterative Methods for Computing Eigenvalues and Eigenvectors

The Waterloo Mathematics Review 9 Iterative Methods for Computing Eigenvalues and Eigenvectors Maysum Panju University of Waterloo mhpanju@math.uwaterloo.ca Abstract: We examine some numerical iterative

### Section 1.1. Introduction to R n

The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to

### CHARACTERISTIC ROOTS AND VECTORS

CHARACTERISTIC ROOTS AND VECTORS 1 DEFINITION OF CHARACTERISTIC ROOTS AND VECTORS 11 Statement of the characteristic root problem Find values of a scalar λ for which there exist vectors x 0 satisfying

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### Classification of Cartan matrices

Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms

### 1 Spherical Kinematics

ME 115(a): Notes on Rotations 1 Spherical Kinematics Motions of a 3-dimensional rigid body where one point of the body remains fixed are termed spherical motions. A spherical displacement is a rigid body

### Topics in Number Theory

Chapter 8 Topics in Number Theory 8.1 The Greatest Common Divisor Preview Activity 1 (The Greatest Common Divisor) 1. Explain what it means to say that a nonzero integer m divides an integer n. Recall

### Quantum manifolds. Manuel Hohmann

Quantum manifolds Manuel Hohmann Zentrum für Mathematische Physik und II. Institut für Theoretische Physik, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany Abstract. We propose a mathematical

### 17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function

17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function, : V V R, which is symmetric, that is u, v = v, u. bilinear, that is linear (in both factors):

### THE KADISON-SINGER PROBLEM IN MATHEMATICS AND ENGINEERING: A DETAILED ACCOUNT

THE ADISON-SINGER PROBLEM IN MATHEMATICS AND ENGINEERING: A DETAILED ACCOUNT PETER G. CASAZZA, MATTHEW FICUS, JANET C. TREMAIN, ERIC WEBER Abstract. We will show that the famous, intractible 1959 adison-singer

### ON SUPERCYCLICITY CRITERIA. Nuha H. Hamada Business Administration College Al Ain University of Science and Technology 5-th st, Abu Dhabi, 112612, UAE

International Journal of Pure and Applied Mathematics Volume 101 No. 3 2015, 401-405 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu doi: http://dx.doi.org/10.12732/ijpam.v101i3.7

### We call this set an n-dimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P.

Volumes of parallelograms 1 Chapter 8 Volumes of parallelograms In the present short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to

### Problem Set. Problem Set #2. Math 5322, Fall December 3, 2001 ANSWERS

Problem Set Problem Set #2 Math 5322, Fall 2001 December 3, 2001 ANSWERS i Problem 1. [Problem 18, page 32] Let A P(X) be an algebra, A σ the collection of countable unions of sets in A, and A σδ the collection

### Surjective isometries on spaces of differentiable vector-valued functions

STUDIA MATHEMATICA 19 (1) (009) Surjective isometries on spaces of differentiable vector-valued functions by Fernanda Botelho and James Jamison (Memphis, TN) Abstract. This paper gives a characterization

### FINITE DIMENSIONAL ORDERED VECTOR SPACES WITH RIESZ INTERPOLATION AND EFFROS-SHEN S UNIMODULARITY CONJECTURE AARON TIKUISIS

FINITE DIMENSIONAL ORDERED VECTOR SPACES WITH RIESZ INTERPOLATION AND EFFROS-SHEN S UNIMODULARITY CONJECTURE AARON TIKUISIS Abstract. It is shown that, for any field F R, any ordered vector space structure

### 5. Linear algebra I: dimension

5. Linear algebra I: dimension 5.1 Some simple results 5.2 Bases and dimension 5.3 Homomorphisms and dimension 1. Some simple results Several observations should be made. Once stated explicitly, the proofs

### Algebra and Linear Algebra

Vectors Coordinate frames 2D implicit curves 2D parametric curves 3D surfaces Algebra and Linear Algebra Algebra: numbers and operations on numbers 2 + 3 = 5 3 7 = 21 Linear algebra: tuples, triples,...

### Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

### DETERMINANTS. b 2. x 2

DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in

### Quaternions and isometries

6 Quaternions and isometries 6.1 Isometries of Euclidean space Our first objective is to understand isometries of R 3. Definition 6.1.1 A map f : R 3 R 3 is an isometry if it preserves distances; that

### Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

### Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)

MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of

### 9.3 Advanced Topics in Linear Algebra

548 93 Advanced Topics in Linear Algebra Diagonalization and Jordan s Theorem A system of differential equations x = Ax can be transformed to an uncoupled system y = diag(λ,, λ n y by a change of variables

### It is not immediately obvious that this should even give an integer. Since 1 < 1 5

Math 163 - Introductory Seminar Lehigh University Spring 8 Notes on Fibonacci numbers, binomial coefficients and mathematical induction These are mostly notes from a previous class and thus include some

### CARDINALITY, COUNTABLE AND UNCOUNTABLE SETS PART ONE

CARDINALITY, COUNTABLE AND UNCOUNTABLE SETS PART ONE With the notion of bijection at hand, it is easy to formalize the idea that two finite sets have the same number of elements: we just need to verify

### Witten Deformations. Quinton Westrich University of Wisconsin-Madison. October 29, 2013

Witten Deformations Quinton Westrich University of Wisconsin-Madison October 29, 2013 Abstract These are notes on Witten s 1982 paper Supersymmetry and Morse Theory, in which Witten introduced a deformation

### by the matrix A results in a vector which is a reflection of the given

Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

### LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 12,

LINEAR ALGEBRA: NUMERICAL METHODS. Version: August 12, 2000 45 4 Iterative methods 4.1 What a two year old child can do Suppose we want to find a number x such that cos x = x (in radians). This is a nonlinear

### Recall that two vectors in are perpendicular or orthogonal provided that their dot

Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

### 1D 3D 1D 3D. is called eigenstate or state function. When an operator act on a state, it can be written as

Chapter 3 (Lecture 4-5) Postulates of Quantum Mechanics Now we turn to an application of the preceding material, and move into the foundations of quantum mechanics. Quantum mechanics is based on a series

### 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each)

Math 33 AH : Solution to the Final Exam Honors Linear Algebra and Applications 1. True/False: Circle the correct answer. No justifications are needed in this exercise. (1 point each) (1) If A is an invertible

### THE 2-ADIC, BINARY AND DECIMAL PERIODS OF 1/3 k APPROACH FULL COMPLEXITY FOR INCREASING k

#A28 INTEGERS 12 (2012) THE 2-ADIC BINARY AND DECIMAL PERIODS OF 1/ k APPROACH FULL COMPLEXITY FOR INCREASING k Josefina López Villa Aecia Sud Cinti Chuquisaca Bolivia josefinapedro@hotmailcom Peter Stoll