# ISOMETRIES OF R^n

KEITH CONRAD


## 1. Introduction

An isometry of R^n is a function h: R^n → R^n that preserves the distance between vectors:

‖h(v) − h(w)‖ = ‖v − w‖

for all v and w in R^n, where ‖(x_1, ..., x_n)‖ = √(x_1² + ⋯ + x_n²).

Example 1.1. The identity transformation: id(v) = v for all v ∈ R^n.

Example 1.2. Negation: −id(v) = −v for all v ∈ R^n.

Example 1.3. Translation: fixing u ∈ R^n, let t_u(v) = v + u. Easily t_u(v) − t_u(w) = v − w.

Example 1.4. Rotations around points and reflections across lines in the plane are isometries of R². Formulas for these isometries will be given in Example 3.3 and Section 5.

The effects of a translation, a rotation (around the origin), and a reflection across a line in R² are pictured below on sample line segments. [figure omitted]

The composition of two isometries of R^n is an isometry, and if an isometry is invertible, its inverse is also an isometry. It is clear that the three kinds of isometries pictured above (translations, rotations, reflections) are each invertible: translate by the negative vector, rotate by the opposite angle, or reflect a second time across the same line. A general isometry of R^n is invertible, but proving this will require some work.

In Section 2 we will see how to study isometries using dot products instead of distances. The dot product is more convenient to use than distance because of its algebraic properties. Section 3 introduces the matrix transformations of R^n, called orthogonal matrices, that are isometries. In Section 4 we will see that every isometry of R^n can be expressed in terms of translations and orthogonal matrix transformations; in particular, this will imply that every isometry of R^n is invertible. Section 5 discusses the isometries of R and R². In Appendix A we look more closely at reflections in R^n.

## 2. Isometries and dot products

Using translations, we can reduce the study of isometries of R^n to the case of isometries fixing 0.

Theorem 2.1. Every isometry of R^n can be written uniquely as the composition t ∘ k, where t is a translation and k is an isometry fixing the origin.

Proof. Let h: R^n → R^n be an isometry. If h = t_w ∘ k, where t_w is translation by a vector w and k is an isometry fixing 0, then for all v in R^n we have h(v) = t_w(k(v)) = k(v) + w. Setting v = 0 we get w = h(0), so w is determined by h. Then k(v) = h(v) − w = h(v) − h(0), so k is determined by h. Turning this around, if we define t(v) = v + h(0) and k(v) = h(v) − h(0), then t is a translation, k is an isometry fixing 0, and h(v) = k(v) + h(0) = (t_w ∘ k)(v), where w = h(0).

Theorem 2.2. For a function h: R^n → R^n, the following are equivalent:

(1) h is an isometry and h(0) = 0;
(2) h preserves dot products: h(v)·h(w) = v·w for all v, w ∈ R^n.

Proof. The link between length and dot product is the formula ‖v‖² = v·v.

Suppose h satisfies (1). Then for all vectors v and w in R^n,

(2.1) ‖h(v) − h(w)‖ = ‖v − w‖.

As a special case, when w = 0 in (2.1) we get ‖h(v)‖ = ‖v‖ for all v ∈ R^n. Squaring both sides of (2.1) and writing the result in terms of dot products makes it

(h(v) − h(w))·(h(v) − h(w)) = (v − w)·(v − w).

Carrying out the multiplication,

(2.2) h(v)·h(v) − 2 h(v)·h(w) + h(w)·h(w) = v·v − 2 v·w + w·w.

The first term on the left side of (2.2) equals ‖h(v)‖² = ‖v‖² = v·v and the last term on the left side of (2.2) equals ‖h(w)‖² = ‖w‖² = w·w. Canceling equal terms on both sides of (2.2), we obtain −2 h(v)·h(w) = −2 v·w, so h(v)·h(w) = v·w.

Now assume h satisfies (2), so

(2.3) h(v)·h(w) = v·w for all v and w in R^n.

Therefore

‖h(v) − h(w)‖² = (h(v) − h(w))·(h(v) − h(w))
             = h(v)·h(v) − 2 h(v)·h(w) + h(w)·h(w)
             = v·v − 2 v·w + w·w   (by (2.3))
             = (v − w)·(v − w)
             = ‖v − w‖²,

so ‖h(v) − h(w)‖ = ‖v − w‖. Thus h is an isometry. Setting v = w = 0 in (2.3), we get ‖h(0)‖² = 0, so h(0) = 0.

Corollary 2.3.
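Theorem 2.2 lends itself to a quick numerical sanity check. The sketch below (plain Python; the function names are ours, chosen for illustration) verifies that a rotation of R² about the origin, an isometry fixing 0, preserves dot products, while a nonzero translation, which moves 0, does not.

```python
import math

def dot(v, w):
    return sum(x * y for x, y in zip(v, w))

def rotate(theta, v):
    # rotation about the origin in R^2: an isometry fixing 0
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def translate(u, v):
    # translation by u: an isometry, but it moves 0 when u != 0
    return (v[0] + u[0], v[1] + u[1])

v, w = (1.0, 2.0), (3.0, -1.0)
theta = 0.7

# rotation preserves dot products (condition (2) of Theorem 2.2)
assert abs(dot(rotate(theta, v), rotate(theta, w)) - dot(v, w)) < 1e-12

# a nonzero translation preserves distances but not dot products
u = (2.0, 0.0)
assert dot(translate(u, v), translate(u, w)) != dot(v, w)
```

The same check works in any dimension; only `rotate` is specific to R².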
The only isometry of R^n fixing 0 and the standard basis is the identity.

Proof. Let h: R^n → R^n be an isometry that satisfies

h(0) = 0, h(e_1) = e_1, ..., h(e_n) = e_n.

Theorem 2.2 says h(v)·h(w) = v·w for all v and w in R^n. Fix v ∈ R^n and let w run over the standard basis vectors e_1, e_2, ..., e_n, so we see h(v)·h(e_i) = v·e_i. Since h fixes each e_i, h(v)·e_i = v·e_i. Writing v = c_1 e_1 + ⋯ + c_n e_n, we get h(v)·e_i = c_i for all i, so h(v) = c_1 e_1 + ⋯ + c_n e_n = v. As v was arbitrary, h is the identity on R^n.

It is essential in Corollary 2.3 that the isometry fixes 0. An isometry of R^n fixing the standard basis without fixing 0 need not be the identity! For example, reflection across the line x + y = 1 in R² is an isometry of R² fixing (1, 0) and (0, 1) but not 0 = (0, 0).

If we knew all isometries of R^n were invertible, then Corollary 2.3 would imply that two isometries f and g taking the same values at 0 and at the standard basis are equal: apply Corollary 2.3 to the isometry f⁻¹ ∘ g to see this composite is the identity, so f = g. However, we do not yet know that all isometries are invertible; that is one of our main tasks.

## 3. Orthogonal matrices

A large supply of isometries of R^n that fix 0 comes from a special type of matrix.

Definition 3.1. An n × n real matrix A is called orthogonal if AAᵀ = I_n, or equivalently if AᵀA = I_n.

A matrix is orthogonal when its transpose is its inverse. Since det(Aᵀ) = det A, any orthogonal matrix A satisfies (det A)² = 1, so det A = ±1.

Example 3.2. The orthogonal 1 × 1 matrices are ±1.

Example 3.3. For n = 2, algebra shows AAᵀ = I_2 if and only if

A = ( a  −εb )
    ( b   εa ),

where a² + b² = 1 and ε = ±1. Writing a = cos θ and b = sin θ, we get the matrices

( cos θ  −sin θ )        ( cos θ   sin θ )
( sin θ   cos θ )  and   ( sin θ  −cos θ ).

Algebraically, these types of matrices are distinguished by their determinants: the first type has determinant 1 and the second type has determinant −1.
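The two families in Example 3.3 can be checked numerically: a minimal sketch (helper names are ours) verifies that both are orthogonal and that their determinants are +1 and −1 respectively.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def reflection(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [s, -c]]

def is_identity(A, tol=1e-12):
    return all(abs(A[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(2) for j in range(2))

theta = 1.3
R, S = rotation(theta), reflection(theta)
assert is_identity(matmul(R, transpose(R)))   # R R^T = I: R is orthogonal
assert is_identity(matmul(S, transpose(S)))   # S S^T = I: S is orthogonal
assert abs(det(R) - 1.0) < 1e-12              # rotations have det 1
assert abs(det(S) + 1.0) < 1e-12              # reflections have det -1
```

As the text notes, a reflection should square to the identity, and `matmul(S, S)` confirms this as well.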

Geometrically, the effect of these matrices is as follows: ( cos θ  −sin θ; sin θ  cos θ ) is a counterclockwise rotation by angle θ around the origin, and ( cos θ  sin θ; sin θ  −cos θ ) is a reflection across the line through the origin at angle θ/2 with respect to the positive x-axis. (Check that ( cos θ  sin θ; sin θ  −cos θ ) squares to the identity, as any reflection should.)

Let's explain why ( cos θ  sin θ; sin θ  −cos θ ) is a reflection at angle θ/2. Pick a line L through the origin, say at an angle φ with respect to the positive x-axis. To find a formula for reflection across L, we'll use a basis of R² with one vector on L and the other vector perpendicular to L. The unit vector u_1 = (cos φ, sin φ) lies on L and the unit vector u_2 = (−sin φ, cos φ) is perpendicular to L. For any v ∈ R², write v = c_1 u_1 + c_2 u_2 with c_1, c_2 ∈ R.

The reflection of v across L is s(v) = c_1 u_1 − c_2 u_2. Writing a = cos φ and b = sin φ (so a² + b² = 1), in standard coordinates

(3.1) v = c_1 u_1 + c_2 u_2 = c_1 ( a ) + c_2 ( −b ) = ( a  −b ) ( c_1 )
                                  ( b )       (  a )   ( b   a ) ( c_2 ),

so ( c_1; c_2 ) = ( a  −b; b  a )⁻¹ v, and

s(v) = c_1 u_1 − c_2 u_2
     = ( a   b ) ( c_1 )
       ( b  −a ) ( c_2 )
     = ( a   b ) ( a  −b )⁻¹ v        (by (3.1))
       ( b  −a ) ( b   a )
     = ( a   b ) (  a   b ) v
       ( b  −a ) ( −b   a )
     = ( a² − b²   2ab     ) v.
       ( 2ab       b² − a² )
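The matrix just computed can be checked numerically: in the sketch below (helper names are ours), it fixes the unit vector on L and negates the unit vector perpendicular to L, which is exactly what reflection across L does.

```python
import math

def reflect_matrix(phi):
    # the matrix (a^2 - b^2, 2ab; 2ab, b^2 - a^2) with a = cos(phi), b = sin(phi)
    a, b = math.cos(phi), math.sin(phi)
    return [[a * a - b * b, 2 * a * b], [2 * a * b, b * b - a * a]]

def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

phi = 0.6
M = reflect_matrix(phi)
u1 = (math.cos(phi), math.sin(phi))    # unit vector on the line L
u2 = (-math.sin(phi), math.cos(phi))   # unit vector perpendicular to L

# s fixes L and negates the perpendicular direction
assert all(abs(x - y) < 1e-12 for x, y in zip(apply(M, u1), u1))
assert all(abs(x + y) < 1e-12 for x, y in zip(apply(M, u2), u2))
```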

By the sine and cosine duplication formulas (a² − b² = cos 2φ and 2ab = sin 2φ), the last matrix is ( cos 2φ  sin 2φ; sin 2φ  −cos 2φ ). Therefore ( cos θ  sin θ; sin θ  −cos θ ) is a reflection across the line through the origin at angle θ/2.

We return to orthogonal n × n matrices for any n ≥ 1. The geometric meaning of the condition AᵀA = I_n is that the columns of A are mutually perpendicular unit vectors (check!). From this we see how to create orthogonal matrices: starting with an orthonormal basis of R^n, an n × n matrix having this basis as its columns (in any order) is an orthogonal matrix, and all n × n orthogonal matrices arise in this way. Let O_n(R) denote the set of n × n orthogonal matrices:

(3.2) O_n(R) = {A ∈ GL_n(R) : AAᵀ = I_n}.

Theorem 3.4. The set O_n(R) is a group under matrix multiplication.

Proof. Clearly I_n ∈ O_n(R). For A ∈ O_n(R), the transpose of A⁻¹ is its inverse, since (A⁻¹)ᵀ = (Aᵀ)ᵀ = A. Therefore A⁻¹ ∈ O_n(R). If A_1 and A_2 are in O_n(R), then

(A_1 A_2)(A_1 A_2)ᵀ = A_1 A_2 A_2ᵀ A_1ᵀ = A_1 A_1ᵀ = I_n,

so A_1 A_2 ∈ O_n(R).

Theorem 3.5. If A ∈ O_n(R), then the transformation h_A: R^n → R^n given by h_A(v) = Av is an isometry of R^n that fixes 0.

Proof. Trivially the function h_A fixes 0. To show h_A is an isometry, by Theorem 2.2 it suffices to show

(3.3) Av·Aw = v·w for all v, w ∈ R^n.

The fundamental link between the dot product and transposes is

(3.4) v·Aw = Aᵀv·w

for any n × n matrix A and v, w ∈ R^n. Replacing v with Av in (3.4),

Av·Aw = Aᵀ(Av)·w = (AᵀA)v·w.

This is equal to v·w for all v and w precisely when AᵀA = I_n.

Example 3.6. Negation on R^n comes from the matrix −I_n, which is orthogonal: −id = h_{−I_n}.

The proof of Theorem 3.5 gives us a more geometric description of O_n(R) than (3.2):

(3.5) O_n(R) = {A ∈ GL_n(R) : Av·Aw = v·w for all v, w ∈ R^n}.

Remark 3.7. Equation (3.5) is the definition of the orthogonal transformations of a subspace W ⊂ R^n: they are the linear transformations W → W that preserve dot products between all pairs of vectors in W.
The label "orthogonal matrix" suggests it should just mean a matrix that preserves orthogonality of vectors:

(3.6) v·w = 0 ⟹ Av·Aw = 0

for all v and w in R^n. While orthogonal matrices do satisfy (3.6), since (3.6) is a special case of the condition Av·Aw = v·w in (3.5), equation (3.6) is not a characterization of orthogonal matrices. That is, orthogonal matrices (which preserve all dot products) are not the only matrices that preserve orthogonality of vectors (dot products equal to 0). A simple example of a nonorthogonal matrix satisfying (3.6) is a scalar matrix cI_n, where c ≠ ±1. Since (cv)·(cw) = c²(v·w), cI_n does not preserve dot products in general, but it does preserve dot products equal to 0.

It's natural to ask what matrices besides orthogonal matrices preserve orthogonality. Here is the answer.

Theorem 3.8. An n × n real matrix A satisfies (3.6) if and only if A is a scalar multiple of an orthogonal matrix.

Proof. If A = cA′ where A′ is orthogonal, then Av·Aw = c²(A′v·A′w) = c²(v·w), so if v·w = 0 then Av·Aw = 0.

Now assume A satisfies (3.6). Then the vectors Ae_1, ..., Ae_n are mutually perpendicular, so the columns of A are perpendicular to each other. We want to show that they have the same length. Note that e_i + e_j ⊥ e_i − e_j when i ≠ j, so by (3.6) and linearity Ae_i + Ae_j ⊥ Ae_i − Ae_j. Writing this in the form (Ae_i + Ae_j)·(Ae_i − Ae_j) = 0 and expanding, we are left with Ae_i·Ae_i = Ae_j·Ae_j, so ‖Ae_i‖ = ‖Ae_j‖. Therefore the columns of A are mutually perpendicular vectors with the same length. Call this common length c. If c = 0 then A = O = 0·I_n. If c ≠ 0 then the matrix (1/c)A has an orthonormal basis as its columns, so it is an orthogonal matrix. Therefore A = c·((1/c)A) is a scalar multiple of an orthogonal matrix.

## 4. Isometries of R^n form a group

We now establish the converse to Theorem 3.5, and in particular establish that isometries fixing 0 are invertible linear maps.

Theorem 4.1. Any isometry h: R^n → R^n fixing 0 has the form h(v) = Av for some A ∈ O_n(R). In particular, h is linear and invertible.

Proof. By Theorem 2.2, h(v)·h(w) = v·w for all v, w ∈ R^n.
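Theorem 3.8 can be illustrated numerically. The sketch below (function names are ours) applies a scalar multiple cR of a rotation matrix R: orthogonality of vectors is preserved, while dot products in general are scaled by c².

```python
import math

def dot(v, w):
    return sum(x * y for x, y in zip(v, w))

def scale_rotate(c, theta, v):
    # apply cR to v, where R is the rotation matrix by angle theta
    a, b = math.cos(theta), math.sin(theta)
    return (c * (a * v[0] - b * v[1]), c * (b * v[0] + a * v[1]))

c, theta = 3.0, 0.9
v, w = (1.0, 2.0), (-2.0, 1.0)          # perpendicular: v . w = 0
assert abs(dot(v, w)) < 1e-12

# orthogonality is preserved ...
assert abs(dot(scale_rotate(c, theta, v), scale_rotate(c, theta, w))) < 1e-9

# ... but dot products in general are scaled by c^2, so cR is not orthogonal
assert abs(dot(scale_rotate(c, theta, v), scale_rotate(c, theta, v))
           - c * c * dot(v, v)) < 1e-9
```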
What does this say about the effect of h on the standard basis? Taking v = w = e_i, h(e_i)·h(e_i) = e_i·e_i, so ‖h(e_i)‖ = 1. Therefore h(e_i) is a unit vector. Taking v = e_i and w = e_j with i ≠ j, we get h(e_i)·h(e_j) = e_i·e_j = 0. Therefore the vectors h(e_1), ..., h(e_n) are mutually perpendicular unit vectors (an orthonormal basis of R^n). Let A be the n × n matrix with i-th column equal to h(e_i). Since the columns are mutually perpendicular unit vectors, AᵀA equals I_n, so A is an orthogonal matrix and thus acts as an isometry of R^n by Theorem 3.5. By the definition of A, Ae_i = h(e_i) for all i. Therefore A and h are isometries with the same values at the standard basis. Moreover, we know A is invertible since it is an orthogonal matrix.

Consider now the isometry A⁻¹ ∘ h. It fixes 0 as well as the standard basis. By Corollary 2.3, A⁻¹ ∘ h is the identity, so h(v) = Av for all v ∈ R^n: h is given by an orthogonal matrix.

Theorem 4.2. For A ∈ O_n(R) and w ∈ R^n, the function h_{A,w}: R^n → R^n given by h_{A,w}(v) = Av + w = (t_w ∘ A)(v) is an isometry. Moreover, every isometry of R^n has this form for unique w and A.

Proof. The indicated formula always gives an isometry, since it is the composition of a translation and an orthogonal transformation, which are both isometries. To show any isometry of R^n has the form h_{A,w} for some A and w, let h: R^n → R^n be an isometry. By Theorem 2.1, h(v) = k(v) + h(0) where k is an isometry of R^n fixing 0. Theorem 4.1 tells us there is an A ∈ O_n(R) such that k(v) = Av for all v ∈ R^n, so h(v) = Av + h(0) = h_{A,w}(v) where w = h(0). If h_{A,w} = h_{A′,w′} as functions on R^n, then evaluating both sides at 0 gives w = w′. Therefore Av + w = A′v + w for all v, so Av = A′v for all v, which implies A = A′.

Theorem 4.3. The set Iso(R^n) of isometries of R^n is a group under composition.

Proof. The only property that has to be checked is invertibility. By Theorem 4.2, we can write any isometry h as h(v) = Av + w where A ∈ O_n(R). Its inverse is g(v) = A⁻¹v − A⁻¹w.

Let's look at composition in Iso(R^n) when we write isometries as h_{A,w} from Theorem 4.2. We have

h_{A,w}(h_{A′,w′}(v)) = A(A′v + w′) + w = AA′v + Aw′ + w = h_{AA′, Aw′+w}(v).

This is similar to the multiplication law in the ax + b group:

( a  b ) ( a′  b′ )   ( aa′  ab′ + b )
( 0  1 ) ( 0   1  ) = ( 0    1       ).

In fact, if we write an isometry h_{A,w} ∈ Iso(R^n) as the (n+1) × (n+1) matrix

( A  w )
( 0  1 ),

where the 0 in the bottom row is a row vector of n zeros, then the composition law in Iso(R^n) is multiplication of the corresponding (n+1) × (n+1) matrices, so Iso(R^n) can be viewed as a subgroup of GL_{n+1}(R), acting on R^n as the column vectors (v; 1) in R^{n+1} (not a subspace!).

Corollary 4.4. Two isometries of R^n that are equal at 0 and at a basis of R^n are the same.
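The composition law h_{A,w} ∘ h_{A′,w′} = h_{AA′, Aw′+w} is easy to test numerically in R². The sketch below (helper names are ours) checks that composing the parameter pairs agrees with applying the two affine maps in sequence.

```python
def compose(A1, w1, A2, w2):
    # h_{A1,w1} o h_{A2,w2} = h_{A1 A2, A1 w2 + w1} for 2x2 A and 2-vectors w
    A = [[sum(A1[i][k] * A2[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    w = tuple(sum(A1[i][k] * w2[k] for k in range(2)) + w1[i] for i in range(2))
    return A, w

def apply_iso(A, w, v):
    # h_{A,w}(v) = Av + w
    return tuple(sum(A[i][k] * v[k] for k in range(2)) + w[i] for i in range(2))

A1, w1 = [[0.0, -1.0], [1.0, 0.0]], (1.0, 2.0)   # rotate 90 degrees, then translate
A2, w2 = [[1.0, 0.0], [0.0, -1.0]], (3.0, 0.0)   # reflect across the x-axis, then translate

A, w = compose(A1, w1, A2, w2)
v = (5.0, -4.0)
# composing the parameters agrees with applying the maps in sequence
assert apply_iso(A, w, v) == apply_iso(A1, w1, apply_iso(A2, w2, v))
```

The same `compose` rule is what multiplying the (n+1) × (n+1) block matrices ( A w; 0 1 ) computes.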
This strengthens Corollary 2.3 since we allow any basis, not just the standard basis.

Proof. Let h and h′ be isometries of R^n such that h = h′ at 0 and at a basis of R^n, say v_1, ..., v_n. Then h′⁻¹ ∘ h is an isometry of R^n fixing 0 and each v_i. By Theorem 4.1, h′⁻¹ ∘ h is in O_n(R), so the fact that it fixes each v_i implies it fixes every linear combination of the v_i's, which exhausts R^n. Thus h′⁻¹ ∘ h is the identity on R^n, so h = h′.

Corollary 4.5. Let P_0, P_1, ..., P_n be n + 1 points of R^n in general position, i.e., they don't all lie in a common hyperplane of R^n. Two isometries of R^n that are equal at P_0, ..., P_n are the same.

In the definition of general position, the hyperplanes include those not containing the origin. For example, three points in R² are in general position when no line in R² contains all three points.

Proof. As in the previous proof, it suffices to show an isometry of R^n that fixes P_0, ..., P_n is the identity. Let h be such an isometry, so h(P_i) = P_i for 0 ≤ i ≤ n. Set t(v) = v − P_0, which is a translation. Then t ∘ h ∘ t⁻¹ is an isometry with formula (tht⁻¹)(v) = h(v + P_0) − P_0. Thus (tht⁻¹)(0) = h(P_0) − P_0 = 0, so tht⁻¹ ∈ O_n(R) by Theorem 4.1. Also (tht⁻¹)(P_i − P_0) = h(P_i) − P_0 = P_i − P_0. Upon subtracting P_0 from P_0, P_1, ..., P_n, the points 0, P_1 − P_0, ..., P_n − P_0 are in general position. That means no hyperplane contains them all, so there is no nontrivial linear relation among P_1 − P_0, ..., P_n − P_0 (a nontrivial linear relation would place these n points, along with 0, in a common hyperplane), and thus P_1 − P_0, ..., P_n − P_0 is a basis of R^n. By Corollary 4.4, tht⁻¹ is the identity, so h is the identity.

## 5. Isometries of R and R²

Let's classify the isometries of R^n for n = 1 and n = 2. Since O_1(R) = {±1}, the isometries of R are the functions h(x) = x + c and h(x) = −x + c for c ∈ R. (Of course, this case can be worked out easily from scratch without all the earlier preliminary material.)

Now consider isometries of R². Write an isometry h ∈ Iso(R²) in the form h(v) = Av + w with A ∈ O_2(R). By Example 3.3, A is a rotation or reflection matrix, depending on its determinant. There turn out to be four possibilities for h: a translation, a rotation, a reflection, or a glide reflection. A glide reflection is the composition of a reflection and a nonzero translation in a direction parallel to the line of reflection.
A picture of a glide reflection is in the figure below, where the (horizontal) line of reflection is dashed and the translation is to the right. The image, which includes both before and after states, suggests a physical interpretation of a glide reflection: it is the result of turning the plane in space like a half-turn of a screw. [figure omitted]

The possibilities for isometries of R² are collected in Table 1 below. They say how the type of an isometry h is determined by det A and the geometry of the set of fixed points

of h (solutions to h(v) = v), which is empty, a point, a line, or the plane. The table also shows that a description of the fixed points can be obtained algebraically from A and w.

| Isometry            | Condition             | Fixed points        |
|---------------------|-----------------------|---------------------|
| Identity            | A = I_2, w = 0        | R²                  |
| Nonzero translation | A = I_2, w ≠ 0        | —                   |
| Nonzero rotation    | det A = 1, A ≠ I_2    | (I_2 − A)⁻¹w        |
| Reflection          | det A = −1, Aw = −w   | w/2 + ker(A − I_2)  |
| Glide reflection    | det A = −1, Aw ≠ −w   | —                   |

Table 1. Isometries of R²: h(v) = Av + w, A ∈ O_2(R).

To justify the information in the table we move down the middle column. The first two rows are obvious, so we start with the third row.

Row 3: Suppose det A = 1 and A ≠ I_2, so A = ( cos θ  −sin θ; sin θ  cos θ ) for some θ with cos θ ≠ 1. We want to show h is a rotation. First of all, h has a unique fixed point: v = Av + w precisely when w = (I_2 − A)v. We have det(I_2 − A) = 2(1 − cos θ) ≠ 0, so I_2 − A is invertible and p = (I_2 − A)⁻¹w is the fixed point of h. Then w = (I_2 − A)p = p − Ap, so

(5.1) h(v) = Av + (p − Ap) = A(v − p) + p.

Since A is a rotation by θ around the origin, (5.1) shows h is a rotation by θ around p.

Rows 4, 5: Suppose det A = −1, so A = ( cos θ  sin θ; sin θ  −cos θ ) for some θ and A² = I_2. We again look at fixed points of h. As before, h(v) = v for some v if and only if w = (I_2 − A)v. But unlike the previous case, now det(I_2 − A) = 0 (check!), so I_2 − A is not invertible and therefore w may or may not be in the image of I_2 − A. When w is in the image of I_2 − A, we will see that h is a reflection. When w is not in the image of I_2 − A, we will see that h is a glide reflection.

Suppose the isometry h(v) = Av + w with det A = −1 has a fixed point. Then w/2 must be a fixed point. Indeed, let p be any fixed point, so p = Ap + w. Since A² = I_2,

Aw = A(p − Ap) = Ap − p = −w,

so

h(w/2) = A(w/2) + w = −w/2 + w = w/2.

Conversely, if h(w/2) = w/2 then A(w/2) + w = w/2, so Aw = −w. Thus h has a fixed point if and only if Aw = −w, in which case

(5.2) h(v) = Av + w = A(v − w/2) + w/2.
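The third row of Table 1 can be verified numerically: the sketch below (helper names are ours) solves (I_2 − A)p = w by Cramer's rule and checks that p is fixed by h(v) = Av + w and that det(I_2 − A) = 2(1 − cos θ).

```python
import math

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def fixed_point(A, w):
    # solve (I - A) p = w by Cramer's rule (2x2 case)
    M = [[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return ((w[0] * M[1][1] - M[0][1] * w[1]) / d,
            (M[0][0] * w[1] - M[1][0] * w[0]) / d)

theta, w = 2.0, (1.0, -3.0)
A = rotation(theta)
p = fixed_point(A, w)

# h(p) = A p + w should equal p
hp = (A[0][0] * p[0] + A[0][1] * p[1] + w[0],
      A[1][0] * p[0] + A[1][1] * p[1] + w[1])
assert abs(hp[0] - p[0]) < 1e-9 and abs(hp[1] - p[1]) < 1e-9

# det(I - A) = 2(1 - cos theta), nonzero when cos theta != 1
d = (1 - A[0][0]) * (1 - A[1][1]) - A[0][1] * A[1][0]
assert abs(d - 2 * (1 - math.cos(theta))) < 1e-12
```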
Since A is a reflection across some line L through 0, (5.2) says h is a reflection across the parallel line w/2 + L passing through w/2. (Algebraically, L = {v : Av = v} = ker(A − I_2). Since A − I_2 is not invertible and not identically 0, its kernel really is 1-dimensional.)

Now assume h has no fixed point, so Aw ≠ −w. We will show h is a glide reflection. (The formula h(v) = Av + w shows h is the composition of a reflection and a nonzero translation, but w need not be parallel to the line of reflection of A, which is ker(A − I_2), so this formula for h does not show directly that h is a glide reflection.)

We will now take stronger advantage of the fact that A² = I_2. Since O = A² − I_2 = (A − I_2)(A + I_2) and A ≠ ±I_2 (after all, det A = −1), A + I_2 and A − I_2 are not invertible. Therefore the subspaces

W_1 = ker(A − I_2),  W_2 = ker(A + I_2)

are both nonzero, and neither is the whole plane, so W_1 and W_2 are both one-dimensional. We already noted that W_1 is the line of reflection of A (fixed points of A form the kernel of A − I_2). It turns out that W_2 is the line perpendicular to W_1. To see why, pick w_1 ∈ W_1 and w_2 ∈ W_2, so Aw_1 = w_1 and Aw_2 = −w_2. Then, since Aw_1·Aw_2 = w_1·w_2 by orthogonality of A, we have w_1·(−w_2) = w_1·w_2. Thus w_1·w_2 = 0, so w_1 ⊥ w_2.

Now we are ready to show h is a glide reflection. Pick nonzero vectors w_i ∈ W_i for i = 1, 2, and use {w_1, w_2} as a basis of R². Write w = h(0) in terms of this basis: w = c_1 w_1 + c_2 w_2. To say there are no fixed points for h is the same as Aw ≠ −w, i.e., w ∉ W_2. That is, c_1 ≠ 0. Then

(5.3) h(v) = Av + w = (Av + c_2 w_2) + c_1 w_1.

Since A(c_2 w_2) = −c_2 w_2, our previous discussion shows v ↦ Av + c_2 w_2 is a reflection across the line c_2 w_2 / 2 + W_1. Since c_1 w_1 is a nonzero vector in W_1, (5.3) exhibits h as the composition of a reflection across the line c_2 w_2 / 2 + W_1 and a nonzero translation by c_1 w_1, whose direction is parallel to the line of reflection, so h is a glide reflection.

We have now justified the information in Table 1. Each row describes a different kind of isometry.
Using fixed points it is easy to distinguish the first four rows from each other and to distinguish glide reflections from any isometry besides translations. A glide reflection can't be a translation since any isometry of R² is uniquely of the form h_{A,w}, and translations have A = I_2 while glide reflections have det A = −1.

Lemma 5.1. A composition of two reflections of R² is a translation or a rotation.

Proof. The product of two matrices with determinant −1 has determinant 1, so the composition of two reflections has the form v ↦ Av + w where det A = 1. Such isometries

are translations or rotations by Table 1 (consider the identity to be a trivial translation or rotation). In Example A.2 we will express any translation as the composition of two reflections.

Theorem 5.2. Each isometry of R² is a composition of at most 2 reflections, except for glide reflections, which are a composition of 3 (and no fewer) reflections.

Proof. We check the theorem for each type of isometry in Table 1 besides reflections, for which the theorem is obvious. The identity is the square of any reflection.

For a translation t(v) = v + w, let A be the matrix representing the reflection across the line w^⊥ through the origin, so Aw = −w. Set s_1(v) = Av + w and s_2(v) = Av. Both s_1 and s_2 are reflections, and (s_1 ∘ s_2)(v) = A(Av) + w = v + w since A² = I_2.

Now consider a rotation, say h(v) = A(v − p) + p for some A ∈ O_2(R) with det A = 1 and p ∈ R². We have h = t ∘ r ∘ t⁻¹, where t is translation by p and r(v) = Av is a rotation around the origin. Let A′ be any reflection matrix (e.g., A′ = ( 1  0; 0  −1 )). Set s_1(v) = AA′v and s_2(v) = A′v. Both s_1 and s_2 are reflections, and r = s_1 ∘ s_2 (check!). Therefore

(5.4) h = t ∘ r ∘ t⁻¹ = (t ∘ s_1 ∘ t⁻¹) ∘ (t ∘ s_2 ∘ t⁻¹).

The conjugate of a reflection by a translation (or by any isometry, for that matter) is another reflection, as an explicit calculation using Table 1 shows. Thus, (5.4) expresses the rotation h as a composition of 2 reflections.

Finally we consider glide reflections. Since a glide reflection is the composition of a translation and a reflection, it is a composition of 3 reflections. We can't use fewer reflections to get a glide reflection, since a composition of two reflections is either a translation or a rotation by Lemma 5.1, and we know that a glide reflection is not a translation or rotation (or reflection).

In Table 2 we record the minimal number of reflections whose composition can equal a particular type of isometry of R².
| Isometry            | Min. num. reflections | dim(fixed set) |
|---------------------|-----------------------|----------------|
| Identity            | 0                     | 2              |
| Nonzero translation | 2                     | 0              |
| Nonzero rotation    | 2                     | 0              |
| Reflection          | 1                     | 1              |
| Glide reflection    | 3                     | 0              |

Table 2. Counting Reflections in an Isometry.

That each isometry of R² is a composition of at most 3 reflections can be proved geometrically, without recourse to a prior classification of all isometries of the plane. We will give a rough sketch of the argument. We will take for granted (!) that an isometry that fixes at least two points is a reflection across the line through those points or is the identity. (This is related to Corollary 2.3 when n = 2.)

Pick any isometry h of R². We may suppose h is not a reflection or the identity (the identity is the square of any reflection), so h has at most one fixed point. If h has one fixed point, say P, choose Q ≠ P. Then h(Q) ≠ Q, and the points Q and h(Q) lie on a common circle centered at P (because h(P) = P). Let s be the reflection across the line through P that is perpendicular to the line connecting Q and

12 12 KEITH CONRAD h(q). Then s h fixes P and Q, so s h is the identity or is a reflection. Thus h = s (s h) is a reflection or a composition of two reflections. If h has no fixed points, pick any point P. Let s be the reflection across the perpendicular bisector of the line connecting P and h(p ), so s h fixes P. Thus s h has a fixed point, so our previous argument shows s h is either the identity, a reflection, or the composition of two reflections, so h is the composition of at most 3 reflections. A byproduct of this argument, which did not use the classification of isometries, is another proof that all isometries of R 2 are invertible: any isometry is a composition of reflections and reflections are invertible. From the fact that all isometries fixing 0 in R and R 2 are rotations or reflections, the following general description can be proved about isometries of any Euclidean space in terms of rotations and reflections on one-dimensional and two-dimensional subspaces. Theorem 5.3. If h is an isometry of R n that fixes 0 then there is an orthogonal decomposition R n = W 1 W 2 W m such that dim(w i ) = 1 or 2 for all i, and the restriction of h to W i is a rotation unless i = m, dim(w m ) = 1, and det h = 1, in which case the restriction of h to W m is a reflection. Proof. See [1, Theorem 6.47] or [2, Cor. to Theorem 2]. Appendix A. Reflections A reflection is an isometry of R n that fixes all the points in a chosen hyperplane and interchanges the position of points along each line perpendicular to that hyperplane at equal distance from it. These isometries play a role that is analogous to transpositions in the symmetric group. Reflections, like transpositions, have order 2. Let s look first at reflections across hyperplanes that contain the origin. Let H be a hyperplane containing the origin through which we wish to reflect. Set L = H, so L is a one-dimensional subspace. Every v R n can be written uniquely in the form v = w + u, where w H and u L. 
The reflection across H, by definition, is the function

(A.1) s(v) = s(w + u) = w − u.

That is, s fixes H and acts like −1 on L. From the formula defining s, it is linear in v. Since w ⊥ u, ‖s(v)‖ = ‖w − u‖ = ‖w + u‖ = ‖v‖, so by linearity s is an isometry: ‖s(v) − s(v′)‖ = ‖s(v − v′)‖ = ‖v − v′‖.

Since s is linear, it can be represented by a matrix. To write this matrix simply, pick an orthogonal basis {v_1, ..., v_{n−1}} of H and let v_n be a nonzero vector in L = H^⊥, so v_n is orthogonal to H. Then

s(c_1 v_1 + ⋯ + c_n v_n) = c_1 v_1 + ⋯ + c_{n−1} v_{n−1} − c_n v_n.

The matrix for s in this basis has 1's along the diagonal except for a −1 in the last position:

(A.2) diag(1, ..., 1, −1) (c_1, ..., c_{n−1}, c_n)ᵀ = (c_1, ..., c_{n−1}, −c_n)ᵀ.

The matrix in (A.2) represents s relative to a convenient choice of basis. In particular, from the matrix representation we see det s = −1: every reflection in O_n(R) has determinant −1. Notice the analogy with transpositions in the symmetric group, which have sign −1.

We now derive another formula for s, which will look more complicated than what we have seen so far but should be considered more fundamental. Fix a nonzero vector u on the line L = H^⊥. Since R^n = H ⊕ L, any v ∈ R^n can be written as w + cu, where w ∈ H and c ∈ R. Since w ⊥ u, v·u = c(u·u), so c = (v·u)/(u·u). Then

(A.3) s(v) = w − cu = v − 2cu = v − 2 ((v·u)/(u·u)) u.

The last expression is our desired formula for s(v). Note for all v that s(v)·u = −(v·u).

It is standard to label the reflection across a hyperplane containing the origin using a vector in the orthogonal complement of the hyperplane, so we write s in (A.3) as s_u. This is the reflection across the hyperplane u^⊥, so s_u(u) = −u. By (A.3), s_{au} = s_u for any a ∈ R − {0}, which makes geometric sense since (au)^⊥ = u^⊥, so the reflection across the hyperplane orthogonal to u and the one orthogonal to au are the same. Moreover, H is the set of points fixed by s_u, and we can confirm this with (A.3): s_u(v) = v if and only if v·u = 0, which means v ∈ u^⊥ = H.

To get a formula for the reflection across any hyperplane in R^n (not just those containing the origin), we use the following lemma to describe any hyperplane.

Lemma A.1. Every hyperplane in R^n has the form H_{u,c} = {v ∈ R^n : v·u = c} for some nonzero u ∈ R^n that is orthogonal to the hyperplane and some c ∈ R. The hyperplane contains 0 if and only if c = 0.

Proof. Let H be a hyperplane and choose w ∈ H. Then H − w is a hyperplane containing the origin. Fix a nonzero vector u that is perpendicular to H. Since H − w is a hyperplane through the origin parallel to H, a vector v lies in H if and only if v − w ⊥ u, which is equivalent to v·u = w·u. Thus H = H_{u,c} for c = w·u.

[Figure: the parallel lines H_{(2,1),c} = {v : v·(2,1) = c} in R² for c = 0, 2, 4, with two vectors w_1, w_2 on the line c = 4.]

As the figure suggests, the different hyperplanes H_{u,c} as c varies are parallel to each other. Specifically, if w ∈ H_{u,c} then H_{u,c} = H_{u,0} + w (check!).
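Formula (A.3) is short enough to test directly. The sketch below (function names are ours) checks that s_u has order 2, negates u, fixes vectors orthogonal to u, and is unchanged when u is rescaled.

```python
def dot(v, w):
    return sum(x * y for x, y in zip(v, w))

def reflect(u, v):
    # (A.3): s_u(v) = v - 2 ((v.u)/(u.u)) u, reflection across the hyperplane u-perp
    t = 2 * dot(v, u) / dot(u, u)
    return tuple(x - t * y for x, y in zip(v, u))

u = (1.0, 2.0, -2.0)
v = (3.0, -1.0, 4.0)

# s_u has order 2
assert all(abs(x - y) < 1e-12 for x, y in zip(reflect(u, reflect(u, v)), v))
# s_u(u) = -u
assert all(abs(x + y) < 1e-12 for x, y in zip(reflect(u, u), u))
# s_u fixes vectors orthogonal to u: (2,0,1).u = 0
assert reflect(u, (2.0, 0.0, 1.0)) == (2.0, 0.0, 1.0)
# s_{au} = s_u for a nonzero scalar a (here a = 5)
assert all(abs(x - y) < 1e-12 for x, y in zip(reflect((5.0, 10.0, -10.0), v), reflect(u, v)))
```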
(The choice of w in H_{u,c} affects how H_{u,0} is translated over to H_{u,c}, since adding w to H_{u,0} sends 0 to w. Compare in the figure above how H_{u,0} is carried onto H_{u,4} using translation by w_1 and by w_2.)

In the family of parallel hyperplanes {H_{u,c} : c ∈ R}, we can replace u with any nonzero scalar multiple, since H_{au,c} = H_{u,c/a}, so {H_{u,c} : c ∈ R} = {H_{au,c} : c ∈ R}. Geometrically this makes sense, since the importance of u relative to the hyperplanes is that it provides an orthogonal direction, and au also provides an orthogonal direction to the same hyperplanes.

To reflect points across a hyperplane H, fix a vector w ∈ H. Geometric intuition suggests that to reflect across H we can subtract w, then reflect across H − w (a hyperplane

through the origin), and then add w back. In the figure below, this corresponds to moving from P to Q (subtract w from P), then to Q′ (reflect Q across H − w), then to P′ (add w to Q′), getting the reflection of P across H. [figure omitted]

Therefore reflection across H should be given by the formula

(A.4) s′(v) = s(v − w) + w,

where s is reflection across H − w. Setting H = H_{u,c} by Lemma A.1, where u is a nonzero vector orthogonal to H, we have c = u·w (since w ∈ H), and by (A.3) and (A.4)

(A.5) s′(v) = (v − w) − 2 (((v − w)·u)/(u·u)) u + w = v − 2 ((v·u − c)/(u·u)) u.

The following properties show (A.5) is the reflection across the hyperplane H_{u,c}. If v ∈ H_{u,c} then v·u = c, so (A.5) implies s′(v) = v: s′ fixes the points of H_{u,c}. For any v in R^n, the average (1/2)(v + s′(v)), which is the midpoint of the segment connecting v and s′(v), lies in H_{u,c}: it equals v − ((v·u − c)/(u·u)) u, whose dot product with u is c. For any v in R^n, the difference v − s′(v), which is the direction of the segment connecting v and s′(v), is perpendicular to H_{u,c} since, by (A.5), it lies in Ru = H^⊥.

Example A.2. We use (A.5) to show any nonzero translation t_u(v) = v + u is the composition of two reflections. Set H = u^⊥ = H_{u,0}, and write s_u for the reflection across H and s′_u for the reflection across H + u, the hyperplane parallel to H that contains u. Since a vector in H + u has dot product u·u with u, we have H + u = H_{u,u·u}, so by (A.3) and (A.5)

s′_u(s_u(v)) = s_u(v) − 2 ((s_u(v)·u − u·u)/(u·u)) u.

Since s_u(v)·u = −(v·u), this becomes

s′_u(s_u(v)) = s_u(v) + 2 ((v·u)/(u·u)) u + 2u = v + 2u,

so s′_u ∘ s_u = t_{2u}. This is true for all u, so t_u = s′_{u/2} ∘ s_{u/2}. These formulas show any translation is a composition of two reflections across hyperplanes perpendicular to the direction of the translation.

The figure below illustrates Example A.2 in the plane, with u being a vector along the x-axis: reflecting v and w across H = u^⊥ and then across H + u is the same as translating v and w by 2u.
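Example A.2 can be verified numerically with (A.5). The sketch below (helper names are ours) reflects across H_{u,0} and then across H_{u,u·u} and checks the net effect is translation by 2u.

```python
def dot(v, w):
    return sum(x * y for x, y in zip(v, w))

def reflect_hyperplane(u, c, v):
    # (A.5): reflection across H_{u,c} = { x : x.u = c }
    t = 2 * (dot(v, u) - c) / dot(u, u)
    return tuple(x - t * y for x, y in zip(v, u))

u = (2.0, 1.0)
v = (0.5, -3.0)

# reflect across u-perp (c = 0), then across the parallel hyperplane through u (c = u.u)
step1 = reflect_hyperplane(u, 0.0, v)
step2 = reflect_hyperplane(u, dot(u, u), step1)

translated = tuple(x + 2 * y for x, y in zip(v, u))   # t_{2u}(v)
assert all(abs(x - y) < 1e-9 for x, y in zip(step2, translated))
```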

Theorem A.3. Let w and w′ be distinct vectors in R^n. There is a unique reflection s in R^n such that s(w) = w′. This reflection is in O_n(R) if and only if w and w′ have the same length.

Proof. A reflection taking w to w′ has a fixed hyperplane that contains the average (1/2)(w + w′) and is orthogonal to w − w′. Therefore the fixed hyperplane of a reflection taking w to w′ must be H_{w−w′,c} for some c. Since (1/2)(w + w′) ∈ H_{w−w′,c}, we have

c = (w − w′)·(1/2)(w + w′) = (1/2)(w·w − w′·w′).

Thus the only reflection that could send w to w′ is the one across the hyperplane H_{w−w′, (1/2)(w·w − w′·w′)}. Let's check that reflection across this hyperplane does send w to w′. Its formula, by (A.5), is

s(v) = v − 2 ((v·(w − w′) − c)/((w − w′)·(w − w′))) (w − w′),

where c = (1/2)(w·w − w′·w′). When v = w, the coefficient of w − w′ in the above formula becomes 1, so s(w) = w − (w − w′) = w′.

If w and w′ have the same length then w·w = w′·w′, so c = 0, and that means s has fixed hyperplane H_{w−w′,0}. Therefore s is a reflection fixing 0, so s ∈ O_n(R). Conversely, if s ∈ O_n(R) then s(0) = 0, which implies 0 ∈ H_{w−w′,c}, so c = 0, and therefore w·w = w′·w′, which means w and w′ have the same length.

To illustrate techniques, when w and w′ are distinct vectors in R^n with the same length, let's construct a reflection across a hyperplane through the origin that sends w to w′ geometrically, without using the algebraic formulas for reflections and hyperplanes. If w and w′ are on the same line through the origin then w′ = −w (the only vectors on Rw with the same length as w are ±w). For the reflection s across the hyperplane w^⊥, s(w) = −w = w′. If w and w′ are not on the same line through the origin then the span of w and w′ is a plane. The vector v = w + w′ is nonzero and lies on the line in this plane that bisects the angle between w and w′. (See the figure below.) Let u be a vector in this plane orthogonal to v, so writing w = av + bu we have w′ = av − bu.¹
Letting s be the reflection in R^n across the hyperplane u^⊥, which contains Rv (and contains more than Rv when n > 2), we have s(v) = v and s(u) = −u, so s(w) = s(av + bu) = av − bu = w'.

¹ This is geometrically clear, but algebraically tedious. Since v = w + w', we have w' = v − w = (1 − a)v − bu, so to show w' = av − bu we will show a = 1/2. Since v ⊥ u, w·v = a(v·v). The vectors w and w' have the same length, so w·v = w·(w + w') = w·w + w·w' and v·v = (w + w')·(w + w') = 2(w·w + w·w'), so w·v = (1/2)(v·v). Comparing this with w·v = a(v·v), we have a = 1/2.

[Figure: the vectors w and w' in the plane they span, with v = w + w' bisecting the angle between them and u orthogonal to v.]

We have already noted that reflections in O_n(R) are analogous to transpositions in the symmetric group S_n: they have order 2 and determinant −1, just as transpositions have order 2 and sign −1. The next theorem, due to E. Cartan, is the analogue for O_n(R) of the generation of S_n by transpositions.

Theorem A.4 (Cartan). The group O_n(R) is generated by its reflections.

Note that a reflection in O_n(R) fixes 0 and therefore its fixed hyperplane contains the origin, since a reflection does not fix any point outside its fixed hyperplane.

Proof. We argue by induction on n. The theorem is trivial when n = 1, since O_1(R) = {±1}. Let n ≥ 2. (While the case n = 2 was treated in Theorem 5.2, we will reprove it here.) Pick h ∈ O_n(R), so h(e_n) and e_n have the same length. If h(e_n) ≠ e_n, by Theorem A.3 there is a (unique) reflection s in O_n(R) such that s(h(e_n)) = e_n, so the composite isometry sh = s ∘ h fixes e_n. If h(e_n) = e_n then we can write s(h(e_n)) = e_n where s is the identity on R^n. We will use s with this meaning (reflection or identity) below. Any element of O_n(R) preserves orthogonality, so sh sends the hyperplane H := e_n^⊥ = R^{n−1} × {0} back to itself and is the identity on the line Re_n. Since e_n^⊥ = R^{n−1} × {0} has dimension n − 1, by induction² there are a finite number of reflections s_1, ..., s_m of H fixing the origin such that (sh)|_H = s_1 s_2 ⋯ s_m. Any reflection H → H that fixes 0 extends naturally to a reflection of R^n fixing 0, by declaring it to be the identity on the line H^⊥ = Re_n and extending by linearity from the behavior on H and H^⊥.³ Write s̃_i for the extension of s_i to a reflection on R^n in this way. Consider now the two isometries sh and s̃_1 s̃_2 ⋯ s̃_m of R^n. They agree on H = e_n^⊥ and they each fix e_n. Thus, by linearity, we have equality as functions on R^n: sh = s̃_1 s̃_2 ⋯ s̃_m. Therefore h = s^{−1} s̃_1 s̃_2 ⋯ s̃_m = s s̃_1 s̃_2 ⋯ s̃_m, since s^{−1} = s.
² Strictly speaking, since H is not R^{n−1}, to use induction we really should be proving the theorem not just for orthogonal transformations of the Euclidean spaces R^n, but for orthogonal transformations of their subspaces as well. See Remark 3.7 for the definition of an orthogonal transformation of a subspace, and use (A.3) rather than a matrix formula to define a reflection across a hyperplane in a subspace.

³ Geometrically, for n − 1 ≥ 2, if s is a reflection of H fixing the orthogonal complement of a line L in H, then this extension of s to R^n is the reflection of R^n fixing the orthogonal complement of L in R^n.
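The inductive proof is effectively an algorithm: repeatedly compose with the reflection of Theorem A.3 that moves the image of a standard basis vector back to that basis vector. A small numerical sketch of that idea follows; it is our own illustration, and the helper names (`reflection_matrix`, `cartan_factor`) are hypothetical.

```python
import numpy as np

def reflection_matrix(u):
    # Matrix of the reflection across the hyperplane u^⊥ through 0: I - 2uuᵀ/(uᵀu).
    u = u.reshape(-1, 1)
    return np.eye(len(u)) - 2 * (u @ u.T) / (u.T @ u)

def cartan_factor(A, tol=1e-12):
    # Factor an orthogonal matrix A into at most n reflections, in the spirit of
    # the inductive proof: for each basis vector e_i, if the current matrix moves
    # e_i, compose with the reflection swapping its image and e_i (Theorem A.3;
    # both vectors have the same length, so this reflection fixes 0).
    n = A.shape[0]
    reflections = []
    M = A.copy()
    for i in range(n - 1, -1, -1):
        e = np.eye(n)[:, i]
        u = M @ e - e
        if np.linalg.norm(u) > tol:
            S = reflection_matrix(u)
            reflections.append(S)
            M = S @ M        # now M fixes e_i (and still fixes e_j for j > i)
    # M is now the identity, so A is the product of the collected reflections.
    return reflections

# Example: rotation by 90 degrees in the plane is a product of two reflections.
A = np.array([[0.0, -1.0], [1.0, 0.0]])
refs = cartan_factor(A)
prod = np.eye(2)
for S in refs:
    prod = prod @ S
print(len(refs), np.allclose(prod, A))  # 2 True
```

Each factor is its own inverse, which is why the product of the reflections in the order collected reconstructs A.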

From the proof, if (sh)|_H is a composition of m isometries of H fixing 0 that are the identity or reflections, then h is a composition of m + 1 isometries of R^n fixing 0 that are the identity or reflections. Therefore every element of O_n(R) is a composition of at most n elements of O_n(R) that are the identity or reflections (in other words, from m ≤ n − 1 we get m + 1 ≤ n). If h is not the identity then such a decomposition of h must include reflections, so by removing the identity factors we see h is a composition of at most n reflections. The identity on R^n is a composition of 2 reflections. This establishes the stronger form of Cartan's theorem: every element of O_n(R) is a composition of at most n reflections (except for the identity when n = 1, unless we use the convention that the identity is a composition of 0 reflections).

Remark A.5. Cartan's theorem can be deduced from the decomposition of R^n in Theorem 5.3. Let a be the number of 2-dimensional W_i's and b be the number of 1-dimensional W_i's, so 2a + b = n and h acts as a rotation on any 2-dimensional W_i. By Theorem 5.2, any rotation of W_i is a composition of two reflections in W_i. A reflection in W_i can be extended to a reflection in R^n by setting it to be the identity on the other W_j's. If W_i is 1-dimensional then h is the identity on W_i except perhaps once, in which case b ≥ 1 and h is a reflection on that W_i. Putting all of these reflections together, we can express h as a composition of at most 2a reflections if b = 0 and at most 2a + 1 reflections if b ≥ 1. Either way, h is a composition of at most 2a + b = n reflections, with the understanding when n = 1 that the identity is a composition of 0 reflections.

Example A.6. For 0 ≤ m ≤ n, we will show the orthogonal matrix with m (−1)'s and n − m 1's on the diagonal is a composition of m reflections in O_n(R) and not fewer than m reflections in O_n(R). Any reflection in O_n(R) has a fixed hyperplane through 0 of dimension n − 1.
Therefore a composition of r reflections in O_n(R) fixes the intersection of r hyperplanes through the origin, whose dimension is at least n − r (some hyperplanes may be the same). If h ∈ O_n(R) is a composition of r reflections and fixes a subspace of dimension d then d ≥ n − r, so r ≥ n − d. Hence we get a lower bound on the number of reflections in O_n(R) whose composition can equal h in terms of the dimension of {v ∈ R^n : h(v) = v}. For the above matrix, the subspace of fixed vectors is {0}^m × R^{n−m}, which has dimension n − m. Therefore the least possible number of reflections in O_n(R) whose composition could equal this matrix is n − (n − m) = m, and this bound is achieved: the m matrices with −1 in one of the first m positions on the main diagonal and 1 elsewhere on the main diagonal are all reflections in O_n(R) and their composition is the above matrix. In particular, the isometry h(v) = −v is a composition of n and no fewer reflections in O_n(R).

Corollary A.7. Every isometry of R^n is a composition of at most n + 1 reflections. An isometry that fixes at least one point is a composition of at most n reflections.
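Example A.6 can be checked concretely for small n and m. The sketch below (our own illustration) multiplies the m coordinate reflections and confirms both the factorization and the dimension count behind the lower bound.

```python
import numpy as np

n, m = 5, 3
# The m coordinate reflections: -1 in one of the first m diagonal entries.
refs = [np.diag([-1.0 if j == i else 1.0 for j in range(n)]) for i in range(m)]
prod = np.eye(n)
for S in refs:
    prod = prod @ S
target = np.diag([-1.0] * m + [1.0] * (n - m))
print(np.allclose(prod, target))  # True: m reflections suffice

# Lower bound: the fixed space {v : prod v = v} is the +1 eigenspace and has
# dimension n - m, so at least n - (n - m) = m reflections are needed.
fixed_dim = sum(np.isclose(w, 1.0) for w in np.linalg.eigvalsh(prod))
print(fixed_dim == n - m)         # True
```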

The difference between this corollary and Cartan's theorem is that in the corollary we are not assuming the isometries, and in particular the reflections, are taken from O_n(R); i.e., they need not fix 0.

Proof. Let h be an isometry of R^n. If h(0) = 0, then h belongs to O_n(R) (Theorem 4.1) and Cartan's theorem implies h is a composition of at most n reflections through hyperplanes containing 0. If h(p) = p for some p ∈ R^n, then we can change the coordinate system (using a translation) so that the origin is placed at p. Then the previous case shows h is a composition of at most n reflections through hyperplanes containing p. Suppose h has no fixed points. Then in particular, h(0) ≠ 0. By Theorem A.3 there is some reflection s across a hyperplane in R^n such that s(h(0)) = 0. Then sh ∈ O_n(R), so by Cartan's theorem sh is a composition of at most n reflections, and that implies h = s(sh) is a composition of at most n + 1 reflections.

The proof of Corollary A.7 shows an isometry of R^n is a composition of at most n reflections except possibly when it has no fixed points; then n + 1 reflections may be required. For example, when n = 2, nonzero translations and glide reflections have no fixed points, and the first type requires 2 reflections while the second type requires 3 reflections.

References

[1] S. H. Friedberg, A. J. Insel, and L. E. Spence, Linear Algebra, 4th ed., Pearson, Upper Saddle River, NJ.
[2] L. Rudolph, The Structure of Orthogonal Transformations, Amer. Math. Monthly 98 (1991).
