Computing Orthonormal Sets in 2D, 3D, and 4D




David Eberly
Geometric Tools, LLC
http://www.geometrictools.com/
Copyright © 1998-2016. All Rights Reserved.

Created: March 22, 2010
Last Modified: August 11, 2013

Contents

1 Introduction
2 Orthonormal Sets in 2D
3 Orthonormal Sets in 3D
  3.1 One Vector from Two Inputs
  3.2 Two Vectors from One Input
4 Orthonormal Sets in 4D
  4.1 One Vector from Three Inputs
  4.2 Two Vectors from Two Inputs
  4.3 Three Vectors from One Input
5 An Application to 3D Rotations
6 An Application to 4D Rotations
7 Implementations
  7.1 2D Pseudocode
  7.2 3D Pseudocode
  7.3 4D Pseudocode

1 Introduction

A frequently asked question in computer graphics is: Given a unit-length vector N = (x, y, z), how do you compute two unit-length vectors U and V so that the three vectors are mutually perpendicular? For example, N might be a normal vector to a plane, and you want to define a coordinate system in the plane, which requires computing U and V. There are many choices for the vectors, but for numerical robustness, I have always recommended the following algorithm. Locate the component of maximum absolute value. To illustrate, suppose that x is this value, so |x| >= |y| and |x| >= |z|. Swap the first two components, changing the sign on the second, and set the third component to 0, obtaining (y, -x, 0). Normalize this to obtain

  U = (y, -x, 0) / sqrt(x^2 + y^2)   (1)

Now compute a cross product to obtain the other vector,

  V = N x U = (xz, yz, -x^2 - y^2) / sqrt(x^2 + y^2)   (2)

As you can see, a division by sqrt(x^2 + y^2) is required, so it is necessary that the divisor not be zero. In fact it is not, because of how we chose x. For a unit-length vector (x, y, z) with |x| >= |y| and |x| >= |z|, it must be that |x| >= 1/sqrt(3), since 3x^2 >= x^2 + y^2 + z^2 = 1. The division by sqrt(x^2 + y^2) is therefore numerically robust; you are not dividing by a number close to zero.

In linear algebra, we refer to the set {U, V, N} as an orthonormal set. By definition, the vectors are unit length and mutually perpendicular. Let Span(N) denote the span of N. Formally, this is the set

  Span(N) = { t N : t in R }   (3)

where R is the set of real numbers. The span is the line that contains the origin 0 and has direction N. We may define the span of any number of vectors. For example, the span of U and V is

  Span(U, V) = { s U + t V : s in R, t in R }   (4)

This is the plane that contains the origin and has unit-length normal N; that is, any vector in Span(U, V) is perpendicular to N. The span of U and V is said to be the orthogonal complement of the span of N. Equivalently, the span of N is said to be the orthogonal complement of the span of U and V. The notation for the orthogonal complement is a superscript "perp" symbol:
Span(N)^⊥ is the orthogonal complement of the span of N, and Span(U, V)^⊥ is the orthogonal complement of the span of U and V. Moreover,

  Span(U, V) = Span(N)^⊥,  Span(N) = Span(U, V)^⊥   (5)

2 Orthonormal Sets in 2D

The ideas in the introduction specialize to two dimensions. Given a unit-length vector U0 = (x0, y0), a unit-length vector perpendicular to it is U1 = (x1, y1) = (y0, -x0). The span of each vector is a line and the two lines are perpendicular; therefore,

  Span(U0)^⊥ = Span(U1),  Span(U1)^⊥ = Span(U0)   (6)

The set {U0, U1} is an orthonormal set. A fancy way of constructing the perpendicular vector is to use a symbolic cofactor expansion of a determinant. Define E0 = (1, 0) and E1 = (0, 1). The perpendicular vector is

  U1 = det | E0  E1 | = y E0 - x E1 = (y, -x)   (7)
           | x   y  |

Although overkill in two dimensions, the determinant concept is useful for larger dimensions. The set {U0, U1} is a left-handed orthonormal set. The vectors are unit length and perpendicular, and the matrix M = [U0 U1] whose columns are the two vectors is orthogonal with det(M) = -1. To obtain a right-handed orthonormal set, negate the last vector: {U0, -U1}.

3 Orthonormal Sets in 3D

3.1 One Vector from Two Inputs

Define E0 = (1, 0, 0), E1 = (0, 1, 0), and E2 = (0, 0, 1). Given two vectors V0 = (x0, y0, z0) and V1 = (x1, y1, z1), the cross product may be written as a symbolic cofactor expansion of a determinant,

  V2 = V0 x V1 = det | E0  E1  E2 |
                     | x0  y0  z0 |
                     | x1  y1  z1 |
     = (y0 z1 - y1 z0) E0 - (x0 z1 - x1 z0) E1 + (x0 y1 - x1 y0) E2
     = (y0 z1 - y1 z0, x1 z0 - x0 z1, x0 y1 - x1 y0)   (8)

If either of the input vectors is the zero vector, or if the input vectors are nonzero and parallel, the cross product is the zero vector. If the input vectors are unit length and perpendicular, then the cross product is guaranteed to be unit length, and {V0, V1, V2} is an orthonormal set. If the input vectors are only linearly independent (not zero and not parallel), we may compute a pair of unit-length, mutually perpendicular vectors using Gram-Schmidt orthonormalization,

  U0 = V0 / |V0|,  U1 = (V1 - (U0 . V1) U0) / |V1 - (U0 . V1) U0|   (9)

U0 is a unit-length vector obtained by normalizing V0. We project out the U0 component of V1, which produces a vector perpendicular to U0. This vector is normalized to produce U1. We may then compute the cross product to obtain another unit-length vector, U2 = U0 x U1.
Thus, if we start with two unit-length vectors that are perpendicular, we can obtain a third vector using the cross product, and we have shown that

  Span(U2) = Span(U0, U1)^⊥   (10)
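The two-input construction of equation (9) followed by the cross product can be written out in a self-contained sketch. The `std::array`-based `Vec3` type and helper names below are assumptions of this illustration, not the document's `Vector3` class:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Basic vector helpers: dot product, difference, scaling, normalization.
inline double Dot(const Vec3& a, const Vec3& b)
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

inline Vec3 Sub(const Vec3& a, const Vec3& b)
{
    return { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
}

inline Vec3 Scale(double s, const Vec3& a)
{
    return { s * a[0], s * a[1], s * a[2] };
}

inline Vec3 Normalize(const Vec3& a)
{
    double len = std::sqrt(Dot(a, a));
    return Scale(1.0 / len, a);
}

inline Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return { a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0] };
}

// Build an orthonormal set {u0, u1, u2} from two linearly independent
// inputs v0, v1: normalize v0, project the u0 component out of v1 and
// normalize (Gram-Schmidt), then take a cross product for the third vector.
inline std::array<Vec3, 3> OrthonormalSetFromTwo(const Vec3& v0, const Vec3& v1)
{
    Vec3 u0 = Normalize(v0);
    Vec3 u1 = Normalize(Sub(v1, Scale(Dot(u0, v1), u0)));
    Vec3 u2 = Cross(u0, u1);
    return { u0, u1, u2 };
}
```

The inputs need only be linearly independent; the output vectors are mutually perpendicular and unit length up to floating-point round-off.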

3.2 Two Vectors from One Input

If we start with only one unit-length vector U2, we wish to find two unit-length vectors U0 and U1 such that {U0, U1, U2} is an orthonormal set, in which case

  Span(U0, U1) = Span(U2)^⊥   (11)

But we have already seen how to do this in the introduction. Let us be slightly more formal and use the symbolic determinant idea; this idea allows us to generalize to four dimensions. Let U2 = (x, y, z) be a unit-length vector, and suppose that x has the largest absolute value of the three components. We may construct a determinant whose last row is one of the basis vectors Ei that has a zero in its first component (the one corresponding to the location of x). Let us choose (0, 0, 1) as this vector; then

  det | E0  E1  E2 | = y E0 - x E1 + 0 E2 = (y, -x, 0)   (12)
      | x   y   z  |
      | 0   0   1  |

which matches the construction in the introduction. This vector cannot be the zero vector, because x has the largest absolute value and so cannot be zero (the initial vector is not the zero vector). Normalizing this vector, we have U0 = (y, -x, 0)/sqrt(x^2 + y^2). We may then compute U1 = U2 x U0.

If y has the largest absolute value, then the last row of the determinant can be either (1, 0, 0) or (0, 0, 1); that is, we may not choose the Euclidean basis vector with a 1 in the same component that corresponds to y. For example,

  det | E0  E1  E2 | = 0 E0 + z E1 - y E2 = (0, z, -y)   (13)
      | x   y   z  |
      | 1   0   0  |

Once again the result cannot be the zero vector, so we may robustly compute U0 = (0, z, -y)/sqrt(y^2 + z^2) and U1 = U2 x U0.

Finally, let z have the largest absolute value. We may compute

  det | E0  E1  E2 | = -z E0 + 0 E1 + x E2 = (-z, 0, x)   (14)
      | x   y   z  |
      | 0   1   0  |

which cannot be the zero vector. Thus, U0 = (-z, 0, x)/sqrt(x^2 + z^2) and U1 = U2 x U0. Of course, we could also have chosen the last row to be (1, 0, 0). The set {U0, U1, U2} is a right-handed orthonormal set.
The vectors are unit length and mutually perpendicular, and the matrix M = [U0 U1 U2] whose columns are the three vectors is orthogonal with det(M) = +1. To obtain a left-handed orthonormal set, negate the last vector: {U0, U1, -U2}.
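The max-component selection of this section condenses into a small branch. As before, the `std::array`-based `Vec3` type and helper names are assumptions of this sketch:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

inline double Dot(const Vec3& a, const Vec3& b)
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

inline Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return { a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0] };
}

// Given a unit-length u2 = (x, y, z), produce u0 and u1 so that
// {u0, u1, u2} is an orthonormal set.  Each branch divides by a length
// that involves the component of largest absolute value, so the divisor
// is bounded below by 1/sqrt(3) and the division is robust.
inline std::array<Vec3, 3> OrthonormalSetFromOne(const Vec3& u2)
{
    double x = u2[0], y = u2[1], z = u2[2];
    Vec3 u0;
    if (std::fabs(x) >= std::fabs(y) && std::fabs(x) >= std::fabs(z))
    {
        double len = std::sqrt(x * x + y * y);
        u0 = { y / len, -x / len, 0.0 };      // from (y, -x, 0)
    }
    else if (std::fabs(y) >= std::fabs(z))
    {
        double len = std::sqrt(y * y + z * z);
        u0 = { 0.0, z / len, -y / len };      // from (0, z, -y)
    }
    else
    {
        double len = std::sqrt(x * x + z * z);
        u0 = { -z / len, 0.0, x / len };      // from (-z, 0, x)
    }
    Vec3 u1 = Cross(u2, u0);                  // completes the set
    return { u0, u1, u2 };
}
```

Because u1 = u2 x u0, the returned set satisfies u0 x u1 = u2, i.e., it is right-handed.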

4 Orthonormal Sets in 4D

This section shows how the concepts in three dimensions extend to four dimensions.

4.1 One Vector from Three Inputs

Consider three vectors Vi = (xi, yi, zi, wi) for i = 0, 1, 2. If the vectors are linearly independent, then we may compute a fourth vector that is perpendicular to the three inputs. This is a generalization of the cross product method in 3D for computing a vector that is perpendicular to two linearly independent vectors. Specifically, define E0 = (1, 0, 0, 0), E1 = (0, 1, 0, 0), E2 = (0, 0, 1, 0), and E3 = (0, 0, 0, 1). We may compute a symbolic determinant

  V3 = det | E0  E1  E2  E3 |
           | x0  y0  z0  w0 |
           | x1  y1  z1  w1 |
           | x2  y2  z2  w2 |

     =   det | y0  z0  w0 | E0
             | y1  z1  w1 |
             | y2  z2  w2 |

       - det | x0  z0  w0 | E1
             | x1  z1  w1 |
             | x2  z2  w2 |

       + det | x0  y0  w0 | E2
             | x1  y1  w1 |
             | x2  y2  w2 |

       - det | x0  y0  z0 | E3   (15)
             | x1  y1  z1 |
             | x2  y2  z2 |

If the input vectors are unit length and mutually perpendicular, then it is guaranteed that the output vector is unit length and the four vectors form an orthonormal set. If the input vectors themselves do not form an orthonormal set, we may use Gram-Schmidt orthonormalization to generate one:

  U0 = V0 / |V0|;  Ui = (Vi - sum_{j=0}^{i-1} (Uj . Vi) Uj) / |Vi - sum_{j=0}^{i-1} (Uj . Vi) Uj|,  i >= 1   (16)

4.2 Two Vectors from Two Inputs

Let us consider two unit-length, perpendicular vectors Ui = (xi, yi, zi, wi) for i = 0, 1. If the inputs are only linearly independent, we may first use Gram-Schmidt orthonormalization to obtain the unit-length, perpendicular vectors. The inputs have six associated 2x2 determinants: x0 y1 - x1 y0, x0 z1 - x1 z0, x0 w1 - x1 w0, y0 z1 - y1 z0, y0 w1 - y1 w0, and z0 w1 - z1 w0. It is guaranteed that not all of these determinants are zero when the input vectors are linearly independent.
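A direct expansion of the symbolic determinant of equation (15) is short enough to write out. The `Vec4` type and the `Det3` helper below are illustrative assumptions, not part of the document's library:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec4 = std::array<double, 4>;

inline double Dot4(const Vec4& a, const Vec4& b)
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
}

// 3x3 determinant helper for the cofactors of equation (15).
inline double Det3(double a0, double a1, double a2,
                   double b0, double b1, double b2,
                   double c0, double c1, double c2)
{
    return a0 * (b1 * c2 - b2 * c1)
         - a1 * (b0 * c2 - b2 * c0)
         + a2 * (b0 * c1 - b1 * c0);
}

// Vector perpendicular to three 4D inputs: the four cofactors of the
// symbolic determinant, with the alternating signs of equation (15).
inline Vec4 HyperCross(const Vec4& v0, const Vec4& v1, const Vec4& v2)
{
    return {
        +Det3(v0[1], v0[2], v0[3], v1[1], v1[2], v1[3], v2[1], v2[2], v2[3]),
        -Det3(v0[0], v0[2], v0[3], v1[0], v1[2], v1[3], v2[0], v2[2], v2[3]),
        +Det3(v0[0], v0[1], v0[3], v1[0], v1[1], v1[3], v2[0], v2[1], v2[3]),
        -Det3(v0[0], v0[1], v0[2], v1[0], v1[1], v1[2], v2[0], v2[1], v2[2])
    };
}
```

For orthonormal inputs the result is unit length; for merely independent inputs it is nonzero and perpendicular to all three.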
We may search for the determinant of largest absolute value, which is the analogue of searching for the component of largest absolute value in the three-dimensional setting. For simplicity, assume that x0 y1 - x1 y0 has the largest absolute value; the handling of the other cases is similar. We may construct a symbolic determinant whose last row is either (0, 0, 1, 0) or (0, 0, 0, 1). The idea is that we need a Euclidean basis vector whose components corresponding to the x and y locations are zero.

We did a similar thing in three dimensions. To illustrate, let us choose (0, 0, 0, 1). The determinant is

  det | E0  E1  E2  E3 | = (y0 z1 - y1 z0) E0 - (x0 z1 - x1 z0) E1 + (x0 y1 - x1 y0) E2 + 0 E3   (17)
      | x0  y0  z0  w0 |
      | x1  y1  z1  w1 |
      | 0   0   0   1  |

This vector cannot be the zero vector, because we know that x0 y1 - x1 y0 has the largest absolute value of the six determinants and so is not zero. Moreover, this vector is perpendicular to the first two row vectors of the determinant. We can choose the unit-length vector

  U2 = (x2, y2, z2, w2) = (y0 z1 - y1 z0, x1 z0 - x0 z1, x0 y1 - x1 y0, 0) / |(y0 z1 - y1 z0, x1 z0 - x0 z1, x0 y1 - x1 y0, 0)|   (18)

Observe that (x2, y2, z2) is the normalized cross product of (x0, y0, z0) and (x1, y1, z1), and w2 = 0. We may now compute

  U3 = det | E0  E1  E2  E3 |
           | x0  y0  z0  w0 |   (19)
           | x1  y1  z1  w1 |
           | x2  y2  z2  0  |

which is guaranteed to be unit length. Moreover,

  Span(U2, U3) = Span(U0, U1)^⊥   (20)

That is, the span of the output vectors is the orthogonal complement of the span of the input vectors.

4.3 Three Vectors from One Input

Let U0 = (x0, y0, z0, w0) be a unit-length vector. Similar to the construction in three dimensions, search for the component of largest absolute value. For simplicity, assume it is x0; the other cases are handled similarly. Choose U1 = (y0, -x0, 0, 0)/sqrt(x0^2 + y0^2), which is not the zero vector. U1 is unit length and perpendicular to U0. Now apply the construction of the previous section to obtain U2 and U3. The set {U0, U1, U2, U3} is a left-handed orthonormal set. The vectors are unit length and mutually perpendicular, and the matrix M = [U0 U1 U2 U3] whose columns are the four vectors is orthogonal with det(M) = -1. To obtain a right-handed orthonormal set, negate the last vector: {U0, U1, U2, -U3}.

5 An Application to 3D Rotations

In three dimensions, a rotation matrix R has an axis of rotation and an angle about that axis.
Let U2 be a unit-length direction vector for the axis and let θ be the angle of rotation. Construct the orthogonal complement of Span(U2) by computing vectors U0 and U1 for which Span(U0, U1) = Span(U2)^⊥.
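As a numerical sanity check of this construction, the following minimal sketch builds the axis-angle rotation matrix in the γ/σ form derived in this section and verifies that the axis direction is unchanged while vectors in the orthogonal complement rotate. The `std::array`-based `Vec3`/`Mat3` types are assumptions of this illustration, not the document's matrix classes:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Rotation about the unit-length axis (a, b, c) by angle theta, written
// with gamma = cos(theta) and sigma = sin(theta); this is the standard
// axis-angle (Rodrigues) matrix.
inline Mat3 CreateRotation(const Vec3& axis, double theta)
{
    double a = axis[0], b = axis[1], c = axis[2];
    double g = std::cos(theta), s = std::sin(theta);
    return {{
        { a * a * (1 - g) + g,     a * b * (1 - g) - c * s, a * c * (1 - g) + b * s },
        { a * b * (1 - g) + c * s, b * b * (1 - g) + g,     b * c * (1 - g) - a * s },
        { a * c * (1 - g) - b * s, b * c * (1 - g) + a * s, c * c * (1 - g) + g }
    }};
}

// Matrix-vector product.
inline Vec3 Apply(const Mat3& m, const Vec3& v)
{
    Vec3 r{};
    for (int i = 0; i < 3; ++i)
    {
        r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
    }
    return r;
}
```

For the axis (0, 0, 1) and θ = π/2, the matrix maps (1, 0, 0) to (0, 1, 0) and leaves the axis fixed, consistent with the invariance of Span(U2).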

The rotation does not modify the axis; that is, R U2 = U2. Using set notation,

  R(Span(U2)) = Span(U2)   (21)

which says the span of U2 is invariant under the rotation. The rotation does modify vectors in the orthogonal complement, namely,

  R(x0 U0 + x1 U1) = (x0 cos θ - x1 sin θ) U0 + (x0 sin θ + x1 cos θ) U1   (22)

This is a rotation in the plane perpendicular to the rotation axis. In set notation,

  R(Span(U0, U1)) = Span(U0, U1)   (23)

which says the span of U0 and U1 is invariant under the rotation. This does not mean that individual elements of the span do not move; in fact they all do (except for the zero vector). The notation says that the plane is mapped to itself as an entire set.

A general vector V may be written in the coordinate system of the orthonormal vectors, V = x0 U0 + x1 U1 + x2 U2, where xi = Ui . V. The rotation is

  R V = (x0 cos θ - x1 sin θ) U0 + (x0 sin θ + x1 cos θ) U1 + x2 U2   (24)

Let U = [U0 U1 U2] be the matrix whose columns are the specified vectors and let X = (x0, x1, x2), so V = U X. The previous equation may be written in coordinate-free format as

  R V = R U X = U M X = U M U^T V,  M = [ cos θ  -sin θ  0 ]   (25)
                                        [ sin θ   cos θ  0 ]
                                        [ 0       0      1 ]

The rotation matrix is therefore

  R = U [ cos θ  -sin θ  0 ] U^T   (26)
        [ sin θ   cos θ  0 ]
        [ 0       0      1 ]

If U2 = (a, b, c), then U0 = (b, -a, 0)/sqrt(a^2 + b^2) and U1 = (ac, bc, -a^2 - b^2)/sqrt(a^2 + b^2). Define σ = sin θ and γ = cos θ. Some algebra will show that the rotation matrix is

  R = [ a^2 (1 - γ) + γ    ab (1 - γ) - c σ   ac (1 - γ) + b σ ]
      [ ab (1 - γ) + c σ   b^2 (1 - γ) + γ    bc (1 - γ) - a σ ]   (27)
      [ ac (1 - γ) - b σ   bc (1 - γ) + a σ   c^2 (1 - γ) + γ  ]

6 An Application to 4D Rotations

A three-dimensional rotation has two invariant sets, a line and a plane. A four-dimensional rotation also has two invariant sets, each a two-dimensional subspace. Each subspace has a rotation applied to it, the

first by angle θ0 and the second by angle θ1. Let the first subspace be Span(U0, U1) and the second subspace be Span(U2, U3), where {U0, U1, U2, U3} is an orthonormal set. A general vector may be written as

  V = x0 U0 + x1 U1 + x2 U2 + x3 U3 = U X   (28)

where U is the matrix whose columns are the Ui vectors and X = (x0, x1, x2, x3). Define γi = cos θi and σi = sin θi for i = 0, 1. The rotation of V is

  R V = (x0 γ0 - x1 σ0) U0 + (x0 σ0 + x1 γ0) U1 + (x2 γ1 - x3 σ1) U2 + (x2 σ1 + x3 γ1) U3   (29)

Similar to the three-dimensional case, the rotation matrix may be shown to be

  R = U [ γ0  -σ0  0    0  ] U^T   (30)
        [ σ0   γ0  0    0  ]
        [ 0    0   γ1  -σ1 ]
        [ 0    0   σ1   γ1 ]

In a numerical implementation, you specify U0 and U1 and the angles θ0 and θ1. The other vectors U2 and U3 may be computed using the ideas shown previously in this document. For example, to specify a rotation in the xy-plane, choose U0 = (1, 0, 0, 0), U1 = (0, 1, 0, 0), and θ1 = 0. The angle θ0 is the rotation angle in the xy-plane, and U2 = (0, 0, 1, 0) and U3 = (0, 0, 0, 1). Another example is to specify a rotation in xyz-space. Choose U0 = (x, y, z, 0), where (x, y, z) is the 3D rotation axis. Choose U1 = (0, 0, 0, 1) and θ0 = 0, so that all vectors in the span of U0 and U1 are unchanged by the rotation. The other vectors are U2 = (y, -x, 0, 0)/sqrt(x^2 + y^2) and U3 = (xz, yz, -x^2 - y^2, 0)/sqrt(x^2 + y^2), and the angle of rotation in their plane is θ1.

7 Implementations

The Gram-Schmidt orthonormalization can be expressed in general dimensions, although for large dimensions it is not the most numerically robust approach; iterative methods of matrix analysis are better.

// The vector v is normalized in place.  The length of v before
// normalization is the return value of the function.
template <int N, typename Real>
Real Normalize(Vector<N, Real>& v);

// The function returns the smallest length of the unnormalized vectors
// computed during the process.  If this value is nearly zero, it is possible
// that the inputs are linearly dependent (within numerical round-off errors).
// On input, 1 <= numInputs <= N and v[0] through v[numInputs-1] must be
// initialized.  On output, the vectors v[0] through v[numInputs-1] form
// an orthonormal set.
template <int N, typename Real>
Real Orthonormalize(int numInputs, Vector<N, Real> v[N])
{
    if (1 <= numInputs && numInputs <= N)
    {
        Real minLength = Normalize(v[0]);
        for (int i = 1; i < numInputs; ++i)
        {
            for (int j = 0; j < i; ++j)
            {
                Real dot = Dot(v[i], v[j]);
                v[i] -= v[j] * dot;
            }
            Real length = Normalize(v[i]);
            if (length < minLength)
            {
                minLength = length;
            }
        }
        return minLength;
    }
    return (Real)0;
}

7.1 2D Pseudocode

// Compute the perpendicular using the formal determinant,
//   perp = det{{e0,e1},{x0,x1}} = (x1,-x0)
// where e0 = (1,0), e1 = (0,1), and v = (x0,x1).
template <typename Real>
Vector2<Real> Perp(Vector2<Real> const& v)
{
    return Vector2<Real>(v[1], -v[0]);
}

// Compute a right-handed orthonormal basis for the orthogonal complement
// of the input vectors.  The function returns the smallest length of the
// unnormalized vectors computed during the process.  If this value is nearly
// zero, it is possible that the inputs are linearly dependent (within
// numerical round-off errors).  On input, numInputs must be 1 and v[0]
// must be initialized.  On output, the vectors v[0] and v[1] form an
// orthonormal set.
template <typename Real>
Real ComputeOrthogonalComplement(int numInputs, Vector2<Real> v[2])
{
    if (numInputs == 1)
    {
        v[1] = Perp(v[0]);
        return Orthonormalize<2, Real>(2, v);
    }
    return (Real)0;
}

// Create a rotation matrix (angle is in radians).  This is not what you
// would do in practice, but it shows the similarity to constructions in
// other dimensions.
template <typename Real>
Matrix2x2<Real> CreateRotation(Vector2<Real> u0, Real angle)
{
    Vector2<Real> v[2];
    v[0] = u0;
    ComputeOrthogonalComplement(1, v);
    Real cs = cos(angle), sn = sin(angle);
    Matrix2x2<Real> U(v[0], v[1]);
    Matrix2x2<Real> M(cs, -sn, sn, cs);
    Matrix2x2<Real> R = U * M * Transpose(U);
    return R;
}

7.2 3D Pseudocode

// Compute the cross product using the formal determinant:
//   cross = det{{e0,e1,e2},{x0,x1,x2},{y0,y1,y2}}
//         = (x1*y2 - x2*y1, x2*y0 - x0*y2, x0*y1 - x1*y0)
// where e0 = (1,0,0), e1 = (0,1,0), e2 = (0,0,1), v0 = (x0,x1,x2), and

// v1 = (y0,y1,y2).
template <typename Real>
Vector3<Real> Cross(Vector3<Real> const& v0, Vector3<Real> const& v1)
{
    return Vector3<Real>(
        v0[1] * v1[2] - v0[2] * v1[1],
        v0[2] * v1[0] - v0[0] * v1[2],
        v0[0] * v1[1] - v0[1] * v1[0]);
}

// Compute a right-handed orthonormal basis for the orthogonal complement
// of the input vectors.  The function returns the smallest length of the
// unnormalized vectors computed during the process.  If this value is nearly
// zero, it is possible that the inputs are linearly dependent (within
// numerical round-off errors).  On input, numInputs must be 1 or 2 and
// v[0] through v[numInputs-1] must be initialized.  On output, the
// vectors v[0] through v[2] form an orthonormal set.
template <typename Real>
Real ComputeOrthogonalComplement(int numInputs, Vector3<Real> v[3])
{
    if (numInputs == 1)
    {
        if (fabs(v[0][0]) > fabs(v[0][1]))
        {
            v[1] = Vector3<Real>(-v[0][2], (Real)0, +v[0][0]);
        }
        else
        {
            v[1] = Vector3<Real>((Real)0, +v[0][2], -v[0][1]);
        }
        numInputs = 2;
    }

    if (numInputs == 2)
    {
        v[2] = Cross(v[0], v[1]);
        return Orthonormalize<3, Real>(3, v);
    }

    return (Real)0;
}

// Create a rotation matrix (angle is in radians).  This is not what you
// would do in practice, but it shows the similarity to constructions in
// other dimensions.
template <typename Real>
Matrix3x3<Real> CreateRotation(Vector3<Real> u0, Real angle)
{
    Vector3<Real> v[3];
    v[0] = u0;
    ComputeOrthogonalComplement(1, v);
    Real cs = cos(angle), sn = sin(angle);
    Matrix3x3<Real> U(v[0], v[1], v[2]);
    Matrix3x3<Real> M(
        cs, -sn, (Real)0,
        sn, cs, (Real)0,
        (Real)0, (Real)0, (Real)1);
    Matrix3x3<Real> R = U * M * Transpose(U);
    return R;
}

7.3 4D Pseudocode

// Compute the hypercross product using the formal determinant:
//   hcross = det{{e0,e1,e2,e3},{x0,x1,x2,x3},{y0,y1,y2,y3},{z0,z1,z2,z3}}
// where e0 = (1,0,0,0), e1 = (0,1,0,0), e2 = (0,0,1,0), e3 = (0,0,0,1),
// v0 = (x0,x1,x2,x3), v1 = (y0,y1,y2,y3), and v2 = (z0,z1,z2,z3).

template <typename Real>
Vector4<Real> HyperCross(Vector4<Real> const& v0, Vector4<Real> const& v1,
    Vector4<Real> const& v2)
{
    Real xy01 = v0[0] * v1[1] - v0[1] * v1[0];
    Real xy02 = v0[0] * v1[2] - v0[2] * v1[0];
    Real xy03 = v0[0] * v1[3] - v0[3] * v1[0];
    Real xy12 = v0[1] * v1[2] - v0[2] * v1[1];
    Real xy13 = v0[1] * v1[3] - v0[3] * v1[1];
    Real xy23 = v0[2] * v1[3] - v0[3] * v1[2];
    return Vector4<Real>(
        +xy23 * v2[1] - xy13 * v2[2] + xy12 * v2[3],
        -xy23 * v2[0] + xy03 * v2[2] - xy02 * v2[3],
        +xy13 * v2[0] - xy03 * v2[1] + xy01 * v2[3],
        -xy12 * v2[0] + xy02 * v2[1] - xy01 * v2[2]);
}

// Compute a right-handed orthonormal basis for the orthogonal complement
// of the input vectors.  The function returns the smallest length of the
// unnormalized vectors computed during the process.  If this value is nearly
// zero, it is possible that the inputs are linearly dependent (within
// numerical round-off errors).  On input, numInputs must be 1, 2, or 3 and
// v[0] through v[numInputs-1] must be initialized.  On output, the vectors
// v[0] through v[3] form an orthonormal set.
template <typename Real>
Real ComputeOrthogonalComplement(int numInputs, Vector4<Real> v[4])
{
    if (numInputs == 1)
    {
        int maxIndex = 0;
        Real maxAbsValue = fabs(v[0][0]);
        for (int i = 1; i < 4; ++i)
        {
            Real absValue = fabs(v[0][i]);
            if (absValue > maxAbsValue)
            {
                maxIndex = i;
                maxAbsValue = absValue;
            }
        }

        if (maxIndex < 2)
        {
            v[1][0] = -v[0][1];
            v[1][1] = +v[0][0];
            v[1][2] = (Real)0;
            v[1][3] = (Real)0;
        }
        else
        {
            v[1][0] = (Real)0;
            v[1][1] = (Real)0;
            v[1][2] = +v[0][3];
            v[1][3] = -v[0][2];
        }
        numInputs = 2;
    }

    if (numInputs == 2)
    {
        Real det[6] =
        {
            v[0][0] * v[1][1] - v[1][0] * v[0][1],
            v[0][0] * v[1][2] - v[1][0] * v[0][2],
            v[0][0] * v[1][3] - v[1][0] * v[0][3],
            v[0][1] * v[1][2] - v[1][1] * v[0][2],
            v[0][1] * v[1][3] - v[1][1] * v[0][3],
            v[0][2] * v[1][3] - v[1][2] * v[0][3]
        };

        int maxIndex = 0;
        Real maxAbsValue = fabs(det[0]);
        for (int i = 1; i < 6; ++i)
        {
            Real absValue = fabs(det[i]);
            if (absValue > maxAbsValue)
            {
                maxIndex = i;
                maxAbsValue = absValue;
            }
        }

        if (maxIndex == 0)
        {
            v[2] = Vector4<Real>(-det[4], +det[2], (Real)0, -det[0]);
        }
        else if (maxIndex <= 2)
        {
            v[2] = Vector4<Real>(+det[5], (Real)0, -det[2], +det[1]);
        }
        else
        {
            v[2] = Vector4<Real>((Real)0, -det[5], +det[4], -det[3]);
        }
        numInputs = 3;
    }

    if (numInputs == 3)
    {
        v[3] = HyperCross(v[0], v[1], v[2]);
        return Orthonormalize<4, Real>(4, v);
    }

    return (Real)0;
}

// Create a rotation matrix.  The input vectors are unit length and
// perpendicular.  angle01 is in radians and is the rotation angle for the
// plane of the input vectors; angle23 is in radians and is the rotation
// angle for the plane of the orthogonal complement of the input vectors.
template <typename Real>
Matrix4x4<Real> CreateRotation(Vector4<Real> u0, Vector4<Real> u1,
    Real angle01, Real angle23)
{
    Vector4<Real> v[4];
    v[0] = u0;
    v[1] = u1;
    ComputeOrthogonalComplement(2, v);
    Real cs0 = cos(angle01), sn0 = sin(angle01);
    Real cs1 = cos(angle23), sn1 = sin(angle23);
    Matrix4x4<Real> U(v[0], v[1], v[2], v[3]);
    Matrix4x4<Real> M(
        cs0, -sn0, (Real)0, (Real)0,
        sn0, cs0, (Real)0, (Real)0,
        (Real)0, (Real)0, cs1, -sn1,
        (Real)0, (Real)0, sn1, cs1);
    Matrix4x4<Real> R = U * M * Transpose(U);
    return R;
}