# Quick Reference Guide to Linear Algebra in Quantum Mechanics


Scott N. Walck

September 2, 2014

## Contents

1. Complex Numbers
   1.1 Introduction
   1.2 Real Numbers
   1.3 Addition of Complex Numbers
   1.4 Subtraction of Complex Numbers
   1.5 Multiplication of Complex Numbers
   1.6 The Complex Conjugate
   1.7 Division of Complex Numbers
   1.8 The Complex Plane
   1.9 Magnitude of a Complex Number
   1.10 Exponential Function
   1.11 Polar Form of a Complex Number
   1.12 Multiplication in Polar Form
   1.13 Problems
2. Kets
3. Bras
4. Inner Product

5. Linear Operators
   5.1 The Commutator
   5.2 Outer Product
   5.3 Completeness Relation
   5.4 Every Operator can be written as a sum of Outer Products
   5.5 Eigenkets and Eigenvalues
6. Matrix Representation
   6.1 Kets
   6.2 Bras
   6.3 Linear Operators
7. The Dagger
   7.1 Hermitian Operators
   7.2 Unitary Operators
8. Spectral Decomposition
   8.1 Projection Operators
9. Tensor Product
10. Help Eliminate Errors

## 1 Complex Numbers

### 1.1 Introduction

Complex numbers are pairs of real numbers. If $x$ and $y$ are real numbers, then the pair $(x, y)$ can be regarded as a complex number. Usually, we write this complex number as $x + yi$. We call the real number $x$ the real part of the complex number and the real number $y$ the imaginary part of the complex number.

Example: The numbers $\frac{1}{2}$ and $\frac{\sqrt{3}}{2}$ are real numbers, so the pair $\left(\frac{1}{2}, \frac{\sqrt{3}}{2}\right)$ can be regarded as a complex number, which we will

typically write as $\frac{1}{2} + \frac{\sqrt{3}}{2} i$. The complex number $\frac{1}{2} + \frac{\sqrt{3}}{2} i$ can be written in other ways:
$$\frac{1}{2} + \frac{\sqrt{3}}{2} i = \frac{1}{2} + \frac{\sqrt{3}\, i}{2} = \frac{1 + \sqrt{3}\, i}{2} = \frac{1}{2}\left(1 + \sqrt{3}\, i\right)$$

### 1.2 Real Numbers

We regard the real numbers as a subset of the complex numbers. Real numbers are those complex numbers that have zero for their imaginary part. Instead of writing $4 + 0i$ for such a number, we will just write $4$. In this way, every real number is a complex number. Complex numbers can be added, subtracted, multiplied, and divided.

### 1.3 Addition of Complex Numbers

If $z_1 = x_1 + y_1 i$ and $z_2 = x_2 + y_2 i$ are complex numbers, then the sum $z_1 + z_2$ is defined to be
$$z_1 + z_2 := (x_1 + x_2) + (y_1 + y_2) i.$$
The symbol $:=$ means "is defined to be." To add two complex numbers, we just add the real parts and add the imaginary parts.

Example: Let $z_1 = 3 - 4i$ and $z_2 = 5i + 7$. Then the sum is $z_1 + z_2 = 10 + i$.

### 1.4 Subtraction of Complex Numbers

If $z_1 = x_1 + y_1 i$ and $z_2 = x_2 + y_2 i$ are complex numbers, then the difference $z_1 - z_2$ is defined to be
$$z_1 - z_2 := (x_1 - x_2) + (y_1 - y_2) i.$$
To subtract two complex numbers, we just subtract the real parts and subtract the imaginary parts.

Example: Let $z_1 = 3 - 4i$ and $z_2 = 5i + 7$. Then the difference is $z_1 - z_2 = -4 - 9i$.
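These componentwise rules are exactly how Python's built-in `complex` type behaves; a small sketch with illustrative values:

```python
# Addition and subtraction act componentwise on (real, imaginary) pairs.
# The particular values are an illustrative choice.
z1 = 3 - 4j
z2 = 7 + 5j

s = z1 + z2   # add real parts, add imaginary parts
d = z1 - z2   # subtract real parts, subtract imaginary parts

# Componentwise check against the definitions.
s_check = complex(z1.real + z2.real, z1.imag + z2.imag)
d_check = complex(z1.real - z2.real, z1.imag - z2.imag)
```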

### 1.5 Multiplication of Complex Numbers

If $z_1 = x_1 + y_1 i$ and $z_2 = x_2 + y_2 i$ are complex numbers, then the product $z_1 z_2$ is defined to be
$$z_1 z_2 := (x_1 x_2 - y_1 y_2) + (x_1 y_2 + y_1 x_2) i.$$
The best way to think about this is to imagine that you are distributing the product term by term, and using the idea that $i^2 = -1$:
$$\begin{aligned}
(x_1 + y_1 i)(x_2 + y_2 i) &= x_1 x_2 + x_1 y_2 i + y_1 x_2 i + y_1 y_2 i^2 \\
&= x_1 x_2 + x_1 y_2 i + y_1 x_2 i - y_1 y_2 \\
&= (x_1 x_2 - y_1 y_2) + (x_1 y_2 + y_1 x_2) i
\end{aligned}$$

### 1.6 The Complex Conjugate

The complex conjugate of a complex number $z = x + yi$ is defined to be the complex number
$$z^* := x - yi.$$
Mathematicians usually write $\bar{z}$ for the complex conjugate of $z$, while physicists usually write $z^*$. We will use the physicists' convention in these notes. You can think of the complex conjugate as being formed by replacing every $i$ in a complex number by $-i$. If you multiply any complex number by its complex conjugate, you get a real number:
$$(x + yi)(x - yi) = x^2 + y^2$$

### 1.7 Division of Complex Numbers

If $z_1 = x_1 + y_1 i$ and $z_2 = x_2 + y_2 i$ are complex numbers, and $z_2 \neq 0$, then we can form a quotient. We can simplify the quotient by multiplying the numerator and denominator by the complex conjugate of the denominator:
$$\frac{z_1}{z_2} = \frac{x_1 + y_1 i}{x_2 + y_2 i} = \frac{x_1 + y_1 i}{x_2 + y_2 i} \left( \frac{x_2 - y_2 i}{x_2 - y_2 i} \right) = \frac{x_1 x_2 + y_1 y_2}{x_2^2 + y_2^2} + \frac{y_1 x_2 - x_1 y_2}{x_2^2 + y_2^2}\, i$$
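The multiplication, conjugation, and division formulas can be checked numerically with Python's built-in `complex` type (the particular values below are an illustrative choice):

```python
# Check the product, conjugate, and quotient formulas componentwise.
x1, y1 = 3.0, -4.0
x2, y2 = 7.0, 5.0
z1 = complex(x1, y1)
z2 = complex(x2, y2)

p = z1 * z2
p_formula = complex(x1 * x2 - y1 * y2, x1 * y2 + y1 * x2)

c = z1.conjugate()        # x - yi
zz = z1 * c               # should be the real number x^2 + y^2

q = z1 / z2
den = x2**2 + y2**2
q_formula = complex((x1 * x2 + y1 * y2) / den, (y1 * x2 - x1 * y2) / den)
```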

### 1.8 The Complex Plane

Since complex numbers are pairs of real numbers, it makes sense to think of them as points on a plane. By convention, we use the horizontal axis to represent the real part of a complex number, and the vertical axis to represent the imaginary part.

### 1.9 Magnitude of a Complex Number

The magnitude of a complex number is the distance from the origin on the complex plane. Therefore, the magnitude of a complex number must always be a nonnegative real number. The magnitude will be zero if and only if the complex number itself is zero. If $z = x + yi$ is a complex number, we write $|z|$ to denote the magnitude of $z$, and from the Pythagorean theorem,
$$|z| = \sqrt{x^2 + y^2}.$$

### 1.10 Exponential Function

If $w = u + vi$ is a complex number (with $u$ and $v$ real numbers), we define
$$e^w = e^{u + vi} = e^u (\cos v + i \sin v),$$
where $v$ is understood as an angle in radians. This is really just a fancy version of the Euler formula, which holds if $\theta$ is real:
$$e^{i\theta} = \cos\theta + i \sin\theta$$

### 1.11 Polar Form of a Complex Number

Consider the complex number $z = R e^{i\theta}$, where $R$ and $\theta$ are real numbers, and $R$ is nonnegative. Using the Euler formula, we can write this as
$$z = R e^{i\theta} = R(\cos\theta + i\sin\theta) = R\cos\theta + iR\sin\theta.$$
This complex number has real part $R\cos\theta$ and imaginary part $R\sin\theta$. This is just the point in the complex plane with polar coordinates $R$ and $\theta$! Notice

that $R$ is the magnitude of the complex number. The angle $\theta$ is called the argument of the complex number $z$. Every complex number $z$ can be written in Cartesian form $z = x + yi$ with $x$ and $y$ real, as well as polar form $z = R e^{i\theta}$ with $R$ and $\theta$ real. The relationship between these two forms for writing complex numbers is the same as the relationship between Cartesian and polar coordinates:
$$x = R\cos\theta \qquad y = R\sin\theta \qquad R = \sqrt{x^2 + y^2} \qquad \theta = \arctan\frac{y}{x}$$

### 1.12 Multiplication in Polar Form

Multiplication of complex numbers is especially easy to do if they are written in polar form:
$$\left(R_1 e^{i\theta_1}\right)\left(R_2 e^{i\theta_2}\right) = R_1 R_2\, e^{i(\theta_1 + \theta_2)}$$
To form the product of complex numbers in polar form, you multiply the magnitudes and add the arguments.

### 1.13 Problems

Try to avoid decimal computation in the following exercises. (For example, write $\sqrt{2}$ instead of $1.414\ldots$.)

Problem 1. Simplify the following and express in Cartesian form.

1. $(4e^{i\pi/2})(4e^{i\pi})$
2. $4e^{i\pi/2} + 4e^{i\pi}$
3. $3e^{i\pi/4} + 4e^{i3\pi/4}$
4. $(1 + i)(\sqrt{3} - i)$
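Problem 1 can be checked numerically with Python's `cmath` module (the answers come out in decimal form, so this verifies rather than replaces the exact Cartesian answers; the fourth product is read here as $(1+i)(\sqrt{3}-i)$):

```python
import cmath
import math

# cmath.rect(R, theta) builds R e^{i theta} in Cartesian form.
p1 = cmath.rect(4, math.pi / 2) * cmath.rect(4, math.pi)        # item 1
p2 = cmath.rect(4, math.pi / 2) + cmath.rect(4, math.pi)        # item 2
p3 = cmath.rect(3, math.pi / 4) + cmath.rect(4, 3 * math.pi / 4)  # item 3
p4 = (1 + 1j) * (math.sqrt(3) - 1j)                             # item 4

# Multiplying in polar form directly (multiply magnitudes, add arguments)
# gives the same result as item 1.
p1_polar = cmath.rect(4 * 4, math.pi / 2 + math.pi)
```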

Problem 2. Simplify the following and express in polar form.

1. $(4e^{i\pi/2})(4e^{i\pi})$
2. $4e^{i\pi/2} + 4e^{i\pi}$
3. $(1 + i)(\sqrt{3} - i)$
4. $(e^{i\pi/6})(e^{i\pi/9}) + (2e^{i3\pi/2})(e^{i2\pi/9})$

Problem 3. Find $i \cdot i$, $(-i)(-i)$, $i^2$, $(-i)^2$, $|i|$, and $|-i|$.

Problem 4. Let $z = 3 + 4i$. Find $z^*$, $z z^*$, $|z|$, $z^2$, and $1/z$.

Problem 5. Is it always true that $|z|^2 = z z^*$? Show that it's true or find a counterexample. (Hint: If you want to try to show that it's true, write $z = x + iy$ with $x$ and $y$ real.)

Problem 6. Find two different values for $z$ so that $z^2 = i$. Express these numbers in both Cartesian and polar form. Plot them on the complex plane.

Problem 7. Show, on the complex plane, all of the complex numbers with magnitude 1.

Problem 8. Simplify $\frac{e^{i\omega t} + e^{-i\omega t}}{2}$, where $\omega$ and $t$ are real numbers. Is the result a real number? If so, write it in a way that does not have an $i$ in it.

## 2 Kets

A ket (or ket vector) is a symbol consisting of a vertical bar, a descriptive label, and a right angle bracket, as in $|\phi_1\rangle$. We can add two (or more) kets to form another ket. For example, we write
$$|\psi\rangle = |\phi_1\rangle + |\phi_2\rangle.$$
Ket addition is commutative, meaning that the order in which addition is done does not matter:
$$|\phi_1\rangle + |\phi_2\rangle = |\phi_2\rangle + |\phi_1\rangle$$

(Later, we will see that some kinds of multiplication that we do are not commutative. All kinds of addition that we do are commutative.)

We can multiply a ket by a complex number to form another ket. For example, we might write $|\psi\rangle = (3 + 4i)|\phi_1\rangle$. This multiplication of a ket by a complex number is also commutative. For example, $(3 + 4i)|\phi_1\rangle = |\phi_1\rangle(3 + 4i)$. It is conventional, but not required, to write complex numbers in front of the kets they multiply, rather than behind.

Multiplication of two kets is not defined.

Let $c_1$ and $c_2$ be complex numbers and $|\phi_1\rangle$ and $|\phi_2\rangle$ be kets. We can distribute (complex) numbers over a ket,
$$(c_1 + c_2)|\phi_1\rangle = c_1 |\phi_1\rangle + c_2 |\phi_1\rangle,$$
and we can distribute a (complex) number over kets,
$$c_1 (|\phi_1\rangle + |\phi_2\rangle) = c_1 |\phi_1\rangle + c_1 |\phi_2\rangle.$$

A set $V$ of kets is called a ket space if it satisfies the following conditions.

1. There is a ket $|zero\rangle \in V$ such that $|\phi\rangle + |zero\rangle = |\phi\rangle$ for all $|\phi\rangle \in V$.
2. If $|\phi_1\rangle, |\phi_2\rangle \in V$, then $|\phi_1\rangle + |\phi_2\rangle \in V$.
3. If $c$ is any complex number and $|\phi_1\rangle \in V$, then $c|\phi_1\rangle \in V$.

The ket $|zero\rangle$ is called the zero ket. It is the additive identity element of the ket space. You might think that a better notation for the zero ket would be $|0\rangle$, but it is conventional to reserve the symbol $|0\rangle$ for something else. We will not have a need to write down the symbol for the zero ket very often, so $|zero\rangle$ will be a fine notation for us. At some point, we may become slightly sloppy with our notation and use the symbol $0$ (without ket notation) to represent the zero ket.

Notice that not every set of kets is a ket space. The set must contain all sums and scalar multiples in order to be a ket space.

A linear combination of kets $|\phi_1\rangle, \ldots, |\phi_n\rangle$ is a (weighted) sum of these kets,
$$c_1 |\phi_1\rangle + \cdots + c_n |\phi_n\rangle,$$

with complex coefficients $c_1, \ldots, c_n$. Any of the complex coefficients $c_1, \ldots, c_n$ could be zero.

The span of a set of kets $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ is the collection of all linear combinations of these kets. A set of kets is said to span a ket space $V$ if the span of the set of kets contains $V$. Notice that we have two uses for the word span, one as a noun and one as a verb.

A set of kets is called linearly dependent if one of the kets in the set can be written as a linear combination of the other kets in the set. A set of kets that is not linearly dependent is called linearly independent.

A basis for the ket space $V$ is a linearly independent set of kets that spans $V$. If $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ is a basis for the ket space $V$, then every ket in $V$ can be written as a linear combination of $|\phi_1\rangle, \ldots, |\phi_n\rangle$ in a unique way.

The dimension of a ket space $V$ is the smallest number of kets that span $V$. The dimension of $V$ is equal to the number of kets in any basis for $V$. The dimension of a ket space may be finite or infinite. We work with finite-dimensional ket spaces in this Quick Reference Guide, but quantum mechanics needs infinite-dimensional ket spaces as well.

## 3 Bras

A bra (or bra vector) is a symbol consisting of a left angle bracket, a descriptive label, and a vertical bar, as in $\langle\phi_1|$. For each ket, there is a corresponding bra with the same descriptive label inside, and for each bra, there is a corresponding ket.

We can add two (or more) bras to form another bra. For example, we might write $\langle\psi| = \langle\phi_1| + \langle\phi_2|$.

We can multiply a bra by a complex number to form another bra. For example, we might write $\langle\chi| = (3 + 4i)\langle\phi_1|$.

Multiplication of two bras is not defined.

Let $c_1$ and $c_2$ be complex numbers and $\langle\phi_1|$ and $\langle\phi_2|$ be bras. We can distribute (complex) numbers over a bra,
$$(c_1 + c_2)\langle\phi_1| = c_1 \langle\phi_1| + c_2 \langle\phi_1|,$$

and we can distribute a (complex) number over bras,
$$c_1 (\langle\phi_1| + \langle\phi_2|) = c_1 \langle\phi_1| + c_1 \langle\phi_2|.$$

We said before that for each ket, there is a corresponding bra with the same descriptive label inside. If, in addition, there are coefficients multiplying the ket, we must take their complex conjugates in forming the corresponding bra. Here are some examples.

1. The ket $|\phi_1\rangle$ has corresponding bra $\langle\phi_1|$.
2. The ket $(3 + 4i)|\phi_1\rangle$ has corresponding bra $(3 - 4i)\langle\phi_1|$.
3. The ket $(3 + 4i)|\phi_1\rangle - 2i|\phi_2\rangle$ has corresponding bra $(3 - 4i)\langle\phi_1| + 2i\langle\phi_2|$.
4. The ket $c_1 |\phi_1\rangle$ has corresponding bra $c_1^* \langle\phi_1|$.
5. The bra $c_2 \langle\phi_2|$ has corresponding ket $c_2^* |\phi_2\rangle$.

In the expressions above, $c_1^*$ is the complex conjugate of the complex number $c_1$.

If $V$ is a ket space, we will call the collection of corresponding bras a bra space.

## 4 Inner Product

A bra multiplied by a ket (with the bra on the left and the ket on the right) is called an inner product and gives a complex number. For example, the inner product of $\langle\phi_1|$ and $|\psi_2\rangle$ is written $\langle\phi_1|\psi_2\rangle$, and is a complex number.

The inner product has the following properties. Let $|\phi_1\rangle, |\phi_2\rangle$ be kets, $\langle\chi_1|, \langle\chi_2|$ be bras, and $c_1, c_2$ be complex numbers.

1. $\langle\chi_1| \left(c_1 |\phi_1\rangle + c_2 |\phi_2\rangle\right) = c_1 \langle\chi_1|\phi_1\rangle + c_2 \langle\chi_1|\phi_2\rangle$
2. $\left(c_1 \langle\chi_1| + c_2 \langle\chi_2|\right) |\phi_1\rangle = c_1 \langle\chi_1|\phi_1\rangle + c_2 \langle\chi_2|\phi_1\rangle$
3. $\langle\chi_1|\phi_1\rangle = \langle\phi_1|\chi_1\rangle^*$

Note that the last property implies that $\langle\phi_1|\phi_1\rangle$ is real. In addition,

1. $\langle\phi_1|\phi_1\rangle \geq 0$

2. $\langle\phi_1|\phi_1\rangle = 0$ if and only if $|\phi_1\rangle = |zero\rangle$

Two kets are called orthogonal if their inner product is zero. What we really mean here is that the inner product between the first ket's corresponding bra and the second ket is zero. A set of kets is called orthogonal if any pair of distinct kets in the set is orthogonal.

The norm, or length, of a ket $|\psi\rangle$ is defined to be $\sqrt{\langle\psi|\psi\rangle}$. A ket is called normalized if it has norm 1. An orthonormal set of kets is an orthogonal set of normalized kets. An orthonormal basis for a ket space $V$ is an orthogonal set of normalized kets that forms a basis for $V$.

If $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ is an orthonormal basis for the ket space $V$, then every ket in $V$ can be written as a linear combination of $|\phi_1\rangle, \ldots, |\phi_n\rangle$ in a unique way, and
$$\langle\phi_l|\phi_m\rangle = \begin{cases} 1, & \text{if } l = m \\ 0, & \text{if } l \neq m. \end{cases}$$
This last condition is the orthonormality condition. It says that different kets in the basis are orthogonal to each other, and that individual kets in the basis are normalized. A shorthand way to write this orthonormality condition is
$$\langle\phi_l|\phi_m\rangle = \delta_{lm},$$
where $\delta_{lm}$ is called the Kronecker delta, defined as
$$\delta_{lm} = \begin{cases} 1, & \text{if } l = m \\ 0, & \text{if } l \neq m. \end{cases}$$

The inner product gives us a new way to think about bra vectors. We can think of them as maps from a ket space to the complex numbers. In other words, a bra vector is a thing which is waiting to take a ket as input and give a complex number as output.

## 5 Linear Operators

In linear algebra, an operator is a thing that acts on a vector. For example, if we were working with two-dimensional real vectors in a plane, the command to rotate by 120° counterclockwise is an operator.

Let $V$ be a ket space. A linear operator on $V$ is a linear map $V \to V$. In other words, a linear operator is a function that takes a ket as input, gives

a ket back as output, and does this in a linear way. We use a hat over a capital letter to denote a linear operator, such as $\hat{A}$. Because $\hat{A}$ is linear, we can distribute it over kets as follows:
$$\hat{A} \left(c_1 |\phi_1\rangle + c_2 |\phi_2\rangle\right) = c_1 \hat{A} |\phi_1\rangle + c_2 \hat{A} |\phi_2\rangle$$

So, if $\hat{A}$ is a linear operator and $|\phi\rangle$ is a ket, then $\hat{A}|\phi\rangle$ is a ket. An expression like $|\phi\rangle\hat{A}$ has no meaning. We must write linear operators to the left of the kets they act on. Since $\hat{A}|\phi\rangle$ is a ket, we can take the inner product of a bra $\langle\psi|$ with the ket $\hat{A}|\phi\rangle$ to obtain the complex number $\langle\psi|\hat{A}|\phi\rangle$. In a similar way, we should regard an object like $\langle\psi|\hat{A}$ as a bra vector, since it is a thing which is waiting to turn a ket into a complex number. An expression like $\hat{A}\langle\psi|$ has no meaning.

Two linear operators on $V$ are equal if they give the same result when acting on any ket in $V$. But, because any ket in $V$ can be written as a linear combination of basis vectors of $V$, two linear operators on $V$ are equal if they give the same result when acting on any basis ket.

We can define the sum of two operators on $V$. Let $\hat{A}$ and $\hat{B}$ be operators on $V$. The action of the sum, $\hat{A} + \hat{B}$, on any ket $|\psi\rangle \in V$ is defined to be the sum of the kets $\hat{A}|\psi\rangle$ and $\hat{B}|\psi\rangle$:
$$(\hat{A} + \hat{B})|\psi\rangle = \hat{A}|\psi\rangle + \hat{B}|\psi\rangle$$

We can define the product of two operators on $V$. Let $\hat{A}$ and $\hat{B}$ be operators on $V$. The action of the product, $\hat{A}\hat{B}$, on any ket $|\psi\rangle \in V$ is defined to be the action of $\hat{A}$ on the ket $\hat{B}|\psi\rangle$:
$$(\hat{A}\hat{B})|\psi\rangle = \hat{A}\left(\hat{B}|\psi\rangle\right)$$

While operator addition is commutative, so that $\hat{A} + \hat{B} = \hat{B} + \hat{A}$ for all linear operators $\hat{A}$ and $\hat{B}$, operator multiplication is not commutative, and in general, $\hat{A}\hat{B} \neq \hat{B}\hat{A}$.

### 5.1 The Commutator

We define the commutator of two linear operators, $\hat{A}$ and $\hat{B}$, to be
$$[\hat{A}, \hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A}.$$
The commutator of two linear operators is another linear operator.
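As a concrete sketch of the commutator, here is a NumPy computation using the Pauli matrices, a standard example not drawn from this text:

```python
import numpy as np

def commutator(A, B):
    """[A, B] = AB - BA; the result is another linear operator (matrix)."""
    return A @ B - B @ A

# Pauli matrices (illustrative choice of non-commuting operators).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

c = commutator(sx, sy)      # known to equal 2i * sz
zero = commutator(sx, sx)   # every operator commutes with itself
```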

### 5.2 Outer Product

An outer product is a product of a ket on the left and a bra on the right. An outer product is a linear operator. For example, $|\phi_1\rangle\langle\phi_2|$ is a linear operator, because when it acts on a ket $|\psi\rangle$, it gives back a ket:
$$\left(|\phi_1\rangle\langle\phi_2|\right)|\psi\rangle = |\phi_1\rangle\langle\phi_2|\psi\rangle = \langle\phi_2|\psi\rangle\,|\phi_1\rangle$$
In the last equality, we brought the complex number $\langle\phi_2|\psi\rangle$ to the front of the expression.

### 5.3 Completeness Relation

Let $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ be an orthonormal basis for a ket space $V$. Then
$$\sum_{l=1}^{n} |\phi_l\rangle\langle\phi_l| = \hat{I}, \tag{1}$$
where $\hat{I}$ is the identity operator on $V$. The identity operator is the linear operator that gives back the same ket that it acts on. In other words, for any $|\psi\rangle \in V$, we have $\hat{I}|\psi\rangle = |\psi\rangle$.

We can prove the completeness relation given above in the following way. Let $|\psi\rangle \in V$. Since $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ is an orthonormal basis for $V$, we can write $|\psi\rangle$ as a linear combination of basis kets:
$$|\psi\rangle = \sum_{m=1}^{n} c_m |\phi_m\rangle$$
Now,
$$\begin{aligned}
\left[\sum_{l=1}^{n} |\phi_l\rangle\langle\phi_l|\right] |\psi\rangle &= \left[\sum_{l=1}^{n} |\phi_l\rangle\langle\phi_l|\right] \sum_{m=1}^{n} c_m |\phi_m\rangle \\
&= \sum_{l=1}^{n}\sum_{m=1}^{n} c_m\, |\phi_l\rangle\langle\phi_l|\phi_m\rangle \\
&= \sum_{l=1}^{n}\sum_{m=1}^{n} c_m\, |\phi_l\rangle\, \delta_{lm} \\
&= \sum_{l=1}^{n} c_l |\phi_l\rangle \\
&= |\psi\rangle
\end{aligned}$$

In going from the third line to the fourth line, we used the fact that $\delta_{lm}$ gives zero unless $l = m$ to eliminate one of the sums and to replace $c_m$ with $c_l$. This is an important little algebra trick that gets used in several places, and is good to understand. We see that the operator $\sum_{l=1}^{n} |\phi_l\rangle\langle\phi_l|$ does the same thing to kets that the identity operator does (namely, it leaves them alone), so it must be the identity operator.

### 5.4 Every Operator can be written as a sum of Outer Products

Let $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ be an orthonormal basis for a ket space $V$ and let $\hat{A}$ be a linear operator on $V$. Using the completeness relation, $\hat{A}$ can be written as a sum of outer products of basis vectors:
$$\begin{aligned}
\hat{A} &= \sum_{l=1}^{n} |\phi_l\rangle\langle\phi_l| \;\hat{A}\; \sum_{m=1}^{n} |\phi_m\rangle\langle\phi_m| \\
&= \sum_{l=1}^{n}\sum_{m=1}^{n} \langle\phi_l|\hat{A}|\phi_m\rangle\; |\phi_l\rangle\langle\phi_m| \\
&= \sum_{l=1}^{n}\sum_{m=1}^{n} A_{lm}\, |\phi_l\rangle\langle\phi_m|,
\end{aligned}$$
where
$$A_{lm} = \langle\phi_l|\hat{A}|\phi_m\rangle.$$
Notice that the set of numbers $A_{lm}$ completely describes the linear operator $\hat{A}$ (given an orthonormal basis).

### 5.5 Eigenkets and Eigenvalues

If a linear operator $\hat{A}$ acts on a non-zero ket $|\phi\rangle$ and returns a (complex) multiple of $|\phi\rangle$,
$$\hat{A}|\phi\rangle = a|\phi\rangle \qquad (a \text{ a complex number}),$$
then $|\phi\rangle$ is called an eigenket of $\hat{A}$ and $a$ is the corresponding eigenvalue.
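The statement that the numbers $A_{lm}$ completely describe $\hat{A}$ can be illustrated in NumPy by extracting matrix elements and reassembling the operator from outer products (the matrix and the use of the standard basis of $\mathbb{C}^2$ are illustrative choices):

```python
import numpy as np

# An arbitrary operator on a 2-dimensional ket space (illustrative).
A = np.array([[1, 2j], [3, 4 - 1j]])

# Standard orthonormal basis kets as column vectors |phi_0>, |phi_1>.
basis = [np.eye(2, dtype=complex)[:, [l]] for l in range(2)]

def elem(l, m):
    """Matrix element A_lm = <phi_l| A |phi_m>."""
    return (basis[l].conj().T @ A @ basis[m]).item()

# A = sum_{l,m} A_lm |phi_l><phi_m|  (outer products weighted by elements)
A_rebuilt = sum(
    elem(l, m) * basis[l] @ basis[m].conj().T
    for l in range(2) for m in range(2)
)
```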

## 6 Matrix Representation

### 6.1 Kets

Given a basis, we can represent a ket as a column vector. If $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ is a basis for a ket space $V$, we associate a column vector of complex numbers with a ket as follows:
$$\begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} \longleftrightarrow c_1 |\phi_1\rangle + \cdots + c_n |\phi_n\rangle$$
The complex numbers are just the coefficients of the kets, when expressed in terms of the given basis. Note that a basis is really an ordered set; the order of the kets in the basis matters.

If $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ is an orthonormal basis for the ket space $V$, then every ket $|\psi\rangle \in V$ can be represented as a column vector of complex numbers as follows:
$$|\psi\rangle = \left[\sum_{l=1}^{n} |\phi_l\rangle\langle\phi_l|\right] |\psi\rangle = \sum_{l=1}^{n} \langle\phi_l|\psi\rangle\, |\phi_l\rangle \longleftrightarrow \begin{bmatrix} \langle\phi_1|\psi\rangle \\ \vdots \\ \langle\phi_n|\psi\rangle \end{bmatrix}$$

### 6.2 Bras

Given a basis, we can represent a bra as a row vector. If $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ is a basis for a ket space $V$, we associate a row vector of complex numbers with a bra as follows:
$$\begin{bmatrix} c_1 & \cdots & c_n \end{bmatrix} \longleftrightarrow c_1 \langle\phi_1| + \cdots + c_n \langle\phi_n|$$
If $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ is an orthonormal basis for the ket space $V$, then every bra $\langle\psi|$ can be represented as a row vector of complex numbers as follows:
$$\langle\psi| = \langle\psi| \left[\sum_{l=1}^{n} |\phi_l\rangle\langle\phi_l|\right] = \sum_{l=1}^{n} \langle\psi|\phi_l\rangle\, \langle\phi_l| \longleftrightarrow \begin{bmatrix} \langle\psi|\phi_1\rangle & \cdots & \langle\psi|\phi_n\rangle \end{bmatrix}$$
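The column and row representations above can be sketched in NumPy; the orthonormal basis below is an illustrative (non-standard) choice:

```python
import numpy as np

# An orthonormal basis of a 2-dimensional ket space (illustrative choice).
phi1 = np.array([1, 1], dtype=complex) / np.sqrt(2)
phi2 = np.array([1, -1], dtype=complex) / np.sqrt(2)

psi = np.array([2, 1j])

# Column-vector components of |psi>: c_l = <phi_l|psi>.
c = np.array([phi1.conj() @ psi, phi2.conj() @ psi])

# Reassembling |psi> = sum_l <phi_l|psi> |phi_l> recovers the ket.
psi_back = c[0] * phi1 + c[1] * phi2

# The row vector for <psi| holds <psi|phi_l>, the conjugates of c_l.
row = np.array([psi.conj() @ phi1, psi.conj() @ phi2])
```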

### 6.3 Linear Operators

Given a basis, we can represent a linear operator as a square matrix. If $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ is a basis for a ket space $V$, we associate a square matrix of complex numbers with a linear operator as follows:
$$\begin{bmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & \ddots & \vdots \\ A_{n1} & \cdots & A_{nn} \end{bmatrix} \longleftrightarrow \sum_{l=1}^{n}\sum_{m=1}^{n} A_{lm}\, |\phi_l\rangle\langle\phi_m|$$
If $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ is an orthonormal basis for the ket space $V$, then every linear operator $\hat{A}$ on $V$ can be represented as a square matrix of complex numbers as follows:
$$\hat{A} = \left[\sum_{l=1}^{n} |\phi_l\rangle\langle\phi_l|\right] \hat{A} \left[\sum_{m=1}^{n} |\phi_m\rangle\langle\phi_m|\right] = \sum_{l=1}^{n}\sum_{m=1}^{n} \langle\phi_l|\hat{A}|\phi_m\rangle\, |\phi_l\rangle\langle\phi_m| \longleftrightarrow \begin{bmatrix} \langle\phi_1|\hat{A}|\phi_1\rangle & \cdots & \langle\phi_1|\hat{A}|\phi_n\rangle \\ \vdots & \ddots & \vdots \\ \langle\phi_n|\hat{A}|\phi_1\rangle & \cdots & \langle\phi_n|\hat{A}|\phi_n\rangle \end{bmatrix}$$

## 7 The Dagger

We have learned to regard an expression like $\hat{A}|\phi\rangle$ as a ket and an expression like $\langle\psi|\hat{B}$ as a bra. What is the bra that corresponds to the ket $\hat{A}|\phi\rangle$? The answer turns out to be $\langle\phi|\hat{A}^\dagger$, where $\hat{A}^\dagger$ is a linear operator that may not be the same as $\hat{A}$. For every linear operator $\hat{A}$, there is a linear operator $\hat{A}^\dagger$, called the adjoint operator.

The dagger operation turns kets into bras, bras into kets, complex numbers into their complex conjugates, and operators into their adjoints. In addition, it reverses the order of multiplication. Let's see some examples of the dagger operation.

1. $\left(|\phi_1\rangle\right)^\dagger = \langle\phi_1|$
2. $\left(\langle\phi_1|\right)^\dagger = |\phi_1\rangle$
3. $\left[(3 + 4i)|\phi_1\rangle\right]^\dagger = \langle\phi_1|(3 - 4i) = (3 - 4i)\langle\phi_1|$

4. $\left(\langle\psi|\phi\rangle\right)^\dagger = \langle\phi|\psi\rangle$
5. $\left(\langle\psi|\phi\rangle\right)^\dagger = \langle\psi|\phi\rangle^*$

In the fourth example, we regarded the expression on which the dagger acted as a product of a bra and a ket, and used the rule that the dagger turns kets into bras, bras into kets, and reverses order. In the fifth example, we regarded the expression as a complex number and used the rule that the dagger turns complex numbers into their complex conjugates. Fortunately, one of the properties of the inner product is that the expressions obtained in these two examples are equal.

Here are some more examples.

1. $\left(\langle\psi|\hat{A}|\phi\rangle\right)^\dagger = \langle\phi|\hat{A}^\dagger|\psi\rangle$
2. $\left(\langle\psi|\hat{A}|\phi\rangle\right)^\dagger = \langle\psi|\hat{A}|\phi\rangle^*$
3. $\left(|\chi\rangle\langle\phi|\right)^\dagger = |\phi\rangle\langle\chi|$

### 7.1 Hermitian Operators

A linear operator $\hat{A}$ is called Hermitian if $\hat{A}^\dagger = \hat{A}$. The eigenvalues of a Hermitian operator are real.

### 7.2 Unitary Operators

A linear operator $\hat{A}$ is called unitary if $\hat{A}^\dagger \hat{A} = \hat{I}$. The eigenvalues of a unitary operator are complex numbers with magnitude one.

## 8 Spectral Decomposition

Theorem 1 (Spectral Theorem). If $\hat{A}$ is a Hermitian linear operator acting on a ket space $V$, then there is an orthonormal basis of eigenvectors of $\hat{A}$. In other words, there is a collection of eigenvectors $|\phi_j\rangle$ (there will be $n$ of them if we are working with an $n$-dimensional ket space) such that
$$\hat{A}|\phi_j\rangle = a_j |\phi_j\rangle$$
and
$$\langle\phi_j|\phi_k\rangle = \delta_{jk}.$$
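In a matrix representation the dagger is the conjugate transpose, which makes these identities easy to check numerically (the matrices and kets below are illustrative choices):

```python
import numpy as np

def dagger(M):
    """Conjugate transpose: the matrix version of the dagger operation."""
    return M.conj().T

A = np.array([[1, 2 - 1j], [3j, 4]])
phi = np.array([[1], [2j]])
psi = np.array([[1j], [1]])

# The bra corresponding to A|phi> is <phi| A^dagger: dagger each factor
# and reverse the order.
lhs = dagger(A @ phi)
rhs = dagger(phi) @ dagger(A)

# (<psi|A|phi>)^dagger = <phi|A^dagger|psi>, the complex conjugate.
num = (dagger(psi) @ A @ phi).item()
num_dag = (dagger(phi) @ dagger(A) @ psi).item()

# A Hermitian matrix (H = H^dagger) has real eigenvalues.
H = np.array([[2, 1 - 1j], [1 + 1j, 3]])
h_eigs = np.linalg.eigvalsh(H)   # eigvalsh returns real eigenvalues
```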

Suppose that $\{|\phi_1\rangle, \ldots, |\phi_n\rangle\}$ is an orthonormal basis of eigenstates of $\hat{A}$. Then we have $\hat{A}|\phi_m\rangle = a_m |\phi_m\rangle$. Using the completeness relation, we get
$$\hat{A} = \sum_{m=1}^{n} \hat{A}\, |\phi_m\rangle\langle\phi_m| = \sum_{m=1}^{n} a_m\, |\phi_m\rangle\langle\phi_m|.$$

### 8.1 Projection Operators

A projection operator is any linear operator $\hat{P}$ that satisfies $\hat{P}\hat{P} = \hat{P}$.

Example: Let $|\chi\rangle$ be a normalized ket. Then $\hat{P} = |\chi\rangle\langle\chi|$ is a projection operator because
$$\hat{P}\hat{P} = |\chi\rangle\langle\chi|\chi\rangle\langle\chi| = |\chi\rangle\langle\chi| = \hat{P}.$$

Example: Let $|\chi_1\rangle$ and $|\chi_2\rangle$ be normalized kets that are orthogonal to each other. In other words, $\langle\chi_1|\chi_1\rangle = \langle\chi_2|\chi_2\rangle = 1$ and $\langle\chi_1|\chi_2\rangle = \langle\chi_2|\chi_1\rangle = 0$. Then $\hat{P} = |\chi_1\rangle\langle\chi_1| + |\chi_2\rangle\langle\chi_2|$ is a projection operator because
$$\hat{P}\hat{P} = \left(|\chi_1\rangle\langle\chi_1| + |\chi_2\rangle\langle\chi_2|\right)\left(|\chi_1\rangle\langle\chi_1| + |\chi_2\rangle\langle\chi_2|\right) = |\chi_1\rangle\langle\chi_1| + |\chi_2\rangle\langle\chi_2| = \hat{P}.$$

Now, let $\hat{A}$ be a Hermitian linear operator. By Theorem 1, there is an orthonormal basis $\{|\phi_j\rangle\}$ of eigenvectors of $\hat{A}$ with corresponding eigenvalues $a_j$:
$$\hat{A}|\phi_j\rangle = a_j |\phi_j\rangle$$
Corresponding to each eigenvalue $a$ of $\hat{A}$ there is a projection operator $\hat{P}_a$ defined by
$$\hat{P}_a = \sum_{j \,:\, a_j = a} |\phi_j\rangle\langle\phi_j|,$$
where the sum is over all eigenkets with eigenvalue $a$.
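A NumPy sketch of the spectral decomposition and the projector property (the Hermitian matrix is an illustrative choice):

```python
import numpy as np

# A Hermitian operator on a 2-dimensional ket space (illustrative).
A = np.array([[2, 1 - 1j], [1 + 1j, 3]])

# eigh returns real eigenvalues a_j and an orthonormal eigenbasis.
a, vecs = np.linalg.eigh(A)

# A = sum_j a_j |phi_j><phi_j|  (spectral decomposition)
A_rebuilt = sum(
    a[j] * vecs[:, [j]] @ vecs[:, [j]].conj().T
    for j in range(len(a))
)

# Each eigenprojector satisfies P P = P.
P0 = vecs[:, [0]] @ vecs[:, [0]].conj().T

# Summing the projectors over all eigenvalues gives the identity.
total = sum(vecs[:, [j]] @ vecs[:, [j]].conj().T for j in range(len(a)))
```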

Recall that we have a completeness (or closure) relation
$$\sum_j |\phi_j\rangle\langle\phi_j| = \hat{I}.$$
With projection operators corresponding to each eigenvalue, we have a new way of writing that completeness relation:
$$\sum_a \hat{P}_a = \hat{I}$$
Here, the sum is over all distinct eigenvalues of $\hat{A}$.

## 9 Tensor Product

Let $V_A$ and $V_B$ be ket spaces. We construct a new ket space $V_A \otimes V_B$ called the tensor product of the ket spaces $V_A$ and $V_B$. If $\{|\phi_1\rangle, \ldots, |\phi_m\rangle\}$ is an orthonormal basis for $V_A$ and $\{|\psi_1\rangle, \ldots, |\psi_n\rangle\}$ is an orthonormal basis for $V_B$, then the collection of the $mn$ kets
$$|\phi_1\rangle \otimes |\psi_1\rangle,\; \ldots,\; |\phi_1\rangle \otimes |\psi_n\rangle,\; |\phi_2\rangle \otimes |\psi_1\rangle,\; \ldots,\; |\phi_2\rangle \otimes |\psi_n\rangle,\; \ldots,\; |\phi_m\rangle \otimes |\psi_1\rangle,\; \ldots,\; |\phi_m\rangle \otimes |\psi_n\rangle$$
is an orthonormal basis for $V_A \otimes V_B$.

Example: Suppose that $V_A$ is a two-dimensional ket space with orthonormal basis $\{|\phi_1\rangle, |\phi_2\rangle\}$, and that $V_B$ is a three-dimensional ket space with orthonormal basis $\{|\psi_1\rangle, |\psi_2\rangle, |\psi_3\rangle\}$. Then $V_A \otimes V_B$ is a six-dimensional ket space with orthonormal basis $\{|\phi_1\rangle \otimes |\psi_1\rangle, |\phi_1\rangle \otimes |\psi_2\rangle, |\phi_1\rangle \otimes |\psi_3\rangle, |\phi_2\rangle \otimes |\psi_1\rangle, |\phi_2\rangle \otimes |\psi_2\rangle, |\phi_2\rangle \otimes |\psi_3\rangle\}$.
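In a concrete matrix representation, the tensor product of kets is the Kronecker product; a NumPy sketch of the two-dimensional-times-three-dimensional example above:

```python
import numpy as np

# Orthonormal bases for a 2-dimensional V_A and a 3-dimensional V_B.
phis = [np.eye(2, dtype=complex)[:, [i]] for i in range(2)]
psis = [np.eye(3, dtype=complex)[:, [j]] for j in range(3)]

# |phi_i> tensor |psi_j> becomes the Kronecker product of column vectors,
# giving a vector in the 6-dimensional product space V_A tensor V_B.
product_basis = [np.kron(p, q) for p in phis for q in psis]

# The six product kets are orthonormal: their Gram matrix is the identity.
G = np.hstack(product_basis)
gram = G.conj().T @ G

# The tensor product is not commutative: kron(a, b) != kron(b, a) in general.
a = np.array([[1], [0]], dtype=complex)
b = np.array([[0], [1], [0]], dtype=complex)
ab = np.kron(a, b)
ba = np.kron(b, a)
```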

The tensor product is distributive. For example,
$$\frac{1}{\sqrt{2}}\, |\phi_1\rangle \otimes |\psi_1\rangle + \frac{1}{\sqrt{2}}\, |\phi_1\rangle \otimes |\psi_2\rangle = |\phi_1\rangle \otimes \left(\frac{1}{\sqrt{2}}\, |\psi_1\rangle + \frac{1}{\sqrt{2}}\, |\psi_2\rangle\right).$$
The tensor product is not commutative:
$$|\phi_1\rangle \otimes |\psi_1\rangle \neq |\psi_1\rangle \otimes |\phi_1\rangle$$

## 10 Help Eliminate Errors

Accuracy and correctness are very important to me, as well as to readers of this document. If this document contains errors, I would like to fix them, and you can help me. Please let me know if you find spelling mistakes, mathematical mistakes, or any other mistakes. In honor of Donald Knuth, who set the standard for care in technical writing, I will happily pay \$2.56 (one hexadecimal dollar) for each substantive or typographical error in this document to the first person who reports the error.

The following people have helped improve this document: David Walsh.



### 1 Scalars, Vectors and Tensors

DEPARTMENT OF PHYSICS INDIAN INSTITUTE OF TECHNOLOGY, MADRAS PH350 Classical Physics Handout 1 8.8.2009 1 Scalars, Vectors and Tensors In physics, we are interested in obtaining laws (in the form of mathematical

### Quantum Physics II (8.05) Fall 2013 Assignment 4

Quantum Physics II (8.05) Fall 2013 Assignment 4 Massachusetts Institute of Technology Physics Department Due October 4, 2013 September 27, 2013 3:00 pm Problem Set 4 1. Identitites for commutators (Based

### 11.7 Polar Form of Complex Numbers

11.7 Polar Form of Complex Numbers 989 11.7 Polar Form of Complex Numbers In this section, we return to our study of complex numbers which were first introduced in Section.. Recall that a complex number

### Vector algebra Christian Miller CS Fall 2011

Vector algebra Christian Miller CS 354 - Fall 2011 Vector algebra A system commonly used to describe space Vectors, linear operators, tensors, etc. Used to build classical physics and the vast majority

### Chapter 20. Vector Spaces and Bases

Chapter 20. Vector Spaces and Bases In this course, we have proceeded step-by-step through low-dimensional Linear Algebra. We have looked at lines, planes, hyperplanes, and have seen that there is no limit

### Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013

Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,

### Vector Math Computer Graphics Scott D. Anderson

Vector Math Computer Graphics Scott D. Anderson 1 Dot Product The notation v w means the dot product or scalar product or inner product of two vectors, v and w. In abstract mathematics, we can talk about

### Solutions to Linear Algebra Practice Problems

Solutions to Linear Algebra Practice Problems. Find all solutions to the following systems of linear equations. (a) x x + x 5 x x x + x + x 5 (b) x + x + x x + x + x x + x + 8x Answer: (a) We create the

### Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

### 2 Session Two - Complex Numbers and Vectors

PH2011 Physics 2A Maths Revision - Session 2: Complex Numbers and Vectors 1 2 Session Two - Complex Numbers and Vectors 2.1 What is a Complex Number? The material on complex numbers should be familiar

### Introduction to Matrix Algebra

Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

### Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

### South Carolina College- and Career-Ready (SCCCR) Pre-Calculus

South Carolina College- and Career-Ready (SCCCR) Pre-Calculus Key Concepts Arithmetic with Polynomials and Rational Expressions PC.AAPR.2 PC.AAPR.3 PC.AAPR.4 PC.AAPR.5 PC.AAPR.6 PC.AAPR.7 Standards Know

### Lecture L3 - Vectors, Matrices and Coordinate Transformations

S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3 - Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between

### Vocabulary Words and Definitions for Algebra

Name: Period: Vocabulary Words and s for Algebra Absolute Value Additive Inverse Algebraic Expression Ascending Order Associative Property Axis of Symmetry Base Binomial Coefficient Combine Like Terms

### Linear Dependence Tests

Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

### Dimensionality Reduction: Principal Components Analysis

Dimensionality Reduction: Principal Components Analysis In data mining one often encounters situations where there are a large number of variables in the database. In such situations it is very likely

### 3. INNER PRODUCT SPACES

. INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

### LS.6 Solution Matrices

LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

### We call this set an n-dimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P.

Volumes of parallelograms 1 Chapter 8 Volumes of parallelograms In the present short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to

### BANACH AND HILBERT SPACE REVIEW

BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but

### DEFINITION 5.1.1 A complex number is a matrix of the form. x y. , y x

Chapter 5 COMPLEX NUMBERS 5.1 Constructing the complex numbers One way of introducing the field C of complex numbers is via the arithmetic of matrices. DEFINITION 5.1.1 A complex number is a matrix of

### The Hadamard Product

The Hadamard Product Elizabeth Million April 12, 2007 1 Introduction and Basic Results As inexperienced mathematicians we may have once thought that the natural definition for matrix multiplication would

### WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE?

WHICH LINEAR-FRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? JOEL H. SHAPIRO Abstract. These notes supplement the discussion of linear fractional mappings presented in a beginning graduate course

### Vectors 2. The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996.

Vectors 2 The METRIC Project, Imperial College. Imperial College of Science Technology and Medicine, 1996. Launch Mathematica. Type

### 3 Orthogonal Vectors and Matrices

3 Orthogonal Vectors and Matrices The linear algebra portion of this course focuses on three matrix factorizations: QR factorization, singular valued decomposition (SVD), and LU factorization The first

### Inner Product Spaces

Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

### Tensors on a vector space

APPENDIX B Tensors on a vector space In this Appendix, we gather mathematical definitions and results pertaining to tensors. The purpose is mostly to introduce the modern, geometrical view on tensors,

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

### Elementary Linear Algebra

Elementary Linear Algebra Kuttler January, Saylor URL: http://wwwsaylororg/courses/ma/ Saylor URL: http://wwwsaylororg/courses/ma/ Contents Some Prerequisite Topics Sets And Set Notation Functions Graphs

### LEARNING OBJECTIVES FOR THIS CHAPTER

CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. Finite-Dimensional

### Lecture 14: Section 3.3

Lecture 14: Section 3.3 Shuanglin Shao October 23, 2013 Definition. Two nonzero vectors u and v in R n are said to be orthogonal (or perpendicular) if u v = 0. We will also agree that the zero vector in

### 9.4. The Scalar Product. Introduction. Prerequisites. Learning Style. Learning Outcomes

The Scalar Product 9.4 Introduction There are two kinds of multiplication involving vectors. The first is known as the scalar product or dot product. This is so-called because when the scalar product of

### Sec 4.1 Vector Spaces and Subspaces

Sec 4. Vector Spaces and Subspaces Motivation Let S be the set of all solutions to the differential equation y + y =. Let T be the set of all 2 3 matrices with real entries. These two sets share many common

### 1D 3D 1D 3D. is called eigenstate or state function. When an operator act on a state, it can be written as

Chapter 3 (Lecture 4-5) Postulates of Quantum Mechanics Now we turn to an application of the preceding material, and move into the foundations of quantum mechanics. Quantum mechanics is based on a series

### October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix

Linear Algebra & Properties of the Covariance Matrix October 3rd, 2012 Estimation of r and C Let rn 1, rn, t..., rn T be the historical return rates on the n th asset. rn 1 rṇ 2 r n =. r T n n = 1, 2,...,

### Chapter 6. Orthogonality

6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

### Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any.

Algebra 2 - Chapter Prerequisites Vocabulary Copy in your notebook: Add an example of each term with the symbols used in algebra 2 if there are any. P1 p. 1 1. counting(natural) numbers - {1,2,3,4,...}

### Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

### 3.4 Complex Zeros and the Fundamental Theorem of Algebra

86 Polynomial Functions.4 Complex Zeros and the Fundamental Theorem of Algebra In Section., we were focused on finding the real zeros of a polynomial function. In this section, we expand our horizons and

### One advantage of this algebraic approach is that we can write down

. Vectors and the dot product A vector v in R 3 is an arrow. It has a direction and a length (aka the magnitude), but the position is not important. Given a coordinate axis, where the x-axis points out

### ELEMENTS OF VECTOR ALGEBRA

ELEMENTS OF VECTOR ALGEBRA A.1. VECTORS AND SCALAR QUANTITIES We have now proposed sets of basic dimensions and secondary dimensions to describe certain aspects of nature, but more than just dimensions

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### Finite Dimensional Hilbert Spaces and Linear Inverse Problems

Finite Dimensional Hilbert Spaces and Linear Inverse Problems ECE 174 Lecture Supplement Spring 2009 Ken Kreutz-Delgado Electrical and Computer Engineering Jacobs School of Engineering University of California,

### Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

### The basic unit in matrix algebra is a matrix, generally expressed as: a 11 a 12. a 13 A = a 21 a 22 a 23

(copyright by Scott M Lynch, February 2003) Brief Matrix Algebra Review (Soc 504) Matrix algebra is a form of mathematics that allows compact notation for, and mathematical manipulation of, high-dimensional

### Math 550 Notes. Chapter 7. Jesse Crawford. Department of Mathematics Tarleton State University. Fall 2010

Math 550 Notes Chapter 7 Jesse Crawford Department of Mathematics Tarleton State University Fall 2010 (Tarleton State University) Math 550 Chapter 7 Fall 2010 1 / 34 Outline 1 Self-Adjoint and Normal Operators

### Section 10.4 Vectors

Section 10.4 Vectors A vector is represented by using a ray, or arrow, that starts at an initial point and ends at a terminal point. Your textbook will always use a bold letter to indicate a vector (such

### Lecture notes on linear algebra

Lecture notes on linear algebra David Lerner Department of Mathematics University of Kansas These are notes of a course given in Fall, 2007 and 2008 to the Honors sections of our elementary linear algebra

### 5. Orthogonal matrices

L Vandenberghe EE133A (Spring 2016) 5 Orthogonal matrices matrices with orthonormal columns orthogonal matrices tall matrices with orthonormal columns complex matrices with orthonormal columns 5-1 Orthonormal

### Geometry of Vectors. 1 Cartesian Coordinates. Carlo Tomasi

Geometry of Vectors Carlo Tomasi This note explores the geometric meaning of norm, inner product, orthogonality, and projection for vectors. For vectors in three-dimensional space, we also examine the

### Unified Lecture # 4 Vectors

Fall 2005 Unified Lecture # 4 Vectors These notes were written by J. Peraire as a review of vectors for Dynamics 16.07. They have been adapted for Unified Engineering by R. Radovitzky. References [1] Feynmann,

### Applied Linear Algebra I Review page 1

Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties

### MAC 1114. Learning Objectives. Module 10. Polar Form of Complex Numbers. There are two major topics in this module:

MAC 1114 Module 10 Polar Form of Complex Numbers Learning Objectives Upon completing this module, you should be able to: 1. Identify and simplify imaginary and complex numbers. 2. Add and subtract complex

### (January 14, 2009) End k (V ) End k (V/W )

(January 14, 29) [16.1] Let p be the smallest prime dividing the order of a finite group G. Show that a subgroup H of G of index p is necessarily normal. Let G act on cosets gh of H by left multiplication.

### Trigonometric Functions and Equations

Contents Trigonometric Functions and Equations Lesson 1 Reasoning with Trigonometric Functions Investigations 1 Proving Trigonometric Identities... 271 2 Sum and Difference Identities... 276 3 Extending

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

### Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

### Trigonometry Lesson Objectives

Trigonometry Lesson Unit 1: RIGHT TRIANGLE TRIGONOMETRY Lengths of Sides Evaluate trigonometric expressions. Express trigonometric functions as ratios in terms of the sides of a right triangle. Use the

### The Matrix Elements of a 3 3 Orthogonal Matrix Revisited

Physics 116A Winter 2011 The Matrix Elements of a 3 3 Orthogonal Matrix Revisited 1. Introduction In a class handout entitled, Three-Dimensional Proper and Improper Rotation Matrices, I provided a derivation

### Computing Orthonormal Sets in 2D, 3D, and 4D

Computing Orthonormal Sets in 2D, 3D, and 4D David Eberly Geometric Tools, LLC http://www.geometrictools.com/ Copyright c 1998-2016. All Rights Reserved. Created: March 22, 2010 Last Modified: August 11,

### 5.3 The Cross Product in R 3

53 The Cross Product in R 3 Definition 531 Let u = [u 1, u 2, u 3 ] and v = [v 1, v 2, v 3 ] Then the vector given by [u 2 v 3 u 3 v 2, u 3 v 1 u 1 v 3, u 1 v 2 u 2 v 1 ] is called the cross product (or

### [1] Diagonal factorization

8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall: