Algebra and Linear Algebra


1 Algebra and Linear Algebra
Overview: Vectors, Coordinate frames, 2D implicit curves, 2D parametric curves, 3D surfaces.
Algebra: numbers and operations on numbers. Linear algebra: tuples, triples, ... of numbers and operations on them. It assists in geometric computations.

2 Vectors: definition
A vector in R^d is an ordered d-tuple v = (v_1, v_2, ..., v_d)^T.
In R^3, for example: v = (v_1, v_2, v_3)^T (or (v_x, v_y, v_z)^T, or (v_1, v_2, v_3), or ...).
Example: v = (5, 1)^T.

3 Vectors: algebraic interpretation
A 2D vector (x_v, y_v)^T can be seen as the point (x_v, y_v) in the Cartesian plane.
Example: v = (5, 1)^T.

4 Vectors: notation
A 2D vector should be denoted as a column vector, or as (x_v, y_v)^T. The T in the exponent stands for transposed. A 2D point should be denoted as (x_v, y_v). Be aware of misused notation (mostly: point notation used for a vector).

5 Vectors: geometric interpretation
A 2D vector (x_v, y_v)^T can be seen as an offset from the origin. Such an offset (arrow) can be translated.
Example: v = (5, 1)^T.

6 Vectors: length and scalar multiple
The Euclidean length of a d-dimensional vector v is |v| = sqrt(v_1^2 + v_2^2 + ... + v_d^2).
A scalar multiple of a d-dimensional vector v is λv = (λv_1, λv_2, ..., λv_d)^T.
Note that v and λv have the same direction (for λ > 0) or opposite directions (for λ < 0).
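The two definitions above can be checked with a minimal Python sketch (plain lists as vectors; the function names `length` and `scale` are just illustrative, not from the slides):

```python
import math

def length(v):
    """Euclidean length of a d-dimensional vector (a list of numbers)."""
    return math.sqrt(sum(c * c for c in v))

def scale(lam, v):
    """Scalar multiple lambda * v, computed component-wise."""
    return [lam * c for c in v]

v = [3.0, 4.0]
print(length(v))             # 5.0
print(length(scale(2, v)))   # 10.0
print(length(scale(-2, v)))  # 10.0: |lambda * v| = |lambda| * |v|
```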

7 Parallel vectors
Two vectors v_1 and v_2 are parallel if one is a scalar multiple of the other, i.e., there is a λ ≠ 0 such that v_2 = λv_1. Note that if one of the vectors is the null vector, then the vectors are considered neither parallel nor not parallel.

8 Unit vectors
A vector v is a unit vector if |v| = 1.
Normalization. Q: given an arbitrary vector v, how do we find a unit vector parallel to v? Can every vector be normalized?
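One possible answer to the normalization question, as a Python sketch (the function name `normalize` is mine): divide every component by the length, which fails precisely for the null vector.

```python
import math

def normalize(v):
    """Return the unit vector parallel to v by dividing by |v|.
    The null vector is the one vector that cannot be normalized."""
    n = math.sqrt(sum(c * c for c in v))
    if n == 0.0:
        raise ValueError("the null vector cannot be normalized")
    return [c / n for c in v]

print(normalize([3.0, 4.0]))  # [0.6, 0.8]
```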

9 Addition of vectors
Given two vectors in R^d, v = (v_1, v_2, ..., v_d)^T and w = (w_1, w_2, ..., w_d)^T, their sum is defined as v + w = (v_1 + w_1, v_2 + w_2, ..., v_d + w_d)^T.
Q: How would subtraction be defined?

10 Addition of vectors
Addition of vectors is commutative, as can be seen easily from the geometric interpretation.
Q: show algebraically that vector addition is commutative.
Q: what is the relation between v, w, and v + w?

11 Bases in 2D
A 2D vector can be expressed as a combination of any pair of non-parallel vectors. For instance, in the image, a = 1.5v + 0.6w. Such a pair is called linearly independent, and forms a 2D basis. The extension to higher dimensions is straightforward.

12 Orthonormal basis in 2D
Two vectors form an orthonormal basis in 2D if (1) they are orthogonal to each other, and (2) they are unit vectors. The advantage of an orthonormal basis is that lengths of vectors, expressed in the basis, are easy to calculate.

13 The null vector
The null vector (0, 0)^T is special.
It acts as the zero for addition of vectors.
It is the only vector that has length zero.
It is the only vector that does not have a direction.
It cannot be used as a base vector.

14 Dot product (inner product, scalar product)
For two vectors v, w in R^d, the dot product is defined as
v · w = v_1 w_1 + v_2 w_2 + ... + v_d w_d, or v · w = Σ_{i=1}^d v_i w_i
We have that cos θ = (v · w) / (|v| |w|), where θ is the angle between the two vectors.
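The definition and the angle formula can be sketched directly in Python (function names `dot` and `angle` are illustrative):

```python
import math

def dot(v, w):
    """Dot product of two d-dimensional vectors: sum of v_i * w_i."""
    return sum(a * b for a, b in zip(v, w))

def angle(v, w):
    """Angle theta between v and w, from cos(theta) = v.w / (|v| |w|)."""
    nv = math.sqrt(dot(v, v))
    nw = math.sqrt(dot(w, w))
    return math.acos(dot(v, w) / (nv * nw))

print(dot([1, 2, 3], [4, 5, 6]))            # 32
print(math.degrees(angle([1, 0], [0, 1])))  # 90.0
```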

15 Dot product (inner product, scalar product)
Q: what is the inner product of an arbitrary unit vector with itself?
Q: what do we know if for two vectors v and w we have that v · w = 0?

16 Cross product
For two vectors v, w in R^3, the cross product is defined as
v × w = (v_2 w_3 − v_3 w_2, v_3 w_1 − v_1 w_3, v_1 w_2 − v_2 w_1)^T
Q: Show that v × w is orthogonal to both v and w.
We have that |v × w| = |v| |w| sin θ, where θ is the angle between v and w.
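A quick numeric check of the orthogonality claim, as a Python sketch of the component formula above:

```python
def cross(v, w):
    """Cross product of two 3D vectors, component by component."""
    return [v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0]]

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

v, w = [1, 2, 3], [4, 5, 6]
n = cross(v, w)
print(n)                      # [-3, 6, -3]
print(dot(n, v), dot(n, w))   # 0 0: orthogonal to both v and w
```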

17 Cross product
Q: Is it possible or necessary that v and w are orthogonal to form v × w?
Q: What is v × w if v and w are parallel?

18 Dot product, cross product, and the null vector
Q: What is the dot product of a vector and the null vector?
Q: What is the cross product of a vector and the null vector?

19 Bases in 3D
You need three vectors to form a basis in 3D. If u, v, and w form a basis, then any vector a in 3D can be expressed as a = μu + λv + ρw, where μ, λ, and ρ are scalars (ordinary numbers).
Q: Let u, v, and w be three vectors (none of them the null vector). Suppose that u and v are not parallel, u and w are not parallel, and v and w are not parallel. Do u, v, and w always form a basis?

20 Linear dependence in 3D
If for three vectors u, v, and w in 3D (no null vectors) we have w = μu + λv, where μ and λ are scalars, then u, v, and w are linearly dependent. If such μ and λ do not exist, then they are linearly independent. Any three linearly independent vectors in 3D form a 3D basis.

21 Orthonormal 3D bases
Three vectors form an orthonormal basis in 3D if (1) each pair of them is orthogonal, and (2) they are unit vectors.
Q: What would you do to test if three 3D vectors form an orthonormal basis?
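One way to answer the test question, sketched in Python: check conditions (1) and (2) with dot products, using a small tolerance for floating-point noise (the function name and `eps` value are my choices):

```python
import math

def is_orthonormal_basis(u, v, w, eps=1e-9):
    """Check (1) pairwise orthogonality and (2) unit length for three 3D vectors."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    pairs_orthogonal = all(abs(dot(a, b)) < eps
                           for a, b in [(u, v), (u, w), (v, w)])
    unit_length = all(abs(dot(a, a) - 1.0) < eps for a in (u, v, w))
    return pairs_orthogonal and unit_length

s = 1 / math.sqrt(2)
print(is_orthonormal_basis([1, 0, 0], [0, 1, 0], [0, 0, 1]))   # True
print(is_orthonormal_basis([s, s, 0], [-s, s, 0], [0, 0, 1]))  # True
print(is_orthonormal_basis([1, 0, 0], [1, 1, 0], [0, 0, 1]))   # False
```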

22 Orthonormal 3D bases
Q: Suppose that two vectors u and v in 3D are orthogonal, and they are unit vectors. Let w be the cross product of u and v. What can you say about u, v, and w (do they form an orthonormal basis)?

23 Left- and right-handed systems
Coordinate systems in 3D come in two flavors: left-handed and right-handed. There are arguments for both left- and right-handed systems for the global system, the camera system, and object systems.
(Figure: camera coordinate system and world coordinate system, with X, Y, and Z axes labelled.)

24 Coordinate transformations
A frequent operation in graphics is the change from one coordinate system (e.g., the (u, v, w) camera system) to another (e.g., the (x, y, z) global system). Having orthonormal bases for both systems makes the transformations simpler.

25 2D implicit curves
An implicit curve in 2D has the form f(x, y) = 0.
f maps two-dimensional points to a real value; the points for which this value is 0 are on the curve, while other points are not on the curve.

26 Implicit representation of circles
The implicit representation of a 2D circle with center c and radius r is
(x − x_c)^2 + (y − y_c)^2 − r^2 = 0
So for any point p that lies on the circle, we have that (p − c) · (p − c) − r^2 = 0, so |p − c|^2 − r^2 = 0, which gives |p − c| = r.

27 Implicit representation of lines
A well-known representation of lines is the slope-intercept form y = ax + b. This can easily be converted to −ax + y − b = 0.
If b = 0, the line passes through the origin, and we have n · p = 0, with p = (x, y)^T and n = (−a, 1)^T.
Q: What if the line does not pass through the origin?

28 2D parametric curves
A parametric curve is controlled by a single parameter t, and has the form (x, y)^T = (g(t), h(t))^T.
Parametric representations have some advantages over functions, even if a function would suffice to represent the curve.

29 Parametric equation of a circle
The parametric equation of a 2D circle with center c and radius r is
(x, y)^T = (x_c + r cos φ, y_c + r sin φ)^T
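The parametric and implicit circle representations describe the same point set; a Python sketch can confirm that sampled parametric points satisfy the implicit equation (function names are mine):

```python
import math

def circle_point(c, r, phi):
    """Point on the circle with center c and radius r at parameter phi."""
    return (c[0] + r * math.cos(phi), c[1] + r * math.sin(phi))

def on_circle(p, c, r, eps=1e-9):
    """Implicit test: (x - xc)^2 + (y - yc)^2 - r^2 = 0 (up to tolerance)."""
    return abs((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 - r * r) < eps

c, r = (2.0, 1.0), 3.0
for k in range(8):
    assert on_circle(circle_point(c, r, k * math.pi / 4), c, r)
print("all sampled parametric points satisfy the implicit equation")
```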

30 Parametric equation of a line
The parametric equation of a line through the points p_0 and p_1 is
(x, y)^T = (x_p0 + t(x_p1 − x_p0), y_p0 + t(y_p1 − y_p0))^T
As we have seen before, this can alternatively be written as p(t) = p_0 + t(p_1 − p_0).

31 Conversion between representations
It is convenient to be able to convert a parametric equation of a line into an implicit equation, and vice versa.
Q: how do we do that?
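One possible answer for the parametric-to-implicit direction, sketched in Python: take the direction d = p_1 − p_0, rotate it a quarter turn to get a normal n, and fix the constant so that p_0 lies on the line (names and return convention are my choices, not from the slides):

```python
def parametric_to_implicit(p0, p1):
    """Line through p0 and p1: direction d = p1 - p0, normal n = (-dy, dx).
    Returns (a, b, c) such that a*x + b*y + c = 0 on the line."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    a, b = -dy, dx                  # the normal is orthogonal to the direction
    c = -(a * p0[0] + b * p0[1])    # chosen so that p0 satisfies the equation
    return a, b, c

a, b, c = parametric_to_implicit((0.0, 1.0), (2.0, 5.0))
print(a, b, c)  # -4.0 2.0 -2.0
# every point p(t) = p0 + t(p1 - p0) satisfies the implicit equation:
for t in (-1.0, 0.0, 0.5, 2.0):
    x, y = 0.0 + t * 2.0, 1.0 + t * 4.0
    assert abs(a * x + b * y + c) < 1e-9
```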

32 Implicit surfaces: from 2D to 3D
Recall that an implicit curve has the form f(x, y) = 0. The 3D generalization is an implicit surface with a similar form: f(x, y, z) = 0.
Fun project: try to draw the 4D image of the graph of such a function...

33 Implicit one-dimensional curves in 3D?
Cooking up an implicit function for a one-dimensional thingy in 3D is in general not possible; such thingies are degenerate surfaces. E.g., x^2 + y^2 = 0 is a cylinder with radius 0: the Z-axis. More complex curves can be described as the intersection of two or more implicit surfaces.

34 Parametric curves and surfaces
As opposed to implicit curves, it is possible to specify parametric curves in 3D: x = f(t), y = g(t), z = h(t).
Parametric surfaces depend on two parameters: x = f(u, v), y = g(u, v), z = h(u, v).

35 Implicit spheres
We have already seen the sphere equation: (x − c_x)^2 + (y − c_y)^2 + (z − c_z)^2 − r^2 = 0.
Just as in the circle case, this can be written in dot product form for any point p on the sphere: (p − c) · (p − c) − r^2 = 0, or |p − c|^2 − r^2 = 0, or |p − c| = r.

36 Parametric spheres
Spheres can also be represented parametrically. For instance, a sphere with radius r centered at the origin has the equation:
x = r cos φ sin θ, y = r sin φ sin θ, z = r cos θ
Q: What would the equation for a sphere with radius r centered at c = (c_x, c_y, c_z) be?

37 Parametric spheres
x = r cos φ sin θ, y = r sin φ sin θ, z = r cos θ
The parametric representation of a sphere looks much more inconvenient than the implicit equation. However, when we have to do texture mapping, the parametric representation turns out to be quite convenient.

38 Implicit planes
The implicit equation for a plane in 3D looks a lot like the equation for a line in 2D:
ax + by + cz − d = 0
Here, (a, b, c) is a normal vector of the plane.
Q: what is the meaning of d?

39 Parametric planes
Planes can also be described parametrically. Instead of one direction vector (as for lines), we need two:
(x, y, z) = (x_p, y_p, z_p) + s(x_v, y_v, z_v) + t(x_w, y_w, z_w)

40 Implicit and parametric planes
Implicit: ax + by + cz − d = 0
Parametric: (x, y, z) = (x_p, y_p, z_p) + s(x_v, y_v, z_v) + t(x_w, y_w, z_w)
Q: Is an implicit description of a plane in 3D unique?
Q: Is a parametric description of a plane in 3D unique?

41 m × n matrices
The system of m linear equations in n variables x_1, x_2, ..., x_n
a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m
can be written as a matrix equation Ax = b, with A the m × n matrix of the coefficients a_ij, x = (x_1, ..., x_n)^T, and b = (b_1, ..., b_m)^T.

42 m × n matrices
The matrix
A = [ a_11 a_12 ... a_1n ]
    [ a_21 a_22 ... a_2n ]
    [ ...                ]
    [ a_m1 a_m2 ... a_mn ]
has m rows and n columns, and is called an m × n matrix.

43 Special matrices
A square matrix (for which m = n) is called a diagonal matrix if all elements a_ij for which i ≠ j are zero. If in addition all elements a_ii are one, then the matrix is called an identity matrix, denoted with I_m (depending on the context, the subscript m may be left out):
I_3 = [ 1 0 0 ]
      [ 0 1 0 ]
      [ 0 0 1 ]
If all matrix entries are zero, then the matrix is called a zero matrix or null matrix, denoted with 0.

44 Matrix addition
For two matrices A and B, we have A + B = C, with c_ij = a_ij + b_ij.
Q: what are the conditions for the dimensions of the matrices A and B?

45 Matrix multiplication
Multiplying a matrix with a scalar is defined as follows: cA = B, with b_ij = c a_ij.

46 Matrix multiplication
Multiplying two matrices is a bit more involved. We have AB = C with c_ij = Σ_{k=1}^n a_ik b_kj.
Q: what are the conditions for the dimensions of the matrices A and B? What are the dimensions of C?
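The summation formula, and the answer to the dimension question (A must be m × n and B must be n × p, giving an m × p result), can be sketched in Python with nested lists as matrices (the example matrices are mine):

```python
def matmul(A, B):
    """C = AB with c_ij = sum_k a_ik * b_kj; A is m x n, B must be n x p."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must match rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[1, 0],
     [0, 1],
     [1, 1]]             # 3 x 2
print(matmul(A, B))      # [[4, 5], [10, 11]]  (a 2 x 2 matrix)
```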

47 Properties of matrix multiplication
Matrix multiplication is associative and distributive over addition:
(AB)C = A(BC)
A(B + C) = AB + AC
(A + B)C = AC + BC
However, matrix multiplication is not commutative: in general, AB ≠ BA. Also: if AB = AC, it doesn't necessarily follow that B = C (even if A is not the zero matrix).
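Both cautionary claims are easy to witness with concrete 2 × 2 matrices (the matrices below are my illustrative choices, not from the slides):

```python
def matmul(A, B):
    """Product of two matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# AB != BA: reflection in y = x versus scaling by 2 in x-direction
R = [[0, 1], [1, 0]]
S = [[2, 0], [0, 1]]
print(matmul(R, S))  # [[0, 1], [2, 0]]
print(matmul(S, R))  # [[0, 2], [1, 0]]

# AB = AC does not force B = C, even though A is not the zero matrix
A = [[1, 0], [0, 0]]
B = [[1, 1], [0, 0]]
C = [[1, 1], [9, 9]]
assert matmul(A, B) == matmul(A, C) and B != C
```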

48 Zero and identity matrix
The zero matrix 0 has the property that if you add it to another matrix A, you get precisely A again: A + 0 = 0 + A = A.
The identity matrix I has the property that if you multiply it with another matrix A, you get precisely A again: AI = IA = A.

49 Matrix multiplication as a linear transformation: 2D
The matrix multiplication of a 2 × 2 square matrix and a 2 × 1 matrix gives a new 2 × 1 matrix; in the slide's example, the vector (1, 2)^T is mapped to (2, 3)^T.
We can interpret a 2 × 1 matrix as a vector; the 2 × 2 matrix transforms any vector into another vector. More later...

50 Transposed matrices
The transpose A^T of an m × n matrix A is an n × m matrix that is obtained by interchanging the rows and columns of A:
A = [ a_11 a_12 ... a_1n ]        A^T = [ a_11 a_21 ... a_m1 ]
    [ a_21 a_22 ... a_2n ]              [ a_12 a_22 ... a_m2 ]
    [ ...                ]              [ ...                ]
    [ a_m1 a_m2 ... a_mn ]              [ a_1n a_2n ... a_mn ]

51 Transposed matrices
For example:
A = [ 1 2 ]    A^T = [ 1 ]
                     [ 2 ]
For the transpose of the product of two matrices we have (AB)^T = B^T A^T.

52 The dot product revisited
If we regard (column) vectors as matrices, we see that the inner product of two vectors can be written as u · v = u^T v: the product of a 1 × d matrix and a d × 1 matrix is a 1 × 1 matrix. (A 1 × 1 matrix is simply a number, and the brackets are omitted.)

53 Inverse matrices
The inverse of a matrix A is a matrix A^-1 such that AA^-1 = I. Only square matrices can possibly have an inverse. Note that the inverse of A^-1 is A, so we have AA^-1 = A^-1 A = I.

54 Solving systems of linear equations
Matrices are a convenient way of representing systems of linear equations:
a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m
If such a system has a unique solution, it can be solved with Gaussian elimination.

55 Solving systems of linear equations
Permitted operations in Gaussian elimination are:
- interchanging two rows.
- multiplying a row with a (non-zero) constant.
- adding a multiple of another row to a row.

56 Solving systems of linear equations
Matrices are not necessary for Gaussian elimination, but very convenient, especially augmented matrices. The augmented matrix corresponding to the system of equations on the previous slides is
[ a_11 a_12 ... a_1n | b_1 ]
[ a_21 a_22 ... a_2n | b_2 ]
[ ...                      ]
[ a_m1 a_m2 ... a_mn | b_m ]

57 Gaussian elimination: example
Suppose we want to solve the following system:
x + y + 2z = 17
2x + y + z = 15
x + 2y + 3z = 26
Q: what is the geometric interpretation of this system? And what is the interpretation of its solution?

58 Gaussian elimination: example
Applying the rules in a clever order, we reduce the augmented matrix step by step. [Worked sequence of augmented matrices on the slide.]

59 Gaussian elimination: example
The interpretation of the last augmented matrix is the very convenient system of linear equations
x = 3
y = 4
z = 5
In other words, the point (3, 4, 5) satisfies all three equations.
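The three permitted row operations can be mechanized; a Python sketch of Gauss-Jordan elimination on the augmented matrix (assuming a unique solution exists; the pivoting strategy and function name are my choices) reproduces the solution of the example system:

```python
def gaussian_solve(A, b):
    """Solve Ax = b by Gaussian elimination on the augmented matrix [A | b],
    assuming the system has a unique solution."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        # interchange two rows so the pivot entry is as large as possible
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # multiply the pivot row with a constant so the pivot becomes 1
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # add multiples of the pivot row to clear the column in all other rows
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

x, y, z = gaussian_solve([[1, 1, 2], [2, 1, 1], [1, 2, 3]], [17, 15, 26])
print(round(x), round(y), round(z))  # 3 4 5
```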

60 Gaussian elimination: geometric interpretation
We started with three equations, which are implicit representations of planes:
x + y + 2z = 17
2x + y + z = 15
x + 2y + 3z = 26
We ended with three other equations, which can also be interpreted as planes:
x = 3
y = 4
z = 5
The steps in Gaussian elimination preserve the location of the solution.

61 Gaussian elimination: possible outcomes in 3D
Since any linear equation in three variables is a plane in 3D, we can interpret the possible outcomes of systems of three equations.
1. Three planes intersect in one point: the system has one unique solution.
2. Three planes do not have a common intersection: the system has no solution.
3. Three planes have a line in common: the system has many solutions.
The three planes can also coincide; then the equations are equivalent.

62 Gaussian elimination: inverting matrices
The same procedure can also be used to invert matrices: row-reduce the augmented matrix [A | I] until the left half becomes the identity; the right half is then A^-1. [Worked example on the slide.]

63 Gaussian elimination: inverting matrices
The last augmented matrix tells us the inverse of the example matrix: the right half of [I | A^-1]. [Worked example on the slide.]

64 Gaussian elimination: inverting matrices
When does a (square) matrix have an inverse? If and only if its columns, seen as vectors, are linearly independent. Equivalently, if and only if its rows, seen as transposed vectors, are linearly independent.
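The [A | I] procedure and the invertibility criterion can be combined in one Python sketch: when a pivot column contains no usable entry, the columns are linearly dependent and there is no inverse (the `None` convention and tolerance are my choices):

```python
def invert(A, eps=1e-12):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I].
    Returns None when the columns are linearly dependent (no inverse exists)."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < eps:
            return None                       # singular: dependent columns
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

print(invert([[0.0, 1.0], [1.0, 0.0]]))  # [[0.0, 1.0], [1.0, 0.0]]
print(invert([[1.0, 2.0], [2.0, 4.0]]))  # None: second column = 2 * first
```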

65 Determinants
The determinant of a matrix is the signed volume spanned by the column vectors. The determinant det A of a matrix A is also written as |A|. For example, for the 2 × 2 matrix A with rows (a_11, a_12) and (a_21, a_22),
det A = |A| = a_11 a_22 − a_12 a_21,
the signed area of the parallelogram spanned by the columns (a_11, a_21) and (a_12, a_22).

66 Computing determinants
Determinants can be computed as follows: the determinant of a matrix is the sum of the products of the elements of any row or column of the matrix with their cofactors. If only we knew what cofactors are...

67 Cofactors
Take a deep breath... The cofactor of an entry a_ij in an n × n matrix A is the determinant of the (n−1) × (n−1) matrix that is obtained from A by removing the i-th row and the j-th column, multiplied by (−1)^(i+j). Right: long live recursion!
Q: what is the bottom of this recursion?

68 Cofactors
Example: for a 4 × 4 matrix A, the cofactor of the entry a_13 is
a^c_13 = (−1)^(1+3) · det [ a_21 a_22 a_24 ]
                          [ a_31 a_32 a_34 ]
                          [ a_41 a_42 a_44 ]
and, expanding along the second column,
|A| = a_12 a^c_12 + a_22 a^c_22 + a_32 a^c_32 + a_42 a^c_42

69 Determinants and cofactors
Example, expanding along the first row:
| 0 1 2 |
| 3 4 5 | = 0(32 − 35) − 1(24 − 30) + 2(21 − 24) = 0 + 6 − 6 = 0
| 6 7 8 |
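The recursive cofactor rule translates directly into code; a Python sketch (expansion along the first row, with a 1 × 1 matrix as the bottom of the recursion) reproduces the example:

```python
def det(A):
    """Determinant by cofactor expansion along the first row.
    Bottom of the recursion: the determinant of a 1 x 1 matrix is its entry."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor: remove row 1 and column j+1 (1-based); the sign
        # (-1)^(1 + (j+1)) equals (-1)^j in 0-based indexing
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[0, 1, 2],
           [3, 4, 5],
           [6, 7, 8]]))   # 0
print(det([[1, 1, 2],
           [2, 1, 1],
           [1, 2, 3]]))   # 2
```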

70 Systems of linear equations and determinants
Consider our system of linear equations again:
x + y + 2z = 17
2x + y + z = 15
x + 2y + 3z = 26
Such a system of n equations in n unknowns can be solved by using determinants. In general, if we have Ax = b, then x_i = |A_i| / |A|, where A_i is obtained from A by replacing the i-th column with b.

71 Determinants and systems of linear equations
So for our system
x + y + 2z = 17
2x + y + z = 15
x + 2y + 3z = 26
we have x = |A_1| / |A| = 6/2 = 3, y = |A_2| / |A| = 8/2 = 4, and z = |A_3| / |A| = 10/2 = 5.
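This rule (Cramer's rule) can be sketched in Python for the 3 × 3 case; the helper names are mine, and the determinants are expanded along the first row:

```python
def det3(A):
    """3 x 3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = A
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer(A, b):
    """Solve Ax = b via x_i = det(A_i) / det(A), where A_i is A with its
    i-th column replaced by b (requires det(A) != 0)."""
    dA = det3(A)
    sol = []
    for i in range(3):
        Ai = [row[:i] + [bi] + row[i + 1:] for row, bi in zip(A, b)]
        sol.append(det3(Ai) / dA)
    return sol

A = [[1, 1, 2], [2, 1, 1], [1, 2, 3]]
b = [17, 15, 26]
print(cramer(A, b))  # [3.0, 4.0, 5.0]
```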

72 Linear transformations
A function T : R^n → R^m is called a linear transformation if it satisfies
1. T(u + v) = T(u) + T(v) for all u, v in R^n.
2. T(cv) = cT(v) for all v in R^n and all scalars c.

73 Linear transformations in graphics
Many transformations that we use in graphics are linear transformations. Linear transformations can be represented by matrices. A sequence of linear transformations can be represented with a single matrix. With some tricks, we can represent translations and perspective projections with matrices as well.

74 Matrices and linear transformations
A 2 × 2 matrix with rows (a, b) and (c, d) represents the linear transformation that maps the vector (x, y)^T to the vector (ax + by, cx + dy)^T. Or (more readable):
[ a b ] [ x ]   [ ax + by ]
[ c d ] [ y ] = [ cx + dy ]
A 2 × 3 matrix is a linear transformation that maps a 3D vector to a 2D vector (from some 3-dimensional space to some 2-dimensional plane):
[ a b c ] [ x ]   [ ax + by + cz ]
[ d e f ] [ y ] = [ dx + ey + fz ]
          [ z ]

75 Example: rotation
To rotate 45° about the origin, we apply the matrix
(1/√2) [ 1 −1 ]
       [ 1  1 ]

76 Example: scaling
To scale with a factor two with respect to the origin, we apply the matrix
[ 2 0 ]
[ 0 2 ]

77 Example: scaling
Scaling doesn't have to be uniform. Here, we scale with a factor one half in x-direction, and three in y-direction:
[ 1/2 0 ]
[ 0   3 ]
Q: what is the inverse of this matrix?

78 Example: reflection
Reflection in the line y = x boils down to swapping x- and y-coordinates:
[ 0 1 ]
[ 1 0 ]
Q: what is the inverse of this matrix?

79 Example: projection
We can also use matrices to do orthographic projections, for instance, onto the Y-axis:
[ 0 0 ]
[ 0 1 ]
Q: what is the inverse of this matrix?

80 Example: reflection and scaling
Multiple transformations can be combined into one. Here, we first do a reflection in the line y = x, and then we scale with a factor 5 in x-direction, and a factor 2 in y-direction:
[ 5 0 ] [ 0 1 ]   [ 0 5 ]
[ 0 2 ] [ 1 0 ] = [ 2 0 ]
Q: Why is the transformation that is done first rightmost?

81 Example: shearing
Shearing in x-direction pushes things sideways, with a matrix of the form
[ 1 s ]
[ 0 1 ]
Q: What happens with the x-coordinate of points that are transformed with this matrix? And what with the y-coordinates? What is the inverse of this matrix?

82 Finding matrices
Applying matrices is pretty straightforward, but how do we find the matrix for a given linear transformation?
Let A be the 2 × 2 matrix with rows (a_11, a_12) and (a_21, a_22).
Q: what is the significance of the column vectors of A?

83 Finding matrices
Aha! The column vectors of a transformation matrix are the images of the base vectors:
[ a_11 a_12 ] [ 1 ]   [ a_11 ]        [ a_11 a_12 ] [ 0 ]   [ a_12 ]
[ a_21 a_22 ] [ 0 ] = [ a_21 ]  and   [ a_21 a_22 ] [ 1 ] = [ a_22 ]
That gives us an easy method of finding the matrix for a given linear transformation.
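The method can be sketched in a few lines of Python (the helper name `matrix_from_images` is mine): compute where the transformation sends (1, 0)^T and (0, 1)^T, and use those images as the columns.

```python
def matrix_from_images(Te1, Te2):
    """Build the 2 x 2 matrix of a linear transformation from the images
    of the base vectors (1, 0)^T and (0, 1)^T: they become the columns."""
    return [[Te1[0], Te2[0]],
            [Te1[1], Te2[1]]]

# Reflection in the line y = x sends (1, 0) to (0, 1) and (0, 1) to (1, 0):
R = matrix_from_images((0, 1), (1, 0))
print(R)  # [[0, 1], [1, 0]]
```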

84 Transposing normal vectors
Unfortunately, normal vectors are not always transformed properly. To transform a normal vector n under a given linear transformation A, we have to apply the matrix (A^-1)^T.
Q: obviously, for shearing, normal vectors behave funny. But what about rotations? And scalings (uniform and non-uniform)?
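A small Python sketch makes the problem concrete for a shear (the matrices and vectors below are my illustrative choices): transforming the normal with A itself breaks orthogonality with the transformed tangent, while (A^-1)^T preserves it.

```python
def mat_vec(M, v):
    """Apply a 2 x 2 matrix to a 2D vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Shear in x-direction, and its inverse-transpose
A       = [[1, 1], [0, 1]]
A_inv_T = [[1, 0], [-1, 1]]   # (A^-1)^T, with A^-1 = [[1, -1], [0, 1]]

tangent = [0, 1]   # direction along the line x = 0
normal  = [1, 0]   # a normal vector of that line

t2    = mat_vec(A, tangent)         # transformed tangent: [1, 1]
naive = mat_vec(A, normal)          # [1, 0]: NOT orthogonal to t2
good  = mat_vec(A_inv_T, normal)    # [1, -1]: orthogonal to t2

dot = lambda v, w: v[0] * w[0] + v[1] * w[1]
print(dot(naive, t2))  # 1   (wrong: no longer a normal)
print(dot(good, t2))   # 0   (correct)
```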

85 Area and determinant
For any linear transformation, the determinant represents the size change (actually: the absolute value of the determinant). For example, if a 2 × 2 matrix has determinant 3 or −3, then the linear transformation transforms a unit square to a shape with area 3.
Q: What is going on when the determinant is zero?

86 Example: rotation
To rotate 45° about the origin, we apply the matrix
(1/√2) [ 1 −1 ]
       [ 1  1 ]
Q: What is the determinant?

87 Example: scaling
To scale with a factor two with respect to the origin, we apply the matrix
[ 2 0 ]
[ 0 2 ]
Q: What is the determinant?

88 Example: scaling
Scaling doesn't have to be uniform. Here, we scale with a factor one half in the x-direction, and three in the y-direction:
[1/2 0; 0 3]
Q: What is the determinant?

89 Example: reflection
Reflection in the line y = x boils down to swapping x- and y-coordinates:
[0 1; 1 0]
Q: What is the determinant?

90 Example: projection
We can also use matrices to do orthographic projections, for instance, onto the y-axis:
[0 0; 0 1]
Q: What is the determinant?

91 Determinant = 0
The following statements are equivalent for an n x n matrix A and the linear transformation it represents:
1. The determinant of A is zero.
2. The n column vectors of A are linearly dependent.
3. The image space of the transformation is at most (n-1)-dimensional (the transformation is a projection).
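The third statement can be seen numerically (a sketch, with our own names): the projection onto the y-axis has determinant zero, and every image point lands on a single line.

```python
# A determinant-zero matrix collapses the plane onto a lower-dimensional
# image: here, projection onto the y-axis. Sketch; names are ours.

P = [[0, 0], [0, 1]]   # orthographic projection onto the y-axis

def mat_vec(m, v):
    return (m[0][0]*v[0] + m[0][1]*v[1],
            m[1][0]*v[0] + m[1][1]*v[1])

det_P = P[0][0]*P[1][1] - P[0][1]*P[1][0]   # zero

# image of a grid of points: every image has x = 0
images = [mat_vec(P, (x, y)) for x in range(-2, 3) for y in range(-2, 3)]
collapsed = all(px == 0 for (px, _) in images)
```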

92 More complex transformations
So now we know how to determine matrices for a given transformation. Let's try another one:
Q: What is the matrix for a rotation of 90° about the point (2, 1)?

93 More complex transformations
We can construct our transformation by composing three simpler transformations:
1. Translate everything such that the center of rotation maps to the origin.
2. Rotate about the origin.
3. Revert the translation from the first step.
Q: But what is the matrix for a translation?
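The three steps above can be carried out with 3x3 homogeneous matrices (introduced on the next slides); here is a sketch with our own helper names, for the rotation of 90° about (2, 1):

```python
# Rotation of 90° about (2, 1), built from three 3x3 homogeneous
# matrices: translate the center to the origin, rotate, translate back.
# Sketch with our own helper names.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

rot90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # 90° rotation about the origin

# rightmost factor is applied first
M = mat_mul(translate(2, 1), mat_mul(rot90, translate(-2, -1)))

def apply_point(m, p):
    x, y = p
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

# the center of rotation stays fixed; (3, 1) rotates to (2, 2)
```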

94 Homogeneous coordinates
Translation is not a linear transformation. A combination of linear transformations and translations is called an affine transformation. But... shearing in 2D smells a lot like translation in 1D.

95 Homogeneous coordinates
Translations in 2D can be represented by a shearing in 3D, by looking at the plane z = 1. The matrix for a translation over the vector t = (x_t, y_t)^T is
[1 0 x_t; 0 1 y_t; 0 0 1]
Q: How should we represent points? And vectors?
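One common answer to the slide's question, sketched below with our own names: give points a third coordinate w = 1, so they pick up the translation, and give direction vectors w = 0, so they do not.

```python
# With homogeneous coordinates, points get w = 1 (translated) and
# direction vectors get w = 0 (unaffected). Sketch; names are ours.

def mat_vec3(m, v):
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(3))

T = [[1, 0, 5], [0, 1, -2], [0, 0, 1]]   # translation over (5, -2)^T

point  = (1, 1, 1)    # the point (1, 1)
vector = (1, 1, 0)    # the direction (1, 1)

moved     = mat_vec3(T, point)    # translated point
unchanged = mat_vec3(T, vector)   # direction is unaffected
```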

96 Affine transformations
Q: What is the matrix for reflection in the line y = x + 5? Hint: move the line to the origin, reflect, and move the line back.

97 Affine transformations
The matrix for reflection in the line y = x + 5 is
[0 1 -5; 1 0 5; 0 0 1]

98 Affine transformations
[1 0 -5; 0 1 0; 0 0 1] · [0 1 0; 1 0 0; 0 0 1] · [1 0 5; 0 1 0; 0 0 1] = [0 1 -5; 1 0 5; 0 0 1]
The rightmost matrix of the three translates over (5, 0)^T, moving the line onto y = x; the leftmost matrix translates back over (-5, 0)^T.
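A sanity check of this translate-reflect-translate product (a sketch with our own names): points on the line y = x + 5 must be fixed, and applying the reflection twice must give the identity.

```python
# Verifying the composition for reflection in y = x + 5:
# points on the line are fixed, and the map is its own inverse.
# Sketch; names are ours.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_point(m, p):
    x, y = p
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

T_to   = [[1, 0, 5],  [0, 1, 0], [0, 0, 1]]   # move the line onto y = x
swap   = [[0, 1, 0],  [1, 0, 0], [0, 0, 1]]   # reflect in y = x
T_back = [[1, 0, -5], [0, 1, 0], [0, 0, 1]]   # move the line back

M = mat_mul(T_back, mat_mul(swap, T_to))
```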

99 Affine transformations
Q: But what if we translate by (4, -1)^T? This also makes the line y = x + 5 go through the origin.

100 Affine transformations
The matrix for reflection in the line y = x + 5 is
[0 1 -5; 1 0 5; 0 0 1]
Q: What is the significance of the columns of the matrix? Does that give us a faster way to find matrices for affine transformations?

101 Affine transformations
Q: What is the matrix for rotation about the point (2, 2)?

102 Transformations in 3D
Transformations in 3D are very similar to those in 2D:
- For scaling, we have three scaling factors on the diagonal of the matrix.
- Reflection is done with respect to planes.
- Shearing can be done in the x-, y-, or z-direction (or a combination thereof).
- Rotation is done about directed lines.
- For translations (and affine transformations in general), we use 4 x 4 matrices.

103 Affine transformations in 3D
A matrix for affine transformations in 3D looks like:
[a11 a12 a13 t1; a21 a22 a23 t2; a31 a32 a33 t3; 0 0 0 1]
where [a11 a12 a13; a21 a22 a23; a31 a32 a33] is the linear part and (t1, t2, t3)^T is where the origin ends up due to the affine transformation.
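Assembling such a 4x4 matrix is mechanical; a minimal sketch (helper names are ours) that builds it from a 3x3 linear part and a translation, and checks where the origin ends up:

```python
# A 4x4 affine matrix in 3D: 3x3 linear part in the top-left,
# translation in the last column, bottom row (0, 0, 0, 1).
# Sketch; names are ours.

def affine_3d(linear, t):
    """Assemble the 4x4 matrix from a 3x3 linear part and a translation."""
    m = [row[:] + [ti] for row, ti in zip(linear, t)]
    m.append([0, 0, 0, 1])
    return m

def apply_point(m, p):
    x, y, z = p
    return tuple(m[i][0]*x + m[i][1]*y + m[i][2]*z + m[i][3]
                 for i in range(3))

scale2 = [[2, 0, 0], [0, 2, 0], [0, 0, 2]]
M = affine_3d(scale2, [1, -1, 0])   # scale by 2, then translate

# the origin ends up at the translation column
```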

104 Extra terminology (final slide)
Some other terms that are important in linear algebra:
- Linear subspace: a lower-dimensional linear space that includes the origin (or the whole space).
- Kernel and image of a linear transformation: what maps to the origin, and the linear subspace to which all vectors are mapped.
- Rank of a matrix: the number of linearly independent columns.
- Eigenvalue, eigenvector.
When you need to know more, look in any linear algebra book.
