Matrices, Gaussian elimination, determinants. Graphics 2011/2012, 4th quarter. Lecture 4: matrices, determinants


1 Lecture 4 Matrices, determinants

2 m × n matrices

    ( a_11 a_12 ... a_1n )
A = ( a_21 a_22 ... a_2n )
    ( ...                )
    ( a_m1 a_m2 ... a_mn )

is called an m × n matrix with m rows and n columns. The a_ij are called the coefficients of the matrix, and m × n is its dimension.

3 Special cases

A square matrix (one for which m = n) is called a diagonal matrix if all elements a_ij with i ≠ j are zero. If, in addition, all diagonal elements a_ii are one, the matrix is called an identity matrix, denoted I_m (depending on the context, the subscript m may be left out):

    ( 1 0 ... 0 )
I = ( 0 1 ... 0 )
    ( ...       )
    ( 0 0 ... 1 )

If all matrix entries are zero (i.e. a_ij = 0 for all i, j), the matrix is called a zero matrix, denoted 0.

4 Matrix addition

For an m_A × n_A matrix A and an m_B × n_B matrix B, we can define addition as A + B = C, with c_ij = a_ij + b_ij for all 1 ≤ i ≤ m_A and 1 ≤ j ≤ n_A. Notice that the dimensions of the matrices A and B have to fulfill the following conditions: m_A = m_B and n_A = n_B. Otherwise, addition is not defined.
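
The definition translates directly into code. A minimal sketch in Python (the function name mat_add is ours, not from the slides), including the dimension check the slide requires:

```python
def mat_add(A, B):
    """Entrywise sum: c_ij = a_ij + b_ij; only defined when dimensions match."""
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrix addition needs m_A = m_B and n_A = n_B")
    return [[a + b for a, b in zip(rowA, rowB)] for rowA, rowB in zip(A, B)]

print(mat_add([[1, 2], [3, 4]], [[10, 20], [30, 40]]))  # [[11, 22], [33, 44]]
```

Subtraction works the same way with `a - b` in place of `a + b`.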

5 Matrix subtraction

Similarly, we can define subtraction between an m_A × n_A matrix A and an m_B × n_B matrix B as A − B = C, with c_ij = a_ij − b_ij for all 1 ≤ i ≤ m_A and 1 ≤ j ≤ n_A. Again, for the dimensions of the matrices A and B we must have m_A = m_B and n_A = n_B.

6 Multiplication with a scalar

Multiplying an m_A × n_A matrix A with a scalar c is defined as cA = B, with b_ij = c a_ij for all 1 ≤ i ≤ m_A and 1 ≤ j ≤ n_A. Obviously, there are no restrictions in this case (other than c being a scalar value, of course).

7 Matrix multiplication

The multiplication of two matrices with dimensions m_A × n_A and m_B × n_B is defined as AB = C with

    c_ij = sum_{k=1}^{n_A} a_ik b_kj

Again, we see that a certain condition has to be fulfilled, namely n_A = m_B. The dimensions of the resulting matrix C are m_A × n_B.
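
The summation formula above can be sketched directly in Python (mat_mul is our name, not from the slides); note how the dimension condition n_A = m_B appears as the check, and the result has dimensions m_A × n_B:

```python
def mat_mul(A, B):
    """Product C = AB with c_ij = sum over k of a_ik * b_kj; needs n_A = m_B."""
    if len(A[0]) != len(B):
        raise ValueError("matrix product needs n_A = m_B")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```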

8 Matrix multiplication

Useful notation when doing this on paper: [figure of the multiplication tableau omitted]

9 Properties of matrix multiplication

Matrix multiplication is distributive over addition:

    A(B + C) = AB + AC
    (A + B)C = AC + BC

and it is associative: (AB)C = A(BC). However, it is not commutative, i.e. in general AB is not the same as BA.
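
These properties can be checked numerically. A small sketch (helper names mul/add and the particular test matrices are ours):

```python
def mul(A, B):
    # row of A dotted with each column of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A, B, C = [[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 0], [0, 2]]

# distributive and associative laws hold
assert mul(A, add(B, C)) == add(mul(A, B), mul(A, C))
assert mul(mul(A, B), C) == mul(A, mul(B, C))

# but multiplication is not commutative
print(mul(A, B), mul(B, A))  # [[2, 1], [4, 3]] [[3, 4], [1, 2]]
```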

10 Properties of matrix multiplication

Proof that matrix multiplication is not commutative, i.e. that in general AB ≠ BA.

11 Properties of matrix multiplication

Alternative proof (proof by counterexample): take for instance the two matrices

A = ( 1 2 )    B = ( 0 1 )
    ( 3 4 )        ( 1 0 )

Then

AB = ( 2 1 )    but    BA = ( 3 4 )
     ( 4 3 )                ( 1 2 )

so AB ≠ BA.

12 Identity and zero matrix revisited

Identity matrix I_m:

    ( 1 0 ... 0 )
I = ( 0 1 ... 0 )
    ( ...       )
    ( 0 0 ... 1 )

With matrix multiplication we get IA = AI = A (hence the name identity matrix).

13 Identity and zero matrix revisited

Zero matrix 0: all entries are zero. With matrix multiplication we get 0A = A0 = 0.

14 Transpose of a matrix

The transpose A^T of an m × n matrix A is the n × m matrix obtained by interchanging the rows and columns of A, so a_ij becomes a_ji for all i, j:

    ( a_11 a_12 ... a_1n )          ( a_11 a_21 ... a_m1 )
A = ( a_21 a_22 ... a_2n )    A^T = ( a_12 a_22 ... a_m2 )
    ( ...                )          ( ...                )
    ( a_m1 a_m2 ... a_mn )          ( a_1n a_2n ... a_mn )

15 Transpose of a matrix

Example:

A = ( 1 2 )    A^T = ( 1 )
                     ( 2 )

For the transpose of the product of two matrices we have (AB)^T = B^T A^T.

16 Transpose of a matrix

For the transpose of the product of two matrices we have (AB)^T = B^T A^T. Let's look at the left side first. The entry of AB at row i, column j is

    c_ij = a_i1 b_1j + a_i2 b_2j + ... + a_{i n_A} b_{n_A j}

for all 1 ≤ i ≤ m_A and 1 ≤ j ≤ n_B, so the entry of (AB)^T at row j, column i is this same c_ij.

17 Transpose of a matrix

Now let's look at the right side of the equation. The entry of B^T A^T at row j, column i is

    d_ji = b_1j a_i1 + b_2j a_i2 + ... + b_{n_A j} a_{i n_A}

for all 1 ≤ i ≤ m_A and 1 ≤ j ≤ n_B. This is the same sum as c_ij, so (AB)^T = B^T A^T.
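
The identity can also be sanity-checked numerically, even for non-square matrices. A sketch (transpose and mat_mul are our helper names):

```python
def transpose(M):
    """Swap rows and columns: entry (i, j) becomes entry (j, i)."""
    return [list(col) for col in zip(*M)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2, 3], [4, 5, 6]]         # 2 x 3
B = [[7, 8], [9, 10], [11, 12]]    # 3 x 2

# (AB)^T equals B^T A^T
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
print(transpose(mat_mul(A, B)))  # [[58, 139], [64, 154]]
```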

18 Comment

Notice the differences in the two proofs:

For AB ≠ BA, we showed that the general statement AB = BA is not true. Hence, it was enough to give one example where this general statement was not fulfilled.

For (AB)^T = B^T A^T, we needed to show that this general statement is true. Hence, it is not enough to just give one (or more) examples; we have to prove that it holds for all possible values.

19 Inverse matrices

The inverse of a matrix A is a matrix A^{-1} such that AA^{-1} = I. Only square matrices can possibly have an inverse (and not every square matrix has one). Note that the inverse of A^{-1} is A, so we have AA^{-1} = A^{-1}A = I.

20 The dot product revisited

If we regard (column) vectors as n × 1 matrices, we see that the dot product of two vectors can be written as u · v = u^T v. (A 1 × 1 matrix is simply a number, and the brackets are omitted.)

Note: Remember the different vector notations, e.g.

    ( v_1 )
v = ( v_2 ) = (v_1, v_2, v_3) = (v_1 v_2 v_3)^T
    ( v_3 )

21 Linear equation systems (LES)

The system of m linear equations in n variables x_1, x_2, ..., x_n

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
    ...
    a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

can be written as a matrix equation Ax = b, or in full

    ( a_11 a_12 ... a_1n ) ( x_1 )   ( b_1 )
    ( a_21 a_22 ... a_2n ) ( x_2 ) = ( b_2 )
    ( ...                ) ( ... )   ( ... )
    ( a_m1 a_m2 ... a_mn ) ( x_n )   ( b_m )

22 LESs in graphics

Suppose we want to solve the following system:

    x + y + 2z = 17
    2x + y + z = 15
    x + 2y + 3z = 26

Q: what is the geometric interpretation of this system?

23 LESs in graphics

Suppose we want to solve the following system:

    x + y + 2z − 17 = 0
    2x + y + z − 15 = 0
    x + 2y + 3z − 26 = 0

Q: what is the geometric interpretation of this system?

24 Gaussian elimination

If an LES has a unique solution, it can be solved with Gaussian elimination. Matrices are not necessary for this, but very convenient, especially augmented matrices. The augmented matrix corresponding to a system of m linear equations is

    ( a_11 a_12 ... a_1n | b_1 )
    ( a_21 a_22 ... a_2n | b_2 )
    ( ...                      )
    ( a_m1 a_m2 ... a_mn | b_m )

25 LESs in graphics

Example: the previous LES

    x + y + 2z − 17 = 0
    2x + y + z − 15 = 0
    x + 2y + 3z − 26 = 0

and its related augmented matrix:

    ( 1 1 2 | 17 )
    ( 2 1 1 | 15 )
    ( 1 2 3 | 26 )

26 Gaussian elimination

Basic idea of Gaussian elimination: apply certain operations to the matrix that do not change the solution, in order to bring the matrix into a form from which we can immediately read off the solution. The permitted operations in Gaussian elimination are:

    interchanging two rows,
    multiplying a row with a (non-zero) constant,
    adding a multiple of another row to a row.

27 Gaussian elimination: example

Permitted operations: (1) exchange rows, (2) multiply a row with a scalar, (3) add a multiple of one row to another.

28 Gaussian elimination: example

Remember that the augmented matrix represents the linear equation system Ax = b, i.e.

    ( 1 0 0 | 3 )
    ( 0 1 0 | 4 )
    ( 0 0 1 | 5 )

is equivalent to

    1x + 0y + 0z = 3
    0x + 1y + 0z = 4
    0x + 0y + 1z = 5

which directly gives us the solution x = 3, y = 4, and z = 5.

Q: what is the geometric interpretation of this solution?
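
The elimination procedure can be sketched in code using exactly the three permitted row operations; gauss_solve is our name, and we use exact fractions so no rounding error creeps in. Run on the system from the slides, it reproduces x = 3, y = 4, z = 5:

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Reduce the augmented matrix [A | b] with the three permitted row
    operations until the left block is the identity; the right column is
    then the solution."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    for col in range(n):
        # (1) swap a row with a non-zero entry into the pivot position
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # (2) scale the pivot row so the pivot becomes 1
        M[col] = [x / M[col][col] for x in M[col]]
        # (3) add multiples of the pivot row to clear the rest of the column
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n] for row in M]

# the system from the slides: x + y + 2z = 17, 2x + y + z = 15, x + 2y + 3z = 26
print(gauss_solve([[1, 1, 2], [2, 1, 1], [1, 2, 3]], [17, 15, 26]))
```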

29 Geometric interpretation

Q: what is the geometric interpretation of this system? First: the intersection of two planes.

30 Geometric interpretation

Intersection of three planes.

31 Geometric interpretation

Q: what about the other cases (e.g. a line of intersection, or no intersection)? Q: and what if an LES cannot be reduced to a triangular form? Three possible situations:

    We get a row 0x + 0y + 0z = b (with b ≠ 0): no solution.
    Or we get one row 0x + 0y + 0z = 0: the solutions form a line.
    Or we get two rows 0x + 0y + 0z = 0: the solutions form a plane.

32 Gaussian elimination

Note: in the literature you also find a slightly different procedure.
1st: put 0's in the lower triangle... (forward step)
2nd: ...then work your way back up (backward step)

Another note: Gaussian elimination can also be used to invert matrices (and you will do this in the tutorials).
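
Inverting a matrix with Gaussian elimination works by row-reducing the block [A | I] until the left half is the identity; the right half is then A^{-1}. A sketch under that idea (invert is our name, exact fractions again):

```python
from fractions import Fraction

def invert(A):
    """Row-reduce [A | I] to [I | A^-1] with the three row operations."""
    n = len(A)
    M = [[Fraction(x) for x in row] +
         [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[1, 1, 2], [2, 1, 1], [1, 2, 3]]
Ainv = invert(A)
# multiplying back should give the identity
check = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
print(check)
```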

33 Determinants

The determinant of an n × n matrix is the signed volume spanned by its column vectors. The determinant det A of a matrix A is also written as |A|. For example, for

A = ( a_11 a_12 )
    ( a_21 a_22 )

we write

det A = |A| = | a_11 a_12 |
              | a_21 a_22 |

The column vectors here are (a_11, a_21) and (a_12, a_22).

34 Determinants: geometric interpretation

In 2D: det A is the oriented area of the parallelogram defined by the 2 column vectors (a_11, a_21) and (a_12, a_22).

In 3D: det A is the oriented volume of the parallelepiped defined by the 3 column vectors.

35 Computing determinants

Using Laplace's expansion, determinants can be computed as follows: the determinant of a matrix is the sum of the products of the elements of any row or column of the matrix with their cofactors. If only we knew what cofactors are...

36 Cofactors

Take a deep breath... The cofactor of an entry a_ij in an n × n matrix A is the determinant of the (n−1) × (n−1) matrix that is obtained from A by removing the i-th row and the j-th column, multiplied by (−1)^(i+j). Right: long live recursion!

37 Cofactors

Example: for a 4 × 4 matrix

    ( a_11 a_12 a_13 a_14 )
A = ( a_21 a_22 a_23 a_24 )
    ( a_31 a_32 a_33 a_34 )
    ( a_41 a_42 a_43 a_44 )

the cofactor of the entry a_13 is

a^c_13 = (−1)^(1+3) | a_21 a_22 a_24 |
                    | a_31 a_32 a_34 |
                    | a_41 a_42 a_44 |

and |A| = a_11 a^c_11 + a_12 a^c_12 + a_13 a^c_13 + a_14 a^c_14.
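
The recursion in the cofactor definition can be written down directly. A sketch expanding along the first row (det is our name):

```python
def det(M):
    """Laplace expansion along the first row:
    |M| = sum_j a_1j * cofactor(1, j), where the cofactor is (-1)^(1+j)
    times the determinant of M with row 1 and column j removed."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]  # drop row 1, column j
        total += (-1) ** j * M[0][j] * det(minor)  # (-1)^j is (-1)^(1+j) in 1-based indices
    return total

print(det([[1, 2], [3, 4]]))  # -2
```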

38 Determinants and cofactors

Example:

| 0 1 2 |
| 3 4 5 | = 0 · (−1)^(1+1) | 4 5 | + 1 · (−1)^(1+2) | 3 5 | + 2 · (−1)^(1+3) | 3 4 |
| 6 7 8 |                  | 7 8 |                   | 6 8 |                   | 6 7 |

= 0 − 1(3·8 − 5·6) + 2(3·7 − 4·6) = 0 − 1(24 − 30) + 2(21 − 24) = 6 − 6 = 0.

39 Computing determinants for 3 × 3 matrices

For 3 × 3 matrices:

| a b c |
| d e f | = (aei + bfg + cdh) − (ceg + afh + bdi)
| g h i |

An easy way to do this on paper (Rule of Sarrus): copy the first two columns to the right of the matrix, then add the products along the three down-right diagonals and subtract the products along the three up-right diagonals:

    a b c | a b
    d e f | d e
    g h i | g h
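
The Sarrus formula fits in a couple of lines (sarrus is our name); note the comment, since this shortcut really is only valid for 3 × 3 matrices:

```python
def sarrus(M):
    """Rule of Sarrus, valid only for 3x3 matrices:
    (aei + bfg + cdh) - (ceg + afh + bdi)."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return (a*e*i + b*f*g + c*d*h) - (c*e*g + a*f*h + b*d*i)

print(sarrus([[0, 1, 2], [3, 4, 5], [6, 7, 8]]))  # 0, as on the previous slide
```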

40 Computing determinants for 2 × 2 matrices

It also works for 2 × 2 matrices:

| a b |
| c d | = ad − bc

But unfortunately not for dimensions higher than 3!

41 The cross product revisited

Let's look at another example:

| x   y   z   |
| v_1 v_2 v_3 |
| w_1 w_2 w_3 |

Expanding along the first row (with x, y, z the unit vectors), we get

(v_2 w_3 − v_3 w_2) x + (v_3 w_1 − v_1 w_3) y + (v_1 w_2 − v_2 w_1) z

which is exactly the cross product v × w.
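
Reading the components off that expansion gives a one-line cross product (cross is our name):

```python
def cross(v, w):
    """Cross product: the coefficients of x, y, z in the cofactor
    expansion of the determinant |x y z; v1 v2 v3; w1 w2 w3|."""
    v1, v2, v3 = v
    w1, w2, w3 = w
    return (v2*w3 - v3*w2, v3*w1 - v1*w3, v1*w2 - v2*w1)

print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1)
```

As a quick check, the result is orthogonal to both inputs (their dot products with it vanish).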

42 Systems of linear equations and determinants

Consider our system of linear equations again:

    x + y + 2z = 17
    2x + y + z = 15
    x + 2y + 3z = 26

Such a system of n equations in n unknowns can be solved using determinants via Cramer's rule: if we have Ax = b, then

    x_i = |A_i| / |A|

where A_i is obtained from A by replacing the i-th column with b.

43 Systems of linear equations and determinants

So for our system

    ( 1 1 2 ) ( x )   ( 17 )
    ( 2 1 1 ) ( y ) = ( 15 )
    ( 1 2 3 ) ( z )   ( 26 )

we have (writing the rows of each determinant separated by semicolons)

    x = | 17 1 2 ; 15 1 1 ; 26 2 3 | / | 1 1 2 ; 2 1 1 ; 1 2 3 | = 6 / 2 = 3
    y = | 1 17 2 ; 2 15 1 ; 1 26 3 | / | 1 1 2 ; 2 1 1 ; 1 2 3 | = 8 / 2 = 4
    z = | 1 1 17 ; 2 1 15 ; 1 2 26 | / | 1 1 2 ; 2 1 1 ; 1 2 3 | = 10 / 2 = 5
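
Cramer's rule can be sketched on top of the recursive determinant (cramer and det are our names); on the slide's system it reproduces x = 3, y = 4, z = 5:

```python
from fractions import Fraction

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def cramer(A, b):
    """x_i = |A_i| / |A|, where A_i is A with column i replaced by b."""
    d = det(A)
    return [Fraction(det([row[:i] + [bi] + row[i + 1:]
                          for row, bi in zip(A, b)]), d)
            for i in range(len(A))]

print(cramer([[1, 1, 2], [2, 1, 1], [1, 2, 3]], [17, 15, 26]))
```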

44 Systems of linear equations and determinants

x_i = |A_i| / |A|. Why? In 2D:

    a_11 x_1 + a_12 x_2 = b_1
    a_21 x_1 + a_22 x_2 = b_2

i.e. x_1 a_1 + x_2 a_2 = b, where a_1 and a_2 are the columns of A. From the image we see:

    | b  a_2 | = | x_1 a_1  a_2 |   (because shearing a parallelogram doesn't change its volume; b differs from x_1 a_1 only by a multiple of a_2)

and

    | x_1 a_1  a_2 | = x_1 | a_1  a_2 |   (because scaling one side of a parallelogram changes its volume by the same factor)

Together these give x_1 = | b a_2 | / | a_1 a_2 | = |A_1| / |A|.


46 Determinants and inverse matrices

Determinants can also be used to compute the inverse A^{-1} of an invertible matrix A:

    A^{-1} = Ã / |A|

where Ã is the adjoint of A, which is the transpose of the cofactor matrix of A. The cofactor matrix of A is obtained from A by replacing every entry a_ij by its cofactor a^c_ij.
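
This adjoint formula can be sketched directly (inverse and det are our names); building the cofactor matrix, transposing it, and dividing by the determinant:

```python
from fractions import Fraction

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def inverse(A):
    """A^-1 = adj(A) / |A|: the adjoint adj(A) is the transpose of the
    cofactor matrix, so entry (i, j) of A^-1 is cofactor(j, i) / |A|."""
    n = len(A)
    d = det(A)
    cof = [[(-1) ** (i + j) *
            det([row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i])
            for j in range(n)]
           for i in range(n)]
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

A = [[1, 1, 2], [2, 1, 1], [1, 2, 3]]
Ainv = inverse(A)
print(Ainv)
```

For a well-conditioned sanity check, A times the result should be the identity matrix.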


Grassmann Algebra in Game Development. Eric Lengyel, PhD Terathon Software

Grassmann Algebra in Game Development Eric Lengyel, PhD Terathon Software Math used in 3D programming Dot / cross products, scalar triple product Planes as 4D vectors Homogeneous coordinates Plücker coordinates

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation

Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in

C relative to O being abc,, respectively, then b a c.

2 EP-Program - Strisuksa School - Roi-et Math : Vectors Dr.Wattana Toutip - Department of Mathematics Khon Kaen University 200 :Wattana Toutip wattou@kku.ac.th http://home.kku.ac.th/wattou 2. Vectors A

BX in ( u, v) basis in two ways. On the one hand, AN = u+

1. Let f(x) = 1 x +1. Find f (6) () (the value of the sixth derivative of the function f(x) at zero). Answer: 7. We expand the given function into a Taylor series at the point x = : f(x) = 1 x + x 4 x

11.1. Objectives. Component Form of a Vector. Component Form of a Vector. Component Form of a Vector. Vectors and the Geometry of Space

11 Vectors and the Geometry of Space 11.1 Vectors in the Plane Copyright Cengage Learning. All rights reserved. Copyright Cengage Learning. All rights reserved. 2 Objectives! Write the component form of

Classification of Cartan matrices

Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms

Section 8.8. 1. The given line has equations. x = 3 + t(13 3) = 3 + 10t, y = 2 + t(3 + 2) = 2 + 5t, z = 7 + t( 8 7) = 7 15t.

. The given line has equations Section 8.8 x + t( ) + 0t, y + t( + ) + t, z 7 + t( 8 7) 7 t. The line meets the plane y 0 in the point (x, 0, z), where 0 + t, or t /. The corresponding values for x and

Boolean Algebra Part 1

Boolean Algebra Part 1 Page 1 Boolean Algebra Objectives Understand Basic Boolean Algebra Relate Boolean Algebra to Logic Networks Prove Laws using Truth Tables Understand and Use First Basic Theorems

8. Linear least-squares

8. Linear least-squares EE13 (Fall 211-12) definition examples and applications solution of a least-squares problem, normal equations 8-1 Definition overdetermined linear equations if b range(a), cannot

IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction

IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL R. DRNOVŠEK, T. KOŠIR Dedicated to Prof. Heydar Radjavi on the occasion of his seventieth birthday. Abstract. Let S be an irreducible

RESULTANT AND DISCRIMINANT OF POLYNOMIALS

RESULTANT AND DISCRIMINANT OF POLYNOMIALS SVANTE JANSON Abstract. This is a collection of classical results about resultants and discriminants for polynomials, compiled mainly for my own use. All results

Determinants can be used to solve a linear system of equations using Cramer s Rule.

2.6.2 Cramer s Rule Determinants can be used to solve a linear system of equations using Cramer s Rule. Cramer s Rule for Two Equations in Two Variables Given the system This system has the unique solution

State of Stress at Point

State of Stress at Point Einstein Notation The basic idea of Einstein notation is that a covector and a vector can form a scalar: This is typically written as an explicit sum: According to this convention,

4. FIRST STEPS IN THE THEORY 4.1. A

4. FIRST STEPS IN THE THEORY 4.1. A Catalogue of All Groups: The Impossible Dream The fundamental problem of group theory is to systematically explore the landscape and to chart what lies out there. We

2.1 Introduction. 2.2 Terms and definitions

.1 Introduction An important step in the procedure for solving any circuit problem consists first in selecting a number of independent branch currents as (known as loop currents or mesh currents) variables,

Content. Chapter 4 Functions 61 4.1 Basic concepts on real functions 62. Credits 11

Content Credits 11 Chapter 1 Arithmetic Refresher 13 1.1 Algebra 14 Real Numbers 14 Real Polynomials 19 1.2 Equations in one variable 21 Linear Equations 21 Quadratic Equations 22 1.3 Exercises 28 Chapter

Notes from February 11

Notes from February 11 Math 130 Course web site: www.courses.fas.harvard.edu/5811 Two lemmas Before proving the theorem which was stated at the end of class on February 8, we begin with two lemmas. The

Orthogonal Bases and the QR Algorithm

Orthogonal Bases and the QR Algorithm Orthogonal Bases by Peter J Olver University of Minnesota Throughout, we work in the Euclidean vector space V = R n, the space of column vectors with n real entries

Solutions to old Exam 1 problems

Solutions to old Exam 1 problems Hi students! I am putting this old version of my review for the first midterm review, place and time to be announced. Check for updates on the web site as to which sections

2.3. Finding polynomial functions. An Introduction:

2.3. Finding polynomial functions. An Introduction: As is usually the case when learning a new concept in mathematics, the new concept is the reverse of the previous one. Remember how you first learned

You know from calculus that functions play a fundamental role in mathematics.

CHPTER 12 Functions You know from calculus that functions play a fundamental role in mathematics. You likely view a function as a kind of formula that describes a relationship between two (or more) quantities.

Section 1.4. Lines, Planes, and Hyperplanes. The Calculus of Functions of Several Variables

The Calculus of Functions of Several Variables Section 1.4 Lines, Planes, Hyperplanes In this section we will add to our basic geometric understing of R n by studying lines planes. If we do this carefully,

x + y + z = 1 2x + 3y + 4z = 0 5x + 6y + 7z = 3

Math 24 FINAL EXAM (2/9/9 - SOLUTIONS ( Find the general solution to the system of equations 2 4 5 6 7 ( r 2 2r r 2 r 5r r x + y + z 2x + y + 4z 5x + 6y + 7z 2 2 2 2 So x z + y 2z 2 and z is free. ( r

Electrical Engineering 103 Applied Numerical Computing

UCLA Fall Quarter 2011-12 Electrical Engineering 103 Applied Numerical Computing Professor L Vandenberghe Notes written in collaboration with S Boyd (Stanford Univ) Contents I Matrix theory 1 1 Vectors

4.1. COMPLEX NUMBERS

4.1. COMPLEX NUMBERS What You Should Learn Use the imaginary unit i to write complex numbers. Add, subtract, and multiply complex numbers. Use complex conjugates to write the quotient of two complex numbers

1.5. Factorisation. Introduction. Prerequisites. Learning Outcomes. Learning Style

Factorisation 1.5 Introduction In Block 4 we showed the way in which brackets were removed from algebraic expressions. Factorisation, which can be considered as the reverse of this process, is dealt with

Math 241, Exam 1 Information.

Math 241, Exam 1 Information. 9/24/12, LC 310, 11:15-12:05. Exam 1 will be based on: Sections 12.1-12.5, 14.1-14.3. The corresponding assigned homework problems (see http://www.math.sc.edu/ boylan/sccourses/241fa12/241.html)