Introduction to Matrix Algebra I


Appendix A

Introduction to Matrix Algebra I

Today we will begin the course with a discussion of matrix algebra. Why are we studying this?

- We will use matrix algebra to derive the linear regression model, the main topic of POL 603.
- Matrices are an intuitive way to think about data: we have a set of observations (perhaps individuals) on the rows, and observe many different characteristics (such as race, gender, PID, etc.) in the columns.
- Matrices are useful for solving systems of equations, including ones that we will see in class.

A.1 Definition of Matrices and Vectors

A matrix is simply an arrangement of numbers in rectangular form. Generally, a (j, k) matrix A can be written as follows:

    A = [ a_11  a_12  ...  a_1k
          a_21  a_22  ...  a_2k
          ...
          a_j1  a_j2  ...  a_jk ]

Note that there are j rows and k columns. Also note that the elements are double subscripted, with the row number first and the column number second. When reading the text, and doing your assignments, you should always keep the dimensionality of matrices in mind. The dimensionality of the matrix is also called the order of a matrix; in these terms, the A above is of order (j, k). Let us look at a couple of examples.

The first example is a matrix W with two rows and two columns, which is of order (2, 2). This is also called a square matrix, because the row dimension equals the column dimension (j = k). There are also rectangular matrices (j ≠ k), such as a matrix Γ of order (4, 2).

In the text, and on the board, we will denote matrices as capital, boldfaced Roman or Greek letters. Roman is typically data, and Greek is typically parameters. This is not a universal convention, so be aware. Of course, I cannot do boldface on the board; thus, if you forget the dimensionality (or question whether I am talking about a matrix or a vector), stop me and ask.

There exists a special kind of matrix called a vector. Vectors are matrices that have either one row or one column. (A matrix with one row and one column is the same as a scalar, a regular number.) Row vectors are those that have a single row and multiple columns. For example, an order (1, k) row vector looks like this:

    α = [ α_1  α_2  α_3  ...  α_k ]

Similarly, column vectors are those that have a single column and multiple rows. For example, an order (k, 1) column vector looks like this:

    y = [ y_1
          y_2
          ...
          y_k ]

The convention for vectors is just like that for matrices, except the letters are lowercase: vectors are represented as lowercase, boldfaced Roman or Greek letters. Again, Roman letters typically represent data, and Greek letters represent parameters. Here, the elements are typically given a single subscript.

A.2 Matrix Addition and Subtraction

Now that we have defined matrices, vectors, and scalars, we can start to consider the operations we can perform on them. Given a matrix of numbers, one can extend regular scalar algebra in a straightforward way. Scalar addition is simply:

    m + n = 2 + 5 = 7

Addition is similarly defined for matrices. If matrices or vectors are of the same order, then they can be added; one performs the addition element by element.
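To make the notion of order concrete, here is a minimal sketch in Python, representing a matrix as a list of rows (the entries are my own, chosen only for illustration):

```python
# A matrix stored as a list of rows; W has 2 rows and 2 columns,
# so it is of order (2, 2) -- a square matrix.  Entries illustrative.
W = [[1, 3],
     [0, 2]]

def order(M):
    """Return the (rows, columns) order of a matrix."""
    return (len(M), len(M[0]))

# A (4, 2) rectangular matrix, a (1, 4) row vector, and a (4, 1) column vector.
G = [[1, 2], [3, 4], [5, 6], [7, 8]]
row_vec = [[5, 6, 7, 8]]
col_vec = [[5], [6], [7], [8]]

print(order(W))        # (2, 2)
print(order(G))        # (4, 2)
print(order(row_vec))  # (1, 4)
print(order(col_vec))  # (4, 1)
```

Keeping the order of every object in mind, as urged above, is exactly what this `order` helper does mechanically.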

Thus, for a pair of order (2, 2) matrices, addition proceeds as follows for the problem A + B = C:

    [ a_11  a_12 ] + [ b_11  b_12 ] = [ a_11 + b_11   a_12 + b_12 ] = [ c_11  c_12 ]
    [ a_21  a_22 ]   [ b_21  b_22 ]   [ a_21 + b_21   a_22 + b_22 ]   [ c_21  c_22 ]

Subtraction similarly follows. It is important to keep in mind that matrix addition and subtraction are only defined if the matrices are of the same order, or, in other words, share the same dimensionality. If they do, they are said to be conformable for addition. If not, they are nonconformable.

There are two important properties of matrix addition that are worth noting:

- A + B = B + A. In other words, matrix addition is commutative.
- (A + B) + C = A + (B + C). In other words, matrix addition is associative.

Another operation that is often useful is transposition. In this operation, the order subscripts are exchanged for each element of the matrix A. Thus, an order (j, k) matrix becomes an order (k, j) matrix. Transposition is denoted by placing a prime after a matrix, or by placing a superscript T. Here is an example:

    Q = [ q_11  q_12          Q' = [ q_11  q_21  q_31
          q_21  q_22                 q_12  q_22  q_32 ]
          q_31  q_32 ]

Note that the subscripts in the transpose remain the same; the elements are just exchanged across the diagonal. Note also that transposing a row vector turns it into a column vector, and vice versa.

There are a couple of results regarding transposition that are important to remember:

- An order (j, j) matrix A is said to be symmetric if A' = A. This implies, of course, that all symmetric matrices are square.
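These element-by-element rules are easy to verify mechanically; here is a minimal sketch (the example matrices are my own):

```python
def add(A, B):
    """Element-by-element matrix addition; A and B must be conformable
    (same order), or this raises an error."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "nonconformable"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def transpose(M):
    """Exchange row and column subscripts: a (j, k) matrix becomes (k, j)."""
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

print(add(A, B))                        # [[6, 8], [10, 12]]
print(add(A, B) == add(B, A))           # True: addition is commutative
print(transpose(transpose(A)) == A)     # True: (A')' = A
print(transpose(add(A, B)) ==
      add(transpose(A), transpose(B)))  # True: (A + B)' = A' + B'
```

The last two checks anticipate the transposition results listed just above and in the next section.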

- (A')' = A. In words, the transpose of the transpose is the original matrix.
- For a scalar k, (kA)' = kA'.
- For two matrices of the same order, it can be shown that the transpose of the sum is equal to the sum of the transposes. Symbolically: (A + B)' = A' + B'.

A.3 Matrices and Multiplication

So far we have defined addition and subtraction, as well as transposition. Now we turn our attention to multiplication. The first type of multiplication is a scalar times a matrix. In words, a scalar α times a matrix A equals the scalar times each element of A. Thus:

    αA = α [ a_11  a_12 ]  =  [ αa_11  αa_12 ]
           [ a_21  a_22 ]     [ αa_21  αa_22 ]

Now we will discuss the process of multiplying two matrices. We apply the following definition of matrix multiplication. Given A of order (m, n) and B of order (n, r), the product AB = C is the order (m, r) matrix whose entries are defined by:

    c_ij = Σ_{k=1}^{n} a_ik b_kj

where i = 1, ..., m and j = 1, ..., r. Note that for matrices to be conformable for multiplication, the number of columns in the first matrix must equal the number of rows in the second matrix (n in both cases). It is easier to see this by looking at a few examples. Given a (2, 3) matrix A and a (3, 2) matrix B, we can define their product AB. Here we would say that B is pre-multiplied by A, or that A is post-multiplied by B.
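The summation definition translates directly into a triple loop. A sketch, with made-up (2, 3) and (3, 2) matrices:

```python
def matmul(A, B):
    """Product C = AB where c_ij = sum_k a_ik * b_kj.
    A is (m, n) and B must be (n, r); the result is (m, r)."""
    m, n, r = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "not conformable for multiplication"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(r)]
            for i in range(m)]

A = [[1, 0, 2],
     [-1, 3, 1]]   # order (2, 3)
B = [[3, 1],
     [2, 1],
     [1, 0]]       # order (3, 2)

C = matmul(A, B)   # B pre-multiplied by A: order (2, 2)
print(C)           # [[5, 1], [4, 2]]
```

Note that the inner dimension n = 3 matches, so the product is conformable; `matmul(B, B)` would raise the assertion error.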

Note that A is of order (2, 3), and B is of order (3, 2). Thus, the product AB is of order (2, 2). We can similarly compute the product BA, which will be of order (3, 3). This shows that the multiplication of matrices is not commutative. In other words: AB ≠ BA.

Given this notation, there are a couple of identities worth noting:

- We have already shown that matrix multiplication is not commutative: AB ≠ BA.
- Matrix multiplication is associative. In other words: (AB)C = A(BC).
- Matrix multiplication is distributive. In other words: A(B + C) = AB + AC.
- Scalar multiplication is commutative, associative, and distributive.
- The transpose of a product takes an interesting form, which can easily be proven: (AB)' = B'A'.

Just as with scalar algebra, we use the exponentiation operator to denote repeated multiplication. For a square matrix (why square? because it is the only type of matrix conformable with itself), we use notation such as:

    A^4 = A A A A

A.4 Vectors and Multiplication

We of course use the same formula for vector multiplication as we do for matrix multiplication. There are a couple of examples that are worth looking at. Let us define the column vector e; by definition, the order of e is (N, 1). We can take the inner product of e with itself, which is simply:

    e'e = [ e_1  e_2  ...  e_N ] [ e_1
                                   e_2
                                   ...
                                   e_N ]
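One can confirm both the non-commutativity of multiplication and the product-transpose rule numerically; a sketch (the matrices are chosen arbitrarily):

```python
def matmul(A, B):
    """C = AB with c_ij = sum_k a_ik b_kj."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]

print(matmul(A, B))   # [[2, 1], [4, 3]]
print(matmul(B, A))   # [[3, 4], [1, 2]] -- AB != BA in general
print(transpose(matmul(A, B)) ==
      matmul(transpose(B), transpose(A)))  # True: (AB)' = B'A'

# Exponentiation: A^4 = A A A A (square matrices only)
A4 = matmul(matmul(A, A), matmul(A, A))
print(A4)
```

Here AB and BA happen to share the same order, yet still differ; with non-square factors they do not even have the same order.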

This can be simplified:

    e'e = e_1 e_1 + e_2 e_2 + ... + e_N e_N = Σ_{i=1}^{N} e_i^2

The inner product of a column vector with itself is simply equal to the sum of the squared values of the vector, which is used quite often in the regression model. Geometrically, the square root of the inner product is the length of the vector. One can similarly define the outer product for a column vector e, denoted ee', which yields an order (N, N) matrix.

There are a couple of other vector products that are interesting to note. Let i denote an order (N, 1) vector of ones, and x denote an order (N, 1) vector of data. The following is an interesting quantity:

    (1/N) i'x = (1/N)(x_1 + x_2 + ... + x_N) = x̄

From this, it follows that: i'x = N x̄. Similarly, let y denote another (N, 1) vector of data. The following is also interesting:

    x'y = x_1 y_1 + x_2 y_2 + ... + x_N y_N = Σ_{i=1}^{N} x_i y_i

Note that these expressions have nothing to do with mean deviation form (we actually will not be using mean deviation form much more in this course; it is most useful in scalar form). The lowercase letters represent elements of a vector.

One can employ matrix multiplication using vectors. Let A be an order (2, 3) matrix that looks like this:

    A = [ a_11  a_12  a_13
          a_21  a_22  a_23 ]

Define the row vector a_1' = [ a_11  a_12  a_13 ] and the row vector a_2' = [ a_21  a_22  a_23 ]. We can now express A as follows:

    A = [ a_1'
          a_2' ]

It is most common to represent all vectors as column vectors, so to write a row vector you use the transposition operator. This is a good habit to get into, and one I will try to use throughout the term. We are similarly given a matrix B, which is of order (3, 2). It can be written:

    B = [ b_11  b_12
          b_21  b_22    =  [ b_1  b_2 ]
          b_31  b_32 ]

where b_1 and b_2 represent the columns of B. The product of the matrices, then, can be expressed in terms of four inner products:

    AB = C = [ a_1'b_1   a_1'b_2
               a_2'b_1   a_2'b_2 ]

This is the same as the summation definition of multiplication. This too will come in useful later in the semester.
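These vector products are just special cases of the multiplication rule; a sketch with an invented data vector:

```python
def inner(u, v):
    """Inner product u'v of two column vectors (stored as flat lists)."""
    return sum(ui * vi for ui, vi in zip(u, v))

e = [1, -2, 3, 0]
x = [2, 4, 6, 8]
y = [1, 1, 2, 2]
N = len(x)
ones = [1] * N             # i, the (N, 1) vector of ones

print(inner(e, e))         # 14 = sum of squared values, e'e
print(inner(ones, x) / N)  # 5.0 = the mean, (1/N) i'x
print(inner(x, y))         # 34 = x'y = sum_i x_i y_i

# The outer product ee' is an (N, N) matrix:
outer = [[ei * ej for ej in e] for ei in e]
print(len(outer), len(outer[0]))  # 4 4
```

The `(1/N) i'x` line is the matrix-algebra mean formula above, which reappears when we derive least squares.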

A.5 Special Matrices and Their Properties

When performing scalar algebra, we know that x = 1x, which is known as the identity relationship. There is a similar relationship in matrix algebra: AI = A. What is I? It can be shown that I is a diagonal, square matrix with ones on the main diagonal, and zeros on the off diagonal. For example, the order-three identity matrix is:

    I_3 = [ 1  0  0
            0  1  0
            0  0  1 ]

Notice that I is oftentimes subscripted to denote its dimensionality. One of the nice properties of the identity matrix is that it can be inserted anywhere in a product without changing the result. That is:

    AIB = IAB = ABI = AB

We similarly are presented with the identity in scalar algebra that x + 0 = x. This generalizes to matrix algebra with the definition of the null matrix, which is simply a matrix of zeros, denoted 0_{j,k}. For example:

    A + 0_{2,2} = A

There are two other matrices worth mentioning. The first is a diagonal matrix, which takes values on the main diagonal, and zeros on the off diagonal. Formally, a matrix A is diagonal if a_ij = 0 for all i ≠ j. The identity matrix is one example; so is any matrix Ω whose only nonzero entries lie on its main diagonal. The two other types of special matrices that we will use often are square and symmetric matrices, which I defined earlier.
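A quick check that the identity and null matrices behave like 1 and 0; a sketch (the matrix A is arbitrary):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def identity(n):
    """I_n: ones on the main diagonal, zeros on the off diagonal."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[2, 7], [1, 8]]
I2 = identity(2)
null = [[0, 0], [0, 0]]           # the null matrix 0_{2,2}

print(matmul(A, I2) == A)          # True: AI = A
print(matmul(I2, A) == A)          # True: IA = A
summed = [[A[i][j] + null[i][j] for j in range(2)] for i in range(2)]
print(summed == A)                 # True: A + 0 = A
```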

Appendix B

Introduction to Matrix Algebra II

B.1 Computing Determinants

So far we have defined addition, subtraction, and multiplication, along with a few related operators. We have not, however, yet defined division. Remember this simple algebraic problem:

    2x = 6
    (1/2) 2x = (1/2) 6
    x = 3

The quantity 1/2 can be called the inverse of 2. This is exactly what we are doing when we divide in scalar algebra.

Now, let us define a matrix which is the inverse of A. Let us call that matrix A^{-1}. In scalar algebra, a number times its inverse equals one. In matrix algebra, then, we must find the matrix A^{-1} where AA^{-1} = A^{-1}A = I. Given this matrix, we can then do the following, which is what one needs to do when solving systems of equations:

    Ax = b
    A^{-1}Ax = A^{-1}b
    x = A^{-1}b

because A^{-1}A = I. The question remains as to how to compute A^{-1}. Today we will solve this problem, first by computing determinants of square matrices using the cofactor expansion. Then, we will use these determinants to form the matrix A^{-1}. Determinants are defined only for square matrices, and are scalars; they are denoted det(A) or |A|. Determinants play a very important role in determining whether a matrix is invertible, and what the inverse is.

For an order (2, 2) matrix, the determinant is defined as follows:

    det(A) = |A| = | a_11  a_12 |  =  a_11 a_22 − a_12 a_21
                   | a_21  a_22 |

How do we go about computing determinants for larger matrices? To do so, we need to define a few other quantities. For an order (n, n) square matrix A, we can define the cofactor θ_rs for each element a_rs of A. The cofactor of a_rs is denoted:

    θ_rs = (−1)^(r+s) |A_rs|

where A_rs is the matrix formed after deleting row r and column s of the matrix (|A_rs| is sometimes called the minor of a_rs). Thus, each element of the matrix A has its own cofactor, and we can therefore compute the matrix of cofactors Θ for a matrix.

We can use any row or column of the matrix of cofactors Θ to compute the determinant of a matrix A. For any row i,

    |A| = Σ_{j=1}^{n} a_ij θ_ij

or, for any column j,

    |A| = Σ_{i=1}^{n} a_ij θ_ij
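The cofactor definition gives a recursive algorithm for the determinant. A sketch (the 3 × 3 example matrix is my own, chosen only to illustrate the mechanics):

```python
def minor(A, r, s):
    """The matrix formed by deleting row r and column s of A."""
    return [row[:s] + row[s+1:] for i, row in enumerate(A) if i != r]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    if n == 2:
        return A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return sum((-1) ** s * A[0][s] * det(minor(A, 0, s)) for s in range(n))

print(det([[2, 4], [6, 3]]))   # -18 = (2)(3) - (4)(6)

B = [[1, 2, 0],
     [3, 1, 4],
     [0, 2, 1]]
print(det(B))                  # -13

# Expanding down a column (here the second) gives the same answer:
col_expand = sum((-1) ** (i + 1) * B[i][1] * det(minor(B, i, 1))
                 for i in range(len(B)))
print(col_expand == det(B))    # True
```

The `(-1) ** (r + s)` factors are the cofactor signs (with zero-based indices), and the column expansion illustrates that any row or column may be used.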

These are handy formulas, because they reduce the determinant of an order-n matrix to a combination of determinants of order (n − 1). Note that one can use any row or column to do the expansion and compute the determinant. This process is called cofactor expansion. It can be repeated on very large matrices many times to get down to an order-2 matrix.

Let us return to our example, an order (3, 3) matrix B. Doing the cofactor expansion along the first row,

    |B| = Σ_{j=1}^{n} b_1j θ_1j = 15

and down the second column,

    |B| = Σ_{i=1}^{n} b_i2 θ_i2 = 15

You can do this for the other two rows, or the other two columns, and get the same result. Now, for any given matrix, you can perform the cofactor expansion and compute the determinant. Of course, as the order n increases, this gets terribly difficult, because to compute each element of the cofactor matrix, you have to do another expansion. Computers, however, do this quite easily.

B.2 Matrix Inversion

Now, let us define a matrix which is the inverse of A. Let us call that matrix A^{-1}. In scalar algebra, a number times its inverse equals one. In matrix algebra, then, we must find the matrix A^{-1} where AA^{-1} = A^{-1}A = I.

Last time we defined two important quantities that one can use to compute inverses: the determinant and the matrix of cofactors. The determinant of a matrix A is denoted |A|, and the matrix of cofactors we denoted Θ_A. There is one more quantity that we need to define, the adjoint. The adjoint of a matrix A is denoted adj(A). The adjoint is simply the matrix of cofactors transposed. Thus:

    adj(A) = Θ'

We now know all we need to know to compute inverses. It can be shown that the inverse of a matrix A is defined as follows:

    A^{-1} = (1/|A|) adj(A)

Thus, for any matrix A that is invertible, we can compute the inverse. This is trivial for order (2, 2) matrices, and only takes a few minutes for order (3, 3) matrices. It gets much

more difficult after that, and we can use computers to compute inverses, both numerically (using Stata, Gauss, or some other package) and analytically (using Mathematica, Maple, etc.).

B.3 Examples of Inversion

We will do two examples. First, we will find the inverse of an order (2, 2) matrix:

    B = [ 2  4
          6  3 ]

We first must calculate the determinant of B:

    |B| = (2)(3) − (4)(6) = 6 − 24 = −18

We can write down the matrix of cofactors for B, which we then transpose to get the adjoint:

    Θ = [  3  −6          adj(B) = Θ' = [  3  −4
          −4   2 ]                        −6   2 ]

Given the determinant and the adjoint, we can now write down the inverse of B:

    B^{-1} = (1/−18) [  3  −4    =  [ −1/6   2/9
                       −6   2 ]       1/3  −1/9 ]

To make sure, let us check the product B^{-1}B: multiplying it out gives the identity matrix I_2.

Here is another example. If you remember, earlier we were working on an order (3, 3) matrix also called B, for which we took the time to compute the matrix of cofactors Θ, and we showed that its determinant is |B| = 15. Given this information, we can write down the inverse of that matrix as well:

    B^{-1} = (1/15) Θ'
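The adjoint formula can be coded directly. Using exact fractions, the (2, 2) example above checks out:

```python
from fractions import Fraction

def minor(A, r, s):
    return [row[:s] + row[s+1:] for i, row in enumerate(A) if i != r]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** s * A[0][s] * det(minor(A, 0, s))
               for s in range(len(A)))

def inverse(A):
    """A^{-1} = (1/|A|) adj(A), where adj(A) is the transposed cofactor matrix."""
    d = det(A)
    assert d != 0, "matrix is singular"
    n = len(A)
    cof = [[(-1) ** (r + s) * det(minor(A, r, s)) for s in range(n)]
           for r in range(n)]
    return [[Fraction(cof[s][r], d) for s in range(n)] for r in range(n)]

B = [[2, 4], [6, 3]]
Binv = inverse(B)
print(det(B))   # -18
print(Binv)     # entries -1/6, 2/9, 1/3, -1/9

# Check B^{-1} B = I_2:
prod = [[sum(Binv[i][k] * B[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod == [[1, 0], [0, 1]])  # True
```

Note that `cof[s][r]` transposes the cofactor matrix in place, so the division by the determinant is applied directly to the adjoint.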

Again, let us check to make sure this is indeed the inverse of B, by verifying:

    B^{-1}B = (1/|B|) adj(B) B = I

B.4 Diagonal Matrices

There is one type of matrix for which the computation of the inverse is nearly trivial: diagonal matrices. Let A denote a diagonal matrix of order (k, k). Remember that diagonal matrices have to be square:

    A = [ a_11  0     ...  0
          0     a_22  ...  0
          ...
          0     0     ...  a_kk ]

It can be shown that the inverse of A is the following:

    A^{-1} = [ 1/a_11  0       ...  0
               0       1/a_22  ...  0
               ...
               0       0       ...  1/a_kk ]

To show this is the case, simply multiply AA^{-1} and you will get I_k.

B.5 Conditions for Singularity

Earlier I talked about the simple algebraic problem of solving for x, and showed that one needs to premultiply by x^{-1} to solve a system of equations. This is the same operation as division. In scalar algebra, there is one number for which the inverse is not defined: 0. The quantity 1/0 is not defined. Similarly, in matrix algebra there is a set of matrices for which an inverse does not exist. Remember our formula for the inverse of a matrix A:

    A^{-1} = (1/|A|) adj(A)
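For diagonal matrices the inverse is immediate; a sketch (diagonal values chosen arbitrarily):

```python
from fractions import Fraction

def diag(values):
    """Build a diagonal matrix from the given main-diagonal values."""
    n = len(values)
    return [[values[i] if i == j else 0 for j in range(n)] for i in range(n)]

def diag_inverse(A):
    """Invert a diagonal matrix by taking reciprocals on the main diagonal."""
    n = len(A)
    assert all(A[i][j] == 0 for i in range(n) for j in range(n) if i != j)
    return diag([Fraction(1, A[i][i]) for i in range(n)])

A = diag([2, 5, 10])
Ainv = diag_inverse(A)

# AA^{-1} = I_k:
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
print(prod == diag([1, 1, 1]))  # True
```

No cofactor expansion is needed here, which is why diagonal (and hence identity) matrices are so convenient.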

Under what conditions will this not be defined? When 1/|A| is not defined, of course. This tells us, then, that if the determinant of A equals zero, then A^{-1} is not defined. For what sorts of matrices is this a problem? It can be shown that matrices that have rows or columns that are linearly dependent on other rows or columns have determinants that are equal to zero. For these matrices, the inverse is undefined.

We are given an order (k, k) matrix A, which I will denote using column vectors:

    A = [ a_1  a_2  ...  a_k ]

Each of the vectors a_i is of order (k, 1). A column a_i of A is said to be linearly independent of the others if there exists no set of scalars α_j such that:

    a_i = Σ_{j ≠ i} α_j a_j

Thus, given the rest of the columns, if we cannot find a weighted sum that gives us the column we are interested in, we say that it is linearly independent.

We can now define the term rank. The rank of a matrix is defined as the number of linearly independent columns (or rows) of a matrix. If all of the columns are independent, we say that the matrix is of full rank. We denote the rank of a matrix as r(A). By definition, r(A) is an integer that can take values from 1 to k. This is something that can be computed by software packages, such as Mathematica, Maple, or Stata.

Here are some examples:

    | 3  −6 |  =  (3)(−4) − (−6)(2)  =  0
    | 2  −4 |

Notice that the second column is −2 times the first column. The rank of this matrix is 1; it is not of full rank. Here is another example:

    | 2   7 |  =  (2)(14) − (7)(4)  =  0
    | 4  14 |

Notice that the second row is 2 times the first row. Again, the rank of this matrix is 1; it is not of full rank. A final example is an order (3, 3) matrix whose determinant (computed using Mathematica) is zero: twice the first column plus the second column equals the third column, so the rank of that matrix is 2, and it is not of full rank.

Here are a series of important statements about inverses and nomenclature:

- A must be square. It is a necessary, but not sufficient, condition that A is square for A^{-1} to exist. 
In other words, sometimes the inverse of a matrix does not exist. If the inverse of a matrix does not exist, we say that the matrix is singular.
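Matrices whose rows or columns are linearly dependent have determinant zero, which we can confirm mechanically; a sketch with matrices built to have the dependence patterns described above:

```python
def det2(A):
    """Determinant of a (2, 2) matrix: a11*a22 - a12*a21."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Second column is -2 times the first: linearly dependent columns.
A = [[3, -6],
     [2, -4]]
# Second row is 2 times the first: linearly dependent rows.
B = [[2, 7],
     [4, 14]]

print(det2(A))  # 0 -> singular, not of full rank
print(det2(B))  # 0 -> singular, not of full rank
print(det2([[2, 4], [6, 3]]))  # -18 -> nonsingular, invertible
```

The third matrix, the (2, 2) example from section B.3, has a nonzero determinant, so it is of full rank and its inverse exists.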

The following statements are equivalent: full rank, nonsingular, invertible. All of these imply that A^{-1} exists.

- If the determinant of A equals zero, then A is said to be singular, or not invertible. More generally: |A| = 0 if and only if A is singular.
- If the determinant of A is non-zero, then A is said to be nonsingular, or invertible. In other words, the inverse exists. More generally: |A| ≠ 0 if and only if A is nonsingular.
- If a matrix A is not of full rank, it is not invertible; i.e., it is singular.

B.6 Some Important Properties of Inverses

Here are some important identities that relate to matrix inversion:

- AA^{-1} = I and A^{-1}A = I.
- A^{-1} is unique.
- A must be square. It is a necessary, but not sufficient, condition that A is square for A^{-1} to exist. In other words, sometimes the inverse of a matrix does not exist; such a matrix is singular.
- (A^{-1})^{-1} = A. In words, the inverse of an inverse is the original matrix.
- Just as with transposition, it can be shown that (AB)^{-1} = B^{-1}A^{-1}.
- One can also show that the inverse of the transpose is the transpose of the inverse. Symbolically: (A')^{-1} = (A^{-1})'.

B.7 Solving Systems of Equations Using Matrices

Matrices are particularly useful when solving systems of equations, which, if you remember, is what we did when we solved for the least squares estimators. You covered this material in your high school algebra class. Here is an example, with three equations and three unknowns:

    x + 2y + z = 3
    3x − y − 3z = −1
    2x + 3y + z = 4

How would one go about solving this? There are various techniques, including substitution, and multiplying equations by constants and adding them to get single variables to cancel. There is an easier way, however, and that is to use a matrix. Note that this system of equations can be represented as Ax = b:

    [ 1   2   1 ] [ x ]     [  3 ]
    [ 3  −1  −3 ] [ y ]  =  [ −1 ]
    [ 2   3   1 ] [ z ]     [  4 ]

We can solve the problem Ax = b by pre-multiplying both sides by A^{-1} and simplifying. This yields the following:

    Ax = b
    A^{-1}Ax = A^{-1}b
    x = A^{-1}b

We can therefore solve a system of equations by computing the inverse of A, and multiplying it by b. Here our matrix A and its inverse are as follows (using Mathematica to perform the calculation):

    A = [ 1   2   1          A^{-1} = [  8   1  −5
          3  −1  −3                    −9  −1   6
          2   3   1 ]                  11   1  −7 ]

We can now solve this system of equations:

    x = A^{-1}b = [  3
                    −2
                     4 ]

so x = 3, y = −2, z = 4. If we plug these back into the original equations, we see that they in fact fulfill the identities. Computationally, this is a much easier way to solve systems of equations; we just need to compute an inverse, and perform a single matrix multiplication.

This approach only works, however, if the matrix A is nonsingular. If it is not invertible, then this will not work. In fact, if a row or a column of the matrix A is a linear combination of the others, there are either no solutions to the system of equations, or many solutions. In either case, the system is said to be under-determined. We can compute the determinant of a matrix to see if it in fact is under-determined.
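Putting the pieces together, one can solve the system by computing x = A^{-1}b in exact arithmetic. A sketch (note the signs in the second equation are reconstructed here as 3x − y − 3z = −1):

```python
from fractions import Fraction

def minor(M, r, s):
    return [row[:s] + row[s+1:] for i, row in enumerate(M) if i != r]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** s * M[0][s] * det(minor(M, 0, s))
               for s in range(len(M)))

def inverse(M):
    """(1/|M|) adj(M): the adjoint over the determinant."""
    d, n = det(M), len(M)
    cof = [[(-1) ** (r + s) * det(minor(M, r, s)) for s in range(n)]
           for r in range(n)]
    return [[Fraction(cof[s][r], d) for s in range(n)] for r in range(n)]

A = [[1, 2, 1],
     [3, -1, -3],
     [2, 3, 1]]
b = [3, -1, 4]

Ainv = inverse(A)
x = [sum(Ainv[i][k] * b[k] for k in range(3)) for i in range(3)]
print(x)  # fractions equal to 3, -2, 4 -> x = 3, y = -2, z = 4
```

Since det(A) = 1, the inverse here has integer entries; for a singular A, `inverse` would divide by zero, which is exactly the under-determined case discussed above.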


Further Maths Matrix Summary

Further Maths Matrix Summary A matrix is a rectangular array of numbers arranged in rows and columns. The numbers in a matrix are called the elements of the matrix. The order of a matrix is the number

Chapter 8. Matrices II: inverses. 8.1 What is an inverse?

Chapter 8 Matrices II: inverses We have learnt how to add subtract and multiply matrices but we have not defined division. The reason is that in general it cannot always be defined. In this chapter, we

Linear Algebra: Matrices

B Linear Algebra: Matrices B 1 Appendix B: LINEAR ALGEBRA: MATRICES TABLE OF CONTENTS Page B.1. Matrices B 3 B.1.1. Concept................... B 3 B.1.2. Real and Complex Matrices............ B 3 B.1.3.

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

1 Determinants. Definition 1

Determinants The determinant of a square matrix is a value in R assigned to the matrix, it characterizes matrices which are invertible (det 0) and is related to the volume of a parallelpiped described

Sergei Silvestrov, Christopher Engström, Karl Lundengård, Johan Richter, Jonas Österberg. November 13, 2014

Sergei Silvestrov,, Karl Lundengård, Johan Richter, Jonas Österberg November 13, 2014 Analysis Todays lecture: Course overview. Repetition of matrices elementary operations. Repetition of solvability of

Economics 102C: Advanced Topics in Econometrics 2 - Tooling Up: The Basics

Economics 102C: Advanced Topics in Econometrics 2 - Tooling Up: The Basics Michael Best Spring 2015 Outline The Evaluation Problem Matrix Algebra Matrix Calculus OLS With Matrices The Evaluation Problem:

Math 313 Lecture #10 2.2: The Inverse of a Matrix

Math 1 Lecture #10 2.2: The Inverse of a Matrix Matrix algebra provides tools for creating many useful formulas just like real number algebra does. For example, a real number a is invertible if there is

SECTION 8.3: THE INVERSE OF A SQUARE MATRIX

(Section 8.3: The Inverse of a Square Matrix) 8.47 SECTION 8.3: THE INVERSE OF A SQUARE MATRIX PART A: (REVIEW) THE INVERSE OF A REAL NUMBER If a is a nonzero real number, then aa 1 = a 1 a = 1. a 1, or

Lecture Notes 1: Matrix Algebra Part B: Determinants and Inverses

University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 57 Lecture Notes 1: Matrix Algebra Part B: Determinants and Inverses Peter J. Hammond email: p.j.hammond@warwick.ac.uk Autumn 2012,

The Characteristic Polynomial

Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

A matrix over a field F is a rectangular array of elements from F. The symbol

Chapter MATRICES Matrix arithmetic A matrix over a field F is a rectangular array of elements from F The symbol M m n (F) denotes the collection of all m n matrices over F Matrices will usually be denoted

Linear Algebra Review. Vectors

Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

Name: Section Registered In:

Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are

Lecture 21: The Inverse of a Matrix

Lecture 21: The Inverse of a Matrix Winfried Just, Ohio University October 16, 2015 Review: Our chemical reaction system Recall our chemical reaction system A + 2B 2C A + B D A + 2C 2D B + D 2C If we write

15.062 Data Mining: Algorithms and Applications Matrix Math Review

.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

Notes on Determinant

ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

Linear Dependence Tests

Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

1 Introduction. 2 Matrices: Definition. Matrix Algebra. Hervé Abdi Lynne J. Williams

In Neil Salkind (Ed.), Encyclopedia of Research Design. Thousand Oaks, CA: Sage. 00 Matrix Algebra Hervé Abdi Lynne J. Williams Introduction Sylvester developed the modern concept of matrices in the 9th

APPLICATIONS OF MATRICES. Adj A is nothing but the transpose of the co-factor matrix [A ij ] of A.

APPLICATIONS OF MATRICES ADJOINT: Let A = [a ij ] be a square matrix of order n. Let Aij be the co-factor of a ij. Then the n th order matrix [A ij ] T is called the adjoint of A. It is denoted by adj

Introduction to Matrices for Engineers

Introduction to Matrices for Engineers C.T.J. Dodson, School of Mathematics, Manchester Universit 1 What is a Matrix? A matrix is a rectangular arra of elements, usuall numbers, e.g. 1 0-8 4 0-1 1 0 11

A Introduction to Matrix Algebra and Principal Components Analysis

A Introduction to Matrix Algebra and Principal Components Analysis Multivariate Methods in Education ERSH 8350 Lecture #2 August 24, 2011 ERSH 8350: Lecture 2 Today s Class An introduction to matrix algebra

APPENDIX QA MATRIX ALGEBRA. a 11 a 12 a 1K A = [a ik ] = [A] ik = 21 a 22 a 2K. a n1 a n2 a nk

Greene-2140242 book December 1, 2010 8:8 APPENDIX QA MATRIX ALGEBRA A.1 TERMINOLOGY A matrix is a rectangular array of numbers, denoted a 11 a 12 a 1K A = [a ik ] = [A] ik = a 21 a 22 a 2K. a n1 a n2 a

Matrices: 2.3 The Inverse of Matrices

September 4 Goals Define inverse of a matrix. Point out that not every matrix A has an inverse. Discuss uniqueness of inverse of a matrix A. Discuss methods of computing inverses, particularly by row operations.

Solution to Homework 2

Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

Lecture 23: The Inverse of a Matrix

Lecture 23: The Inverse of a Matrix Winfried Just, Ohio University March 9, 2016 The definition of the matrix inverse Let A be an n n square matrix. The inverse of A is an n n matrix A 1 such that A 1

EC327: Advanced Econometrics, Spring 2007 Wooldridge, Introductory Econometrics (3rd ed, 2006) Appendix D: Summary of matrix algebra Basic definitions A matrix is a rectangular array of numbers, with m

2.1: Determinants by Cofactor Expansion. Math 214 Chapter 2 Notes and Homework. Evaluate a Determinant by Expanding by Cofactors

2.1: Determinants by Cofactor Expansion Math 214 Chapter 2 Notes and Homework Determinants The minor M ij of the entry a ij is the determinant of the submatrix obtained from deleting the i th row and the

Using determinants, it is possible to express the solution to a system of equations whose coefficient matrix is invertible:

Cramer s Rule and the Adjugate Using determinants, it is possible to express the solution to a system of equations whose coefficient matrix is invertible: Theorem [Cramer s Rule] If A is an invertible

Typical Linear Equation Set and Corresponding Matrices

EWE: Engineering With Excel Larsen Page 1 4. Matrix Operations in Excel. Matrix Manipulations: Vectors, Matrices, and Arrays. How Excel Handles Matrix Math. Basic Matrix Operations. Solving Systems of

5.3 Determinants and Cramer s Rule

290 5.3 Determinants and Cramer s Rule Unique Solution of a 2 2 System The 2 2 system (1) ax + by = e, cx + dy = f, has a unique solution provided = ad bc is nonzero, in which case the solution is given

1 Gaussian Elimination

Contents 1 Gaussian Elimination 1.1 Elementary Row Operations 1.2 Some matrices whose associated system of equations are easy to solve 1.3 Gaussian Elimination 1.4 Gauss-Jordan reduction and the Reduced

Homework: 2.1 (page 56): 7, 9, 13, 15, 17, 25, 27, 35, 37, 41, 46, 49, 67

Chapter Matrices Operations with Matrices Homework: (page 56):, 9, 3, 5,, 5,, 35, 3, 4, 46, 49, 6 Main points in this section: We define a few concept regarding matrices This would include addition of

University of Warwick, EC9A0: Pre-sessional Advanced Mathematics Course Peter J. Hammond & Pablo F. Beker 1 of 55 EC9A0: Pre-sessional Advanced Mathematics Course Slides 1: Matrix Algebra Peter J. Hammond

Lecture 10: Invertible matrices. Finding the inverse of a matrix

Lecture 10: Invertible matrices. Finding the inverse of a matrix Danny W. Crytser April 11, 2014 Today s lecture Today we will Today s lecture Today we will 1 Single out a class of especially nice matrices

In this leaflet we explain what is meant by an inverse matrix and how it is calculated.

5.5 Introduction The inverse of a matrix In this leaflet we explain what is meant by an inverse matrix and how it is calculated. 1. The inverse of a matrix The inverse of a square n n matrix A, is another

Lecture No. # 02 Prologue-Part 2

Advanced Matrix Theory and Linear Algebra for Engineers Prof. R.Vittal Rao Center for Electronics Design and Technology Indian Institute of Science, Bangalore Lecture No. # 02 Prologue-Part 2 In the last

Matrix Algebra, Class Notes (part 1) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved.

Matrix Algebra, Class Notes (part 1) by Hrishikesh D. Vinod Copyright 1998 by Prof. H. D. Vinod, Fordham University, New York. All rights reserved. 1 Sum, Product and Transpose of Matrices. If a ij with

Vector and Matrix Norms

Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

B such that AB = I and BA = I. (We say B is an inverse of A.) Definition A square matrix A is invertible (or nonsingular) if matrix

Matrix inverses Recall... Definition A square matrix A is invertible (or nonsingular) if matrix B such that AB = and BA =. (We say B is an inverse of A.) Remark Not all square matrices are invertible.

The Laplace Expansion Theorem: Computing the Determinants and Inverses of Matrices

The Laplace Expansion Theorem: Computing the Determinants and Inverses of Matrices David Eberly Geometric Tools, LLC http://www.geometrictools.com/ Copyright c 1998-2016. All Rights Reserved. Created:

Math 018 Review Sheet v.3

Math 018 Review Sheet v.3 Tyrone Crisp Spring 007 1.1 - Slopes and Equations of Lines Slopes: Find slopes of lines using the slope formula m y y 1 x x 1. Positive slope the line slopes up to the right.

Unified Lecture # 4 Vectors

Fall 2005 Unified Lecture # 4 Vectors These notes were written by J. Peraire as a review of vectors for Dynamics 16.07. They have been adapted for Unified Engineering by R. Radovitzky. References [1] Feynmann,

Solution. Area(OABC) = Area(OAB) + Area(OBC) = 1 2 det( [ 5 2 1 2. Question 2. Let A = (a) Calculate the nullspace of the matrix A.

Solutions to Math 30 Take-home prelim Question. Find the area of the quadrilateral OABC on the figure below, coordinates given in brackets. [See pp. 60 63 of the book.] y C(, 4) B(, ) A(5, ) O x Area(OABC)

Lecture Notes: Matrix Inverse. 1 Inverse Definition

Lecture Notes: Matrix Inverse Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong taoyf@cse.cuhk.edu.hk Inverse Definition We use I to represent identity matrices,

Linear Algebra: Vectors

A Linear Algebra: Vectors A Appendix A: LINEAR ALGEBRA: VECTORS TABLE OF CONTENTS Page A Motivation A 3 A2 Vectors A 3 A2 Notational Conventions A 4 A22 Visualization A 5 A23 Special Vectors A 5 A3 Vector

Linear Algebra: Determinants, Inverses, Rank

D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of

7.4. The Inverse of a Matrix. Introduction. Prerequisites. Learning Style. Learning Outcomes

The Inverse of a Matrix 7.4 Introduction In number arithmetic every number a 0 has a reciprocal b written as a or such that a ba = ab =. Similarly a square matrix A may have an inverse B = A where AB =

Solving a System of Equations

11 Solving a System of Equations 11-1 Introduction The previous chapter has shown how to solve an algebraic equation with one variable. However, sometimes there is more than one unknown that must be determined

Using the Singular Value Decomposition

Using the Singular Value Decomposition Emmett J. Ientilucci Chester F. Carlson Center for Imaging Science Rochester Institute of Technology emmett@cis.rit.edu May 9, 003 Abstract This report introduces

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form

Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...