# Introduction to Matrix Algebra


Psychology 7291: Multivariate Statistics (Carey) 8/27/98

Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary to enclose the elements of a matrix in parentheses, brackets, or braces. A matrix with two rows and three columns is referred to as a 2 by 3 matrix. The elements of a matrix are numbered in the following way:

$$X = \begin{pmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \end{pmatrix}$$

That is, the first subscript in a matrix refers to the row and the second subscript refers to the column. It is important to remember this convention when matrix algebra is performed.

A vector is a special type of matrix that has only one row (called a row vector) or one column (called a column vector). Below, **a** is a column vector while **b** is a row vector:

$$\mathbf{a} = \begin{pmatrix} 7 \\ 2 \\ 3 \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} b_1 & b_2 & b_3 \end{pmatrix}$$

A scalar is a matrix with only one row and one column. It is customary to denote scalars by italicized, lower case letters (e.g., *x*), to denote vectors by bold, lower case letters (e.g., **x**), and to denote matrices with more than one row and one column by bold, upper case letters (e.g., **X**).

A square matrix has as many rows as it has columns. A symmetric matrix is a square matrix in which $x_{ij} = x_{ji}$ for all i and j. A diagonal matrix is a symmetric matrix where all the off-diagonal elements are 0.
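These definitions map directly onto arrays in, say, NumPy. A minimal sketch; apart from the column vector **a**, the numeric values below are illustrative, not from the notes:

```python
import numpy as np

# A 2 x 3 matrix: the first subscript is the row, the second the column.
X = np.array([[1, 4, 2],
              [5, 3, 6]])
print(X.shape)                     # (2, 3) -> 2 rows, 3 columns

# Column and row vectors are matrices with one column or one row.
a = np.array([[7], [2], [3]])      # 3 x 1 column vector (from the text)
b = np.array([[1, 4, 2]])          # 1 x 3 row vector (illustrative)

# A symmetric matrix equals its transpose: x_ij == x_ji for all i, j.
A = np.array([[9, 1, 6],
              [1, 5, 2],
              [6, 2, 7]])
print(np.array_equal(A, A.T))      # True

# A diagonal matrix: symmetric, with zero off-diagonal elements.
D = np.diag([3, 1, 4])
print(D)
```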

An identity matrix is a diagonal matrix with 1s and only 1s on the diagonal. The identity matrix is almost always denoted as **I**:

$$I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Matrix Addition and Subtraction: To add two matrices, they both must have the same number of rows and the same number of columns. The elements of the two matrices are simply added together, element by element, to produce the result. That is, for $R = A + B$, $r_{ij} = a_{ij} + b_{ij}$ for all i and j. Matrix subtraction works in the same way, except that elements are subtracted instead of added.

Matrix Multiplication: There are several rules for matrix multiplication. The first concerns multiplication between a matrix and a scalar. Here, each element in the product matrix is simply the scalar multiplied by the element in the matrix. That is, for $R = aB$, $r_{ij} = a \cdot b_{ij}$ for all i and j. Matrix multiplication involving a scalar is commutative: $aB = Ba$.

The next rule involves the multiplication of a row vector by a column vector. To perform this, the row vector must have as many columns as the column vector has rows. For example,

$$\begin{pmatrix} 1 & 7 & 5 \end{pmatrix} \begin{pmatrix} 2 \\ 4 \\ 1 \end{pmatrix}$$

is legal. However,

multiplying $(1 \;\; 7 \;\; 5)$ by a column vector with four rows is not legal, because the row vector has three columns while the column vector has four rows.

The product of a row vector multiplied by a column vector will be a scalar. This scalar is simply the sum of the first row vector element multiplied by the first column vector element, plus the second row vector element multiplied by the second column vector element, plus the product of the third elements, and so on. In algebra, if $r = \mathbf{ab}$, then

$$r = \sum_{i=1}^{n} a_i b_i$$

Thus,

$$\begin{pmatrix} 2 & 6 & 3 \end{pmatrix} \begin{pmatrix} 8 \\ 1 \\ 4 \end{pmatrix} = 2 \cdot 8 + 6 \cdot 1 + 3 \cdot 4 = 34$$

All other types of matrix multiplication involve the multiplication of a row vector and a column vector. Specifically, in the expression $R = AB$,

$$r_{ij} = \mathbf{a}_i \mathbf{b}_j$$

where $\mathbf{a}_i$ is the ith row vector in matrix A and $\mathbf{b}_j$ is the jth column vector in matrix B. Thus, if

$$A = \begin{pmatrix} 2 & 8 & 1 \\ 3 & 6 & 4 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 1 & 7 \\ 9 & -2 \\ 6 & 3 \end{pmatrix}$$

then

$$r_{11} = \mathbf{a}_1 \mathbf{b}_1 = \begin{pmatrix} 2 & 8 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 9 \\ 6 \end{pmatrix} = 2 + 72 + 6 = 80$$

and

$$r_{12} = \mathbf{a}_1 \mathbf{b}_2 = \begin{pmatrix} 2 & 8 & 1 \end{pmatrix} \begin{pmatrix} 7 \\ -2 \\ 3 \end{pmatrix} = 14 + (-16) + 3 = 1$$

and

$$r_{21} = \mathbf{a}_2 \mathbf{b}_1 = \begin{pmatrix} 3 & 6 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 9 \\ 6 \end{pmatrix} = 3 + 54 + 24 = 81$$

and

$$r_{22} = \mathbf{a}_2 \mathbf{b}_2 = \begin{pmatrix} 3 & 6 & 4 \end{pmatrix} \begin{pmatrix} 7 \\ -2 \\ 3 \end{pmatrix} = 21 + (-12) + 12 = 21$$

Hence,

$$\begin{pmatrix} 2 & 8 & 1 \\ 3 & 6 & 4 \end{pmatrix} \begin{pmatrix} 1 & 7 \\ 9 & -2 \\ 6 & 3 \end{pmatrix} = \begin{pmatrix} 80 & 1 \\ 81 & 21 \end{pmatrix}$$

For matrix multiplication to be legal, the first matrix must have as many columns as the second matrix has rows. This, of course, is the requirement for multiplying a row vector by a column vector. The resulting matrix will have as many rows as the first matrix and as many columns as the second matrix. Because A has 2 rows and 3 columns while B has 3 rows and 2 columns, the matrix multiplication may legally proceed and the resulting matrix will have 2 rows and 2 columns.

Because of these requirements, matrix multiplication is usually not commutative. That is, usually $AB \neq BA$. And even if AB is a legal operation, there is no guarantee that BA will also be legal. For these reasons, the terms premultiply and postmultiply are often encountered in matrix algebra, while they are seldom encountered in scalar algebra.

One special case to be aware of is when a column vector is postmultiplied by a row vector. That is, what is

$$\begin{pmatrix} 3 \\ 4 \\ 7 \end{pmatrix} \begin{pmatrix} 8 & 2 \end{pmatrix} \; ?$$

In this case, one simply follows the rules given above for the multiplication of two matrices. Note that the first matrix has one column and the second matrix has one row, so the matrix multiplication is legal. The resulting matrix will have as many rows as the first matrix (3) and as many columns as the second matrix (2). Hence, the result is

$$\begin{pmatrix} 3 \\ 4 \\ 7 \end{pmatrix} \begin{pmatrix} 8 & 2 \end{pmatrix} = \begin{pmatrix} 24 & 6 \\ 32 & 8 \\ 56 & 14 \end{pmatrix}$$

Similarly, multiplication of a matrix times a vector (or a vector times a matrix) will also conform to the multiplication of two matrices. For example, a product is illegal whenever the number of columns in the first matrix does not match the number of rows in the second matrix.
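The worked products above, along with the addition and scalar rules from the previous section, can be reproduced with NumPy. A minimal sketch; the matrices for the addition and scalar lines are illustrative, while A and B are the ones from the example:

```python
import numpy as np

# Addition is element by element; scalar multiplication scales every element
# and is commutative (aB == Ba).
S = np.array([[1, 2], [3, 4]]) + np.array([[5, 0], [2, 1]])
print(S)                 # [[6 2], [5 5]]
print(2 * np.array([[5, 0], [2, 1]]))

# Row vector times column vector: a scalar, the sum of element-wise products.
print(np.array([2, 6, 3]) @ np.array([8, 1, 4]))   # 16 + 6 + 12 = 34

# Full matrix product: A is 2x3 and B is 3x2, so AB is legal and 2x2.
A = np.array([[2, 8, 1],
              [3, 6, 4]])
B = np.array([[1,  7],
              [9, -2],
              [6,  3]])
R = A @ B
print(R)                 # [[80  1], [81 21]]

# Multiplication is not commutative: BA is 3x3, a different matrix entirely.
print((B @ A).shape)     # (3, 3)

# A column vector postmultiplied by a row vector gives a full matrix.
col = np.array([[3], [4], [7]])
row = np.array([[8, 2]])
print(col @ row)         # [[24 6], [32 8], [56 14]]
```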

A product whose dimensions do conform, however, is perfectly legal. For example,

$$\begin{pmatrix} 2 & 7 & 3 \end{pmatrix} \begin{pmatrix} 8 & 5 \\ 6 & 1 \\ 9 & 4 \end{pmatrix} = \begin{pmatrix} 16 + 42 + 27 & 10 + 7 + 12 \end{pmatrix} = \begin{pmatrix} 85 & 29 \end{pmatrix}$$

The last special case of matrix multiplication involves the identity matrix, I. The identity matrix operates as the number 1 does in scalar algebra. That is, any vector or matrix multiplied by an identity matrix is simply the original vector or matrix. Hence, $\mathbf{a}I = \mathbf{a}$, $IX = X$, etc. Note, however, that a scalar multiplied by an identity matrix becomes a diagonal matrix with the scalar on the diagonal. That is,

$$4 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix},$$

not the scalar 4. This should be verified by reviewing the rules for multiplying a scalar and a matrix given above.

Matrix Transpose: The transpose of a matrix is denoted by a prime ($A'$) or a superscript t or T ($A^t$ or $A^T$). The first row of a matrix becomes the first column of the transpose matrix, the second row of the matrix becomes the second column of the transpose, and so on. Thus,

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix}, \qquad A^t = \begin{pmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \\ a_{13} & a_{23} \end{pmatrix}$$

The transpose of a row vector will be a column vector, and the transpose of a column vector will be a row vector. The transpose of a symmetric matrix is simply the original matrix.

Matrix Inverse: In scalar algebra, the inverse of a number is that number which, when multiplied by the original number, gives a product of 1. Hence, the inverse of x is simply 1/x or, in slightly different notation, $x^{-1}$. In matrix algebra, the inverse of a matrix is that matrix which, when multiplied by the original matrix, gives an identity matrix. The inverse of a matrix is denoted by the superscript $-1$. Hence,

$$A A^{-1} = A^{-1} A = I$$

A matrix must be square to have an inverse, but not all square matrices have an inverse. In some cases, the inverse does not exist. For covariance and correlation matrices, an inverse will always exist, provided that there are more subjects than there are variables and that every variable has a variance greater than 0.
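The identity, transpose, and inverse rules above can be checked numerically. A minimal sketch; the matrix A here is an illustrative invertible matrix, not one from the notes:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])   # illustrative invertible matrix
I = np.eye(2)

# The identity matrix acts like the scalar 1: IA = AI = A.
print(np.array_equal(I @ A, A))      # True

# A scalar times an identity matrix is a diagonal matrix, not a scalar.
print(4 * np.eye(3))

# Transpose: rows become columns.
print(A.T)

# The inverse satisfies A A^-1 = A^-1 A = I.
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, I) and np.allclose(A_inv @ A, I))   # True
```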

It is important to know what an inverse is in multivariate statistics, but it is not necessary to know how to compute an inverse.

Determinant of a Matrix: The determinant of a matrix is a scalar and is denoted as $|A|$ or $\det(A)$. The determinant has very important mathematical properties, but it is very difficult to provide a substantive definition. For covariance and correlation matrices, the determinant is a number that is sometimes used to express the generalized variance of the matrix. That is, covariance matrices with small determinants denote variables that are redundant or highly correlated. Matrices with large determinants denote variables that are independent of one another. The determinant has several very important properties for some multivariate statistics (e.g., the change in $R^2$ in multiple regression can be expressed as a ratio of determinants). Only idiots calculate the determinant of a large matrix by hand. We will try to avoid them.

Trace of a Matrix: The trace of a matrix is sometimes, although not always, denoted as $\mathrm{tr}(A)$. The trace is used only for square matrices and equals the sum of the diagonal elements of the matrix:

$$\mathrm{tr}(A) = \sum_{i=1}^{n} a_{ii}$$

Orthogonal Matrices: Only square matrices may be orthogonal matrices, although not all square matrices are orthogonal matrices. An orthogonal matrix satisfies the equation

$$A A^t = I$$

Thus, the inverse of an orthogonal matrix is simply the transpose of that matrix. Orthogonal matrices are very important in factor analysis. Matrices of eigenvectors (discussed below) are orthogonal matrices.

Eigenvalues and Eigenvectors: The eigenvalues and eigenvectors of a matrix play an important part in multivariate analysis.
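Before moving on to eigenvalues, the determinant, trace, and orthogonal-matrix facts above can be verified numerically. A minimal sketch; the matrix A is illustrative, and the rotation matrix is a standard example of an orthogonal matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])   # illustrative values

# Determinant: a scalar; for a 2x2 matrix, ad - bc = 2*3 - 1*5 = 1.
print(np.linalg.det(A))

# Trace: the sum of the diagonal elements, 2 + 3 = 5.
print(np.trace(A))

# An orthogonal matrix satisfies Q Q^T = I, so its inverse is its transpose.
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(Q @ Q.T, np.eye(2)))       # True
print(np.allclose(np.linalg.inv(Q), Q.T))    # True
```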
This discussion applies to correlation matrices and covariance matrices that (1) have more subjects than variables, (2) have variances greater than 0.0, (3) are calculated from data having no missing values, and (4) have no variable that is a perfect linear combination of the other variables. Any such covariance matrix C can be mathematically decomposed into a product:

$$C = A D A^{-1}$$

where A is a square matrix of eigenvectors and D is a diagonal matrix with the eigenvalues on the diagonal. If there are n variables, both A and D will be n by n matrices.

Eigenvalues are also called characteristic roots or latent roots. Eigenvectors are sometimes referred to as characteristic vectors or latent vectors. Each eigenvalue has its associated eigenvector. That is, the first eigenvalue in D ($d_{11}$) is associated with the first column vector in A, the second diagonal element in D (i.e., the second eigenvalue, or $d_{22}$) is associated with the second column in A, and so on. Actually, the order of the eigenvalues is arbitrary from a mathematical viewpoint. However, if the diagonals of D are switched around, then the corresponding columns in A must also be switched appropriately. It is customary to order the eigenvalues so that the largest one is in the upper left ($d_{11}$) and then they proceed in descending order until the smallest one is in $d_{nn}$, the extreme lower right. The eigenvectors in A are then ordered accordingly, so that column 1 in A is associated with the largest eigenvalue and column n is associated with the smallest eigenvalue.

Some important points about eigenvectors and eigenvalues are:

1) The eigenvectors are scaled so that A is an orthogonal matrix. Thus, $A^T = A^{-1}$ and $AA^T = I$. Thus, each eigenvector is said to be orthogonal to all the other eigenvectors.

2) The eigenvalues will all be greater than 0.0, provided that the four conditions outlined above for C are true.

3) For a covariance matrix, the sum of the diagonal elements of the covariance matrix equals the sum of the eigenvalues, or in math terms, $\mathrm{tr}(C) = \mathrm{tr}(D)$. For a correlation matrix, all the eigenvalues sum to n, the number of variables. Furthermore, in case you have a burning passion to know about it, the determinant of C equals the product of the eigenvalues of C.
4) It is a royal pain to compute eigenvalues and eigenvectors, so don't let me catch you doing it.

5) VERY IMPORTANT: The decomposition of a matrix into its eigenvalues and eigenvectors is a mathematical/geometric decomposition. The decomposition literally rearranges the dimensions in an n-dimensional space (n being the number of variables) in such a way that the axes (e.g., North-South, East-West) are all perpendicular. This rearrangement may, but is not guaranteed to, uncover an important psychological construct or even to have a psychologically meaningful interpretation.

6) ALSO VERY IMPORTANT: An eigenvalue tells us the proportion of total variability in a matrix associated with its corresponding eigenvector. Consequently, the eigenvector that corresponds to the highest eigenvalue tells us the dimension (axis) that generates the maximum amount of individual variability in the variables. The next eigenvector is a dimension perpendicular to the first that accounts for the second largest amount of variability, and so on.
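Points 1 through 3 can be verified numerically. A minimal sketch with an assumed, illustrative covariance matrix (not one from the notes); `np.linalg.eigh` is NumPy's eigensolver for symmetric matrices:

```python
import numpy as np

# An illustrative covariance matrix (values assumed, not from the notes).
C = np.array([[4.0, 2.0, 0.6],
              [2.0, 3.0, 0.4],
              [0.6, 0.4, 2.0]])

# eigh returns the eigenvalues (in ascending order) and the matching
# eigenvectors as the columns of A.
eigvals, A = np.linalg.eigh(C)
D = np.diag(eigvals)

# C = A D A^-1, and A is orthogonal, so A^-1 = A^T.
print(np.allclose(C, A @ D @ A.T))                    # True
print(np.allclose(A @ A.T, np.eye(3)))                # True

# tr(C) = tr(D), and det(C) equals the product of the eigenvalues.
print(np.isclose(np.trace(C), np.trace(D)))           # True
print(np.isclose(np.linalg.det(C), eigvals.prod()))   # True
```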

Mathematical Digression: eigenvalues and eigenvectors

An important mathematical formulation is the characteristic equation of a square matrix. If C is an n by n covariance matrix, the characteristic equation is

$$|C - \lambda I| = 0 \qquad (1.0)$$

where λ is a scalar. Solving this equation for λ reveals that the equation is an nth-degree polynomial in λ. That is, there are as many λs as there are variables in the covariance matrix. The n λs that are the roots of this polynomial are the eigenvalues of C. Because C is symmetric, all the λs will be real numbers (i.e., not complex or imaginary numbers), although some of the λs may be equal to or less than 0. The λs can be solved for in any order, but it is customary to order them from largest to smallest.

To examine what is meant here, let C denote a two by two correlation matrix that has the form:

$$C = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$$

Then the quantity $C - \lambda I$ may be written as

$$C - \lambda I = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} - \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} = \begin{pmatrix} 1-\lambda & \rho \\ \rho & 1-\lambda \end{pmatrix}$$

The determinant is

$$|C - \lambda I| = (1-\lambda)^2 - \rho^2$$

So the equation that requires solution is $(1-\lambda)^2 - \rho^2 = 0$. This is a quadratic in λ. If we had three variables, it would be a cubic; and if there were four variables, it would be a quartic, etc. Solving the quadratic gives

$$\lambda = 1 \pm \rho$$

The largest root depends on the sign of ρ. For ρ > 0, $\lambda_1 = 1 + \rho$ and $\lambda_2 = 1 - \rho$.

For each λ, one can define a nonzero vector **a** such that

$$(C - \lambda I)\mathbf{a} = \mathbf{0}.$$

The **0** to the right of the equal sign denotes a vector filled with 0s. Any vector **a** that satisfies this equation is called a latent vector or eigenvector of matrix C. Each eigenvector is associated with its own λ. Note that the solution to **a** is not unique, because if **a** is multiplied by any scalar, the above equation still holds. Thus, there is an infinite set of values for **a**, although each solution will be a scalar multiple of any other solution. It is customary to normalize the values of **a** by imposing the constraint that $\mathbf{a}'\mathbf{a} = 1$.
A latent vector subject to this constraint is called a normalized latent vector. Taking the two by two correlation matrix with ρ > 0, then

$$(C - \lambda I)\mathbf{a} = \begin{pmatrix} 1-\lambda & \rho \\ \rho & 1-\lambda \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} (1-\lambda)a_1 + \rho a_2 \\ (1-\lambda)a_2 + \rho a_1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

or, by carrying out the multiplication, we find

$$(1-\lambda)a_1 + \rho a_2 = 0$$
$$(1-\lambda)a_2 + \rho a_1 = 0$$

Now take the largest eigenvalue, $\lambda = 1 + \rho$, and substitute. This gives

$$\rho(a_2 - a_1) = 0$$
$$\rho(a_1 - a_2) = 0$$

Thus, all we know is that $a_1 = a_2$. If we let $a_1 = 10$, then $a_2 = 10$; and if we let $a_1 = -.023$, then $a_2 = -.023$. This is what was meant above when it was said that there were an infinite number of solutions, where any single solution is a scalar multiple of any other solution. By requiring that $\mathbf{a}'\mathbf{a} = 1$, we can settle on values of $a_1$ and $a_2$. That is, if $a_1^2 + a_2^2 = 1$ and $a_1 = a_2 = a$, say, then $2a^2 = 1$ and $a = \sqrt{.5}$. So the first eigenvector will be a 2 by 1 column vector with both elements equaling $\sqrt{.5}$.

For the second eigenvector, we substitute $1 - \rho$ for λ. This gives

$$\rho(a_1 + a_2) = 0$$

Consequently, $a_1 = -a_2$. One of the a's must equal $\sqrt{.5}$ and the other must equal $-\sqrt{.5}$. It is immaterial which is positive and which is negative, but it is a frequent convention to make the first one ($a_1$) positive and the second negative. Note that the normalized eigenvectors of a two by two correlation matrix will always take on these values. The actual value of the correlation coefficient is irrelevant as long as it exceeds 0. Can you figure out the eigenvectors for a two by two matrix where ρ < 0?
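The two by two result can be checked numerically for any ρ > 0. A sketch with an arbitrary illustrative value of ρ:

```python
import numpy as np

rho = 0.6                  # any correlation in (0, 1) gives the same vectors
C = np.array([[1.0, rho],
              [rho, 1.0]])

# eigh returns the eigenvalues in ascending order: 1 - rho, then 1 + rho.
eigvals, eigvecs = np.linalg.eigh(C)
print(eigvals)

# Every element of the normalized eigenvectors is +/- sqrt(.5) ~ .7071,
# regardless of the actual value of rho.
print(np.abs(eigvecs))
```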


### 1 Sets and Set Notation.

LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

### Similar matrices and Jordan form

Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive

### Linear Algebra: Vectors

A Linear Algebra: Vectors A Appendix A: LINEAR ALGEBRA: VECTORS TABLE OF CONTENTS Page A Motivation A 3 A2 Vectors A 3 A2 Notational Conventions A 4 A22 Visualization A 5 A23 Special Vectors A 5 A3 Vector

### 1 Determinants and the Solvability of Linear Systems

1 Determinants and the Solvability of Linear Systems In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns The section completely side-stepped

### Recall that two vectors in are perpendicular or orthogonal provided that their dot

Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal

### 3.2. Solving quadratic equations. Introduction. Prerequisites. Learning Outcomes. Learning Style

Solving quadratic equations 3.2 Introduction A quadratic equation is one which can be written in the form ax 2 + bx + c = 0 where a, b and c are numbers and x is the unknown whose value(s) we wish to find.

CALIFORNIA INSTITUTE OF TECHNOLOGY Division of the Humanities and Social Sciences More than you wanted to know about quadratic forms KC Border Contents 1 Quadratic forms 1 1.1 Quadratic forms on the unit

### Applied Linear Algebra I Review page 1

Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties

### A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form

Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...

### Solving Linear Systems, Continued and The Inverse of a Matrix

, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

### 9 MATRICES AND TRANSFORMATIONS

9 MATRICES AND TRANSFORMATIONS Chapter 9 Matrices and Transformations Objectives After studying this chapter you should be able to handle matrix (and vector) algebra with confidence, and understand the

### Introduction to Principal Component Analysis: Stock Market Values

Chapter 10 Introduction to Principal Component Analysis: Stock Market Values The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from

### Linear Programming. March 14, 2014

Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

### Multivariate Analysis (Slides 13)

Multivariate Analysis (Slides 13) The final topic we consider is Factor Analysis. A Factor Analysis is a mathematical approach for attempting to explain the correlation between a large set of variables

### ASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1

19 MDOF Dynamic Systems ASEN 3112 Lecture 1 Slide 1 A Two-DOF Mass-Spring-Dashpot Dynamic System Consider the lumped-parameter, mass-spring-dashpot dynamic system shown in the Figure. It has two point

### 4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION

4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION STEVEN HEILMAN Contents 1. Review 1 2. Diagonal Matrices 1 3. Eigenvectors and Eigenvalues 2 4. Characteristic Polynomial 4 5. Diagonalizability 6 6. Appendix:

### POLYNOMIAL FUNCTIONS

POLYNOMIAL FUNCTIONS Polynomial Division.. 314 The Rational Zero Test.....317 Descarte s Rule of Signs... 319 The Remainder Theorem.....31 Finding all Zeros of a Polynomial Function.......33 Writing a

### CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES. From Exploratory Factor Analysis Ledyard R Tucker and Robert C.

CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES From Exploratory Factor Analysis Ledyard R Tucker and Robert C MacCallum 1997 180 CHAPTER 8 FACTOR EXTRACTION BY MATRIX FACTORING TECHNIQUES In

### 3.1. Solving linear equations. Introduction. Prerequisites. Learning Outcomes. Learning Style

Solving linear equations 3.1 Introduction Many problems in engineering reduce to the solution of an equation or a set of equations. An equation is a type of mathematical expression which contains one or

### Lecture 5: Singular Value Decomposition SVD (1)

EEM3L1: Numerical and Analytical Techniques Lecture 5: Singular Value Decomposition SVD (1) EE3L1, slide 1, Version 4: 25-Sep-02 Motivation for SVD (1) SVD = Singular Value Decomposition Consider the system

### Equations, Inequalities & Partial Fractions

Contents Equations, Inequalities & Partial Fractions.1 Solving Linear Equations 2.2 Solving Quadratic Equations 1. Solving Polynomial Equations 1.4 Solving Simultaneous Linear Equations 42.5 Solving Inequalities

### This is a square root. The number under the radical is 9. (An asterisk * means multiply.)

Page of Review of Radical Expressions and Equations Skills involving radicals can be divided into the following groups: Evaluate square roots or higher order roots. Simplify radical expressions. Rationalize

### Lecture notes on linear algebra

Lecture notes on linear algebra David Lerner Department of Mathematics University of Kansas These are notes of a course given in Fall, 2007 and 2008 to the Honors sections of our elementary linear algebra

### Chapter 2 Determinants, and Linear Independence

Chapter 2 Determinants, and Linear Independence 2.1 Introduction to Determinants and Systems of Equations Determinants can be defined and studied independently of matrices, though when square matrices

### Solving Systems of Linear Equations

LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

### Biggar High School Mathematics Department. National 5 Learning Intentions & Success Criteria: Assessing My Progress

Biggar High School Mathematics Department National 5 Learning Intentions & Success Criteria: Assessing My Progress Expressions & Formulae Topic Learning Intention Success Criteria I understand this Approximation

### Unified Lecture # 4 Vectors

Fall 2005 Unified Lecture # 4 Vectors These notes were written by J. Peraire as a review of vectors for Dynamics 16.07. They have been adapted for Unified Engineering by R. Radovitzky. References [1] Feynmann,

### Factor Analysis. Chapter 420. Introduction

Chapter 420 Introduction (FA) is an exploratory technique applied to a set of observed variables that seeks to find underlying factors (subsets of variables) from which the observed variables were generated.

### Solution of Linear Systems

Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start

### MATHEMATICS FOR ENGINEERS BASIC MATRIX THEORY TUTORIAL 2

MATHEMATICS FO ENGINEES BASIC MATIX THEOY TUTOIAL This is the second of two tutorials on matrix theory. On completion you should be able to do the following. Explain the general method for solving simultaneous