A Linear Algebra Primer
James Baugh


Introduction to Vectors and Matrices

Vectors and Vector Spaces

Vectors are elements of a vector space: a set of mathematical objects that can be added together and multiplied by numbers (scalars), subject to the following axiomatic requirements. For all vectors u, v, w and all scalars a, b:

- Addition must be associative: (u + v) + w = u + (v + w).
- Addition must be commutative: u + v = v + u.
- Addition must have a unique identity, the zero vector: v + 0 = v.
- Every element must have an additive inverse: v + (-v) = 0.
- Under scalar multiplication, 1 must act as a multiplicative identity: 1v = v.
- Scalar multiplication must distribute over addition: a(u + v) = au + av, and (a + b)v = av + bv.

Another requirement is closure, which we can express in a single statement as closure of linear combinations:

- If u and v are in the space, then so too is au + bv for any scalars a and b.

What does all that mean? It simply means that vectors behave just like numbers as far as doing algebra is concerned, except that we don't (as yet) define multiplication of vectors by vectors. (Later we will see several distinct types of vector multiplication.)

Examples of Vector Spaces: Arrows (Displacements)

The typical first example of a vector is an arrow, which we may think of as an act of displacement, i.e. the action of moving a point in the plane or in space in a certain direction over a certain distance. (Picture the arrow v as mapping one point to another.) The arrow should be considered apart from any specific point, rather as an action we may apply to arbitrary points; in a sense the arrow is a function acting on points. In this context we define addition of arrows as composition of actions, u + v = w, and scalar multiplication as scaling of actions: 0.75v is the displacement in the same direction as v but with 75% of the original length. A good exercise is to verify that the various axiomatic rules for vectors hold in this example. When we interpret arrows in this way (as point motions) we refer to them as displacement vectors.

Note that if we fix an origin point in space (or the plane), we can then identify any point with the displacement vector which moves the origin to that point. This is what we mean by a position vector.
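The axiom-checking exercise suggested above can be sketched in code. This is a minimal illustration (the function names are mine, not the primer's): planar displacement vectors modeled as (dx, dy) pairs, with the axioms spot-checked on sample values.

```python
# A minimal sketch (illustrative names): modeling planar displacement
# vectors as (dx, dy) pairs and spot-checking the vector-space axioms.
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(a, v):
    return (a * v[0], a * v[1])

u, v, w = (1.0, 2.0), (-3.0, 0.5), (4.0, -1.0)
zero = (0.0, 0.0)

assert add(add(u, v), w) == add(u, add(v, w))          # associativity
assert add(u, v) == add(v, u)                          # commutativity
assert add(u, zero) == u                               # additive identity
assert add(u, scale(-1.0, u)) == zero                  # additive inverse
assert scale(1.0, u) == u                              # 1 acts as identity
assert scale(2.0, add(u, v)) == add(scale(2.0, u), scale(2.0, v))  # distributivity
```

Of course, checking sample values is not a proof, but it makes the content of each axiom concrete.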
Examples of Vector Spaces: Function Spaces

Consider now a totally different type of vector space. Let V be the set of all continuous functions with domain the unit interval [0,1]. We can add functions to get functions, and we can multiply functions by scalars to get functions. It is simple enough then to verify that if f and g have domain [0,1], then the function h defined by h(x) = a f(x) + b g(x) also has domain [0,1], is also continuous, and thus is also in V.

Another example of a function space is the space of polynomials in one variable. This space is denoted R[x], indicating polynomials in the variable x with real coefficients. Again we can always add polynomials and multiply them by scalars. A third vector space we can define is the set of linear functions in n variables.

Examples of Vector Spaces: Matrices

Matrices are arrays of numbers with a specific number of rows and columns (the dimensions of the matrix). For example, a matrix with two rows and three columns is a 2x3 ("two by three") matrix. We use the convention of specifying first the number of rows and then the number of columns. (To remember this, the traditional mnemonic is "RC cola!") We may add matrices with the same dimensions by simply adding corresponding entries. We multiply a matrix by a number (scalar) by multiplying each entry by that number. Thus the set of all matrices of a given dimension forms a vector space.

Basis, Span, Independence

A basis of a vector space is a set of linearly independent vectors which span the space. To understand this, of course, we must understand the meaning of "span" and "linear independence". The span of a set of vectors is the set of all linear combinations of those vectors. Example: span{u, v} = {au + bv : for all scalar values of a and b}. One can easily show that the span of a set of vectors is itself a vector space (it will be a subspace of the original vector space). There are two basic (equivalent) ways to define linear independence.
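The entrywise matrix operations just described can be sketched with plain lists. This is an illustration of my own (not from the primer), showing that adding 2x3 matrices and scaling them by a number again yields 2x3 matrices:

```python
# A hedged sketch (plain nested lists, illustrative names): entrywise
# addition and scalar multiplication, the operations that make the set
# of all 2x3 matrices a vector space.
def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in row] for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[0, 1, 0],
     [1, 0, 1]]

assert mat_add(A, B) == [[1, 3, 3], [5, 5, 7]]
assert mat_scale(2, A) == [[2, 4, 6], [8, 10, 12]]
```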
A set of vectors is linearly independent if no element of the set is a linear combination of the remaining elements (it isn't in the span of the set of remaining elements), or equivalently, if no nontrivial linear combination of elements equals the zero vector. (The trivial linear combination would be the sum of zero times each element.) The main role of a basis is to span the space, i.e. it provides a way to express all vectors in terms of the basis set. Linear independence tells us that we have no more elements in the basis than we actually need.

Example: Position vectors (or displacement vectors) in the plane can always be expressed in terms of horizontal and vertical displacements. We define the standard basis as {i, j}, where i is the displacement one unit to the right (in the x-direction) and j is the unit displacement upward (in the y-direction).
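The "no nontrivial combination equals zero" criterion has a convenient computational form: a set of vectors is independent exactly when the matrix having them as columns has rank equal to the number of vectors. This sketch (assuming numpy is available; the helper name is mine) checks independence that way:

```python
import numpy as np

# A hedged sketch: vectors are linearly independent exactly when the
# matrix with those vectors as columns has full column rank, i.e. no
# nontrivial combination of the columns gives the zero vector.
def independent(vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

assert independent([np.array([1.0, 0.0]), np.array([0.0, 1.0])])
assert not independent([np.array([1.0, 2.0]), np.array([2.0, 4.0])])  # second = 2 * first
```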
Note then that to express a position vector for the point (3, 2), we need only note that this is the point obtained by displacing the origin to the right a distance 3 and up a distance 2. It thus corresponds to the position vector 3i + 2j. The standard basis thus corresponds exactly to the use of rectangular coordinates. When we expand a vector as a linear combination of basis elements, we refer to the coefficients as linear coordinates.

Now many different bases (pronounced "bay-seez") are possible for the same vector space, but the size (number of elements) is always the same, and this defines the dimension of the space. The planar displacements have a standard basis of two elements, and so that space has dimension two. We can extend this to three-dimensional displacements in space with basis {i, j, k}, corresponding to unit displacements in the x-, y-, and z-directions respectively.

When we expand a vector in terms of an established basis (e.g. v = 3i + 2j + k) we can simply give the coefficients, in which case we use angle brackets: v = <3, 2, 1>. When working with multiple bases we may use a subscript to indicate which basis is being used.

We should however be a bit careful here, since our definition of a basis is as a set of vectors, and sets do not indicate order. We can be clear by defining an ordered basis as a sequence instead of a set, but otherwise equivalent to the above definition.

Matrices as Vectors, Vectors as Matrices

As was just mentioned, we may view matrices as vectors. As it turns out, matrix algebra is a good standard language for all vectors. We will make special use of matrices which have only one column (column vectors) and matrices which have only one row (row vectors). First let us define an ordered basis, which is simply a basis (set) rewritten as a sequence of basis elements, for example (i, j, k). We treat this formally as a matrix (a row vector of basis elements).
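Finding the linear coordinates of a vector in a given ordered basis amounts to solving a small linear system. This sketch (my own illustration, assuming numpy) takes a non-standard basis of the plane and recovers the coordinates of a vector in it:

```python
import numpy as np

# A hedged sketch: the coordinates c of a vector v in an ordered basis
# (b1, b2) solve B @ c = v, where B has the basis vectors as columns.
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])
B = np.column_stack([b1, b2])

v = np.array([3.0, 2.0])
c = np.linalg.solve(B, v)   # linear coordinates of v in the basis (b1, b2)

# reassembling the linear combination recovers v
assert np.allclose(c[0] * b1 + c[1] * b2, v)
```

In the standard basis the system is trivial (B is the identity), which is why rectangular coordinates and standard-basis coefficients coincide.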
The reason for ordering a basis is so we can reference elements (and coefficients) by their positions rather than their implicit identities. This is important, for example, if we consider a nontrivial transformation which cycles the basis elements without changing them. (A 120° rotation about the line x = y = z will cycle the x-, y-, and z-axes. This can be expressed as a change of ordered basis, but it leaves the basis set unchanged.) We then write a general vector as a product of a row matrix and a column matrix:

  v = (i j k)(v1, v2, v3)^T = v1 i + v2 j + v3 k

We take this as the definition of multiplication of a row times a column, be it rows of vectors or of numbers (or, later, differential operations). The point here is that once we have decided upon a particular (ordered) basis, we may work purely with the column vectors of coordinates.
So we have three ways of expressing a vector in terms of a given basis:

i. Explicitly, as in v = 3i + 2j + k.
ii. Using the angle-bracket notation: v = <3, 2, 1>.
iii. Using a column vector (matrix) of the coefficients.

We write the first two as equations because they are identifications. The last, however, is not quite an equation, since matrices are defined as their own type of mathematical object. We are rather identifying them by equivalent mathematical behavior than by identical meaning.

Dual Vectors and Matrix Multiplication

Dual Vectors and Row Vectors

If we consider a vector space V, then a linear functional is a function mapping vectors in V to scalars, obeying the linearity property (see below). Since functionals are just functions, we can add them and multiply them by scalars, so they form yet another vector space. We denote the space of linear functionals by V*, the dual space of V. We thus also call these linear functionals dual vectors.

Linearity: f : V -> R is a linear mapping from V to the scalars means that

  f(au + bv) = a f(u) + b f(v)

for all u, v in V and for all scalars a, b. Said in English, "f is linear" means "f of a linear combination of objects equals the same linear combination of f of each object."

If we combine this with the use of a basis, then we can express any linear functional uniquely by how it acts on the basis elements. What's more, by moving to the column-vector representation of a vector (in the standard basis), we can express dual vectors (linear functionals) using row vectors. We can then entirely drop the function notation and write the functional evaluation as a matrix product: a row times a column.

A Side Note: This form of multiplication is contracted, which means we reduce dimensions by summing over terms (also the dimensions must be equal, or rather dual but of equal size). Compare this with scalar multiplication, which is a form of distributed multiplication. Distributed multiplication preserves dimension. I mention this to clarify its use later.
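The row-times-column evaluation of a functional can be sketched directly. This is my own illustration (the particular numbers are arbitrary): a dual vector as a 1x3 row matrix acting on a 3x1 column, with the linearity property checked alongside.

```python
import numpy as np

# A hedged sketch: a linear functional represented as a 1x3 row vector
# acting on a 3x1 column vector. Their matrix product is a 1x1 matrix,
# i.e. effectively a scalar: a contracted multiplication.
f = np.array([[2.0, -1.0, 3.0]])        # dual vector (row)
v = np.array([[1.0], [4.0], [2.0]])     # vector (column)

fv = f @ v                              # 1x1 matrix: 2*1 + (-1)*4 + 3*2 = 4
assert fv.shape == (1, 1)
assert fv[0, 0] == 4.0

# linearity: f of a linear combination equals the combination of the values
u = np.array([[0.0], [1.0], [-1.0]])
assert np.allclose(f @ (2 * u + 3 * v), 2 * (f @ u) + 3 * (f @ v))
```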
Multiplying Matrices times Column Vectors

Now that we can multiply a row vector times a column vector to get a scalar, we can use this to define general matrix multiplication. A general matrix may be simultaneously considered as a row vector of column vectors, or vice versa. So we can multiply an m x n matrix by a column vector of length n (an n x 1 matrix) as follows: treat the matrix as a row vector (with n columns) of column vectors (each with m rows) and apply the row-times-column multiplication. The result will be an m x 1 matrix. We describe this as contracting outer multiplication combined with distributed inner multiplication.

Now this works, but there is another way to go about it. Treat the matrix instead as a column of rows, and multiply each row times the column vector on the right:

  (column of rows) times (column) = column of (row times column)

This is the more often used sequence, and it allows us to then generalize consistently. You can view this as distributed outer multiplication with contracting inner multiplication. In a similar way we can multiply a row vector times a matrix to yield another row vector.

Matrix Multiplication

To multiply two general matrices, the number of columns of the left matrix must equal the number of rows of the right matrix. Using the dimensions (remember RC cola), we see then that we can multiply an m x n matrix times an n x p matrix, and the result is an m x p matrix. In short: (m x n)(n x p) = (m x p). Treat the left matrix as a column vector of row vectors and the right as a row vector of column vectors, and use distributed multiplication except contract at the innermost level, now contracting products (row times column).
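The "column of rows times row of columns" recipe can be sketched and checked against a library implementation. This is my own illustration (assuming numpy; the function name is mine): entry (i, j) of the product is row i of the left matrix contracted with column j of the right.

```python
import numpy as np

# A hedged sketch: matrix multiplication as "each row of the left matrix
# times each column of the right matrix", checked against numpy's product.
def matmul_rows_cols(A, B):
    # entry (i, j) is the contraction of row i of A with column j of B
    return np.array([[row @ col for col in B.T] for row in A])

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])          # 3x2
B = np.array([[1, 0, 2],
              [0, 1, 3]])       # 2x3

C = matmul_rows_cols(A, B)      # (3x2)(2x3) = (3x3), per the RC-cola rule
assert C.shape == (3, 3)
assert np.array_equal(C, A @ B)
```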
Things to note: Matrix multiplication is not commutative. That is, given a matrix product AB, the reverse product BA may not even be defined; if defined, it may not yield a matrix with the same dimensions as AB; and even in the special case (square matrices) where it does, it will not in general yield the same matrix.

There are some interesting special cases, one of which is square matrices, which are matrices with the same number of rows and columns. Multiplication of square matrices of the same dimension yields again square matrices of the same dimension. Consider the square (3x3) matrix with 1's on the diagonal and 0's elsewhere. It is called the (3x3) identity matrix because multiplication by this matrix (when defined) will leave the other matrix unchanged.

We can define the inverse of a square matrix A to be the square matrix A^{-1} (if it exists) such that

  A A^{-1} = A^{-1} A = I

As the identity behaves like multiplication by 1, the inverse is analogous to the reciprocal, hence the -1 power notation.

Formula for 2x2 matrices: for A = [[a, b], [c, d]],

  A^{-1} = (1 / (ad - bc)) [[d, -b], [-c, a]]

provided ad - bc is not 0. If this quantity (the determinant) is equal to zero, then the matrix has no inverse.

Transpose, Adjoint, and Inner (dot) Products

Given a vector space, an inner product (dot product) is a symmetric positive-definite bilinear form. As a bilinear form, it is a function g mapping two vectors u, v to a scalar g(u, v) in such a way that the function is linear with respect to each of the two vectors (remember: action on a linear combination equals the linear combination of actions):

  g(au + bv, w) = a g(u, w) + b g(v, w)

and likewise with the other argument. The positive definiteness means that when we apply the form to two copies of the same (nonzero) vector we get a positive number: g(v, v) >= 0, and if g(v, v) = 0 then v = 0. By symmetric we mean that exchanging the two vector arguments doesn't change the value: g(u, v) = g(v, u). There are various notations for an inner product: u . v, or <u, v>, or (u, v), or g(u, v) (here g is the name of the bilinear form as a function).
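The 2x2 inverse formula above is easy to implement and test. This sketch (my own, assuming numpy) swaps the diagonal, negates the off-diagonal, and divides by the determinant, then checks the result against the defining property and against numpy's inverse:

```python
import numpy as np

# A hedged sketch of the 2x2 inverse formula: swap the diagonal entries,
# negate the off-diagonal entries, and divide by the determinant ad - bc.
def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("determinant is zero: matrix has no inverse")
    return np.array([[d, -b], [-c, a]]) / det

M = np.array([[4.0, 7.0],
              [2.0, 6.0]])
assert np.allclose(inv2(M) @ M, np.eye(2))      # A^{-1} A = I
assert np.allclose(M @ inv2(M), np.eye(2))      # A A^{-1} = I
assert np.allclose(inv2(M), np.linalg.inv(M))   # matches numpy's inverse
```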
SIDE NOTE: This definition assumes we're using real scalars. The extension to complex numbers forks in either of two ways: we can maintain symmetry (an orthogonal form) or maintain positivity (a Hermitian form), but not both.

We will mostly use the dot notation here and call the inner product the dot product. But one should be aware that more than one inner product can be defined on the same space.

Inner (dot) products provide us with a sense of the length or size of a vector (we call this a norm of the vector), in that dotting a vector with itself may be considered as the squared magnitude:

  |v|^2 = v . v, or |v| = sqrt(v . v)

This is the reason we insist on the positive definiteness of the inner product: so we can take the square root to get a positive real-valued norm. One may show that, given we start with a norm, we can define a corresponding inner product, so the two ideas are equivalent. We thus also refer to the inner product as a metric (specifically, a positive-definite metric). In the example of displacement vectors (arrows), the norm defines (or is defined by) the length of the vector, which is the distance it moves the points to which it is applied.

SIDE NOTE: Sometimes we relax this positive-definiteness requirement, in which case we end up with a pseudo-norm. For example, special relativity unifies space and time into a spacetime in which the metric is not positive definite. This yields vectors some of which have real length, some of which have imaginary length, and some of which have zero length while not being the zero vector (null vectors).

THE dot product (between two displacement vectors)

There is a specific geometric inner product, the dot product, defined for arrows or displacements. It is defined as the product of the magnitudes of the two vectors times the cosine of their relative angle:

  u . v = |u| |v| cos(theta)

Note that in the case where we dot a vector with itself, the relative angle is zero, and so the cosine is 1. Thus a vector dotted with itself yields the square of its magnitude.
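Read in reverse, the geometric formula lets us recover the angle between two displacement vectors from their components. A sketch of my own (assuming numpy):

```python
import numpy as np

# A hedged sketch: recovering the relative angle between two displacement
# vectors from the geometric dot product u . v = |u| |v| cos(theta).
u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta_deg = np.degrees(np.arccos(cos_theta))
assert abs(theta_deg - 45.0) < 1e-9   # u and v meet at 45 degrees

# dotting a vector with itself gives the squared magnitude
assert np.isclose(v @ v, np.linalg.norm(v) ** 2)
```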
Orthonormal Basis

Since inner products are bilinear, we can expand their action in terms of the action on basis elements. Once we know the dot products between all pairs of basis elements, we can apply this to dot products between any vectors expanded in terms of the basis. Observe: for u = u1 i + u2 j + u3 k and v = v1 i + v2 j + v3 k, the linearity of the dot product gives us:

  u . v = u1 v1 (i . i) + u1 v2 (i . j) + u1 v3 (i . k)
        + u2 v1 (j . i) + u2 v2 (j . j) + u2 v3 (j . k)
        + u3 v1 (k . i) + u3 v2 (k . j) + u3 v3 (k . k)

Rather tedious, but note we are just applying regular algebra skills, just as if we were expanding a product of two polynomials. (Recall that we can think of polynomials as vectors.) The main point here is
that we have expressed the original dot product as a sum of multiples of the dot products of basis elements. Once we know these, we can calculate the dot product readily. In fact we will shortly show how to use matrix notation to help keep track of all the pieces of this calculation. But for now, recall that our standard basis for displacements was the unit (length 1) displacements along the x, y, and z axes. So the angle between two different basis elements is 90°, which has cosine 0. This gives us:

  i . i = j . j = k . k = 1 * 1 * cos(0) = 1
  i . j = i . k = j . k = 1 * 1 * cos(90°) = 0

The above tedious dot product calculation then reduces to:

  u . v = u1 v1 + u2 v2 + u3 v3

So (having used the standard basis) the dot product is just the sum of the products of corresponding components. This is true only because of the form of the standard basis. Note that each basis element is of unit length and orthogonal (perpendicular) to all the others. This property of the basis is called orthonormality; that is to say, it means we have an orthonormal basis. For arbitrary bases the dot product is a bit more complicated, but not too bad if we use matrices consistently. We'll see that shortly.

Adjoint and Transpose

There's an easy way to express the dot product (given an orthonormal basis) in terms of matrices. For u = <u1, u2, u3> and v = <v1, v2, v3>, their dot product can be written as a matrix product: the row (u1 u2 u3) times the column of v's components.

Two points to note here. Firstly, to be consistent, we need to express the operation of changing a column into a row. This (and the reverse) we call transposing a matrix. Secondly, note that the action of taking the dot product with respect to a given vector defines a linear functional (a linear mapping from vectors to scalars).

Let's take that second point first. We can consistently (re)interpret the dot product notation by grouping the dot symbol with the first vector and calling the result a dual vector: we take "u ." to be a dual vector, or linear functional, with a corresponding row-vector representation. Another way of interpreting the dot product is as a linear transformation mapping vectors to dual vectors.
This type of mapping is also known as an adjoint, which we indicate using a dagger superscript: u† = "u .". Hence we can write the dot product in the form:

  u . v = u† v
We extend the adjoint to apply to both vectors and dual vectors (and later matrices) so that when we apply the adjoint twice we end up back where we started: (u†)† = u. Now recall we have a very simple form because we used an orthonormal basis: in the matrix representation, the adjoint is just the transpose. The transpose of a matrix is the matrix we obtain by exchanging rows with columns; for example, the transpose of the 2x3 matrix [[1, 2, 3], [4, 5, 6]] is the 3x2 matrix [[1, 4], [2, 5], [3, 6]].

Side Note: When we generalize to complex vectors (and matrices), the adjoint will in fact be the complex conjugate of the transpose (which defines a Hermitian inner product). Finally, note that the transpose, when applied to a product of two or more matrices, reverses the order of multiplication: (AB)^T = B^T A^T. This you can confirm by working out examples.

Adjoint and Metric with Non-orthonormal Bases

For real vectors, the matrix representation of the inner product was multiplication by the transpose, provided we had an orthonormal basis. To see how to work with a general basis, we go back and consider how we expanded a vector in a basis using matrices. Recall that we used a row vector of basis elements for the ordered basis, and wrote a vector as that row times the column of coordinates. Let's use an arbitrary basis expansion for two vectors u and v. We express the dot product using the transpose, or more properly the adjoint, and apply matrix multiplication. Note that the adjoint of the row of basis vectors will be a column of "take the dot product with" operations; applying matrix multiplication between the basis column and the basis row yields the square matrix M of all pairwise basis dot products. So we have:

  u . v = u^T M v   (with u and v here the coordinate columns)

We end up with the transpose of the coordinate column for u, times a square matrix, times the coordinate column for v.
The matrix M of basis dot products is called the metric. Note that when the basis is orthonormal it takes the simple form of the identity matrix. For general cases it will be either symmetric (equal to its transpose) or, when we generalize to complex vectors, Hermitian (equal to its complex conjugate transpose). Note then we can express the adjoint of a column matrix u, expanded in an arbitrary basis, by

  u† = u^T M

This tells us, in a general basis, how to expand the dot product of two vectors using the adjoint of one. But given that the adjoint of the adjoint gets us back where we started, we also have a dual metric defining a dot product for dual vectors. The dual metric also has a matrix representation (when we express dual vectors in terms of row vectors), and it will be the inverse transpose of the matrix representation of the metric. In short, given a dual vector with row representation w, its adjoint is the column M^{-1} w^T.

Putting this all together, we can then see that the adjoint of the adjoint gets us back where we started. For a vector u:

  (u†)† = M^{-1} (u^T M)^T = M^{-1} M^T u = M^{-1} M u = u

(using the symmetry of M). To follow this string of operations, remember the transpose of a product is the reversed product of the transposes, and that a matrix times its inverse is the identity matrix and so cancels.

For a (real) square matrix A we can also define an adjoint: A† = M^{-1} A^T M. If the matrix is complex, we must also take the complex conjugate. It is so much easier if we work in an orthonormal basis, where both metric and dual metric matrices are the identity: then A† is simply A^T (or its conjugate). Thus you will usually find the adjoint defined simply as the conjugate transpose. This, however, is a basis-dependent definition, and when working in general bases we must remember to account for these extra metric factors.

THAT'S ALL FOR NOW. I intend to add more later, including, for example, how to define cross products in terms of matrices, tensors and tensor products.
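The metric construction can be sketched numerically. In this illustration of my own (assuming numpy), the basis vectors are the columns of a matrix B, the metric is the matrix of their pairwise dot products, and the coordinate-space formula u^T M v is checked against the ordinary dot product of the same vectors in standard coordinates:

```python
import numpy as np

# A hedged sketch: with a non-orthonormal basis whose vectors are the
# columns of B, the metric is the matrix of basis dot products M = B^T B,
# and the dot product of vectors with coordinate columns u, v is u^T M v.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns: b1 = (1, 0), b2 = (1, 1) -- not orthonormal
M = B.T @ B                  # metric: M[i, j] = b_i . b_j

u = np.array([2.0, 1.0])     # coordinates in the basis (b1, b2)
v = np.array([0.0, 3.0])

lhs = u @ M @ v              # dot product from coordinates and the metric
rhs = (B @ u) @ (B @ v)      # same vectors expressed in the standard basis
assert np.isclose(lhs, rhs)

# with an orthonormal basis the metric reduces to the identity
I = np.eye(2)
assert np.allclose(I.T @ I, np.eye(2))
```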
More informationLecture No. # 02 ProloguePart 2
Advanced Matrix Theory and Linear Algebra for Engineers Prof. R.Vittal Rao Center for Electronics Design and Technology Indian Institute of Science, Bangalore Lecture No. # 02 ProloguePart 2 In the last
More informationComputer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture 7 Transformations in 2D
Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture 7 Transformations in 2D Welcome everybody. We continue the discussion on 2D
More informationWHICH LINEARFRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE?
WHICH LINEARFRACTIONAL TRANSFORMATIONS INDUCE ROTATIONS OF THE SPHERE? JOEL H. SHAPIRO Abstract. These notes supplement the discussion of linear fractional mappings presented in a beginning graduate course
More informationSolutions to Linear Algebra Practice Problems
Solutions to Linear Algebra Practice Problems. Find all solutions to the following systems of linear equations. (a) x x + x 5 x x x + x + x 5 (b) x + x + x x + x + x x + x + 8x Answer: (a) We create the
More informationInner Product Spaces
Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and
More informationName: Section Registered In:
Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are
More informationVectors What are Vectors? which measures how far the vector reaches in each direction, i.e. (x, y, z).
1 1. What are Vectors? A vector is a directed line segment. A vector can be described in two ways: Component form Magnitude and Direction which measures how far the vector reaches in each direction, i.e.
More informationMATH10212 Linear Algebra. Systems of Linear Equations. Definition. An ndimensional vector is a row or a column of n numbers (or letters): a 1.
MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0534405967. Systems of Linear Equations Definition. An ndimensional vector is a row or a column
More information28 CHAPTER 1. VECTORS AND THE GEOMETRY OF SPACE. v x. u y v z u z v y u y u z. v y v z
28 CHAPTER 1. VECTORS AND THE GEOMETRY OF SPACE 1.4 Cross Product 1.4.1 Definitions The cross product is the second multiplication operation between vectors we will study. The goal behind the definition
More information1 Spherical Kinematics
ME 115(a): Notes on Rotations 1 Spherical Kinematics Motions of a 3dimensional rigid body where one point of the body remains fixed are termed spherical motions. A spherical displacement is a rigid body
More informationAdvanced Techniques for Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz
Advanced Techniques for Mobile Robotics Compact Course on Linear Algebra Wolfram Burgard, Cyrill Stachniss, Kai Arras, Maren Bennewitz Vectors Arrays of numbers Vectors represent a point in a n dimensional
More informationSection 1.1. Introduction to R n
The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to
More informationWe know a formula for and some properties of the determinant. Now we see how the determinant can be used.
Cramer s rule, inverse matrix, and volume We know a formula for and some properties of the determinant. Now we see how the determinant can be used. Formula for A We know: a b d b =. c d ad bc c a Can we
More informationDiagonalisation. Chapter 3. Introduction. Eigenvalues and eigenvectors. Reading. Definitions
Chapter 3 Diagonalisation Eigenvalues and eigenvectors, diagonalisation of a matrix, orthogonal diagonalisation fo symmetric matrices Reading As in the previous chapter, there is no specific essential
More information1.3. DOT PRODUCT 19. 6. If θ is the angle (between 0 and π) between two nonzero vectors u and v,
1.3. DOT PRODUCT 19 1.3 Dot Product 1.3.1 Definitions and Properties The dot product is the first way to multiply two vectors. The definition we will give below may appear arbitrary. But it is not. It
More information9.4. The Scalar Product. Introduction. Prerequisites. Learning Style. Learning Outcomes
The Scalar Product 9.4 Introduction There are two kinds of multiplication involving vectors. The first is known as the scalar product or dot product. This is socalled because when the scalar product of
More informationPhysics 235 Chapter 1. Chapter 1 Matrices, Vectors, and Vector Calculus
Chapter 1 Matrices, Vectors, and Vector Calculus In this chapter, we will focus on the mathematical tools required for the course. The main concepts that will be covered are: Coordinate transformations
More informationOn the general equation of the second degree
On the general equation of the second degree S Kesavan The Institute of Mathematical Sciences, CIT Campus, Taramani, Chennai  600 113 email:kesh@imscresin Abstract We give a unified treatment of the
More informationby the matrix A results in a vector which is a reflection of the given
Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the yaxis We observe that
More informationThe basic unit in matrix algebra is a matrix, generally expressed as: a 11 a 12. a 13 A = a 21 a 22 a 23
(copyright by Scott M Lynch, February 2003) Brief Matrix Algebra Review (Soc 504) Matrix algebra is a form of mathematics that allows compact notation for, and mathematical manipulation of, highdimensional
More informationLinear Algebra Test 2 Review by JC McNamara
Linear Algebra Test 2 Review by JC McNamara 2.3 Properties of determinants: det(a T ) = det(a) det(ka) = k n det(a) det(a + B) det(a) + det(b) (In some cases this is true but not always) A is invertible
More informationLinear Codes. In the V[n,q] setting, the terms word and vector are interchangeable.
Linear Codes Linear Codes In the V[n,q] setting, an important class of codes are the linear codes, these codes are the ones whose code words form a subvector space of V[n,q]. If the subspace of V[n,q]
More information9 Multiplication of Vectors: The Scalar or Dot Product
Arkansas Tech University MATH 934: Calculus III Dr. Marcel B Finan 9 Multiplication of Vectors: The Scalar or Dot Product Up to this point we have defined what vectors are and discussed basic notation
More information4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns
L. Vandenberghe EE133A (Spring 2016) 4. Matrix inverses left and right inverse linear independence nonsingular matrices matrices with linearly independent columns matrices with linearly independent rows
More informationPortable Assisted Study Sequence ALGEBRA IIA
SCOPE This course is divided into two semesters of study (A & B) comprised of five units each. Each unit teaches concepts and strategies recommended for intermediate algebra students. The first half of
More informationInner Product Spaces and Orthogonality
Inner Product Spaces and Orthogonality week 34 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,
More information( % . This matrix consists of $ 4 5 " 5' the coefficients of the variables as they appear in the original system. The augmented 3 " 2 2 # 2 " 3 4&
Matrices define matrix We will use matrices to help us solve systems of equations. A matrix is a rectangular array of numbers enclosed in parentheses or brackets. In linear algebra, matrices are important
More informationA Brief Primer on Matrix Algebra
A Brief Primer on Matrix Algebra A matrix is a rectangular array of numbers whose individual entries are called elements. Each horizontal array of elements is called a row, while each vertical array is
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 3 Linear Least Squares Prof. Michael T. Heath Department of Computer Science University of Illinois at UrbanaChampaign Copyright c 2002. Reproduction
More information3. INNER PRODUCT SPACES
. INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.
More informationMATH36001 Background Material 2015
MATH3600 Background Material 205 Matrix Algebra Matrices and Vectors An ordered array of mn elements a ij (i =,, m; j =,, n) written in the form a a 2 a n A = a 2 a 22 a 2n a m a m2 a mn is said to be
More informationSection V.3: Dot Product
Section V.3: Dot Product Introduction So far we have looked at operations on a single vector. There are a number of ways to combine two vectors. Vector addition and subtraction will not be covered here,
More informationSolving a System of Equations
11 Solving a System of Equations 111 Introduction The previous chapter has shown how to solve an algebraic equation with one variable. However, sometimes there is more than one unknown that must be determined
More information1 VECTOR SPACES AND SUBSPACES
1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such
More informationMATHEMATICS (CLASSES XI XII)
MATHEMATICS (CLASSES XI XII) General Guidelines (i) All concepts/identities must be illustrated by situational examples. (ii) The language of word problems must be clear, simple and unambiguous. (iii)
More informationDETERMINANTS. b 2. x 2
DETERMINANTS 1 Systems of two equations in two unknowns A system of two equations in two unknowns has the form a 11 x 1 + a 12 x 2 = b 1 a 21 x 1 + a 22 x 2 = b 2 This can be written more concisely in
More information4. MATRICES Matrices
4. MATRICES 170 4. Matrices 4.1. Definitions. Definition 4.1.1. A matrix is a rectangular array of numbers. A matrix with m rows and n columns is said to have dimension m n and may be represented as follows:
More informationDiagonal, Symmetric and Triangular Matrices
Contents 1 Diagonal, Symmetric Triangular Matrices 2 Diagonal Matrices 2.1 Products, Powers Inverses of Diagonal Matrices 2.1.1 Theorem (Powers of Matrices) 2.2 Multiplying Matrices on the Left Right by
More informationWe call this set an ndimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P.
Volumes of parallelograms 1 Chapter 8 Volumes of parallelograms In the present short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to
More informationELEMENTS OF VECTOR ALGEBRA
ELEMENTS OF VECTOR ALGEBRA A.1. VECTORS AND SCALAR QUANTITIES We have now proposed sets of basic dimensions and secondary dimensions to describe certain aspects of nature, but more than just dimensions
More informationLinear Dependence Tests
Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks
More informationLinear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University
Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone
More informationImages and Kernels in Linear Algebra By Kristi Hoshibata Mathematics 232
Images and Kernels in Linear Algebra By Kristi Hoshibata Mathematics 232 In mathematics, there are many different fields of study, including calculus, geometry, algebra and others. Mathematics has been
More information1 Introduction to Matrices
1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns
More informationLinear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices
MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two
More informationInner product. Definition of inner product
Math 20F Linear Algebra Lecture 25 1 Inner product Review: Definition of inner product. Slide 1 Norm and distance. Orthogonal vectors. Orthogonal complement. Orthogonal basis. Definition of inner product
More informationUsing the Singular Value Decomposition
Using the Singular Value Decomposition Emmett J. Ientilucci Chester F. Carlson Center for Imaging Science Rochester Institute of Technology emmett@cis.rit.edu May 9, 003 Abstract This report introduces
More informationLectures notes on orthogonal matrices (with exercises) 92.222  Linear Algebra II  Spring 2004 by D. Klain
Lectures notes on orthogonal matrices (with exercises) 92.222  Linear Algebra II  Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n realvalued matrix A is said to be an orthogonal
More informationAlgebra 2 Chapter 1 Vocabulary. identity  A statement that equates two equivalent expressions.
Chapter 1 Vocabulary identity  A statement that equates two equivalent expressions. verbal model A word equation that represents a reallife problem. algebraic expression  An expression with variables.
More informationRow and column operations
Row and column operations It is often very useful to apply row and column operations to a matrix. Let us list what operations we re going to be using. 3 We ll illustrate these using the example matrix
More informationCITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION
No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August
More information1 Eigenvalues and Eigenvectors
Math 20 Chapter 5 Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors. Definition: A scalar λ is called an eigenvalue of the n n matrix A is there is a nontrivial solution x of Ax = λx. Such an x
More informationSection 6.1  Inner Products and Norms
Section 6.1  Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,
More information