Introduction to Mobile Robotics Compact Course on Linear Algebra. Wolfram Burgard, Maren Bennewitz, Diego Tipaldi, Luciano Spinello

Vectors: arrays of numbers. A vector represents a point in an n-dimensional space.

Vectors: Scalar-Vector Product. Multiplying a vector a by a scalar k, k·a = (k a_1, ..., k a_n), changes the length of the vector but not its direction.

Vectors: Sum. The sum of two vectors, a + b = (a_1 + b_1, ..., a_n + b_n), is commutative and can be visualized as chaining the vectors.

Vectors: Dot Product. The inner product of two vectors is a scalar: a · b = a^T b = sum_i a_i b_i = |a| |b| cos(theta). If one of the two vectors, e.g. b, has unit length |b| = 1, the inner product a · b returns the length of the projection of a along the direction of b. If a · b = 0, the two vectors are orthogonal.
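A minimal numerical sketch of these properties (using NumPy; the vector values are made up for illustration):

import numpy as np

a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])           # unit vector along x

dot = a @ b                        # inner product: a . b = sum_i a_i b_i
print(dot)                         # 3.0 = length of the projection of a onto b

c = np.array([0.0, 2.0])
print(np.isclose(b @ c, 0.0))      # True: b and c are orthogonal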

Vectors: Linear (In)Dependence. A vector b is linearly dependent on the vectors a_1, ..., a_k if b can be obtained by summing up the a_i, properly scaled: b = lambda_1 a_1 + ... + lambda_k a_k. If there exist no lambda_1, ..., lambda_k such that this holds, then b is independent of a_1, ..., a_k.

Matrices. A matrix A is written as a table of values with rows and columns; the 1st index refers to the row, the 2nd index refers to the column: A = (a_ij). Note: a d-dimensional vector is equivalent to a d×1 matrix.

Matrices as Collections of Vectors Column vectors

Matrices as Collections of Vectors Row vectors

Important Matrix Operations: multiplication by a scalar, sum (commutative, associative), multiplication by a vector, matrix product (not commutative), inversion (square, full rank), transposition.

Scalar Multiplication & Sum. In the scalar multiplication, every element of the vector or matrix is multiplied by the scalar. The sum of two vectors is a vector consisting of the pair-wise sums of the individual entries; the sum of two matrices is a matrix consisting of the pair-wise sums of the individual entries.

Matrix-Vector Product. The i-th component of c = A b is the dot product of the i-th row of A with b: c_i = sum_j a_ij b_j. Equivalently, the vector c is a linear combination of the column vectors of A with coefficients given by the entries of b.

Matrix-Vector Product. If the column vectors of A represent a reference system (a basis), the product c = A b computes the global transformation of the vector b according to that basis.
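A short sketch of the two readings of the matrix-vector product (the matrix and vector values are arbitrary):

import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 0.0]])         # 3x2 matrix: two column vectors in R^3
b = np.array([2.0, -1.0])

c = A @ b                          # i-th component: dot product of i-th row of A with b

# equivalent reading: linear combination of the columns of A with coefficients b
c_alt = b[0] * A[:, 0] + b[1] * A[:, 1]
print(np.allclose(c, c_alt))       # True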

Matrix-Matrix Product. The product C = A B can be defined through the dot products of the row vectors of A and the column vectors of B, c_ij = sum_k a_ik b_kj, or as the linear combination of the columns of A scaled by the coefficients of the columns of B.

Matrix-Matrix Product. If we consider the second interpretation, we see that the columns of C are the transformations of the columns of B through A. All the interpretations made for the matrix-vector product hold.

Rank. The rank of a matrix is the maximum number of linearly independent rows (or, equivalently, columns); it is the dimension of the image of the transformation. When A is an n×m matrix we have 0 ≤ rank(A) ≤ min(n, m), and rank(A) = 0 holds iff A is the null matrix. Computation of the rank is done by Gaussian elimination on the matrix and counting the number of non-zero rows, as sketched below.
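A sketch of the rank computation by Gaussian elimination as described above (partial pivoting and the tolerance value are additions for numerical robustness; the test matrix is arbitrary):

import numpy as np

def rank_by_elimination(A, tol=1e-10):
    """Bring A into row-echelon form and count the non-zero rows."""
    M = np.array(A, dtype=float)
    n, m = M.shape
    row = 0
    for col in range(m):
        if row >= n:
            break
        # partial pivoting: pick the largest entry in this column
        pivot = row + np.argmax(np.abs(M[row:, col]))
        if abs(M[pivot, col]) < tol:
            continue                      # no pivot in this column
        M[[row, pivot]] = M[[pivot, row]]
        # eliminate the entries below the pivot
        M[row + 1:] -= np.outer(M[row + 1:, col] / M[row, col], M[row])
        row += 1
    return row                            # number of non-zero rows

A = np.array([[1, 2, 3],
              [2, 4, 6],                  # linearly dependent on the first row
              [0, 1, 1]])
print(rank_by_elimination(A))             # 2
print(np.linalg.matrix_rank(A))           # 2 (NumPy uses an SVD internally)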

Inverse. If A is a square matrix of full rank, then there is a unique matrix B = A^-1 such that A B = I holds. The i-th row of A and the j-th column of A^-1 are orthogonal (if i ≠ j), or their dot product is 1 (if i = j).

Matrix Inversion. The i-th column x_i of A^-1 can be found by solving the linear system A x_i = e_i, where e_i is the i-th column of the identity matrix.
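A small sketch of this column-by-column inversion (the 2×2 matrix below is just an arbitrary full-rank example):

import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
n = A.shape[0]

A_inv = np.zeros_like(A)
for i in range(n):
    e_i = np.eye(n)[:, i]                    # i-th column of the identity matrix
    A_inv[:, i] = np.linalg.solve(A, e_i)    # i-th column of the inverse

print(np.allclose(A @ A_inv, np.eye(n)))     # True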

Determinant (det). Only defined for square matrices. The inverse of A exists if and only if det(A) ≠ 0. For 2×2 matrices: let A = [[a_11, a_12], [a_21, a_22]], then det(A) = a_11 a_22 − a_12 a_21. For 3×3 matrices the Sarrus rule holds.

Determinant. What about general n×n matrices? Let A_ij be the submatrix obtained from A by deleting the i-th row and the j-th column. The 3×3 determinant can be rewritten in terms of 2×2 determinants: det(A) = a_11 det(A_11) − a_12 det(A_12) + a_13 det(A_13).

Determinant. For general n×n matrices, let C_ij = (−1)^(i+j) det(A_ij) be the (i, j)-cofactor; then det(A) = a_11 C_11 + a_12 C_12 + ... + a_1n C_1n. This is called the cofactor expansion across the first row.

Determinant. Problem: take a 25 × 25 matrix (which is considered small). The cofactor expansion method requires about n! multiplications. For n = 25, this is 1.5 × 10^25 multiplications, for which a present-day supercomputer would take 500,000 years. There are much faster methods, namely using Gaussian elimination to bring the matrix into triangular form, because for triangular matrices the determinant is the product of the diagonal elements.
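A sketch of this faster approach: elimination brings the matrix into triangular form, and the determinant is the product of the diagonal, with a sign flip for every row swap (the partial pivoting is an added robustness choice; the random test matrix is arbitrary):

import numpy as np

def det_by_elimination(A):
    M = np.array(A, dtype=float)
    n = M.shape[0]
    sign = 1.0
    for k in range(n):
        pivot = k + np.argmax(np.abs(M[k:, k]))
        if M[pivot, k] == 0.0:
            return 0.0                       # singular matrix
        if pivot != k:
            M[[k, pivot]] = M[[pivot, k]]
            sign = -sign                     # a row swap flips the sign
        M[k + 1:] -= np.outer(M[k + 1:, k] / M[k, k], M[k])
    return sign * np.prod(np.diag(M))        # product of the diagonal of the triangular form

A = np.random.rand(25, 25)                   # the "small" 25 x 25 case from the slide
print(np.isclose(det_by_elimination(A), np.linalg.det(A)))   # True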

Determinant: Properties. Row operations (A' is still a square matrix): if A' results from A by interchanging two rows, then det(A') = −det(A); if A' results from A by multiplying one row by a number k, then det(A') = k det(A); if A' results from A by adding a multiple of one row to another row, then det(A') = det(A). Transpose: det(A^T) = det(A). Multiplication: det(A B) = det(A) det(B). This does not apply to addition: in general det(A + B) ≠ det(A) + det(B)!

Determinant: Applications. Compute eigenvalues: solve the characteristic polynomial det(A − λI) = 0. Area and volume: |det(A)| is the area (volume) of the parallelogram (parallelepiped) spanned by the rows r_i of A (r_i is the i-th row).

Orthonormal Matrix. A matrix is orthonormal iff its column (row) vectors represent an orthonormal basis. As a linear transformation, it is norm-preserving. Some properties: the transpose is the inverse (A^T = A^-1); the determinant has unit norm (det(A) = ±1).

Rotation Matrix. A rotation matrix is an orthonormal matrix with det(R) = +1. 2D rotation: R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]]. 3D rotations are usually composed from rotations along the main axes, R_x(θ), R_y(θ), R_z(θ). IMPORTANT: rotations are not commutative.
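A brief sketch of 2D/3D rotation matrices and of the non-commutativity (the angles are arbitrary):

import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def rot_z(theta):                            # 3D rotation about the z axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(theta):                            # 3D rotation about the x axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s,  c]])

R = rot2d(np.pi / 4)
print(np.allclose(R.T @ R, np.eye(2)))       # orthonormal: transpose is the inverse
print(np.isclose(np.linalg.det(R), 1.0))     # det = +1

A, B = rot_x(0.3), rot_z(0.7)
print(np.allclose(A @ B, B @ A))             # False: rotations do not commute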

Matrices to Represent Affine Transformations. A general and easy way to describe a 3D transformation is via matrices combining a rotation matrix R and a translation vector t into a single homogeneous matrix T = [[R, t], [0, 1]]. This naturally takes into account the non-commutativity of the transformations and operates on homogeneous coordinates.

Combining Transformations. A simple interpretation: chaining of transformations (represented as homogeneous matrices). Matrix A represents the pose of a robot in the space; matrix B represents the position of a sensor on the robot. The sensor perceives an object at a given location p in its own frame [the sensor has no clue about where it is in the world]. Where is the object in the global frame? Bp gives the pose of the object with respect to the robot; ABp gives the pose of the object with respect to the world.
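A small sketch of this chaining with 2D homogeneous matrices (the robot pose, sensor offset, and point are invented for illustration):

import numpy as np

def homogeneous(theta, tx, ty):
    """2D pose as a 3x3 homogeneous transformation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

A = homogeneous(np.pi / 2, 1.0, 2.0)   # robot pose in the world frame
B = homogeneous(0.0, 0.5, 0.0)         # sensor pose on the robot
p = np.array([2.0, 0.0, 1.0])          # object in the sensor frame (homogeneous coordinates)

p_robot = B @ p                        # object w.r.t. the robot
p_world = A @ B @ p                    # object w.r.t. the world
print(p_robot, p_world)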

Positive Definite Matrix. The analogue of a positive number. Definition: a symmetric matrix A is positive definite iff x^T A x > 0 for every x ≠ 0. Example: any diagonal matrix with strictly positive diagonal entries.

Positive Definite Matrix: Properties. It is invertible, with positive definite inverse; all eigenvalues are real and > 0; the trace is > 0; it admits a Cholesky decomposition A = L L^T with L lower triangular.
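A short check of these properties (the matrix is an arbitrary symmetric positive definite example):

import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                  # symmetric, positive definite

print(np.all(np.linalg.eigvalsh(A) > 0))    # all (real) eigenvalues are positive
print(np.trace(A) > 0)                      # trace is positive

L = np.linalg.cholesky(A)                   # Cholesky decomposition: A = L L^T
print(np.allclose(L @ L.T, A))              # True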

Linear Systems (1). A x = b. Interpretations: a set of linear equations; a way to find the coordinates x, in the reference system defined by the columns of A, such that b is the result of the transformation A x. Solvable by Gaussian elimination.
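A minimal example of solving such a system numerically (NumPy's solver uses an LU factorization, i.e., Gaussian elimination; the system below is arbitrary):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)          # solves A x = b for a square, full-rank A
print(np.allclose(A @ x, b))       # True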

Linear Systems (2). Notes: many efficient solvers exist, e.g., conjugate gradients or sparse Cholesky decomposition. One can obtain a reduced system (A', b') by considering the matrix (A, b) and suppressing all the rows which are linearly dependent. Let A' x = b' be the reduced system, with A' of size n'×m, b' of size n'×1, and rank(A') = min(n', m). The system may then be either over-constrained (n' > m, more rows than columns) or under-constrained (n' < m, fewer rows than columns).

Over-Constrained Systems. More (independent) equations than variables: an over-constrained system does not admit an exact solution. However, if rank(A) = cols(A), one often computes the least-squares solution, i.e., the x that minimizes the norm of the residual ||A x − b||. Note: rank = maximum number of linearly independent rows/columns.
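A sketch of the least-squares solution of an over-constrained system (three equations, two unknowns; the numbers are arbitrary):

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # 3 equations, 2 unknowns: over-constrained
b = np.array([1.0, 2.0, 2.5])

x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x)          # the x minimizing ||A x - b||
print(rank)       # 2 = cols(A), so the least-squares solution is unique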

Under-Constrained Systems. More variables than (independent) equations: the system is under-constrained if the number of linearly independent rows of A is smaller than the number of unknowns. An under-constrained system admits infinitely many solutions; the dimension of this solution space is cols(A') − rows(A').

Jacobian Matrix. In general it is a non-square matrix. Given a vector-valued function f(x) = (f_1(x), ..., f_m(x))^T of a vector x = (x_1, ..., x_n)^T, the Jacobian matrix is defined as the m×n matrix J with entries J_ij = ∂f_i/∂x_j.

Jacobian Matrix. It gives the orientation of the tangent plane to the vector-valued function at a given point and generalizes the gradient of a scalar-valued function.
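A sketch of a finite-difference approximation of the Jacobian (the function f here is a made-up example mapping R^2 to R^3):

import numpy as np

def f(x):
    """Example vector-valued function f: R^2 -> R^3."""
    return np.array([x[0] * x[1],
                     np.sin(x[0]),
                     x[1] ** 2])

def numerical_jacobian(f, x, eps=1e-6):
    """J[i, j] ~= df_i / dx_j, approximated by central differences."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

print(numerical_jacobian(f, [1.0, 2.0]))
# analytic Jacobian at (1, 2): [[2, 1], [cos(1), 0], [0, 4]]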

Further Reading A quick and dirty guide to matrices is the Matrix Cookbook available at: http://matrixcookbook.com