# Notes on Symmetric Matrices


CPSC 536N: Randomized Algorithms, Term 2. Prof. Nick Harvey, University of British Columbia.

## 1 Symmetric Matrices

We review some basic results concerning symmetric matrices. All matrices that we discuss are over the real numbers. Let $A$ be a real, symmetric matrix of size $d \times d$ and let $I$ denote the $d \times d$ identity matrix. Perhaps the most important and useful property of symmetric matrices is that their eigenvalues behave very nicely.

**Definition 1.** Let $U$ be a $d \times d$ matrix. The matrix $U$ is called an **orthogonal matrix** if $U^T U = I$. This implies that $U U^T = I$, by uniqueness of inverses.

**Fact 2 (Spectral Theorem).** Let $A$ be any $d \times d$ symmetric matrix. There exists an orthogonal matrix $U$ and a (real) diagonal matrix $D$ such that
$$A = U D U^T.$$
This is called a **spectral decomposition** of $A$. Let $u_i$ be the $i$th column of $U$ and let $\lambda_i$ denote the $i$th diagonal entry of $D$. Then $\{u_1, \ldots, u_d\}$ is an orthonormal basis consisting of eigenvectors of $A$, and $\lambda_i$ is the eigenvalue corresponding to $u_i$. We can also write
$$A = \sum_{i=1}^d \lambda_i u_i u_i^T. \tag{1}$$
The eigenvalues $\lambda_1, \ldots, \lambda_d$ are uniquely determined by $A$, up to reordering.

**Caution.** The product of two symmetric matrices is usually not symmetric.

### 1.1 Positive semi-definite matrices

**Definition 3.** Let $A$ be any $d \times d$ symmetric matrix. The matrix $A$ is called **positive semi-definite** if all of its eigenvalues are non-negative. This is denoted $A \succeq 0$, where here $0$ denotes the zero matrix. The matrix $A$ is called **positive definite** if all of its eigenvalues are strictly positive. This is denoted $A \succ 0$.

There are many equivalent characterizations of positive semi-definiteness. One useful condition is
$$A \succeq 0 \iff x^T A x \geq 0 \quad \forall x \in \mathbb{R}^d. \tag{2}$$
Similarly,
$$A \succ 0 \iff x^T A x > 0 \quad \forall x \in \mathbb{R}^d \setminus \{0\}.$$
Another condition is: $A \succeq 0$ if and only if there exists a matrix $V$ such that $A = V V^T$.

The positive semi-definite condition can be used to define a partial ordering on all symmetric matrices. This is called the **Löwner ordering** or the positive semi-definite ordering. For any two symmetric matrices $A$ and $B$, we write $A \succeq B$ if $A - B \succeq 0$.
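To make the spectral decomposition concrete, here is a minimal NumPy sketch (an illustration added to these notes, not part of the original): `numpy.linalg.eigh` computes a decomposition $A = UDU^T$ for a symmetric matrix, and we can verify the rank-one expansion (1) directly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Build a random symmetric d x d matrix.
M = rng.standard_normal((d, d))
A = (M + M.T) / 2

# eigh is specialized to symmetric matrices: it returns real eigenvalues
# (in ascending order) and an orthogonal matrix whose columns are eigenvectors.
lam, U = np.linalg.eigh(A)

# U is orthogonal: U^T U = I.
assert np.allclose(U.T @ U, np.eye(d))

# Spectral decomposition (Fact 2): A = U D U^T.
assert np.allclose(A, U @ np.diag(lam) @ U.T)

# Rank-one expansion (1): A = sum_i lambda_i u_i u_i^T.
A_sum = sum(lam[i] * np.outer(U[:, i], U[:, i]) for i in range(d))
assert np.allclose(A, A_sum)
```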

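The equivalent characterizations of positive semi-definiteness can be spot-checked the same way. In the sketch below (again illustrative, not from the original notes) a PSD matrix is built as $VV^T$; its eigenvalues are non-negative, the quadratic form in (2) is non-negative on random vectors, and a simple Löwner comparison holds.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

# A = V V^T is positive semi-definite for any V.
V = rng.standard_normal((d, 3))
A = V @ V.T

# Definition 3: all eigenvalues are non-negative (up to round-off).
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)

# Condition (2): x^T A x >= 0; spot-check on random vectors.
for _ in range(1000):
    x = rng.standard_normal(d)
    assert x @ A @ x >= -1e-12

# Loewner ordering: A + I >= A, since (A + I) - A = I >= 0.
assert np.all(np.linalg.eigvalsh((A + np.eye(d)) - A) >= -1e-12)
```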
**Fact 4.** $A \succeq B$ if and only if $v^T A v \geq v^T B v$ for all vectors $v$.

*Proof:* Note that $v^T A v \geq v^T B v$ holds if and only if $v^T (A - B) v \geq 0$ holds. By (2), this is equivalent to $A - B \succeq 0$, which is the definition of $A \succeq B$. $\blacksquare$

The set of positive semi-definite matrices is closed under addition and non-negative scaling. (Such a set is called a convex cone.)

**Fact 5.** Let $A$ and $B$ be positive semi-definite matrices of size $d \times d$. Let $\alpha, \beta$ be non-negative scalars. Then $\alpha A + \beta B \succeq 0$.

*Proof:* This follows easily from (2). $\blacksquare$

**Caution.** The Löwner ordering does not have all of the nice properties that the usual ordering of real numbers has. For example, if $A \succeq B \succeq 0$ then it is not necessarily true that $A^2 \succeq B^2$.

**Fact 6.** Let $A$ and $B$ be symmetric, $d \times d$ matrices and let $C$ be any $d \times d$ matrix. If $A \succeq B$ then $C A C^T \succeq C B C^T$. Moreover, if $C$ is non-singular then the "if" is actually "if and only if".

*Proof:* Since $A \succeq B$, we have $A - B \succeq 0$, so there exists a matrix $V$ such that $A - B = V V^T$. Then
$$C (A - B) C^T = C V V^T C^T = (C V)(C V)^T,$$
which shows that $C (A - B) C^T$ is positive semi-definite. Conversely, suppose that $C$ is non-singular and $C A C^T \succeq C B C^T$. Multiply on the left by $C^{-1}$ and on the right by $(C^{-1})^T = (C^T)^{-1}$ to get $A \succeq B$. $\blacksquare$

### 1.2 Spectral Norm

Just as there are many norms of vectors, there are many norms of matrices too. We will only consider the spectral norm of $A$. (This is also called the $\ell_2$ operator norm and the Schatten $\infty$-norm.)

**Definition 7.** Let $A$ be a symmetric $d \times d$ matrix with eigenvalues $\lambda_1, \ldots, \lambda_d$. The **spectral norm** of $A$ is denoted $\|A\|$ and is defined by $\|A\| = \max_i |\lambda_i|$.

Recall that an eigenvalue of $A$ is any real number $z$ such that there exists a non-zero vector $u$ for which $A u = z u$. Then we also have $(A + tI) u = (z + t) u$, which implies that $z + t$ is an eigenvalue of $A + tI$. In fact, the entire set of eigenvalues of $A + tI$ is $\{\lambda_1 + t, \ldots, \lambda_d + t\}$.

**Fact 8.** Let $A$ be a symmetric matrix. Let $\lambda_{\min}$ and $\lambda_{\max}$ respectively denote the smallest and largest eigenvalues of $A$. Then $\lambda_{\min} I \preceq A \preceq \lambda_{\max} I$.

*Proof:* Let $\lambda_1, \ldots, \lambda_d$ be all the eigenvalues of $A$. We will show only the second inequality, which is equivalent to $\lambda_{\max} I - A \succeq 0$. As observed above, the eigenvalues of $\lambda_{\max} I - A$ are $\{\lambda_{\max} - \lambda_1, \ldots, \lambda_{\max} - \lambda_d\}$. The minimum of these eigenvalues is
$$\min_j \left( \lambda_{\max} - \lambda_j \right) = \lambda_{\max} - \max_j \lambda_j = 0.$$
So $\lambda_{\max} I - A \succeq 0$. $\blacksquare$

In particular, for any symmetric matrix $A$ we have $-\|A\| \, I \preceq A \preceq \|A\| \, I$.
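The caution above can be verified on a concrete pair of matrices. In this sketch (illustrative, not from the original notes), $A \succeq B \succeq 0$ yet $A^2 - B^2$ has a negative eigenvalue; the same snippet also checks Fact 8 numerically.

```python
import numpy as np

# A Loewner-ordering counterexample: A >= B >= 0, but A^2 is NOT >= B^2.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.array([[1.0, 1.0],
              [1.0, 1.0]])

assert np.all(np.linalg.eigvalsh(B) >= -1e-12)      # B >= 0
assert np.all(np.linalg.eigvalsh(A - B) >= -1e-12)  # A - B >= 0, so A >= B

# But A^2 - B^2 has a negative eigenvalue, so A^2 is not >= B^2.
assert np.min(np.linalg.eigvalsh(A @ A - B @ B)) < 0

# Fact 8: lambda_min * I <= A <= lambda_max * I.
lam = np.linalg.eigvalsh(A)
lam_min, lam_max = lam[0], lam[-1]
I = np.eye(2)
assert np.all(np.linalg.eigvalsh(A - lam_min * I) >= -1e-12)
assert np.all(np.linalg.eigvalsh(lam_max * I - A) >= -1e-12)
```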

### 1.3 Trace

**Definition 9.** Let $A$ be an arbitrary $d \times d$ matrix (not necessarily symmetric). The **trace** of $A$, denoted $\mathrm{tr}(A)$, is the sum of the diagonal entries of $A$.

**Fact 10 (Linearity of Trace).** Let $A$ and $B$ be arbitrary $d \times d$ matrices and let $\alpha, \beta$ be scalars. Then $\mathrm{tr}(\alpha A + \beta B) = \alpha \, \mathrm{tr}(A) + \beta \, \mathrm{tr}(B)$.

**Fact 11 (Cyclic Property of Trace).** Let $A$ be an arbitrary $n \times m$ matrix and let $B$ be an arbitrary $m \times n$ matrix. Then $\mathrm{tr}(AB) = \mathrm{tr}(BA)$.

*Proof:* This easily follows from the definitions. The $i$th diagonal entry of $AB$ is $\sum_{j=1}^m A_{i,j} B_{j,i}$, so
$$\mathrm{tr}(AB) = \sum_{i=1}^n \sum_{j=1}^m A_{i,j} B_{j,i} = \sum_{j=1}^m \sum_{i=1}^n B_{j,i} A_{i,j},$$
by swapping the order of summation. This equals $\mathrm{tr}(BA)$ because the $j$th diagonal entry of $BA$ is $\sum_{i=1}^n B_{j,i} A_{i,j}$. $\blacksquare$

**Fact 12.** Let $A$ be a symmetric, $d \times d$ matrix with eigenvalues $\{\lambda_1, \ldots, \lambda_d\}$. Then $\mathrm{tr}(A) = \sum_{i=1}^d \lambda_i$.

*Proof:* Let $A = U D U^T$ as in Fact 2. Then
$$\mathrm{tr}(A) = \mathrm{tr}(U D U^T) = \mathrm{tr}(U^T U D) = \mathrm{tr}(D) = \sum_{i=1}^d \lambda_i,$$
as required. $\blacksquare$

**Claim 13.** Let $A$, $B$ and $C$ be symmetric $d \times d$ matrices satisfying $A \succeq 0$ and $B \preceq C$. Then $\mathrm{tr}(AB) \leq \mathrm{tr}(AC)$.

*Proof:* Suppose first that $A$ is rank-one, i.e., $A = v v^T$ for some vector $v$. Then
$$\mathrm{tr}(AB) = \mathrm{tr}(v v^T B) = \mathrm{tr}(v^T B v) = v^T B v \leq v^T C v = \mathrm{tr}(v^T C v) = \mathrm{tr}(v v^T C) = \mathrm{tr}(AC).$$
Now we consider the case where $A$ has arbitrary rank. Let $A = \sum_{i=1}^d \lambda_i u_i u_i^T$. Since we assume that $A \succeq 0$, we can set $v_i = \sqrt{\lambda_i}\, u_i$ and write $A = \sum_{i=1}^d v_i v_i^T$. Then
$$\mathrm{tr}(AB) = \mathrm{tr}\Big( \sum_i v_i v_i^T B \Big) = \sum_i \mathrm{tr}(v_i v_i^T B) \leq \sum_i \mathrm{tr}(v_i v_i^T C) = \mathrm{tr}\Big( \sum_i v_i v_i^T C \Big) = \mathrm{tr}(AC).$$
Here the second and third equalities are by linearity of trace, and the inequality follows from the rank-one case. $\blacksquare$
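These trace identities are easy to sanity-check numerically. The following sketch (an added illustration, not part of the original notes) verifies the cyclic property, Fact 12, and Claim 13 on random matrices.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5

# Fact 11 (cyclic property): tr(AB) = tr(BA), even for rectangular A, B.
A_rect = rng.standard_normal((3, 7))
B_rect = rng.standard_normal((7, 3))
assert np.isclose(np.trace(A_rect @ B_rect), np.trace(B_rect @ A_rect))

# Fact 12: for a symmetric matrix, the trace is the sum of the eigenvalues.
M = rng.standard_normal((d, d))
S = (M + M.T) / 2
assert np.isclose(np.trace(S), np.sum(np.linalg.eigvalsh(S)))

# Claim 13: if A >= 0 and B <= C, then tr(AB) <= tr(AC).
G = rng.standard_normal((d, d))
A = G @ G.T                       # A >= 0
N = rng.standard_normal((d, d))
B = (N + N.T) / 2                 # arbitrary symmetric B
W = rng.standard_normal((d, d))
C = B + W @ W.T                   # C - B = W W^T >= 0, i.e. B <= C
assert np.trace(A @ B) <= np.trace(A @ C) + 1e-9
```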

**Corollary 14.** If $A \succeq 0$ then $\mathrm{tr}(AB) \leq \|B\| \, \mathrm{tr}(A)$.

*Proof:* Apply Claim 13 with $C = \|B\| \, I$ (note that $B \preceq \|B\| \, I$ by Fact 8). We get
$$\mathrm{tr}(AB) \leq \mathrm{tr}(A \cdot \|B\| I) = \|B\| \, \mathrm{tr}(A),$$
by linearity of trace, since $\|B\|$ is a scalar. $\blacksquare$

### 1.4 Functions applied to matrices

For any function $f : \mathbb{R} \to \mathbb{R}$, we can extend $f$ to a function on symmetric matrices as follows. Let $A = U D U^T$ be a spectral decomposition of $A$, where $\lambda_i$ is the $i$th diagonal entry of $D$. Then we define
$$f(A) = U \begin{pmatrix} f(\lambda_1) & & \\ & \ddots & \\ & & f(\lambda_d) \end{pmatrix} U^T.$$
In other words, $f(A)$ is defined simply by applying the function $f$ to the eigenvalues of $A$.

**Caution.** $f(A)$ is *not* obtained by applying the function $f$ to the entries of $A$!

One important example of applying a function to a matrix is powering. Let $A$ be any positive semi-definite matrix with spectral decomposition $U D U^T$. For any $c \in \mathbb{R}$ we can define $A^c$ as follows. Let $S$ be the diagonal matrix with $S_{i,i} = (D_{i,i})^c$. Then we define $A^c = U S U^T$. For example, consider the case $c = 1/2$. Note that
$$A^{1/2} A^{1/2} = U S U^T U S U^T = U S S U^T = U D U^T = A.$$
So the square of the square root is the matrix itself, as one would expect. If $A$ is non-singular, the matrix $A^{-1}$ obtained by taking $c = -1$ is the same as the usual matrix inverse (by uniqueness of inverses, since $A^{-1} A = I$). So we see that the inverse of a non-singular symmetric matrix is obtained by inverting its eigenvalues.

Inequalities on real-valued functions also give us inequalities on matrices.

**Claim 15.** Let $f : \mathbb{R} \to \mathbb{R}$ and $g : \mathbb{R} \to \mathbb{R}$ satisfy $f(x) \leq g(x)$ for all $x \in [l, u] \subseteq \mathbb{R}$. Let $A$ be a symmetric matrix for which all eigenvalues lie in $[l, u]$ (i.e., $l I \preceq A \preceq u I$). Then $f(A) \preceq g(A)$.

*Proof:* Let $A = U D U^T$ be a spectral decomposition of $A$, where $\lambda_i$ is the $i$th diagonal entry of $D$. We wish to show $f(A) \preceq g(A)$, i.e., $g(A) - f(A) \succeq 0$. By (2), this is equivalent to
$$v^T U \begin{pmatrix} g(\lambda_1) - f(\lambda_1) & & \\ & \ddots & \\ & & g(\lambda_d) - f(\lambda_d) \end{pmatrix} U^T v \;\geq\; 0 \quad \forall v \in \mathbb{R}^d.$$
Since $U$ is non-singular, this is equivalent to
$$w^T \begin{pmatrix} g(\lambda_1) - f(\lambda_1) & & \\ & \ddots & \\ & & g(\lambda_d) - f(\lambda_d) \end{pmatrix} w \;\geq\; 0 \quad \forall w \in \mathbb{R}^d,$$
where we have simply substituted $w = U^T v$. The left-hand side is simply $\sum_i w_i^2 \big( g(\lambda_i) - f(\lambda_i) \big)$, which is non-negative by our assumptions on $g$, $f$ and the $\lambda_i$'s. $\blacksquare$
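A short numerical illustration of these definitions (added here, not part of the original notes): the matrix square root computed through the spectral decomposition squares back to $A$, and it differs from the entrywise square root, exactly as the caution above warns. The helper `apply_fn` is a hypothetical name for the construction in Section 1.4.

```python
import numpy as np

def apply_fn(A, f):
    """Apply a scalar function f to a symmetric matrix A (Section 1.4):
    f(A) = U diag(f(lambda_1), ..., f(lambda_d)) U^T."""
    lam, U = np.linalg.eigh(A)
    return U @ np.diag(f(lam)) @ U.T

rng = np.random.default_rng(3)
V = rng.standard_normal((4, 4))
A = V @ V.T  # positive semi-definite (almost surely positive definite)

# c = 1/2: the square of the square root is the matrix itself.
A_half = apply_fn(A, np.sqrt)
assert np.allclose(A_half @ A_half, A)

# Caution: f(A) is NOT the entrywise application of f to A.
assert not np.allclose(A_half, np.sqrt(np.abs(A)))

# c = -1 recovers the usual inverse when A is non-singular.
A_inv = apply_fn(A, lambda x: 1.0 / x)
assert np.allclose(A_inv @ A, np.eye(4))

# Claim 15 with f(x) = x and g(x) = x^2 on [1, u]: if I <= B then B <= B^2.
B = A + (1.0 - np.linalg.eigvalsh(A)[0]) * np.eye(4)  # shift: lambda_min = 1
assert np.all(np.linalg.eigvalsh(apply_fn(B, np.square) - B) >= -1e-9)
```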

### 1.5 The pseudoinverse

Sometimes we would like to invert a matrix which is not quite invertible. A useful alternative is to invert the part of the matrix that is invertible (its image) and to leave alone the part of the matrix that is not invertible (its kernel). We can do this by applying the real-valued function
$$f(x) = \begin{cases} 1/x & (x \neq 0) \\ 0 & (x = 0). \end{cases}$$
The function $f$ inverts all non-zero numbers and maps $0$ to $0$. So if we apply $f$ to a symmetric matrix, all non-zero eigenvalues will be inverted, and the zero eigenvalues will remain unchanged. The resulting matrix is called the **pseudoinverse** and is denoted $A^+$. If $A$ is indeed invertible then $A^+ = A^{-1}$.

Suppose $A$ is not invertible. We cannot have $A A^+ = I$ since $I$ has full rank and $A$ does not. However, we have the next best thing: $A A^+$ is the identity operator on the image of $A$. More precisely, $A A^+$ is the orthogonal projection onto the image of $A$, which we denote $I_{\mathrm{im}\, A}$. In other words, for every $v$ in the image of $A$ we have $A A^+ v = v$, and for every $v$ in the kernel of $A$ we have $A A^+ v = 0$.

**Claim 16.** Let $A$ and $B_1, \ldots, B_k$ be symmetric, positive semi-definite matrices of the same size. Let $A^{+/2} = (A^+)^{1/2}$ denote the square root of the pseudoinverse of $A$. Let $l$ and $u$ be arbitrary scalars. Suppose that $\mathrm{im}(B_i)$ is a subspace of $\mathrm{im}(A)$ for all $i$. Then
$$l A \;\preceq\; \sum_{i=1}^k B_i \;\preceq\; u A \iff l \, I_{\mathrm{im}\, A} \;\preceq\; \sum_{i=1}^k A^{+/2} B_i A^{+/2} \;\preceq\; u \, I_{\mathrm{im}\, A}.$$

*Proof:* ($\Longrightarrow$): Since $A^{+/2} A A^{+/2} = I_{\mathrm{im}\, A}$, this follows from Fact 6.

($\Longleftarrow$): This also follows from Fact 6, because $A^{1/2} I_{\mathrm{im}\, A} A^{1/2} = A$ and
$$A^{1/2} A^{+/2} B_i A^{+/2} A^{1/2} = I_{\mathrm{im}\, A} \, B_i \, I_{\mathrm{im}\, A} = I_{\mathrm{im}\, B_i} \, B_i \, I_{\mathrm{im}\, B_i} = B_i,$$
where the second equality follows from the assumption that $\mathrm{im}(B_i) \subseteq \mathrm{im}(A)$. $\blacksquare$

### 1.6 Matrix exponentials

We will mainly be interested in the case $f(x) = e^x$. For any symmetric matrix $A$, note that $e^A$ is positive semi-definite. Whereas in the scalar case $e^{a+b} = e^a e^b$ holds, it is not necessarily true in the matrix case that $e^{A+B} = e^A e^B$. It turns out that there is a useful inequality along these lines. It is called the Golden-Thompson inequality.

**Theorem 17 (Golden-Thompson Inequality).** Let $A$ and $B$ be symmetric $d \times d$ matrices. Then
$$\mathrm{tr}(e^{A+B}) \leq \mathrm{tr}(e^A e^B).$$
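To illustrate Section 1.5, the sketch below (not from the original notes) builds a singular PSD matrix, forms its pseudoinverse by inverting only the non-zero eigenvalues, and checks that $A A^+$ is the orthogonal projection onto $\mathrm{im}(A)$. It agrees with NumPy's built-in `numpy.linalg.pinv`.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4

# A rank-2 PSD matrix of size 4x4, hence singular.
V = rng.standard_normal((d, 2))
A = V @ V.T

# Pseudoinverse: invert the non-zero eigenvalues, map zero to zero.
lam, U = np.linalg.eigh(A)
inv_lam = np.zeros_like(lam)
nonzero = np.abs(lam) > 1e-10
inv_lam[nonzero] = 1.0 / lam[nonzero]
A_plus = U @ np.diag(inv_lam) @ U.T

# Agrees with NumPy's built-in pseudoinverse.
assert np.allclose(A_plus, np.linalg.pinv(A), atol=1e-8)

# A A^+ is the orthogonal projection onto im(A).
P = A @ A_plus
assert np.allclose(P @ P, P)   # idempotent
assert np.allclose(P, P.T)     # symmetric
assert np.allclose(P @ A, A)   # acts as the identity on im(A)

# Vectors in the kernel of A are mapped to zero.
v_ker = U[:, ~nonzero][:, 0]
assert np.allclose(P @ v_ker, 0)
```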

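Finally, a numerical spot-check of the Golden-Thompson inequality (an illustration under the stated assumptions, not a proof, and not part of the original notes): using the eigenvalue-based matrix exponential of Section 1.4, random symmetric $A, B$ satisfy $\mathrm{tr}(e^{A+B}) \leq \mathrm{tr}(e^A e^B)$, with equality when $A$ and $B$ commute.

```python
import numpy as np

def expm_sym(A):
    """Matrix exponential of a symmetric matrix via Section 1.4:
    e^A = U diag(exp(lambda_i)) U^T."""
    lam, U = np.linalg.eigh(A)
    return U @ np.diag(np.exp(lam)) @ U.T

rng = np.random.default_rng(5)

def rand_sym(d):
    M = rng.standard_normal((d, d))
    return (M + M.T) / 2

# Golden-Thompson: tr(e^{A+B}) <= tr(e^A e^B) for symmetric A, B.
for _ in range(100):
    A, B = rand_sym(4), rand_sym(4)
    lhs = np.trace(expm_sym(A + B))
    rhs = np.trace(expm_sym(A) @ expm_sym(B))
    assert lhs <= rhs * (1 + 1e-9)

# Equality holds when A and B commute, e.g. B = A^2.
A = rand_sym(4)
B = A @ A
assert np.isclose(np.trace(expm_sym(A + B)),
                  np.trace(expm_sym(A) @ expm_sym(B)))
```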