# Review Jeopardy: Blue vs. Orange


## Jeopardy Round (Lectures 0-3)

\$200: How could I measure how far apart (i.e. how different) two observations, y₁ and y₂, are from each other?
A. Compute y₁ - y₂
B. Compute y₂ - y₁
C. Compute |y₁ - y₂|
D. Compute covariance(y₁, y₂)
E. Either (A) or (B)
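
For concreteness, a tiny NumPy check of the candidate formulas, using two made-up observations (the numbers are arbitrary): the signed differences depend on the order of subtraction, while the absolute difference does not.

```python
import numpy as np

y1, y2 = 3.0, 7.5           # two made-up scalar observations
print(y1 - y2)              # signed difference: -4.5
print(y2 - y1)              # signed difference:  4.5
print(np.abs(y1 - y2))      # distance |y1 - y2|: 4.5, same either way
```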

\$200: What is the span of one vector in R³?
A. A plane
B. A line
C. The whole 3-dimensional space
D. A point
E. A vector

\$400: What is the span of two linearly independent vectors in R³?
A. A plane
B. A line
C. The whole 3-dimensional space
D. A point
E. A vector
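
A quick way to see span dimensions numerically is the rank of the matrix whose columns are the vectors. A minimal NumPy sketch with made-up vectors covering both of the span questions above:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
# The span of one (nonzero) vector in R^3 is a line: rank 1.
print(np.linalg.matrix_rank(v.reshape(3, 1)))            # 1

u = np.array([0.0, 1.0, 1.0])                            # independent of v
# Two linearly independent vectors span a plane: rank 2.
print(np.linalg.matrix_rank(np.column_stack([v, u])))    # 2
```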

\$400: For 3 vectors x, y, and z, suppose that 2x + 3y + 5z = 0.
A. Then x, y, and z are linearly independent
B. Then x, y, and z are linearly dependent
C. Then x, y, and z are orthogonal
D. None of the above
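
To connect the equation to the definition of linear dependence, a short NumPy sketch; the vectors are chosen arbitrarily, with z constructed so that 2x + 3y + 5z = 0 holds exactly:

```python
import numpy as np

x = np.array([1.0, 0.0, 2.0])
y = np.array([0.0, 1.0, 1.0])
z = -(2 * x + 3 * y) / 5           # constructed so that 2x + 3y + 5z = 0
print(2 * x + 3 * y + 5 * z)       # [0. 0. 0.]
# A nontrivial linear combination equal to zero means the set is dependent:
print(np.linalg.matrix_rank(np.column_stack([x, y, z])))   # 2, not 3
```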

\$600: If a collection of vectors is mutually orthogonal, then those vectors are linearly independent.
A. True
B. False
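
A numeric illustration, assuming the vectors are nonzero (the standard hypothesis for this fact); the vectors below are made up:

```python
import numpy as np

# Three mutually orthogonal (nonzero) vectors in R^3:
a = np.array([1.0,  1.0, 0.0])
b = np.array([1.0, -1.0, 0.0])
c = np.array([0.0,  0.0, 2.0])
print(a @ b, a @ c, b @ c)    # 0.0 0.0 0.0  (pairwise orthogonal)
# Mutual orthogonality forces linear independence: the matrix has full rank.
print(np.linalg.matrix_rank(np.column_stack([a, b, c])))   # 3
```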

\$800: If U is an orthogonal matrix, then:
A. UᵀU = UUᵀ = I
B. Uᵀ is the inverse of U
C. U is a covariance matrix
D. UᵀU = 0
E. Both (A) and (B)
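
A minimal check using a rotation matrix, one familiar example of an orthogonal matrix (the angle is arbitrary):

```python
import numpy as np

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation, hence orthogonal

print(np.allclose(U.T @ U, np.eye(2)))      # True: U^T U = I
print(np.allclose(U @ U.T, np.eye(2)))      # True: U U^T = I
print(np.allclose(U.T, np.linalg.inv(U)))   # True: U^T is the inverse of U
```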

\$1000: If the span of 3 vectors x, y, and z is a 2-dimensional subspace (a plane), then...
A. x, y, and z are linearly dependent
B. x, y, and z are linearly independent
C. x, y, and z are orthogonal
D. x, y, and z are all multiples of the same vector
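
The same rank trick makes this concrete; in the sketch below, z is deliberately constructed inside the plane spanned by x and y:

```python
import numpy as np

x = np.array([1.0, 0.0, 1.0])
y = np.array([0.0, 1.0, 1.0])
z = x + y                       # z lies in the plane spanned by x and y
A = np.column_stack([x, y, z])
# Rank 2 means the span is a plane, and 3 vectors living in a
# 2-dimensional subspace must be linearly dependent.
print(np.linalg.matrix_rank(A))   # 2
```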

\$1000: In order for a matrix to have eigenvalues and eigenvectors, what must be true?
A. All matrices have eigenvalues and eigenvectors
B. The matrix must be square
C. The matrix must be orthogonal
D. The matrix must be a covariance matrix

\$1000: If I multiply a matrix A by its eigenvector x, what can I say about the result, Ax?
A. The result is a unit vector
B. The result is a scalar, which is called the eigenvalue
C. The result is a scalar multiple of x
D. The result is orthogonal
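
A small sketch tying the two eigen-questions together: the eigenproblem Ax = λx is posed for square matrices, and multiplying A by an eigenvector just rescales it (the matrix here is made up):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # square: eigenvalues/eigenvectors are defined
vals, vecs = np.linalg.eig(A)

x = vecs[:, 0]                  # an eigenvector of A
lam = vals[0]                   # its eigenvalue
# A x is a scalar multiple of x: A x = lambda * x
print(np.allclose(A @ x, lam * x))   # True
```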

## Double Jeopardy Round (Lectures 4-7)

\$400: If your data matrix has 1,000 observations on 40 variables, then how many principal components exist?
A. Impossible to tell from this information
B. 40,000
C. 1,000
D. 40
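
A sketch with simulated data of exactly that shape: the covariance matrix is 40 x 40, so there are 40 eigenvalue/eigenvector pairs, i.e. one principal component per variable.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))     # 1,000 observations on 40 variables
C = np.cov(X, rowvar=False)         # 40 x 40 covariance matrix
vals, vecs = np.linalg.eigh(C)
# One principal component per variable: 40 eigenvalue/eigenvector pairs.
print(C.shape, vals.shape)          # (40, 40) (40,)
```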

\$400: The first principal component is...
A. A statistic that tells you how much multicollinearity is in your data
B. A scalar that tells you how much total variance is in the data
C. The first column in your data matrix
D. A vector that points in the direction of maximum variance in the data
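
A toy illustration: data simulated to vary mostly along the direction (1, 1); the top eigenvector of the covariance matrix recovers that max-variance direction (up to sign):

```python
import numpy as np

rng = np.random.default_rng(1)
# Made-up 2-D data stretched along the direction (1, 1):
t = rng.normal(size=500)
X = np.column_stack([t + 0.1 * rng.normal(size=500),
                     t + 0.1 * rng.normal(size=500)])
X = X - X.mean(axis=0)              # center the data

vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
pc1 = vecs[:, -1]                   # eigenvector with the largest eigenvalue
print(pc1)                          # close to +/-(0.707, 0.707)
```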

\$800: The loadings on a principal component tell you...
A. The variance of each variable on that principal component
B. How correlated each variable is with that principal component
C. Absolutely nothing
D. How much each observation weighs along that principal component
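
One common way to read loadings is as correlations between each original variable and the component scores. A minimal sketch with simulated data (the mixing matrix is arbitrary, just to create correlated variables):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3)) @ np.array([[1.0, 0.5, 0.0],
                                          [0.0, 1.0, 0.5],
                                          [0.0, 0.0, 1.0]])
X = X - X.mean(axis=0)

vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
scores = X @ vecs[:, -1]            # scores on the first principal component
# Correlation between each original variable and the PC1 scores:
for j in range(X.shape[1]):
    print(np.corrcoef(X[:, j], scores)[0, 1])
```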

\$1200: The principal component scores are...
A. Statistics which tell you the importance of each principal component
B. The coordinates of your data in the new basis of principal components
C. Statistics which tell you how each variable relates to each principal component
D. Relatively random
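
A sketch showing that the scores really are coordinates of the (centered) data in the eigenvector basis: rotating the scores back recovers the data exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
X = X - X.mean(axis=0)

vals, V = np.linalg.eigh(np.cov(X, rowvar=False))
scores = X @ V                # coordinates of each row in the eigenvector basis
# Rotating back recovers the original (centered) data:
print(np.allclose(scores @ V.T, X))   # True
```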

\$1200: The eigenvalues of the covariance matrix...
A. Are always orthogonal
B. Add up to 1
C. Tell you how much variance exists along each principal component
D. Tell you the proportion of variance explained by each principal component
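
A numeric check that the variance of the scores along each component equals the corresponding eigenvalue (simulated data with deliberately unequal spreads):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.5])
X = X - X.mean(axis=0)

vals, V = np.linalg.eigh(np.cov(X, rowvar=False))
scores = X @ V
# The variance of the scores along each component equals its eigenvalue:
print(np.allclose(scores.var(axis=0, ddof=1), vals))   # True
```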

\$1600: The total amount of variance in a data set is...
A. The sum of all the entries in the covariance matrix
B. The sum of the eigenvalues of the covariance matrix
C. The sum of the variances of each variable
D. Both (B) and (C)
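
Both equivalent forms of the total variance, checked numerically on simulated data; note it is the trace (the diagonal of the covariance matrix), not the sum of all its entries:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 5)) * np.array([2.0, 1.0, 1.0, 0.5, 0.25])
C = np.cov(X, rowvar=False)
vals = np.linalg.eigvalsh(C)

print(np.trace(C))     # sum of the per-variable variances (diagonal of C)
print(vals.sum())      # sum of the eigenvalues: the same total variance
```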

\$1600: PCA is a special case of the Singular Value Decomposition, when your data is either centered or standardized.
A. True
B. False
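
A sketch of the SVD connection: for centered data, the squared singular values divided by n - 1 reproduce the covariance eigenvalues, and the right singular vectors match the eigenvectors up to sign.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 4))
Xc = X - X.mean(axis=0)                    # centering is the key step

# SVD route: singular values of the centered data...
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
# ...reproduce the eigenvalues of the covariance matrix:
vals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]   # descending
print(np.allclose(s**2 / (Xc.shape[0] - 1), vals))          # True
```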

\$1600: Principal Component Regression...
A. Can give you meaningful beta parameters for your original variables
B. Attempts to solve the problem of severe multicollinearity in predictor variables
C. Is a biased regression technique and should be used only as a last resort, when you cannot omit correlated variables
D. All of the above
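
A minimal principal component regression sketch on simulated, nearly collinear predictors. This is an illustrative outline under made-up data, not a drop-in implementation: regress on the dominant component scores, then map the coefficient back to the original variables.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)     # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 2 * x1 - x2 + rng.normal(size=n)

Xc, yc = X - X.mean(axis=0), y - y.mean()
vals, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
Vk = V[:, [-1]]                         # keep only the dominant component
Z = Xc @ Vk                             # component scores as regressors
gamma = np.linalg.lstsq(Z, yc, rcond=None)[0]
beta = Vk @ gamma                       # map back to the original variables
print(beta)                             # stable (but biased) estimates
```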

\$1600: Principal components with eigenvalues close to zero are correlated with the intercept in a linear regression model.
A. True
B. False

## Final Jeopardy

Category: PCA Rotations

Wager: \$2000, \$3000, \$4000, or \$5000

Final Jeopardy question: What is the purpose or motivation behind the rotations of principal components in Factor Analysis?
A. The original principal components were not orthogonal, so we need to adjust them
B. The first principal component does not explain enough variance; by rotating, we can explain more variance
C. The loadings of the variables are difficult to interpret; by rotating, we get new factors that more clearly represent combinations of the original variables
D. The rotation helps spread the observations out so that we can more clearly see different groups or classes in the data
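
For intuition, a compact varimax rotation sketch (one standard rotation criterion; this is an illustrative implementation, not the course's code). The loadings start entangled, with every variable loading on both factors, and the rotation concentrates each variable on a single factor, which is exactly the interpretability motivation.

```python
import numpy as np

def varimax(L, n_iter=100, tol=1e-8):
    """Minimal varimax rotation of a loading matrix L (illustrative sketch)."""
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        LR = L @ R
        # SVD-based update of the rotation (standard varimax formulation):
        U, s, Vt = np.linalg.svd(
            L.T @ (LR**3 - LR @ np.diag((LR**2).sum(axis=0)) / p))
        R = U @ Vt
        var_new = s.sum()
        if var_new - var_old < tol:
            break
        var_old = var_new
    return L @ R

# Simple structure (each variable on one factor), deliberately rotated by 30
# degrees so every variable loads on both factors and is hard to interpret:
simple = np.array([[0.9, 0.0], [0.8, 0.0], [0.0, 0.9], [0.0, 0.8]])
theta = np.deg2rad(30)
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
L = simple @ R0
# Varimax approximately recovers the simple structure (up to sign/order):
print(np.round(varimax(L), 2))
```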
