Review Jeopardy: Blue vs. Orange




Jeopardy Round (Lectures 0-3)

$200: How could I measure how far apart (i.e., how different) two observations, y1 and y2, are from each other?
A. Compute ||y1 - y2||
B. Compute ||y2 - y1||
C. Compute y1 - y2
D. Compute covariance(y1, y2)
E. Either (A) or (B)
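A minimal numpy sketch of this comparison (the two observations below are made-up examples). The Euclidean distance is the same whichever vector is subtracted from the other:

import numpy as np

y1 = np.array([1.0, 2.0, 3.0])
y2 = np.array([2.0, 0.0, 4.0])

# Euclidean distance between the two observations; the order of subtraction
# does not matter, so both computations give the same number.
print(np.linalg.norm(y1 - y2))   # sqrt(6)
print(np.linalg.norm(y2 - y1))   # sqrt(6)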

$200: What is the span of one vector in R^3?
A. A plane
B. A line
C. The whole 3-dimensional space
D. A point
E. A vector

$400: What is the span of two linearly independent vectors in R^3?
A. A plane
B. A line
C. The whole 3-dimensional space
D. A point
E. A vector
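A quick way to see the dimension of a span numerically is the rank of the matrix whose columns are the vectors. A small numpy sketch with two arbitrary, linearly independent vectors in R^3:

import numpy as np

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, -1.0])

# dim(span{u, v}) equals the rank of the matrix with u and v as columns.
A = np.column_stack([u, v])
print(np.linalg.matrix_rank(A))                 # 2 -> a plane through the origin
print(np.linalg.matrix_rank(u.reshape(-1, 1)))  # 1 -> one vector alone spans a line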

$400: For 3 vectors x, y, and z, suppose that 2x + 3y + 5z = 0.
A. Then x, y, and z are linearly independent
B. Then x, y, and z are linearly dependent
C. Then x, y, and z are orthogonal
D. None of the above
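Since 2x + 3y + 5z = 0 is a nontrivial linear combination equal to the zero vector, the three vectors are linearly dependent. A small numpy sketch, with x and y chosen arbitrarily and z forced to satisfy the relation:

import numpy as np

x = np.array([1.0, 0.0, 2.0])
y = np.array([0.0, 1.0, 1.0])
z = -(2 * x + 3 * y) / 5                      # chosen so that 2x + 3y + 5z = 0

V = np.column_stack([x, y, z])
print(np.allclose(2 * x + 3 * y + 5 * z, 0))  # True: a nontrivial combination gives 0
print(np.linalg.matrix_rank(V))               # 2 < 3 -> the vectors are linearly dependent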

$600: If a collection of vectors is mutually orthogonal, then those vectors are linearly independent.
A. True
B. False
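A numeric illustration of the statement for nonzero vectors (the standard argument: if mutually orthogonal nonzero vectors had a nontrivial vanishing combination, taking dot products with each vector would force every coefficient to be zero). The vectors below are arbitrary examples:

import numpy as np

v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([0.0, 0.0, 3.0])
V = np.column_stack([v1, v2, v3])

G = V.T @ V                                 # Gram matrix of pairwise dot products
print(np.allclose(G, np.diag(np.diag(G))))  # True: all off-diagonal dot products are 0
print(np.linalg.matrix_rank(V))             # 3 -> the vectors are linearly independent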

$800: If U is an orthogonal matrix, then:
A. U^T U = U U^T = I
B. U^T is the inverse of U
C. U is a covariance matrix
D. U^T U = 0
E. Both (A) and (B)
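A rotation matrix is a convenient concrete example of an orthogonal matrix; a minimal numpy check that both the identity property and the transpose-equals-inverse property hold:

import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(U.T @ U, np.eye(2)))      # True
print(np.allclose(U @ U.T, np.eye(2)))      # True
print(np.allclose(np.linalg.inv(U), U.T))   # True: the transpose is the inverse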

$1000: If the span of 3 vectors x, y, and z is a 2-dimensional subspace (a plane), then...
A. x, y, and z are linearly dependent
B. x, y, and z are linearly independent
C. x, y, and z are orthogonal
D. x, y, and z are all multiples of the same vector

$1000: In order for a matrix to have eigenvalues and eigenvectors, what must be true?
A. All matrices have eigenvalues and eigenvectors
B. The matrix must be square
C. The matrix must be orthogonal
D. The matrix must be a covariance matrix

$1000: If I multiply a matrix A by its eigenvector x, what can I say about the result, Ax?
A. The result is a unit vector
B. The result is a scalar, which is called the eigenvalue
C. The result is a scalar multiple of x
D. The result is orthogonal
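A minimal numpy check of the defining property Ax = lambda x, using an arbitrary example matrix:

import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)

lam, x = eigvals[0], eigvecs[:, 0]
print(np.allclose(A @ x, lam * x))   # True: Ax is a scalar multiple of x; the scalar is the eigenvalue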

Double Jeopardy Round (Lectures 4-7)

$400: If your data matrix has 1,000 observations on 40 variables, then how many principal components exist?
A. Impossible to tell from this information
B. 40,000
C. 1,000
D. 40
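The principal components come from the covariance matrix of the variables, so their number matches the number of variables, not the number of observations. A minimal numpy sketch with simulated data of the same shape:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))   # 1,000 observations on 40 variables (simulated)

S = np.cov(X, rowvar=False)       # 40 x 40 covariance matrix of the variables
eigvals, eigvecs = np.linalg.eigh(S)
print(S.shape, eigvals.shape)     # (40, 40) (40,) -> 40 principal components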

$400: The first principal component is...
A. A statistic that tells you how much multicollinearity is in your data
B. A scalar that tells you how much total variance is in the data
C. The first column in your data matrix
D. A vector that points in the direction of maximum variance in the data
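A small simulated check that the leading eigenvector of the covariance matrix is the direction along which the projected data have the largest variance (the data and the comparison direction below are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[3.0, 1.5], [1.5, 1.0]], size=500)
Xc = X - X.mean(axis=0)

eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
pc1 = eigvecs[:, -1]               # eigh sorts eigenvalues ascending, so the last column is PC1

other = rng.normal(size=2)
other /= np.linalg.norm(other)     # an arbitrary unit direction for comparison

print(np.var(Xc @ pc1, ddof=1))    # equals the largest eigenvalue
print(np.var(Xc @ other, ddof=1))  # never larger than the variance along pc1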

$800: The loadings on a principal component tell you...
A. The variance of each variable on that principal component
B. How correlated each variable is with that principal component
C. Absolutely nothing
D. How much each observation weighs along that principal component
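One way to see the "loadings as correlations" idea numerically is to compute the correlation between each original variable and each component's scores. A minimal sketch on simulated, correlated data (some texts reserve "loading" for the scaled eigenvector entries; the correlation view is the one used here):

import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))   # correlated variables
Xc = X - X.mean(axis=0)

eigvals, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
scores = Xc @ V

# Correlation of variable j with component m.
corr = np.array([[np.corrcoef(Xc[:, j], scores[:, m])[0, 1] for m in range(4)]
                 for j in range(4)])
print(np.round(corr, 2))           # one row per variable, one column per component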

$1200: The principal component scores are...
A. Statistics which tell you the importance of each principal component
B. The coordinates of your data in the new basis of principal components
C. Statistics which tell you how each variable relates to each principal component
D. Relatively random
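The scores are simply the centered data expressed in the eigenvector (principal component) basis. A minimal numpy sketch on simulated data:

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
Xc = X - X.mean(axis=0)            # center the data first

eigvals, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
scores = Xc @ V                    # coordinates of each observation in the PC basis
print(scores.shape)                # (200, 3): one row of new coordinates per observation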

$1200: The eigenvalues of the covariance matrix...
A. Are always orthogonal
B. Add up to 1
C. Tell you how much variance exists along each principal component
D. Tell you the proportion of variance explained by each principal component
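A quick simulated check that the variance of the scores along each component equals the corresponding eigenvalue, and that dividing by the total gives the proportion of variance explained:

import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
Xc = X - X.mean(axis=0)

eigvals, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
scores = Xc @ V

print(np.allclose(np.var(scores, axis=0, ddof=1), eigvals))  # variance along each PC = its eigenvalue
print(eigvals / eigvals.sum())                               # proportion of variance explained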

$1600: The total amount of variance in a data set is...
A. The sum of all the entries in the covariance matrix
B. The sum of the eigenvalues of the covariance matrix
C. The sum of the variances of each variable
D. Both (B) and (C)
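Because the trace of a matrix equals the sum of its eigenvalues, the sum of the individual variances (the diagonal of the covariance matrix) matches the sum of the eigenvalues. A minimal numpy check on simulated data:

import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5)) @ rng.normal(size=(5, 5))
S = np.cov(X, rowvar=False)

total_from_variances = np.trace(S)                    # sum of each variable's variance
total_from_eigenvalues = np.linalg.eigvalsh(S).sum()  # sum of the eigenvalues of S
print(np.allclose(total_from_variances, total_from_eigenvalues))   # True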

$1600: PCA is a special case of the Singular Value Decomposition, when your data is either centered or standardized.
A. True
B. False
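A minimal sketch of the SVD connection on simulated data: after centering, the squared singular values divided by n - 1 equal the eigenvalues of the covariance matrix, and the right singular vectors are the component directions (up to sign):

import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 6)) @ rng.normal(size=(6, 6))
Xc = X - X.mean(axis=0)            # centering is what makes the SVD give PCA
n = Xc.shape[0]

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
cov_eigs = np.sort(np.linalg.eigvalsh(np.cov(Xc, rowvar=False)))[::-1]

print(np.allclose(s**2 / (n - 1), cov_eigs))   # True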

$1600: Principal Component Regression...
A. Can give you meaningful beta parameters for your original variables
B. Attempts to solve the problem of severe multicollinearity in predictor variables
C. Is a biased regression technique and should be used only as a last resort when you cannot omit correlated variables
D. All of the above
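A compact sketch of principal component regression on simulated, collinear predictors: regress on the scores of the first k components (which are uncorrelated), then map the coefficients back to the original variables. The choice k = 3 and the simulated data are arbitrary, and the resulting coefficients are biased relative to ordinary least squares:

import numpy as np

rng = np.random.default_rng(6)
n, p, k = 200, 10, 3
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))      # correlated predictors
y = X @ rng.normal(size=p) + rng.normal(scale=0.5, size=n)

Xc = X - X.mean(axis=0)
eigvals, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
Vk = V[:, ::-1][:, :k]             # directions of the k largest eigenvalues
Z = Xc @ Vk                        # PC scores used as regressors

gamma, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Z]), y, rcond=None)
beta_pcr = Vk @ gamma[1:]          # coefficients expressed on the original variables
print(beta_pcr)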

$1600: Principal components with eigenvalues close to zero are correlated with the intercept in a linear regression model.
A. True
B. False

Final Jeopardy. Category: PCA Rotations

Wager: $2000 / $3000 / $4000 / $5000

Final Jeopardy Question: What is the purpose or motivation behind the rotation of principal components in Factor Analysis?
A. The original principal components were not orthogonal, so we need to adjust them
B. The first principal component does not explain enough variance; by rotating, we can explain more variance
C. The loadings of the variables are difficult to interpret; by rotating, we get new factors that more clearly represent combinations of the original variables
D. The rotation helps spread the observations out so that we can more clearly see different groups or classes in the data
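For illustration, a textbook-style varimax rotation (a common choice in factor analysis) can be written in a few lines of numpy. This is a sketch of the standard iterative algorithm, not code from the lectures, and the loading matrix below is a made-up example:

import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Rotate a p x k loading matrix to maximize the varimax criterion."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0))))
        R = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ R            # rotated loadings, orthogonality preserved

example = np.array([[0.7, 0.5], [0.6, 0.4], [0.2, 0.8], [0.1, 0.7]])
print(np.round(varimax(example), 2))   # rotated loadings move toward "simple structure"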