Metric Multidimensional Scaling (MDS): Analyzing Distance Matrices
Hervé Abdi 1

1 In: Neil Salkind (Ed.) (2007). Encyclopedia of Measurement and Statistics. Thousand Oaks (CA): Sage. Address correspondence to: Hervé Abdi, Program in Cognition and Neurosciences, MS: Gr.4.1, The University of Texas at Dallas, Richardson, TX, USA. [email protected]. Revised May 26 (Equations 17, 18, and 21).

1 Overview

Metric multidimensional scaling (MDS) transforms a distance matrix into a set of coordinates such that the (Euclidean) distances derived from these coordinates approximate the original distances as well as possible. The basic idea of MDS is to transform the distance matrix into a cross-product matrix and then to find its eigen-decomposition, which gives a principal component analysis (PCA). Like PCA, MDS can be used with supplementary or illustrative elements, which are projected onto the dimensions after they have been computed.

2 An example

The example is derived from O'Toole, Jiang, Abdi, and Haxby (2005), in which the authors used a combination of principal component analysis and neural networks to analyze brain imaging data.
In this study, 6 subjects were scanned using fMRI while they were watching pictures from 8 categories (faces, houses, cats, chairs, shoes, scissors, bottles, and scrambled images). The authors computed for each subject a distance matrix corresponding to how well they could predict the type of picture that this subject was watching from his/her brain scans. The distance used was d′ (see entry), which expresses the discriminability between categories. O'Toole et al. give two distance matrices. The first one is the average distance matrix computed from the brain scans of all 6 subjects. The authors also give a distance matrix derived directly from the pictures watched by the subjects. They computed this distance matrix with the same algorithm that they used for the brain scans, simply substituting the images for the brain scans. We will use these two matrices to review the basics of multidimensional scaling: namely, how to transform a distance matrix into a cross-product matrix, and how to project a set of supplementary observations onto the space obtained by the original analysis.

3 Multidimensional Scaling: Eigen-analysis of a distance matrix

PCA is obtained by performing the eigen-decomposition of a matrix. This matrix can be a correlation matrix (i.e., the variables to be analyzed are centered and normalized), a covariance matrix (i.e., the variables are centered but not normalized), or a cross-product matrix (i.e., the variables are neither centered nor normalized). A distance matrix cannot be analyzed directly using the eigen-decomposition (because distance matrices are not positive semi-definite matrices), but it can be transformed into an equivalent cross-product matrix which can then be analyzed.
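The claim that a distance matrix cannot be eigen-analyzed directly, whereas its double-centered transform can, is easy to check numerically. The following sketch is not from the original article; it assumes NumPy and a small random configuration of points, and simply verifies that a squared-distance matrix has negative eigenvalues while the matrix −(1/2) Ξ D Ξ^T of Equation 7 (introduced below) is positive semi-definite.

```python
# A small numerical check (not from the article): a matrix of squared Euclidean
# distances has negative eigenvalues, so it is not positive semi-definite, but
# the double-centered matrix -1/2 * Xi * D * Xi^T of Equation 7 below is.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))                            # 8 hypothetical observations
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)     # squared distance matrix

m = np.full(8, 1 / 8)                                  # equal masses
Xi = np.eye(8) - np.outer(np.ones(8), m)               # centering matrix
S = -0.5 * Xi @ D @ Xi.T                               # cross-product matrix

print(np.linalg.eigvalsh(D).min() < 0)                 # True: D is not PSD
print(np.linalg.eigvalsh(S).min() > -1e-10)            # True: S is (numerically) PSD
```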
3.1 Transforming a distance matrix into a cross-product matrix

In order to transform a distance matrix into a cross-product matrix, we start from the observation that the scalar product between two vectors can easily be transformed into a distance (the scalar products between vectors correspond to a cross-product matrix). Let us start with some definitions. Suppose that a and b are two vectors with I elements; the squared Euclidean distance between these two vectors is computed as

d^2(a, b) = (a - b)^T (a - b).   (1)

This distance can be rewritten in order to isolate the scalar product between vectors a and b:

d^2(a, b) = (a - b)^T (a - b) = a^T a + b^T b - 2 a^T b,   (2)

where a^T b is the scalar product between a and b. If the data are stored in an I by J data matrix denoted X (where I observations are described by J variables), the I × I between-observations cross-product matrix is obtained as

S = X X^T.   (3)

A distance matrix can be computed directly from the cross-product matrix as

D = s 1^T + 1 s^T - 2 S,   (4)

where s is the I × 1 vector containing the diagonal elements of S and 1 is an I × 1 vector of ones. (Note that the elements of D give the squared Euclidean distances between the rows of X.) This equation shows that a Euclidean distance matrix can be computed from a cross-product matrix. In order to perform MDS on a set of data, the main idea is to "revert" Equation 4 in order to obtain a cross-product matrix from a distance matrix. There is one problem when implementing this idea, namely that different cross-product matrices can give the same distances. This can happen because distances are invariant under any change of origin. Therefore, in order to revert the equation, we need to impose an origin for the computation of the distances.
An obvious choice is to put the origin of the distances at the center of gravity of the dimensions. With this constraint, the cross-product matrix is obtained as follows. First, define a mass vector denoted m whose I elements give the masses of the I rows of matrix D. These elements are all positive and their sum is equal to one:

m^T 1 = 1.   (5)

When all the rows have equal importance, each element is equal to 1/I. Second, define an I × I centering matrix denoted Ξ (read "big Xi") equal to

Ξ = I - 1 m^T,   (6)

with I the I × I identity matrix and 1 an I × 1 vector of ones. Finally, the cross-product matrix is obtained from matrix D as

S = -(1/2) Ξ D Ξ^T.   (7)

The eigen-decomposition of this matrix gives

S = U Λ U^T   (8)

with

U^T U = I and Λ a diagonal matrix of eigenvalues   (9)

(see appendix for a proof). The scores (i.e., the projections of the rows on the principal components of the analysis of S) are obtained as

F = M^{-1/2} U Λ^{1/2}   (with M = diag{m}).   (10)

The scores have the property that their variance is equal to the eigenvalues:

F^T M F = Λ.   (11)
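Equations 5 through 11 amount to only a few lines of matrix code. The sketch below is a minimal illustration, not part of the original article: NumPy, equal masses, and the helper name mds_from_distances are assumptions made for the example. It builds the centering matrix, the cross-product matrix, its eigen-decomposition, and the factor scores, then checks Equations 4 and 11 on a random configuration.

```python
# A minimal sketch of Equations 5-11, assuming equal masses and a squared
# Euclidean distance matrix D. NumPy and the function name are illustrative
# assumptions, not part of the original article.
import numpy as np

def mds_from_distances(D):
    """Return factor scores F, eigenvalues Lam, cross-product S, masses m."""
    I = D.shape[0]
    m = np.full(I, 1.0 / I)                       # mass vector (Equation 5)
    Xi = np.eye(I) - np.outer(np.ones(I), m)      # centering matrix (Equation 6)
    S = -0.5 * Xi @ D @ Xi.T                      # cross-product matrix (Equation 7)
    eigval, U = np.linalg.eigh(S)                 # eigen-decomposition (Equation 8)
    order = np.argsort(eigval)[::-1]              # largest eigenvalues first
    eigval, U = eigval[order], U[:, order]
    keep = eigval > 1e-12                         # keep the positive eigenvalues
    Lam, U = eigval[keep], U[:, keep]
    F = np.diag(1 / np.sqrt(m)) @ U @ np.diag(np.sqrt(Lam))   # scores (Equation 10)
    return F, Lam, S, m

# Small check on a random configuration of 8 points in 3 dimensions.
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
F, Lam, S, m = mds_from_distances(D)

s = np.diag(S)
print(np.allclose(np.add.outer(s, s) - 2 * S, D))        # Equation 4 holds
print(np.allclose(F.T @ np.diag(m) @ F, np.diag(Lam)))   # Equation 11 holds
```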
Table 1: The d′ matrix from O'Toole et al. (2005). This matrix gives the d′ values obtained for the discrimination between categories based upon the brain scans. These data are obtained by averaging 12 data tables (2 per subject). Rows and columns correspond to the categories Face, House, Cat, Chair, Shoe, Scissors, Bottle, and Scrambled. [The 8 × 8 table of numerical values is missing from the transcription.]

Table 2: The d′ matrix from O'Toole et al. (2005). This matrix gives the d′ values obtained for the discrimination between categories based upon the images watched by the subjects. Rows and columns correspond to the same eight categories. [The 8 × 8 table of numerical values is missing from the transcription.]
Figure 1: (a) Multidimensional scaling of the subjects' distance table. (b) Projection of the image distance table as supplementary elements in the subjects' space (distances from Tables 1 and 2). Axes: Dimension 1 (τ = 28%) and Dimension 2 (τ = 21%).
3.2 Example

To illustrate the transformation of the distance matrix, we will use the distance matrix derived from the brain scans given in Table 1:

D = [the 8 × 8 d′ matrix of Table 1; numerical values missing from the transcription].   (12)

The elements of the mass vector m are all equal to 1/8:

m = [1/8  1/8  1/8  1/8  1/8  1/8  1/8  1/8]^T.   (13)

The centering matrix is equal to

Ξ = I - 1 m^T   [the 8 × 8 matrix with diagonal elements equal to 7/8 and off-diagonal elements equal to -1/8].

The cross-product matrix is then equal to

S = -(1/2) Ξ D Ξ^T   [numerical values missing from the transcription].
The eigen-decomposition of S gives the matrix of eigenvectors

U = [numerical values missing from the transcription]   (14)

and the diagonal matrix of eigenvalues

Λ = [numerical values missing from the transcription].   (15)

As in PCA, the eigenvalues are often transformed into percentages of explained variance (or inertia) in order to make their interpretation easier. Here, for example, we find that the first dimension explains 28% of the variance of the distances (i.e., λ1 / Σ λ = .28). We obtain the following matrix of scores:

F = [numerical values missing from the transcription].

Figure 1a displays the projection of the categories on the first two dimensions. The first dimension explains 28% of the variance of the distances; it can be interpreted as the opposition of the face and cat categories to the house category (these categories are the ones most easily discriminated in the scans). The second dimension, which explains 21% of the variance, separates the small objects from the other categories.
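As a side note (not in the original text), the conversion of eigenvalues into percentages of explained inertia is a single division; the eigenvalues below are made-up numbers used only to show the computation.

```python
# Made-up eigenvalues (not the article's values) illustrating the conversion
# of eigenvalues into percentages of explained variance (inertia).
import numpy as np

eigenvalues = np.array([0.85, 0.64, 0.38, 0.21, 0.14, 0.05, 0.03])
tau = 100 * eigenvalues / eigenvalues.sum()   # tau_l = lambda_l / sum(lambda)
print(np.round(tau, 1))                       # percentage explained by each dimension
```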
3.3 Multidimensional scaling: Supplementary elements

After we have computed the MDS solution, it is possible to project supplementary or illustrative elements onto this solution. To illustrate this procedure, we will project the distance matrix obtained from the pictures (see Table 2) onto the space defined by the analysis of the brain scans. The number of supplementary elements is denoted by I_sup. For each supplementary element, we need the values of its distances to all the I active elements. We can store these distances in an I × I_sup supplementary distance matrix denoted D_sup. So, for our example, we have:

D_sup = [the 8 × 8 d′ matrix of Table 2; numerical values missing from the transcription].   (16)

The first step is to transform D_sup into a cross-product matrix denoted S_sup. This is done by first computing, for each supplementary column, the difference between this column and the average distance vector, and then centering the rows with the same centering matrix that was used previously to transform the distances of the active elements. Specifically, the supplementary cross-product matrix is obtained as

S_sup = -(1/2) Ξ (D_sup - D m 1^T)   (17)

(where 1 is an I_sup by 1 vector of ones; note, also, that when D_sup = D, Equation 17 reduces to Equation 7).
For our example, this gives:

S_sup = [numerical values missing from the transcription].   (18)

The next step is to project the matrix S_sup onto the space defined by the analysis of the active distance matrix. We denote by F_sup the matrix of projections of the supplementary elements. Its computational formula is obtained by first combining Equations 10 and 8 in order to get

F = S M^{-1/2} U Λ^{-1/2},   (19)

and then substituting S_sup for S and simplifying. This gives

F_sup = S_sup^T M^{-1/2} U Λ^{-1/2} = S_sup^T F Λ^{-1}.   (20)

For our example, this equation gives the following values:

F_sup = [numerical values missing from the transcription].   (21)

Figure 1b displays the projection of the supplementary categories on the first two dimensions. Comparing plots a and b shows that the analysis of the pictures reveals a general map very similar to the analysis of the brain scans, with only one major difference: the cat category for the images moves to the center of the space. This suggests that the cat category is interpreted by the subjects as being face-like (i.e., "cats have faces").
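Equations 17 and 20 can be sketched in the same style as before. The code below is not from the original article; it reuses the hypothetical mds_from_distances helper from the earlier sketch and assumes equal masses. As a sanity check, projecting the active distance matrix onto its own solution reproduces the active factor scores, which mirrors the remark that Equation 17 reduces to Equation 7 when D_sup = D.

```python
# A sketch of Equations 17 and 20 (not from the article), reusing the
# hypothetical mds_from_distances() helper defined in the earlier sketch.
import numpy as np

def project_supplementary(D_sup, D, F, Lam, m):
    """Return supplementary factor scores F_sup (Equations 17 and 20)."""
    I = D.shape[0]
    Xi = np.eye(I) - np.outer(np.ones(I), m)                  # same centering matrix
    ones_sup = np.ones(D_sup.shape[1])
    S_sup = -0.5 * Xi @ (D_sup - np.outer(D @ m, ones_sup))   # Equation 17
    return S_sup.T @ F @ np.diag(1.0 / Lam)                   # Equation 20

# Sanity check: projecting the active distances onto their own solution
# reproduces the active factor scores.
rng = np.random.default_rng(2)
X = rng.normal(size=(8, 3))
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
F, Lam, S, m = mds_from_distances(D)
print(np.allclose(project_supplementary(D, D, F, Lam, m), F))   # True
```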
4 Analyzing non-metric data

Metric MDS is adequate only when dealing with distances (see Torgerson, 1958). In order to accommodate weaker measurements (called dissimilarities), non-metric MDS can be used instead. It derives a Euclidean distance approximation using only the ordinal information from the data (Shepard, 1966; for a recent thorough review, see Borg & Groenen, 1997).

Appendix: Proof

We start with an I × I distance matrix D and an I × 1 vector of masses (whose elements are all positive or zero and whose sum is equal to 1), denoted m and such that

m^T 1 = 1.   (22)

The centering matrix is equal to

Ξ = I - 1 m^T.   (23)

We want to show that the following cross-product matrix

S = -(1/2) Ξ D Ξ^T   (24)

will give back the original distance matrix when the distance matrix is computed as

D = s 1^T + 1 s^T - 2 S.   (25)

In order to do so, we need to choose an origin for the coordinates (because several coordinate systems will give the same distance matrix). A natural choice is to assume that the data are centered (i.e., the mean of each original variable is equal to zero). Here we assume that the mean vector, denoted c, is computed as

c = X^T m   (26)
(for some data matrix X). Because the origin of the space is located at the center of gravity, its coordinates are equal to c = 0. The cross-product matrix can therefore be computed as

S = (X - 1 c^T)(X - 1 c^T)^T.   (27)

First, we assume that there exists a matrix denoted S such that Equation 25 is satisfied. Then we plug Equation 25 into Equation 24, develop, and simplify in order to get

-(1/2) Ξ D Ξ^T = -(1/2) Ξ s 1^T Ξ^T - (1/2) Ξ 1 s^T Ξ^T + Ξ S Ξ^T.   (28)

Then we show that the terms Ξ (s 1^T) Ξ^T and Ξ (1 s^T) Ξ^T are null because

(s 1^T) Ξ^T = s 1^T (I - 1 m^T)^T = s 1^T (I - m 1^T) = s 1^T - s 1^T m 1^T (but from Equation 22: 1^T m = 1) = s 1^T - s 1^T = 0.   (29)

The last thing to show now is that the term Ξ S Ξ^T is equal to S. This is shown by developing:

Ξ S Ξ^T = (I - 1 m^T) S (I - m 1^T) = S - S m 1^T - 1 m^T S + 1 m^T S m 1^T.   (30)

Because

(X^T - c 1^T) m = X^T m - c 1^T m   (cf. Equations 26 and 22)
= c - c = 0,   (31)

we get (cf. Equation 27):

S m = (X - 1 c^T)(X^T - c 1^T) m = 0   (32)

and, therefore, Equation 30 becomes

Ξ S Ξ^T = S,   (33)

which leads to

-(1/2) Ξ D Ξ^T = S,   (34)

which completes the proof.

References

[1] Borg, I., & Groenen, P. (1997). Modern multidimensional scaling. New York: Springer Verlag.

[2] O'Toole, A.J., Jiang, F., Abdi, H., & Haxby, J.V. (2005). Partially distributed representations of objects and faces in ventral temporal cortex. Journal of Cognitive Neuroscience, 17.

[3] Shepard, R.N. (1966). Metric structure in ordinal data. Journal of Mathematical Psychology, 3.

[4] Torgerson, W.S. (1958). Theory and methods of scaling. New York: Wiley.
