Multivariate normal distribution and testing for means (see MKB Ch 3)
- Ethan Freeman
Where are we going?
- One-sample t-test (univariate)
- Two-sample t-test (univariate)
- One-sample T^2-test (multivariate)
- Two-sample T^2-test (multivariate)

Multivariate normal distribution
- Definition
- Why is it important in multivariate statistics?
- Characterization
- Properties
- Data matrices

Wishart distribution
- Quadratic forms of normal data matrices
- Definition
- Properties

Hotelling's T^2 distribution
- Definition
- One-sample T^2 test
- Relationship with the F distribution
- Mahalanobis distance
- Two-sample T^2 test

Final remarks
- Assumptions for the one-sample T^2 test
- Assumptions for the two-sample T^2 test
- Multivariate test versus univariate tests
One-sample t-test (univariate)

Suppose that $x_1,\dots,x_n$ are i.i.d. $N(\mu,\sigma^2)$. Then:
- $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \sim N(\mu,\sigma^2/n)$;
- $ns^2/\sigma^2 = \sum_{i=1}^{n}(x_i-\bar{x})^2/\sigma^2 \sim \chi^2_{n-1}$;
- $\bar{x}$ and $s^2$ are independent.

Suppose that the mean $\mu$ is unknown. Then we can do a one-sample t-test: $H_0: \mu = \mu_0$ versus $H_a: \mu \neq \mu_0$.

Test statistic: $t = \frac{\bar{x}-\mu_0}{s_u/\sqrt{n}}$, where $s_u$ is the sample standard deviation with $n-1$ in the denominator. Under $H_0$, $t \sim t_{n-1}$, i.e., it has a Student t-distribution with $n-1$ degrees of freedom. Compute the p-value; if the p-value is below 0.05, we reject the null hypothesis.

Two-sample t-test (univariate)

Suppose that $x_1,\dots,x_n$ are i.i.d. $N(\mu_X,\sigma_X^2)$, and let $y_1,\dots,y_m$ be i.i.d. $N(\mu_Y,\sigma_Y^2)$ (independent of $x_1,\dots,x_n$). Suppose we want to test whether $\mu_X = \mu_Y$, under the assumption that $\sigma_X = \sigma_Y$. Then we can do a two-sample t-test: $H_0: \mu_X = \mu_Y$ versus $H_a: \mu_X \neq \mu_Y$.

Test statistic: $t = \frac{\bar{x}-\bar{y}}{\sqrt{s_p^2\left(\frac{1}{n}+\frac{1}{m}\right)}}$, where $s_p^2 = \frac{1}{n+m-2}\left(n s_X^2 + m s_Y^2\right)$. Under $H_0$, $t \sim t_{n+m-2}$.

One-sample T^2-test (multivariate)

Suppose that $x_1,\dots,x_n$ are i.i.d. $N_p(\mu,\Sigma)$. Then:
- $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \sim N_p(\mu,\Sigma/n)$;
- $nS \sim W_p(\Sigma, n-1)$, where $S$ is the sample covariance matrix and $W_p$ is a Wishart distribution;
- $\bar{x}$ and $S$ are independent.

Suppose that the mean $\mu$ is unknown. Then we can do a one-sample $T^2$-test: $H_0: \mu = \mu_0$ versus $H_a: \mu \neq \mu_0$.

Test statistic: $T^2 = n(\bar{x}-\mu_0)' S_u^{-1} (\bar{x}-\mu_0)$, where $S_u$ is the sample covariance matrix with $n-1$ in the denominator. Under $H_0$, $T^2 \sim T^2(p, n-1)$, i.e., it has Hotelling's $T^2$ distribution with parameters $p$ and $n-1$.
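These tests are straightforward to carry out numerically. Below is a minimal sketch of the one-sample $T^2$ test using numpy and scipy (our own illustration, not from the slides; the helper name `one_sample_t2` and the simulated data are assumptions). The p-value is obtained through the relationship between $T^2(p, n-1)$ and the F distribution discussed later in these notes.

```python
import numpy as np
from scipy import stats

def one_sample_t2(X, mu0):
    """One-sample Hotelling T^2 test of H0: mu = mu0.

    X is an (n, p) data matrix; returns (T2, F, p-value).
    Uses the unbiased covariance S_u (n-1 denominator), so
    T^2 = n (xbar - mu0)' S_u^{-1} (xbar - mu0).
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    xbar = X.mean(axis=0)
    S_u = np.cov(X, rowvar=False)        # (n-1)-denominator covariance
    diff = xbar - mu0
    T2 = n * diff @ np.linalg.solve(S_u, diff)
    # T^2(p, n-1) = {(n-1)p / (n-p)} F_{p, n-p}
    F = (n - p) / ((n - 1) * p) * T2
    pval = stats.f.sf(F, p, n - p)
    return T2, F, pval

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))             # simulated N_3(0, I) sample
T2, F, pval = one_sample_t2(X, np.zeros(3))
```

Since the data here are simulated under $H_0$, large p-values are to be expected most of the time.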
Two-sample T^2-test (multivariate)

Suppose that $x_1,\dots,x_n$ are i.i.d. $N_p(\mu_X,\Sigma_X)$, and let $y_1,\dots,y_m$ be i.i.d. $N_p(\mu_Y,\Sigma_Y)$ (independent of $x_1,\dots,x_n$). Suppose we want to test whether $\mu_X = \mu_Y$, under the assumption that $\Sigma_X = \Sigma_Y$. Then we can do a two-sample $T^2$-test: $H_0: \mu_X = \mu_Y$ versus $H_a: \mu_X \neq \mu_Y$.

Test statistic: $T^2 = \frac{nm}{n+m}(\bar{x}-\bar{y})' S_u^{-1}(\bar{x}-\bar{y})$, where $S_u = \frac{1}{n+m-2}(nS_1 + mS_2)$ and $S_1, S_2$ are the sample covariance matrices of the two samples. Under $H_0$, $T^2 \sim T^2(p, n+m-2)$.

Multivariate normal distribution

Definition

A random variable $x \in \mathbb{R}$ has a univariate normal distribution with mean $\mu$ and variance $\sigma^2$ (we write $x \sim N(\mu,\sigma^2)$) iff its density can be written as
$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}\right) = \{2\pi\sigma^2\}^{-1/2}\exp\left(-\frac{1}{2}(x-\mu)\{\sigma^2\}^{-1}(x-\mu)\right).$$

A random vector $x \in \mathbb{R}^p$ has a p-variate normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$ (we write $x \sim N_p(\mu,\Sigma)$) iff its density can be written as
$$f(x) = |2\pi\Sigma|^{-1/2}\exp\left(-\frac{1}{2}(x-\mu)'\Sigma^{-1}(x-\mu)\right).$$

Why is it important in multivariate statistics?

- It is an easy generalization of the univariate normal distribution. Such generalizations are not obvious for all univariate distributions; sometimes there are several plausible ways to generalize them.
- The multivariate normal distribution is entirely defined by its first two moments. Hence it has a sparse parametrization using only $p(p+3)/2$ parameters (see board).
- In the case of normal variables, zero correlation implies independence, and pairwise independence implies mutual independence. These properties do not hold for many other distributions.
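As a sanity check on the p-variate density formula, the following sketch (our own illustration; the example $\mu$, $\Sigma$, and $x$ are arbitrary) evaluates $|2\pi\Sigma|^{-1/2}\exp\left(-\frac{1}{2}(x-\mu)'\Sigma^{-1}(x-\mu)\right)$ directly and compares it with scipy's built-in implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mvn_density(x, mu, Sigma):
    """Evaluate the p-variate normal density
    |2*pi*Sigma|^{-1/2} exp(-0.5 (x-mu)' Sigma^{-1} (x-mu))."""
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)   # Mahalanobis-type quadratic form
    norm_const = np.linalg.det(2 * np.pi * Sigma) ** -0.5
    return norm_const * np.exp(-0.5 * quad)

mu = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = np.array([0.5, 0.0])

ours = mvn_density(x, mu, Sigma)
ref = multivariate_normal(mu, Sigma).pdf(x)      # scipy's implementation
```

The two values agree to floating-point precision.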
Why is it important in multivariate statistics? (continued)

- Linear functions of a multivariate normal vector are univariate normal. This yields simple derivations.
- Even when the original data are not multivariate normal, certain functions such as the sample mean will be approximately multivariate normal due to the central limit theorem.
- The multivariate normal distribution has a simple geometry: its equiprobability contours are ellipsoids (see picture on slide).

Characterization

$x$ is p-variate normal iff $a'x$ is univariate normal for all fixed vectors $a \in \mathbb{R}^p$. (To allow for $a = 0$, we regard constants as degenerate forms of the normal distribution.)

Geometric interpretation: $x$ is p-variate normal iff its projection onto any one-dimensional subspace is normal. This characterization will allow us to derive many properties of the multivariate normal without writing down densities.

Properties

(Th. of MKB) If $x$ is p-variate normal and $y = Ax + c$, where $A$ is any $q \times p$ matrix and $c$ is any $q$-vector, then $y$ is q-variate normal (see proof on board).

(Cor. of MKB) Any subset of elements of a multivariate normal vector is multivariate normal. In particular, all individual elements are univariate normal (see proof on board).

(Th. of MKB) If $x \sim N_p(\mu,\Sigma)$ and $y = Ax + c$, then $y \sim N_q(A\mu + c, A\Sigma A')$ (see proof on board).

(Cor. of MKB) If $x \sim N_p(\mu,\Sigma)$ with $\Sigma > 0$, then $y = \Sigma^{-1/2}(x-\mu) \sim N_p(0, I)$ and $(x-\mu)'\Sigma^{-1}(x-\mu) = \sum_{i=1}^{p} y_i^2 \sim \chi^2_p$ (see proof on board).
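The transformation theorem can be made concrete numerically: for $y = Ax + c$, the mean and covariance of $y$ follow directly as $A\mu + c$ and $A\Sigma A'$. A small sketch (the matrices below are arbitrary examples of ours):

```python
import numpy as np

# If x ~ N_p(mu, Sigma) and y = A x + c, then y ~ N_q(A mu + c, A Sigma A').
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
A = np.array([[1.0, -1.0, 0.0],      # a q x p matrix with q = 2, p = 3
              [0.0,  1.0, 1.0]])
c = np.array([0.5, -0.5])

mu_y = A @ mu + c                    # mean vector of y
Sigma_y = A @ Sigma @ A.T            # covariance matrix of y
```

Here the first component of $y$ is the contrast $x_1 - x_2 + 0.5$, whose univariate normality is exactly the characterization above.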
Data matrices

Let $x_1,\dots,x_n$ be a random sample from $N_p(\mu,\Sigma)$. Then we call the $n \times p$ matrix $X = (x_1,\dots,x_n)'$ a data matrix from $N_p(\mu,\Sigma)$, or a normal data matrix.

(Th. of MKB, without proof) If $X$ ($n \times p$) is a normal data matrix from $N_p(\mu,\Sigma)$ and $Y$ ($m \times q$) satisfies $Y = AXB$, then $Y$ is a normal data matrix iff the following two conditions hold:
- $A1 = \alpha 1$ for some scalar $\alpha$, or $B'\mu = 0$;
- $AA' = \beta I$ for some scalar $\beta$, or $B'\Sigma B = 0$.
When both conditions are satisfied, $Y$ is a normal data matrix from $N_q(\alpha B'\mu, \beta B'\Sigma B)$.

Note: pre-multiplication by $A$ means that we take linear combinations of the rows; the conditions on $A$ ensure that the new rows are independent. Post-multiplication by $B$ means that we take linear combinations of the columns (variables).

Wishart distribution

Quadratic forms of normal data matrices

We now consider quadratic forms of normal data matrices, i.e., functions of the form $X'CX$ for some symmetric matrix $C$. A special case of such a quadratic form is the sample covariance matrix, which we obtain when $C = n^{-1}(I - n^{-1}11')$ (see proof on board; $H = I - n^{-1}11'$ is called the centering matrix).

Definition

If $M$ ($p \times p$) can be written as $X'X$, where $X$ ($m \times p$) is a data matrix from $N_p(0,\Sigma)$, then $M$ is said to have a p-variate Wishart distribution with scale matrix $\Sigma$ and $m$ degrees of freedom. We write $M \sim W_p(\Sigma, m)$. When $\Sigma = I_p$, the distribution is said to be in standard form.

Note: the Wishart distribution is a multivariate generalization of the $\chi^2$ distribution: when $p = 1$, the $W_1(\sigma^2, m)$ distribution is that of $x'x$, where $x \in \mathbb{R}^m$ contains i.i.d. $N(0,\sigma^2)$ variables. Hence $W_1(\sigma^2, m) = \sigma^2\chi^2_m$.
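The note $W_1(\sigma^2, m) = \sigma^2\chi^2_m$ can be checked numerically. The sketch below (our own, relying on scipy's Wishart implementation) compares the one-dimensional Wishart density with the density of $\sigma^2 Q$, $Q \sim \chi^2_m$, obtained by a change of variables.

```python
import numpy as np
from scipy.stats import wishart, chi2

# In one dimension the Wishart reduces to a scaled chi-squared:
# W_1(sigma^2, m) is the distribution of sigma^2 * Q with Q ~ chi^2_m.
m, sigma2 = 7, 2.5
x = np.linspace(0.5, 40.0, 50)

pdf_wishart = wishart(df=m, scale=sigma2).pdf(x)
# density of sigma^2 * Q by change of variables: f(x) = f_chi2(x / sigma^2) / sigma^2
pdf_scaled_chi2 = chi2.pdf(x / sigma2, df=m) / sigma2
```

The two density curves coincide pointwise.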
Properties

(Th. of MKB) If $M \sim W_p(\Sigma,m)$ and $B$ is a $p \times q$ matrix, then $B'MB \sim W_q(B'\Sigma B, m)$ (see proof on board).

(Cor. of MKB) Diagonal submatrices of $M$ (square submatrices of $M$ whose diagonal corresponds to the diagonal of $M$) have a Wishart distribution.

(Cor. of MKB) $\Sigma^{-1/2} M \Sigma^{-1/2} \sim W_p(I, m)$.

(Cor. of MKB) If $M \sim W_p(I,m)$ and $B$ ($p \times q$) satisfies $B'B = I_q$, then $B'MB \sim W_q(I_q, m)$.

(Cor. of MKB) The $i$th diagonal element of $M$, $m_{ii}$, has a $\sigma_i^2\chi^2_m$ distribution (where $\sigma_i^2$ is the $i$th diagonal element of $\Sigma$).

All these corollaries follow by choosing particular values of $B$ in the first theorem above.

(Th. of MKB) If $M_1 \sim W_p(\Sigma,m_1)$ and $M_2 \sim W_p(\Sigma,m_2)$, and if $M_1$ and $M_2$ are independent, then $M_1 + M_2 \sim W_p(\Sigma, m_1 + m_2)$ (see proof on board).

(Th. of MKB) If $X$ ($n \times p$) is a data matrix from $N_p(0,\Sigma)$ and $C$ ($n \times n$) is a symmetric matrix, then:
- $X'CX$ has the same distribution as a weighted sum of independent $W_p(\Sigma,1)$ matrices, where the weights are the eigenvalues of $C$;
- $X'CX$ has a Wishart distribution if $C$ is idempotent; in this case $X'CX \sim W_p(\Sigma, r)$, where $r = \operatorname{tr}(C) = \operatorname{rank}(C)$;
- if $S = n^{-1}X'HX$ is the sample covariance matrix, then $nS \sim W_p(\Sigma, n-1)$.
(See proof on board.)

Hotelling's T^2 distribution

Definition

If $\alpha$ can be written as $m\,d'M^{-1}d$, where $d$ and $M$ are independently distributed as $N_p(0,I)$ and $W_p(I,m)$, then we say that $\alpha$ has the Hotelling $T^2$ distribution with parameters $p$ and $m$. We write $\alpha \sim T^2(p,m)$.

Hotelling's $T^2$ distribution is a generalization of the Student t-distribution: if $x \sim t_m$, then $x^2 \sim T^2(1,m)$.
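The idempotency result explains the $n-1$ degrees of freedom of $nS$: the centering matrix $H = I - n^{-1}11'$ is idempotent with $\operatorname{tr}(H) = \operatorname{rank}(H) = n-1$. A small numerical check (our own illustration):

```python
import numpy as np

n = 10
one = np.ones((n, 1))
H = np.eye(n) - one @ one.T / n          # centering matrix H = I - n^{-1} 1 1'

idempotent = np.allclose(H @ H, H)       # H is idempotent
df = np.trace(H)                         # tr(H) = n - 1
rank = np.linalg.matrix_rank(H)          # rank(H) = n - 1

# nS = X'HX for a data matrix X, matching the n-denominator covariance
rng = np.random.default_rng(1)
X = rng.normal(size=(n, 3))
nS = X.T @ H @ X
S_check = np.cov(X, rowvar=False, bias=True)   # n-denominator covariance
```

So $X'HX$ is exactly the quadratic form whose Wishart degrees of freedom equal $\operatorname{tr}(H) = n-1$.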
One-sample T^2 test

(Th. of MKB) If $x$ and $M$ are independently distributed as $N_p(\mu,\Sigma)$ and $W_p(\Sigma,m)$, then $m(x-\mu)'M^{-1}(x-\mu) \sim T^2(p,m)$ (see proof on board).

(Cor. of MKB) If $\bar{x}$ and $S$ are the mean vector and covariance matrix of a sample of size $n$ from $N_p(\mu,\Sigma)$, and $S_u = (n/(n-1))S$, then
$$T_1^2 = (n-1)(\bar{x}-\mu)'S^{-1}(\bar{x}-\mu) = n(\bar{x}-\mu)'S_u^{-1}(\bar{x}-\mu)$$
has a $T^2(p, n-1)$ distribution (see proof on board).

Relationship with the F distribution

The $T^2$ distribution is not readily available in R, but it is closely related to the F distribution:

(Th. of MKB, without proof) $T^2(p,m) = \{mp/(m-p+1)\}\,F_{p,\,m-p+1}$.

(Cor. of MKB) If $\bar{x}$ and $S$ are the mean and covariance of a sample of size $n$ from $N_p(\mu,\Sigma)$, then $\frac{n-p}{(n-1)p}\,T_1^2$ has an $F_{p,\,n-p}$ distribution.

Mahalanobis distance

The so-called Mahalanobis distance $\Delta$ between two populations with means $\mu_1$ and $\mu_2$ and common covariance matrix $\Sigma$ is given by
$$\Delta^2 = (\mu_1-\mu_2)'\Sigma^{-1}(\mu_1-\mu_2).$$
In other words, $\Delta$ is the Euclidean distance between the rescaled vectors $\Sigma^{-1/2}\mu_1$ and $\Sigma^{-1/2}\mu_2$.

The sample version of the Mahalanobis distance, $D$, is defined by
$$D^2 = (\bar{x}_1-\bar{x}_2)'S_u^{-1}(\bar{x}_1-\bar{x}_2),$$
where $S_u = (n_1 S_1 + n_2 S_2)/(n-2)$, $\bar{x}_i$ is the sample mean of sample $i$, $n_i$ is the sample size of sample $i$, and $n = n_1 + n_2$.
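In practice, p-values for $T^2$ statistics are therefore computed through the F distribution. Below is a sketch of such a conversion (the helper name `t2_pvalue` is our own), together with a check of the special case $p = 1$, where $T^2(1,m) = F_{1,m}$ and the squared two-sided t cutoff equals the F cutoff:

```python
from scipy import stats

def t2_pvalue(t2, p, m):
    """P-value for an observed T^2 under T^2(p, m), using
    T^2(p, m) = {m p / (m - p + 1)} F_{p, m-p+1}."""
    f_stat = (m - p + 1) / (m * p) * t2
    return stats.f.sf(f_stat, p, m - p + 1)

# Special case p = 1: T^2(1, m) = F_{1, m}, consistent with
# x ~ t_m  =>  x^2 ~ T^2(1, m).
m = 15
t_cut = stats.t.ppf(0.975, df=m)          # two-sided 5% t cutoff
f_cut = stats.f.ppf(0.95, dfn=1, dfd=m)   # 5% cutoff of T^2(1, m)
```

Rejecting when `t2_pvalue` is below 0.05 is equivalent to comparing $T_1^2$ directly against the scaled F quantile.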
Two-sample T^2 test

(Th. of MKB, without proof) If $X_1$ and $X_2$ are independent data matrices, and if the $n_i$ rows of $X_i$ are i.i.d. $N_p(\mu_i,\Sigma_i)$, $i = 1,2$, then when $\mu_1 = \mu_2$ and $\Sigma_1 = \Sigma_2$,
$$T_2^2 = \frac{n_1 n_2}{n} D^2$$
has a $T^2(p, n-2)$ distribution.

Corollary: $\frac{n-1-p}{(n-2)p}\,T_2^2$ has an $F_{p,\,n-1-p}$ distribution.

Final remarks

Assumptions for the one-sample T^2 test
- We have a simple random sample from the population.
- The population has a multivariate normal distribution.

Assumptions for the two-sample T^2 test
- We have a simple random sample from each population.
- In each population the variables have a multivariate normal distribution.
- The two populations have the same covariance matrix.

Multivariate test versus univariate tests

See board.
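As a closing illustration, the two-sample $T^2$ test can be sketched in a few lines (our own code, not from the slides; the helper name `two_sample_t2` and the simulated data are assumptions):

```python
import numpy as np
from scipy import stats

def two_sample_t2(X1, X2):
    """Two-sample Hotelling T^2 test of H0: mu1 = mu2 (equal covariances).

    T2 = (n1 n2 / n) D^2, where D^2 is the sample Mahalanobis distance and
    S_u = (n1 S1 + n2 S2) / (n - 2) is the pooled (unbiased) covariance,
    with S_i the n_i-denominator covariance of sample i. Returns (T2, F, p).
    """
    X1, X2 = np.asarray(X1, float), np.asarray(X2, float)
    n1, p = X1.shape
    n2 = X2.shape[0]
    n = n1 + n2
    d = X1.mean(axis=0) - X2.mean(axis=0)
    S1 = np.cov(X1, rowvar=False, bias=True)   # n1-denominator covariance
    S2 = np.cov(X2, rowvar=False, bias=True)   # n2-denominator covariance
    S_u = (n1 * S1 + n2 * S2) / (n - 2)
    D2 = d @ np.linalg.solve(S_u, d)           # sample Mahalanobis distance
    T2 = n1 * n2 / n * D2
    # T^2(p, n-2) = {(n-2)p / (n-1-p)} F_{p, n-1-p}
    F = (n - 1 - p) / ((n - 2) * p) * T2
    pval = stats.f.sf(F, p, n - 1 - p)
    return T2, F, pval

rng = np.random.default_rng(2)
T2, F, pval = two_sample_t2(rng.normal(size=(15, 2)),
                            rng.normal(size=(12, 2)))
```

Since both simulated samples share the same mean, the test should usually fail to reject $H_0$.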
More informationSection 6.1 - Inner Products and Norms
Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,
More informationMultivariate Normal Distribution Rebecca Jennings, Mary Wakeman-Linn, Xin Zhao November 11, 2010
Multivariate Normal Distribution Rebecca Jeings, Mary Wakeman-Li, Xin Zhao November, 00. Basics. Parameters We say X ~ N n (µ, ) with parameters µ = [E[X ],.E[X n ]] and = Cov[X i X j ] i=..n, j= n. The
More informationCHAPTER 6: Continuous Uniform Distribution: 6.1. Definition: The density function of the continuous random variable X on the interval [A, B] is.
Some Continuous Probability Distributions CHAPTER 6: Continuous Uniform Distribution: 6. Definition: The density function of the continuous random variable X on the interval [A, B] is B A A x B f(x; A,
More informationminimal polyonomial Example
Minimal Polynomials Definition Let α be an element in GF(p e ). We call the monic polynomial of smallest degree which has coefficients in GF(p) and α as a root, the minimal polyonomial of α. Example: We
More informationTwo-Sample T-Tests Assuming Equal Variance (Enter Means)
Chapter 4 Two-Sample T-Tests Assuming Equal Variance (Enter Means) Introduction This procedure provides sample size and power calculations for one- or two-sided two-sample t-tests when the variances of
More informationOutline. Topic 4 - Analysis of Variance Approach to Regression. Partitioning Sums of Squares. Total Sum of Squares. Partitioning sums of squares
Topic 4 - Analysis of Variance Approach to Regression Outline Partitioning sums of squares Degrees of freedom Expected mean squares General linear test - Fall 2013 R 2 and the coefficient of correlation
More informationMultivariate Analysis of Variance (MANOVA)
Chapter 415 Multivariate Analysis of Variance (MANOVA) Introduction Multivariate analysis of variance (MANOVA) is an extension of common analysis of variance (ANOVA). In ANOVA, differences among various
More informationMultidimensional data and factorial methods
Multidimensional data and factorial methods Bidimensional data x 5 4 3 4 X 3 6 X 3 5 4 3 3 3 4 5 6 x Cartesian plane Multidimensional data n X x x x n X x x x n X m x m x m x nm Factorial plane Interpretation
More informationSections 2.11 and 5.8
Sections 211 and 58 Timothy Hanson Department of Statistics, University of South Carolina Stat 704: Data Analysis I 1/25 Gesell data Let X be the age in in months a child speaks his/her first word and
More informationExact Confidence Intervals
Math 541: Statistical Theory II Instructor: Songfeng Zheng Exact Confidence Intervals Confidence intervals provide an alternative to using an estimator ˆθ when we wish to estimate an unknown parameter
More informationWhat you CANNOT ignore about Probs and Stats
What you CANNOT ignore about Probs and Stats by Your Teacher Version 1.0.3 November 5, 2009 Introduction The Finance master is conceived as a postgraduate course and contains a sizable quantitative section.
More informationIntroduction. Hypothesis Testing. Hypothesis Testing. Significance Testing
Introduction Hypothesis Testing Mark Lunt Arthritis Research UK Centre for Ecellence in Epidemiology University of Manchester 13/10/2015 We saw last week that we can never know the population parameters
More informationInner Product Spaces and Orthogonality
Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,
More information1 2 3 1 1 2 x = + x 2 + x 4 1 0 1
(d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which
More informationConstrained Bayes and Empirical Bayes Estimator Applications in Insurance Pricing
Communications for Statistical Applications and Methods 2013, Vol 20, No 4, 321 327 DOI: http://dxdoiorg/105351/csam2013204321 Constrained Bayes and Empirical Bayes Estimator Applications in Insurance
More information