CONDITIONAL, PARTIAL AND RANK CORRELATION FOR THE ELLIPTICAL COPULA; DEPENDENCE MODELLING IN UNCERTAINTY ANALYSIS
D. Kurowicka, R.M. Cooke
Delft University of Technology, Mekelweg 4, 2628 CD Delft, Netherlands

Abstract: The copula-vine method of specifying dependence in high-dimensional distributions has been developed in Cooke [1], Bedford and Cooke [6], Kurowicka and Cooke ([2], [4]), and Kurowicka et al. [3]. According to this method, a high-dimensional distribution is constructed from two-dimensional and conditional two-dimensional distributions of uniform variates. When the (conditional) two-dimensional distributions are specified via (conditional) rank correlations, the distribution can be sampled on the fly. When instead we use partial correlations, the specifications are algebraically independent and uniquely determine the (rank) correlation matrix. We prove that for the elliptical copulae ([3]) the conditional and partial correlations are equal. This enables on-the-fly simulation of a full correlation structure, something which heretofore was not possible.

1 Introduction

A unique joint distribution is specified by associating a copula (that is, a bivariate distribution on the unit square with uniform marginals) with each (conditional) bivariate distribution. An open question concerns the relation between conditional rank correlation and partial correlation. This is important for the following reason. Bedford and Cooke [6] show that a bijection exists between partial correlations on a regular vine and correlation matrices. Specifying a dependence structure in terms of partial correlations avoids all problems of positive definiteness and incomplete specification (partial correlations left unspecified on the vine may be chosen arbitrarily; in particular, they may be set equal to zero).
Kurowicka and Cooke [2] have shown that if (X, Y) and (X, Z) are copula distributions, and if (Y, Z) are independent given X, then linear regression of the copula is sufficient, and with extra conditions necessary, for zero partial correlation. It is conjectured that linear regression is sufficient for the equality of constant conditional correlation and partial correlation (this has been proved in some special cases, including the elliptical copulae discussed below). The copulae implemented in current uncertainty analysis programs (PREP/SPOP and UNICORN) do not have the linear regression property. Kurowicka and Cooke [2] show that with the maximum entropy copula and constant conditional rank correlation, the mean conditional correlation is close to the partial correlation. However, the approximate equality degrades as correlations become extreme, and the relation between conditional rank correlation and mean conditional correlation must be determined numerically. The problem of exactly sampling a distribution with
given marginals and given rank correlation matrix has until now remained unsolved. We present a simulation using the elliptical copula. The elliptical copulae introduced in Kurowicka et al. [3] are continuous, have linear regression, and can realize all correlations in (−1, 1). These copulae are obtained from two-dimensional projections of linear transformations of the uniform distribution on the unit sphere in three dimensions. In this paper we prove that when (X, Y) and (X, Z) are joined by elliptical copulae, and when the conditional copula for (Y, Z | X) does not depend on X, then the conditional correlation of (Y, Z | X) and the partial correlation are equal. We study the relation between conditional correlation and conditional rank correlation. The elliptical copulae thus provide a satisfactory solution to the problem of specifying dependence in high-dimensional distributions. Given a fully specified correlation matrix, we can derive partial correlations on a regular vine and convert these to an on-the-fly conditional sampling routine. The sample will exactly reproduce the specified correlations (modulo sampling error). In contrast, the standard method for sampling high-dimensional distributions involves transforming to the joint normal and taking linear combinations to induce a target correlation matrix. This is, however, not the rank correlation matrix. Indeed, it is easy to show using a result of Pearson [5] that not every correlation matrix can be the rank correlation matrix of a joint normal distribution. We report simulation exercises indicating that the probability of sampling a normal rank correlation matrix from the set of correlation matrices of a given dimension goes rapidly to zero as the dimension gets large.

2 Rank and product moment correlations

The obvious relationship between product moment and rank correlations follows directly from their definitions.
Definition 2.1 (Product moment correlation) The product moment correlation ρ(X_1, X_2) of two random variables X_1 and X_2 is given by

ρ(X_1, X_2) = Cov(X_1, X_2) / √(Var(X_1) Var(X_2)).

Definition 2.2 (Rank correlation) The rank correlation r(X_1, X_2) of two random variables X_1 and X_2 with joint probability distribution F and marginal probability distributions F_1, F_2 respectively, is given by

r(X_1, X_2) = ρ(F_1(X_1), F_2(X_2)).

For uniform variables these two correlations are equal, but in general they are different. The rank correlation has some important advantages over the product moment correlation: it always exists, can take any value in the interval [−1, 1], and is independent of the marginal distributions.

Definition 2.3 (Conditional correlation) The conditional correlation of Y and Z given X, ρ_{YZ|X}, is the product moment correlation computed with the conditional distribution of Y, Z given X.
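As a quick numerical illustration of Definitions 2.1 and 2.2 (a sketch added here, not part of the original paper), the following Python snippet computes both correlations empirically. A monotone transformation of the data leaves the rank correlation unchanged while the product moment correlation drops below 1:

```python
import numpy as np

def product_moment(x, y):
    # Definition 2.1: Cov(X1, X2) / sqrt(Var(X1) Var(X2))
    return np.cov(x, y)[0, 1] / np.sqrt(np.var(x, ddof=1) * np.var(y, ddof=1))

def rank_correlation(x, y):
    # Definition 2.2, empirical version: product moment correlation of the
    # ranks, which approximates rho(F1(X1), F2(X2))
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return product_moment(rx, ry)

rng = np.random.default_rng(0)
x = rng.uniform(size=2000)
y = x**5                       # monotone transform of x

print(product_moment(x, y))   # below 1: depends on the marginals
print(rank_correlation(x, y)) # equal to 1: invariant under monotone transforms
```

Since y is an increasing function of x, the two rank vectors coincide and the empirical rank correlation is exactly 1, while the product moment correlation is noticeably smaller.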
Let us consider variables X_i with zero mean and standard deviations σ_i, i = 1, ..., n. Let the numbers b_{12;3,...,n}, ..., b_{1n;2,...,n−1} minimize

E((X_1 − b_{12;3,...,n} X_2 − ... − b_{1n;2,...,n−1} X_n)²).

Definition 2.4 (Partial correlation)

ρ_{12;3,...,n} = sgn(b_{12;3,...,n}) √(b_{12;3,...,n} b_{21;3,...,n}), etc.

Partial correlations can be computed from correlations with the following recursive formula (Yule and Kendall [7]):

ρ_{12;3,...,n} = (ρ_{12;3,...,n−1} − ρ_{1n;3,...,n−1} ρ_{2n;3,...,n−1}) / √((1 − ρ²_{1n;3,...,n−1})(1 − ρ²_{2n;3,...,n−1})).   (1)

3 Rank and product moment correlations for the joint normal

K. Pearson [5] proved that if the vector (X_1, X_2) has a joint normal distribution, then the relationship between rank and product moment correlation is given by the following formula:

ρ(X_1, X_2) = 2 sin((π/6) r(X_1, X_2)).   (2)

The proof of this fact is based on the property that the derivative of the density function of the bivariate normal with respect to the correlation is equal to the second-order mixed derivative with respect to the variables x_1 and x_2. From the above we can conclude, however, that not every positive definite matrix with ones on the main diagonal is the rank correlation matrix of a joint normal distribution. Let us consider the following example:

Example 3.1 Let us consider the matrix

A =
  1.0   0.7   0.7
  0.7   1.0   0.0
  0.7   0.0   1.0

We can easily check that A is positive definite (its leading minors are positive; det(A) = 0.02). However, the matrix B such that B(i, j) = 2 sin((π/6) A(i, j)) for i, j = 1, 2, 3, that is (entries rounded, 2 sin(0.7π/6) ≈ 0.7167),

B =
  1.0      0.7167   0.7167
  0.7167   1.0      0.0
  0.7167   0.0      1.0

is not positive definite (det(B) ≈ −0.027).
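Example 3.1 is easy to verify numerically. The following sketch (added for illustration; the paper's own computations used Matlab and Maple, NumPy is used here) applies transformation (2) entrywise, inspects the eigenvalues, and also computes the partial correlation ρ_{23;1} from the recursion (1):

```python
import numpy as np

# Matrix A of Example 3.1: rank correlations r12 = r13 = 0.7, r23 = 0.
A = np.array([[1.0, 0.7, 0.7],
              [0.7, 1.0, 0.0],
              [0.7, 0.0, 1.0]])

# Transformation (2): the product moment correlation of a joint normal with
# rank correlation r is 2*sin(pi*r/6); applied entrywise it fixes the diagonal.
B = 2.0 * np.sin(np.pi * A / 6.0)

print(np.linalg.eigvalsh(A))   # all eigenvalues positive: A is positive definite
print(np.linalg.eigvalsh(B))   # smallest eigenvalue negative: B is not

# Partial correlation rho_{23;1} from the recursion (1).
r12, r13, r23 = A[0, 1], A[0, 2], A[1, 2]
rho_23_1 = (r23 - r12 * r13) / np.sqrt((1 - r12**2) * (1 - r13**2))
print(rho_23_1)                # approximately -0.96
```

The value ρ_{23;1} ≈ −0.96 lies in (−1, 1), so A is a legitimate correlation matrix, yet B fails positive definiteness: exactly the situation described above.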
This should be taken into account in any procedure where the dependence structure for a set of variables with specified marginals is induced using the joint normal distribution. Such a procedure consists of transforming the specified marginal distributions to standard normals and inducing a dependence structure using the linear properties of the joint normal. From the above example we see that it is not always possible to find a correlation matrix inducing a given rank correlation matrix. In fact, we show in the next section that the probability that a randomly chosen matrix stays positive definite after transformation (2) goes rapidly to zero with the matrix dimension.

4 Sampling a positive definite matrix with a regular vine

The graphical model called a vine was introduced in Cooke [1]. A vine on N variables is a nested set of trees, where the edges of tree j are the nodes of tree j + 1, and each tree has the maximum number of edges. A regular vine on N variables is a vine in which two edges in tree j are joined by an edge in tree j + 1 only if these edges share a common node. There are (N − 1) + (N − 2) + ... + 1 = N(N − 1)/2 edges in a regular vine on N variables. Each edge in a regular vine may be associated with a partial correlation (for j = 1 the conditions are vacuous) with values chosen arbitrarily in the interval (−1, 1). The partial correlations associated with each edge are determined as follows: the variables reachable from a given edge are called the constraint set of that edge. When two edges are joined by an edge of the next tree, the intersection of the respective constraint sets forms the conditioning variables, and the symmetric difference of the constraint sets forms the conditioned variables. The regularity condition ensures that the symmetric difference of the constraint sets is always a doubleton. Figure 1 below shows a vine on four variables with partial correlations assigned to the edges.
Figure 1: Partial correlation specification on a regular vine on 4 variables.

It can be shown that each such partial correlation regular vine specification uniquely determines the correlation matrix, and every full-rank correlation matrix can be obtained in this way (Bedford and Cooke [6]). In other words, a regular vine provides a bijective mapping from (−1, 1)^{N(N−1)/2} into the set of positive definite matrices with
1s on the diagonal. We use the properties of a partial correlation specification on a regular vine to randomly sample positive definite matrices: we simply sample N(N − 1)/2 independent uniforms on (−1, 1) and recalculate a positive definite matrix with formula (1). We apply transformation (2) to the matrix obtained in this way and check whether the transformed matrix stays positive definite. Table 1 shows the results of simulations prepared in Matlab 5.3.

Table 1: Relationship between the dimension of the matrix and the proportion of matrices remaining positive definite after transformation (2).

5 Conditional rank and product moment correlations for elliptical copulae

For copulae, that is, bivariate distributions on the unit square with uniform marginals, rank and product moment correlations coincide. We are interested in the relation between conditional product moment and conditional rank correlations. We consider the elliptical copula. This distribution has properties very similar to those of the bivariate normal distribution. It is shown in Kurowicka et al. [3] that it has linear regression and that partial and conditional correlations are equal if the conditional correlations are constant. Taking the marginals uniform on (−1/2, 1/2), the density function of the elliptical copula with given correlation ρ ∈ (−1, 1) is

f_ρ(x, y) = 1 / (π √(1 − ρ²) √(1/4 − x² − (y − ρx)²/(1 − ρ²)))   for (x, y) ∈ B,
f_ρ(x, y) = 0   for (x, y) ∉ B,

where B = {(x, y) : x² + ((y − ρx)/√(1 − ρ²))² < 1/4}.

Figure 2 shows the graph of the density function of the elliptical copula with correlation ρ = 0.8. Some properties of elliptical copulae are given in the theorem below (Kurowicka et al. [3]).

Theorem 5.1 If X, Y are joined by the elliptical copula with correlation ρ, then

(a) E(Y | X) = ρX,

(b) Var(Y | X) = ((1 − ρ²)/2)(1/4 − X²),

(c) for ρX − √((1 − ρ²)(1/4 − X²)) < y < ρX + √((1 − ρ²)(1/4 − X²)),

F_{Y|X}(y) = 1/2 + (1/π) arcsin((y − ρX) / √((1 − ρ²)(1/4 − X²))),
Figure 2: The density function of an elliptical copula with correlation 0.8.

(d) for 0 < t < 1,

F⁻¹_{Y|X}(t) = √((1 − ρ²)(1/4 − X²)) sin(π(t − 0.5)) + ρX.

Theorem 5.2 Let X, Y and X, Z be joined by elliptical copulae with correlations ρ_{XY} and ρ_{XZ} respectively, and assume that the conditional copula for Y, Z given X does not depend on X; then the conditional correlation ρ_{YZ|X} is constant in X.

Proof. We calculate the conditional correlation for an arbitrary copula density f(u, v) using Theorem 5.1:

ρ_{YZ|X} = (E(YZ | X) − E(Y | X) E(Z | X)) / (σ_{Y|X} σ_{Z|X}).

Writing Y = F⁻¹_{Y|X}(U) and Z = F⁻¹_{Z|X}(V), where (U, V) is distributed according to f,

E(YZ | X) = ∫∫ F⁻¹_{Y|X}(u) F⁻¹_{Z|X}(v) f(u, v) du dv
= ρ_{XY} ρ_{XZ} X² ∫∫ f(u, v) du dv
+ ρ_{XY} X √((1 − ρ²_{XZ})(1/4 − X²)) ∫∫ sin(π(v − 0.5)) f(u, v) du dv
+ ρ_{XZ} X √((1 − ρ²_{XY})(1/4 − X²)) ∫∫ sin(π(u − 0.5)) f(u, v) du dv
+ √((1 − ρ²_{XY})(1 − ρ²_{XZ})) (1/4 − X²) ∫∫ sin(π(u − 0.5)) sin(π(v − 0.5)) f(u, v) du dv.

Since f is the density of a copula (its marginals are uniform) and ∫₀¹ sin(π(u − 0.5)) du = 0, the two middle terms vanish and we get

E(YZ | X) = ρ_{XY} ρ_{XZ} X² + √((1 − ρ²_{XY})(1 − ρ²_{XZ})) (1/4 − X²) I
where

I = ∫∫ sin(π(u − 0.5)) sin(π(v − 0.5)) f(u, v) du dv.

From the above calculations and Theorem 5.1 (using σ²_{Y|X} = ((1 − ρ²_{XY})/2)(1/4 − X²), and similarly for Z) we obtain

ρ_{YZ|X} = (E(YZ | X) − E(Y | X) E(Z | X)) / (σ_{Y|X} σ_{Z|X}) = 2I.

Hence the conditional correlation ρ_{YZ|X} does not depend on X; it is constant. Moreover, this argument does not depend on the copula f. Now we take the copula f to be the elliptical copula with rank correlation r. Calculating I we obtain the relationship between r and ρ_{YZ|X}; in this way the relationship between the conditional product moment and conditional rank correlations is found. Shifting both margins to (−1/2, 1/2), so that sin(π(u − 0.5)) becomes sin(πu) and f becomes the elliptical density f_r above,

I = ∫_{−1/2}^{1/2} sin(πu) ∫_{ru − √((1−r²)(1/4−u²))}^{ru + √((1−r²)(1/4−u²))} sin(πv) / (π √(1 − r²) √(1/4 − u² − (v − ru)²/(1 − r²))) dv du.

Using the transformation

u = a cos(b),   v = a (r cos(b) + √(1 − r²) sin(b)),   where 0 ≤ a ≤ 1/2 and 0 ≤ b < 2π,

we reduce the above integral to the following form:

I = (1/π) ∫₀^{2π} ∫₀^{1/2} (a / √(1/4 − a²)) sin(πa cos(b)) sin(πa (r cos(b) + √(1 − r²) sin(b))) da db.   (3)

Since the above integral is improper, we first integrate by parts to remove the singularities and then calculate numerically. Table 2 presents numerical results prepared in Maple.

Table 2: Relationship between conditional product moment and constant conditional rank correlation for variables joined by the elliptical copula.

Acknowledgements. The authors would like to thank M. de Bruin for help in finding an efficient way of evaluating the integral in this section, and D. Lewandowski for preparing the numerical results presented in Table 2.
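As a sanity check on Theorem 5.1 (a simulation sketch added here, not part of the original paper), one can sample from the elliptical copula using the inverse conditional distribution of Theorem 5.1(d), taking the marginals uniform on (−1/2, 1/2) as in the density above, and verify the linear regression property and the correlation parameter empirically:

```python
import numpy as np

def sample_elliptical(rho, n, rng):
    """Sample (X, Y) from the elliptical copula with correlation rho and
    marginals uniform on (-1/2, 1/2), via Theorem 5.1(d):
    Y = sqrt((1 - rho^2)(1/4 - X^2)) * sin(pi*(T - 0.5)) + rho*X, T uniform."""
    x = rng.uniform(-0.5, 0.5, size=n)
    t = rng.uniform(0.0, 1.0, size=n)
    y = np.sqrt((1 - rho**2) * (0.25 - x**2)) * np.sin(np.pi * (t - 0.5)) + rho * x
    return x, y

rng = np.random.default_rng(1)
rho = 0.8
x, y = sample_elliptical(rho, 200_000, rng)

# E(Y|X) = rho*X gives Cov(X, Y) = rho*Var(X), and Var(Y) = Var(X), so the
# sample product moment correlation should be close to rho.
print(np.corrcoef(x, y)[0, 1])

# Var(Y - rho*X) = E[Var(Y|X)] = ((1 - rho^2)/2) * E[1/4 - X^2] = (1 - rho^2)/12,
# consistent with Theorem 5.1(b).
print(np.var(y - rho * x))

# Monte Carlo estimate of I = E[sin(pi*U) sin(pi*V)] on the shifted margins;
# by Theorem 5.2 the constant conditional correlation is then 2*I.
I = np.mean(np.sin(np.pi * x) * np.sin(np.pi * y))
print(2 * I)
```

With 200,000 samples the empirical correlation matches ρ to roughly three decimals, and the residual variance matches (1 − ρ²)/12, as Theorem 5.1 predicts.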
6 Conclusions

These results mean that the problem of sampling exactly from a distribution with fixed marginals and a given rank correlation matrix has been solved. Indeed, for the matrix of Example 3.1 this could not be done with existing methods. A correlation specification on a regular vine with the elliptical copula provides a very convenient way of sampling such a distribution. For Example 3.1 we get ρ_{12} = ρ_{13} = 0.7 and ρ_{23} = 0. The partial correlation ρ_{23;1} can be calculated from (1) as (0 − 0.7²)/(1 − 0.7²) ≈ −0.96. For the elliptical copula the conditional product moment correlation is constant and equal to the partial correlation, and from (3) we compute the constant conditional rank correlation r_{23|1} corresponding to this (constant) conditional product moment correlation.

The sampling algorithm samples three independent uniform (0, 1) variables U_1, U_2, U_3. We assume that the variables X_1, X_2, X_3 are also uniform. Let F_{r_{i,j|k}; U_i}(X_j) denote the cumulative distribution function for X_j (a uniform variable, to be denoted U_j) under the conditional copula with rank correlation r_{i,j|k}, as a function of U_i. Then X_j = F⁻¹_{r_{i,j|k}; U_i}(U_j) expresses X_j as a function of U_j and U_i. The algorithm can now be stated as follows:

x_1 = u_1;
x_2 = F⁻¹_{r_{12}; u_1}(u_2);
x_3 = F⁻¹_{r_{13}; u_1}(F⁻¹_{r_{23|1}; u_2}(u_3)).

Since for the elliptical copula the inverse cumulative distribution is given in functional form (Theorem 5.1(d)), this sampling procedure is very efficient and accurate.

References

[1] R.M. Cooke. Uncertainty modeling: examples and issues. Safety Science, 26(1/2):49–60, 1997.
[2] D. Kurowicka and R.M. Cooke. Conditional and partial correlation for graphical uncertainty models. In Recent Advances in Reliability Theory, Birkhäuser, Boston, pages 59–76, 2000.
[3] D. Kurowicka, J. Misiewicz, and R.M. Cooke. Elliptical copulae. To appear, 2000.
[4] D. Kurowicka and R.M. Cooke. A parametrization of positive definite matrices in terms of partial correlation vines. Submitted to Linear Algebra and its Applications, 2000.
[5] K. Pearson. Mathematical contributions to the theory of evolution. Biometric Series VI, 1907.
[6] T.J. Bedford and R.M. Cooke. Reliability methods as management tools: dependence modeling and partial mission success. In ESREL '95, Chameleon Press, London, 1995.
[7] G.U. Yule and M.G. Kendall. An Introduction to the Theory of Statistics. Charles Griffin & Co., 14th edition, 1965.
Georgia Department of Education Kathy Cox, State Superintendent of Schools 7/19/2005 All Rights Reserved 1
Accelerated Mathematics 3 This is a course in precalculus and statistics, designed to prepare students to take AB or BC Advanced Placement Calculus. It includes rational, circular trigonometric, and inverse
PROPERTIES OF THE SAMPLE CORRELATION OF THE BIVARIATE LOGNORMAL DISTRIBUTION
PROPERTIES OF THE SAMPLE CORRELATION OF THE BIVARIATE LOGNORMAL DISTRIBUTION Chin-Diew Lai, Department of Statistics, Massey University, New Zealand John C W Rayner, School of Mathematics and Applied Statistics,
Probability and Random Variables. Generation of random variables (r.v.)
Probability and Random Variables Method for generating random variables with a specified probability distribution function. Gaussian And Markov Processes Characterization of Stationary Random Process Linearly
CHAPTER 6: Continuous Uniform Distribution: 6.1. Definition: The density function of the continuous random variable X on the interval [A, B] is.
Some Continuous Probability Distributions CHAPTER 6: Continuous Uniform Distribution: 6. Definition: The density function of the continuous random variable X on the interval [A, B] is B A A x B f(x; A,
Using row reduction to calculate the inverse and the determinant of a square matrix
Using row reduction to calculate the inverse and the determinant of a square matrix Notes for MATH 0290 Honors by Prof. Anna Vainchtein 1 Inverse of a square matrix An n n square matrix A is called invertible
1 Introduction to Matrices
1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns
A Non-Linear Schema Theorem for Genetic Algorithms
A Non-Linear Schema Theorem for Genetic Algorithms William A Greene Computer Science Department University of New Orleans New Orleans, LA 70148 bill@csunoedu 504-280-6755 Abstract We generalize Holland
88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a
88 CHAPTER. VECTOR FUNCTIONS.4 Curvature.4.1 Definitions and Examples The notion of curvature measures how sharply a curve bends. We would expect the curvature to be 0 for a straight line, to be very small
DERIVATIVES AS MATRICES; CHAIN RULE
DERIVATIVES AS MATRICES; CHAIN RULE 1. Derivatives of Real-valued Functions Let s first consider functions f : R 2 R. Recall that if the partial derivatives of f exist at the point (x 0, y 0 ), then we
Solving Systems of Linear Equations
LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how
MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.
MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α
INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS
INDISTINGUISHABILITY OF ABSOLUTELY CONTINUOUS AND SINGULAR DISTRIBUTIONS STEVEN P. LALLEY AND ANDREW NOBEL Abstract. It is shown that there are no consistent decision rules for the hypothesis testing problem
Unified Lecture # 4 Vectors
Fall 2005 Unified Lecture # 4 Vectors These notes were written by J. Peraire as a review of vectors for Dynamics 16.07. They have been adapted for Unified Engineering by R. Radovitzky. References [1] Feynmann,
Lecture L3 - Vectors, Matrices and Coordinate Transformations
S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3 - Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between
Matrix Representations of Linear Transformations and Changes of Coordinates
Matrix Representations of Linear Transformations and Changes of Coordinates 01 Subspaces and Bases 011 Definitions A subspace V of R n is a subset of R n that contains the zero element and is closed under
Prentice Hall Mathematics: Algebra 2 2007 Correlated to: Utah Core Curriculum for Math, Intermediate Algebra (Secondary)
Core Standards of the Course Standard 1 Students will acquire number sense and perform operations with real and complex numbers. Objective 1.1 Compute fluently and make reasonable estimates. 1. Simplify
Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.
Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(
a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.
Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given
Row Echelon Form and Reduced Row Echelon Form
These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (3rd edition) These notes are intended primarily for in-class presentation
Fitting Subject-specific Curves to Grouped Longitudinal Data
Fitting Subject-specific Curves to Grouped Longitudinal Data Djeundje, Viani Heriot-Watt University, Department of Actuarial Mathematics & Statistics Edinburgh, EH14 4AS, UK E-mail: [email protected] Currie,
3 Some Integer Functions
3 Some Integer Functions A Pair of Fundamental Integer Functions The integer function that is the heart of this section is the modulo function. However, before getting to it, let us look at some very simple
Math 241, Exam 1 Information.
Math 241, Exam 1 Information. 9/24/12, LC 310, 11:15-12:05. Exam 1 will be based on: Sections 12.1-12.5, 14.1-14.3. The corresponding assigned homework problems (see http://www.math.sc.edu/ boylan/sccourses/241fa12/241.html)
NEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS
NEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS TEST DESIGN AND FRAMEWORK September 2014 Authorized for Distribution by the New York State Education Department This test design and framework document
On the representability of the bi-uniform matroid
On the representability of the bi-uniform matroid Simeon Ball, Carles Padró, Zsuzsa Weiner and Chaoping Xing August 3, 2012 Abstract Every bi-uniform matroid is representable over all sufficiently large
Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs
CSE599s: Extremal Combinatorics November 21, 2011 Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs Lecturer: Anup Rao 1 An Arithmetic Circuit Lower Bound An arithmetic circuit is just like
15.062 Data Mining: Algorithms and Applications Matrix Math Review
.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop
Least-Squares Intersection of Lines
Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a
A characterization of trace zero symmetric nonnegative 5x5 matrices
A characterization of trace zero symmetric nonnegative 5x5 matrices Oren Spector June 1, 009 Abstract The problem of determining necessary and sufficient conditions for a set of real numbers to be the
Chapter 17. Orthogonal Matrices and Symmetries of Space
Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length
