Math 576: Quantitative Risk Management

1 Math 576: Quantitative Risk Management

Haijun Li
Department of Mathematics
Washington State University

Week 4

2 Outline

1. Basics of Multivariate Modelling
2. The Multivariate Normal Distribution

3 Notation

$\mathbb{R}^d$ = the $d$-dimensional Euclidean space. $\mathbb{R}^{d \times k}$ = the space of all $d \times k$ matrices.

For any $a, b \in \mathbb{R}^d$, the rectangular region
$$[a, b] := \prod_{i=1}^{d} [a_i, b_i], \qquad a = (a_1, \dots, a_d),\ b = (b_1, \dots, b_d),$$
is called a multivariate closed interval. The intervals $[a, b)$, $(a, b]$, etc., are defined similarly.

For any $a = (a_1, \dots, a_d)$ and $b = (b_1, \dots, b_d)$,
$$a \le b \iff a_i \le b_i,\ i = 1, \dots, d.$$
The inequality $a < b$ is defined similarly component-wise.

4 Notation (cont'd)

Random (row) vectors: $X = (X_1, \dots, X_d)$, etc. The transpose $X'$ is a column vector.

The multivariate distribution:
$$F(x) = P(X_1 \le x_1, \dots, X_d \le x_d) = P(X \in (-\infty, x]), \qquad x = (x_1, \dots, x_d) \in \mathbb{R}^d.$$

The marginal distribution of $X_i$ is given by
$$F_i(x_i) = P(X_i \le x_i) = F(\underbrace{\infty, \dots, \infty}_{i-1}, x_i, \underbrace{\infty, \dots, \infty}_{d-i}), \qquad x_i \in \mathbb{R}.$$

Write $X = (X_1, \dots, X_k, X_{k+1}, \dots, X_d)$. The multivariate marginal distribution of $(X_1, \dots, X_k)$ is given by
$$F_{\{1,\dots,k\}}(x_1, \dots, x_k) = P(X_1 \le x_1, \dots, X_k \le x_k) = F(x_1, \dots, x_k, \underbrace{\infty, \dots, \infty}_{d-k}).$$

5 Notation (cont'd)

The joint survival function:
$$\bar F(x) = P(X_1 > x_1, \dots, X_d > x_d) = P(X \in (x, \infty)), \qquad x = (x_1, \dots, x_d) \in \mathbb{R}^d.$$

The marginal survival function of $X_i$ is given by
$$\bar F_i(x_i) = P(X_i > x_i) = \bar F(\underbrace{-\infty, \dots, -\infty}_{i-1}, x_i, \underbrace{-\infty, \dots, -\infty}_{d-i}), \qquad x_i \in \mathbb{R}.$$

A random vector $X$ is said to be absolutely continuous if
$$F(x) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_d} f(t_1, \dots, t_d)\, dt_d \cdots dt_1, \qquad x = (x_1, \dots, x_d) \in \mathbb{R}^d,$$
where $f(t_1, \dots, t_d)$ is known as the joint density at $(t_1, \dots, t_d) \in \mathbb{R}^d$.

The notion of densities is local. In fact, for any Borel subset $A \subseteq \mathbb{R}^d$,
$$P(X \in A) = \int_A f(t)\, dt.$$

6 Conditional Distributions

Write $X = (X_1, X_2)$, where $X_1 = (X_1, \dots, X_k)$ and $X_2 = (X_{k+1}, \dots, X_d)$.

Let $X = (X_1, X_2)$ have the joint distribution $F(x)$ with multivariate margins $F_1(x_1)$ and $F_2(x_2)$. Let $f(t)$ denote the density of $X$, with multivariate marginal densities $f_1(t_1)$ and $f_2(t_2)$.

The conditional density function of $X_2$ given that $X_1 = t_1$ is given by
$$f_{2|1}(t_2 \mid t_1) := \frac{f(t_1, t_2)}{f_1(t_1)}, \qquad t_1 \in \mathbb{R}^k,\ t_2 \in \mathbb{R}^{d-k}.$$

The conditional distribution of $X_2$ given that $X_1 = x_1$ is given by
$$F_{2|1}(x_2 \mid x_1) = P(X_2 \le x_2 \mid X_1 = x_1) = \int_{(-\infty, x_2]} f_{2|1}(t_2 \mid x_1)\, dt_2.$$

7 Expectations

Let $g : \mathbb{R}^d \to \mathbb{R}$ and $h : \mathbb{R}^{d-k} \to \mathbb{R}$ be Borel-measurable.

The expectation:
$$E[g(X)] := \int_{\mathbb{R}^d} g(x)\, dF(x) = \underbrace{\int_{\mathbb{R}^d} g(x) f(x)\, dx}_{\text{if the density exists}}.$$

If the density exists, the conditional expectation is defined as
$$E[h(X_2) \mid X_1 = x_1] := \int_{\mathbb{R}^{d-k}} h(t_2)\, f_{2|1}(t_2 \mid x_1)\, dt_2.$$

The expectation $E[h(X_2) \mid X_1]$ is a function of the random vector $X_1$.

8 Independence

Let $X = (X_1, X_2)$ have the joint distribution $F(x_1, x_2)$ with multivariate margins $F_1(x_1)$ and $F_2(x_2)$.

$X_1$ and $X_2$ are independent, denoted as $X_1 \perp X_2$, if
$$P(X_2 \in B \mid X_1 \in A) = P(X_2 \in B), \qquad \text{for all Borel sets } A \subseteq \mathbb{R}^k,\ B \subseteq \mathbb{R}^{d-k}.$$

In terms of distribution functions, $X_1 \perp X_2$ if and only if
$$F(x_1, x_2) = F_1(x_1) F_2(x_2), \qquad (x_1, x_2) \in \mathbb{R}^k \times \mathbb{R}^{d-k}.$$

In the case that the density exists, $X_1 \perp X_2$ if and only if
$$f(t_1, t_2) = f_1(t_1) f_2(t_2), \qquad (t_1, t_2) \in \mathbb{R}^k \times \mathbb{R}^{d-k}.$$

In terms of expectations, $X_1 \perp X_2$ if and only if
$$E[h(X_2) \mid X_1] = E[h(X_2)], \qquad \text{for all Borel-measurable } h : \mathbb{R}^{d-k} \to \mathbb{R}.$$

9 Moments

The mean vector of $X = (X_1, \dots, X_d)$ is defined as $\mu = E(X) = (E(X_1), \dots, E(X_d))$.

The covariance matrix of $X = (X_1, \dots, X_d)$ is defined as
$$\Sigma = \mathrm{Cov}(X) = E[(X - \mu)'(X - \mu)] \in \mathbb{R}^{d \times d}.$$

If $\Sigma = (\sigma_{ij})_{d \times d}$, then the covariance of $X_i$ and $X_j$ is given by
$$\sigma_{ij} = E[X_i X_j] - E(X_i) E(X_j), \qquad 1 \le i, j \le d.$$
Here $\sigma_{ii} = E(X_i^2) - [E(X_i)]^2 =: \sigma_i^2$ is known as the variance of $X_i$.

The correlation of $X_i$ and $X_j$ is a rescaled covariance:
$$\rho_{ij} := \frac{\sigma_{ij}}{\sqrt{\sigma_{ii}\,\sigma_{jj}}}.$$
The matrix $(\rho_{ij})_{d \times d}$ is known as the correlation matrix.
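The rescaling above is easy to carry out numerically. A minimal NumPy sketch (the covariance entries below are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical 3x3 covariance matrix (illustrative values only).
Sigma = np.array([[ 4.0, 1.2, -0.8],
                  [ 1.2, 1.0,  0.3],
                  [-0.8, 0.3,  2.0]])

# Standard deviations sigma_i = sqrt(sigma_ii).
sd = np.sqrt(np.diag(Sigma))

# Correlation matrix: rho_ij = sigma_ij / sqrt(sigma_ii * sigma_jj).
Rho = Sigma / np.outer(sd, sd)

print(np.round(Rho, 3))
```

The diagonal of the resulting matrix is identically 1, as a correlation matrix requires.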

10 Remark

Higher-order moments can be obtained from the moment generating function $E(\exp\{X t'\})$.

For any matrix $B \in \mathbb{R}^{d \times k}$ and any vector $b \in \mathbb{R}^k$:
$$E(XB + b) = E(X)B + b, \qquad \mathrm{Cov}(XB + b) = \mathrm{Cov}(XB) = B'\,\mathrm{Cov}(X)\, B.$$

Any covariance matrix is positive semidefinite.

11 Standard Estimators of Mean and Covariance

Suppose we have $n$ iid observations $X_1, \dots, X_n$ of a $d$-dimensional risk-factor change vector $X$.

The sample mean vector:
$$\bar X := \frac{1}{n} \sum_{i=1}^{n} X_i \to E(X), \qquad \text{as } n \to \infty.$$

The sample covariance matrix:
$$S := \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar X)'(X_i - \bar X) \to \mathrm{Cov}(X), \qquad \text{as } n \to \infty.$$

Both estimators are unbiased.
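A minimal sketch of both estimators in NumPy, checked against `np.cov` (which uses the same $1/(n-1)$ convention); the parameters $\mu$ and $\Sigma$ below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# n iid observations of a d = 2 dimensional vector (hypothetical parameters).
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
n = 100_000
X = rng.multivariate_normal(mu, Sigma, size=n)

# Sample mean vector.
xbar = X.mean(axis=0)

# Sample covariance matrix with the unbiased 1/(n-1) normalization.
S = (X - xbar).T @ (X - xbar) / (n - 1)

# np.cov with rowvar=False uses the same 1/(n-1) convention.
assert np.allclose(S, np.cov(X, rowvar=False))
```

With $n = 10^5$ observations, both estimates sit close to the true parameters, illustrating the convergence stated on the slide.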

12 The Multivariate Normal Distribution

Definition
1. Let $Z = (Z_1, \dots, Z_k)$, where $Z_1, \dots, Z_k$ are iid standard normal $N(0,1)$ with density
$$\varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}, \qquad x \in \mathbb{R}.$$
2. $X = (X_1, \dots, X_d) \sim N_d(\mu, \Sigma)$ has a multivariate normal distribution if
$$X \stackrel{d}{=} \mu + Z A, \qquad \text{for some matrix } A \in \mathbb{R}^{k \times d}.$$

Then $E(X) = \mu$ and $\mathrm{Cov}(X) = \Sigma = A'A$ (Cholesky decomposition).

If $\Sigma$ is invertible, the normal density is given by
$$\varphi_d(x) = \frac{1}{(2\pi)^{d/2}\, |\Sigma|^{1/2}} \exp\Big\{ -\frac{1}{2} \underbrace{(x - \mu)\, \Sigma^{-1}\, (x - \mu)'}_{\text{ellipsoid contours}} \Big\}, \qquad x \in \mathbb{R}^d.$$
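The density formula can be checked directly against SciPy's implementation. A minimal sketch with hypothetical $\mu$, $\Sigma$, and evaluation point $x$:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical parameters and evaluation point.
mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
x = np.array([0.5, 0.5])

d = len(mu)
diff = x - mu

# phi_d(x) = (2 pi)^{-d/2} |Sigma|^{-1/2} exp(-(1/2)(x-mu) Sigma^{-1} (x-mu)')
dens = (2 * np.pi) ** (-d / 2) / np.sqrt(np.linalg.det(Sigma)) \
       * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff))

# Agrees with scipy's multivariate normal pdf.
assert np.isclose(dens, multivariate_normal(mu, Sigma).pdf(x))
```

Using `np.linalg.solve` instead of forming $\Sigma^{-1}$ explicitly is the numerically preferable way to evaluate the quadratic form.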

13 Figure: Normal ellipsoids $(x - \mu)\, \Sigma^{-1}\, (x - \mu)' = c$; smaller $c$ leads to higher probability mass concentration.

14 Cholesky Factorization (André-Louis Cholesky, 1924)

If the covariance matrix $\Sigma$ is positive-definite, there exists a square matrix $A$ such that $\Sigma = A'A$. The matrix $A$ can be constructed using the Cholesky factorization, and can be chosen to be upper triangular.

The matrix $A$ can also be written as
$$A = \begin{pmatrix} \sqrt{\lambda_1} & & 0 \\ & \ddots & \\ 0 & & \sqrt{\lambda_d} \end{pmatrix} P,$$
where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d \ge 0$ are the eigenvalues of the covariance matrix $\Sigma$, and the matrix $P$ is a $d \times d$ orthogonal matrix; that is, $P'P = I$ (identity matrix).
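Both constructions of $A$ can be verified numerically. A minimal sketch with a hypothetical positive-definite $\Sigma$:

```python
import numpy as np

Sigma = np.array([[4.0, 1.0],
                  [1.0, 2.0]])   # hypothetical positive-definite covariance

# Cholesky route: numpy returns a lower-triangular L with Sigma = L L',
# so A = L' is the upper-triangular factor with Sigma = A' A.
A_chol = np.linalg.cholesky(Sigma).T
assert np.allclose(A_chol.T @ A_chol, Sigma)

# Spectral route: Sigma = P' Lambda P gives A = Lambda^{1/2} P.
lam, V = np.linalg.eigh(Sigma)   # eigenvalues and orthonormal eigenvectors (columns)
P = V.T                          # rows of P are eigenvectors, so P' P = I
A_spec = np.diag(np.sqrt(lam)) @ P
assert np.allclose(A_spec.T @ A_spec, Sigma)
```

The two factors differ (one is triangular, the other generally is not), but both satisfy $\Sigma = A'A$, which is all the sampling construction requires.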

15 Sampling Algorithm with Geometric Interpretation

1. Simulate $z_1, z_2, \dots, z_d$ independently from $N(0, 1)$.
2. Stretch and then rotate:
$$(z_1, \dots, z_d)\, \underbrace{\begin{pmatrix} \sqrt{\lambda_1} & & 0 \\ & \ddots & \\ 0 & & \sqrt{\lambda_d} \end{pmatrix}}_{\text{stretch}}\, \underbrace{P}_{\text{rotation}} = (\sqrt{\lambda_1}\, z_1, \dots, \sqrt{\lambda_d}\, z_d)\, P.$$
3. Translate:
$$(u_1, \dots, u_d) = \underbrace{(\mu_1, \dots, \mu_d)}_{\text{translation}} + (z_1, \dots, z_d)\, \underbrace{\begin{pmatrix} \sqrt{\lambda_1} & & 0 \\ & \ddots & \\ 0 & & \sqrt{\lambda_d} \end{pmatrix} P}_{\text{Cholesky factor } A}.$$
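The three steps above can be sketched directly in NumPy; $\mu$ and $\Sigma$ below are hypothetical, and the sample moments should recover them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical target parameters.
mu = np.array([1.0, -1.0])
Sigma = np.array([[3.0, 1.0],
                  [1.0, 2.0]])

lam, V = np.linalg.eigh(Sigma)   # eigenvalues lam, orthonormal eigenvectors (columns)
P = V.T                          # orthogonal rotation matrix, P' P = I

n = 200_000
Z = rng.standard_normal((n, 2))      # step 1: iid N(0,1) draws
stretched = Z * np.sqrt(lam)         # step 2a: stretch coordinate i by sqrt(lambda_i)
rotated = stretched @ P              # step 2b: rotate by P
U = mu + rotated                     # step 3: translate by mu
```

Each row of `U` is one draw from $N_2(\mu, \Sigma)$, since $\mathrm{Cov}(z\,\Lambda^{1/2}P) = P'\Lambda P = \Sigma$.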

16 Figure: Starting from the standard normal, the stretch, rotation, and translation yield an ellipsoid.

17 Properties of Normal Distributions

Let $X = (X_1, \dots, X_d) \sim N_d(\mu, \Sigma)$, where $\Sigma = (\sigma_{ij})_{d \times d}$.

The moment generating function is
$$E(e^{X t'}) = e^{\mu t' + \frac{1}{2} t \Sigma t'}.$$

Any affine transform is normal:
$$XB + b \sim N_k(\mu B + b,\ B' \Sigma B), \qquad B \in \mathbb{R}^{d \times k},\ b \in \mathbb{R}^k.$$

$X \sim N_d(\mu, \Sigma)$ if and only if
$$X a' = \sum_{i=1}^{d} a_i X_i \sim N_1(\mu a',\ a \Sigma a'), \qquad \text{for all } a = (a_1, \dots, a_d) \in \mathbb{R}^d.$$

If $Y = (Y_1, \dots, Y_d) \sim N_d(\mu^*, \Sigma^*)$ and $X \perp Y$, then the convolution
$$X + Y \sim N_d(\mu + \mu^*,\ \Sigma + \Sigma^*).$$
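The affine-transform property can be illustrated empirically: map samples through $X \mapsto XB + b$ and compare the sample moments of the image with $\mu B + b$ and $B'\Sigma B$. All parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical d = 3 normal distribution.
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

# Hypothetical affine map XB + b from R^3 to R^2.
B = np.array([[1.0,  0.0],
              [2.0,  1.0],
              [0.0, -1.0]])
b = np.array([0.5, -0.5])

# Theoretical parameters of the transformed vector.
mu_new = mu @ B + b          # mean:       mu B + b
Sigma_new = B.T @ Sigma @ B  # covariance: B' Sigma B

# Empirical check via simulation.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ B + b
assert np.allclose(Y.mean(axis=0), mu_new, atol=0.05)
assert np.allclose(np.cov(Y, rowvar=False), Sigma_new, atol=0.1)
```

The simulation matches the closed-form parameters to within Monte Carlo error, as the property predicts.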

18 Properties of Normal Distributions (cont'd)

Write $X = (X_1, X_2) \sim N_d(\mu, \Sigma)$, where $X_1 = (X_1, \dots, X_k)$ and $X_2 = (X_{k+1}, \dots, X_d)$, with block matrices
$$\mu = (\mu_1, \mu_2), \qquad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}.$$

Then the multivariate margins are
$$X_1 \sim N_k(\mu_1, \Sigma_{11}), \qquad X_2 \sim N_{d-k}(\mu_2, \Sigma_{22}).$$

19 Properties of Normal Distributions (cont'd)

Write $X = (X_1, X_2) \sim N_d(\mu, \Sigma)$, where $X_1 = (X_1, \dots, X_k)$ and $X_2 = (X_{k+1}, \dots, X_d)$, with block matrices
$$\mu = (\mu_1, \mu_2), \qquad \Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}.$$

Then $[X_2 \mid X_1 = x_1] \sim N_{d-k}(\mu_{2|1}, \Sigma_{22|1})$, where
$$\mu_{2|1} = \mu_2 + (x_1 - \mu_1)\, \Sigma_{11}^{-1} \Sigma_{12}, \qquad \Sigma_{22|1} = \Sigma_{22} - \Sigma_{21} \Sigma_{11}^{-1} \Sigma_{12}.$$
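The conditioning formulas translate line-for-line into NumPy. A minimal sketch for $d = 2$, $k = 1$, with hypothetical parameter values:

```python
import numpy as np

# Hypothetical bivariate normal, partitioned with k = 1.
mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

k = 1
mu1, mu2 = mu[:k], mu[k:]
S11, S12 = Sigma[:k, :k], Sigma[:k, k:]
S21, S22 = Sigma[k:, :k], Sigma[k:, k:]

x1 = np.array([1.0])   # conditioning value X1 = x1

# mu_{2|1} = mu2 + (x1 - mu1) S11^{-1} S12
mu_cond = mu2 + (x1 - mu1) @ np.linalg.solve(S11, S12)

# Sigma_{22|1} = S22 - S21 S11^{-1} S12
Sigma_cond = S22 - S21 @ np.linalg.solve(S11, S12)
```

With these numbers, $\mu_{2|1} = 1 + 1 \cdot (0.8/2) = 1.4$ and $\Sigma_{22|1} = 1 - 0.8 \cdot 0.4 = 0.68$; note the conditional variance never exceeds the unconditional one.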

20 Properties of Normal Distributions (cont'd)

Let $X = (X_1, \dots, X_d) \sim N_d(\mu, \Sigma)$, where $\Sigma$ is positive-definite. Then the squared Mahalanobis distance from the mean vector $\mu$ satisfies
$$(X - \mu)\, \Sigma^{-1}\, (X - \mu)' \sim \chi_d^2,$$
a chi-squared distribution with $d$ degrees of freedom (with density $c\, x^{d/2 - 1} e^{-x/2}$, where $c$ is the normalizing constant).

Figure: The squared Mahalanobis distance decays exponentially fast!
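The chi-squared result can be checked by simulation: compute the squared Mahalanobis distance for a large sample and compare its empirical quantiles with those of $\chi_d^2$. Parameters below are hypothetical:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)

# Hypothetical d = 3 normal distribution.
mu = np.array([0.0, 0.0, 0.0])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.2],
                  [0.0, 0.2, 1.5]])
d = 3
X = rng.multivariate_normal(mu, Sigma, size=200_000)

# Squared Mahalanobis distance (x - mu) Sigma^{-1} (x - mu)' for each row.
diff = X - mu
m2 = np.einsum('ij,ij->i', diff @ np.linalg.inv(Sigma), diff)

# Compare empirical quantiles with chi-squared(d) quantiles.
qs = np.array([0.5, 0.9, 0.99])
emp = np.quantile(m2, qs)
theo = chi2.ppf(qs, df=d)
print(np.round(emp, 2), np.round(theo, 2))
```

The empirical and theoretical quantiles agree to within Monte Carlo error, confirming the $\chi_d^2$ law regardless of the particular $\Sigma$ used.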

21 Example: Daily Returns of the Disney Share Price

There are many numerical tests of normality. A QQ-plot: ordered observations are plotted against quantiles of the standard normal distribution. A lack of linearity is evidence against the hypothesized normal reference distribution.

Figure: QQ-plot of daily returns of the Disney share price from 1993 to 2000 against a normal reference distribution.
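The QQ-plot computation is available in SciPy. A minimal sketch on placeholder data (the actual Disney return series is not reproduced here, so simulated returns stand in for it):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Placeholder data: simulated "daily returns" standing in for the real series.
returns = rng.standard_normal(2000) * 0.02

# probplot pairs the ordered observations with normal quantiles and also fits
# the least-squares line through the QQ points, returning its slope/intercept/r.
(osm, osr), (slope, intercept, r) = stats.probplot(returns, dist="norm")

# For data consistent with a normal model the QQ points are nearly linear,
# so the correlation coefficient r of the fitted line is close to 1.
print(round(r, 4))
```

Passing `plot=plt` (with `matplotlib.pyplot` imported as `plt`) would draw the figure; heavy-tailed real returns would show the QQ points bending away from the line at both ends.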

22 Defects of the Multivariate Normal Distribution

- The tails of its univariate marginal distributions are too thin; they do not assign enough weight to extreme events.
- The joint tails of the distribution do not assign enough weight to joint extreme outcomes.
- The distribution has a strong form of symmetry, known as elliptical symmetry.


More information

Probability, Random Variables and Expectations

Probability, Random Variables and Expectations Chapter Probability, Random Variables and Expectations Note: The primary reference for these notes is Mittelhammer (999). Other treatments of probability theory include Gallant (997), Casella & Berger

More information

Introduction to General and Generalized Linear Models

Introduction to General and Generalized Linear Models Introduction to General and Generalized Linear Models General Linear Models - part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby

More information

Quadratic forms Cochran s theorem, degrees of freedom, and all that

Quadratic forms Cochran s theorem, degrees of freedom, and all that Quadratic forms Cochran s theorem, degrees of freedom, and all that Dr. Frank Wood Frank Wood, fwood@stat.columbia.edu Linear Regression Models Lecture 1, Slide 1 Why We Care Cochran s theorem tells us

More information

4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns

4. Matrix inverses. left and right inverse. linear independence. nonsingular matrices. matrices with linearly independent columns L. Vandenberghe EE133A (Spring 2016) 4. Matrix inverses left and right inverse linear independence nonsingular matrices matrices with linearly independent columns matrices with linearly independent rows

More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation Math 541: Statistical Theory II Lecturer: Songfeng Zheng Maximum Likelihood Estimation 1 Maximum Likelihood Estimation Maximum likelihood is a relatively simple method of constructing an estimator for

More information

MULTIVARIATE PROBABILITY DISTRIBUTIONS

MULTIVARIATE PROBABILITY DISTRIBUTIONS MULTIVARIATE PROBABILITY DISTRIBUTIONS. PRELIMINARIES.. Example. Consider an experiment that consists of tossing a die and a coin at the same time. We can consider a number of random variables defined

More information

Applications to Data Smoothing and Image Processing I

Applications to Data Smoothing and Image Processing I Applications to Data Smoothing and Image Processing I MA 348 Kurt Bryan Signals and Images Let t denote time and consider a signal a(t) on some time interval, say t. We ll assume that the signal a(t) is

More information

Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components

Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components The eigenvalues and eigenvectors of a square matrix play a key role in some important operations in statistics. In particular, they

More information

Correlation in Random Variables

Correlation in Random Variables Correlation in Random Variables Lecture 11 Spring 2002 Correlation in Random Variables Suppose that an experiment produces two random variables, X and Y. What can we say about the relationship between

More information

[1] Diagonal factorization

[1] Diagonal factorization 8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall:

More information

Conditional Tail Expectations for Multivariate Phase Type Distributions

Conditional Tail Expectations for Multivariate Phase Type Distributions Conditional Tail Expectations for Multivariate Phase Type Distributions Jun Cai Department of Statistics and Actuarial Science University of Waterloo Waterloo, ON N2L 3G1, Canada jcai@math.uwaterloo.ca

More information

1 Eigenvalues and Eigenvectors

1 Eigenvalues and Eigenvectors Math 20 Chapter 5 Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors. Definition: A scalar λ is called an eigenvalue of the n n matrix A is there is a nontrivial solution x of Ax = λx. Such an x

More information

Statistical Foundations: Measures of Location and Central Tendency and Summation and Expectation

Statistical Foundations: Measures of Location and Central Tendency and Summation and Expectation Statistical Foundations: and Central Tendency and and Lecture 4 September 5, 2006 Psychology 790 Lecture #4-9/05/2006 Slide 1 of 26 Today s Lecture Today s Lecture Where this Fits central tendency/location

More information

TRANSFORMATIONS OF RANDOM VARIABLES

TRANSFORMATIONS OF RANDOM VARIABLES TRANSFORMATIONS OF RANDOM VARIABLES 1. INTRODUCTION 1.1. Definition. We are often interested in the probability distributions or densities of functions of one or more random variables. Suppose we have

More information

Tail inequalities for order statistics of log-concave vectors and applications

Tail inequalities for order statistics of log-concave vectors and applications Tail inequalities for order statistics of log-concave vectors and applications Rafał Latała Based in part on a joint work with R.Adamczak, A.E.Litvak, A.Pajor and N.Tomczak-Jaegermann Banff, May 2011 Basic

More information

Vector Spaces II: Finite Dimensional Linear Algebra 1

Vector Spaces II: Finite Dimensional Linear Algebra 1 John Nachbar September 2, 2014 Vector Spaces II: Finite Dimensional Linear Algebra 1 1 Definitions and Basic Theorems. For basic properties and notation for R N, see the notes Vector Spaces I. Definition

More information

Topic 8: The Expected Value

Topic 8: The Expected Value Topic 8: September 27 and 29, 2 Among the simplest summary of quantitative data is the sample mean. Given a random variable, the corresponding concept is given a variety of names, the distributional mean,

More information

2D Geometric Transformations. COMP 770 Fall 2011

2D Geometric Transformations. COMP 770 Fall 2011 2D Geometric Transformations COMP 770 Fall 2011 1 A little quick math background Notation for sets, functions, mappings Linear transformations Matrices Matrix-vector multiplication Matrix-matrix multiplication

More information

Lecture 8: Signal Detection and Noise Assumption

Lecture 8: Signal Detection and Noise Assumption ECE 83 Fall Statistical Signal Processing instructor: R. Nowak, scribe: Feng Ju Lecture 8: Signal Detection and Noise Assumption Signal Detection : X = W H : X = S + W where W N(, σ I n n and S = [s, s,...,

More information

Understanding and Applying Kalman Filtering

Understanding and Applying Kalman Filtering Understanding and Applying Kalman Filtering Lindsay Kleeman Department of Electrical and Computer Systems Engineering Monash University, Clayton 1 Introduction Objectives: 1. Provide a basic understanding

More information

Variance Reduction. Pricing American Options. Monte Carlo Option Pricing. Delta and Common Random Numbers

Variance Reduction. Pricing American Options. Monte Carlo Option Pricing. Delta and Common Random Numbers Variance Reduction The statistical efficiency of Monte Carlo simulation can be measured by the variance of its output If this variance can be lowered without changing the expected value, fewer replications

More information

Orthogonal Projections

Orthogonal Projections Orthogonal Projections and Reflections (with exercises) by D. Klain Version.. Corrections and comments are welcome! Orthogonal Projections Let X,..., X k be a family of linearly independent (column) vectors

More information

Data Modeling & Analysis Techniques. Probability & Statistics. Manfred Huber 2011 1

Data Modeling & Analysis Techniques. Probability & Statistics. Manfred Huber 2011 1 Data Modeling & Analysis Techniques Probability & Statistics Manfred Huber 2011 1 Probability and Statistics Probability and statistics are often used interchangeably but are different, related fields

More information

CS395T Computational Statistics with Application to Bioinformatics

CS395T Computational Statistics with Application to Bioinformatics CS395T Computational Statistics with Application to Bioinformatics Prof. William H. Press Spring Term, 2010 The University of Texas at Austin Unit 6: Multivariate Normal Distributions and Chi Square The

More information

A Tutorial on Probability Theory

A Tutorial on Probability Theory Paola Sebastiani Department of Mathematics and Statistics University of Massachusetts at Amherst Corresponding Author: Paola Sebastiani. Department of Mathematics and Statistics, University of Massachusetts,

More information

Matrix Norms. Tom Lyche. September 28, Centre of Mathematics for Applications, Department of Informatics, University of Oslo

Matrix Norms. Tom Lyche. September 28, Centre of Mathematics for Applications, Department of Informatics, University of Oslo Matrix Norms Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 28, 2009 Matrix Norms We consider matrix norms on (C m,n, C). All results holds for

More information