Chapter 3 Joint Distributions




3.6 Functions of Jointly Distributed Random Variables

Discrete Random Variables: Let $f(x, y)$ denote the joint pdf of random variables $X$ and $Y$, with $A$ denoting the two-dimensional space of points for which $f(x, y) > 0$. Let $u = g_1(x, y)$ and $v = g_2(x, y)$ define a one-to-one transformation that maps $A$ onto the space $B$ of $U$ and $V$. The joint pdf of $U = g_1(X, Y)$ and $V = g_2(X, Y)$ is

$$f_{UV}(u, v) = f(h_1(u, v), h_2(u, v)) \quad \text{for } (u, v) \in B,$$

where $x = h_1(u, v)$, $y = h_2(u, v)$ is the inverse of $u = g_1(x, y)$, $v = g_2(x, y)$.

Example 1: Let $X$ and $Y$ be two independent random variables that have Poisson distributions with means $\mu_1$ and $\mu_2$, respectively. Then

$$f(x, y) = \frac{\mu_1^x \mu_2^y e^{-\mu_1 - \mu_2}}{x! \, y!} \quad \text{for } x = 0, 1, 2, \ldots \text{ and } y = 0, 1, 2, \ldots$$

The space $A$ of points $(x, y)$ such that $f(x, y) > 0$ is just the set of all pairs of nonnegative integers. We want to find the pdf of $U = X + Y$. It will help to use the change-of-variables technique. This requires defining a second transformation $V$, so that a one-to-one transformation between pairs $(x, y)$ and $(u, v)$ is created. Define $V = Y$. Then $u = x + y$ and $v = y$ represent a one-to-one transformation that maps $A$ onto $B = \{(u, v) : v = 0, 1, \ldots, u \text{ and } u = 0, 1, 2, \ldots\}$.

For $(u, v) \in B$, the inverse functions are given by $x = u - v$ and $y = v$. Then

$$f(u, v) = \frac{\mu_1^{u-v} \mu_2^{v} e^{-\mu_1 - \mu_2}}{(u - v)! \, v!}.$$

From this, we can find the marginal pdf of $U$:

$$f_U(u) = \sum_{v=0}^{u} f(u, v) = \frac{e^{-\mu_1 - \mu_2}}{u!} \sum_{v=0}^{u} \frac{u!}{(u - v)! \, v!} \mu_1^{u-v} \mu_2^{v} = \frac{(\mu_1 + \mu_2)^u e^{-\mu_1 - \mu_2}}{u!}, \quad u = 0, 1, 2, \ldots,$$

where the last equality follows from the binomial theorem. From this we can see that $U = X + Y$ is Poisson with mean $\mu_1 + \mu_2$.
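As a quick sanity check of this result, here is a minimal simulation sketch (the libraries numpy/scipy and the illustrative values $\mu_1 = 2$, $\mu_2 = 3$ are my choices, not from the notes) comparing the empirical pmf of $U = X + Y$ with the Poisson($\mu_1 + \mu_2$) pmf.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu1, mu2, N = 2.0, 3.0, 200_000

# Draws of U = X + Y, with X ~ Poisson(mu1) and Y ~ Poisson(mu2) independent
u = rng.poisson(mu1, N) + rng.poisson(mu2, N)

# Empirical pmf of U vs. the Poisson(mu1 + mu2) pmf
for k in range(8):
    print(k, np.mean(u == k), stats.poisson.pmf(k, mu1 + mu2))

The two columns agree up to Monte Carlo error, consistent with $U$ being Poisson with mean $\mu_1 + \mu_2 = 5$.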

Continuous Random Variables: Let $u = g_1(x, y)$ and $v = g_2(x, y)$ define a one-to-one transformation that maps a two-dimensional set $A$ in the $xy$ plane onto a (two-dimensional) set $B$ in the $uv$ plane. If we express $x$ and $y$ in terms of $u$ and $v$, we have $x = h_1(u, v)$ and $y = h_2(u, v)$. The determinant of order 2,

$$J = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[4pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix},$$

is called the Jacobian of the transformation and is a function of $(u, v)$. We'll assume that these first-order partial derivatives are continuous and that the Jacobian $J$ is not identically 0 in $B$. Under these conditions, the joint pdf of $U = g_1(X, Y)$ and $V = g_2(X, Y)$ is

$$f_{UV}(u, v) = f(h_1(u, v), h_2(u, v)) \, |J| \quad \text{for } (u, v) \in B.$$

Example 2: Let $A$ be the square $A = \{(x, y) : 0 < x < 1, \ 0 < y < 1\}$. Consider the transformation

$$u = g_1(x, y) = x + y, \qquad v = g_2(x, y) = x - y.$$

To apply the Jacobian technique, we first find the inverse transformation:

$$x = h_1(u, v) = \tfrac{1}{2}(u + v), \qquad y = h_2(u, v) = \tfrac{1}{2}(u - v).$$

To determine $B$ in the $uv$ plane, note how the boundaries of $A$ are transformed into the boundaries of $B$:

$x = 0$ maps to $0 = \tfrac{1}{2}(u + v)$,
$x = 1$ maps to $1 = \tfrac{1}{2}(u + v)$,
$y = 0$ maps to $0 = \tfrac{1}{2}(u - v)$,
$y = 1$ maps to $1 = \tfrac{1}{2}(u - v)$.

The Jacobian is given by

$$J = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[4pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix} = \begin{vmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\[2pt] \tfrac{1}{2} & -\tfrac{1}{2} \end{vmatrix} = -\tfrac{1}{2}.$$

Note: Your book calls this $J^{-1}(h_1(u, v), h_2(u, v))$ and uses the notation $J(x, y)$ for the Jacobian of the forward transformation. We still need to take the absolute value, $|J^{-1}(h_1(u, v), h_2(u, v))|$, as in Proposition A of the book; here $|J| = \tfrac{1}{2}$.
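The partial derivatives above are easy to verify symbolically. Here is a minimal sketch using sympy (the symbol names and layout are my choices) that recomputes the Jacobian of $x = \tfrac{1}{2}(u + v)$, $y = \tfrac{1}{2}(u - v)$.

import sympy as sp

u, v = sp.symbols('u v')
x = (u + v) / 2   # x = h1(u, v)
y = (u - v) / 2   # y = h2(u, v)

# Matrix of first-order partial derivatives of (x, y) with respect to (u, v)
J = sp.Matrix([x, y]).jacobian([u, v])
print(J)        # Matrix([[1/2, 1/2], [1/2, -1/2]])
print(J.det())  # -1/2, so |J| = 1/2 as claimed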

Convolution: finding the pdf of the sum of two independent random variables. Let $X$ and $Y$ be independent random variables with respective pdfs $f_X(x)$ and $f_Y(y)$. Let $Z = X + Y$ and $W = Y$. We have the one-to-one transformation $x = z - w$, $y = w$ with Jacobian $J = 1$. Therefore the joint pdf of $Z$ and $W$ is

$$f_{ZW}(z, w) = f_X(z - w) f_Y(w),$$

and the marginal of $Z = X + Y$ is given by

$$f_Z(z) = \int_{-\infty}^{\infty} f_X(z - w) f_Y(w) \, dw.$$

Very useful!
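As an illustration of the convolution formula, the following sketch (my choice of example: $X$ and $Y$ independent Exponential(1), in which case $Z = X + Y$ is Gamma with shape 2) evaluates the integral numerically with scipy and compares it to the known Gamma density.

import numpy as np
from scipy import stats
from scipy.integrate import quad

f = stats.expon.pdf   # X and Y ~ Exponential(rate 1)

def f_Z(z):
    # f_Z(z) = integral of f_X(z - w) f_Y(w) dw; the integrand vanishes
    # outside 0 <= w <= z, so a finite range of integration suffices
    val, _ = quad(lambda w: f(z - w) * f(w), 0, z)
    return val

# Compare with the Gamma(shape 2, scale 1) pdf at a few points
for z in [0.5, 1.0, 2.0]:
    print(z, f_Z(z), stats.gamma.pdf(z, a=2))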

3.7 Extrema and Order Statistics

Let $X_1, X_2, \ldots, X_n$ denote a random sample of the continuous type having pdf $f(x)$ and cdf $F(x)$. Let $X_{(1)}$ be the smallest of these, $X_{(2)}$ the second smallest, and so on, with $X_{(n)}$ denoting the largest:

$$X_{(1)} < X_{(2)} < X_{(3)} < \cdots < X_{(n)}.$$

$X_{(k)}$ is called the $k$th order statistic of the sample, for $k = 1, 2, \ldots, n$. The joint pdf of $X_{(1)}, \ldots, X_{(n)}$ is given by

$$f(x_{(1)}, \ldots, x_{(n)}) = n! \, f(x_{(1)}) f(x_{(2)}) \cdots f(x_{(n)}) \quad \text{when } a < x_{(1)} < x_{(2)} < \cdots < x_{(n)} < b.$$

This is quite consistent with intuition.

Consider any vector $(x_1, x_2, \ldots, x_n)$. Because of independence, the joint pdf of $X_1, \ldots, X_n$ is

$$f(x_1, x_2, \ldots, x_n) = f(x_1) f(x_2) \cdots f(x_n).$$

However, the order statistics are unaltered by all $n!$ permutations of $(x_1, x_2, \ldots, x_n)$, which accounts for the coefficient $n!$ in the pdf of $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$ above.

Next, consider the maximum of the sample, $X_{(n)}$. Let $F_n(x_{(n)})$ denote the cdf of $X_{(n)}$. We find the cdf and pdf of $X_{(n)}$ written in terms of $F$ and $f$:

$$P[X_{(n)} \le x_{(n)}] = P[X_1 \le x_{(n)}, X_2 \le x_{(n)}, \ldots, X_n \le x_{(n)}] = \prod_{i=1}^{n} P[X_i \le x_{(n)}] = [F(x_{(n)})]^n.$$

Thus, $F_n(x_{(n)}) = [F(x_{(n)})]^n$.
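A small simulation makes $F_n(x_{(n)}) = [F(x_{(n)})]^n$ concrete. This sketch (Uniform(0,1) samples with $n = 5$; both choices are mine, for illustration only) compares the empirical cdf of the maximum with $x^n$.

import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 100_000
maxima = rng.uniform(0, 1, size=(N, n)).max(axis=1)

for x in [0.5, 0.8, 0.95]:
    print(x, np.mean(maxima <= x), x**n)   # empirical cdf vs. x^n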

We find the pdf of $X_{(n)}$ by differentiating the cdf:

$$f_n(x_{(n)}) = F_n'(x_{(n)}) = n[F(x_{(n)})]^{n-1} f(x_{(n)}) \quad \text{for } a < x_{(n)} < b.$$

Now consider the minimum of the sample, $X_{(1)}$:

$$1 - F_1(x_{(1)}) = P[X_{(1)} > x_{(1)}] = \prod_{i=1}^{n} P[X_i > x_{(1)}] = \prod_{i=1}^{n} [1 - F(x_{(1)})] = [1 - F(x_{(1)})]^n.$$

We see that $F_1(x_{(1)}) = 1 - [1 - F(x_{(1)})]^n$, and the pdf of $X_{(1)}$ is found by taking a derivative:

$$f_1(x_{(1)}) = F_1'(x_{(1)}) = n[1 - F(x_{(1)})]^{n-1} f(x_{(1)}) \quad \text{for } a < x_{(1)} < b.$$
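For a concrete instance of the minimum formula: if $F(x) = 1 - e^{-x}$ (Exponential(1)), then $F_1(x_{(1)}) = 1 - e^{-n x_{(1)}}$, i.e. the minimum of $n$ independent Exponential(1) variables is Exponential with rate $n$. The sketch below (again $n = 5$, my choice) checks this by simulation.

import numpy as np

rng = np.random.default_rng(1)
n, N = 5, 100_000
minima = rng.exponential(1.0, size=(N, n)).min(axis=1)

for x in [0.1, 0.3, 0.5]:
    print(x, np.mean(minima <= x), 1 - np.exp(-n * x))   # empirical cdf vs. 1 - e^{-nx}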

In general, suppose we are interested in the pdf of the $k$th order statistic, for $1 \le k \le n$. To find $F_k(x_{(k)})$, we notice that the probability of $\{X_{(k)} \le x_{(k)}\}$ is just the probability that at least $k$ of the $X$'s are less than or equal to $x_{(k)}$. The chance that $X \le x_{(k)}$ is $F(x_{(k)})$, so we find that

$$F_k(x_{(k)}) = P[X_{(k)} \le x_{(k)}] = \sum_{i=k}^{n} \frac{n!}{(n - i)! \, i!} [F(x_{(k)})]^i [1 - F(x_{(k)})]^{n-i},$$

and $f_k(x_{(k)})$ is found by taking the derivative of $F_k(x_{(k)})$. However, by applying an interesting identity in analysis, it can be shown that $f_k(x_{(k)})$ simplifies to

$$f_k(x_{(k)}) = \frac{n!}{(k - 1)! \, (n - k)!} [F(x_{(k)})]^{k-1} [1 - F(x_{(k)})]^{n-k} f(x_{(k)}).$$

The above is Theorem A in the book.
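For Uniform(0,1) samples, $F(x) = x$ and $f(x) = 1$, so the formula above reduces to the Beta$(k, n - k + 1)$ density. The sketch below (with $n = 5$, $k = 2$, my illustrative choices) evaluates the formula directly and compares it with scipy's Beta pdf.

from math import factorial
from scipy import stats

n, k = 5, 2

def f_k(x):
    # n!/((k-1)!(n-k)!) * F(x)^{k-1} * (1 - F(x))^{n-k} * f(x), with F(x) = x, f(x) = 1
    c = factorial(n) / (factorial(k - 1) * factorial(n - k))
    return c * x**(k - 1) * (1 - x)**(n - k)

for x in [0.2, 0.5, 0.8]:
    print(x, f_k(x), stats.beta.pdf(x, k, n - k + 1))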