MULTIVARIATE PROBABILITY DISTRIBUTIONS

1. PRELIMINARIES

1.1. Example. Consider an experiment that consists of tossing a die and a coin at the same time. We can consider a number of random variables defined on this sample space. We will assign an indicator random variable to the result of tossing the coin: if it comes up heads we will assign a value of one, and if it comes up tails we will assign a value of zero. Consider the following random variables.

X1: The number of dots appearing on the die.
X2: The sum of the number of dots on the die and the indicator for the coin.
X3: The value of the indicator for tossing the coin.
X4: The product of the number of dots on the die and the indicator for the coin.

There are twelve sample points associated with this experiment:

E1: 1H   E2: 2H   E3: 3H   E4: 4H   E5: 5H   E6: 6H
E7: 1T   E8: 2T   E9: 3T   E10: 4T  E11: 5T  E12: 6T

Random variable X1 has six possible outcomes, each with probability 1/6. Random variable X3 has two possible outcomes, each with probability 1/2. Consider the values of X2 for each of the sample points. The possible outcomes and the probabilities for X2 are as follows.

TABLE 1. Probability of X2

Value of Random Variable    Probability
1                           1/12
2                           1/6
3                           1/6
4                           1/6
5                           1/6
6                           1/6
7                           1/12
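As a quick check of Table 1, the twelve sample points can simply be enumerated. The following sketch is our addition (the notes themselves contain no code); Python is used here and in the other snippets below.

    from fractions import Fraction
    from collections import Counter

    pmf = Counter()
    for die in range(1, 7):
        for coin in (1, 0):  # 1 = heads, 0 = tails
            # each of the 12 sample points has probability 1/12
            pmf[die + coin] += Fraction(1, 12)

    for value in sorted(pmf):
        print(value, pmf[value])  # 1: 1/12, 2..6: 1/6, 7: 1/12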

1.2. Bivariate Random Variables. Now consider the intersection of X1 and X2. We call this intersection a bivariate random variable. For a general bivariate case we write this as P(X1 = x1, X2 = x2). We can write the probability distribution in the form of a table as follows for the above example.

TABLE 2. Joint Probability of X1 and X2

X1 \ X2   1      2      3      4      5      6      7
1         1/12   1/12   0      0      0      0      0
2         0      1/12   1/12   0      0      0      0
3         0      0      1/12   1/12   0      0      0
4         0      0      0      1/12   1/12   0      0
5         0      0      0      0      1/12   1/12   0
6         0      0      0      0      0      1/12   1/12

For the example, P(X1 = 3, X2 = 3) = 1/12, which is the probability of sample point E9.

2. PROBABILITY DISTRIBUTIONS FOR DISCRETE MULTIVARIATE RANDOM VARIABLES

2.1. Definition. If X1 and X2 are discrete random variables, the function given by p(x1, x2) = P(X1 = x1, X2 = x2) for each pair of values (x1, x2) within the range of X1 and X2 is called the joint (or bivariate) probability distribution for X1 and X2. Specifically we write

p(x1, x2) = P(X1 = x1, X2 = x2),   −∞ < x1 < ∞, −∞ < x2 < ∞   (1)

In the single-variable case, the probability function for a discrete random variable X assigns non-zero probabilities to a countable number of distinct values of X in such a way that the sum of the probabilities is equal to one. Similarly, in the bivariate case the joint probability function p(x1, x2) assigns non-zero probabilities to only a countable number of pairs of values (x1, x2). Further, the non-zero probabilities must sum to one.

2.2. Properties of the Joint Probability (or Density) Function.

Theorem 1. If X1 and X2 are discrete random variables with joint probability function p(x1, x2), then
(i) p(x1, x2) ≥ 0 for all x1, x2.
(ii) Σ_{x1, x2} p(x1, x2) = 1, where the sum is over all values (x1, x2) that are assigned nonzero probabilities.
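Theorem 1 is easy to confirm for Table 2 by enumeration; the sketch below (our addition) also reads off P(X1 = 3, X2 = 3).

    from fractions import Fraction

    joint = {}
    for die in range(1, 7):
        for coin in (0, 1):
            key = (die, die + coin)
            joint[key] = joint.get(key, Fraction(0)) + Fraction(1, 12)

    print(sum(joint.values()))   # 1, as required by Theorem 1(ii)
    print(joint[(3, 3)])         # 1/12, the probability of sample point E9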

Once the joint probability function has been determined for discrete random variables X1 and X2, calculating joint probabilities involving X1 and X2 is straightforward.

2.3. Example. Roll a red die and a green die. Let

X1 = number of dots on the red die
X2 = number of dots on the green die

There are 36 points in the sample space.

TABLE 3. Possible Outcomes of Rolling a Red Die and a Green Die. (First number in pair is number on red die.)

Red \ Green   1      2      3      4      5      6
1             (1,1)  (1,2)  (1,3)  (1,4)  (1,5)  (1,6)
2             (2,1)  (2,2)  (2,3)  (2,4)  (2,5)  (2,6)
3             (3,1)  (3,2)  (3,3)  (3,4)  (3,5)  (3,6)
4             (4,1)  (4,2)  (4,3)  (4,4)  (4,5)  (4,6)
5             (5,1)  (5,2)  (5,3)  (5,4)  (5,5)  (5,6)
6             (6,1)  (6,2)  (6,3)  (6,4)  (6,5)  (6,6)

The probability of (1, 1) is 1/36. The probability of (6, 3) is also 1/36. Now consider P(X1 ≤ 2, X2 ≤ 2). This is given as

P(X1 ≤ 2, X2 ≤ 2) = p(1, 1) + p(1, 2) + p(2, 1) + p(2, 2) = 4/36 = 1/9

2.4. Example. Consider the example of tossing a coin and rolling a die from section 1. Now consider P(2 ≤ X1 ≤ 4, 3 ≤ X2 ≤ 5). This is given as

P(2 ≤ X1 ≤ 4, 3 ≤ X2 ≤ 5) = p(2, 3) + p(2, 4) + p(2, 5) + p(3, 3) + p(3, 4) + p(3, 5) + p(4, 3) + p(4, 4) + p(4, 5)
= 1/12 + 0 + 0 + 1/12 + 1/12 + 0 + 0 + 1/12 + 1/12
= 5/12

2.5. Example. Two caplets are selected at random from a bottle containing three aspirin, two sedative, and four cold caplets. If X and Y are, respectively, the numbers of aspirin and sedative caplets included among the two caplets drawn from the bottle, find the probabilities associated with all possible pairs of values of X and Y. The possible pairs are (0, 0), (0, 1), (1, 0), (1, 1), (0, 2), and (2, 0). To find the probability associated with (1, 0), for example, observe that we are concerned with the event of getting one of the three aspirin caplets, none of the two sedative caplets, and hence, one of the four cold caplets. The number of ways in which this can be done is

(3 choose 1)(2 choose 0)(4 choose 1) = 12

and the total number of ways in which two of the nine caplets can be selected is

(9 choose 2) = 36

Since those possibilities are all equally likely by virtue of the assumption that the selection is random, it follows that the probability associated with (1, 0) is 12/36 = 1/3. Similarly, the probability associated with (2, 0) is

(3 choose 2)(2 choose 0)(4 choose 0) / 36 = 3/36 = 1/12

and, continuing this way, we obtain the values shown in the following table:

TABLE 4. Joint Probability of Drawing Aspirin (X) and Sedative Caplets (Y).

                 x
y         0      1      2
0         1/6    1/3    1/12
1         2/9    1/6    0
2         1/36   0      0

We can also represent this joint probability distribution as a formula

p(x, y) = (3 choose x)(2 choose y)(4 choose 2−x−y) / (9 choose 2),   x = 0, 1, 2;  y = 0, 1, 2;  0 ≤ x + y ≤ 2

3. DISTRIBUTION FUNCTIONS FOR DISCRETE MULTIVARIATE RANDOM VARIABLES

3.1. Definition of the Distribution Function. If X1 and X2 are discrete random variables, the function given by

F(x1, x2) = P[X1 ≤ x1, X2 ≤ x2] = Σ_{u1 ≤ x1} Σ_{u2 ≤ x2} p(u1, u2),   −∞ < x1 < ∞, −∞ < x2 < ∞   (2)

where p(u1, u2) is the value of the joint probability function of X1 and X2 at (u1, u2), is called the joint distribution function, or the joint cumulative distribution, of X1 and X2.
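Both the caplet table and the distribution function just defined can be checked by direct enumeration. The sketch below (our addition) builds Table 4 from the formula and evaluates F by brute-force summation as in equation (2).

    from fractions import Fraction
    from math import comb

    def p(x, y):
        # p(x, y) = C(3,x) C(2,y) C(4, 2-x-y) / C(9,2)
        if x < 0 or y < 0 or x + y > 2:
            return Fraction(0)
        return Fraction(comb(3, x) * comb(2, y) * comb(4, 2 - x - y), comb(9, 2))

    def F(a, b):
        # joint distribution function by summation, as in equation (2)
        return sum(p(u1, u2) for u1 in range(a + 1) for u2 in range(b + 1))

    print([[p(x, y) for y in range(3)] for x in range(3)])  # reproduces Table 4
    print(F(2, 1))   # 35/36, the value derived in the example of section 3.2.2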

3.2. Examples.

3.2.1. Example 1. Consider the experiment of tossing a red and green die where X1 is the number on the red die and X2 is the number on the green die. Now find F(2, 3) = P(X1 ≤ 2, X2 ≤ 3). This is given by summing as in the definition (equation 2).

F(2, 3) = P[X1 ≤ 2, X2 ≤ 3] = Σ_{u1 ≤ 2} Σ_{u2 ≤ 3} p(u1, u2)
= p(1, 1) + p(1, 2) + p(1, 3) + p(2, 1) + p(2, 2) + p(2, 3)
= 6/36 = 1/6

3.2.2. Example 2. Consider the caplet example from Section 2. The joint probability distribution is given in Table 4, which is repeated here for convenience.

TABLE 4. Joint Probability of Drawing Aspirin (X) and Sedative Caplets (Y).

                 x
y         0      1      2
0         1/6    1/3    1/12
1         2/9    1/6    0
2         1/36   0      0

The joint probability distribution is

p(x, y) = (3 choose x)(2 choose y)(4 choose 2−x−y) / (9 choose 2),   x = 0, 1, 2;  y = 0, 1, 2;  0 ≤ x + y ≤ 2

For this problem find F(2, 1) = P(X ≤ 2, Y ≤ 1). This is given by

F(2, 1) = P[X ≤ 2, Y ≤ 1] = Σ_{u1 ≤ 2} Σ_{u2 ≤ 1} p(u1, u2)
= p(0, 0) + p(0, 1) + p(1, 0) + p(1, 1) + p(2, 0) + p(2, 1)
= 1/6 + 2/9 + 1/3 + 1/6 + 1/12 + 0
= 35/36

4. PROBABILITY DISTRIBUTIONS FOR CONTINUOUS BIVARIATE RANDOM VARIABLES

4.1. Definition of a Joint Probability Density Function. A bivariate function with values f(x1, x2) defined over the x1x2-plane is called a joint probability density function of the continuous random variables X1 and X2 if, and only if,

P[(X1, X2) ∈ A] = ∫∫_A f(x1, x2) dx1 dx2   for any region A in the x1x2-plane   (3)

4.2. Properties of the Joint Probability (or Density) Function in the Continuous Case.

Theorem 2. A bivariate function can serve as a joint probability density function of a pair of continuous random variables X1 and X2 if its values, f(x1, x2), satisfy the conditions
(i) f(x1, x2) ≥ 0 for −∞ < x1 < ∞, −∞ < x2 < ∞
(ii) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x1, x2) dx1 dx2 = 1

4.3. Example of a Joint Probability Density Function. Given the joint probability density function

f(x1, x2) = x1 x2 / 9   for 0 < x1 < 3, 0 < x2 < 2
f(x1, x2) = 0           elsewhere

of the two random variables X1 and X2, find P[(X1, X2) ∈ A], where A is the region {(x1, x2): 2 < x1 < 4, 1 < x2 < 2}. We find the probability by integrating the double integral over the relevant region, i.e.,

P(2 < X1 < 4, 1 < X2 < 2) = ∫_1^2 ∫_2^4 f(x1, x2) dx1 dx2
= ∫_1^2 ∫_2^3 (x1 x2 / 9) dx1 dx2 + ∫_1^2 ∫_3^4 0 dx1 dx2
= ∫_1^2 ∫_2^3 (x1 x2 / 9) dx1 dx2

Integrate the inner integral first:

∫_2^3 (x1 x2 / 9) dx1 = (x1^2 / 2)(x2 / 9) |_{x1=2}^{x1=3} = (9/2 − 4/2)(x2 / 9) = (5/18) x2

Now integrate the remaining integral:

P(2 < X1 < 4, 1 < X2 < 2) = ∫_1^2 (5/18) x2 dx2 = (5/18)(x2^2 / 2) |_1^2 = (5/18)(4/2 − 1/2) = (5/18)(3/2) = 5/12
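A quick symbolic check of this computation (our addition, assuming SymPy is available); only the part of A where the density is positive contributes.

    from sympy import symbols, integrate

    x1, x2 = symbols('x1 x2')
    f = x1 * x2 / 9
    print(integrate(f, (x1, 2, 3), (x2, 1, 2)))   # 5/12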

This probability is the volume under the surface f(x1, x2) = x1 x2 / 9 and above the rectangular set {(x1, x2): 2 < x1 < 4, 1 < x2 < 2} in the x1x2-plane.

4.4. Definition of a Joint Distribution Function. If X1 and X2 are continuous random variables, the function given by

F(x1, x2) = P[X1 ≤ x1, X2 ≤ x2] = ∫_{−∞}^{x2} ∫_{−∞}^{x1} f(u1, u2) du1 du2,   −∞ < x1 < ∞, −∞ < x2 < ∞   (4)

where f(u1, u2) is the value of the joint probability density function of X1 and X2 at (u1, u2), is called the joint distribution function, or the joint cumulative distribution, of X1 and X2. If the joint distribution function is continuous everywhere and partially differentiable with respect to x1 and x2 for all but a finite set of values, then

f(x1, x2) = ∂^2 F(x1, x2) / (∂x1 ∂x2)   (5)

wherever these partial derivatives exist.

4.5. Properties of the Joint Distribution Function.

Theorem 3. If X1 and X2 are random variables with joint distribution function F(x1, x2), then
(i) F(−∞, −∞) = F(−∞, x2) = F(x1, −∞) = 0
(ii) F(∞, ∞) = 1
(iii) If a < b and c < d, then F(a, c) ≤ F(b, d)
(iv) If a > x1 and b > x2, then F(a, b) − F(a, x2) − F(x1, b) + F(x1, x2) ≥ 0

Part (iv) follows because

F(a, b) − F(a, x2) − F(x1, b) + F(x1, x2) = P[x1 < X1 ≤ a, x2 < X2 ≤ b] ≥ 0

Note also that

F(∞, ∞) = lim_{x1→∞} lim_{x2→∞} F(x1, x2) = 1

implies that the joint density function f(x1, x2) must be such that the integral of f(x1, x2) over all values of (x1, x2) is 1.

4.6. Examples of a Joint Distribution Function and Density Functions.

4.6.1. Deriving a Distribution Function from a Joint Density Function. Consider a joint density function for X1 and X2 given by

f(x1, x2) = x1 + x2   for 0 < x1 < 1, 0 < x2 < 1
f(x1, x2) = 0         elsewhere

This has a positive value in the square bounded by the horizontal and vertical axes and the vertical and horizontal lines at one. It is zero elsewhere. We will therefore need to find the value of the distribution function for five different regions: the second, third and fourth quadrants; the square defined by the vertical and horizontal lines at one; the area between the vertical axis and a vertical line at one and above a horizontal line at one in the first quadrant; the area between the horizontal axis and a horizontal line at one and to the right of a vertical line at one in the first quadrant; and the area in the first quadrant not previously mentioned. This can be diagrammed as follows.

[Diagram: the x1x2-plane divided into quadrants II, III and IV (where F = 0), the unit square in quadrant I, and the regions above and to the right of the unit square.]

We find the distribution function by integrating the joint density function. If either x1 < 0 or x2 < 0, it follows that

F(x1, x2) = 0

For 0 < x1 < 1 and 0 < x2 < 1, we get

F(x1, x2) = ∫_0^{x2} ∫_0^{x1} (s + t) ds dt = (1/2) x1 x2 (x1 + x2)

For x1 > 1 and 0 < x2 < 1, we get

F(x1, x2) = ∫_0^{x2} ∫_0^1 (s + t) ds dt = (1/2) x2 (x2 + 1)

For 0 < x1 < 1 and x2 > 1, we get

F(x1, x2) = ∫_0^1 ∫_0^{x1} (s + t) ds dt = (1/2) x1 (x1 + 1)

and for x1 > 1 and x2 > 1 we get

F(x1, x2) = ∫_0^1 ∫_0^1 (s + t) ds dt = 1

Because the joint distribution function is everywhere continuous, the boundaries between any two of these regions can be included in either one, and we can write

F(x1, x2) = 0                          for x1 ≤ 0 or x2 ≤ 0
            (1/2) x1 x2 (x1 + x2)      for 0 < x1 < 1, 0 < x2 < 1
            (1/2) x2 (x2 + 1)          for x1 ≥ 1, 0 < x2 < 1
            (1/2) x1 (x1 + 1)          for 0 < x1 < 1, x2 ≥ 1
            1                          for x1 ≥ 1, x2 ≥ 1

4.7. Deriving a Joint Density Function from a Distribution Function. Consider two random variables X1 and X2 whose joint distribution function is given by

F(x1, x2) = (1 − e^(−x1))(1 − e^(−x2))   for x1 > 0 and x2 > 0
F(x1, x2) = 0                            elsewhere

Partial differentiation yields

∂^2 F(x1, x2) / (∂x1 ∂x2) = e^(−(x1 + x2))

for x1 > 0 and x2 > 0, and 0 elsewhere, so we find that the joint probability density of X1 and X2 is given by

f(x1, x2) = e^(−(x1 + x2))   for x1 > 0 and x2 > 0
f(x1, x2) = 0                elsewhere
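The mixed partial derivative is easy to verify symbolically; the following sketch is our addition.

    from sympy import symbols, exp, diff, expand

    x1, x2 = symbols('x1 x2', positive=True)
    F = (1 - exp(-x1)) * (1 - exp(-x2))
    print(expand(diff(F, x1, x2)))   # exp(-x1)*exp(-x2), i.e. e^(-(x1+x2))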

5. MULTIVARIATE DISTRIBUTIONS FOR CONTINUOUS RANDOM VARIABLES

5.1. Joint Density of Several Random Variables. The k-dimensional vector (X1, X2, ..., Xk) is said to be a k-dimensional continuous random variable if there exists a function f(·, ·, ..., ·) such that

F(x1, x2, ..., xk) = ∫_{−∞}^{xk} ... ∫_{−∞}^{x1} f(u1, u2, ..., uk) du1 ... duk   (6)

for all (x1, x2, ..., xk), where

F(x1, x2, x3, ...) = P[X1 ≤ x1, X2 ≤ x2, X3 ≤ x3, ...]

The function f(·) is defined to be a joint probability density function. It has the following properties:

f(x1, x2, ..., xk) ≥ 0
∫ ... ∫ f(x1, x2, ..., xk) dx1 ... dxk = 1   (7)

In order to make clear the variables over which f is defined, it is sometimes written

f(x1, x2, ..., xk) = f_{X1, X2, ..., Xk}(x1, x2, ..., xk)   (8)

6. MARGINAL DISTRIBUTIONS

6.1. Example Problem. Consider the example of tossing a coin and rolling a die from section 1. The probability of any particular pair (x1, x2) is given in Table 5.

TABLE 5. Joint and Marginal Probabilities of X1 and X2.

X1 \ X2   1      2      3      4      5      6      7      Total
1         1/12   1/12   0      0      0      0      0      1/6
2         0      1/12   1/12   0      0      0      0      1/6
3         0      0      1/12   1/12   0      0      0      1/6
4         0      0      0      1/12   1/12   0      0      1/6
5         0      0      0      0      1/12   1/12   0      1/6
6         0      0      0      0      0      1/12   1/12   1/6
Total     1/12   1/6    1/6    1/6    1/6    1/6    1/12   1

Notice that we have summed the columns and the rows and placed these sums at the bottom and right hand side of the table. The sum in the first column is the probability that X2 = 1. The sum in the sixth row is the probability that X1 = 6. Specifically, the column totals are the probabilities that X2 will take on the values 1, 2, 3, ..., 7. They are the values

g(x2) = Σ_{x1} p(x1, x2)   for x2 = 1, 2, 3, ..., 7

In the same way, the row totals are the probabilities that X1 will take on the values in its space. Because these numbers are computed in the margin of the table, they are called marginal probabilities.
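The marginal totals in Table 5 can be recomputed directly from the joint table; the sketch below is our addition.

    from fractions import Fraction

    q = Fraction(1, 12)
    joint = {(x1, x2): (q if x2 in (x1, x1 + 1) else Fraction(0))
             for x1 in range(1, 7) for x2 in range(1, 8)}

    g = {x2: sum(joint[(x1, x2)] for x1 in range(1, 7)) for x2 in range(1, 8)}
    h = {x1: sum(joint[(x1, x2)] for x2 in range(1, 8)) for x1 in range(1, 7)}
    print(g)   # column totals: {1: 1/12, 2: 1/6, ..., 6: 1/6, 7: 1/12}
    print(h)   # row totals: {1: 1/6, ..., 6: 1/6}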

6.2. Marginal Distributions for Discrete Random Variables. If X1 and X2 are discrete random variables and p(x1, x2) is the value of their joint probability distribution at (x1, x2), the function given by

g(x1) = Σ_{x2} p(x1, x2)   (9)

for each x1 within the range of X1 is called the marginal distribution of X1. Correspondingly, the function given by

h(x2) = Σ_{x1} p(x1, x2)   (10)

for each x2 within the range of X2 is called the marginal distribution of X2.

6.3. Marginal Distributions for Continuous Random Variables. If X and Y are jointly continuous random variables, then the functions f_X(·) and f_Y(·) are called the marginal probability density functions. The subscripts remind us that f_X is defined for the random variable X. Intuitively, the marginal density is the density that results when we ignore any information about the random outcome Y. The marginal densities are obtained by integration of the joint density:

f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy
f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx   (11)

In a similar fashion, for a k-dimensional random variable X,

f_{X1}(x1) = ∫ ... ∫ f(x1, x2, ...) dx2 dx3 ... dxk
f_{X2}(x2) = ∫ ... ∫ f(x1, x2, ...) dx1 dx3 ... dxk   (12)

6.4. Example. Let the joint density of two random variables x1 and x2 be given by

f(x1, x2) = 2 x1 e^(−x2)   for 0 ≤ x1 ≤ 1, x2 ≥ 0
f(x1, x2) = 0              otherwise

What are the marginal densities of x1 and x2?

First find the marginal density for x1.

f_1(x1) = ∫_0^∞ 2 x1 e^(−x2) dx2 = −2 x1 e^(−x2) |_0^∞ = 0 + 2 x1 = 2 x1,   0 ≤ x1 ≤ 1

Now find the marginal density for x2.

f_2(x2) = ∫_0^1 2 x1 e^(−x2) dx1 = x1^2 e^(−x2) |_0^1 = e^(−x2),   x2 ≥ 0
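Both marginals, and the fact that the joint density integrates to one, can be checked symbolically (our addition, assuming SymPy):

    from sympy import symbols, exp, integrate, oo

    x1, x2 = symbols('x1 x2', nonnegative=True)
    f = 2*x1*exp(-x2)
    print(integrate(f, (x2, 0, oo)))               # 2*x1
    print(integrate(f, (x1, 0, 1)))                # exp(-x2)
    print(integrate(f, (x1, 0, 1), (x2, 0, oo)))   # 1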

6.5. Example. Let the joint density of two random variables x and y be given by

f(x, y) = (2/5)(x + 4y)   for 0 < x < 1, 0 < y < 1
f(x, y) = 0               otherwise

What are the marginal densities of x and y? First find the marginal density for x.

f_X(x) = ∫_0^1 (2/5)(x + 4y) dy = (2/5)(xy + 2y^2) |_0^1 = (2/5)(x + 2)

Now find the marginal density for y.

f_Y(y) = ∫_0^1 (2/5)(x + 4y) dx = (2/5)(x^2/2 + 4xy) |_0^1 = (2/5)(1/2 + 4y) = (1/5)(1 + 8y)

7. CONDITIONAL DISTRIBUTIONS

7.1. Conditional Probability Functions for Discrete Distributions. We have previously shown that the conditional probability of A given B can be obtained by dividing the probability of the intersection by the probability of B, specifically,

P(A | B) = P(A ∩ B) / P(B)   (13)

Now consider two random variables X and Y. We can write the probability that X = x given Y = y as

P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y) = p(x, y) / h(y)   (14)

provided P(Y = y) ≠ 0, where p(x, y) is the value of the joint probability distribution of X and Y at (x, y) and h(y) is the value of the marginal distribution of Y at y. We can then define a conditional distribution of X given Y = y as follows. If p(x, y) is the value of the joint probability distribution of the discrete random variables X and Y at (x, y) and h(y) is the value of the marginal distribution of Y at y, then the function given by

p(x | y) = p(x, y) / h(y),   h(y) ≠ 0   (15)

for each x within the range of X, is called the conditional distribution of X given Y = y.

7.2. Example for a Discrete Distribution. Consider the example of tossing a coin and rolling a die from section 1. The probability of any particular pair (x1, x2) is given in the following table, where x1 is the value on the die and x2 is the sum of the number on the die and an indicator that is one if the coin is a head and zero otherwise. The data are in Table 5, repeated following. Consider the probability that x1 = 3 given that x2 = 4. We compute this as follows.

TABLE 5. Joint and Marginal Probabilities of X1 and X2.

X1 \ X2   1      2      3      4      5      6      7      Total
1         1/12   1/12   0      0      0      0      0      1/6
2         0      1/12   1/12   0      0      0      0      1/6
3         0      0      1/12   1/12   0      0      0      1/6
4         0      0      0      1/12   1/12   0      0      1/6
5         0      0      0      0      1/12   1/12   0      1/6
6         0      0      0      0      0      1/12   1/12   1/6
Total     1/12   1/6    1/6    1/6    1/6    1/6    1/12   1

For the example, P(X1 = 3, X2 = 3) = 1/12, which is the probability of sample point E9.

p(x1 | x2) = p(x1, x2) / h(x2)

p(3 | 4) = p(3, 4) / h(4) = (1/12) / (1/6) = 1/2

We can then make a table for the conditional probability function for x1.

TABLE 6. Probability Function for X1 given X2.

X1 \ X2   1      2      3      4      5      6      7
1         1      1/2    0      0      0      0      0
2         0      1/2    1/2    0      0      0      0
3         0      0      1/2    1/2    0      0      0
4         0      0      0      1/2    1/2    0      0
5         0      0      0      0      1/2    1/2    0
6         0      0      0      0      0      1/2    1

Each column sums to one. We can do the same for X2 given X1.

TABLE 7. Probability Function for X2 given X1.

X1 \ X2   1      2      3      4      5      6      7
1         1/2    1/2    0      0      0      0      0
2         0      1/2    1/2    0      0      0      0
3         0      0      1/2    1/2    0      0      0
4         0      0      0      1/2    1/2    0      0
5         0      0      0      0      1/2    1/2    0
6         0      0      0      0      0      1/2    1/2

Each row sums to one.

7.3. Conditional Distribution Functions for Continuous Distributions.

7.3.1. Discussion. In the continuous case, the idea of a conditional distribution takes on a slightly different meaning than in the discrete case. If X1 and X2 are both continuous, P(X1 ≤ x1 | X2 = x2) is not defined as a ratio of probabilities because the probability of any one point is identically zero. It makes sense, however, to define a conditional distribution function, i.e.,

P(X1 ≤ x1 | X2 = x2)

because the value of X2 is known when we compute the probability that X1 is less than some specific value.

7.3.2. Definition of a Conditional Distribution Function. If X1 and X2 are jointly continuous random variables with joint density function f(x1, x2), then the conditional distribution function of X1 given X2 = x2 is

F(x1 | x2) = P(X1 ≤ x1 | X2 = x2)   (16)

We can obtain the unconditional distribution function by integrating the conditional one over x2. This is done as follows:

F(x1) = ∫_{−∞}^{∞} F(x1 | x2) f_{X2}(x2) dx2   (17)

We can also find the probability that X1 is less than x1 in the usual fashion as

F(x1) = ∫_{−∞}^{x1} f_{X1}(t1) dt1   (18)

But the marginal density inside the integral is obtained by integrating the joint density over the range of x2. Specifically,

f_{X1}(t1) = ∫_{−∞}^{∞} f_{X1 X2}(t1, x2) dx2   (19)

This implies then that

F(x1) = ∫_{−∞}^{∞} ∫_{−∞}^{x1} f_{X1 X2}(t1, x2) dt1 dx2   (20)

Now compare the integrand in equation 20 with that in equation 17 to conclude that

F(x1 | x2) f_{X2}(x2) = ∫_{−∞}^{x1} f_{X1 X2}(t1, x2) dt1

F(x1 | x2) = [∫_{−∞}^{x1} f_{X1 X2}(t1, x2) dt1] / f_{X2}(x2)   (21)

We call the integrand in the second line of (21) the conditional density function of X1 given X2 = x2. We denote it by f(x1 | x2) or f_{X1|X2}(x1 | x2). Specifically,

Let X1 and X2 be jointly continuous random variables with joint probability density f_{X1 X2}(x1, x2) and marginal densities f_{X1}(x1) and f_{X2}(x2), respectively. For any x2 such that f_{X2}(x2) > 0, the conditional probability density function of X1 given X2 = x2 is defined to be

f_{X1|X2}(x1 | x2) = f_{X1 X2}(x1, x2) / f_{X2}(x2) = f(x1, x2) / f(x2)   (22)

And similarly

f_{X2|X1}(x2 | x1) = f_{X1 X2}(x1, x2) / f_{X1}(x1) = f(x1, x2) / f(x1)   (23)

7.4. Example. Let the joint density of two random variables x and y be given by

f(x, y) = (2/5)(x + 4y)   for 0 < x < 1, 0 < y < 1
f(x, y) = 0               otherwise

The marginal density of x is f_X(x) = (2/5)(x + 2), while the marginal density of y is f_Y(y) = (1/5)(1 + 8y). Now find the conditional distribution of x given y. This is given by

f_{X|Y}(x | y) = f(x, y) / f(y) = (2/5)(x + 4y) / [(1/5)(1 + 8y)] = 2(x + 4y) / (1 + 8y)

for 0 < x < 1 and 0 < y < 1. Now find the probability that X ≤ 1/2 given that y = 1/2. First determine the density function when y = 1/2 as follows:

f(x, y) / f(y) = 2(x + 4(1/2)) / (8(1/2) + 1) = 2(x + 2) / 5

Then

P(X ≤ 1/2 | Y = 1/2) = ∫_0^{1/2} (2/5)(x + 2) dx = (2/5)(x^2/2 + 2x) |_0^{1/2} = (2/5)(1/8 + 1) = 9/20
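As a check (our addition, assuming SymPy), the conditional density at y = 1/2 integrates to 9/20 over [0, 1/2]:

    from sympy import symbols, integrate, Rational

    x, y = symbols('x y')
    f = Rational(2, 5) * (x + 4*y)
    f_Y = integrate(f, (x, 0, 1))                    # (1 + 8y)/5
    cond = (f / f_Y).subs(y, Rational(1, 2))         # 2(x + 2)/5
    print(integrate(cond, (x, 0, Rational(1, 2))))   # 9/20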

8. INDEPENDENT RANDOM VARIABLES

8.1. Discussion. We have previously shown that two events A and B are independent if the probability of their intersection is the product of their individual probabilities, i.e.,

P(A ∩ B) = P(A) P(B)   (24)

In terms of random variables X and Y, consistency with this definition would imply that

P(a ≤ X ≤ b, c ≤ Y ≤ d) = P(a ≤ X ≤ b) P(c ≤ Y ≤ d)   (25)

That is, if X and Y are independent, the joint probability can be written as the product of the marginal probabilities. We then have the following definition. Let X have distribution function F_X(x), Y have distribution function F_Y(y), and X and Y have joint distribution function F(x, y). Then X and Y are said to be independent if, and only if,

F(x, y) = F_X(x) F_Y(y)   (26)

for every pair of real numbers (x, y). If X and Y are not independent, they are said to be dependent.

8.2. Independence Defined in Terms of Density Functions.

8.2.1. Discrete Random Variables. If X and Y are discrete random variables with joint probability density function p(x, y) and marginal density functions p_X(x) and p_Y(y), respectively, then X and Y are independent if, and only if,

p_{X,Y}(x, y) = p_X(x) p_Y(y) = p(x) p(y)   (27)

for all pairs of real numbers (x, y).

8.2.2. Continuous Bivariate Random Variables. If X and Y are continuous random variables with joint probability density function f(x, y) and marginal density functions f_X(x) and f_Y(y), respectively, then X and Y are independent if, and only if,

f_{X,Y}(x, y) = f_X(x) f_Y(y) = f(x) f(y)   (28)

for all pairs of real numbers (x, y).

8.3. Continuous Multivariate Random Variables. In a more general context, the variables X1, X2, ..., Xk are independent if, and only if,

f_{X1, X2, ..., Xk}(x1, x2, ..., xk) = Π_{i=1}^{k} f_{Xi}(xi)
= f_{X1}(x1) f_{X2}(x2) ... f_{Xk}(xk)
= f(x1) f(x2) ... f(xk)   (29)

In other words, random variables are independent if the joint density is equal to the product of the marginal densities.

8.4. Examples.

8.4.1. Example 1 — Rolling a Die and Tossing a Coin. Consider the previous example where we rolled a die and tossed a coin. X1 is the number on the die; X2 is the number on the die plus the value of the indicator on the coin (H = 1). Table 5 is repeated here for convenience. For independence we need p(x1, x2) = p(x1) p(x2) for all values of x1 and x2.

TABLE 5. Joint and Marginal Probabilities of X1 and X2.

X1 \ X2   1      2      3      4      5      6      7      Total
1         1/12   1/12   0      0      0      0      0      1/6
2         0      1/12   1/12   0      0      0      0      1/6
3         0      0      1/12   1/12   0      0      0      1/6
4         0      0      0      1/12   1/12   0      0      1/6
5         0      0      0      0      1/12   1/12   0      1/6
6         0      0      0      0      0      1/12   1/12   1/6
Total     1/12   1/6    1/6    1/6    1/6    1/6    1/12   1

To show that the variables are not independent, we only need to show that p(x1, x2) ≠ p(x1) p(x2) for some pair. Consider p(1, 1) = 1/12. If we multiply the marginal probabilities we obtain

p(1) p(1) = (1/6)(1/12) = 1/72 ≠ 1/12

8.4.2. Example 2 — A Continuous Multiplicative Joint Density. Let the joint density of two random variables x1 and x2 be given by

f(x1, x2) = 2 x1 e^(−x2)   for 0 ≤ x1 ≤ 1, x2 ≥ 0
f(x1, x2) = 0              otherwise

The marginal density for x1 is given by

f_1(x1) = ∫_0^∞ 2 x1 e^(−x2) dx2 = −2 x1 e^(−x2) |_0^∞ = 2 x1

The marginal density for x2 is given by

f_2(x2) = ∫_0^1 2 x1 e^(−x2) dx1 = x1^2 e^(−x2) |_0^1 = e^(−x2)

It is clear the joint density is the product of the marginal densities.

8.4.3. Example 3. Let the joint density of two random variables x and y be given by

f(x, y) = x log[y] / (2 log[4] − log[2] − 1)   for 0 ≤ x ≤ 1, 2 ≤ y ≤ 4
f(x, y) = 0                                    otherwise

First find the marginal density for x.

f_X(x) = ∫_2^4 x log[y] / (2 log[4] − log[2] − 1) dy
= x (y log[y] − y) |_2^4 / (2 log[4] − log[2] − 1)
= [4x(log[4] − 1) − 2x(log[2] − 1)] / (2 log[4] − log[2] − 1)
= x (4 log[4] − 2 log[2] − 2) / (2 log[4] − log[2] − 1)
= 2x

Now find the marginal density for y.

f_Y(y) = ∫_0^1 x log[y] / (2 log[4] − log[2] − 1) dx
= (x^2/2) log[y] |_0^1 / (2 log[4] − log[2] − 1)
= log[y] / (2 (2 log[4] − log[2] − 1))
= log[y] / (4 log[4] − 2 log[2] − 2)

It is clear the joint density is the product of the marginal densities:

f_X(x) f_Y(y) = 2x log[y] / (4 log[4] − 2 log[2] − 2) = x log[y] / (2 log[4] − log[2] − 1) = f(x, y)

8.4.4. Example 4. Let the joint density of two random variables X and Y be given by

f(x, y) = (6/5)(x + y^2)   for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
f(x, y) = 0                otherwise

First find the marginal density for x.

f_X(x) = ∫_0^1 (6/5)(x + y^2) dy = (6/5)(xy + y^3/3) |_0^1 = (6/5)(x + 1/3)

Now find the marginal density for y.

f_Y(y) = ∫_0^1 (6/5)(x + y^2) dx = (6/5)(x^2/2 + xy^2) |_0^1 = (6/5)(1/2 + y^2)

The product of the marginal densities is not the joint density:

f_X(x) f_Y(y) = (36/25)(x + 1/3)(1/2 + y^2) ≠ (6/5)(x + y^2)

so X and Y are not independent.

8.4.5. Example 5. Let the joint density of two random variables X and Y be given by

f(x, y) = 2 e^(−(x+y))   for 0 ≤ x ≤ y, y ≥ 0
f(x, y) = 0              otherwise

Find the marginal density of X.

f_X(x) = ∫_x^∞ 2 e^(−(x+y)) dy = −2 e^(−(x+y)) |_{y=x}^{∞} = 2 e^(−(x+x)) = 2 e^(−2x),   x ≥ 0

The marginal density of Y is obtained as follows:

f_Y(y) = ∫_0^y 2 e^(−(x+y)) dx = −2 e^(−(x+y)) |_{x=0}^{x=y} = −2 e^(−2y) + 2 e^(−y) = 2 e^(−y) (1 − e^(−y)),   y ≥ 0

We can show that this is a proper density function by integrating it over the range of x and y:

∫_0^∞ ∫_x^∞ 2 e^(−(x+y)) dy dx = ∫_0^∞ [−2 e^(−(x+y))]_{y=x}^{∞} dx = ∫_0^∞ 2 e^(−2x) dx = [−e^(−2x)]_0^∞ = 0 + 1 = 1

Or in the other order as follows:

∫_0^∞ ∫_0^y 2 e^(−(x+y)) dx dy = ∫_0^∞ [−2 e^(−(x+y))]_{x=0}^{x=y} dy = ∫_0^∞ [2 e^(−y) − 2 e^(−2y)] dy = [−2 e^(−y) + e^(−2y)]_0^∞ = [0 + 0] − [−2 + 1] = 1

8.5. Separation of a Joint Density Function.

Theorem 4. Let X1 and X2 have a joint density function f(x1, x2) that is positive if, and only if, a ≤ x1 ≤ b and c ≤ x2 ≤ d, for constants a, b, c and d; and f(x1, x2) = 0 otherwise. Then X1 and X2 are independent random variables if, and only if,

f(x1, x2) = g(x1) h(x2)

where g(x1) is a non-negative function of x1 alone and h(x2) is a non-negative function of x2 alone. Thus if we can separate the joint density into two multiplicative terms, one depending on x1 alone and one on x2 alone, we know the random variables are independent without showing that these functions are actually the marginal densities.

8.5.1. Example. Let the joint density of two random variables x and y be given by

f(x, y) = 8x   for 0 ≤ x ≤ 1/2, 0 ≤ y ≤ 1
f(x, y) = 0    otherwise

We can write f(x, y) as g(x)h(y), where

g(x) = x   for 0 ≤ x ≤ 1/2, and 0 otherwise
h(y) = 8   for 0 ≤ y ≤ 1, and 0 otherwise

These functions are not density functions because they do not integrate to one:

∫_0^{1/2} x dx = x^2/2 |_0^{1/2} = 1/8
∫_0^1 8 dy = 8y |_0^1 = 8

The marginal densities as defined below do integrate to one:

f_X(x) = 8x   for 0 ≤ x ≤ 1/2, and 0 otherwise
f_Y(y) = 1    for 0 ≤ y ≤ 1, and 0 otherwise
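A short symbolic check of this example (our addition, assuming SymPy): the marginals integrate to one and their product recovers the joint density, confirming independence.

    from sympy import symbols, integrate, Rational

    x, y = symbols('x y')
    f = 8*x   # joint density on 0 <= x <= 1/2, 0 <= y <= 1
    f_X = integrate(f, (y, 0, 1))                 # 8*x
    f_Y = integrate(f, (x, 0, Rational(1, 2)))    # 1
    print(integrate(f_X, (x, 0, Rational(1, 2)))) # 1
    print(integrate(f_Y, (y, 0, 1)))              # 1
    print(f_X * f_Y == f)                         # True: the density separates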

9. EXPECTED VALUE OF A FUNCTION OF RANDOM VARIABLES

9.1. Definition.

9.1.1. Discrete Case. Let X = (X1, X2, ..., Xk) be a k-dimensional discrete random variable with probability function p(x1, x2, ..., xk). Let g(·, ·, ..., ·) be a function of the k random variables (X1, X2, ..., Xk). Then the expected value of g(X1, X2, ..., Xk) is

E[g(X1, X2, ..., Xk)] = Σ_{xk} ... Σ_{x2} Σ_{x1} g(x1, ..., xk) p(x1, x2, ..., xk)   (30)

9.1.2. Continuous Case. Let X = (X1, X2, ..., Xk) be a k-dimensional continuous random variable with density f(x1, x2, ..., xk). Let g(·, ·, ..., ·) be a function of the k random variables (X1, X2, ..., Xk). Then the expected value of g(X1, X2, ..., Xk) is

E[g(X1, X2, ..., Xk)] = ∫_{xk} ... ∫_{x2} ∫_{x1} g(x1, ..., xk) f_{X1, ..., Xk}(x1, ..., xk) dx1 ... dxk
= ∫_{−∞}^{∞} ... ∫_{−∞}^{∞} g(x1, ..., xk) f_{X1, ..., Xk}(x1, ..., xk) dx1 ... dxk   (31)

if the integral is defined. Similarly, if g(X) is a bounded real function on the interval [a, b], then

E(g(X)) = ∫_a^b g(x) dF(x) = ∫_a^b g dF   (32)

where the integral is in the sense of Lebesgue and can be loosely interpreted as f(x) dx. Consider as an example g(x1, ..., xk) = xi. Then

E[g(X1, ..., Xk)] = E[Xi] = ∫_{−∞}^{∞} ... ∫_{−∞}^{∞} xi f(x1, ..., xk) dx1 ... dxk = ∫_{−∞}^{∞} xi f_{Xi}(xi) dxi   (33)

because integration over all the other variables gives the marginal density of xi.

9.2. Example. Let the joint density of two random variables x1 and x2 be given by

f(x1, x2) = 2 x1 e^(−x2)   for 0 ≤ x1 ≤ 1, x2 ≥ 0
f(x1, x2) = 0              otherwise

The marginal density for x1 is given by

f_1(x1) = ∫_0^∞ 2 x1 e^(−x2) dx2 = −2 x1 e^(−x2) |_0^∞ = 2 x1

The marginal density for x2 is given by

f_2(x2) = ∫_0^1 2 x1 e^(−x2) dx1 = x1^2 e^(−x2) |_0^1 = e^(−x2)

We can find the expected value of X2 by integrating the joint density or the marginal density. First with the joint density:

E[X2] = ∫_0^1 ∫_0^∞ 2 x1 x2 e^(−x2) dx2 dx1

Consider the inside integral first. We will need a u dv substitution to evaluate the integral. Let

u = 2 x1 x2   and   dv = e^(−x2) dx2

then

du = 2 x1 dx2   and   v = −e^(−x2)

Then

∫_0^∞ 2 x1 x2 e^(−x2) dx2 = −2 x1 x2 e^(−x2) |_0^∞ + ∫_0^∞ 2 x1 e^(−x2) dx2 = 0 + 2 x1

Now integrate with respect to x1:

E[X2] = ∫_0^1 2 x1 dx1 = x1^2 |_0^1 = 1

Now find it using the marginal density of x2. Integrate as follows:

E[X2] = ∫_0^∞ x2 e^(−x2) dx2

We will need to use a u dv substitution to evaluate the integral. Let

u = x2   and   dv = e^(−x2) dx2

then

du = dx2   and   v = −e^(−x2)

Then

∫_0^∞ x2 e^(−x2) dx2 = −x2 e^(−x2) |_0^∞ + ∫_0^∞ e^(−x2) dx2 = 0 + [−e^(−x2)]_0^∞ = 0 − (−1) = 1

so again E[X2] = 1. We can likewise show that the expected value of x1 is 2/3.
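A Monte Carlo sketch (our addition) agrees with these values and previews E[X1 X2], computed next. Because the density separates, X1 can be drawn by inverse CDF from f_1(x1) = 2x1 and X2 independently from the unit exponential.

    import math, random

    random.seed(0)
    n = 200_000
    x1s = [math.sqrt(random.random()) for _ in range(n)]  # CDF of X1 is x1^2
    x2s = [random.expovariate(1.0) for _ in range(n)]     # f_2(x2) = e^(-x2)
    print(sum(x1s) / n)                                   # ~ 2/3
    print(sum(x2s) / n)                                   # ~ 1
    print(sum(a*b for a, b in zip(x1s, x2s)) / n)         # ~ 2/3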

Now consider E[X1 X2]. We can obtain it as

E[X1 X2] = ∫_0^1 ∫_0^∞ 2 x1^2 x2 e^(−x2) dx2 dx1

Consider the inside integral first. We will need a u dv substitution to evaluate the integral. Let

u = 2 x1^2 x2   and   dv = e^(−x2) dx2

then

du = 2 x1^2 dx2   and   v = −e^(−x2)

Then

∫_0^∞ 2 x1^2 x2 e^(−x2) dx2 = −2 x1^2 x2 e^(−x2) |_0^∞ + ∫_0^∞ 2 x1^2 e^(−x2) dx2 = 0 + 2 x1^2

Now integrate with respect to x1:

E[X1 X2] = ∫_0^1 2 x1^2 dx1 = (2/3) x1^3 |_0^1 = 2/3

9.3. Properties of Expectation.

9.3.1. Constants.

Theorem 5. Let c be a constant. Then

E[c] = ∫_x ∫_y c f(x, y) dy dx = c ∫_x ∫_y f(x, y) dy dx = c   (34)

9.3.2. Multiplication by a Constant.

Theorem 6. Let g(X1, X2) be a function of the random variables X1 and X2 and let a be a constant. Then

E[a g(X1, X2)] = ∫_{x1} ∫_{x2} a g(x1, x2) f(x1, x2) dx2 dx1 = a ∫_{x1} ∫_{x2} g(x1, x2) f(x1, x2) dx2 dx1 = a E[g(X1, X2)]   (35)

9.3.3. Linear Combinations.

Theorem 7. Let X and Y denote two random variables defined on the same probability space and let f(x, y) be their joint density. Then

E[aX + bY] = ∫_y ∫_x (ax + by) f(x, y) dx dy
= a ∫_y ∫_x x f(x, y) dx dy + b ∫_y ∫_x y f(x, y) dx dy
= a E[X] + b E[Y]   (36)

In matrix notation we can write this as

E[a1 x1 + a2 x2] = E( [a1 a2] [x1; x2] ) = [a1 a2] [E(x1); E(x2)] = a1 µ1 + a2 µ2   (37)

9.3.4. Sums of Functions.

Theorem 8. Let X and Y denote two random variables defined on the same probability space and let g1(X, Y), g2(X, Y), g3(X, Y), ..., gk(X, Y) be functions of (X, Y). Then

E[g1(X, Y) + g2(X, Y) + ... + gk(X, Y)] = E[g1(X, Y)] + E[g2(X, Y)] + ... + E[gk(X, Y)]   (38)

9.3.5. Independence.

Theorem 9. Let X1 and X2 be independent random variables and g(X1) and h(X2) be functions of X1 and X2, respectively. Then

E[g(X1) h(X2)] = E[g(X1)] E[h(X2)]   (39)

provided that the expectations exist.

Proof: Let f(x1, x2) be the joint density of X1 and X2. The product g(X1)h(X2) is a function of X1 and X2. Therefore we have

E[g(X1) h(X2)] = ∫_{x2} ∫_{x1} g(x1) h(x2) f(x1, x2) dx1 dx2
= ∫_{x2} ∫_{x1} g(x1) h(x2) f_{X1}(x1) f_{X2}(x2) dx1 dx2
= ∫_{x2} [∫_{x1} g(x1) f_{X1}(x1) dx1] h(x2) f_{X2}(x2) dx2
= ∫_{x2} (E[g(X1)]) h(x2) f_{X2}(x2) dx2
= E[g(X1)] ∫_{x2} h(x2) f_{X2}(x2) dx2
= E[g(X1)] E[h(X2)]   (40)

10. VARIANCE, COVARIANCE AND CORRELATION

10.1. Variance of a Single Random Variable. The variance of a random variable X with mean µ is given by

var(X) = σ^2 = E[(X − E(X))^2] = E[(X − µ)^2]
= ∫_{−∞}^{∞} (x − µ)^2 f(x) dx
= ∫_{−∞}^{∞} x^2 f(x) dx − [∫_{−∞}^{∞} x f(x) dx]^2
= E(x^2) − E^2(x)   (41)

The variance is a measure of the dispersion of the random variable about the mean.

10.2. Covariance.

10.2.1. Definition. Let X and Y be any two random variables defined in the same probability space. The covariance of X and Y, denoted cov[X, Y] or σ_{X,Y}, is defined as

cov[X, Y] = E[(X − µ_X)(Y − µ_Y)]
= E[XY] − E[µ_X Y] − E[X µ_Y] + E[µ_Y µ_X]
= E[XY] − E[X] E[Y]
= ∫∫ x y f(x, y) dx dy − [∫∫ x f(x, y) dx dy] [∫∫ y f(x, y) dx dy]
= ∫∫ x y f(x, y) dx dy − [∫ x f_X(x) dx] [∫ y f_Y(y) dy]   (42)

The covariance measures the interaction between two random variables, but its numerical value is not independent of the units of measurement of X and Y. Positive values of the covariance imply that X increases when Y increases; negative values indicate that X decreases as Y increases.

10.2.2. Examples. (i) Let the joint density of two random variables x1 and x2 be given by

f(x1, x2) = 2 x1 e^(−x2)   for 0 ≤ x1 ≤ 1, x2 ≥ 0
f(x1, x2) = 0              otherwise

We showed in Example 9.2 that

E[X1] = 2/3,   E[X2] = 1,   E[X1 X2] = 2/3

The covariance is then given by

cov[X1, X2] = E[X1 X2] − E[X1] E[X2] = 2/3 − (2/3)(1) = 0

(ii) Let the joint density of two random variables x1 and x2 be given by

f(x1, x2) = x1 x2   for 0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 2
f(x1, x2) = 0       otherwise

First compute the expected value of X1 X2 as follows.

E[X1 X2] = ∫_0^2 ∫_0^1 x1^2 x2^2 dx1 dx2
= ∫_0^2 (x1^3/3) x2^2 |_{x1=0}^{x1=1} dx2
= ∫_0^2 (1/3) x2^2 dx2
= (1/9) x2^3 |_0^2 = 8/9

Then compute the expected value of X1 as follows:

E[X1] = ∫_0^2 ∫_0^1 x1^2 x2 dx1 dx2
= ∫_0^2 (x1^3/3) x2 |_{x1=0}^{x1=1} dx2
= ∫_0^2 (1/3) x2 dx2
= (1/6) x2^2 |_0^2 = 2/3

Then compute the expected value of X2 as follows:

E[X2] = ∫_0^2 ∫_0^1 x1 x2^2 dx1 dx2
= ∫_0^2 (x1^2/2) x2^2 |_{x1=0}^{x1=1} dx2
= ∫_0^2 (1/2) x2^2 dx2
= (1/6) x2^3 |_0^2 = 4/3

The covariance is then given by

cov[X1, X2] = E[X1 X2] − E[X1] E[X2] = 8/9 − (2/3)(4/3) = 8/9 − 8/9 = 0

(iii) Let the joint density of two random variables x1 and x2 be given by

f(x1, x2) = 8 x1 x2   for 0 ≤ x1 ≤ x2 ≤ 1
f(x1, x2) = 0         otherwise

First compute the expected value of X1 X2 as follows:

E[X1 X2] = ∫_0^1 ∫_0^{x2} 8 x1^2 x2^2 dx1 dx2
= ∫_0^1 (8/3) x1^3 x2^2 |_{x1=0}^{x1=x2} dx2
= ∫_0^1 (8/3) x2^5 dx2
= (8/18) x2^6 |_0^1 = 4/9

Then compute the expected value of X1 as follows:

E[X1] = ∫_0^1 ∫_0^{x2} 8 x1^2 x2 dx1 dx2
= ∫_0^1 (8/3) x1^3 x2 |_{x1=0}^{x1=x2} dx2
= ∫_0^1 (8/3) x2^4 dx2
= (8/15) x2^5 |_0^1 = 8/15

Then compute the expected value of X2 as follows:

E[X2] = ∫_0^1 ∫_0^{x2} 8 x1 x2^2 dx1 dx2
= ∫_0^1 4 x1^2 x2^2 |_{x1=0}^{x1=x2} dx2
= ∫_0^1 4 x2^4 dx2
= (4/5) x2^5 |_0^1 = 4/5

The covariance is then given by

cov[X1, X2] = E[X1 X2] − E[X1] E[X2] = 4/9 − (8/15)(4/5) = 100/225 − 96/225 = 4/225
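A Monte Carlo sketch (our addition) of this dependent density: the marginal CDF of X2 is x2^4 and the conditional CDF of X1 given x2 is (x1/x2)^2, so both can be drawn by inverse transform.

    import random

    random.seed(1)
    n = 200_000
    pairs = []
    for _ in range(n):
        x2 = random.random() ** 0.25        # inverse of F2(x2) = x2^4
        x1 = x2 * random.random() ** 0.5    # inverse of F(x1 | x2) = (x1/x2)^2
        pairs.append((x1, x2))
    m1 = sum(p[0] for p in pairs) / n
    m2 = sum(p[1] for p in pairs) / n
    cov = sum(p[0] * p[1] for p in pairs) / n - m1 * m2
    print(m1, m2, cov)   # ~ 8/15, 4/5 and 4/225 ~ 0.533, 0.800, 0.0178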

10.3. Correlation. The correlation coefficient, denoted by ρ[X, Y] or ρ_{X,Y}, of random variables X and Y is defined to be

ρ_{X,Y} = cov[X, Y] / (σ_X σ_Y)   (43)

provided that cov[X, Y], σ_X and σ_Y exist, and σ_X, σ_Y are positive. The correlation coefficient between two random variables is a measure of the interaction between them. It also has the property of being independent of the units of measurement and being bounded between negative one and one. The sign of the correlation coefficient is the same as the sign of the covariance. Thus ρ > 0 indicates that Y increases as X increases, and ρ = 1 indicates perfect correlation, with all the points falling on a straight line with positive slope. If ρ = 0, there is no correlation and the covariance is zero.

10.4. Independence and Covariance.

10.4.1. Theorem 10. If X and Y are independent random variables, then

cov[X, Y] = 0   (44)

Proof: We know from equation 42 that

cov[X, Y] = E[XY] − E[X] E[Y]   (45)

We also know from equation 39 that if X and Y are independent, then

E[g(X) h(Y)] = E[g(X)] E[h(Y)]   (46)

Let g(X) = X and h(Y) = Y to obtain

E[XY] = E[X] E[Y]   (47)

Substituting into equation 45, we obtain

cov[X, Y] = E[X] E[Y] − E[X] E[Y] = 0   (48)

The converse of Theorem 10 is not true, i.e., cov[X, Y] = 0 does not imply that X and Y are independent.

10.4.2. Example. Consider the following discrete probability distribution.

x1 \ x2   −1     0      1
−1        1/16   3/16   1/16
0         3/16   0      3/16
1         1/16   3/16   1/16

These random variables are not independent because the joint probabilities are not the product of the marginal probabilities. For example,

p_{X1 X2}[0, 0] = 0 ≠ p_{X1}(0) p_{X2}(0) = (6/16)(6/16) = 9/64

Now compute the covariance between X1 and X2. The marginal probabilities of the values −1, 0 and 1 are 5/16, 6/16 and 5/16 for each variable. First find E[X1] as follows:

E[X1] = (−1)(5/16) + (0)(6/16) + (1)(5/16) = 0

Similarly for the expected value of X2:

E[X2] = (−1)(5/16) + (0)(6/16) + (1)(5/16) = 0

Now compute E[X1 X2] as follows:

E[X1 X2] = (−1)(−1)(1/16) + (−1)(0)(3/16) + (−1)(1)(1/16)
+ (0)(−1)(3/16) + (0)(0)(0) + (0)(1)(3/16)
+ (1)(−1)(1/16) + (1)(0)(3/16) + (1)(1)(1/16)
= 1/16 − 1/16 − 1/16 + 1/16 = 0

The covariance is then

cov[X1, X2] = E[X1 X2] − E[X1] E[X2] = 0 − (0)(0) = 0

In this case the covariance is zero, but the variables are not independent.
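The zero-covariance-but-dependent table can be verified by exact arithmetic; the sketch below is our addition.

    from fractions import Fraction

    q1, q3 = Fraction(1, 16), Fraction(3, 16)
    joint = {(-1, -1): q1, (-1, 0): q3, (-1, 1): q1,
             ( 0, -1): q3, ( 0, 0): Fraction(0), ( 0, 1): q3,
             ( 1, -1): q1, ( 1, 0): q3, ( 1, 1): q1}

    E = lambda g: sum(g(a, b) * p for (a, b), p in joint.items())
    print(E(lambda a, b: a*b) - E(lambda a, b: a)*E(lambda a, b: b))  # 0
    p1 = sum(p for (a, _), p in joint.items() if a == 0)  # 6/16
    p2 = sum(p for (_, b), p in joint.items() if b == 0)  # 6/16
    print(joint[(0, 0)], p1 * p2)  # 0 vs 9/64, so the variables are dependent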

10.5. Sum of Variances. For var[a1 x1 + a2 x2] we have

var[a1 x1 + a2 x2] = a1^2 var(x1) + a2^2 var(x2) + 2 a1 a2 cov(x1, x2)
= a1^2 σ11 + 2 a1 a2 σ12 + a2^2 σ22
= [a1 a2] [σ11 σ12; σ21 σ22] [a1; a2]
= var( [a1 a2] [x1; x2] )   (49)

10.6. The Expected Value and Variance of Linear Functions of Random Variables.

10.6.1. Theorem 11. Let Y1, Y2, ..., Yn and X1, X2, ..., Xm be random variables with E[Yi] = µi and E[Xj] = ξj. Define

U1 = Σ_{i=1}^{n} ai Yi   and   U2 = Σ_{j=1}^{m} bj Xj   (50)

for constants a1, a2, ..., an and b1, b2, ..., bm. Then the following three results hold:

(i) E[U1] = Σ_{i=1}^{n} ai µi
(ii) var[U1] = Σ_{i=1}^{n} ai^2 var[Yi] + 2 ΣΣ_{i<j} ai aj cov[Yi, Yj], where the double sum is over all pairs (i, j) with i < j
(iii) cov[U1, U2] = Σ_{i=1}^{n} Σ_{j=1}^{m} ai bj cov[Yi, Xj]

Proof: (i) We want to show that

E[U1] = Σ_{i=1}^{n} ai µi

Write out E[U1] as follows:

E[U1] = E[Σ_{i=1}^{n} ai Yi] = Σ_{i=1}^{n} E[ai Yi] = Σ_{i=1}^{n} ai E[Yi] = Σ_{i=1}^{n} ai µi   (51)

using Theorems 6 and 8 as appropriate.

(ii) Write out var[U1] as follows:

var(U1) = E[U1 − E(U1)]^2 = E[Σ_{i=1}^{n} ai Yi − Σ_{i=1}^{n} ai µi]^2
= E[Σ_{i=1}^{n} ai (Yi − µi)]^2
= E[Σ_{i=1}^{n} ai^2 (Yi − µi)^2 + ΣΣ_{i≠j} ai aj (Yi − µi)(Yj − µj)]
= Σ_{i=1}^{n} ai^2 E(Yi − µi)^2 + ΣΣ_{i≠j} ai aj E[(Yi − µi)(Yj − µj)]   (52)

By the definitions of variance and covariance, we have

var(U1) = Σ_{i=1}^{n} ai^2 var(Yi) + ΣΣ_{i≠j} ai aj cov(Yi, Yj)   (53)

Because cov(Yi, Yj) = cov(Yj, Yi) we can write

var(U1) = Σ_{i=1}^{n} ai^2 var(Yi) + 2 ΣΣ_{i<j} ai aj cov(Yi, Yj)   (54)

Similar steps can be used to obtain (iii).

(iii) We have

cov(U1, U2) = E{[U1 − E(U1)][U2 − E(U2)]}
= E[(Σ_{i=1}^{n} ai Yi − Σ_{i=1}^{n} ai µi)(Σ_{j=1}^{m} bj Xj − Σ_{j=1}^{m} bj ξj)]
= E[(Σ_{i=1}^{n} ai (Yi − µi))(Σ_{j=1}^{m} bj (Xj − ξj))]
= E[Σ_{i=1}^{n} Σ_{j=1}^{m} ai bj (Yi − µi)(Xj − ξj)]   (55)
= Σ_{i=1}^{n} Σ_{j=1}^{m} ai bj E[(Yi − µi)(Xj − ξj)]
= Σ_{i=1}^{n} Σ_{j=1}^{m} ai bj cov(Yi, Xj)

11. CONDITIONAL EXPECTATIONS

11.1. Definition. If X1 and X2 are any two random variables, the conditional expectation of g(X1), given that X2 = x2, is defined to be

E[g(X1) | X2 = x2] = ∫_{−∞}^{∞} g(x1) f(x1 | x2) dx1   (56)

if X1 and X2 are jointly continuous, and

E[g(X1) | X2 = x2] = Σ_{x1} g(x1) p(x1 | x2)   (57)

if X1 and X2 are jointly discrete.

11.2. Example. Let the joint density of two random variables X and Y be given by

f(x, y) = 2   for x ≥ 0, y ≥ 0, x + y ≤ 1
f(x, y) = 0   otherwise

We can find the marginal density of y by integrating the joint density with respect to x as follows:

f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx = ∫_0^{1−y} 2 dx = 2(1 − y),   0 ≤ y ≤ 1

We find the conditional density of X given that Y = y by forming the ratio

f_{X|Y}(x | y) = f(x, y) / f_Y(y) = 2 / (2(1 − y)) = 1 / (1 − y),   0 ≤ x ≤ 1 − y

We then form the expected value by multiplying the density by x and then integrating over x:

E[X | Y = y] = ∫_0^{1−y} x / (1 − y) dx
= (1/(1 − y)) (x^2/2) |_0^{1−y}
= (1 − y)^2 / (2(1 − y))
= (1 − y)/2

We can find the unconditional expected value of X by multiplying the marginal density of y by this expected value and integrating over y as follows:

E[X] = E_Y[ E[X | Y] ]
= ∫_0^1 ((1 − y)/2)(2(1 − y)) dy
= ∫_0^1 (1 − y)^2 dy
= −(1 − y)^3/3 |_0^1
= [0] − [−1/3] = 1/3

We can show this directly by multiplying the joint density by x and then integrating over x and y:

E[X] = ∫_0^1 ∫_0^{1−y} 2x dx dy
= ∫_0^1 x^2 |_0^{1−y} dy
= ∫_0^1 (1 − y)^2 dy
= −(1 − y)^3/3 |_0^1 = 1/3
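Since f is constant on the triangle, a rejection-sampling sketch (our addition) confirms E[X] = 1/3:

    import random

    random.seed(2)
    total, count = 0.0, 0
    while count < 100_000:
        x, y = random.random(), random.random()
        if x + y <= 1:       # keep uniform draws inside the triangle, where f = 2
            total += x
            count += 1
    print(total / count)     # ~ 1/3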

The fact that we can find the expected value of x using the conditional distribution of x given y is due to the following theorem.

11.3. Theorem 12. Let X and Y denote random variables. Then

E[X] = E_Y[ E_{X|Y}[X | Y] ]   (58)

where the inner expectation is with respect to the conditional distribution of X given Y and the outer expectation is with respect to the distribution of Y.

Proof: Suppose that X and Y are jointly continuous with joint density f_{XY}(x, y) and marginal distributions f_X(x) and f_Y(y), respectively. Then

E[X] = ∫∫ x f_{XY}(x, y) dx dy
= ∫∫ x f_{X|Y}(x | y) f_Y(y) dx dy
= ∫ [∫ x f_{X|Y}(x | y) dx] f_Y(y) dy
= ∫ E[X | Y = y] f_Y(y) dy
= E_Y[ E_{X|Y}[X | Y] ]   (59)

The proof is similar for the discrete case.

11.4. Conditional Variance.

11.4.1. Definition. Just as we can compute a conditional expected value, we can compute a conditional variance. The idea is that the variance of the random variable X may be different for different values of Y. We define the conditional variance as follows:

var[X | Y = y] = E[(X − E[X | Y = y])^2 | Y = y] = E[X^2 | Y = y] − (E[X | Y = y])^2   (60)

We can write the variance of X as a function of the expected value of the conditional variance. This is sometimes useful for specific problems.

11.4.2. Theorem 13. Let X and Y denote random variables. Then

var[X] = E[ var[X | Y] ] + var[ E[X | Y] ]   (61)

Proof: First note the following three definitions:

var[X | Y] = E[X^2 | Y] − (E[X | Y])^2   (62a)
E[ var[X | Y] ] = E[ E[X^2 | Y] ] − E[ (E[X | Y])^2 ]   (62b)
var[ E[X | Y] ] = E[ (E[X | Y])^2 ] − ( E[ E[X | Y] ] )^2   (62c)

The variance of X is given by

var[X] = E[X^2] − (E[X])^2   (63)

We can find the expected value of a variable by taking the expected value of the conditional expectation as in Theorem 12. For this problem we can write E[X^2] as the expected value of the conditional expectation of X^2 given Y. Specifically,

E[X^2] = E_Y{ E_{X|Y}[X^2 | Y] }   (64)

and

(E[X])^2 = ( E_Y{ E_{X|Y}[X | Y] } )^2   (65)

Write equation 63, substituting in equations 64 and 65, as follows:

var[X] = E[X^2] − (E[X])^2 = E_Y{ E_{X|Y}[X^2 | Y] } − ( E_Y{ E_{X|Y}[X | Y] } )^2   (66)

Now subtract and add E{ (E[X | Y])^2 } on the right hand side of equation 66 as follows:

var[X] = E_Y{ E_{X|Y}[X^2 | Y] } − E{ (E[X | Y])^2 } + E{ (E[X | Y])^2 } − ( E_Y{ E_{X|Y}[X | Y] } )^2   (67)

Now notice that the first two terms in equation 67 are the same as the right hand side of equation 62b, which is E[ var[X | Y] ]. Then notice that the second two terms in equation 67 are the same as the right hand side of equation 62c, which is var[ E[X | Y] ]. We can then write

var[X] = E[ var[X | Y] ] + var[ E[X | Y] ]   (68)

11.5. Example. Let the joint density of two random variables X and Y be given by

f(x, y) = (1/8)(x + y)   for 0 ≤ x ≤ 2, 0 ≤ y ≤ 2
f(x, y) = 0              otherwise

We can find the marginal density of x by integrating the joint density with respect to y as follows:

f_X(x) = ∫_{−∞}^{∞} f(x, y) dy
= ∫_0^2 (1/8)(x + y) dy
= (1/8)(xy + y^2/2) |_0^2
= (1/8)(2x + 2)
= (1/4)(x + 1),   0 ≤ x ≤ 2   (69)

We can find the marginal density of y by integrating the joint density with respect to x as follows:

f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx
= ∫_0^2 (1/8)(x + y) dx
= (1/8)(x^2/2 + xy) |_0^2
= (1/8)(2 + 2y)
= (1/4)(1 + y),   0 ≤ y ≤ 2   (70)

We find the expected value of X by multiplying the marginal density by x and then integrating over x:

E[X] = ∫_0^2 x (1/4)(x + 1) dx
= (1/4) ∫_0^2 (x^2 + x) dx
= (1/4)(x^3/3 + x^2/2) |_0^2
= (1/4)(8/3 + 2)
= 7/6   (71)

To find the variance of X, we first need to find E[X^2]. We do so as follows:

E[X^2] = ∫_0^2 x^2 (1/4)(x + 1) dx
= (1/4) ∫_0^2 (x^3 + x^2) dx
= (1/4)(x^4/4 + x^3/3) |_0^2
= (1/4)(4 + 8/3)
= 5/3   (72)

The variance of X is then given by

var(X) = E[(X − E(X))^2] = E(x^2) − E^2(x) = 5/3 − (7/6)^2 = 60/36 − 49/36 = 11/36   (73)

We find the conditional density of X given that Y = y by forming the ratio

f_{X|Y}(x | y) = f(x, y) / f_Y(y) = (1/8)(x + y) / [(1/4)(1 + y)] = (x + y) / (2(1 + y))   (74)

We then form the expected value of X given Y by multiplying the density by x and then integrating over x:

E[X | Y] = ∫_0^2 x (x + y) / (2(1 + y)) dx
= (1/(2(1 + y))) ∫_0^2 (x^2 + xy) dx
= (1/(2(1 + y))) (x^3/3 + (x^2/2) y) |_0^2
= (1/(2(1 + y))) (8/3 + 2y)
= (8 + 6y) / (6(1 + y))
= (1/3)(4 + 3y) / (1 + y)   (75)

We can find the unconditional expected value of X by multiplying the marginal density of y by this expected value and integrating over y as follows:

E[X] = E_Y[ E[X | Y] ]
= ∫_0^2 [(4 + 3y) / (3(1 + y))] [(1/4)(1 + y)] dy
= (1/12) ∫_0^2 (4 + 3y) dy
= (1/12)(4y + (3/2) y^2) |_0^2
= (1/12)(8 + 6)
= 14/12 = 7/6   (76)

We find the conditional variance by finding the expected value of X^2 given Y and then subtracting the square of E[X | Y]:

E[X^2 | Y] = ∫_0^2 x^2 (x + y) / (2(1 + y)) dx
= (1/(2(1 + y))) ∫_0^2 (x^3 + x^2 y) dx
= (1/(2(1 + y))) (x^4/4 + (x^3/3) y) |_0^2
= (1/(2(1 + y))) (4 + (8/3) y)
= (12 + 8y) / (6(1 + y))
= (2/3)(3 + 2y) / (1 + y)   (77)
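The conditional moments and the iterated expectation can be checked symbolically (our addition, assuming SymPy):

    from sympy import symbols, integrate, simplify

    x, y = symbols('x y', nonnegative=True)
    f = (x + y) / 8
    f_Y = integrate(f, (x, 0, 2))                     # (1 + y)/4
    EX_y = integrate(x * f / f_Y, (x, 0, 2))          # conditional mean
    EX2_y = integrate(x**2 * f / f_Y, (x, 0, 2))      # conditional second moment
    print(simplify(EX_y))                             # (3*y + 4)/(3*y + 3)
    print(simplify(EX2_y))                            # (4*y + 6)/(3*y + 3)
    print(integrate(EX_y * f_Y, (y, 0, 2)))           # 7/6, agreeing with (76)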

Now square E[X | Y]:

(E[X | Y])^2 = [(1/3)(4 + 3y)/(1 + y)]^2 = (4 + 3y)^2 / (9(1 + y)^2)   (78)

Now subtract equation 78 from equation 77:

var[X | Y] = (2/3)(3 + 2y)/(1 + y) − (4 + 3y)^2/(9(1 + y)^2)
= [(18 + 12y)(1 + y) − (4 + 3y)^2] / (9(1 + y)^2)
= [12y^2 + 30y + 18 − (9y^2 + 24y + 16)] / (9(1 + y)^2)
= (3y^2 + 6y + 2) / (9(1 + y)^2)
= (12y^2 + 24y + 8) / (36(1 + y)^2)   (79)

For example, if y = 1, we obtain

var[X | Y = 1] = (12 + 24 + 8) / (36 · 4) = 44/144 = 11/36   (80)

To find the expected value of this variance we need to multiply the expression in equation 79 by the marginal density of Y and then integrate over the range of Y:

E[ var[X | Y] ] = ∫_0^2 [(12y^2 + 24y + 8) / (36(1 + y)^2)] [(1/4)(1 + y)] dy = (1/144) ∫_0^2 (12y^2 + 24y + 8) / (1 + y) dy   (81)

Consider first the indefinite integral

∫ (12y^2 + 24y + 8) / (1 + y) dy   (82)

This integral would be easier to solve if (1 + y) in the denominator could be eliminated. This would be the case if it could be factored out of the numerator. One way to do this is to

carry out the specified division. Dividing, 12y^2 + 24y + 8 = (12y + 12)(y + 1) − 4, so

(12y^2 + 24y + 8) / (1 + y) = (12y + 12) − 4/(1 + y)   (83)

Now substitute equation 83 into equation 82 as follows:

∫ (12y^2 + 24y + 8) / (1 + y) dy = ∫ [(12y + 12) − 4/(1 + y)] dy = 6y^2 + 12y − 4 log[1 + y]   (84)

Now compute the expected value of the variance as

E[ var[X | Y] ] = (1/144) ∫_0^2 (12y^2 + 24y + 8) / (1 + y) dy
= (1/144) [6y^2 + 12y − 4 log[1 + y]] |_0^2
= (1/144) [(24 + 24 − 4 log[3]) − (0 − 4 log[1])]
= (1/144) [48 − 4 log[3]]
= (12 − log[3]) / 36   (85)

To compute the variance of E[X | Y] we need to find E_Y[(E[X | Y])^2] and then subtract (E_Y[E[X | Y]])^2. First find the second term. The expected value of X given Y comes from equation 75:

E[X | Y] = (1/3)(4 + 3y) / (1 + y)   (86)

We found the expected value of E[X | Y] in equation 76. We repeat the derivation here by multiplying E[X | Y] by the marginal density of Y and then integrating over the range of Y:

E_Y[ E[X | Y] ] = ∫_0^2 [(4 + 3y) / (3(1 + y))] [(1/4)(y + 1)] dy
= (1/12) ∫_0^2 (4 + 3y) dy
= (1/12)(4y + (3/2) y^2) |_0^2
= (1/12)(8 + 6)
= 14/12 = 7/6   (87)

Now find the first term:

E_Y[ (E[X | Y])^2 ] = ∫_0^2 [(4 + 3y)^2 / (9(1 + y)^2)] [(1/4)(y + 1)] dy
= (1/36) ∫_0^2 (4 + 3y)^2 / (1 + y) dy
= (1/36) ∫_0^2 (9y^2 + 24y + 16) / (1 + y) dy   (88)

Now find the indefinite integral by first simplifying the integrand using long division:

∫ (9y^2 + 24y + 16) / (1 + y) dy   (89)

Now carry out the division. Dividing, 9y^2 + 24y + 16 = (9y + 15)(y + 1) + 1, so

(9y^2 + 24y + 16) / (1 + y) = (9y + 15) + 1/(1 + y)   (90)

Now substitute equation 90 into equation 88 as follows:

E_Y[ (E[X | Y])^2 ] = (1/36) ∫_0^2 (9y^2 + 24y + 16) / (1 + y) dy
= (1/36) ∫_0^2 [(9y + 15) + 1/(1 + y)] dy
= (1/36) [(9/2) y^2 + 15y + log[y + 1]] |_0^2
= (1/36) [18 + 30 + log[3]]
= (1/36) [48 + log[3]]   (91)

The variance is obtained by subtracting the square of (87) from (91):

var[ E[X | Y] ] = E_Y[ (E[X | Y])^2 ] − ( E_Y[ E[X | Y] ] )^2
= (1/36)[48 + log[3]] − (7/6)^2
= (48 + log[3])/36 − 49/36
= (log[3] − 1)/36   (92)

We can show that the sum of (85) and (92) is equal to var[X], as in Theorem 13:

var[X] = E[ var[X | Y] ] + var[ E[X | Y] ]
= (12 − log[3])/36 + (log[3] − 1)/36
= 11/36   (93)

which is the same as in equation 73.
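A one-line numerical check of the decomposition (our addition):

    import math

    e_var = (12 - math.log(3)) / 36    # E[var[X|Y]] from equation 85
    var_e = (math.log(3) - 1) / 36     # var[E[X|Y]] from equation 92
    print(e_var + var_e, 11/36)        # both 0.30555..., matching equation 73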

12. CAUCHY-SCHWARZ INEQUALITY

12.1. Statement of Inequality. For any functions g(x) and h(x) and cumulative distribution function F(x), the following holds:

( ∫ g(x) h(x) dF(x) )^2 ≤ ( ∫ g(x)^2 dF(x) ) ( ∫ h(x)^2 dF(x) )   (94)

where x is a vector random variable.

12.2. Proof. Form a linear combination of g(x) and h(x), square it, and then integrate as follows:

∫ ( t g(x) + h(x) )^2 dF(x) ≥ 0   (95)

The inequality holds because of the square and because dF(x) ≥ 0. Now expand the integrand in (95) to obtain

t^2 ∫ g(x)^2 dF(x) + 2t ∫ g(x) h(x) dF(x) + ∫ h(x)^2 dF(x) ≥ 0   (96)

This is a quadratic in t which holds for all t. Now define t as follows (assuming ∫ g(x)^2 dF(x) > 0):

t = − ∫ g(x) h(x) dF(x) / ∫ g(x)^2 dF(x)   (97)

and substitute into (96):

( ∫ g(x)h(x) dF(x) )^2 / ∫ g(x)^2 dF(x) − 2 ( ∫ g(x)h(x) dF(x) )^2 / ∫ g(x)^2 dF(x) + ∫ h(x)^2 dF(x) ≥ 0

⇒ ∫ h(x)^2 dF(x) ≥ ( ∫ g(x)h(x) dF(x) )^2 / ∫ g(x)^2 dF(x)

⇒ ( ∫ g(x)h(x) dF(x) )^2 ≤ ( ∫ g(x)^2 dF(x) ) ( ∫ h(x)^2 dF(x) )   (98)

12.3. Corollary. Consider two random variables X1 and X2 and the expectation of their product. Using (98) we obtain

( E(X1 X2) )^2 ≤ E(X1^2) E(X2^2)   (99)

12.4. Corollary.

( cov(X1, X2) )^2 ≤ var(X1) var(X2)   (100)

Proof: Apply (98) to the centered random variables g(X) = X1 − µ1 and h(X) = X2 − µ2, where µi = E(Xi).
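As a concrete illustration of (100), a symbolic check (our addition, assuming SymPy) on the density f(x1, x2) = 8 x1 x2 from section 10.2.2(iii):

    from sympy import symbols, integrate

    x1, x2 = symbols('x1 x2')
    f = 8*x1*x2   # positive on 0 <= x1 <= x2 <= 1
    E = lambda g: integrate(g*f, (x1, 0, x2), (x2, 0, 1))
    cov2 = (E(x1*x2) - E(x1)*E(x2))**2   # (4/225)^2
    v1 = E(x1**2) - E(x1)**2             # 11/225
    v2 = E(x2**2) - E(x2)**2             # 2/75
    print(cov2, v1*v2, cov2 <= v1*v2)    # the inequality holds: True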



More information

4.3 Lagrange Approximation

4.3 Lagrange Approximation 206 CHAP. 4 INTERPOLATION AND POLYNOMIAL APPROXIMATION Lagrange Polynomial Approximation 4.3 Lagrange Approximation Interpolation means to estimate a missing function value by taking a weighted average

More information

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include 2 + 5.

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include 2 + 5. PUTNAM TRAINING POLYNOMIALS (Last updated: November 17, 2015) Remark. This is a list of exercises on polynomials. Miguel A. Lerma Exercises 1. Find a polynomial with integral coefficients whose zeros include

More information

Separable First Order Differential Equations

Separable First Order Differential Equations Separable First Order Differential Equations Form of Separable Equations which take the form = gx hy or These are differential equations = gxĥy, where gx is a continuous function of x and hy is a continuously

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

More information

Probability Theory. Florian Herzog. A random variable is neither random nor variable. Gian-Carlo Rota, M.I.T..

Probability Theory. Florian Herzog. A random variable is neither random nor variable. Gian-Carlo Rota, M.I.T.. Probability Theory A random variable is neither random nor variable. Gian-Carlo Rota, M.I.T.. Florian Herzog 2013 Probability space Probability space A probability space W is a unique triple W = {Ω, F,

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

Math 370, Spring 2008 Prof. A.J. Hildebrand. Practice Test 2 Solutions

Math 370, Spring 2008 Prof. A.J. Hildebrand. Practice Test 2 Solutions Math 370, Spring 008 Prof. A.J. Hildebrand Practice Test Solutions About this test. This is a practice test made up of a random collection of 5 problems from past Course /P actuarial exams. Most of the

More information

Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur

Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur Probability and Statistics Prof. Dr. Somesh Kumar Department of Mathematics Indian Institute of Technology, Kharagpur Module No. #01 Lecture No. #15 Special Distributions-VI Today, I am going to introduce

More information

4. Continuous Random Variables, the Pareto and Normal Distributions

4. Continuous Random Variables, the Pareto and Normal Distributions 4. Continuous Random Variables, the Pareto and Normal Distributions A continuous random variable X can take any value in a given range (e.g. height, weight, age). The distribution of a continuous random

More information

Math 370, Actuarial Problemsolving Spring 2008 A.J. Hildebrand. Practice Test, 1/28/2008 (with solutions)

Math 370, Actuarial Problemsolving Spring 2008 A.J. Hildebrand. Practice Test, 1/28/2008 (with solutions) Math 370, Actuarial Problemsolving Spring 008 A.J. Hildebrand Practice Test, 1/8/008 (with solutions) About this test. This is a practice test made up of a random collection of 0 problems from past Course

More information

Integrals of Rational Functions

Integrals of Rational Functions Integrals of Rational Functions Scott R. Fulton Overview A rational function has the form where p and q are polynomials. For example, r(x) = p(x) q(x) f(x) = x2 3 x 4 + 3, g(t) = t6 + 4t 2 3, 7t 5 + 3t

More information

5. Continuous Random Variables

5. Continuous Random Variables 5. Continuous Random Variables Continuous random variables can take any value in an interval. They are used to model physical characteristics such as time, length, position, etc. Examples (i) Let X be

More information

Section 1.3 P 1 = 1 2. = 1 4 2 8. P n = 1 P 3 = Continuing in this fashion, it should seem reasonable that, for any n = 1, 2, 3,..., = 1 2 4.

Section 1.3 P 1 = 1 2. = 1 4 2 8. P n = 1 P 3 = Continuing in this fashion, it should seem reasonable that, for any n = 1, 2, 3,..., = 1 2 4. Difference Equations to Differential Equations Section. The Sum of a Sequence This section considers the problem of adding together the terms of a sequence. Of course, this is a problem only if more than

More information

Lecture Notes on Elasticity of Substitution

Lecture Notes on Elasticity of Substitution Lecture Notes on Elasticity of Substitution Ted Bergstrom, UCSB Economics 210A March 3, 2011 Today s featured guest is the elasticity of substitution. Elasticity of a function of a single variable Before

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Statistics 100A Homework 8 Solutions

Statistics 100A Homework 8 Solutions Part : Chapter 7 Statistics A Homework 8 Solutions Ryan Rosario. A player throws a fair die and simultaneously flips a fair coin. If the coin lands heads, then she wins twice, and if tails, the one-half

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

Lecture Notes 1. Brief Review of Basic Probability

Lecture Notes 1. Brief Review of Basic Probability Probability Review Lecture Notes Brief Review of Basic Probability I assume you know basic probability. Chapters -3 are a review. I will assume you have read and understood Chapters -3. Here is a very

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization

Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization 2.1. Introduction Suppose that an economic relationship can be described by a real-valued

More information

PSTAT 120B Probability and Statistics

PSTAT 120B Probability and Statistics - Week University of California, Santa Barbara April 10, 013 Discussion section for 10B Information about TA: Fang-I CHU Office: South Hall 5431 T Office hour: TBA email: chu@pstat.ucsb.edu Slides will

More information

CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA

CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA We Can Early Learning Curriculum PreK Grades 8 12 INSIDE ALGEBRA, GRADES 8 12 CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA April 2016 www.voyagersopris.com Mathematical

More information

Constrained optimization.

Constrained optimization. ams/econ 11b supplementary notes ucsc Constrained optimization. c 2010, Yonatan Katznelson 1. Constraints In many of the optimization problems that arise in economics, there are restrictions on the values

More information

SOLUTIONS. f x = 6x 2 6xy 24x, f y = 3x 2 6y. To find the critical points, we solve

SOLUTIONS. f x = 6x 2 6xy 24x, f y = 3x 2 6y. To find the critical points, we solve SOLUTIONS Problem. Find the critical points of the function f(x, y = 2x 3 3x 2 y 2x 2 3y 2 and determine their type i.e. local min/local max/saddle point. Are there any global min/max? Partial derivatives

More information

Understanding Basic Calculus

Understanding Basic Calculus Understanding Basic Calculus S.K. Chung Dedicated to all the people who have helped me in my life. i Preface This book is a revised and expanded version of the lecture notes for Basic Calculus and other

More information

Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem.

Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem. Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem. Solve word problems that call for addition of three whole numbers

More information

Recursive Estimation

Recursive Estimation Recursive Estimation Raffaello D Andrea Spring 04 Problem Set : Bayes Theorem and Bayesian Tracking Last updated: March 8, 05 Notes: Notation: Unlessotherwisenoted,x, y,andz denoterandomvariables, f x

More information

THREE DIMENSIONAL GEOMETRY

THREE DIMENSIONAL GEOMETRY Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

More information

Scalar Valued Functions of Several Variables; the Gradient Vector

Scalar Valued Functions of Several Variables; the Gradient Vector Scalar Valued Functions of Several Variables; the Gradient Vector Scalar Valued Functions vector valued function of n variables: Let us consider a scalar (i.e., numerical, rather than y = φ(x = φ(x 1,

More information

Feb 28 Homework Solutions Math 151, Winter 2012. Chapter 6 Problems (pages 287-291)

Feb 28 Homework Solutions Math 151, Winter 2012. Chapter 6 Problems (pages 287-291) Feb 8 Homework Solutions Math 5, Winter Chapter 6 Problems (pages 87-9) Problem 6 bin of 5 transistors is known to contain that are defective. The transistors are to be tested, one at a time, until the

More information

6.4 Logarithmic Equations and Inequalities

6.4 Logarithmic Equations and Inequalities 6.4 Logarithmic Equations and Inequalities 459 6.4 Logarithmic Equations and Inequalities In Section 6.3 we solved equations and inequalities involving exponential functions using one of two basic strategies.

More information

Zeros of Polynomial Functions

Zeros of Polynomial Functions Zeros of Polynomial Functions Objectives: 1.Use the Fundamental Theorem of Algebra to determine the number of zeros of polynomial functions 2.Find rational zeros of polynomial functions 3.Find conjugate

More information

Principle of Data Reduction

Principle of Data Reduction Chapter 6 Principle of Data Reduction 6.1 Introduction An experimenter uses the information in a sample X 1,..., X n to make inferences about an unknown parameter θ. If the sample size n is large, then

More information

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions. Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.

More information

Math 151. Rumbos Spring 2014 1. Solutions to Assignment #22

Math 151. Rumbos Spring 2014 1. Solutions to Assignment #22 Math 151. Rumbos Spring 2014 1 Solutions to Assignment #22 1. An experiment consists of rolling a die 81 times and computing the average of the numbers on the top face of the die. Estimate the probability

More information

APPLIED MATHEMATICS ADVANCED LEVEL

APPLIED MATHEMATICS ADVANCED LEVEL APPLIED MATHEMATICS ADVANCED LEVEL INTRODUCTION This syllabus serves to examine candidates knowledge and skills in introductory mathematical and statistical methods, and their applications. For applications

More information

Definition 8.1 Two inequalities are equivalent if they have the same solution set. Add or Subtract the same value on both sides of the inequality.

Definition 8.1 Two inequalities are equivalent if they have the same solution set. Add or Subtract the same value on both sides of the inequality. 8 Inequalities Concepts: Equivalent Inequalities Linear and Nonlinear Inequalities Absolute Value Inequalities (Sections 4.6 and 1.1) 8.1 Equivalent Inequalities Definition 8.1 Two inequalities are equivalent

More information

Section 5.1 Continuous Random Variables: Introduction

Section 5.1 Continuous Random Variables: Introduction Section 5. Continuous Random Variables: Introduction Not all random variables are discrete. For example:. Waiting times for anything (train, arrival of customer, production of mrna molecule from gene,

More information

Review of Fundamental Mathematics

Review of Fundamental Mathematics Review of Fundamental Mathematics As explained in the Preface and in Chapter 1 of your textbook, managerial economics applies microeconomic theory to business decision making. The decision-making tools

More information

Least Squares Estimation

Least Squares Estimation Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David

More information

Module 3: Correlation and Covariance

Module 3: Correlation and Covariance Using Statistical Data to Make Decisions Module 3: Correlation and Covariance Tom Ilvento Dr. Mugdim Pašiƒ University of Delaware Sarajevo Graduate School of Business O ften our interest in data analysis

More information

Zeros of Polynomial Functions

Zeros of Polynomial Functions Zeros of Polynomial Functions The Rational Zero Theorem If f (x) = a n x n + a n-1 x n-1 + + a 1 x + a 0 has integer coefficients and p/q (where p/q is reduced) is a rational zero, then p is a factor of

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

5.3 Improper Integrals Involving Rational and Exponential Functions

5.3 Improper Integrals Involving Rational and Exponential Functions Section 5.3 Improper Integrals Involving Rational and Exponential Functions 99.. 3. 4. dθ +a cos θ =, < a

More information

Math Review. for the Quantitative Reasoning Measure of the GRE revised General Test

Math Review. for the Quantitative Reasoning Measure of the GRE revised General Test Math Review for the Quantitative Reasoning Measure of the GRE revised General Test www.ets.org Overview This Math Review will familiarize you with the mathematical skills and concepts that are important

More information

Real Roots of Univariate Polynomials with Real Coefficients

Real Roots of Univariate Polynomials with Real Coefficients Real Roots of Univariate Polynomials with Real Coefficients mostly written by Christina Hewitt March 22, 2012 1 Introduction Polynomial equations are used throughout mathematics. When solving polynomials

More information

CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION

CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August

More information

MATH 10550, EXAM 2 SOLUTIONS. x 2 + 2xy y 2 + x = 2

MATH 10550, EXAM 2 SOLUTIONS. x 2 + 2xy y 2 + x = 2 MATH 10550, EXAM SOLUTIONS (1) Find an equation for the tangent line to at the point (1, ). + y y + = Solution: The equation of a line requires a point and a slope. The problem gives us the point so we

More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

Elliptical copulae. Dorota Kurowicka, Jolanta Misiewicz, Roger Cooke

Elliptical copulae. Dorota Kurowicka, Jolanta Misiewicz, Roger Cooke Elliptical copulae Dorota Kurowicka, Jolanta Misiewicz, Roger Cooke Abstract: In this paper we construct a copula, that is, a distribution with uniform marginals. This copula is continuous and can realize

More information

Metric Spaces. Chapter 7. 7.1. Metrics

Metric Spaces. Chapter 7. 7.1. Metrics Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

More information

1 Portfolio mean and variance

1 Portfolio mean and variance Copyright c 2005 by Karl Sigman Portfolio mean and variance Here we study the performance of a one-period investment X 0 > 0 (dollars) shared among several different assets. Our criterion for measuring

More information

The Point-Slope Form

The Point-Slope Form 7. The Point-Slope Form 7. OBJECTIVES 1. Given a point and a slope, find the graph of a line. Given a point and the slope, find the equation of a line. Given two points, find the equation of a line y Slope

More information

MATH 10034 Fundamental Mathematics IV

MATH 10034 Fundamental Mathematics IV MATH 0034 Fundamental Mathematics IV http://www.math.kent.edu/ebooks/0034/funmath4.pdf Department of Mathematical Sciences Kent State University January 2, 2009 ii Contents To the Instructor v Polynomials.

More information

Høgskolen i Narvik Sivilingeniørutdanningen STE6237 ELEMENTMETODER. Oppgaver

Høgskolen i Narvik Sivilingeniørutdanningen STE6237 ELEMENTMETODER. Oppgaver Høgskolen i Narvik Sivilingeniørutdanningen STE637 ELEMENTMETODER Oppgaver Klasse: 4.ID, 4.IT Ekstern Professor: Gregory A. Chechkin e-mail: chechkin@mech.math.msu.su Narvik 6 PART I Task. Consider two-point

More information