MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 14 10/27/2008 MOMENT GENERATING FUNCTIONS


Contents

1. Moment generating functions
2. Sum of a random number of random variables
3. Transforms associated with joint distributions

Moment generating functions, and their close relatives (probability generating functions and characteristic functions), provide an alternative way of representing a probability distribution by means of a certain function of a single variable. These functions turn out to be useful in many different ways:

(a) They provide an easy way of calculating the moments of a distribution.

(b) They provide some powerful tools for addressing certain counting and combinatorial problems.

(c) They provide an easy way of characterizing the distribution of the sum of independent random variables.

(d) They provide tools for dealing with the distribution of the sum of a random number of independent random variables.

(e) They play a central role in the study of branching processes.

(f) They play a key role in large deviations theory, that is, in studying the asymptotics of tail probabilities of the form P(X ≥ c), when c is a large number.

(g) They provide a bridge between complex analysis and probability, so that complex analysis methods can be brought to bear on probability problems.

(h) They provide powerful tools for proving limit theorems, such as laws of large numbers and the central limit theorem.
1 MOMENT GENERATING FUNCTIONS

1.1 Definition

Definition 1. The moment generating function associated with a random variable X is a function M_X : R → [0, ∞] defined by

    M_X(s) = E[e^{sX}].

The domain D_X of M_X is defined as the set D_X = {s : M_X(s) < ∞}.

If X is a discrete random variable, with PMF p_X, then

    M_X(s) = Σ_x e^{sx} p_X(x).

If X is a continuous random variable with PDF f_X, then

    M_X(s) = ∫ e^{sx} f_X(x) dx.

Note that this is essentially the same as the definition of the Laplace transform of the function f_X, except that we are using s instead of −s in the exponent.

1.2 The domain of the moment generating function

Note that 0 ∈ D_X, because M_X(0) = E[e^{0·X}] = 1. For a discrete random variable that takes only a finite number of different values, we have D_X = R. For example, if X takes the values 1, 2, and 3, with probabilities 1/2, 1/3, and 1/6, respectively, then

    M_X(s) = (1/2)e^s + (1/3)e^{2s} + (1/6)e^{3s},     (1)

which is finite for every s ∈ R. On the other hand, for the Cauchy distribution, with f_X(x) = 1/(π(1 + x²)) for all x, it is easily seen that M_X(s) = ∞ for all s ≠ 0. In general, D_X is an interval (possibly infinite or semi-infinite) that contains zero.

Exercise 1. Suppose that M_X(s) < ∞ for some s > 0. Show that M_X(t) < ∞ for all t ∈ [0, s]. Similarly, suppose that M_X(s) < ∞ for some s < 0. Show that M_X(t) < ∞ for all t ∈ [s, 0].
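As a quick sanity check on the definition, the finite-support example above can be computed directly. The sketch below is an illustration, not part of the original notes (the helper name `mgf_discrete` is ours): it evaluates the transform of an X taking the values 1, 2, 3 with probabilities 1/2, 1/3, 1/6, and confirms that M_X(0) = 1, as must hold for any moment generating function.

```python
import math

def mgf_discrete(pmf, s):
    """M_X(s) = sum over x of e^{s x} * p_X(x), for a PMF given as {value: prob}."""
    return sum(math.exp(s * x) * p for x, p in pmf.items())

# X takes the values 1, 2, 3 with probabilities 1/2, 1/3, 1/6, as in Eq. (1).
pmf = {1: 1/2, 2: 1/3, 3: 1/6}

m0 = mgf_discrete(pmf, 0.0)   # should equal 1 for every random variable
m1 = mgf_discrete(pmf, 1.0)   # (1/2)e + (1/3)e^2 + (1/6)e^3, finite for all s
```

Because the support is finite, the sum is finite for every real s, matching the claim that D_X = R in this case.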
Exercise 2. Suppose that

    lim sup_{x→∞} (1/x) log P(X > x) = −ν < 0.

Establish that M_X(s) < ∞ for all s ∈ [0, ν).

1.3 Inversion of transforms

By inspection of the formula for M_X(s) in Eq. (1), it is clear that the distribution of X is readily determined. The various exponentials e^{sx} indicate the possible values x of the random variable X, and the associated coefficients provide the corresponding probabilities. At the other extreme, if we are told that M_X(s) = ∞ for every s ≠ 0, this is certainly not enough information to determine the distribution of X.

On this subject, there is the following fundamental result. It is intimately related to the inversion properties of Laplace transforms. Its proof requires sophisticated analytical machinery and is omitted.

Theorem 1. (Inversion theorem)

(a) Suppose that M_X(s) is finite for all s in an interval of the form [−a, a], where a is a positive number. Then, M_X determines uniquely the CDF of the random variable X.

(b) If M_X(s) = M_Y(s) < ∞ for all s ∈ [−a, a], where a is a positive number, then the random variables X and Y have the same CDF.

There are explicit formulas that allow us to recover the PMF or PDF of a random variable starting from the associated transform, but they are quite difficult to use (e.g., they involve contour integrals). In practice, transforms are usually inverted by pattern matching, based on tables of known distribution–transform pairs.

1.4 Moment generating properties

There is a reason why M_X is called a moment generating function. Let us consider the derivatives of M_X at zero. Assuming for a moment that we can interchange
the order of integration and differentiation, we obtain

    (d/ds) M_X(s) |_{s=0} = (d/ds) E[e^{sX}] |_{s=0} = E[X e^{sX}] |_{s=0} = E[X],

and, more generally,

    (d^m/ds^m) M_X(s) |_{s=0} = (d^m/ds^m) E[e^{sX}] |_{s=0} = E[X^m e^{sX}] |_{s=0} = E[X^m].

Thus, knowledge of the transform M_X allows for an easy calculation of the moments of a random variable X.

Justifying the interchange of the expectation and the differentiation does require some work. The steps are outlined in the following exercise. For simplicity, we restrict to the case of nonnegative random variables.

Exercise 3. Suppose that X is a nonnegative random variable and that M_X(s) < ∞ for all s ∈ (−∞, a], where a is a positive number.

(a) Show that E[X^k] < ∞, for every k.

(b) Show that E[X^k e^{sX}] < ∞, for every s < a.

(c) Show that, for h > 0, (e^{hX} − 1)/h ≤ X e^{hX}.

(d) Use the DCT to argue that

    E[X] = E[ lim_{h→0} (e^{hX} − 1)/h ] = lim_{h→0} (E[e^{hX}] − 1)/h.

1.5 The probability generating function

For discrete random variables, the following probability generating function is sometimes useful. It is defined by

    g_X(s) = E[s^X],

with s usually restricted to positive values. It is of course closely related to the moment generating function in that, for s > 0, we have g_X(s) = M_X(log s). One difference is that when X is a positive random variable, we can define g_X(s), as well as its derivatives, at s = 0. So, suppose that X has a PMF p_X(m), for m = 1, 2, .... Then,

    g_X(s) = Σ_{m=1}^∞ s^m p_X(m),

resulting in

    (d^m/ds^m) g_X(s) |_{s=0} = m! p_X(m).

(The interchange of the summation and the differentiation needs justification, but is indeed legitimate for small s.) Thus, we can use g_X to easily recover the PMF p_X, when X is a positive integer random variable.
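The moment generating property above can be checked numerically. The following sketch (an illustration only; the choice of λ = 2 and step size h are arbitrary) approximates the first two derivatives of M_X at zero by finite differences, for X exponentially distributed with M_X(s) = λ/(λ − s), and compares them with the known moments E[X] = 1/λ and E[X²] = 2/λ².

```python
lam = 2.0

def M(s):
    """MGF of an exponential random variable with parameter lam; finite for s < lam."""
    return lam / (lam - s)

h = 1e-4
first = (M(h) - M(-h)) / (2 * h)                # central difference for M'(0) = E[X]
second = (M(h) - 2 * M(0.0) + M(-h)) / h**2     # second difference for M''(0) = E[X^2]

# Exact values for lam = 2: E[X] = 1/lam = 0.5 and E[X^2] = 2/lam^2 = 0.5.
```

The agreement to several decimal places mirrors the interchange-of-limits argument: differentiating the transform at zero generates the moments.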
1.6 Examples

Example. Let X ~ Exp(λ). Then,

    M_X(s) = ∫_0^∞ e^{sx} λ e^{−λx} dx = λ/(λ − s),  if s < λ,

and M_X(s) = ∞, otherwise.

Example. Let X ~ Geo(p). Then,

    M_X(s) = Σ_{m=1}^∞ e^{sm} p (1 − p)^{m−1} = p e^s / (1 − (1 − p) e^s),  if e^s < 1/(1 − p),

and M_X(s) = ∞, otherwise. In this case, we also find g_X(s) = ps/(1 − (1 − p)s) for s < 1/(1 − p), and g_X(s) = ∞, otherwise.

Example. Let X ~ N(0, 1). Then,

    M_X(s) = (1/√(2π)) ∫ exp(sx) exp(−x²/2) dx
           = exp(s²/2) · (1/√(2π)) ∫ exp(−(x − s)²/2) dx
           = exp(s²/2).

1.7 Properties of moment generating functions

We record some useful properties of moment generating functions.

Theorem 2.

(a) If Y = aX + b, then M_Y(s) = e^{sb} M_X(as).

(b) If X and Y are independent, then M_{X+Y}(s) = M_X(s) M_Y(s).

(c) Let X and Y be independent random variables. Let Z be equal to X, with probability p, and equal to Y, with probability 1 − p. Then, M_Z(s) = p M_X(s) + (1 − p) M_Y(s).

Proof: For part (a), we have

    M_Y(s) = E[exp(saX + sb)] = exp(sb) E[exp(saX)] = exp(sb) M_X(as).
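The Gaussian example can be corroborated numerically. The sketch below is illustrative (the integration bounds ±12 and the step count are arbitrary choices): it approximates E[e^{sX}] for a standard normal X by a midpoint Riemann sum and compares the result with exp(s²/2).

```python
import math

def normal_mgf_numeric(s, lo=-12.0, hi=12.0, n=100000):
    """Midpoint Riemann sum approximating E[e^{sX}] for X ~ N(0, 1)."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += math.exp(s * x - x * x / 2) * dx   # e^{sx} times the Gaussian exponent
    return total / math.sqrt(2 * math.pi)

approx = normal_mgf_numeric(1.0)
exact = math.exp(0.5)   # exp(s^2/2) with s = 1
```

Truncating the integral at ±12 is harmless here because the integrand exp(sx − x²/2) is astronomically small that far out for moderate s.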
For part (b), we have

    M_{X+Y}(s) = E[exp(sX + sY)] = E[exp(sX)] E[exp(sY)] = M_X(s) M_Y(s).

For part (c), by conditioning on the random choice between X and Y, we have

    M_Z(s) = E[e^{sZ}] = p E[e^{sX}] + (1 − p) E[e^{sY}] = p M_X(s) + (1 − p) M_Y(s).

Example (Normal random variables).

(a) Let X be a standard normal random variable, and let Y = σX + µ, which we know to have a N(µ, σ²) distribution. We then find that

    M_Y(s) = exp(sµ + s²σ²/2).

(b) Let X ~ N(µ_1, σ_1²) and Y ~ N(µ_2, σ_2²) be independent. Then,

    M_{X+Y}(s) = exp( s(µ_1 + µ_2) + s²(σ_1² + σ_2²)/2 ).

Using the inversion property of transforms, we conclude that X + Y ~ N(µ_1 + µ_2, σ_1² + σ_2²), thus corroborating a result we first obtained using convolutions.

2 SUM OF A RANDOM NUMBER OF INDEPENDENT RANDOM VARIABLES

Let X_1, X_2, ... be a sequence of i.i.d. random variables, with mean µ and variance σ². Let N be another, independent random variable that takes nonnegative integer values, and let Y = X_1 + ··· + X_N. Let us derive the mean, variance, and moment generating function of Y.

We have

    E[Y] = E[ E[Y | N] ] = E[Nµ] = E[N] E[X].

Furthermore, using the law of total variance,

    var(Y) = E[var(Y | N)] + var(E[Y | N]) = E[Nσ²] + var(Nµ) = E[N] σ² + µ² var(N).

Finally, note that

    E[exp(sY) | N = n] = M_X(s)^n = exp(n log M_X(s)),
implying that

    M_Y(s) = Σ_{n=1}^∞ exp(n log M_X(s)) P(N = n) = M_N(log M_X(s)).

The reader is encouraged to take the derivative of the above expression, and evaluate it at s = 0, to recover the formula E[Y] = E[N] E[X].

Example. Suppose that each X_i is exponentially distributed, with parameter λ, and that N is geometrically distributed, with parameter p ∈ (0, 1). We find that

    M_Y(s) = p e^{log M_X(s)} / (1 − (1 − p) e^{log M_X(s)}) = (pλ/(λ − s)) / (1 − (1 − p)λ/(λ − s)) = λp/(λp − s),

which we recognize as the moment generating function of an exponential random variable with parameter λp. Using the inversion theorem, we conclude that Y is exponentially distributed. In view of the fact that the sum of a fixed number of exponential random variables is far from exponential, this result is rather surprising. An intuitive explanation will be provided later in terms of the Poisson process.

3 TRANSFORMS ASSOCIATED WITH JOINT DISTRIBUTIONS

If two random variables X and Y are described by some joint distribution (e.g., a joint PDF), then each one is associated with a transform M_X(s) or M_Y(s). These are the transforms of the marginal distributions and do not convey information on the dependence between the two random variables. Such information is contained in a multivariate transform, which we now define.

Consider n random variables X_1, ..., X_n related to the same experiment. Let s_1, ..., s_n be real parameters. The associated multivariate transform is a function of these n parameters, defined by

    M_{X_1,...,X_n}(s_1, ..., s_n) = E[ e^{s_1 X_1 + ··· + s_n X_n} ].

The inversion property of transforms discussed earlier extends to the multivariate case. That is, if Y_1, ..., Y_n is another set of random variables and M_{X_1,...,X_n}(s_1, ..., s_n) and M_{Y_1,...,Y_n}(s_1, ..., s_n) are the same functions of s_1, ..., s_n in a neighborhood of the origin, then the joint distribution of X_1, ..., X_n is the same as the joint distribution of Y_1, ..., Y_n.
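The composition rule M_Y(s) = M_N(log M_X(s)) can be verified in code for the exponential–geometric example. The sketch below is illustrative (the values λ = 2 and p = 0.25 are arbitrary): it plugs the two transforms into the composition and checks that the result matches λp/(λp − s), the transform of an exponential random variable with parameter λp.

```python
import math

lam, p = 2.0, 0.25   # X_i ~ exponential with parameter lam; N ~ geometric with parameter p

def M_X(s):
    """MGF of an exponential random variable; finite for s < lam."""
    return lam / (lam - s)

def M_N(s):
    """MGF of a geometric random variable on {1, 2, ...}; finite for e^s < 1/(1-p)."""
    return p * math.exp(s) / (1 - (1 - p) * math.exp(s))

def M_Y(s):
    """Transform of the random sum Y = X_1 + ... + X_N via the composition rule."""
    return M_N(math.log(M_X(s)))

# For each s < lam*p, M_Y(s) should equal lam*p / (lam*p - s).
checks = [(s, M_Y(s), lam * p / (lam * p - s)) for s in (0.0, 0.1, 0.3, 0.45)]
```

The check only makes sense for s < λp (here 0.5), which is exactly the domain on which the Exp(λp) transform is finite.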
Example.

(a) Consider two random variables X and Y. Their joint transform is

    M_{X,Y}(s, t) = E[e^{sX} e^{tY}] = E[e^{sX+tY}] = M_Z(1),

where Z = sX + tY. Thus, calculating a multivariate transform essentially amounts to calculating the univariate transform associated with a single random variable that is a linear combination of the original random variables.

(b) If X and Y are independent, then M_{X,Y}(s, t) = M_X(s) M_Y(t).
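Property (b) can be illustrated with a small seeded Monte Carlo experiment (an illustration only; the sample size, seed, and parameter values are arbitrary choices). For independent standard normals X and Y, the joint transform should satisfy M_{X,Y}(s, t) = M_X(s) M_Y(t) = exp(s²/2) exp(t²/2).

```python
import math
import random

random.seed(12345)
s, t = 0.5, -0.3
n = 200000

acc = 0.0
for _ in range(n):
    x = random.gauss(0.0, 1.0)   # X ~ N(0, 1)
    y = random.gauss(0.0, 1.0)   # Y ~ N(0, 1), drawn independently of X
    acc += math.exp(s * x + t * y)

estimate = acc / n                       # Monte Carlo estimate of E[e^{sX + tY}]
exact = math.exp((s * s + t * t) / 2)    # M_X(s) * M_Y(t), by independence
```

Equivalently, by part (a), this is the univariate transform M_Z(1) of Z = sX + tY ~ N(0, s² + t²), which gives the same value exp((s² + t²)/2).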
MIT OpenCourseWare
6.436J / 15.085J Fundamentals of Probability, Fall 2008
For information about citing these materials or our Terms of Use, visit:
More informationMATH4427 Notebook 2 Spring 2016. 2 MATH4427 Notebook 2 3. 2.1 Definitions and Examples... 3. 2.2 Performance Measures for Estimators...
MATH4427 Notebook 2 Spring 2016 prepared by Professor Jenny Baglivo c Copyright 20092016 by Jenny A. Baglivo. All Rights Reserved. Contents 2 MATH4427 Notebook 2 3 2.1 Definitions and Examples...................................
More informationUsing Stein s Method to Show Poisson and Normal Limit Laws for Fringe Subtrees
AofA 2014, Paris, France DMTCS proc. (subm., by the authors, 1 12 Using Stein s Metho to Show Poisson an Normal Limit Laws for Fringe Subtrees Cecilia Holmgren 1 an Svante Janson 2 1 Department of Mathematics,
More informationEfficiency and the CramérRao Inequality
Chapter Efficiency and the CramérRao Inequality Clearly we would like an unbiased estimator ˆφ (X of φ (θ to produce, in the long run, estimates which are fairly concentrated i.e. have high precision.
More informationOptimal Energy Commitments with Storage and Intermittent Supply
Submitte to Operations Research manuscript OPRE200909406 Optimal Energy Commitments with Storage an Intermittent Supply Jae Ho Kim Department of Electrical Engineering, Princeton University, Princeton,
More informationECE302 Spring 2006 HW7 Solutions March 11, 2006 1
ECE32 Spring 26 HW7 Solutions March, 26 Solutions to HW7 Note: Most of these solutions were generated by R. D. Yates and D. J. Goodman, the authors of our textbook. I have added comments in italics where
More information4. Important theorems in quantum mechanics
TFY4215 Kjemisk fysikk og kvantemekanikk  Tillegg 4 1 TILLEGG 4 4. Important theorems in quantum mechanics Before attacking threeimensional potentials in the next chapter, we shall in chapter 4 of this
More information1 HighDimensional Space
Contents HighDimensional Space. Properties of HighDimensional Space..................... 4. The HighDimensional Sphere......................... 5.. The Sphere an the Cube in Higher Dimensions...........
More informationJoint Distributions. Tieming Ji. Fall 2012
Joint Distributions Tieming Ji Fall 2012 1 / 33 X : univariate random variable. (X, Y ): bivariate random variable. In this chapter, we are going to study the distributions of bivariate random variables
More informationA Coefficient of Variation for Skewed and HeavyTailed Insurance Losses. Michael R. Powers[ 1 ] Temple University and Tsinghua University
A Coefficient of Variation for Skewed and HeavyTailed Insurance Losses Michael R. Powers[ ] Temple University and Tsinghua University Thomas Y. Powers Yale University [June 2009] Abstract We propose a
More informationSYSM 6304: Risk and Decision Analysis Lecture 3 Monte Carlo Simulation
SYSM 6304: Risk and Decision Analysis Lecture 3 Monte Carlo Simulation M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu September 19, 2015 Outline
More informationJON HOLTAN. if P&C Insurance Ltd., Oslo, Norway ABSTRACT
OPTIMAL INSURANCE COVERAGE UNDER BONUSMALUS CONTRACTS BY JON HOLTAN if P&C Insurance Lt., Oslo, Norway ABSTRACT The paper analyses the questions: Shoul or shoul not an iniviual buy insurance? An if so,
More informationThe oneyear nonlife insurance risk
The oneyear nonlife insurance risk Ohlsson, Esbjörn & Lauzeningks, Jan Abstract With few exceptions, the literature on nonlife insurance reserve risk has been evote to the ultimo risk, the risk in the
More informationThe sample space for a pair of die rolls is the set. The sample space for a random number between 0 and 1 is the interval [0, 1].
Probability Theory Probability Spaces and Events Consider a random experiment with several possible outcomes. For example, we might roll a pair of dice, flip a coin three times, or choose a random real
More informationVERTICES OF GIVEN DEGREE IN SERIESPARALLEL GRAPHS
VERTICES OF GIVEN DEGREE IN SERIESPARALLEL GRAPHS MICHAEL DRMOTA, OMER GIMENEZ, AND MARC NOY Abstract. We show that the number of vertices of a given degree k in several kinds of seriesparallel labelled
More informationMaster s Theory Exam Spring 2006
Spring 2006 This exam contains 7 questions. You should attempt them all. Each question is divided into parts to help lead you through the material. You should attempt to complete as much of each problem
More informationSection 3.3. Differentiation of Polynomials and Rational Functions. Difference Equations to Differential Equations
Difference Equations to Differential Equations Section 3.3 Differentiation of Polynomials an Rational Functions In tis section we begin te task of iscovering rules for ifferentiating various classes of
More information