SYSM 6304: Risk and Decision Analysis Lecture 3 Monte Carlo Simulation

1 SYSM 6304: Risk and Decision Analysis Lecture 3 Monte Carlo Simulation M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas M.Vidyasagar@utdallas.edu September 19, 2015

2 Outline
1. Motivating Example
2. Generating Samples Using the Uniform Distribution
3. Example with Bounded Distributions; Example with Unbounded Distributions
4. Definition and Characterization of Independence; Covariance and Correlation Coefficient

4 Motivation Until now we have seen how to fit distributions to data. The objective of Monte Carlo simulation is to generate data from distributions. Even if we have exact formulas for the distribution functions of individual random variables, it is not always possible (or easy) to derive the distribution function of their sum, or product, etc.

5 Motivation (Cont'd) In applications such as supply chain management or project management, we often have available the distribution functions of the constituent parts of a large and complex system. Monte Carlo simulation allows us to generate samples for each constituent random variable and to combine those into samples of the overall random variable. These samples can then be used to estimate various quantities about the overall random variable, such as its mean, variance, tail values, etc.

6 Toy Manufacturing Example [Diagram: Start feeds stations 1 and 2 in parallel; station 1 feeds station 3, station 2 feeds station 4; Finish when both 3 and 4 are done.] Parts simultaneously start at stations 1 and 2, then move to 3 and 4 respectively. When both stations 3 and 4 finish, the process is complete. Y = max{X_1 + X_3, X_2 + X_4}.

7 Toy Manufacturing Example No. 2 [Diagram: Start feeds stations 1 and 2 in parallel; station 1 feeds station 3, station 2 feeds station 4; Finish when both 3 and 4 are done.] Parts simultaneously start at stations 1 and 2, then move to 3 and 4 respectively. When both stations 3 and 4 finish, the process is complete. Y_2 = max{max{X_1, X_2} + X_3, X_2 + X_4}.

8 General Approach From historical records we can generate cumulative distribution functions (cdfs) of the individual random variables X_1 through X_4. Even if we had formulas for the cdfs of the four random variables X_1 through X_4, it would be extremely difficult to find a formula for the cdf of Y.

9 General Approach (Cont'd) So instead we can generate lots of random samples of each of the four random variables X_1 through X_4, and use these to compute lots of random samples of Y. We can then:
- use these samples to estimate various quantities, e.g., the mean and variance of Y;
- try to fit some distribution to these randomly generated samples, to get an approximate cdf of Y;
- fit an empirical distribution to the data, and estimate how close it is to the true distribution.
Usually the middle bullet is not attempted.

10 Pertinent Questions Given cdfs of the individual random variables X_1 through X_4, how do we generate samples of X_1 through X_4 with the specified distributions? How can we generate an empirical distribution of the random variable Y? How well does this empirical distribution approximate the true but unknown distribution function?

13 Percentile Approach to Sampling [Figure: cdf Φ_X plotted against x, with grid points marked on both axes.] The grid points are uniformly spaced on the vertical axis, though not on the horizontal axis.

14 Generating Samples Using Uniform Distribution Suppose a cdf Φ_X is specified. How can we generate samples of X with this distribution? Suppose Z is uniformly distributed on [0, 1], and let Φ_X(·) denote the distribution function of X. Then the r.v. Φ_X^{-1}(Z) has the same distribution as X. To generate samples x_1, ..., x_n of X according to the distribution Φ_X(·), first generate samples z_1, ..., z_n with the uniform distribution, and then define x_i = Φ_X^{-1}(z_i), i = 1, ..., n.
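This inverse-transform recipe is easy to sketch in Python (used here in place of matlab). The exponential cdf below is an illustrative assumption, chosen because its inverse Φ_X^{-1}(z) = -log(1 - z)/λ is available in closed form.

```python
import numpy as np

def inverse_transform_samples(inv_cdf, n, rng):
    """Draw n samples with distribution Phi_X by applying
    Phi_X^{-1} to uniform samples on [0, 1]."""
    z = rng.uniform(size=n)          # z_1, ..., z_n ~ U[0, 1]
    return inv_cdf(z)                # x_i = Phi_X^{-1}(z_i)

# Illustrative choice: exponential distribution with rate lam,
# Phi_X(x) = 1 - exp(-lam * x), so Phi_X^{-1}(z) = -log(1 - z) / lam.
lam = 2.0
rng = np.random.default_rng(0)
x = inverse_transform_samples(lambda z: -np.log(1.0 - z) / lam, 100_000, rng)
print(x.mean())   # should be close to 1 / lam = 0.5
```

Any distribution with a computable inverse cdf can be plugged in for `inv_cdf` in the same way.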

15 Generating Samples Using Uniform Distribution (Cont'd) The matlab command rand(n,m) generates an n × m matrix of random (actually pseudo-random) numbers that are uniformly distributed. In particular, rand(n,1) generates n uniformly distributed random numbers. By substituting these numbers into Φ_X^{-1}, we can generate the desired samples of X. Note that matlab provides inverse cdfs for many widely used distributions, such as Gaussian (normal), Poisson, etc. In addition, the function stblinv.m can be used to invert a given stable distribution, while triinv can be used to invert a triangular distribution.

16 Generating Samples Using Uniform Distribution (Cont'd) If there are k independent random variables X_1, ..., X_k, we can generate a k × n array of uniformly distributed (pseudo-)random numbers Z by using the command Z = rand(k,n). Denote the entries of the k × n matrix Z as z_11, ..., z_kn. Then we can generate n independent samples of each of the k random variables via x_ij = Φ_{X_i}^{-1}(z_ij), i = 1, ..., k, j = 1, ..., n.

18 Monte Carlo Simulation Suppose Y = f(X_1, ..., X_k), where X_1, ..., X_k are independent random variables. How can we generate samples of Y? Generate n independent samples of each of the k random variables; call them x_11, ..., x_kn. Compute the samples y_i = f(x_1i, ..., x_ki), i = 1, ..., n. Construct the empirical distribution function Φ̂_Y(u) = (1/n) Σ_{i=1}^n I_{y_i ≤ u}.
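The three steps above can be sketched as follows. The U[0,1] inputs and the choice f = max{X_1 + X_3, X_2 + X_4} from the toy example are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50_000, 4

# Step 1: n independent samples of each of the k random variables
# (U[0,1] inputs stand in for whatever inverse cdfs are available).
x = rng.uniform(0.0, 1.0, size=(k, n))

# Step 2: push the samples through f to obtain samples of Y.
y = np.maximum(x[0] + x[2], x[1] + x[3])

# Step 3: the empirical distribution function
# Phi_hat_Y(u) = (1/n) #{i : y_i <= u}.
s = np.sort(y)
phi_hat = lambda u: np.searchsorted(s, u, side="right") / n

# With these inputs, Pr{Y <= 1} = (1/2)^2 = 1/4 analytically.
print(round(phi_hat(1.0), 3))
```

Only step 2 depends on the particular function f; steps 1 and 3 are the same for any model.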

19 Toy Manufacturing Example Revisited Recall the example where Y = max{X_1 + X_3, X_2 + X_4}. By substituting the samples into this formula, we can generate n independent samples of Y. So the question now arises: what do we do with these samples?

20 Empirical Distribution Function Suppose Y is the random variable of interest, and we have n independent samples of Y; call them y_1, ..., y_n. For each value of y, define the empirical distribution function Φ̂_Y(y) = (1/n) Σ_{i=1}^n I_{y_i ≤ y}, where I denotes the indicator function: it equals one if the statement in the subscript is true, and zero if it is false. So Φ̂_Y(y) is just the fraction of the n samples that are smaller than or equal to y.

21 Empirical Distribution Function (Cont'd) To construct the empirical distribution function, first sort all the samples y_1, ..., y_n in increasing order; call the result (y)_1, ..., (y)_n. Then construct a staircase function that jumps by 1/n at each sample (y)_i. That is the empirical distribution function.

22 Depiction: Empirical Distribution [Figure: staircase empirical distribution function Φ̂_Y(u), with jumps at the sorted samples (y)_1 through (y)_6.]

23 Theory Behind Monte Carlo Simulation Theory allows us to say just how many samples we need to draw to get a desired level of accuracy of the estimate, with a given level of confidence. With confidence 1 − δ it can be said that the true but unknown probability distribution function Φ_Y(u) satisfies max_u |Φ̂_Y(u) − Φ_Y(u)| ≤ θ, where θ(n, δ) = ((1/(2n)) log(2/δ))^{1/2}.

24 Theory Behind Monte Carlo Simulation (Cont'd) Turning this inequality around, if we want to approximate Φ_Y(u) to accuracy θ with confidence 1 − δ, then the minimum number of samples needed is n ≥ (1/(2θ²)) log(2/δ). With this many samples, the true but unknown probability distribution function Φ_Y(u) lies within a band of width θ around the empirical probability distribution Φ̂_Y(u). For example, to approximate Φ_Y(·) to accuracy 0.025 with confidence 95%, we require n ≥ 2951 samples. If we wish to be 99% sure, then we require 4,239 samples.
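Both directions of this bound are one-liners. The helpers below reproduce the slide's numbers for accuracy 0.025: the exact ceiling at 95% confidence is 2952 (reported as 2951 on the slide), and 4239 at 99% confidence.

```python
import math

def accuracy(n, delta):
    """theta(n, delta) = ((1/(2n)) log(2/delta))^(1/2)."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def samples_needed(theta, delta):
    """Smallest integer n with n >= (1/(2 theta^2)) log(2/delta)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * theta ** 2))

print(samples_needed(0.025, 0.05))  # 2952 (the slide rounds to 2951)
print(samples_needed(0.025, 0.01))  # 4239
```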

25 Depiction: Error Bands for Empirical Distribution [Figure: empirical distribution Φ̂_Y(u), with thin green and vermilion staircase functions showing upper and lower bounds for the true but unknown distribution Φ_Y(·).] These bounds can be used to bound percentile values of Y with specified confidence and accuracy.

26 Estimating Value at Risk One of the most common uses of Monte Carlo simulation is estimating the Value at Risk (VaR). Suppose we wish to determine a value V such that Pr{Y > V} ≤ α, where α is a pre-specified level. Usual values of α are 0.01 or 0.05. If α = 0.05, then V is called the 95% Value at Risk, whereas if α = 0.01 then V is called the 99% Value at Risk.

27 Estimating Value at Risk (Cont'd) We can express the VaR in terms of the cdf or the complementary cdf (ccdf) Φ̄_Y = 1 − Φ_Y: V = Φ_Y^{-1}(1 − α) = Φ̄_Y^{-1}(α). The difficulty, however, is that we don't know the true cdf Φ_Y or the true ccdf Φ̄_Y. This is where we can use the empirical distribution.

28 Estimating Value at Risk (Cont'd) Suppose α = 0.05, so that we wish to estimate the 95% VaR. Choose θ = α/2 = 0.025. Then choose the desired confidence level δ, and the corresponding number of samples n according to n ≥ (1/(2θ²)) log(2/δ). With this many samples, we know that the empirical distribution function Φ̂_Y is within θ of the true but unknown distribution function.

29 Estimating Value at Risk (Cont'd) Now compute V̂ according to Φ̂_Y(V̂) ≥ 1 − α/2 = 1 − θ. Then, with confidence 1 − δ, we can say that Φ_Y(V̂) ≥ 1 − α. Therefore V̂ is an estimated VaR, at a confidence level of 1 − δ.
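A sketch of this VaR recipe in Python: V̂ is the smallest sample at which the empirical cdf reaches 1 − α/2. The standard-normal samples are an illustrative assumption, chosen so the answer can be checked against the known 0.975 quantile (about 1.96).

```python
import math
import numpy as np

def estimated_var(samples, alpha):
    """Return V_hat with Phi_hat_Y(V_hat) >= 1 - alpha/2."""
    s = np.sort(samples)
    n = len(s)
    j = math.ceil(n * (1.0 - alpha / 2.0)) - 1   # empirical quantile index
    return s[j]

rng = np.random.default_rng(2)
v = estimated_var(rng.standard_normal(200_000), alpha=0.05)
print(round(v, 2))   # near 1.96, the 0.975 quantile of N(0, 1)
```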

30 Estimating Value at Risk (Cont'd) Often we wish to estimate the VaR of some function of Y. For example, suppose Y is the time needed to complete some manufacturing job. The manufacturer receives a bonus for early completion and pays a penalty for late completion. We wish to estimate the VaR of the bonus/penalty. This situation can be modeled by defining the bonus B as a function of Y, with a negative bonus corresponding to a penalty.

31 Estimating Value at Risk (Cont'd) Once a level α is specified, we wish to estimate the value V_B such that Pr{B(Y) ≥ V_B} = 1 − α. Again, using the empirical distribution of Y, we can construct a corresponding empirical distribution of the bonus B, and use that to estimate the VaR of the bonus.

32 Estimating Value at Risk (Cont'd) But often there is a simpler way to do this. If the bonus is a monotonic function of the time to completion (which is a reasonable assumption), then we simply compute (or estimate) the VaR of Y and substitute that into the formula for the bonus.

33 Estimating Percentiles The VaR calculation applies to the far end of the distribution. The same philosophy can also be applied to estimating other percentiles, such as the median. Suppose we wish to estimate the median value of Y. We have the empirical estimate Φ̂_Y, and we have chosen the number of samples n such that, with confidence 1 − δ, we can assert that |Φ̂_Y(u) − Φ_Y(u)| ≤ θ for all u. Now the median corresponds to Φ_Y^{-1}(0.5). So we can compute Φ̂_Y^{-1}(0.5 − θ) and Φ̂_Y^{-1}(0.5 + θ). These numbers give a range for the median. To estimate other percentiles, just replace 0.5 by the desired number.
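Reading the band [Φ̂_Y^{-1}(0.5 − θ), Φ̂_Y^{-1}(0.5 + θ)] off the sorted samples can be sketched as below. The U[0,1] samples are an illustrative assumption, so the band should bracket the true median 0.5.

```python
import math
import numpy as np

def percentile_band(samples, p, theta):
    """[Phi_hat^{-1}(p - theta), Phi_hat^{-1}(p + theta)] from sorted samples."""
    s = np.sort(samples)
    n = len(s)
    lo = s[max(math.ceil(n * (p - theta)) - 1, 0)]
    hi = s[min(math.ceil(n * (p + theta)) - 1, n - 1)]
    return lo, hi

rng = np.random.default_rng(3)
lo, hi = percentile_band(rng.uniform(size=100_000), p=0.5, theta=0.01)
print(round(lo, 3), round(hi, 3))   # a narrow band around 0.5
```

Passing a different `p` gives the corresponding band for any other percentile.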

34 Hoeffding's Inequality If the random variable Y is bounded, then a very useful estimate known as Hoeffding's inequality becomes applicable. Note that if popular models such as Gaussian or log-normal distributions are used to model various quantities, then in principle the random variables are not bounded, and Hoeffding's inequality does not apply. But if triangular distributions (for example) are used, then Hoeffding's inequality does apply.

35 Hoeffding's Inequality (Cont'd) Suppose Y is a random variable assuming values in a finite interval [a, b]. Suppose y_1, ..., y_n are independent samples of Y, and define μ̂_Y = (1/n) Σ_{i=1}^n y_i to be the empirical mean of Y. Let μ_Y denote the true but unknown mean of Y. Hoeffding's inequality states that Pr{|μ̂_Y − μ_Y| > ε} ≤ 2 exp(−2nε²/(b − a)²).

36 Hoeffding's Inequality (Cont'd) Therefore, to estimate the quantity μ_Y to within a specified accuracy ε with confidence 1 − δ, we require n ≥ ((b − a)²/(2ε²)) log(2/δ) samples. We can also compute the accuracy ε in terms of the number of samples n and the confidence δ: ε = (b − a) ((1/(2n)) log(2/δ))^{1/2}.
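Both directions of Hoeffding's bound as small helpers; the printed value previews the bounded example later in the lecture (Y in [3, 13], 26,500 samples, 99% confidence).

```python
import math

def hoeffding_samples(a, b, eps, delta):
    """n >= ((b - a)^2 / (2 eps^2)) log(2/delta)."""
    return math.ceil((b - a) ** 2 / (2.0 * eps ** 2) * math.log(2.0 / delta))

def hoeffding_accuracy(a, b, n, delta):
    """eps = (b - a) ((1/(2n)) log(2/delta))^(1/2)."""
    return (b - a) * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

print(round(hoeffding_accuracy(3, 13, 26_500, 0.01), 3))  # 0.1
```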

39 Specification of Individual Random Variables Suppose X_1 has a triangular distribution with minimum a_1 = 1, mode b_1 = 2, and maximum c_1 = 6; X_2 has a triangular distribution with minimum a_2 = 1, mode b_2 = 3, and maximum c_2 = 5; X_3 has a triangular distribution with minimum a_3 = 3, mode b_3 = 5, and maximum c_3 = 7; and X_4 has a triangular distribution with minimum a_4 = 2, mode b_4 = 5, and maximum c_4 = 8.

40 Determination of the Number of Samples Let us choose θ = 0.01, δ = 0.01. This leads to n = (1/(2θ²)) log(2/δ) ≈ 26,492. Let us round this up to 26,500 samples. Repeating earlier steps leads to the empirical distribution shown in the next slide.
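This bounded example can be reproduced directly with numpy's triangular sampler (Python in place of matlab; the seed is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 26_500   # from theta = 0.01, delta = 0.01, rounded up

# Triangular distributions (min, mode, max) from the specification.
x1 = rng.triangular(1, 2, 6, size=n)
x2 = rng.triangular(1, 3, 5, size=n)
x3 = rng.triangular(3, 5, 7, size=n)
x4 = rng.triangular(2, 5, 8, size=n)

y = np.maximum(x1 + x3, x2 + x4)   # samples of the processing time
print(round(float(np.median(y)), 2), round(float(y.mean()), 2))
```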

41 Empirical Distribution of Processing Time [Figure: empirical distribution function Φ̂_Y(y) versus values of Y.]

42 Estimating Median Processing Time Following earlier steps, we find [8.7431, ] to be the 99% confidence interval for the median processing time.

43 Estimating the Mean Processing Time Because Y is bounded, lying between 3 and 13, we can apply Hoeffding's inequality. Because we have 26,500 samples, we can compute the achievable accuracy at a confidence level of 1 − δ using the formula ε = (b − a) ((1/(2n)) log(2/δ))^{1/2}. We can then assert with confidence 1 − δ that the true mean μ_Y lies in the interval [μ̂_Y − ε, μ̂_Y + ε], where μ̂_Y is the empirical mean.

44 Estimating the Mean Processing Time (Cont'd) In the present case, choosing δ = 0.01 leads to ε ≈ 0.1. The empirical mean, that is, the average of the 26,500 samples of Y, turns out to be μ̂_Y = 8.8730. So we can state with 99% confidence that the true mean of Y lies in the interval [8.7730, 8.9730]. This estimate does not differ too much from the estimate for the median, which is [8.7431, ]. This is because the empirical distribution of Y is not very skewed.

46 Toy Manufacturing Example: Reprise [Diagram: Start feeds stations 1 and 2 in parallel; station 1 feeds station 3, station 2 feeds station 4; Finish when both 3 and 4 are done.] Parts simultaneously start at stations 1 and 2, then move to 3 and 4 respectively. When both stations 3 and 4 finish, the process is complete. Y = max{X_1 + X_3, X_2 + X_4}.

47 Specification of Individual Random Variables Suppose X_1 is log-normally distributed with mean μ_1 = 1 and standard deviation (of log X_1) of s_1 = 0.5; X_2 has a triangular distribution with minimum a_2 = 1, mode b_2 = 2, and maximum c_2 = 6; X_3 is log-normally distributed with mean μ_3 = 2.25 and standard deviation (of log X_3) of s_3; and X_4 has a triangular distribution with minimum a_4 = 5, mode b_4 = 9, and maximum c_4 = 16.

48 Generation of Samples Suppose we wish to approximate the distribution function of the total processing time Y to an accuracy of 0.025, with a confidence of 0.99. Therefore θ = 0.025, δ = 0.01, which means that we require n = (1/(2θ²)) log(2/δ) ≈ 4239 samples. We can round this up to n = 4300.

49 Generation of Samples (Cont'd) By using the appropriate matlab commands, we can generate n samples of each of the four random variables. By substituting into the expression Y = max{X_1 + X_3, X_2 + X_4}, we can generate n independent samples of Y. This leads to the empirical distribution function shown in the next slide.
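A Python version of this run (in place of matlab). Two readings here are assumptions: μ and s are interpreted as the mean and standard deviation of log X, which is what numpy's lognormal expects, and the unstated s_3 is set to 0.5 purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4_300   # from theta = 0.025, delta = 0.01, rounded up

x1 = rng.lognormal(mean=1.0, sigma=0.5, size=n)
x2 = rng.triangular(1, 2, 6, size=n)
x3 = rng.lognormal(mean=2.25, sigma=0.5, size=n)   # s3 assumed, not from the slide
x4 = rng.triangular(5, 9, 16, size=n)

y = np.maximum(x1 + x3, x2 + x4)   # samples of the total processing time

# Empirical distribution function of Y, as before.
s = np.sort(y)
phi_hat = lambda u: np.searchsorted(s, u, side="right") / n
print(round(float(np.median(y)), 2))
```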

50 Empirical Distribution of Processing Time [Figure: empirical distribution function of Y, computed via Monte Carlo simulation, versus values of Y.]

51 Estimating Value at Risk of Processing Time We have chosen θ = 0.025, so we can estimate the 1 − θ Value at Risk (97.5% VaR) of Y using the empirical distribution. Note that 1 − 2θ = 0.95. So, with confidence 1 − δ = 0.99, we can say that the 95% VaR of the empirical distribution of Y is no larger than the 97.5% VaR of the true but unknown distribution of Y. Reading this value off the empirical distribution, we are 99% sure that the 97.5% VaR of Y is not smaller than this number.

52 Estimating the Median We would like to estimate the median value of Y, which is Φ_Y^{-1}(0.5). By finding the range of values [Φ̂_Y^{-1}(0.5 − θ), Φ̂_Y^{-1}(0.5 + θ)], we can get an estimate for the median value of Y, with confidence 1 − δ. This interval can be read off the empirical distribution, and we are 99% sure that the median value of Y lies in it. Because the log-normal distribution is unbounded, we cannot apply Hoeffding's inequality to this problem.

55 Independence of Real Random Variables There are two equivalent ways of defining independence in this case. X, Y are independent if Φ_{X,Y}(a, b) = Φ_X(a) Φ_Y(b) for all a, b, or equivalently, in terms of the densities, φ_{X,Y}(x, y) = φ_X(x) φ_Y(y) for all x, y.

56 Sums of Independent Random Variables Suppose X, Y are independent r.v.s with densities φ_X and φ_Y respectively. Then the r.v. Z = X + Y has the density φ_Z(z) = ∫_{−∞}^{∞} φ_X(u) φ_Y(z − u) du = ∫_{−∞}^{∞} φ_X(z − v) φ_Y(v) dv. In other words, the density of X + Y is the convolution of the densities of X and Y. If X and Y are not independent, then this statement is false in general.
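A quick numerical check of the convolution rule: for two independent U[0,1] variables, the convolution integral gives the triangular density φ_Z(z) = z on [0, 1] and 2 − z on [1, 2], so Pr{Z ≤ 1} = 1/2 and Pr{Z ≤ 0.5} = 1/8. Simulated sums agree.

```python
import numpy as np

rng = np.random.default_rng(6)
z = rng.uniform(size=500_000) + rng.uniform(size=500_000)  # Z = X + Y

# Convolving two U[0,1] densities gives the triangular density on [0, 2].
print(round(float(np.mean(z <= 1.0)), 3))   # close to 0.5
print(round(float(np.mean(z <= 0.5)), 3))   # close to 0.125
```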

57 Outline Motivating Example Definition and Characterization of Independence Covariance and Correlation Coefficient 1 Motivating Example 2 3 Example with Bounded Distributions Example with Unbounded Distributions 4 Definition and Characterization of Independence Covariance and Correlation Coefficient

58 Covariance Suppose X, Y are real-valued r.v.s. Define the expected values E[X], E[Y], variances V(X), V(Y), and standard deviations σ(X) = (V(X))^{1/2} and σ(Y) = (V(Y))^{1/2}. The quantity C(X, Y) = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X]E[Y] is called the covariance of X and Y.

59 Correlation Coefficient The quantity ρ(X, Y) = C(X, Y)/(σ(X)σ(Y)) = (E[XY] − E[X]E[Y])/(σ(X)σ(Y)) is called the correlation coefficient between X and Y. ρ(X, Y) always lies in the interval [−1, 1]. If ρ(X, Y) > 0 we say that X, Y are positively correlated; if ρ(X, Y) < 0 we say that X, Y are negatively correlated; and if ρ(X, Y) = 0 we say that X, Y are uncorrelated. Note that the correlation coefficient is invariant under linear transformations with positive slopes: if a, b, c, d are real numbers with a, c > 0, then ρ(X, Y) = ρ(aX + b, cY + d). (If a and c have opposite signs, the correlation changes sign.)
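A numerical sketch of the correlation coefficient and its invariance under affine maps with positive slopes; the model Y = 0.6 X + 0.8 · noise is an illustrative assumption constructed so that ρ(X, Y) = 0.6.

```python
import numpy as np

def corr(x, y):
    """Sample correlation coefficient rho(X, Y)."""
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(7)
x = rng.standard_normal(100_000)
y = 0.6 * x + 0.8 * rng.standard_normal(100_000)   # rho(X, Y) = 0.6 by design

r = corr(x, y)
print(round(r, 2))                                  # near 0.6
print(abs(corr(3 * x + 1, 5 * y - 2) - r) < 1e-9)  # invariance: True
```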

60 Correlation Coefficient 2 Common Misinterpretation: if X, Y are uncorrelated, then they are independent. The correlation coefficient ρ(X, Y) tells us only whether E[XY] is more or less than the product E[X]E[Y]. Fact: if X, Y are independent, then E[XY] = E[X] E[Y]. Therefore ρ(X, Y) = 0 if X, Y are independent. But the converse is not true at all!
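A standard counterexample to the converse: if X is symmetric about 0 and Y = X², then E[XY] = E[X³] = 0 = E[X]E[Y], so ρ(X, Y) = 0 even though Y is completely determined by X. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.uniform(-1.0, 1.0, size=500_000)
y = x ** 2   # a deterministic function of x, hence maximally dependent

# The sample correlation is essentially zero despite the dependence.
print(np.corrcoef(x, y)[0, 1])
```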

61 More Than Two Random Variables We now discuss the case where there are not just two, but d ≥ 2 real-valued random variables X_1, ..., X_d. In this case we can define a d × d covariance matrix C by c_ij = E[X_i X_j] − E[X_i]E[X_j], i, j = 1, ..., d. Then C is a symmetric and positive semidefinite matrix; that is, all of its eigenvalues are real and nonnegative.

62 Multivariate Gaussian Distribution Suppose d ≥ 2 is some integer, μ ∈ R^d, and Σ is a d × d symmetric and positive definite matrix. Then the d-dimensional joint density function φ_X(x) = (1/((2π)^{d/2} det(Σ)^{1/2})) exp(−(1/2)(x − μ)^T Σ^{-1} (x − μ)) defines the d-dimensional Gaussian distribution with mean μ and covariance matrix Σ. It is easy to check that it is a generalization of the one-dimensional Gaussian density function φ(x) = (1/(√(2π) σ)) exp(−(x − μ)²/(2σ²)).

63 Multivariate Gaussian Distribution 2 The d-dimensional Gaussian distribution defines a collection of d random variables with mean E(X) = μ, that is, E(X_i) = μ_i, and covariance matrix Σ. Important Property: It is easy to see that the d random variables are pairwise uncorrelated if and only if the matrix Σ is diagonal. Moreover, for Gaussian distributions only, it can be shown that if Σ is diagonal, then the d random variables are also pairwise independent.

64 Simulating Correlated Gaussian Variables The Matlab command norminv can be used with scalars as well as vectors and matrices. Thus if x is an n-dimensional vector of samples generated using the uniform distribution, then y = norminv(x, μ, σ) generates Gaussian samples with mean μ and standard deviation σ. Note, however, that norminv works elementwise, so this produces independent Gaussian samples. To generate samples with mean vector μ and covariance matrix Σ, one can instead use the command mvnrnd(μ, Σ, n), or apply a Cholesky factor L of Σ (with Σ = LL^T) to independent standard Gaussian samples.
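A sketch of the Cholesky route in Python (the mean vector and covariance matrix are assumed for illustration): if Σ = LL^T and z is a vector of independent standard normals, then μ + Lz has mean μ and covariance Σ.

```python
import numpy as np

rng = np.random.default_rng(9)
mu = np.array([1.0, -2.0])             # assumed mean vector
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])         # assumed covariance matrix

L = np.linalg.cholesky(Sigma)          # Sigma = L @ L.T
z = rng.standard_normal(size=(2, 200_000))
x = mu[:, None] + L @ z                # columns are N(mu, Sigma) samples

print(np.round(np.cov(x), 2))          # close to Sigma
```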


More information

Module 3: Correlation and Covariance

Module 3: Correlation and Covariance Using Statistical Data to Make Decisions Module 3: Correlation and Covariance Tom Ilvento Dr. Mugdim Pašiƒ University of Delaware Sarajevo Graduate School of Business O ften our interest in data analysis

More information

Efficiency and the Cramér-Rao Inequality

Efficiency and the Cramér-Rao Inequality Chapter Efficiency and the Cramér-Rao Inequality Clearly we would like an unbiased estimator ˆφ (X of φ (θ to produce, in the long run, estimates which are fairly concentrated i.e. have high precision.

More information

6.4 Normal Distribution

6.4 Normal Distribution Contents 6.4 Normal Distribution....................... 381 6.4.1 Characteristics of the Normal Distribution....... 381 6.4.2 The Standardized Normal Distribution......... 385 6.4.3 Meaning of Areas under

More information

Quadratic forms Cochran s theorem, degrees of freedom, and all that

Quadratic forms Cochran s theorem, degrees of freedom, and all that Quadratic forms Cochran s theorem, degrees of freedom, and all that Dr. Frank Wood Frank Wood, fwood@stat.columbia.edu Linear Regression Models Lecture 1, Slide 1 Why We Care Cochran s theorem tells us

More information

5.1 Identifying the Target Parameter

5.1 Identifying the Target Parameter University of California, Davis Department of Statistics Summer Session II Statistics 13 August 20, 2012 Date of latest update: August 20 Lecture 5: Estimation with Confidence intervals 5.1 Identifying

More information

Notes on Continuous Random Variables

Notes on Continuous Random Variables Notes on Continuous Random Variables Continuous random variables are random quantities that are measured on a continuous scale. They can usually take on any value over some interval, which distinguishes

More information

Probability and Random Variables. Generation of random variables (r.v.)

Probability and Random Variables. Generation of random variables (r.v.) Probability and Random Variables Method for generating random variables with a specified probability distribution function. Gaussian And Markov Processes Characterization of Stationary Random Process Linearly

More information

Descriptive Statistics

Descriptive Statistics Y520 Robert S Michael Goal: Learn to calculate indicators and construct graphs that summarize and describe a large quantity of values. Using the textbook readings and other resources listed on the web

More information

THE NUMBER OF GRAPHS AND A RANDOM GRAPH WITH A GIVEN DEGREE SEQUENCE. Alexander Barvinok

THE NUMBER OF GRAPHS AND A RANDOM GRAPH WITH A GIVEN DEGREE SEQUENCE. Alexander Barvinok THE NUMBER OF GRAPHS AND A RANDOM GRAPH WITH A GIVEN DEGREE SEQUENCE Alexer Barvinok Papers are available at http://www.math.lsa.umich.edu/ barvinok/papers.html This is a joint work with J.A. Hartigan

More information

15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications Matrix Math Review .6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

More information

SYSM 6304: Risk and Decision Analysis Lecture 5: Methods of Risk Analysis

SYSM 6304: Risk and Decision Analysis Lecture 5: Methods of Risk Analysis SYSM 6304: Risk and Decision Analysis Lecture 5: Methods of Risk Analysis M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu October 17, 2015 Outline

More information

Point and Interval Estimates

Point and Interval Estimates Point and Interval Estimates Suppose we want to estimate a parameter, such as p or µ, based on a finite sample of data. There are two main methods: 1. Point estimate: Summarize the sample by a single number

More information

Master s Theory Exam Spring 2006

Master s Theory Exam Spring 2006 Spring 2006 This exam contains 7 questions. You should attempt them all. Each question is divided into parts to help lead you through the material. You should attempt to complete as much of each problem

More information

Gaussian Conjugate Prior Cheat Sheet

Gaussian Conjugate Prior Cheat Sheet Gaussian Conjugate Prior Cheat Sheet Tom SF Haines 1 Purpose This document contains notes on how to handle the multivariate Gaussian 1 in a Bayesian setting. It focuses on the conjugate prior, its Bayesian

More information

1 Inner Products and Norms on Real Vector Spaces

1 Inner Products and Norms on Real Vector Spaces Math 373: Principles Techniques of Applied Mathematics Spring 29 The 2 Inner Product 1 Inner Products Norms on Real Vector Spaces Recall that an inner product on a real vector space V is a function from

More information

Aggregate Loss Models

Aggregate Loss Models Aggregate Loss Models Chapter 9 Stat 477 - Loss Models Chapter 9 (Stat 477) Aggregate Loss Models Brian Hartman - BYU 1 / 22 Objectives Objectives Individual risk model Collective risk model Computing

More information

The sample space for a pair of die rolls is the set. The sample space for a random number between 0 and 1 is the interval [0, 1].

The sample space for a pair of die rolls is the set. The sample space for a random number between 0 and 1 is the interval [0, 1]. Probability Theory Probability Spaces and Events Consider a random experiment with several possible outcomes. For example, we might roll a pair of dice, flip a coin three times, or choose a random real

More information

Correlation in Random Variables

Correlation in Random Variables Correlation in Random Variables Lecture 11 Spring 2002 Correlation in Random Variables Suppose that an experiment produces two random variables, X and Y. What can we say about the relationship between

More information

Linear Programming. March 14, 2014

Linear Programming. March 14, 2014 Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

More information

4 Sums of Random Variables

4 Sums of Random Variables Sums of a Random Variables 47 4 Sums of Random Variables Many of the variables dealt with in physics can be expressed as a sum of other variables; often the components of the sum are statistically independent.

More information

MULTIVARIATE PROBABILITY DISTRIBUTIONS

MULTIVARIATE PROBABILITY DISTRIBUTIONS MULTIVARIATE PROBABILITY DISTRIBUTIONS. PRELIMINARIES.. Example. Consider an experiment that consists of tossing a die and a coin at the same time. We can consider a number of random variables defined

More information

UNIT I: RANDOM VARIABLES PART- A -TWO MARKS

UNIT I: RANDOM VARIABLES PART- A -TWO MARKS UNIT I: RANDOM VARIABLES PART- A -TWO MARKS 1. Given the probability density function of a continuous random variable X as follows f(x) = 6x (1-x) 0

More information

Discrete Mathematics and Probability Theory Fall 2009 Satish Rao, David Tse Note 18. A Brief Introduction to Continuous Probability

Discrete Mathematics and Probability Theory Fall 2009 Satish Rao, David Tse Note 18. A Brief Introduction to Continuous Probability CS 7 Discrete Mathematics and Probability Theory Fall 29 Satish Rao, David Tse Note 8 A Brief Introduction to Continuous Probability Up to now we have focused exclusively on discrete probability spaces

More information

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

More information

Lecture 8: More Continuous Random Variables

Lecture 8: More Continuous Random Variables Lecture 8: More Continuous Random Variables 26 September 2005 Last time: the eponential. Going from saying the density e λ, to f() λe λ, to the CDF F () e λ. Pictures of the pdf and CDF. Today: the Gaussian

More information

Multivariate normal distribution and testing for means (see MKB Ch 3)

Multivariate normal distribution and testing for means (see MKB Ch 3) Multivariate normal distribution and testing for means (see MKB Ch 3) Where are we going? 2 One-sample t-test (univariate).................................................. 3 Two-sample t-test (univariate).................................................

More information

Variance Reduction. Pricing American Options. Monte Carlo Option Pricing. Delta and Common Random Numbers

Variance Reduction. Pricing American Options. Monte Carlo Option Pricing. Delta and Common Random Numbers Variance Reduction The statistical efficiency of Monte Carlo simulation can be measured by the variance of its output If this variance can be lowered without changing the expected value, fewer replications

More information

Monte Carlo Simulation

Monte Carlo Simulation 1 Monte Carlo Simulation Stefan Weber Leibniz Universität Hannover email: sweber@stochastik.uni-hannover.de web: www.stochastik.uni-hannover.de/ sweber Monte Carlo Simulation 2 Quantifying and Hedging

More information

CHAPTER 6: Continuous Uniform Distribution: 6.1. Definition: The density function of the continuous random variable X on the interval [A, B] is.

CHAPTER 6: Continuous Uniform Distribution: 6.1. Definition: The density function of the continuous random variable X on the interval [A, B] is. Some Continuous Probability Distributions CHAPTER 6: Continuous Uniform Distribution: 6. Definition: The density function of the continuous random variable X on the interval [A, B] is B A A x B f(x; A,

More information

E3: PROBABILITY AND STATISTICS lecture notes

E3: PROBABILITY AND STATISTICS lecture notes E3: PROBABILITY AND STATISTICS lecture notes 2 Contents 1 PROBABILITY THEORY 7 1.1 Experiments and random events............................ 7 1.2 Certain event. Impossible event............................

More information

General Sampling Methods

General Sampling Methods General Sampling Methods Reference: Glasserman, 2.2 and 2.3 Claudio Pacati academic year 2016 17 1 Inverse Transform Method Assume U U(0, 1) and let F be the cumulative distribution function of a distribution

More information

Spatial Statistics Chapter 3 Basics of areal data and areal data modeling

Spatial Statistics Chapter 3 Basics of areal data and areal data modeling Spatial Statistics Chapter 3 Basics of areal data and areal data modeling Recall areal data also known as lattice data are data Y (s), s D where D is a discrete index set. This usually corresponds to data

More information

TOPIC 4: DERIVATIVES

TOPIC 4: DERIVATIVES TOPIC 4: DERIVATIVES 1. The derivative of a function. Differentiation rules 1.1. The slope of a curve. The slope of a curve at a point P is a measure of the steepness of the curve. If Q is a point on the

More information

Stats on the TI 83 and TI 84 Calculator

Stats on the TI 83 and TI 84 Calculator Stats on the TI 83 and TI 84 Calculator Entering the sample values STAT button Left bracket { Right bracket } Store (STO) List L1 Comma Enter Example: Sample data are {5, 10, 15, 20} 1. Press 2 ND and

More information

3.4 The Normal Distribution

3.4 The Normal Distribution 3.4 The Normal Distribution All of the probability distributions we have found so far have been for finite random variables. (We could use rectangles in a histogram.) A probability distribution for a continuous

More information

Simple Linear Regression Inference

Simple Linear Regression Inference Simple Linear Regression Inference 1 Inference requirements The Normality assumption of the stochastic term e is needed for inference even if it is not a OLS requirement. Therefore we have: Interpretation

More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

Classification Problems

Classification Problems Classification Read Chapter 4 in the text by Bishop, except omit Sections 4.1.6, 4.1.7, 4.2.4, 4.3.3, 4.3.5, 4.3.6, 4.4, and 4.5. Also, review sections 1.5.1, 1.5.2, 1.5.3, and 1.5.4. Classification Problems

More information

CA200 Quantitative Analysis for Business Decisions. File name: CA200_Section_04A_StatisticsIntroduction

CA200 Quantitative Analysis for Business Decisions. File name: CA200_Section_04A_StatisticsIntroduction CA200 Quantitative Analysis for Business Decisions File name: CA200_Section_04A_StatisticsIntroduction Table of Contents 4. Introduction to Statistics... 1 4.1 Overview... 3 4.2 Discrete or continuous

More information

Chapter 4 - Lecture 1 Probability Density Functions and Cumul. Distribution Functions

Chapter 4 - Lecture 1 Probability Density Functions and Cumul. Distribution Functions Chapter 4 - Lecture 1 Probability Density Functions and Cumulative Distribution Functions October 21st, 2009 Review Probability distribution function Useful results Relationship between the pdf and the

More information

Solving Linear Systems, Continued and The Inverse of a Matrix

Solving Linear Systems, Continued and The Inverse of a Matrix , Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

More information

Lecture 7: Continuous Random Variables

Lecture 7: Continuous Random Variables Lecture 7: Continuous Random Variables 21 September 2005 1 Our First Continuous Random Variable The back of the lecture hall is roughly 10 meters across. Suppose it were exactly 10 meters, and consider

More information

Least Squares Estimation

Least Squares Estimation Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David

More information

SOLVING LINEAR SYSTEMS

SOLVING LINEAR SYSTEMS SOLVING LINEAR SYSTEMS Linear systems Ax = b occur widely in applied mathematics They occur as direct formulations of real world problems; but more often, they occur as a part of the numerical analysis

More information

Credit Risk Models: An Overview

Credit Risk Models: An Overview Credit Risk Models: An Overview Paul Embrechts, Rüdiger Frey, Alexander McNeil ETH Zürich c 2003 (Embrechts, Frey, McNeil) A. Multivariate Models for Portfolio Credit Risk 1. Modelling Dependent Defaults:

More information

Elliptical copulae. Dorota Kurowicka, Jolanta Misiewicz, Roger Cooke

Elliptical copulae. Dorota Kurowicka, Jolanta Misiewicz, Roger Cooke Elliptical copulae Dorota Kurowicka, Jolanta Misiewicz, Roger Cooke Abstract: In this paper we construct a copula, that is, a distribution with uniform marginals. This copula is continuous and can realize

More information

BNG 202 Biomechanics Lab. Descriptive statistics and probability distributions I

BNG 202 Biomechanics Lab. Descriptive statistics and probability distributions I BNG 202 Biomechanics Lab Descriptive statistics and probability distributions I Overview The overall goal of this short course in statistics is to provide an introduction to descriptive and inferential

More information

1 Prior Probability and Posterior Probability

1 Prior Probability and Posterior Probability Math 541: Statistical Theory II Bayesian Approach to Parameter Estimation Lecturer: Songfeng Zheng 1 Prior Probability and Posterior Probability Consider now a problem of statistical inference in which

More information

Probability Calculator

Probability Calculator Chapter 95 Introduction Most statisticians have a set of probability tables that they refer to in doing their statistical wor. This procedure provides you with a set of electronic statistical tables that

More information

Capital Allocation and Bank Management Based on the Quantification of Credit Risk

Capital Allocation and Bank Management Based on the Quantification of Credit Risk Capital Allocation and Bank Management Based on the Quantification of Credit Risk Kenji Nishiguchi, Hiroshi Kawai, and Takanori Sazaki 1. THE NEED FOR QUANTIFICATION OF CREDIT RISK Liberalization and deregulation

More information

Numerical methods for American options

Numerical methods for American options Lecture 9 Numerical methods for American options Lecture Notes by Andrzej Palczewski Computational Finance p. 1 American options The holder of an American option has the right to exercise it at any moment

More information

Example: Credit card default, we may be more interested in predicting the probabilty of a default than classifying individuals as default or not.

Example: Credit card default, we may be more interested in predicting the probabilty of a default than classifying individuals as default or not. Statistical Learning: Chapter 4 Classification 4.1 Introduction Supervised learning with a categorical (Qualitative) response Notation: - Feature vector X, - qualitative response Y, taking values in C

More information

Chapter 4. Probability and Probability Distributions

Chapter 4. Probability and Probability Distributions Chapter 4. robability and robability Distributions Importance of Knowing robability To know whether a sample is not identical to the population from which it was selected, it is necessary to assess the

More information

A Log-Robust Optimization Approach to Portfolio Management

A Log-Robust Optimization Approach to Portfolio Management A Log-Robust Optimization Approach to Portfolio Management Dr. Aurélie Thiele Lehigh University Joint work with Ban Kawas Research partially supported by the National Science Foundation Grant CMMI-0757983

More information

Representing Uncertainty by Probability and Possibility What s the Difference?

Representing Uncertainty by Probability and Possibility What s the Difference? Representing Uncertainty by Probability and Possibility What s the Difference? Presentation at Amsterdam, March 29 30, 2011 Hans Schjær Jacobsen Professor, Director RD&I Ballerup, Denmark +45 4480 5030

More information

( ) is proportional to ( 10 + x)!2. Calculate the

( ) is proportional to ( 10 + x)!2. Calculate the PRACTICE EXAMINATION NUMBER 6. An insurance company eamines its pool of auto insurance customers and gathers the following information: i) All customers insure at least one car. ii) 64 of the customers

More information

Numerical Methods for Option Pricing

Numerical Methods for Option Pricing Chapter 9 Numerical Methods for Option Pricing Equation (8.26) provides a way to evaluate option prices. For some simple options, such as the European call and put options, one can integrate (8.26) directly

More information

Lecture 3: Linear methods for classification

Lecture 3: Linear methods for classification Lecture 3: Linear methods for classification Rafael A. Irizarry and Hector Corrada Bravo February, 2010 Today we describe four specific algorithms useful for classification problems: linear regression,

More information

A Primer on Mathematical Statistics and Univariate Distributions; The Normal Distribution; The GLM with the Normal Distribution

A Primer on Mathematical Statistics and Univariate Distributions; The Normal Distribution; The GLM with the Normal Distribution A Primer on Mathematical Statistics and Univariate Distributions; The Normal Distribution; The GLM with the Normal Distribution PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 4: September

More information

Practice problems for Homework 11 - Point Estimation

Practice problems for Homework 11 - Point Estimation Practice problems for Homework 11 - Point Estimation 1. (10 marks) Suppose we want to select a random sample of size 5 from the current CS 3341 students. Which of the following strategies is the best:

More information

An introduction to Value-at-Risk Learning Curve September 2003

An introduction to Value-at-Risk Learning Curve September 2003 An introduction to Value-at-Risk Learning Curve September 2003 Value-at-Risk The introduction of Value-at-Risk (VaR) as an accepted methodology for quantifying market risk is part of the evolution of risk

More information

A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA

A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA REVSTAT Statistical Journal Volume 4, Number 2, June 2006, 131 142 A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA Authors: Daiane Aparecida Zuanetti Departamento de Estatística, Universidade Federal de São

More information

Package SHELF. February 5, 2016

Package SHELF. February 5, 2016 Type Package Package SHELF February 5, 2016 Title Tools to Support the Sheffield Elicitation Framework (SHELF) Version 1.1.0 Date 2016-01-29 Author Jeremy Oakley Maintainer Jeremy Oakley

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1. MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

More information

Interpretation of Somers D under four simple models

Interpretation of Somers D under four simple models Interpretation of Somers D under four simple models Roger B. Newson 03 September, 04 Introduction Somers D is an ordinal measure of association introduced by Somers (96)[9]. It can be defined in terms

More information

How to assess the risk of a large portfolio? How to estimate a large covariance matrix?

How to assess the risk of a large portfolio? How to estimate a large covariance matrix? Chapter 3 Sparse Portfolio Allocation This chapter touches some practical aspects of portfolio allocation and risk assessment from a large pool of financial assets (e.g. stocks) How to assess the risk

More information

2WB05 Simulation Lecture 8: Generating random variables

2WB05 Simulation Lecture 8: Generating random variables 2WB05 Simulation Lecture 8: Generating random variables Marko Boon http://www.win.tue.nl/courses/2wb05 January 7, 2013 Outline 2/36 1. How do we generate random variables? 2. Fitting distributions Generating

More information

Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

More information

Linear Discrimination. Linear Discrimination. Linear Discrimination. Linearly Separable Systems Pairwise Separation. Steven J Zeil.

Linear Discrimination. Linear Discrimination. Linear Discrimination. Linearly Separable Systems Pairwise Separation. Steven J Zeil. Steven J Zeil Old Dominion Univ. Fall 200 Discriminant-Based Classification Linearly Separable Systems Pairwise Separation 2 Posteriors 3 Logistic Discrimination 2 Discriminant-Based Classification Likelihood-based:

More information