
2WB05 Simulation
Lecture 8: Generating random variables
Marko Boon
http://www.win.tue.nl/courses/2WB05
January 7, 2013

Outline
1. How do we generate random variables?
2. Fitting distributions

Generating random variables
How do we generate random variables?
Sampling from continuous distributions;
Sampling from discrete distributions.

Continuous distributions: Inverse Transform Method
Let the random variable X have a continuous and strictly increasing distribution function F, and denote the inverse of F by F^{-1}. Then X can be generated as follows:
Generate U from U(0, 1);
Return X = F^{-1}(U).
If F is not continuous or strictly increasing, we have to use the generalized inverse function F^{-1}(u) = min{x : F(x) ≥ u}.

Continuous distributions: Examples
X = a + (b - a)U is uniform on (a, b);
X = -ln(U)/λ is exponential with parameter λ;
X = (-ln(U))^{1/a}/λ is Weibull with parameters a and λ.
Unfortunately, for many distribution functions we do not have an easy-to-use (closed-form) expression for the inverse of F.
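For concreteness, here is a minimal Java sketch of these three inverse-transform samplers. The class and method names are our own, not part of the lecture; note that nextDouble() can return 0, so a production sampler would guard against taking ln(0).

    import java.util.Random;

    public class InverseTransform {
        static final Random RNG = new Random();

        // X = a + (b - a)U is uniform on (a, b)
        static double uniform(double a, double b) {
            return a + (b - a) * RNG.nextDouble();
        }

        // X = -ln(U)/lambda is exponential with parameter lambda
        static double exponential(double lambda) {
            return -Math.log(RNG.nextDouble()) / lambda;
        }

        // X = (-ln(U))^(1/a)/lambda is Weibull with parameters a and lambda
        static double weibull(double a, double lambda) {
            return Math.pow(-Math.log(RNG.nextDouble()), 1.0 / a) / lambda;
        }
    }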

Continuous distributions: Composition method
This method applies when the distribution function F can be expressed as a mixture of other distribution functions F_1, F_2, ...:
F(x) = Σ_{i=1}^∞ p_i F_i(x), where p_i ≥ 0 and Σ_{i=1}^∞ p_i = 1.
The method is useful if it is easier to sample from the F_i's than from F itself.
Algorithm:
First generate an index I such that P(I = i) = p_i, i = 1, 2, ...;
Then generate a random variable X with distribution function F_I.

Continuous distributions: Examples
Hyper-exponential distribution:
F(x) = p_1 F_1(x) + p_2 F_2(x) + ··· + p_k F_k(x), x ≥ 0,
where F_i(x) is the exponential distribution with parameter µ_i, i = 1, ..., k.
Double-exponential (or Laplace) distribution:
f(x) = (1/2)e^x for x < 0; f(x) = (1/2)e^{-x} for x ≥ 0,
where f denotes the density of F.
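A minimal Java sketch of the composition method for the hyper-exponential distribution (names are ours; the index I is generated by a linear search through the cumulative probabilities):

    import java.util.Random;

    public class CompositionMethod {
        static final Random RNG = new Random();

        // Hyper-exponential sample: with probability p[i] the sample is
        // exponential with rate mu[i] (composition method).
        static double hyperExponential(double[] p, double[] mu) {
            double u = RNG.nextDouble();
            int i = 0;
            double cumulative = p[0];
            while (u >= cumulative) {   // generate index I with P(I = i) = p[i]
                i++;
                cumulative += p[i];
            }
            return -Math.log(RNG.nextDouble()) / mu[i];   // sample from F_I
        }
    }

For example, hyperExponential(new double[]{0.4, 0.6}, new double[]{1.0, 3.0}) mixes two exponentials with rates 1 and 3.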

Continuous distributions: Convolution method
In some cases X can be expressed as a sum of independent random variables Y_1, ..., Y_n:
X = Y_1 + Y_2 + ··· + Y_n,
where the Y_i's can be generated more easily than X.
Algorithm:
Generate independent Y_1, ..., Y_n, each with distribution function G;
Return X = Y_1 + ··· + Y_n.

Continuous distributions: Example
If X is Erlang distributed with parameters n and µ, then X can be expressed as a sum of n independent exponentials Y_i, each with mean 1/µ.
Algorithm:
Generate n exponentials Y_1, ..., Y_n, each with mean 1/µ;
Set X = Y_1 + ··· + Y_n.
More efficient algorithm:
Generate n uniform (0, 1) random variables U_1, ..., U_n;
Set X = -ln(U_1 U_2 ··· U_n)/µ.
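A sketch of the more efficient Erlang algorithm in Java (our own naming; for large n the product of uniforms may underflow, in which case summing the logarithms is safer):

    import java.util.Random;

    public class ErlangSampler {
        static final Random RNG = new Random();

        // Erlang(n, mu) via the convolution method: X = -ln(U_1 U_2 ... U_n)/mu,
        // which needs one logarithm instead of n.
        static double erlang(int n, double mu) {
            double product = 1.0;
            for (int i = 0; i < n; i++) {
                product *= RNG.nextDouble();
            }
            return -Math.log(product) / mu;
        }
    }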

Continuous distributions: Acceptance-Rejection method
Denote the density of X by f. This method requires a function g that majorizes f:
g(x) ≥ f(x) for all x.
Now g will not be a density, since c = ∫ g(x) dx ≥ 1. Assume that c < ∞. Then h(x) = g(x)/c is a density.
Algorithm:
1. Generate Y having density h;
2. Generate U from U(0, 1), independent of Y;
3. If U ≤ f(Y)/g(Y), then set X = Y; else go back to step 1.
The random variable X generated by this algorithm has density f.

Continuous distributions: Validity of the Acceptance-Rejection method
Note that P(X ≤ x) = P(Y ≤ x | Y accepted). Now
P(Y ≤ x, Y accepted) = ∫_{-∞}^{x} (f(y)/g(y)) h(y) dy = (1/c) ∫_{-∞}^{x} f(y) dy,
and thus, letting x → ∞ gives
P(Y accepted) = 1/c.
Hence,
P(X ≤ x) = P(Y ≤ x, Y accepted) / P(Y accepted) = ∫_{-∞}^{x} f(y) dy.

Continuous distributions: How to choose g?
Try to choose g such that the random variable Y can be generated rapidly;
The probability of rejection in step 3 should be small, so try to bring c close to 1, which means that g should be close to f.
Note that the number of iterations until acceptance is geometrically distributed with mean c.

Continuous distributions: Example
The Beta(4, 3) distribution has density
f(x) = 60x^3(1 - x)^2, 0 ≤ x ≤ 1.
The maximal value of f occurs at x = 0.6, where f(0.6) = 2.0736. Thus, if we define
g(x) = 2.0736, 0 ≤ x ≤ 1,
then g majorizes f.
Algorithm:
1. Generate Y and U from U(0, 1);
2. If U ≤ 60Y^3(1 - Y)^2 / 2.0736, then set X = Y; else reject Y and return to step 1.
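The Beta(4, 3) algorithm above translates directly into Java; this sketch (our own naming) loops until a candidate is accepted:

    import java.util.Random;

    public class BetaSampler {
        static final Random RNG = new Random();

        // Acceptance-rejection for f(x) = 60 x^3 (1-x)^2 on [0, 1],
        // with constant majorant g(x) = 2.0736.
        static double beta43() {
            while (true) {
                double y = RNG.nextDouble();   // candidate Y with density h = g/c
                double u = RNG.nextDouble();   // independent U(0, 1)
                if (u <= 60.0 * Math.pow(y, 3) * Math.pow(1.0 - y, 2) / 2.0736) {
                    return y;                  // accept
                }
                // reject and try again
            }
        }
    }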

Continuous distributions: Normal distribution
Methods:
Acceptance-Rejection method;
Box-Muller method.

Continuous distributions: Acceptance-Rejection method
If X is N(0, 1), then the density of |X| is given by
f(x) = (2/√(2π)) e^{-x²/2}, x > 0.
Now the function g(x) = √(2e/π) e^{-x} majorizes f. This leads to the following algorithm:
1. Generate an exponential Y with mean 1;
2. Generate U from U(0, 1), independent of Y;
3. If U ≤ e^{-(Y-1)²/2}, then accept Y; else reject Y and return to step 1;
4. Return X = Y or X = -Y, each with probability 1/2.
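A Java sketch of this acceptance-rejection scheme for the standard normal (our own naming; the random sign in step 4 uses nextBoolean()):

    import java.util.Random;

    public class NormalAR {
        static final Random RNG = new Random();

        // Standard normal via acceptance-rejection on the half-normal density,
        // using the majorant g(x) = sqrt(2e/pi) e^{-x}.
        static double normal() {
            while (true) {
                double y = -Math.log(RNG.nextDouble());          // exponential, mean 1
                double u = RNG.nextDouble();
                if (u <= Math.exp(-(y - 1.0) * (y - 1.0) / 2.0)) {
                    return RNG.nextBoolean() ? y : -y;           // attach a random sign
                }
            }
        }
    }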

Continuous distributions: Box-Muller method
If U_1 and U_2 are independent U(0, 1) random variables, then
X_1 = √(-2 ln U_1) cos(2πU_2),
X_2 = √(-2 ln U_1) sin(2πU_2)
are independent standard normal random variables. This method is implemented in the method nextGaussian() of java.util.Random.
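A direct Java transcription of the Box-Muller transform, returning both normals (a sketch with our own naming; in practice one would simply call nextGaussian()):

    import java.util.Random;

    public class BoxMuller {
        static final Random RNG = new Random();

        // Two independent standard normals from two independent uniforms.
        // (Guard against u1 == 0 in production code, since ln 0 = -infinity.)
        static double[] pair() {
            double u1 = RNG.nextDouble();
            double u2 = RNG.nextDouble();
            double r = Math.sqrt(-2.0 * Math.log(u1));
            return new double[] { r * Math.cos(2.0 * Math.PI * u2),
                                  r * Math.sin(2.0 * Math.PI * u2) };
        }
    }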

Discrete distributions: Discrete version of the Inverse Transform Method
Let X be a discrete random variable with probabilities
P(X = x_i) = p_i, i = 0, 1, ..., with Σ_{i=0}^∞ p_i = 1.
To generate a realization of X, we first generate U from U(0, 1) and then set X = x_i if
Σ_{j=0}^{i-1} p_j ≤ U < Σ_{j=0}^{i} p_j.

So the algorithm is as follows:
Generate U from U(0, 1);
Determine the index I such that Σ_{j=0}^{I-1} p_j ≤ U < Σ_{j=0}^{I} p_j, and return X = x_I.
The second step requires a search; for example, starting with I = 0 we keep adding 1 to I until we have found the (smallest) I such that U < Σ_{j=0}^{I} p_j.
Note: The algorithm needs exactly one uniform random variable U to generate X; this is a nice feature if you use variance reduction techniques.
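A minimal Java sketch of this search (our own naming; it is the same linear search used for the composition index earlier, returns the index i of the outcome x_i, and consumes exactly one uniform):

    import java.util.Random;

    public class DiscreteInverse {
        static final Random RNG = new Random();

        // Smallest I with p_0 + ... + p_I > U; returns the index of the outcome.
        static int sample(double[] p) {
            double u = RNG.nextDouble();
            int i = 0;
            double cumulative = p[0];
            while (u >= cumulative) {
                i++;
                cumulative += p[i];
            }
            return i;
        }
    }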

Discrete distributions: Array method (when X has a finite support)
Suppose p_i = k_i/100, i = 1, ..., m, where the k_i's are integers with 0 ≤ k_i ≤ 100.
Construct an array A[i], i = 1, ..., 100, as follows:
set A[i] = x_1 for i = 1, ..., k_1;
set A[i] = x_2 for i = k_1 + 1, ..., k_1 + k_2, etc.
Then, first sample a random index I from 1, ..., 100:
I = 1 + ⌊100U⌋,
and set X = A[I].
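A Java sketch of the array method (our own naming; the table is built once, after which every sample costs a single array lookup; indices are 0-based here, whereas the slide uses 1-based indexing):

    import java.util.Random;

    public class ArrayMethod {
        static final Random RNG = new Random();

        // Build the 100-entry lookup table from k[i] = 100 * p_i (summing to 100);
        // the table entries are the outcome indices.
        static int[] buildTable(int[] k) {
            int[] table = new int[100];
            int pos = 0;
            for (int i = 0; i < k.length; i++) {
                for (int j = 0; j < k[i]; j++) {
                    table[pos++] = i;
                }
            }
            return table;
        }

        // I = 1 + floor(100 U) on the slide; 0-based here via nextInt(100).
        static int sample(int[] table) {
            return table[RNG.nextInt(100)];
        }
    }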

Discrete distributions: Bernoulli
Two possible outcomes of X (success or failure):
P(X = 1) = 1 - P(X = 0) = p.
Algorithm:
Generate U from U(0, 1);
If U ≤ p, then X = 1; else X = 0.

Discrete distributions: Discrete uniform
The possible outcomes of X are m, m + 1, ..., n and they are all equally likely, so
P(X = i) = 1/(n - m + 1), i = m, m + 1, ..., n.
Algorithm:
Generate U from U(0, 1);
Set X = m + ⌊(n - m + 1)U⌋.
Note: No search is required, and (n - m + 1) can be computed ahead of time.

Discrete distributions: Geometric
A random variable X has a geometric distribution with parameter p if
P(X = i) = p(1 - p)^i, i = 0, 1, 2, ...;
X is the number of failures until the first success in a sequence of independent Bernoulli trials with success probability p.
Algorithm:
Generate independent Bernoulli(p) random variables Y_1, Y_2, ...; let I be the index of the first successful one, so Y_I = 1;
Set X = I - 1.
Alternative algorithm:
Generate U from U(0, 1);
Set X = ⌊ln(U)/ln(1 - p)⌋.
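The alternative algorithm is a one-liner in Java (a sketch; it is the inverse transform method applied to the geometric distribution, and uses one uniform per sample):

    import java.util.Random;

    public class GeometricSampler {
        static final Random RNG = new Random();

        // Number of failures before the first success: X = floor(ln U / ln(1-p)).
        // (Guard against U == 0 in production code.)
        static int geometric(double p) {
            return (int) Math.floor(Math.log(RNG.nextDouble()) / Math.log(1.0 - p));
        }
    }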

Discrete distributions: Binomial
A random variable X has a binomial distribution with parameters n and p if
P(X = i) = (n choose i) p^i (1 - p)^{n-i}, i = 0, 1, ..., n;
X is the number of successes in n independent Bernoulli trials, each with success probability p.
Algorithm:
Generate n Bernoulli(p) random variables Y_1, ..., Y_n;
Set X = Y_1 + Y_2 + ··· + Y_n.
Alternative algorithms can be derived by using the following results.
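The basic algorithm as a Java sketch (our own naming; it costs n uniforms per sample, which is fine for moderate n):

    import java.util.Random;

    public class BinomialSampler {
        static final Random RNG = new Random();

        // Binomial(n, p) as the sum of n independent Bernoulli(p) trials.
        static int binomial(int n, double p) {
            int successes = 0;
            for (int i = 0; i < n; i++) {
                if (RNG.nextDouble() <= p) {
                    successes++;
                }
            }
            return successes;
        }
    }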

Let Y_1, Y_2, ... be independent geometric(p) random variables, and let I be the smallest index such that
Σ_{i=1}^{I+1} (Y_i + 1) > n.
Then the index I has a binomial distribution with parameters n and p.
Let Y_1, Y_2, ... be independent exponential random variables with mean 1, and let I be the smallest index such that
Σ_{i=1}^{I+1} Y_i/(n - i + 1) > -ln(1 - p).
Then the index I has a binomial distribution with parameters n and p.

Discrete distributions: Negative Binomial
A random variable X has a negative-binomial distribution with parameters n and p if
P(X = i) = (n + i - 1 choose i) p^n (1 - p)^i, i = 0, 1, 2, ...;
X is the number of failures before the n-th success in a sequence of independent Bernoulli trials with success probability p.
Algorithm:
Generate n geometric(p) random variables Y_1, ..., Y_n;
Set X = Y_1 + Y_2 + ··· + Y_n.

Discrete distributions: Poisson
A random variable X has a Poisson distribution with parameter λ if
P(X = i) = (λ^i/i!) e^{-λ}, i = 0, 1, 2, ...;
X is the number of events in a time interval of length 1 if the inter-event times are independent and exponentially distributed with parameter λ.
Algorithm:
Generate exponential inter-event times Y_1, Y_2, ... with mean 1; let I be the smallest index such that
Σ_{i=1}^{I+1} Y_i > λ;
Set X = I.

Discrete distributions: Poisson (alternative)
Generate U(0, 1) random variables U_1, U_2, ...; let I be the smallest index such that
Π_{i=1}^{I+1} U_i < e^{-λ};
Set X = I.
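The alternative algorithm in Java (a sketch with our own naming; it avoids logarithms by comparing the running product of uniforms against e^{-λ}, which is equivalent to summing exponential inter-event times):

    import java.util.Random;

    public class PoissonSampler {
        static final Random RNG = new Random();

        // Smallest I such that U_1 * ... * U_{I+1} < e^{-lambda}; return X = I.
        static int poisson(double lambda) {
            double threshold = Math.exp(-lambda);
            double product = RNG.nextDouble();
            int i = 0;
            while (product >= threshold) {
                product *= RNG.nextDouble();
                i++;
            }
            return i;
        }
    }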

Fitting distributions: Input of a simulation
Specifying distributions of random variables (e.g., interarrival times, processing times) and assigning parameter values can be based on:
Historical numerical data;
Expert opinion.
In practice there is sometimes real data available, but often the only available information about a random variable is its mean and standard deviation.

Fitting distributions
Empirical data can be used to:
construct empirical distribution functions and generate samples from them during the simulation;
fit theoretical distributions and then generate samples from the fitted distributions.

Fitting distributions: Moment-fitting
Obtain an approximating distribution by fitting a phase-type distribution on the mean E(X) and the coefficient of variation
c_X = σ_X/E(X)
of a given positive random variable X, by using the following simple approach.

Moment-fitting: Coefficient of variation less than 1 (Mixed Erlang)
If 0 < c_X < 1, then fit an E_{k-1,k} distribution as follows. If
1/k ≤ c_X² ≤ 1/(k - 1)
for certain k = 2, 3, ..., then the approximating distribution is with probability p (resp. 1 - p) the sum of k - 1 (resp. k) independent exponentials with common mean 1/µ. By choosing
p = (1/(1 + c_X²)) [k c_X² - (k(1 + c_X²) - k² c_X²)^{1/2}], µ = (k - p)/E(X),
the E_{k-1,k} distribution matches E(X) and c_X.
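A Java sketch of this two-moment fit and of sampling from the fitted E_{k-1,k} distribution (our own naming; sampling combines the composition and convolution methods from earlier slides):

    import java.util.Random;

    public class MixedErlangFit {
        static final Random RNG = new Random();
        final int k; final double p; final double mu;

        // Fit on mean E(X) and squared coefficient of variation c2 (0 < c2 < 1).
        MixedErlangFit(double mean, double c2) {
            int kk = 2;
            while (1.0 / kk > c2) {   // smallest k with 1/k <= c2 <= 1/(k-1)
                kk++;
            }
            k = kk;
            p = (k * c2 - Math.sqrt(k * (1.0 + c2) - k * k * c2)) / (1.0 + c2);
            mu = (k - p) / mean;
        }

        // With probability p the sum of k-1 exponentials, else of k (rate mu).
        double sample() {
            int phases = (RNG.nextDouble() <= p) ? k - 1 : k;
            double product = 1.0;
            for (int i = 0; i < phases; i++) {
                product *= RNG.nextDouble();
            }
            return -Math.log(product) / mu;
        }
    }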

Moment-fitting: Coefficient of variation greater than 1 (Hyperexponential)
In case c_X ≥ 1, fit an H_2(p_1, p_2; µ_1, µ_2) distribution.
(Phase diagram for the H_k(p_1, ..., p_k; µ_1, ..., µ_k) distribution: with probability p_i the sample is drawn from exponential phase i with rate µ_i, i = 1, ..., k.)

But the H_2 distribution is not uniquely determined by its first two moments. In applications, the H_2 distribution with balanced means is often used; this means that the normalization
p_1/µ_1 = p_2/µ_2
is used. The parameters of the H_2 distribution with balanced means fitting E(X) and c_X (≥ 1) are given by
p_1 = (1/2)(1 + ((c_X² - 1)/(c_X² + 1))^{1/2}), p_2 = 1 - p_1,
µ_1 = 2p_1/E(X), µ_2 = 2p_2/E(X).
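A Java sketch of the balanced-means fit plus sampling (our own naming; sampling is simply the composition method with two exponential phases):

    import java.util.Random;

    public class H2BalancedFit {
        static final Random RNG = new Random();
        final double p1, mu1, mu2;

        // Fit on mean E(X) and squared coefficient of variation c2 (c2 >= 1).
        H2BalancedFit(double mean, double c2) {
            p1 = 0.5 * (1.0 + Math.sqrt((c2 - 1.0) / (c2 + 1.0)));
            mu1 = 2.0 * p1 / mean;
            mu2 = 2.0 * (1.0 - p1) / mean;
        }

        // With probability p1 an exponential with rate mu1, else rate mu2.
        double sample() {
            double rate = (RNG.nextDouble() <= p1) ? mu1 : mu2;
            return -Math.log(RNG.nextDouble()) / rate;
        }
    }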

In case c_X² ≥ 0.5 one can also use a Coxian-2 distribution for a two-moment fit.
(Phase diagram for the Coxian-k distribution: the sample passes through exponential phases with rates µ_1, µ_2, ..., µ_k; after phase i it continues to phase i + 1 with probability p_i and stops with probability 1 - p_i.)
The following parameter set for the Coxian-2 is suggested:
µ_1 = 2/E(X), p_1 = 0.5/c_X², µ_2 = µ_1 p_1.

Moment-fitting: Fitting nonnegative discrete distributions
Let X be a random variable on the non-negative integers with mean E(X) and coefficient of variation c_X. Then it is possible to fit a discrete distribution on E(X) and c_X using the following families of distributions:
Mixtures of Binomial distributions;
Poisson distribution;
Mixtures of Negative-Binomial distributions;
Mixtures of geometric distributions.
This fitting procedure is described in Adan, van Eenige and Resing, Probability in the Engineering and Informational Sciences, 9, 1995, pp. 623-632.

Fitting distributions: Adequacy of fit
Graphical comparison of fitted and empirical curves;
Statistical tests (goodness-of-fit tests).