[1] Monte Carlo Sampling Methods
Jasmina L. Vujic
Nuclear Engineering Department
University of California, Berkeley
Email: vujic@nuc.berkeley.edu
phone: (510) 643-8085
fax: (510) 643-9685
[2] Monte Carlo
Monte Carlo is a computational technique based on constructing a random process for a problem and carrying out a NUMERICAL EXPERIMENT by N-fold sampling from a random sequence of numbers with a PRESCRIBED probability distribution.

x - random variable
x̂ = (1/N) Σ_{i=1}^{N} x_i - the estimated or sample mean of x
⟨x⟩ - the expectation or true mean value of x

If a problem can be given a PROBABILISTIC interpretation, then it can be modeled using RANDOM NUMBERS.
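The sample-mean estimate above can be sketched in a few lines (a minimal Python sketch; the deck's own code examples are Fortran, and the function name here is illustrative):

```python
import random

def sample_mean(sample, n):
    """x-hat = (1/N) * sum_{i=1}^{N} x_i, the N-fold sample-mean
    estimate of the true mean <x>."""
    return sum(sample() for _ in range(n)) / n

# Example: for x uniform on [0, 1), the true mean <x> is 0.5
random.seed(12345)
xhat = sample_mean(random.random, 100_000)
```

As N grows, x̂ converges to ⟨x⟩; the statistical error falls off as 1/√N.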
[3] Monte Carlo
Monte Carlo techniques came from the complicated diffusion problems that were encountered in the early work on atomic energy.
1772 - Comte de Buffon: earliest documented use of random sampling to solve a mathematical problem.
1786 - Laplace suggested that π could be evaluated by random sampling.
Lord Kelvin used random sampling to aid in evaluating time integrals associated with the kinetic theory of gases.
Enrico Fermi was among the first to apply random sampling methods to study neutron moderation in Rome.
1947 - Fermi, John von Neumann, Stan Frankel, Nicholas Metropolis, Stan Ulam and others developed computer-oriented Monte Carlo methods at Los Alamos to trace neutrons through fissionable materials.
Monte Carlo [4]
Monte Carlo methods can be used to solve:
a) Problems that are stochastic (probabilistic) by nature:
- particle transport,
- telephone and other communication systems,
- population studies based on the statistics of survival and reproduction.
b) Problems that are deterministic by nature:
- the evaluation of integrals,
- solving systems of algebraic equations,
- solving partial differential equations.
Monte Carlo [5]
Monte Carlo methods are divided into:
a) ANALOG, where the natural laws are PRESERVED - the game played is the analog of the physical problem of interest (i.e., the history of each particle is simulated exactly),
b) NON-ANALOG, where, in order to reduce the required computational time, the strict analog simulation of particle histories is abandoned (i.e., we CHEAT!)

Variance-reduction techniques:
- Absorption suppression
- History termination and Russian Roulette
- Splitting and Russian Roulette
- Forced Collisions
- Source Biasing
Example: Evaluation of Integrals [6]

I = ∫_a^b f(x) dx - the area under the function f(x)
R = (b − a) f_max - the area of the bounding rectangle
P = I/R - the probability that a random point in the rectangle lies under f(x), thus I = R·P

Step 1: Choose a random point (x, y): x = a + (b − a)ξ₁ and y = f_max ξ₂
Step 2: If y ≤ f(x), accept the point; if y > f(x), reject the point
Step 3: Repeat this process N times; N_i is the number of accepted points
Step 4: Determine P = N_i/N and the value of the integral I = R·N_i/N
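Steps 1-4 can be sketched as a short hit-or-miss routine (a minimal Python sketch; the function name and the x² example integrand are illustrative choices, not from the slide):

```python
import random

def hit_or_miss(f, a, b, fmax, n, rng=random.random):
    """Hit-or-miss integration: throw N points into the bounding
    rectangle, count the accepted ones Ni (those with y <= f(x)),
    and estimate I = R * Ni / N with R = (b - a) * fmax."""
    hits = 0
    for _ in range(n):
        x = a + (b - a) * rng()   # Step 1: random x in [a, b)
        y = fmax * rng()          #         random y in [0, fmax)
        if y <= f(x):             # Step 2: accept if under the curve
            hits += 1
    R = (b - a) * fmax            # area of the rectangle
    return R * hits / n           # Step 4: I = R * Ni / N

# Example: integral of x^2 on [0, 1] is 1/3
random.seed(1)
estimate = hit_or_miss(lambda x: x * x, 0.0, 1.0, 1.0, 200_000)
```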
Major Components of Monte Carlo [7]
Major Components of a Monte Carlo Algorithm
Probability distribution functions (pdf's) - the physical (or mathematical) system must be described by a set of pdf's.
Random number generator - a source of random numbers uniformly distributed on the unit interval must be available.
Sampling rule - a prescription for sampling from the specified pdf's, assuming the availability of random numbers on the unit interval.
Scoring (or tallying) - the outcomes must be accumulated into overall tallies or scores for the quantities of interest.
Error estimation - an estimate of the statistical error (variance) as a function of the number of trials and other quantities must be determined.
Variance reduction techniques - methods for reducing the variance in the estimated solution, in order to reduce the computational time of the Monte Carlo simulation.
Parallelization and vectorization - efficient use of advanced computer architectures.
Probability Distribution Functions [8]
Random Variable, x - a variable that takes on particular values with a frequency that is determined by some underlying probability distribution.
Continuous Probability Distribution: P{a ≤ x ≤ b}
Discrete Probability Distribution: P{x = x_i} = p_i
Probability Distribution Functions [9]
PDFs and CDFs (continuous)

Probability Density Function (PDF) - continuous f(x):
0 ≤ f(x)
f(x) dx = P{x ≤ x' ≤ x + dx}
∫ f(x) dx = 1
Probability{a ≤ x ≤ b} = ∫_a^b f(x) dx

Cumulative Distribution Function (CDF) - continuous:
F(x) = ∫_0^x f(x') dx' = P{x' ≤ x}
0 ≤ F(x) ≤ 1, dF(x)/dx = f(x)
P{a ≤ x ≤ b} = ∫_a^b f(x') dx' = F(b) − F(a)
Probability Distribution Functions [10]
PDFs and CDFs (discrete)

Probability Density Function (PDF) - discrete:
f(x) = Σ_i p_i δ(x − x_i), with 0 ≤ p_i and Σ_i p_i = 1

Cumulative Distribution Function (CDF) - discrete:
F(x) = Σ_{x_i < x} p_i, a step function rising from 0 to 1
(e.g., F(x) takes the values p_1, p_1 + p_2, p_1 + p_2 + p_3 at the points x_1, x_2, x_3)
Monte Carlo & Random Sampling [11]
Sampling from a given discrete distribution
Given f(x_i) = p_i, with 0 ≤ p_i, i = 1, 2, ..., N, and Σ_{i=1}^{N} p_i = 1, and a random number ξ with 0 ≤ ξ ≤ 1, choose x̂ = x_k such that

Σ_{i=1}^{k−1} p_i ≤ ξ < Σ_{i=1}^{k} p_i

Then P(x̂ = x_k) = p_k = P(ξ ∈ d_k), where d_k is the k-th subinterval of [0,1] bounded by the cumulative sums p_1, p_1 + p_2, ..., p_1 + p_2 + ... + p_N = 1.
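The cumulative-sum search above can be sketched as (a minimal Python sketch; the function name and the example weights are illustrative):

```python
import bisect
import itertools
import random

def make_discrete_sampler(xs, ps):
    """Build a sampler for P{x = x_i} = p_i: pick x_k such that
    p_1 + ... + p_{k-1} <= xi < p_1 + ... + p_k."""
    cdf = list(itertools.accumulate(ps))       # p1, p1+p2, ..., 1
    def sample(rng=random.random):
        k = bisect.bisect_right(cdf, rng())    # first k with cdf[k] > xi
        return xs[min(k, len(xs) - 1)]         # guard against round-off near xi = 1
    return sample

# Example: a three-valued distribution with weights 0.2, 0.5, 0.3
random.seed(0)
draw = make_discrete_sampler([1, 2, 3], [0.2, 0.5, 0.3])
counts = {1: 0, 2: 0, 3: 0}
for _ in range(100_000):
    counts[draw()] += 1
```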
Monte Carlo & Random Sampling [12]
Sampling from a given continuous distribution
If f(x) and F(x) represent the PDF and CDF of a random variable x, and if ξ is a random number distributed uniformly on [0,1] with PDF g(ξ) = 1, and if x is such that F(x) = ξ, then for each ξ there is a corresponding x, and the variable x is distributed according to the probability density function f(x).

Proof: For each ξ in (ξ, ξ+Δξ), there is an x in (x, x+Δx). Assume that the PDF for x is q(x); show that q(x) = f(x):
q(x)Δx = g(ξ)Δξ = Δξ = (ξ+Δξ) − ξ = F(x+Δx) − F(x)
q(x) = [F(x+Δx) − F(x)]/Δx → f(x) as Δx → 0
Thus, if x = F⁻¹(ξ), then x is distributed according to f(x).
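A minimal sketch of the inversion x = F⁻¹(ξ), using the exponential PDF as an illustrative example (the choice of distribution is an assumption, not from the slide):

```python
import math
import random

def sample_exponential(lam, rng=random.random):
    """Direct inversion for f(x) = lam * exp(-lam * x):
    F(x) = 1 - exp(-lam*x) = xi  =>  x-hat = -ln(1 - xi) / lam.
    (1 - xi is used so the log argument stays in (0, 1].)"""
    return -math.log(1.0 - rng()) / lam

# The mean of this exponential is 1/lam
random.seed(42)
mean = sum(sample_exponential(2.0) for _ in range(100_000)) / 100_000
```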
Monte Carlo & Random Sampling [13]
Monte Carlo Codes

Categories of Random Sampling:
- Random number generator → uniform PDF on [0,1]
- Sampling from analytic PDF's → normal, exponential, Maxwellian, ...
- Sampling from tabulated PDF's → angular PDF's, spectrum, cross sections

For Monte Carlo Codes:
- Random numbers, ξ, are produced by the R.N. generator on [0,1]
- Non-uniform random variates are produced from the ξ's by: direct inversion of CDFs, rejection methods, transformations, composition (mixtures), sums, products, ratios, ..., table lookup + interpolation, lots (!) of other tricks
- Generating the ξ's typically takes < 10% of total cpu time
Random Sampling Methods [14]
Random Number Generator

Pseudo-Random Numbers:
- Not strictly "random", but good enough: pass statistical tests, reproducible sequence
- Uniform PDF on [0,1]
- Must be easy to compute, must have a large period

Multiplicative congruential method:
S_0 = initial seed, odd integer, < M
S_k = G·S_{k−1} mod M, k = 1, 2, ...
ξ_k = S_k / M

Typical (vim, mcnp):
S_k = 5^19 · S_{k−1} mod 2^48
ξ_k = S_k / 2^48
period = 2^46 ≈ 7.0 × 10^13
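The multiplicative congruential recipe with the quoted constants can be sketched as (Python sketch; the class name is illustrative):

```python
class MCG:
    """Multiplicative congruential generator with the constants quoted
    on the slide: G = 5**19, M = 2**48 (period 2**46)."""
    def __init__(self, seed=12345):
        assert seed % 2 == 1 and seed < 2 ** 48, "seed must be odd and < M"
        self.s = seed
        self.G = 5 ** 19
        self.M = 2 ** 48

    def next(self):
        self.s = (self.G * self.s) % self.M   # S_k = G * S_{k-1} mod M
        return self.s / self.M                # xi_k = S_k / M, in [0, 1)

# Reproducible: two generators with the same seed give the same sequence
g1 = MCG(seed=12345)
g2 = MCG(seed=12345)
seq = [g1.next() for _ in range(10_000)]
```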
Random Sampling Methods [15]
Direct Sampling (Direct Inversion of CDFs)
Direct solution of x̂ = F⁻¹(ξ)

Sampling Procedure:
- Generate ξ
- Determine x̂ such that F(x̂) = ξ

Advantages:
- Straightforward mathematics & coding
- "High-level" approach

Disadvantages:
- Often involves complicated functions
- In some cases, F(x) cannot be inverted (e.g., Klein-Nishina)
Random Sampling Methods [16]
Rejection Sampling
Used when the inverse of the CDF is costly or impossible to find.
Select a bounding function, g(x), such that:
- c·g(x) ≥ f(x) for all x
- g(x) is an easy-to-sample PDF

Sampling Procedure:
- sample x̂ from g(x): x̂ ← g⁻¹(ξ₁)
- test: ξ₂·c·g(x̂) ≤ f(x̂); if true, accept x̂, done; if false, reject x̂ and try again

Advantages:
- Simple computer operations

Disadvantages:
- "Low-level" approach, sometimes hard to understand
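A sketch of the procedure, using an illustrative target f(x) = (π/2)·sin(πx) on [0,1] with a uniform bounding PDF g(x) = 1 and c = π/2 (these choices are assumptions for the example, not from the slide):

```python
import math
import random

def rejection_sample(f, g_sample, g_pdf, c, rng=random.random):
    """Repeat until xi2 * c * g(x-hat) <= f(x-hat), then accept x-hat."""
    while True:
        x = g_sample()                    # sample x-hat from the easy PDF g
        if rng() * c * g_pdf(x) <= f(x):  # the acceptance test from the slide
            return x

random.seed(7)
f = lambda x: 0.5 * math.pi * math.sin(math.pi * x)
draws = [rejection_sample(f, random.random, lambda x: 1.0, 0.5 * math.pi)
         for _ in range(50_000)]
```

The acceptance probability is 1/c per trial, so a tight bound c keeps the loop short.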
Random Sampling Methods [17]
Table Look-Up
Used when f(x) is given in the form of a histogram (values f_1, f_2, ..., f_i at grid points x_1, x_2, ..., x_i, with tabulated CDF values F_1, F_2, ..., F_i).
Then by linear interpolation on [x_{i−1}, x_i]:

F(x) = [(x − x_{i−1}) F_i + (x_i − x) F_{i−1}] / (x_i − x_{i−1}), x_{i−1} ≤ x ≤ x_i

so that setting F(x̂) = ξ and solving for x̂ gives

x̂ = [(x_i − x_{i−1}) ξ + x_{i−1} F_i − x_i F_{i−1}] / (F_i − F_{i−1})
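The interpolation and its inversion can be sketched as (Python sketch; `bisect_right` locates the bin whose CDF interval contains ξ):

```python
import bisect
import random

def table_sample(xs, Fs, rng=random.random):
    """Invert a tabulated, piecewise-linear CDF.
    xs: grid points x_0..x_n; Fs: CDF values with F(x_0)=0, F(x_n)=1.
    Finds i with F_{i-1} <= xi < F_i, then applies the slide's
    linear-interpolation inversion on that bin."""
    xi = rng()
    i = bisect.bisect_right(Fs, xi)
    i = min(max(i, 1), len(xs) - 1)        # clamp to a valid bin
    x0, x1 = xs[i - 1], xs[i]
    F0, F1 = Fs[i - 1], Fs[i]
    return ((x1 - x0) * xi + x0 * F1 - x1 * F0) / (F1 - F0)
```

Sanity checks: with a single bin the inversion is exact, and the midpoint of a CDF segment maps to the midpoint of the corresponding x segment.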
Random Sampling Methods [18]
Sampling Multidimensional Random Variables
If the random quantity is a function of two or more random variables that are independent, the joint PDF and CDF factor:

f(x, y) = f₁(x) f₂(y), F(x, y) = F₁(x) F₂(y)

EXAMPLE: Sampling the direction of an isotropically scattered particle in 3D
Ω(θ, φ) = Ω_x i + Ω_y j + Ω_z k = u i + v j + w k
f(Ω) dΩ = dΩ/4π = sinθ dθ dφ / 4π = d(cosθ) dφ / 4π = dμ dφ / 4π
f(Ω) = f₁(μ) f₂(φ) = (1/2)·(1/2π)
F₁(μ) = ∫_{−1}^{μ} f₁(μ') dμ' = (μ + 1)/2 = ξ₁, or μ = 2ξ₁ − 1
F₂(φ) = ∫_0^{φ} f₂(φ') dφ' = φ/2π = ξ₂, or φ = 2πξ₂
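Sampling μ and φ as above and converting to direction cosines can be sketched as (Python sketch; the function name is illustrative):

```python
import math
import random

def isotropic_direction(rng=random.random):
    """mu = cos(theta) = 2*xi1 - 1, phi = 2*pi*xi2, then
    (u, v, w) = (mu, sqrt(1-mu^2)*cos(phi), sqrt(1-mu^2)*sin(phi))."""
    mu = 2.0 * rng() - 1.0
    phi = 2.0 * math.pi * rng()
    s = math.sqrt(1.0 - mu * mu)
    return (mu, s * math.cos(phi), s * math.sin(phi))

random.seed(3)
dirs = [isotropic_direction() for _ in range(50_000)]
```

Each (u, v, w) is a unit vector, and the components average to zero over many samples, as expected for an isotropic distribution.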
Random Sampling Methods [19]
Probability Density Functions and Direct Sampling Methods

Linear (L1, L2): f(x) = 2x, 0 < x < 1; x̂ ← √ξ
Exponential (E1): f(x) = e^{−x}, 0 ≤ x < ∞; x̂ ← −log ξ
2D Isotropic (C1): f(ρ) = 1/(2π), ρ = (u, v); u ← cos 2πξ, v ← sin 2πξ
3D Isotropic (I1, I2): f(Ω) = 1/(4π), Ω = (u, v, w); u ← 2ξ₁ − 1, v ← √(1 − u²) cos 2πξ₂, w ← √(1 − u²) sin 2πξ₂
Maxwellian (M1, M2, M3): f(x) = (2/T) √(x/(πT)) e^{−x/T}, 0 ≤ x < ∞; x̂ ← T(−log ξ₁ − log ξ₂ cos²(πξ₃/2))
Watt spectrum (W1, W2, W3): f(x) = (2 e^{−ab/4}/√(πa³b)) e^{−x/a} sinh √(bx), 0 < x < ∞; w ← a(−log ξ₁ − log ξ₂ cos²(πξ₃/2)), x̂ ← w + a²b/4 + (2ξ₄ − 1)√(a²bw)
Normal (N1, N2): f(x) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)}, −∞ < x < ∞; x̂ ← μ + σ√(−2 log ξ₁) cos 2πξ₂
Random Sampling Examples [20]
Example: 2D Isotropic, f(ρ) = 1/(2π), ρ = (u, v)

Rejection (old vim):

      SUBROUTINE AZIRN_VIM( S, C )
      IMPLICIT DOUBLE PRECISION (A-H,O-Z)
C     Sample a point in the unit half-disk by rejection, then use the
C     double-angle identities to get sin and cos of the azimuthal angle
  100 R1=2.*RANF()-1.
      R1SQ=R1*R1
      R2=RANF()
      R2SQ=R2*R2
      RSQ=R1SQ+R2SQ
      IF(1.-RSQ)100,105,105
  105 S=2.*R1*R2/RSQ
      C=(R2SQ-R1SQ)/RSQ
      RETURN
      END

Direct (racer, new vim):

      subroutine azirn_new( s, c )
      implicit double precision (a-h,o-z)
      parameter ( twopi = 2.*3.14159265 )
      phi = twopi*ranf()
      c = cos(phi)
      s = sin(phi)
      return
      end
Random Sampling Examples [21]
Example: Watt Spectrum
f(x) = (2 e^{−ab/4}/√(πa³b)) e^{−x/a} sinh √(bx), 0 < x < ∞

Rejection (mcnp) - based on Algorithm R12 from the 3rd Monte Carlo Sampler, Everett & Cashwell:
Define K = 1 + ab/8, L = a(K + √(K² − 1)), M = L/a − 1
Set x ← −log ξ₁, y ← −log ξ₂
If [y − M(x + 1)]² ≤ bLx, accept: return Lx; otherwise, reject

Direct (new vim) - sample from a Maxwellian in the C-of-M frame, transform to the lab frame:
w ← a(−log ξ₁ − log ξ₂ cos²(πξ₃/2))
x ← w + a²b/4 + (2ξ₄ − 1)√(a²bw)
(assume isotropic emission from a fission fragment moving with constant velocity in the C-of-M)
Unpublished sampling scheme, based on the original Watt spectrum derivation
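The rejection scheme can be sketched as (Python sketch; the constants follow the slide's definitions of K, L, M, and the function name is illustrative):

```python
import math
import random

def sample_watt_rejection(a, b, rng=random.random):
    """Rejection sampling of the Watt spectrum:
    K = 1 + ab/8, L = a*(K + sqrt(K^2 - 1)), M = L/a - 1.
    Draw x = -ln(xi1), y = -ln(xi2); accept L*x when
    (y - M*(x + 1))^2 <= b*L*x."""
    K = 1.0 + a * b / 8.0
    L = a * (K + math.sqrt(K * K - 1.0))
    M = L / a - 1.0
    while True:
        x = -math.log(1.0 - rng())   # 1 - xi keeps the log argument positive
        y = -math.log(1.0 - rng())
        if (y - M * (x + 1.0)) ** 2 <= b * L * x:
            return L * x

# For a Watt spectrum the mean energy is 1.5*a + a^2*b/4
random.seed(5)
draws = [sample_watt_rejection(1.0, 1.0) for _ in range(40_000)]
```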
Random Sampling Examples [22]
Example: Linear PDF, f(x) = 2x, 0 ≤ x ≤ 1

Rejection (strictly this is not "rejection", but it has the same flavor):
if ξ₁ ≥ ξ₂ then x̂ ← ξ₁, else x̂ ← ξ₂
or, equivalently, x̂ ← max(ξ₁, ξ₂)

Direct Inversion:
F(x) = x², 0 ≤ x ≤ 1, so F(x̂) = ξ gives x̂ ← √ξ
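Both recipes can be sketched and compared (Python sketch); each yields f(x) = 2x, whose mean is 2/3:

```python
import math
import random

def sample_linear_max(rng=random.random):
    """x-hat = max(xi1, xi2): P{max <= x} = x^2 = F(x), hence f(x) = 2x."""
    return max(rng(), rng())

def sample_linear_inverse(rng=random.random):
    """Direct inversion: F(x) = x^2 = xi  =>  x-hat = sqrt(xi)."""
    return math.sqrt(rng())

random.seed(11)
m1 = sum(sample_linear_max() for _ in range(100_000)) / 100_000
m2 = sum(sample_linear_inverse() for _ in range(100_000)) / 100_000
```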
Random Sampling Examples [23]
Example: Collision Type Sampling
Assume (for photon interactions): μ_tot = μ_cs + μ_fe + μ_pp
Define p₁ = μ_cs/μ_tot, p₂ = μ_fe/μ_tot, and p₃ = μ_pp/μ_tot, with Σ_{i=1}^{3} p_i = 1.
The unit interval is then partitioned by the cumulative sums 0, p₁, p₁ + p₂, p₁ + p₂ + p₃ = 1.
Collision event: photoeffect if p₁ ≤ ξ < p₁ + p₂ (and similarly for the other two subintervals).
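The selection can be sketched as (Python sketch; the event-name strings are illustrative labels for the three subintervals):

```python
import random

def select_collision(mu_cs, mu_fe, mu_pp, rng=random.random):
    """Pick the interaction type with one uniform random number:
    comparing xi*mu_tot against the running sums of the mu's is
    equivalent to comparing xi against p1, p1+p2, p1+p2+p3 = 1."""
    mu_tot = mu_cs + mu_fe + mu_pp
    xi = rng() * mu_tot
    if xi < mu_cs:
        return "compton"            # 0 <= xi/mu_tot < p1
    if xi < mu_cs + mu_fe:
        return "photoeffect"        # p1 <= xi/mu_tot < p1 + p2
    return "pair_production"        # p1 + p2 <= xi/mu_tot <= 1
```

Scaling ξ by μ_tot avoids normalizing the individual coefficients first.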
[24] How Do We Model a Complicated Physical System?
a) Need to know the physics of the system
b) Need to derive equations describing the physical processes
c) Need to generate material-specific data bases (cross sections for particle interactions, kerma factors, Q-factors)
d) Need to translate the equations into a computer program (code)
e) Need to "describe" the geometrical configuration of the system to the computer
f) Need an adequate computer system
g) Need a physicist smart enough to do the job and dumb enough to want to do it.
Transport Equation [25]
Time-Dependent Particle Transport Equation (Boltzmann Transport Equation):

(1/v) ∂ψ(r,E,Ω,t)/∂t + Ω·∇ψ(r,E,Ω,t) + Σ_t(E) ψ(r,E,Ω,t)
  = ∫_0^∞ dE' ∫_{4π} dΩ' Σ_s(E'→E, Ω'→Ω) ψ(r,E',Ω',t)
  + (χ(E)/4π) ∫_0^∞ dE' ν Σ_f(E') ∫_{4π} dΩ' ψ(r,E',Ω',t)
  + (1/4π) Q(r,E,t)