A Simple Pseudo Random Number algorithm



Lecture 7

Generating random numbers is a useful technique in many numerical applications in physics, because many physical phenomena are random and algorithms that use random numbers have applications in scientific problems. A computer algorithm cannot produce true random numbers. True random numbers can be produced from a quantum system, or from any system that is truly random; for example, one can use radioactive decay (see "Throwing Nature's Dice", Ricardo Aguayo, Geoff Simms and P. B. Siegel, Am. J. Phys. 64, 752-758 (June 1996)). However, it is difficult to produce true random numbers quickly, and numbers that merely have the properties of true randomness can often be used instead. It is therefore useful to develop computer algorithms that generate numbers with enough of the properties of true random numbers for scientific applications. Random numbers generated by a computer algorithm are called pseudo-random numbers.

Most compilers come with a pseudo-random number generator. These generators use a numerical algorithm to produce a sequence of numbers that have many of the properties of truly random numbers. Although the sequence of pseudo-random numbers is not truly random, a good generator will produce numbers that give essentially the same results as truly random ones. Most generators determine each pseudo-random number from the previous ones via a formula: once an initial number (or seed) is chosen, the algorithm can generate the rest of the pseudo-random series.

A Simple Pseudo Random Number algorithm

If you want to make your own pseudo-random numbers, a simple algorithm that will generate a sequence of integers between 0 and m is:

    x_{n+1} = (a x_n + b) mod m    (1)

where a and b are constant integers. A sequence of integers x_i is produced by this algorithm. Since all the integers x_i generated are less than m, the sequence will eventually repeat. To make the period as large as possible, we want to choose m to be as large as possible. If m is very large, there is no guarantee that all integers less than m will be included in the sequence, nor is there a guarantee that the integers in the sequence will be uniformly distributed between 0 and m. However, for large m both of these properties are nearly satisfied, and the algorithm works fairly well as a pseudo-random number generator. For a 32-bit machine, a good choice of values is a = 7^5, b = 0, and m = 2^31 - 1, which is a Mersenne prime. The series of numbers produced is fairly evenly distributed between 1 and m. Usually, one does not need to make up one's own pseudo-random number generator. Most C compilers have one built in.

Pseudo Random Numbers in C

There are various commands in C for generating random numbers. A common one is

    random(32767)

This command returns a number with the properties of a random number with equal probability to lie between 0 and 32767 = 2^15 - 1. That is, a 15-bit random number with uniform probability. To obtain a pseudo-random number between 0 and 1, the line of code

    r = random(32767)/32767.0;

does the trick. One can also include a line of code that sets the initial seed, or have the program pick a random seed. I believe the default maximum in gcc is 2^31 - 1 = 2147483647. So in this case, we can use the following code:

    m = pow(2,31) - 1.0;
    r = random()/m;

where m is declared as a double. I would recommend this default option for producing a pseudo-random number with uniform probability between zero and one.

One can generate a random number with uniform probability between a and b from a random number between 0 and 1. For example, the following code should work:

    r = random()/m;
    x = a + r*(b-a);

If r is random with uniform probability between 0 and 1, then x will have a uniform probability between a and b.

Discrete outcomes: Generating a non-uniform probability distribution

Sometimes it is useful to generate random numbers that do not have a uniform distribution. This is fairly easy for the case of a finite number of discrete outcomes. For example, suppose there are N possible outcomes. We can label each possibility with an integer i, running from 1 to N. Let the value of the i-th outcome be z_i, and let P_i be the probability of obtaining z_i. Note that the P_i are unitless, and

    Σ_{i=1}^{N} P_i = 1.

One way to determine the outcome with the correct probability is as follows. Divide the interval [0, 1] into N segments, where the i-th segment has length P_i. That is, the length of the i-th segment is the probability of the outcome z_i. Then, throw a random number r with uniform probability between 0 and 1. Whichever segment r lands in determines the outcome: if r lies in the k-th segment, then the outcome is z_k. Since the probability of landing in a particular segment is proportional to the length of the segment, the outcomes will have the correct probabilities.

Another, more formal, way of expressing this idea is the following. Define

    A_n = Σ_{j=1}^{n} P_j    (2)

where A_0 = 0. A_n is just the sum of the probabilities from 1 to n, and A_N = 1. Now, throw a random number r that has uniform probability between 0 and 1, and find the value of k such that A_{k-1} < r < A_k. Then the outcome is z_k.

We demonstrate the generation of discrete outcomes with a non-uniform distribution with two examples. As a first example, suppose that you want to simulate an unfair coin: the coin is heads 40% of the time and tails 60% of the time. The table below displays z_i, P_i, and A_i:

    i   z_i     P_i   A_i
    1   heads   0.4   0.4
    2   tails   0.6   1.0

The following code will flip the coin with the desired probabilities:

    r = random()/m;
    if (r <= 0.4) then heads;
    if (r > 0.4) then tails;

As a second example, suppose you want to throw an unfair die with probabilities 0.2, 0.3, 0.1, 0.2, 0.1, and 0.1 for the integers 1 to 6 respectively, as shown in the table below:

    i   z_i   P_i   A_i
    1   1     0.2   0.2
    2   2     0.3   0.5
    3   3     0.1   0.6
    4   4     0.2   0.8
    5   5     0.1   0.9
    6   6     0.1   1.0

In both examples, one throws r with a uniform probability distribution between zero and one, and divides the interval between zero and one according to the probabilities desired for the outcomes.

Next week we will discuss how to generate a non-uniform probability distribution when the outcomes are continuous quantities. The method is similar to the case of discrete outcomes that we just covered. We will also derive a method to produce random numbers that follow a Gaussian distribution, which you will use in assignment 4. Now let's cover some physics.

Scattering amplitude and partial waves

In assignment 4 you are asked to simulate a scattering experiment. Your simulation will need to have some random error in the data. Before we discuss how to produce a Gaussian probability distribution for the random errors, let's go over the physics of scattering.

The differential cross section, dσ/dΩ, can be written in terms of a scattering amplitude, f(θ, φ). If the interaction is spherically symmetric, then there is no φ dependence for f. So

    dσ/dΩ = |f(θ)|^2    (3)

where f(θ) is a complex number with units of length. In our application, |f(θ)|^2 will have units of fm^2. For non-relativistic energies, f(θ) can be determined from the Schroedinger equation with the appropriate scattering boundary conditions at r = ∞. If the interaction is spherically symmetric, i.e. V(r) depends only on the magnitude r, then the Schroedinger equation can be separated into the different orbital angular momentum quantum numbers l. We derived this separation for the first assignment when we treated bound states. We obtained the set of equations

    -(ħ^2/2m) [ d^2 u(r)/dr^2 - l(l+1) u(r)/r^2 ] + V(r) u(r) = E u(r)    (4)

with an equation for each value of l. The same separation holds for scattering problems. One obtains a scattering amplitude for each value of orbital angular momentum l, which we label f_l. The complete scattering solution will have a Y_lm(θ, φ) attached for each l. For spherical symmetry there is no φ dependence, so the Y_lm reduce to Legendre polynomials in cos(θ), P_l(cos θ). The scattering amplitude therefore becomes

    f(θ) = Σ_{l=0}^{∞} (2l + 1) f_l P_l(cos θ)    (5)

The scattering amplitude f(θ) and the f_l are complex numbers, with a real and an imaginary part. The sum over orbital angular momentum l in the expression for the scattering amplitude goes to infinity. However, the contribution from large l goes to zero, and one only needs to sum over a few values of l. The maximum value needed for l is roughly pR/ħ, where R is the size of the target and p is the momentum of the projectile. In our
application we will only sum over the l = 0 and l = 1 partial waves.

One can express the partial wave amplitudes f_l in terms of phase shifts. The phase shifts are determined from the solution of the Schroedinger equation for each l value. In terms of the phase shifts δ_l, the elastic scattering amplitudes are

    f_l = e^{i δ_l} sin(δ_l) / k    (6)

where k = p/ħ. You might wonder where the name "phase shift" comes from. The δ_l are the shifts in phase from the free-particle solutions of the Schroedinger equation. In the absence of the potential, i.e. V(r) = 0, the solutions to the Schroedinger equation for orbital angular momentum l are the spherical Bessel functions j_l(kr). In fact, a plane wave traveling in the z-direction, expressed in spherical coordinates, is given by

    e^{ikz} = Σ_{l=0}^{∞} (2l + 1) i^l j_l(kr) P_l(cos θ)    (7)

where θ is the angle with respect to the z-axis. For large r, the spherical Bessel function j_l(kr) → sin(kr - lπ/2)/(kr). The effect of a real potential V(r) is to cause a phase shift in this large-r limit: sin(kr - lπ/2 + δ_l)/(kr).

To solve for the phase shifts δ_l, one just iterates the Schroedinger equation out to large r, as we did for the bound state assignment. However, for the scattering calculation the energy is greater than zero, and the solution oscillates. One can obtain the phase shifts by examining the large-r solution of the discrete Schroedinger equation. We will not solve the Schroedinger equation for the δ_l in assignment 4. I'll give you δ_0 and δ_1 for a particular energy, and you will generate simulated data.
