Modeling and Analysis of Information Technology Systems


Dr. János Sztrik
University of Debrecen, Faculty of Informatics

Reviewers:
Dr. József Bíró, Doctor of the Hungarian Academy of Sciences, Full Professor, Budapest University of Technology and Economics
Dr. Zalán Heszberger, PhD, Associate Professor, Budapest University of Technology and Economics

This book is dedicated to my wife without whom this work could have been finished much earlier.

If anything can go wrong, it will. If you change queues, the one you have left will start to move faster than the one you are in now. Your queue always goes the slowest. Whatever queue you join, no matter how short it looks, it will always take the longest for you to get served.
(Murphy's Laws on reliability and queueing)


Contents

Preface

I Modeling and Analysis of Information Technology Systems

1 Basic Concepts from Probability Theory
  1.1 Brief Summary
  1.2 Some Important Discrete Probability Distributions
  1.3 Some Important Continuous Probability Distributions
2 Fundamentals of Stochastic Modeling
  2.1 Distributions Related to the Exponential Distribution
  2.2 Basics of Reliability Theory
  2.3 Generation of Random Numbers
  2.4 Random Sums
3 Analytic Tools, Transforms
  3.1 Generating Function
  3.2 Laplace-Transform
4 Stochastic Systems
  4.1 Poisson Process
  4.2 Performance Analysis of Some Simple Systems
5 Continuous-Time Markov Chains
  5.1 Birth-Death Processes

II Exercises

6 Basic Concepts from Probability Theory
  6.1 Discrete Probability Distributions
  6.2 Continuous Probability Distributions
7 Fundamentals of Stochastic Modeling
  7.1 Exponential Distribution and Related Distributions
  7.2 Basics of Reliability Theory
  7.3 Random Sums
8 Analytic Tools, Transforms
  8.1 Generating Function
  8.2 Laplace-Transform
9 Stochastic Systems
  9.1 Poisson Process
  9.2 Some Simple Systems

Appendix
Bibliography

Preface

Modern information technologies require innovations that are based on modeling, analyzing, designing and finally implementing new systems. The whole development process assumes well-organized teamwork of experts including engineers, computer scientists, mathematicians and physicists, just to mention some of them. Modern info-communication networks are among the most complex systems, where the reliability and efficiency of the components play a very important role. For a better understanding of the dynamic behavior of the involved processes one has to construct mathematical models which describe the stochastic service of randomly arriving requests. Queueing theory is one of the most commonly used mathematical tools for the performance evaluation of such systems.

The aim of the book is to present the basic methods and approaches, at a Markovian level, for the analysis of not too complicated systems. The main purpose is to understand how models can be constructed and how to analyze them. It is assumed that the reader has been exposed to a first course in probability theory; however, in the text I give a refresher and state the most important principles needed later on. My intention is to show what is behind the formulas and how we can derive them. It is also essential to know which kinds of questions are reasonable and then how to answer them. My experience and advice are to solve, if possible, the same problem in different ways and compare the results. Sometimes very nice closed-form, analytic solutions are obtained, but the main problem is that we cannot compute them for higher values of the involved variables. In this case algorithmic or asymptotic approaches can be very useful. My intention is to find the balance between the mathematical and the practitioner's needs. I feel that a satisfactory middle ground has been established for understanding and applying these tools to practical systems.
I hope that after understanding this book the reader will be able to create his or her own formulas if needed. It should be underlined that most of the models are based on the assumption that the involved random variables are exponentially distributed and independent of each other. We must confess that this assumption is artificial, since in practice the exponential distribution is not so frequent. However, mathematical models based on the memoryless property of the exponential distribution greatly simplify the solution methods, resulting in computable formulas. By using these relatively simple formulas one can easily foresee the effect of a given parameter on the performance measure, and hence the trends can be forecast. Clearly, instead of the exponential distribution one can use other distributions, but in that case the mathematical models will be much more complicated. The analytic

results can help us in validating the results obtained by stochastic simulation. That approach is quite general and is used when analytic expressions cannot be expected; in this case not only the model construction but also the statistical analysis of the output is important. The primary purpose of the book is to show how to create simple models for practical problems, and that is why the general theory of stochastic processes is omitted. It uses only the most important concepts and sometimes states theorems without proofs, but each time the related references are cited. I must confess that the style of the following books greatly influenced me, even if they are at a different level and more comprehensive than this material: Allen [1], Jain [3], Kleinrock [5], Kobayashi and Mark [6], Stewart [11], Tijms [13], Trivedi [14].

This book is intended not only for students of computer science, engineering, operations research and mathematics, but also for those who study at business, management and planning departments. It covers more than one semester and has been tested by graduate students at Debrecen University over the years. It gives a very detailed analysis of the involved systems by giving the density function, distribution function, generating function and Laplace-transform, respectively. Furthermore, Java applets are provided to calculate the main performance measures immediately, by using the pdf version of the book in a WWW environment. I have attempted to provide examples for better understanding, and a collection of exercises with detailed solutions helps the reader in deepening his or her knowledge. I am convinced that the book covers the basic topics in stochastic modeling of practical problems and that it supports students all over the world.

I am indebted to Professor József Bíró for his review, comments and suggestions, which greatly improved the quality of the book. I am very grateful to Márk Kósa, Albert Barnák and Balázs Máté for their help in editing.
All comments and suggestions are welcome at: [email protected]

Debrecen, 2.
János Sztrik

Part I

Modeling and Analysis of Information Technology Systems


Chapter 1

Basic Concepts from Probability Theory

Stochastic modeling of any type of system needs basic knowledge of probability theory. It is my experience that a brief summary of the most important concepts and theorems is very useful, because readers may approach the theory of probability at different levels. The refresher concentrates only on those theorems and distributions which are closely related to this material. It should be noted that there are many good textbooks about the theory of probability all over the world. Moreover, a number of digital versions can be downloaded from the internet, too. I would recommend any of the following books: Allen [1], Gnedenko et al. [2], Jain [3], Kleinrock [5], Kobayashi and Mark [6], Ovcharov and Wentzel [7], Ravichandran [8], Rényi [9], Ross [10], Stewart [11], Tijms [13], Trivedi [14].

1.1 Brief Summary

Theorem 1 (Basic Forms of the Law of Total Probability) Let $B_1, B_2, \ldots$ be a set of mutually exclusive, exhaustive events with positive probabilities and let $A$ be any event. Then

$$P(A) = \sum_i P(A \mid B_i) P(B_i), \tag{1.1}$$

$$F(x) = \sum_i F(x \mid B_i) P(B_i), \qquad f(x) = \sum_i f(x \mid B_i) P(B_i),$$

$$f_X(x) = \int_{-\infty}^{\infty} f_{X \mid Y}(x \mid y) f_Y(y)\,dy,$$

$$F_X(x) = \int_{-\infty}^{\infty} F_{X \mid Y}(x \mid y) f_Y(y)\,dy, \qquad P(A) = \int_{-\infty}^{\infty} P(A \mid Y = y) f_Y(y)\,dy,$$

where $F(x)$ is a distribution function, $f(x)$ is a density function, $f(x,y)$ is a joint density function,

$$f_{X \mid Y}(x \mid y) = \frac{f(x,y)}{f_Y(y)}$$

is a conditional density function, and

$$F_{X \mid Y}(x \mid y) = \int_{-\infty}^{x} f_{X \mid Y}(t \mid y)\,dt$$

is a conditional distribution function.

Theorem 2 (Bayes' Theorem or Bayes' Rule) Let $B_1, B_2, \ldots$ be a set of mutually exclusive, exhaustive events with positive probabilities and let $A$ be any event of positive probability. Then

$$P(B_i \mid A) = \frac{P(A \mid B_i) P(B_i)}{\sum_j P(A \mid B_j) P(B_j)}.$$

Definition 1 Let $p_k = P(X = x_k)$, $k = 1, 2, \ldots$, be the distribution of a discrete random variable $X$. The mean (first moment, expectation, average) of $X$ is defined as $\sum_k p_k x_k$ if this series is absolutely convergent. That is, the mean of $X$ is

$$EX = \sum_k p_k x_k.$$

Definition 2 Let $f(x)$ be the density function of a continuous random variable $X$. If $\int_{-\infty}^{+\infty} |x| f(x)\,dx$ is finite, then the mean is defined by

$$EX = \int_{-\infty}^{+\infty} x f(x)\,dx.$$

Without proof, the main properties of the expectation are as follows. If $EX, EY < \infty$, then

1. $E(X+Y)$ exists and $E(X+Y) = EX + EY$,
2. $E(cX)$ exists and $E(cX) = cEX$,
3. $E(XY)$ exists and $E(XY) = EX \cdot EY$, provided $X$ and $Y$ are independent,

4. $E(aX+b)$ exists and $E(aX+b) = aEX + b$,
5. $(E(XY))^2$ exists and $(E(XY))^2 \le EX^2 \cdot EY^2$, if the second moments exist,
6. if $X \ge 0$, then $EX = \int_0^{\infty} (1 - F(x))\,dx$; for a nonnegative integer-valued $X$, $EX = \sum_{k=1}^{\infty} P(X \ge k)$.

Theorem 3 (Theorem of Total Moments) The most commonly used forms are

$$E(X^n) = \sum_i E(X^n \mid B_i) P(B_i),$$

where $E(X^n \mid B_i)$ denotes the $n$th conditional moment. The continuous version is

$$E(X^n) = \int_{-\infty}^{\infty} E(X^n \mid Y = y) f_Y(y)\,dy.$$

In the case $n = 1$ we have the theorem of total expectation.

Definition 3 (Variance) Let $X$ be a random variable with finite mean $EX = m$. Then $Var(X) = E(X-m)^2$ is called the variance of $X$, provided it is finite.

The following properties hold if $Var(X) < \infty$:

1. $Var(X) = EX^2 - (EX)^2$.
2. $Var(aX + b) = a^2\,Var(X)$ for any $a, b \in \mathbb{R}$.
3. $Var(X) \ge 0$; $Var(X) = 0$ if and only if $P(X = EX) = 1$.

Definition 4 (Squared coefficient of variation) The quantity $C_X^2 = \frac{Var(X)}{(EX)^2}$ is defined as the squared coefficient of variation of the random variable $X$.

1.2 Some Important Discrete Probability Distributions

Binomial Distribution

A random variable $X$ is said to have a binomial distribution with parameters $n, p$ if its distribution is

$$p_k = \binom{n}{k} p^k (1-p)^{n-k}, \quad k = 0, 1, \ldots, n.$$

Notation: $X \sim B(n, p)$. It can be shown that

$$EX = np, \qquad Var(X) = np(1-p), \qquad C_X^2 = \frac{1-p}{np}.$$

If $n = 1$, then $X$ is Bernoulli distributed.

Poisson Distribution

A random variable $X$ is said to have a Poisson distribution with parameter $\lambda$ if

$$p_k = \frac{\lambda^k}{k!} e^{-\lambda}, \quad \lambda > 0, \quad k = 0, 1, \ldots$$

Notation: $X \sim Po(\lambda)$. It is well known that

$$EX = \lambda, \qquad Var(X) = \lambda, \qquad C_X^2 = \frac{1}{\lambda}.$$

It can be proved that

$$\lim_{n \to \infty,\ p \to 0,\ np \to \lambda} \binom{n}{k} p^k (1-p)^{n-k} = \frac{\lambda^k}{k!} e^{-\lambda}, \quad k = 0, 1, \ldots,$$

that is, the binomial distribution can be approximated by the Poisson distribution. The closer $p$ is to zero, the better the approximation. An acceptable rule of thumb for using this approximation is $n \ge 20$ and $p \le 0.05$.

Geometric Distribution

A random variable $X$ is said to have a geometric distribution with parameter $p$ if

$$p_k = p(1-p)^{k-1}, \quad k = 1, 2, \ldots$$

Notation: $X \sim Geo(p)$. It is easily verified that

$$EX = \frac{1}{p}, \qquad Var(X) = \frac{1-p}{p^2}, \qquad C_X^2 = 1-p.$$

The random variable $X^* = X - 1$ is called modified geometric. In this case

$$P(X^* = k) = p(1-p)^k, \quad k = 0, 1, \ldots,$$

$$EX^* = \frac{1-p}{p}, \qquad Var(X^*) = \frac{1-p}{p^2}, \qquad C_{X^*}^2 = \frac{1}{1-p}.$$

Convolution

Definition 5 Let $X$ and $Y$ be independent random variables with distributions $P(X = i) = p_i$, $P(Y = j) = q_j$, $i, j = 0, 1, 2, \ldots$ Then the distribution of $Z = X + Y$,

$$P(Z = k) = \sum_{j=0}^{k} p_j q_{k-j}, \quad k = 0, 1, 2, \ldots,$$

is called the convolution of $X$ and $Y$; that is, we have calculated the distribution of $X + Y$.
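The quality of the Poisson approximation to the binomial can be checked numerically. The following minimal Python sketch (parameter values and helper names are ours, chosen only for illustration) compares the two probability mass functions:

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    # C(n, k) * p^k * (1 - p)^(n - k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    # lambda^k / k! * e^(-lambda)
    return lam**k / factorial(k) * exp(-lam)

n, p = 200, 0.01            # large n, small p
lam = n * p                 # matching Poisson parameter, lambda = 2
max_diff = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam))
               for k in range(20))
```

With these values every point probability agrees to within about $10^{-3}$, consistent with the rule of thumb above.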

Example 1 Show that if $X \sim B(n,p)$, $Y \sim B(m,p)$ and they are independent random variables, then $X + Y \sim B(n+m, p)$.

$$P(X+Y = l) = \sum_{k=0}^{l} \binom{n}{k} p^k (1-p)^{n-k} \binom{m}{l-k} p^{l-k} (1-p)^{m-l+k} = p^l (1-p)^{n+m-l} \sum_{k=0}^{l} \binom{n}{k}\binom{m}{l-k} = \binom{n+m}{l} p^l (1-p)^{n+m-l}.$$

Example 2 Verify that if $X \sim Po(\lambda)$, $Y \sim Po(\beta)$ and they are independent random variables, then $X + Y \sim Po(\lambda + \beta)$.

$$P(X+Y = l) = \sum_{k=0}^{l} \frac{\lambda^k}{k!} e^{-\lambda}\, \frac{\beta^{l-k}}{(l-k)!} e^{-\beta} = \frac{e^{-(\lambda+\beta)}}{l!} \sum_{k=0}^{l} \binom{l}{k} \lambda^k \beta^{l-k} = \frac{(\lambda+\beta)^l}{l!}\, e^{-(\lambda+\beta)}.$$

Example 3 Customers arrive at a busy supermarket according to a Poisson distribution with parameter $\lambda$. Each of them, independently of the others, becomes a buyer with probability $p$. Find the distribution of the number of buyers.

Let $X \sim Po(\lambda)$ denote the number of customers and $Y$ the number of buyers. By
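Example 2 can be verified numerically by computing the convolution of Definition 5 directly and comparing it with the $Po(\lambda+\beta)$ point probabilities. A small sketch (rates are illustrative, not from the book):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam**k / factorial(k) * exp(-lam)

lam, beta = 1.5, 2.5  # illustrative rates

def conv_pmf(l):
    # P(X + Y = l) = sum_k P(X = k) P(Y = l - k)
    return sum(poisson_pmf(k, lam) * poisson_pmf(l - k, beta)
               for k in range(l + 1))

max_err = max(abs(conv_pmf(l) - poisson_pmf(l, lam + beta))
              for l in range(15))
```

The agreement is exact up to floating-point rounding, as the algebra above predicts.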

virtue of the theorem of total probability we have

$$P(Y = n) = \sum_{k=n}^{\infty} P(Y = n \mid X = k)\, P(X = k) = \sum_{k=n}^{\infty} \binom{k}{n} p^n (1-p)^{k-n}\, \frac{\lambda^k}{k!} e^{-\lambda}$$
$$= \frac{p^n e^{-\lambda}}{n!} \sum_{k=n}^{\infty} \frac{\lambda^k (1-p)^{k-n}}{(k-n)!} = \frac{(\lambda p)^n e^{-\lambda}}{n!} \sum_{l=0}^{\infty} \frac{((1-p)\lambda)^l}{l!} = \frac{(\lambda p)^n}{n!}\, e^{-\lambda} e^{(1-p)\lambda} = \frac{(\lambda p)^n}{n!}\, e^{-\lambda p}.$$

That is, $Y \sim Po(\lambda p)$.

1.3 Some Important Continuous Probability Distributions

Uniform Distribution

A random variable $X$ is said to have a uniform distribution on the interval $[a,b]$ if its density function is

$$f(x) = \begin{cases} \frac{1}{b-a}, & \text{if } a \le x \le b, \\ 0, & \text{otherwise.} \end{cases}$$

It is easy to see that its distribution function is

$$F(x) = \begin{cases} 0, & \text{if } x \le a, \\ \frac{x-a}{b-a}, & \text{if } a < x \le b, \\ 1, & \text{if } b < x. \end{cases}$$

Notation: $X \sim U(a,b)$. It is not difficult to show that

$$EX = \frac{a+b}{2}, \qquad Var(X) = \frac{(b-a)^2}{12}, \qquad C_X^2 = \frac{(b-a)^2}{3(a+b)^2}.$$

It is easy to verify that if $X \sim U(0,1)$, then $a + (b-a)X \sim U(a,b)$.

To generate a random number with a given distribution one can successfully use the following procedure. If $F_X^{-1}(x)$ exists, then $Y = F_X(X) \sim U(0,1)$ and thus $X = F_X^{-1}(Y)$. It can be proved as follows:

$$F_Y(x) = P(Y < x) = P(F_X(X) < x) = F_X(F_X^{-1}(x)) = x,$$

that is, $Y \sim U(0,1)$, and therefore $X = F_X^{-1}(Y)$.

Exponential Distribution

A random variable $X$ is said to have an exponential distribution with parameter $\lambda$ if its density function is given by

$$f(x) = \begin{cases} 0, & \text{if } x < 0, \\ \lambda e^{-\lambda x}, & \text{if } x \ge 0. \end{cases}$$

So its distribution function is

$$F(x) = \begin{cases} 0, & \text{if } x < 0, \\ 1 - e^{-\lambda x}, & \text{if } x \ge 0, \end{cases}$$

where $\lambda > 0$. Notation: $X \sim Exp(\lambda)$. It can be proved that

$$EX = \frac{1}{\lambda}, \qquad Var(X) = \frac{1}{\lambda^2}, \qquad C_X^2 = 1.$$

Erlang Distribution

A random variable $Y_n$ is said to have an Erlang distribution with parameters $(n, \lambda)$ if its density function is defined by

$$f(x) = \begin{cases} 0, & \text{if } x < 0, \\ \lambda \frac{(\lambda x)^{n-1}}{(n-1)!} e^{-\lambda x}, & \text{if } x \ge 0. \end{cases}$$

It can be shown that the distribution function is

$$F(x) = \begin{cases} 0, & \text{if } x < 0, \\ 1 - \sum_{k=0}^{n-1} \frac{(\lambda x)^k}{k!} e^{-\lambda x}, & \text{if } x \ge 0, \end{cases}$$

where $n$ is a natural number and $\lambda > 0$. Notation: $X \sim Erl(n, \lambda)$, or $X \sim E_n(\lambda)$. It is easy to see that in the case $n = 1$ it reduces to the exponential distribution. It can be verified that

$$E(Y_n) = \frac{n}{\lambda}, \qquad Var(Y_n) = \frac{n}{\lambda^2}, \qquad C_{Y_n}^2 = \frac{1}{n}.$$
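The closed-form Erlang distribution function can be cross-checked against a direct numerical integration of the density, in the spirit of solving the same problem two ways and comparing results. A minimal sketch (the parameters $n = 4$, $\lambda = 2$ are illustrative):

```python
from math import exp, factorial

n, lam = 4, 2.0  # illustrative Erlang parameters

def erlang_pdf(x):
    # f(x) = lam * (lam x)^(n-1) / (n-1)! * e^(-lam x)
    return lam * (lam * x)**(n - 1) / factorial(n - 1) * exp(-lam * x)

def erlang_cdf(x):
    # F(x) = 1 - sum_{k=0}^{n-1} (lam x)^k / k! * e^(-lam x)
    return 1 - sum((lam * x)**k / factorial(k) * exp(-lam * x)
                   for k in range(n))

# trapezoidal integration of the density over [0, x]
x, steps = 3.0, 200_000
h = x / steps
integral = h * (erlang_pdf(0) / 2
                + sum(erlang_pdf(i * h) for i in range(1, steps))
                + erlang_pdf(x) / 2)
```

The integral and the closed form agree to high precision, confirming the stated $F(x)$.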

Gamma Distribution

A random variable $X$ is said to have a gamma distribution with parameters $(\alpha, \lambda)$ if its density function is given by

$$f(x) = \begin{cases} 0, & \text{if } x < 0, \\ \frac{\lambda (\lambda x)^{\alpha-1}}{\Gamma(\alpha)}\, e^{-\lambda x}, & \text{if } x \ge 0, \end{cases}$$

where $\lambda > 0$, $\alpha > 0$ and

$$\Gamma(\alpha) = \int_0^{\infty} t^{\alpha-1} e^{-t}\,dt$$

is the so-called complete gamma function. Its distribution function cannot be obtained in explicit form, except when $\alpha = n$; in this case it reduces to the Erlang distribution. Notation: $X \sim \Gamma(\alpha, \lambda)$. It can be shown that

$$E(X) = \frac{\alpha}{\lambda}, \qquad Var(X) = \frac{\alpha}{\lambda^2}, \qquad C_X^2 = \frac{1}{\alpha}.$$

$\alpha$ is called the shape parameter and $\lambda$ is called the scale parameter.

Weibull Distribution

A random variable $X$ is said to have a Weibull distribution with parameters $(\lambda, \alpha)$ if its density function is given by

$$f(x) = \begin{cases} 0, & \text{if } x < 0, \\ \lambda \alpha x^{\alpha-1} e^{-\lambda x^{\alpha}}, & \text{if } x \ge 0. \end{cases}$$

It is easy to see that

$$F(x) = \begin{cases} 0, & \text{if } x < 0, \\ 1 - e^{-\lambda x^{\alpha}}, & \text{if } x \ge 0, \end{cases}$$

where $\lambda > 0$ is called the scale parameter and $\alpha > 0$ is called the shape parameter. In particular, in the case $\alpha = 1$ it reduces to the exponential distribution.

Notation: $X \sim W(\lambda, \alpha)$. It can be shown that

$$E(X) = \left(\frac{1}{\lambda}\right)^{1/\alpha} \Gamma\!\left(1 + \frac{1}{\alpha}\right), \qquad Var(X) = \left(\frac{1}{\lambda}\right)^{2/\alpha} \left[\Gamma\!\left(1 + \frac{2}{\alpha}\right) - \Gamma^2\!\left(1 + \frac{1}{\alpha}\right)\right],$$

$$C_X^2 = \frac{2\alpha\,\Gamma\!\left(\frac{2}{\alpha}\right)}{\Gamma^2\!\left(\frac{1}{\alpha}\right)} - 1.$$

Pareto Distribution

A random variable $X$ is said to have a Pareto distribution with parameters $(k, \alpha)$ if its density function is given by

$$f(x) = \begin{cases} 0, & x < k, \\ \alpha k^{\alpha} x^{-\alpha-1}, & x \ge k. \end{cases}$$

Thus the distribution function is

$$F(x) = \begin{cases} 0, & x < k, \\ 1 - \left(\frac{k}{x}\right)^{\alpha}, & x \ge k, \end{cases}$$

where $\alpha, k > 0$. Notation: $X \sim Par(k, \alpha)$, where $k$ is called the location parameter and $\alpha$ is called the shape parameter. It can be proved that

$$E(X) = \frac{k\alpha}{\alpha - 1}, \ \alpha > 1, \qquad E(X^2) = \frac{k^2\alpha}{\alpha - 2}, \ \alpha > 2.$$

Thus

$$Var(X) = \frac{k^2\alpha}{\alpha - 2} - \left(\frac{k\alpha}{\alpha - 1}\right)^2 = \frac{k^2\alpha}{(\alpha-1)^2(\alpha-2)}, \qquad C_X^2 = \frac{(\alpha-1)^2}{\alpha(\alpha-2)} - 1 = \frac{1}{\alpha(\alpha-2)}, \ \alpha > 2.$$

Normal Distribution (Gaussian Distribution)

A random variable $X$ is said to have a normal distribution with parameters $(m, \sigma)$ if its density function is given by

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-m)^2}{2\sigma^2}}.$$

For the distribution function we have

$$F(x) = \int_{-\infty}^{x} f(t)\,dt,$$

where $m \in \mathbb{R}$, $\sigma > 0$. Notation: $X \sim N(m, \sigma)$. For $F(x)$ there is no closed-form expression. In particular, if $m = 0$, $\sigma = 1$, then $X \sim N(0,1)$, which is the standard normal distribution. In this case the traditional notation for the density and distribution function is

$$\varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}, \qquad \Phi(x) = \int_{-\infty}^{x} \varphi(t)\,dt.$$

It can be proved that if $X \sim N(m, \sigma)$, then

$$P(X < x) = \Phi\!\left(\frac{x-m}{\sigma}\right),$$

and furthermore $\Phi(-x) = 1 - \Phi(x)$. It is well known that

$$E(X) = m, \qquad Var(X) = \sigma^2, \qquad C_X^2 = \frac{\sigma^2}{m^2}.$$

Lognormal Distribution

Let $Y \sim N(m, \sigma)$; then the random variable $X = e^Y$ is said to have a lognormal distribution with parameters $(m, \sigma)$. Notation: $X \sim LN(m, \sigma)$. It is not difficult to verify that

$$P(X < x) = P(e^Y < x) = P(Y < \ln x),$$

thus

$$F_X(x) = \Phi\!\left(\frac{\ln x - m}{\sigma}\right), \quad x > 0, \qquad f_X(x) = \frac{1}{\sigma x}\,\varphi\!\left(\frac{\ln x - m}{\sigma}\right), \quad x > 0.$$

It can be shown that

$$E(X) = e^{m + \frac{\sigma^2}{2}}, \qquad Var(X) = e^{2m + \sigma^2}\left(e^{\sigma^2} - 1\right), \qquad C_X^2 = e^{\sigma^2} - 1.$$

Theorem 4 (Markov Inequality) Let $X$ be a nonnegative random variable with finite mean, that is, $EX < \infty$. Then for any $\delta > 0$

$$P(X \ge \delta) \le \frac{EX}{\delta}.$$

Theorem 5 (Chebyshev Inequality) Let $X$ be a random variable for which $Var(X) < \infty$, $EX = m$. Then for any $\varepsilon > 0$

$$P(|X - m| \ge \varepsilon) \le \frac{Var(X)}{\varepsilon^2}.$$

Theorem 6 (Central Limit Theorem) Let $X_1, X_2, \ldots$ be independent and identically distributed random variables for which $Var(X_i) = \sigma^2 < \infty$, $E(X_i) = m$. Then

$$\lim_{n \to \infty} P\!\left(\frac{X_1 + \ldots + X_n - nm}{\sqrt{n}\,\sigma} < x\right) = \Phi(x).$$

In particular, if the $X_i = \chi_i$ are indicator variables, then $X_1 + \ldots + X_n \sim B(n,p)$ and thus

$$P(X_1 + \ldots + X_n < x) = \sum_{k < x} \binom{n}{k} p^k (1-p)^{n-k} \approx \Phi\!\left(\frac{x - np}{\sqrt{npq}}\right).$$

The local form is

$$\binom{n}{k} p^k (1-p)^{n-k} \approx \frac{1}{\sqrt{2\pi np(1-p)}}\, e^{-\frac{(k-np)^2}{2np(1-p)}}.$$

Practical experiments have shown that if $n$ is sufficiently large and $\frac{9}{n+9} \le p \le \frac{n}{n+9}$, then the normal distribution provides a good approximation to the binomial one.
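The normal approximation to the binomial can again be tested by computing both sides. A minimal sketch, using the error function for $\Phi$ (the parameters $n = 400$, $p = 0.5$, $x = 215$ are illustrative):

```python
from math import comb, erf, sqrt

def binom_cdf(x, n, p):
    # P(X_1 + ... + X_n < x): the k < x sum from the theorem
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if k < x)

def Phi(x):
    # standard normal distribution function via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

n, p = 400, 0.5
x = 215
approx = Phi((x - n * p) / sqrt(n * p * (1 - p)))
exact = binom_cdf(x, n, p)
```

Here $p = 0.5$ lies well inside the interval $[9/(n+9),\, n/(n+9)]$, and the two values agree to within about $10^{-2}$; a half-integer continuity correction would tighten the agreement further.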


Chapter 2

Fundamentals of Stochastic Modeling

This chapter is devoted to the most important distributions derived from the exponential distribution. The lifetimes of series and parallel systems are investigated, which play a crucial role in reliability theory. It is shown how to generate random numbers having a given distribution. Finally, random sums are treated, which occur in many practical situations. The material is based mainly on the following books: Allen [1], Gnedenko, Belyayev, Szolovjev [2], Kleinrock [5], Ovcharov [7], Ravichandran [8], Ross [10], Stewart [11], Tijms [13], Trivedi [14].

2.1 Distributions Related to the Exponential Distribution

Theorem 7 (Memoryless or Markov property) If $X \sim Exp(\lambda)$, then it satisfies the following, so-called memoryless, or Markov property:

$$P(X < x + y \mid X \ge y) = P(X < x), \quad x > 0,\ y > 0,$$
$$P(X \ge x + y \mid X \ge y) = P(X \ge x), \quad x > 0,\ y > 0.$$

Proof:

$$P(X < x + y \mid X \ge y) = \frac{P(y \le X < x + y)}{P(X \ge y)} = \frac{F(x+y) - F(y)}{1 - F(y)} = \frac{(1 - e^{-\lambda(x+y)}) - (1 - e^{-\lambda y})}{e^{-\lambda y}} = \frac{e^{-\lambda y}(1 - e^{-\lambda x})}{e^{-\lambda y}} = 1 - e^{-\lambda x} = F(x) = P(X < x).$$

The proof of the second formula can be carried out in the same way.

Theorem 8 $e^{-\lambda h} = 1 - \lambda h + o(h)$, where $o(h)$ (small ordo $h$) is defined by $\lim_{h \to 0} \frac{o(h)}{h} = 0$.

Proof: As can be seen, the statement is equivalent to

$$\lim_{h \to 0} \frac{e^{-\lambda h} - 1 + \lambda h}{h} = 0,$$

which can be proved by applying L'Hospital's rule. That is,

$$\lim_{h \to 0} \frac{e^{-\lambda h} - 1 + \lambda h}{h} = \lim_{h \to 0} \left(-\lambda e^{-\lambda h} + \lambda\right) = 0.$$

Theorem 9 If $F(x)$ is the distribution function of a random variable $X$ for which $F(0) = 0$ and

$$\frac{F(x+h) - F(x)}{1 - F(x)} = \lambda h + o(h), \quad \text{if } x \ge 0,$$

then $F(x) = 1 - e^{-\lambda x}$, if $x \ge 0$.

Proof: It can be seen from the conditions that

$$\frac{F'(x)}{1 - F(x)} = \lim_{h \to 0} \frac{\lambda h + o(h)}{h} = \lambda,$$

thus

$$\int \frac{F'(x)}{1 - F(x)}\,dx = \int \lambda\,dx, \qquad -\ln(1 - F(x)) = \lambda x - \ln c, \qquad 1 - F(x) = c\,e^{-\lambda x},$$

that is, $F(x) = 1 - c\,e^{-\lambda x}$. According to the initial condition $F(0) = 0$ we have $c = 1$; consequently $F(x) = 1 - e^{-\lambda x}$.

In many practical problems it is important to determine the distribution of the minimum of independent random variables.

Theorem 10 (Distribution of the lifetime of a series system) If the $X_i \sim Exp(\lambda_i)$ are independent random variables ($i = 1, 2, \ldots, n$), then $Y = \min(X_1, \ldots, X_n)$ is also exponentially distributed, with parameter $\sum_{i=1}^{n} \lambda_i$.

Proof: Using the properties of probability and the independence of the events we have

$$P(Y < x) = 1 - P(Y \ge x) = 1 - P(X_1 \ge x, \ldots, X_n \ge x) = 1 - \prod_{i=1}^{n} P(X_i \ge x) = 1 - \prod_{i=1}^{n} e^{-\lambda_i x} = 1 - e^{-\left(\sum_{i=1}^{n} \lambda_i\right)x}.$$
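Theorem 10 is easy to confirm by simulation: the sample mean of the series-system lifetime should approach $1/\sum_i \lambda_i$. A seeded Monte Carlo sketch (the rates are illustrative):

```python
import random

random.seed(1)
rates = [0.5, 1.0, 1.5]   # illustrative component failure rates
N = 200_000

# lifetime of a series system = minimum of the component lifetimes
samples = [min(random.expovariate(lam) for lam in rates) for _ in range(N)]
mean_est = sum(samples) / N

theory = 1 / sum(rates)   # Exp(sum of rates) has mean 1/3 here
```

With $2 \cdot 10^5$ replications the empirical mean agrees with $1/3$ to about three decimal places.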

Example 4 Let $X, Y$ be independent exponentially distributed random variables with parameters $\lambda, \mu$, respectively. Find the probability that $X = \min(X, Y)$.

$X = \min(X, Y)$ if and only if $X < Y$. By the theorem of total probability we have

$$P(X < Y) = \int_0^{\infty} P(X < x) f_Y(x)\,dx = \int_0^{\infty} (1 - e^{-\lambda x})\, \mu e^{-\mu x}\,dx = \int_0^{\infty} \mu e^{-\mu x}\,dx - \frac{\mu}{\lambda+\mu}\int_0^{\infty} (\lambda+\mu) e^{-(\lambda+\mu)x}\,dx = 1 - \frac{\mu}{\lambda+\mu} = \frac{\lambda}{\lambda+\mu}.$$

Example 5 (Distribution of the lifetime of a parallel system) Let $X_1, \ldots, X_n$ be independent random variables and $Y = \max(X_1, \ldots, X_n)$. Find the distribution of $Y$.

$$P(Y < x) = P(X_1 < x, \ldots, X_n < x) = \prod_{i=1}^{n} P(X_i < x) = \prod_{i=1}^{n} F_{X_i}(x).$$

If $X_i \sim Exp(\lambda_i)$, then

$$F_Y(x) = \prod_{i=1}^{n} \left(1 - e^{-\lambda_i x}\right).$$

In addition, if $\lambda_i = \lambda$, $i = 1, \ldots, n$, then $F_Y(x) = \left(1 - e^{-\lambda x}\right)^n$.

Example 6 Find the mean lifetime of a parallel system with two independent and exponentially distributed components.

Let us solve the problem first according to the definition of the mean. In this case

$$f_{\max(X_1,X_2)}(x) = \left[(1 - e^{-\lambda_1 x})(1 - e^{-\lambda_2 x})\right]' = \left[1 - e^{-\lambda_1 x} - e^{-\lambda_2 x} + e^{-(\lambda_1+\lambda_2)x}\right]' = \lambda_1 e^{-\lambda_1 x} + \lambda_2 e^{-\lambda_2 x} - (\lambda_1+\lambda_2)\, e^{-(\lambda_1+\lambda_2)x}.$$

Thus

$$E(\max(X_1, X_2)) = \int_0^{\infty} x\, f_{\max(X_1,X_2)}(x)\,dx = \frac{1}{\lambda_1} + \frac{1}{\lambda_2} - \frac{1}{\lambda_1 + \lambda_2}.$$
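The mean of Example 6 can be checked by a seeded simulation of the two-component parallel system (component rates below are illustrative):

```python
import random

random.seed(2)
l1, l2 = 1.0, 2.0   # illustrative component failure rates
N = 200_000

# lifetime of a parallel system = maximum of the component lifetimes
samples = [max(random.expovariate(l1), random.expovariate(l2))
           for _ in range(N)]
mean_est = sum(samples) / N

theory = 1 / l1 + 1 / l2 - 1 / (l1 + l2)   # = 7/6 here
```

The empirical mean matches $7/6$ to within the Monte Carlo error of a few thousandths.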

This can be expressed as

$$E(\max(X_1, X_2)) = \frac{\lambda_1^2 + \lambda_1\lambda_2 + \lambda_2^2}{\lambda_1\lambda_2(\lambda_1+\lambda_2)} = \frac{1}{\lambda_1+\lambda_2} + \frac{\lambda_1}{\lambda_1+\lambda_2}\cdot\frac{1}{\lambda_2} + \frac{\lambda_2}{\lambda_1+\lambda_2}\cdot\frac{1}{\lambda_1}.$$

Now let us show how this problem can be solved by probabilistic reasoning. At the beginning both components are operating, thus the mean time to the first failure is $\frac{1}{\lambda_1+\lambda_2}$. The second failure happens when the remaining component fails, too. We have two cases, depending on which component failed first. It is easy to see that, by the memoryless property of the exponential distribution, the distribution of the residual lifetime of the remaining component is the same as it was at the beginning. Then, using the theorem of total expectation, for the mean residual lifetime after the first failure we have

$$\underbrace{\frac{\lambda_1}{\lambda_1+\lambda_2}\cdot\frac{1}{\lambda_2}}_{\text{component 1 failed first}} + \underbrace{\frac{\lambda_2}{\lambda_1+\lambda_2}\cdot\frac{1}{\lambda_1}}_{\text{component 2 failed first}}.$$

Hence the mean operating time of the parallel system is

$$\frac{1}{\lambda_1+\lambda_2} + \frac{\lambda_1}{\lambda_1+\lambda_2}\cdot\frac{1}{\lambda_2} + \frac{\lambda_2}{\lambda_1+\lambda_2}\cdot\frac{1}{\lambda_1}.$$

In the homogeneous case it reduces to $\frac{1}{2\lambda} + \frac{1}{\lambda}$, as we will see in the next problem. It is easy to see that the second moment of the lifetime can be calculated in the same way, using either the definition or the theorem of second moments, and thus the variance can be obtained. Of course these are much more complicated formulas, but in the homogeneous case they can be simplified, as we see in the next example.

Example 7 Find the mean and variance of the lifetime of a parallel system with homogeneous, independent and exponentially distributed components, that is, $X_i \sim Exp(\lambda)$, $i = 1, \ldots, n$.

$$P(\max(X_1, \ldots, X_n) < x) = \prod_{i=1}^{n} P(X_i < x) = \left(1 - e^{-\lambda x}\right)^n.$$

As is well known, if $X \ge 0$ then

$$EX = \int_0^{\infty} P(X \ge x)\,dx = \int_0^{\infty} (1 - F(x))\,dx.$$

Using the substitution $t = 1 - e^{-\lambda x}$ we get

$$E(\max(X_1, \ldots, X_n)) = \int_0^{\infty} \left(1 - (1 - e^{-\lambda x})^n\right) dx = \frac{1}{\lambda}\int_0^1 \frac{1 - t^n}{1 - t}\,dt = \frac{1}{\lambda}\int_0^1 \left(1 + t + \ldots + t^{n-1}\right) dt$$
$$= \frac{1}{\lambda}\left[1 + \frac{1}{2} + \ldots + \frac{1}{n}\right] = \underbrace{\frac{1}{n\lambda}}_{\text{first failure}} + \underbrace{\frac{1}{(n-1)\lambda}}_{\text{second failure -- first failure}} + \ldots + \underbrace{\frac{1}{\lambda}}_{n\text{th failure -- }(n-1)\text{th failure}}.$$

Due to the memoryless property of the exponential distribution it is easy to see that the time differences between the consecutive failures are exponentially distributed. More precisely, the time between the $(k-1)$th and $k$th failures is exponentially distributed with parameter $(n-k+1)\lambda$, $k = 1, \ldots, n$. Moreover, these times are independent of each other. This fact can be used to get the mean and variance of the time of the $k$th failure. After these arguments it is clear that

$$E(\text{time of the } k\text{th failure}) = \frac{1}{n\lambda} + \ldots + \frac{1}{(n-k+1)\lambda},$$
$$Var(\text{time of the } k\text{th failure}) = \frac{1}{(n\lambda)^2} + \ldots + \frac{1}{((n-k+1)\lambda)^2}, \qquad k = 1, \ldots, n.$$

In particular, the variance of the lifetime of the parallel system is

$$\frac{1}{(n\lambda)^2} + \ldots + \frac{1}{\lambda^2}.$$

Definition 6 Let $X$ and $Y$ be independent random variables with density functions $f_X(x)$ and $f_Y(x)$, respectively. Then the density function of $Z = X + Y$ can be obtained as

$$f_Z(z) = \int_{-\infty}^{\infty} f_X(x) f_Y(z-x)\,dx,$$

which is said to be the convolution of $f_X(x)$ and $f_Y(x)$. In addition, if $X \ge 0$ and $Y \ge 0$, then

$$f_Z(z) = \int_0^{z} f_X(x) f_Y(z-x)\,dx.$$

Example 8 Let $X$ and $Y$ be independent and exponentially distributed random variables with parameter $\lambda$. Find their convolution.

After substitution we have

$$f_{X+Y}(z) = \int_0^z \lambda e^{-\lambda x}\, \lambda e^{-\lambda(z-x)}\,dx = \int_0^z \lambda^2 e^{-\lambda z}\,dx = \lambda^2 e^{-\lambda z} z = \lambda(\lambda z)\, e^{-\lambda z},$$

which shows that the sum of independent exponentially distributed random variables is not exponentially distributed.

Example 9 Let $X_1, \ldots, X_n$ be independent and exponentially distributed random variables with the same parameter $\lambda$. Show that

$$f_{X_1 + \ldots + X_n}(z) = \lambda \frac{(\lambda z)^{n-1}}{(n-1)!}\, e^{-\lambda z}.$$

To prove this we shall use induction. As we have seen, this statement is true for $k = 2$. Let us assume it is valid for $k = n-1$ and let us see what happens for $k = n$:

$$f_{X_1 + \ldots + X_{n-1} + X_n}(z) = \int_0^z \lambda \frac{(\lambda x)^{n-2}}{(n-2)!}\, e^{-\lambda x}\, \lambda e^{-\lambda(z-x)}\,dx = \frac{\lambda^2 \lambda^{n-2} e^{-\lambda z}}{(n-2)!} \int_0^z x^{n-2}\,dx = \frac{\lambda^2 \lambda^{n-2} e^{-\lambda z}}{(n-2)!}\cdot\frac{z^{n-1}}{n-1} = \lambda \frac{(\lambda z)^{n-1}}{(n-1)!}\, e^{-\lambda z},$$

which is exactly the density function of an Erlang distribution with parameters $(n, \lambda)$. This representation of the Erlang distribution helps us to compute its mean and variance in a very simple way, without using its density function.

The Erlang distribution is very useful for approximating the distribution of a random variable $X$ for which the squared coefficient of variation satisfies $C_X^2 \le 1$. In other words, if the first two moments of $X$ are given, then

$$f_Y(t) = p\,\lambda \frac{(\lambda t)^{k-2}}{(k-2)!}\, e^{-\lambda t} + (1-p)\,\lambda \frac{(\lambda t)^{k-1}}{(k-1)!}\, e^{-\lambda t}$$

is the mixture of two Erlang distributions with parameters $(k-1, \lambda)$ and $(k, \lambda)$, where

$$p = \frac{1}{1 + C_X^2}\left[k C_X^2 - \sqrt{k(1 + C_X^2) - k^2 C_X^2}\,\right], \qquad \lambda = \frac{k - p}{E(X)}, \qquad \frac{1}{k} \le C_X^2 \le \frac{1}{k-1},$$

with the property that $E(Y) = E(X)$, $C_Y^2 = C_X^2$.
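The two-moment Erlang-mixture fit above can be verified directly: compute $p$ and $\lambda$ from a given mean and $C_X^2$, then check that the mixture really reproduces both moments. A minimal sketch (the target moments are illustrative; function names are ours):

```python
from math import sqrt, ceil

def erlang_mixture_fit(mean, scv):
    # two-moment E_{k-1,k} fit for scv = C_X^2 <= 1
    k = ceil(1 / scv)   # so that 1/k <= scv <= 1/(k-1)
    p = (k * scv - sqrt(k * (1 + scv) - k**2 * scv)) / (1 + scv)
    lam = (k - p) / mean
    return k, p, lam

mean, scv = 2.0, 0.75   # illustrative target mean and C_X^2
k, p, lam = erlang_mixture_fit(mean, scv)

# moments of the mixture: Erl(k-1, lam) w.p. p, Erl(k, lam) w.p. 1-p
m1 = (p * (k - 1) + (1 - p) * k) / lam
m2 = (p * (k - 1) * k + (1 - p) * k * (k + 1)) / lam**2
scv_fit = m2 / m1**2 - 1
```

Here $C_X^2 = 0.75$ gives $k = 2$, so the fit mixes an exponential with an Erlang of order 2, and both target moments are recovered exactly.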

Such a distribution $Y$ is denoted by $E_{k-1,k}(\lambda)$, and it matches $X$ in the first two moments.

Hypoexponential Distribution

Let $X_i \sim Exp(\lambda_i)$ ($i = 1, \ldots, n$) be independent exponentially distributed random variables with distinct parameters. The random variable $Y_n = X_1 + \ldots + X_n$ is said to have a hypoexponential distribution. It can be shown that its density function is given by

$$f_{Y_n}(x) = \begin{cases} 0, & \text{if } x < 0, \\ \sum_{j=1}^{n} \left(\prod_{k=1,\,k \ne j}^{n} \frac{\lambda_k}{\lambda_k - \lambda_j}\right) \lambda_j e^{-\lambda_j x}, & \text{if } x \ge 0. \end{cases}$$

It is easy to see that

$$E(Y_n) = \sum_{i=1}^{n} \frac{1}{\lambda_i}, \qquad Var(Y_n) = \sum_{i=1}^{n} \frac{1}{\lambda_i^2}.$$

Thus for the squared coefficient of variation we have

$$C_{Y_n}^2 = \frac{\sum_{i=1}^{n} \frac{1}{\lambda_i^2}}{\left(\sum_{i=1}^{n} \frac{1}{\lambda_i}\right)^2} \le 1.$$

Hyperexponential Distribution

Let $X_i \sim Exp(\lambda_i)$ ($i = 1, \ldots, n$) and let $p_1, \ldots, p_n$ be a distribution. A random variable $Y_n$ is said to have a hyperexponential distribution if its density function is given by

$$f_{Y_n}(x) = \begin{cases} 0, & \text{if } x < 0, \\ \sum_{i=1}^{n} p_i \lambda_i e^{-\lambda_i x}, & \text{if } x \ge 0. \end{cases}$$

Its distribution function is

$$F_{Y_n}(x) = \begin{cases} 0, & \text{if } x < 0, \\ 1 - \sum_{i=1}^{n} p_i e^{-\lambda_i x}, & \text{if } x \ge 0. \end{cases}$$

It is easy to see that

$$E(Y_n) = \sum_{i=1}^{n} \frac{p_i}{\lambda_i}, \qquad E(Y_n^2) = \sum_{i=1}^{n} \frac{2 p_i}{\lambda_i^2}.$$

It can be shown that

$$C_{Y_n}^2 = \frac{2\sum_{i=1}^{n} \frac{p_i}{\lambda_i^2}}{\left(\sum_{i=1}^{n} \frac{p_i}{\lambda_i}\right)^2} - 1 \ge 1.$$

In the case when, for a random variable $X$, $C_X^2 > 1$, the following two-moment fit is suggested:

$$f_Y(t) = p\,\lambda_1 e^{-\lambda_1 t} + (1-p)\,\lambda_2 e^{-\lambda_2 t},$$

that is, $Y$ is a 2-phase hyperexponentially distributed random variable. Since the density function of $Y$ contains 3 parameters and the fit is based on the first two moments, the distribution is not uniquely determined. The most commonly used procedure is the balanced mean method, that is,

$$\frac{p}{\lambda_1} = \frac{1-p}{\lambda_2}.$$

In this case

$$E(Y) = \frac{p}{\lambda_1} + \frac{1-p}{\lambda_2} = E(X), \qquad E(Y^2) = \frac{2p}{\lambda_1^2} + \frac{2(1-p)}{\lambda_2^2} = E(X^2).$$

The solution is

$$p = \frac{1}{2}\left(1 + \sqrt{\frac{C_X^2 - 1}{C_X^2 + 1}}\right), \qquad \lambda_1 = \frac{2p}{E(X)}, \qquad \lambda_2 = \frac{2(1-p)}{E(X)}.$$

If the fit is based on the first three moments $m_1, m_2, m_3$, then the condition $m_1 m_3 \ge \frac{3}{2} m_2^2$ is needed, and it gives a unique solution. It can be shown that the gamma and lognormal distributions satisfy this condition. The parameters of the resulting unique hyperexponential distribution are

$$\lambda_{1,2} = \frac{1}{2}\left(a_1 \pm \sqrt{a_1^2 - 4 a_2}\right), \qquad p = \frac{\lambda_1(\lambda_2 m_1 - 1)}{\lambda_2 - \lambda_1},$$

where

$$a_2 = \frac{6 m_1^2 - 3 m_2}{\frac{3}{2} m_2^2 - m_1 m_3}, \qquad a_1 = \frac{1 + \frac{1}{2} m_2 a_2}{m_1}.$$
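The balanced-mean fit can be checked in a few lines: compute $p$, $\lambda_1$, $\lambda_2$ from a target mean and $C_X^2$, then recompute the first two moments of the resulting 2-phase hyperexponential distribution. A minimal sketch (the targets are illustrative; the helper name is ours):

```python
from math import sqrt

def h2_balanced_fit(mean, scv):
    # balanced-mean two-moment fit, valid for scv = C_X^2 > 1
    p = 0.5 * (1 + sqrt((scv - 1) / (scv + 1)))
    lam1 = 2 * p / mean
    lam2 = 2 * (1 - p) / mean
    return p, lam1, lam2

mean, scv = 1.0, 4.0    # illustrative targets (C_X^2 = 4)
p, lam1, lam2 = h2_balanced_fit(mean, scv)

# moments of Y: p-mixture of Exp(lam1) and Exp(lam2)
m1 = p / lam1 + (1 - p) / lam2
m2 = 2 * p / lam1**2 + 2 * (1 - p) / lam2**2
scv_fit = m2 / m1**2 - 1
```

Both branches of the mixture contribute mean $E(X)/2$ (the "balanced mean"), and the recomputed moments reproduce the targets exactly.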

Mixture of Distributions

Definition 7 Let $X_1, X_2, \ldots$ be random variables and $p_1, p_2, \ldots$ be a distribution. The distribution function

$$F(x) = \sum_i p_i F_{X_i}(x)$$

is called the mixture of the distribution functions $F_{X_i}(x)$ with weights $p_i$. Similarly, the density function

$$f(x) = \sum_i p_i f_{X_i}(x)$$

is called the mixture of the density functions $f_{X_i}(x)$ with weights $p_i$.

It is easy to see that $F(x)$ and $f(x)$ are indeed distribution and density functions, respectively. Using this terminology we can say that the hyperexponential distribution is a mixture of exponential distributions.

2.2 Basics of Reliability Theory

Definition 8 Let a random variable $X$ denote the lifetime or time to failure of a component. Then

$$R(t) = P(X > t) = 1 - F(t)$$

is called the reliability function of the component.

It can easily be seen that

$$R'(t) = -f_X(t), \qquad E(X) = \int_0^{\infty} R(t)\,dt.$$

The reliability function is very useful in reliability investigations of different complex systems. On the basis of the previous arguments we can formulate the reliability function of series and parallel systems, namely:

Series system: $R_S(t) = \prod_{i=1}^{n} R_i(t)$.

Parallel system: $R_P(t) = 1 - \prod_{i=1}^{n} \left(1 - R_i(t)\right)$.

Another important function is the failure rate (hazard rate) function, defined by

$$h(t) = \lim_{x \to 0} \frac{P(X < t + x \mid X \ge t)}{x} = \lim_{x \to 0} \frac{F(t+x) - F(t)}{x\,R(t)} = \lim_{x \to 0} \frac{R(t) - R(t+x)}{x\,R(t)} = \frac{f(t)}{R(t)}.$$

Let us show how $R(t)$ can be expressed with the help of $h(t)$. Namely,

$$\int_0^t h(x)\,dx = \int_0^t \frac{-R'(x)}{R(x)}\,dx = \left[-\ln R(x)\right]_0^t = -\ln R(t),$$

since $R(0) = 1$. Thus

$$R(t) = e^{-\int_0^t h(x)\,dx}.$$

Let $H(t) = \int_0^t h(x)\,dx$, which is called the cumulative failure rate (cumulative hazard rate) function. So we have

$$R(t) = e^{-H(t)}.$$

In the following, $R(t)$, $h(t)$ and $H(t)$ are listed for some important distributions. These formulas can be computed from the definitions of the involved random variables. At the same time we show what the relationship is between the monotonicity of $h(t)$ and $C_X^2$.

Exponential distribution: $X \sim Exp(\lambda)$,

$$R(t) = e^{-\lambda t}, \qquad h(t) = \lambda, \qquad H(t) = \lambda t, \qquad C_X^2 = 1.$$

Erlang distribution: $X \sim Erl(n, \lambda)$,

$$R(t) = \sum_{i=0}^{n-1} \frac{(\lambda t)^i}{i!}\, e^{-\lambda t}, \qquad h(t) = \frac{\lambda \frac{(\lambda t)^{n-1}}{(n-1)!}\, e^{-\lambda t}}{\sum_{i=0}^{n-1} \frac{(\lambda t)^i}{i!}\, e^{-\lambda t}} = \frac{\lambda \frac{(\lambda t)^{n-1}}{(n-1)!}}{\sum_{i=0}^{n-1} \frac{(\lambda t)^i}{i!}},$$

which is a monotone increasing function with image in the interval $[0, \lambda)$, and

$$C_X^2 = \frac{n/\lambda^2}{(n/\lambda)^2} = \frac{1}{n} \le 1.$$

Weibull distribution: $X \sim W(\lambda, \alpha)$,

$$R(t) = e^{-\lambda t^{\alpha}}, \qquad h(t) = \frac{\lambda\alpha t^{\alpha-1} e^{-\lambda t^{\alpha}}}{e^{-\lambda t^{\alpha}}} = \lambda\alpha t^{\alpha-1}, \qquad H(t) = \lambda t^{\alpha}.$$

That is, $h(t)$ is monotone increasing for $\alpha > 1$ and monotone decreasing for $\alpha < 1$; for $\alpha = 1$, $h(t) = \lambda$. Furthermore,

$$C_X^2 = \frac{\left(\frac{1}{\lambda}\right)^{2/\alpha}\left(\Gamma\!\left(1 + \frac{2}{\alpha}\right) - \Gamma^2\!\left(1 + \frac{1}{\alpha}\right)\right)}{\left(\frac{1}{\lambda}\right)^{2/\alpha}\Gamma^2\!\left(1 + \frac{1}{\alpha}\right)} = \frac{2\alpha\,\Gamma\!\left(\frac{2}{\alpha}\right)}{\Gamma^2\!\left(\frac{1}{\alpha}\right)} - 1.$$

It can be shown that

$$C_X^2 > 1 \ \text{ if } 0 < \alpha < 1, \qquad C_X^2 < 1 \ \text{ if } \alpha > 1.$$

Pareto distribution: $X \sim Par(k, \alpha)$,

$$R(t) = \left(\frac{k}{t}\right)^{\alpha}, \qquad h(t) = \frac{\alpha}{t}, \qquad H(t) = \alpha \ln\frac{t}{k}, \qquad t \ge k,$$

$$C_X^2 = \frac{1}{\alpha(\alpha-2)}, \quad \alpha > 2.$$

Seeing the above examples we might expect that if $h(t)$ is a monotone increasing (decreasing) function, then $C_X^2 < 1$ ($> 1$). However, in the case of the Pareto distribution $h(t)$ is monotone decreasing, but it is easy to show that $C_X^2 < 1$ if $\alpha > 1 + \sqrt{2}$ and $C_X^2 > 1$ if $2 < \alpha < 1 + \sqrt{2}$.

2.3 Generation of Random Numbers

We have seen that the generation of random numbers can be carried out with the help of the following formula, sometimes called the inverse transformation method:

$$Y = F_X(X) \sim U(0,1), \quad \text{and thus} \quad X = F_X^{-1}(Y).$$

In the next examples we show how to generate random numbers having the most important continuous distributions.

Example 10 Generate an exponentially distributed random number with parameter $\lambda$.

If $X \sim Exp(\lambda)$ and $Y \sim U(0,1)$, then $1 - e^{-\lambda X} = Y$, so if we can generate a random number $Y$ uniformly distributed on $[0,1]$, then

$$X = -\frac{1}{\lambda}\ln(1 - Y)$$

is the desired exponentially distributed random number.

Example 11 Generate a random number having an Erlang distribution with parameters $(n, \lambda)$.

Keeping in mind the representation of the Erlang distribution, it is easy to see that

$$Y_n = X_1 + \ldots + X_n = -\frac{1}{\lambda}\sum_{i=1}^{n}\ln(1 - Y_i), \qquad Y_i \sim U(0,1),\ i = 1, \ldots, n,$$

gives the desired random number.

Example 12 Generate a hypoexponentially distributed random number.

As the hypoexponential distribution is the generalization of the Erlang distribution, the following formula results in the desired random number:

$$Y_n = X_1 + \ldots + X_n = -\frac{1}{\lambda_1}\ln(1 - Y_1) - \ldots - \frac{1}{\lambda_n}\ln(1 - Y_n).$$

Example 13 Generate a hyperexponentially distributed random number.

In the first step let us generate a uniformly distributed random number $Y$ on $[0,1]$ and then choose the index $i$ for which

$$\sum_{j=1}^{i-1} p_j < Y \le \sum_{j=1}^{i} p_j.$$

In the second step let us generate an exponentially distributed random number with parameter $\lambda_i$, as discussed earlier.

Example 14 Generate a Weibull distributed random number.

$Y = 1 - e^{-\lambda X^{\alpha}}$, thus

$$X = \left[-\frac{1}{\lambda}\ln(1 - Y)\right]^{1/\alpha}.$$

Example 15 Generate a Pareto distributed random number.

$Y = 1 - \left(\frac{k}{X}\right)^{\alpha}$, thus

$$X = k\,(1 - Y)^{-1/\alpha}.$$
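The inverse-transform formulas of Examples 14 and 15 are straightforward to implement and test against the known means (the book's applets are in Java; this seeded Python sketch with illustrative parameters is ours):

```python
import random
from math import log, gamma

random.seed(3)

def weibull_inv(u, lam, alpha):
    # invert F(x) = 1 - exp(-lam * x**alpha)
    return (-log(1 - u) / lam) ** (1 / alpha)

def pareto_inv(u, k, alpha):
    # invert F(x) = 1 - (k / x)**alpha
    return k * (1 - u) ** (-1 / alpha)

N = 200_000
w_mean = sum(weibull_inv(random.random(), 1.0, 2.0) for _ in range(N)) / N
w_theory = gamma(1 + 1 / 2.0)          # (1/lam)^(1/alpha) * Gamma(1 + 1/alpha), lam = 1
p_mean = sum(pareto_inv(random.random(), 1.0, 3.0) for _ in range(N)) / N
p_theory = 1.0 * 3.0 / (3.0 - 1.0)     # k * alpha / (alpha - 1)
```

Both empirical means land within a few thousandths of the closed-form values from the previous section.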

2.4 Random Sums

Definition 9 Let $\nu \in \{0, 1, 2, 3, \ldots\}$ be a random variable and let $\{X_i\}_{i \ge 1}$ be independent identically distributed random variables that are independent of $\nu$, too. The random variable

$$Y_\nu = X_1 + \ldots + X_\nu$$

is called a random sum.

The distribution of $Y_\nu$ can be obtained by using the theorem of total probability. Similarly, the moments of the random sum can be calculated with the help of the theorem of total moments.

Discrete case:

$$P(Y_\nu = n) = \sum_k P(Y_k = n)\, P(\nu = k), \qquad E(Y_\nu^l) = \sum_k E(Y_k^l)\, P(\nu = k).$$

Continuous case:

$$f_{Y_\nu}(x) = \sum_k f_{Y_k}(x)\, P(\nu = k), \qquad F_{Y_\nu}(x) = \sum_k F_{Y_k}(x)\, P(\nu = k),$$

$$E(Y_\nu^l) = \sum_k \int_{-\infty}^{\infty} x^l f_{Y_k}(x)\,dx\; P(\nu = k) = \sum_k E(Y_k^l)\, P(\nu = k).$$

Example 16 Let $f_{X_i}(x) = \lambda e^{-\lambda x}$, $i = 1, 2, \ldots$, and let $\nu$ be geometrically distributed with parameter $p$. Find the density function of $Y_\nu$.

Notice that $Y_k$ is Erlang distributed with parameters $(k, \lambda)$; hence, by substituting its density function, we have

$$f_{Y_\nu}(x) = \sum_{k=1}^{\infty} \lambda \frac{(\lambda x)^{k-1}}{(k-1)!}\, e^{-\lambda x}\, p(1-p)^{k-1} = \lambda p\, e^{-\lambda x} \sum_{j=0}^{\infty} \frac{(\lambda x (1-p))^j}{j!} = \lambda p\, e^{-\lambda x}\, e^{\lambda x (1-p)} = \lambda p\, e^{-\lambda p x}.$$

It means that $Y_\nu \sim Exp(\lambda p)$.

Theorem 11 (Mean of a random sum)

$$E(Y_\nu) = EX_1 \cdot E\nu.$$
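The closed form of Example 16 can be checked by simulating the random sum itself. A seeded Monte Carlo sketch (parameters illustrative, helper names ours):

```python
import random

random.seed(4)
lam, p = 2.0, 0.25   # illustrative: X_i ~ Exp(lam), nu ~ Geo(p)
N = 100_000

def geometric(p):
    # number of trials up to and including the first success
    n = 1
    while random.random() > p:
        n += 1
    return n

total = 0.0
for _ in range(N):
    nu = geometric(p)
    total += sum(random.expovariate(lam) for _ in range(nu))
mean_est = total / N

theory = 1 / (lam * p)   # Y_nu ~ Exp(lam * p) => mean 2.0 here
```

The empirical mean of the random sum matches $1/(\lambda p)$ within the Monte Carlo error, as Example 16 predicts.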

Proof: By the law of total expectation we have

E(Y_ν) = Σ_k E(Y_k) P(ν = k) = Σ_k k EX_1 P(ν = k) = EX_1 Σ_k k P(ν = k) = EX_1 Eν.

Theorem 12 Variance of a random sum:

Var(Y_ν) = Var(X_1) Eν + E²X_1 Var(ν).

Proof: By applying the theorem of total moments we get

E(Y_ν²) = Σ_k E(Y_k²) P(ν = k) = Σ_k E[(X_1 + ... + X_k)²] P(ν = k)
= Σ_k (k Var(X_1) + k² E²X_1) P(ν = k)
= Var(X_1) Σ_k k P(ν = k) + E²X_1 Σ_k k² P(ν = k)
= Var(X_1) Eν + E²X_1 E(ν²).

Thus

Var(Y_ν) = Var(X_1) Eν + E²X_1 E(ν²) − E²X_1 E²ν = Var(X_1) Eν + E²X_1 Var(ν).
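Both theorems can be checked by Monte Carlo on the geometric-exponential example above, where the two formulas give E(Y_ν) = 1/(λp) and Var(Y_ν) = 1/(λp)², as expected for Exp(λp). A minimal sketch with my own function names and illustrative parameters:

```python
import math
import random

def geometric_rv(p):
    """Number of Bernoulli(p) trials up to and including the first success."""
    n = 1
    while random.random() > p:
        n += 1
    return n

def random_sum(p, lam):
    """Y = X_1 + ... + X_nu with nu ~ Geo(p) and X_i ~ Exp(lam)."""
    return sum(-math.log(1.0 - random.random()) / lam
               for _ in range(geometric_rv(p)))

random.seed(3)
p, lam, trials = 0.5, 2.0, 200_000
ys = [random_sum(p, lam) for _ in range(trials)]
mean = sum(ys) / trials
var = sum((y - mean) ** 2 for y in ys) / trials
# E(Y)   = EX * Enu = (1/lam)*(1/p)            = 1.0
# Var(Y) = Var(X)Enu + (EX)^2 Var(nu) = 1/(lam*p)**2 = 1.0
```

With these parameters λp = 1, so both the sample mean and the sample variance should land near 1.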

Chapter 3

Analytic Tools, Transforms

The concept of a transform appears naturally in the investigation of different problems in mathematics, physics, and the engineering sciences. The main reason to introduce them is that they greatly simplify the calculations. The type of transformation depends on the problem itself, which is why a variety of transforms occurs; see, for example, the Z-transform, moment generating function, Laplace-transform, Fourier-transform, Mellin-transform, Hankel-transform, etc. Moreover, they may have different names as well, e.g., probability generating function, characteristic function. This chapter is devoted to the probability generating function and the Laplace-transform, which are closely related to discrete and continuous nonnegative random variables, respectively. Their usefulness will be illustrated by several examples. Of course, there are many books dealing with special transforms, but in this material I concentrate on our needs only, keeping in mind their applications in queueing theory. As basic sources I recommend the following books: Allen [], Kleinrock [5], Trivedi [4].

3.1 Generating Function

Definition 10 Let X be a nonnegative discrete random variable with distribution P(X = n) = p_n, n = 0, 1, 2, .... Then the generating function G_X(s) of X is defined as

G_X(s) = Σ_{k=0}^∞ s^k p_k = E(s^X).

G_X(s) is defined if the series is convergent.

Theorem 13 The generating function has the following properties:

1. G_X(1) = 1,
2. |G_X(s)| ≤ 1, if |s| ≤ 1,
3. E(X) = G′_X(1),
4. E(X²) = G″_X(1) + G′_X(1),
5. p_k = G_X^{(k)}(0)/k!, k = 0, 1, 2, ....

Proof:

1. G_X(1) = Σ_k 1^k p_k = Σ_k p_k = 1.

2. |G_X(s)| = |Σ_k s^k p_k| ≤ Σ_k |s|^k p_k ≤ Σ_k p_k = 1.

3. G′_X(s) = Σ_k (s^k)′ p_k = Σ_k k s^{k−1} p_k, thus G′_X(1) = Σ_k k p_k = EX.

4. G″_X(s) = Σ_k k(k−1) s^{k−2} p_k, thus G″_X(1) = Σ_k k² p_k − Σ_k k p_k = E(X²) − E(X).

Collecting the terms we get

Var(X) = G″_X(1) + G′_X(1) − (G′_X(1))².

Theorem 14 If X_1, ..., X_n are independent then

G_{X_1+...+X_n}(s) = Π_{i=1}^n G_{X_i}(s).

Proof: In the proof we use the theorem that if random variables are independent then the mean of their product is equal to the product of their means. Thus we can write

G_{X_1+...+X_n}(s) = E(s^{X_1+...+X_n}) = E(s^{X_1} ... s^{X_n}) = Π_{i=1}^n E(s^{X_i}) = Π_{i=1}^n G_{X_i}(s).

Theorem 15 Generating function of a random sum:

G_{Y_ν}(s) = G_ν(G_X(s)).

Proof: By the law of total expectation we get

G_{Y_ν}(s) = Σ_n E(s^{Y_n}) P(ν = n) = Σ_n (G_X(s))^n P(ν = n) = G_ν(G_X(s)).

The following two theorems play an important role in many applications and simplify the calculation of limiting distributions.

Theorem 16 (Continuity Theorem) Let X_1, ..., X_n, ... be a sequence of nonnegative, integer valued random variables. If the sequence of the corresponding distributions converges to a distribution, that is

lim_{n→∞} p_{nk} = p_k (k = 0, 1, ...), and Σ_k p_k = 1, where p_{nk} = P(X_n = k), (k = 0, 1, ..., n = 1, 2, ...),

then the corresponding generating functions of X_n converge to the generating function of {p_k} at any point of [0, 1], that is

lim_{n→∞} G_n(s) = G(s), (0 ≤ s ≤ 1),

where

G_n(s) = Σ_k p_{nk} s^k = G_{X_n}(s),    G(s) = Σ_k p_k s^k.

If the limit lim_{n→∞} p_{nk} = p_k exists, but Σ_k p_k < 1, then the convergence of the generating functions holds only in [0, 1).

Comment 1 For illustration let us see the following example. Let X_n = n, that is p_{nn} = 1 and p_{nk} = 0 if k ≠ n. Then lim_{n→∞} p_{nk} = 0, (k = 0, 1, ...). However

lim_{n→∞} G_n(s) = lim_{n→∞} s^n = 0 if 0 ≤ s < 1, = 1 if s = 1, and does not exist if s > 1.

Theorem 17 (Continuity Theorem) If the sequence of generating functions of X_n converges to a function G(s) on 0 ≤ s ≤ 1, then the sequence of the corresponding distributions of X_n converges to a probability distribution with generating function G(s).

Comment 2 If we assume that lim_{n→∞} G_n(s) = G(s) exists but only on 0 ≤ s < 1, then G(s) is not necessarily a generating function, as we see in the following example.

Example 17 If a random variable X_n takes the values 0 and n with the same probability then G_n(s) = (1 + s^n)/2 and thus lim_{n→∞} G_n(s) = 1/2 for 0 ≤ s < 1. It is easy to see that Σ_k p_k = 1 is not valid since

lim_{n→∞} p_{nk} = p_k = 1/2, if k = 0, and 0 otherwise.

Generating Function of Important Distributions

Example 18 Find the generating function of a Bernoulli distribution, then the mean and variance.

G_{χ(A)}(s) = s^0 (1−p) + s^1 p = sp + 1 − p = 1 + p(s−1),

G′_{χ(A)}(1) = p, so Eχ(A) = p, E(χ²(A)) = G″(1) + G′(1) = 0 + p, Var χ(A) = p − p² = p(1−p).

Example 19 Find the generating function of a Poisson distribution with parameter λ, and then the mean and variance.

G_X(s) = Σ_{k=0}^∞ s^k (λ^k/k!) e^{−λ} = e^{−λ} Σ_{k=0}^∞ (λs)^k/k! = e^{−λ} e^{sλ} = e^{−λ(1−s)}.

G′_X(s)|_{s=1} = e^{−λ(1−s)} λ |_{s=1} = λ,
G″_X(s)|_{s=1} = e^{−λ(1−s)} λ·λ |_{s=1} = λ²,
Var(X) = λ² + λ − λ² = λ.

Example 20 By using the generating function approach find the convolution of two Poisson distributions.

Applying the properties of the generating function it is easy to get the generating function of the sum, and by taking into account the generating function of a Poisson distribution we have

G_{X+Y}(s) = e^{−λ(1−s)} e^{−μ(1−s)} = e^{−(λ+μ)(1−s)},

which is exactly the generating function of a Poisson distribution with parameter λ + μ.

Example 21 By the help of generating functions show that B(n, p) → Po(λ) if n → ∞, p → 0 such that np → λ. Use that if a_n → A then (1 + a_n/n)^n → e^A.

We are going to show that the generating function of the binomial distribution converges to the generating function of the Poisson distribution, that is

G_{X_n}(s) = (1 − p(1−s))^n = (1 − np(1−s)/n)^n → e^{−λ(1−s)},

which is the generating function of a Poisson distribution with parameter λ.

Example 22 Let X_i ~ χ(A) be independent, ν ~ Po(λ). Find the generating function of the random sum Y_ν.

By using the relationship for the generating function of a random sum, and taking into account the form of the generating functions of the Bernoulli and Poisson distributions, after substitution we get the desired result. Therefore we can write

G_{Y_ν}(s) = G_ν(G_{X_i}(s)) = e^{−λ(1−(1+p(s−1)))} = e^{−λp(1−s)},

which shows that Y_ν ~ Po(λp).
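The thinning result of Example 22 (a Poisson number of Bernoulli indicators is again Poisson) is easy to verify by simulation. The sketch below builds a Poisson sampler from the exponential-interarrival construction of Chapter 4; all names and parameter values are my own:

```python
import math
import random

def poisson_rv(lam):
    """Po(lam): count Exp(1) interarrival times that fit into (0, lam)."""
    n, t = 0, -math.log(1.0 - random.random())
    while t < lam:
        n += 1
        t += -math.log(1.0 - random.random())
    return n

def thinned_poisson_rv(lam, p):
    """Sum of nu ~ Po(lam) Bernoulli(p) indicators; should be Po(lam*p)."""
    return sum(1 for _ in range(poisson_rv(lam)) if random.random() < p)

random.seed(4)
lam, p, trials = 5.0, 0.3, 100_000
samples = [thinned_poisson_rv(lam, p) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((x - mean) ** 2 for x in samples) / trials
# for a Po(lam*p) variable both should be near lam*p = 1.5
```

For a Poisson distribution the mean and the variance coincide, so the two sample statistics agreeing near λp is exactly the signature predicted by the generating-function argument.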

Example 23 Solve the following system of differential equations:

dP_0(t)/dt = −λP_0(t)
dP_k(t)/dt = −λP_k(t) + λP_{k−1}(t), k = 1, 2, ...

P_k(0) = 1, if k = 0, and 0, if k ≠ 0.

Multiplying both sides of the equations by the appropriate power of s we get

dP_0(t)/dt = −λP_0(t)
s dP_1(t)/dt = (−λP_1(t) + λP_0(t)) s
...
s^k dP_k(t)/dt = (−λP_k(t) + λP_{k−1}(t)) s^k = −λ s^k P_k(t) + λs s^{k−1} P_{k−1}(t).

Let us introduce the generating function G(t, s) as

G(t, s) = Σ_{k=0}^∞ s^k P_k(t).

By adding both sides we have

∂G(t, s)/∂t = Σ_k s^k dP_k(t)/dt = −λ Σ_k s^k P_k(t) + λs Σ_k s^k P_k(t) = −λ(1−s) G(t, s).

The initial condition is

G(0, s) = Σ_k s^k P_k(0) = 1.

Thus the system of differential equations reduces to a single differential equation, namely

∂G(t, s)/∂t = −λ(1−s) G(t, s), with initial condition G(0, s) = 1.

Rearranging the terms we get

(∂G(t, s)/∂t) / G(t, s) = −λ(1−s),

and the solution is

ln G(t, s) = −λt(1−s) + ln C.

Since G(t, s) = C e^{−λt(1−s)} and G(0, s) = 1, thus G(0, s) = C e^0 = C, that is, C = 1. Hence

G(t, s) = e^{−λt(1−s)},

which shows that G(t, s) is the generating function of a Poisson distribution with parameter λt. Therefore the solution of the system of differential equations is

P_k(t) = ((λt)^k/k!) e^{−λt}, k = 0, 1, ...

3.2 Laplace-Transform

Definition 11 Let X be a nonnegative random variable with density function f_X(x). The Laplace-transform L_X(s) of X is defined as

L_X(s) = ∫_0^∞ e^{−sx} f_X(x) dx = E(e^{−sX}) = f*_X(s).

Theorem 18 The Laplace-transform has the following properties:

1. L_X(0) = 1,
2. 0 ≤ L_X(s) ≤ 1, if s ≥ 0,
3. If X_1, ..., X_n are independent random variables then L_{X_1+...+X_n}(s) = Π_{i=1}^n L_{X_i}(s),
4. E(X^n) = (−1)^n L_X^{(n)}(0).

Proof:

1. L_X(0) = ∫_0^∞ e^{−0·x} f_X(x) dx = ∫_0^∞ f_X(x) dx = 1.

2. The lower bound of L_X(s) comes from the observation that e^{−sx} f_X(x) is nonnegative and thus the integral is also nonnegative. For the upper bound we have

L_X(s) = ∫_0^∞ e^{−sx} f_X(x) dx ≤ ∫_0^∞ f_X(x) dx = 1.

3. L_{X_1+...+X_n}(s) = E(e^{−s(X_1+...+X_n)}) = E(e^{−sX_1} ... e^{−sX_n}) = Π_{i=1}^n E(e^{−sX_i}) = Π_{i=1}^n L_{X_i}(s),

because if X_1, ..., X_n are independent then e^{−sX_1}, ..., e^{−sX_n} are also independent and hence the multiplicative law is valid.

4. L_X^{(n)}(0) = ∫_0^∞ (e^{−sx})^{(n)} f_X(x) dx |_{s=0} = ∫_0^∞ (−x)^n e^{−sx} f_X(x) dx |_{s=0} = (−1)^n ∫_0^∞ x^n f_X(x) dx = (−1)^n E(X^n),

hence E(X^n) = (−1)^n L_X^{(n)}(0).

The main advantage of the Laplace-transform is that it can be used to solve differential equations. It should be noted that the Laplace-transform can be applied to any (not necessarily density) function defined on the nonnegative half-line. Since in the following we would like to solve some differential equations, we need

Theorem 19 The Laplace-transform has the following properties:

1. (af(x) + bg(x))*(s) = a f*(s) + b g*(s),
2. (f′(x))*(s) = s f*(s) − f(0), if lim_{x→∞} f(x) e^{−sx} = 0.

Proof:

1. (af(x) + bg(x))*(s) = ∫_0^∞ e^{−sx}(af(x) + bg(x)) dx = a ∫_0^∞ e^{−sx} f(x) dx + b ∫_0^∞ e^{−sx} g(x) dx = a f*(s) + b g*(s).

2. Using integration by parts,

(f′(x))*(s) = ∫_0^∞ e^{−sx} f′(x) dx = [f(x)e^{−sx}]_0^∞ + s ∫_0^∞ e^{−sx} f(x) dx = s f*(s) − f(0).

Theorem 20 Laplace-transform of a random sum:

L_{Y_ν}(s) = G_ν(L_X(s)).

Proof: By the theorem of total expectation we have

L_{Y_ν}(s) = E(e^{−sY_ν}) = Σ_n E(e^{−sY_n}) P(ν = n) = Σ_n (L_X(s))^n P(ν = n) = G_ν(L_X(s)).

Due to their practical importance we state the following theorems without proof.

Theorem 21 f*(s) yields the following limits:

Initial value theorem: lim_{s→∞} s f*(s) = lim_{t→0} f(t),
Final value theorem: lim_{s→0} s f*(s) = lim_{t→∞} f(t).

Theorem 22 (Post–Widder inversion formula) If f(x) is a continuous and bounded function on (0, ∞) then

f(y) = lim_{n→∞} ((−1)^n/n!) (n/y)^{n+1} (d^n L(f)(s)/ds^n) |_{s = n/y}.

Theorem 23 (Continuity Theorem) Let X_1, ..., X_n, ... be a sequence of random variables having distribution functions F_1(x), F_2(x), ..., F_n(x), .... If lim_{n→∞} F_n(x) = F(x), where F(x) is the distribution function of some random variable X, then for the corresponding Laplace-transforms we have lim_{n→∞} E(e^{−sX_n}) = E(e^{−sX}), and conversely, if the sequence of Laplace-transforms converges to a function, then for the corresponding distribution functions we have lim_{n→∞} F_n(x) = F(x) and the limiting function is the Laplace-transform of some random variable X with distribution function F(x).

In the following let us find the Laplace-transform of some important distributions.

Example 24 Find the Laplace-transform if X ~ Exp(λ).

L_X(s) = ∫_0^∞ e^{−sx} λe^{−λx} dx = (λ/(λ+s)) ∫_0^∞ (λ+s) e^{−(λ+s)x} dx = λ/(λ+s).
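The closed form λ/(λ+s) can be checked against a direct numerical evaluation of the defining integral. The quadrature helper below is a crude trapezoidal rule of my own, meant only as a sanity check:

```python
import math

def laplace_numeric(f, s, upper=60.0, steps=60_000):
    """Trapezoidal approximation of the integral of exp(-s*x)*f(x) on (0, upper)."""
    h = upper / steps
    total = 0.5 * (f(0.0) + math.exp(-s * upper) * f(upper))
    for i in range(1, steps):
        x = i * h
        total += math.exp(-s * x) * f(x)
    return total * h

lam, s = 2.0, 1.0
approx = laplace_numeric(lambda x: lam * math.exp(-lam * x), s)
exact = lam / (lam + s)   # = 2/3
print(approx, exact)
```

The truncation at a finite upper limit is harmless here because the integrand decays like e^{−(λ+s)x}.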

Example 25 Find the Laplace-transform if X ~ Erl(n, λ).

Since X is the sum of n independent exponentially distributed random variables with the same parameter, by applying the convolution property of the Laplace-transform we have

L_X(s) = (λ/(λ+s))^n.

Example 26 Find the Laplace-transform of a hypoexponential distribution.

Since a hypoexponentially distributed random variable is the sum of independent exponentially distributed random variables with different parameters, we obtain

L_{Y_n}(s) = Π_{i=1}^n λ_i/(λ_i+s).

In the next example we show how the nth moment of an exponentially distributed random variable can be obtained in a simple way by using one of the properties of the Laplace-transform. This calculation is rather cumbersome by the density function approach.

Example 27 Using the Laplace-transform show that if X ~ Exp(λ), then E(X^n) = n!/λ^n.

E(X^n) = (−1)^n L_X^{(n)}(0) = (−1)^n (λ/(λ+s))^{(n)} |_{s=0} = (−1)^n λ ((λ+s)^{−1})^{(n)} |_{s=0}
= (−1)^n λ (−1)(−2)...(−n)(λ+s)^{−(n+1)} |_{s=0} = (−1)^{2n} λ n!/λ^{n+1} = n!/λ^n.

Example 28 Let ν be a geometrically distributed counting random variable and X_i ~ Exp(λ) the summands. Find the distribution of the random sum.

Knowing that if ν ~ Geo(p), then G_ν(z) = zp/(1 − z(1−p)), thus

L_{Y_ν}(s) = G_ν(L_X(s)) = (λ/(λ+s)) p / (1 − (λ/(λ+s))(1−p)) = λp/(λp + s).

That is exactly the Laplace-transform of Exp(λp), hence Y_ν ~ Exp(λp).
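The moment property E(X^n) = (−1)^n L^{(n)}(0) of Example 27 can also be checked numerically by differentiating L(s) = λ/(λ+s) with central finite differences at 0 (the transform is defined for s > −λ, so symmetric steps are allowed). The step size is my own choice:

```python
lam = 2.0

def L(s):
    """Laplace-transform of Exp(lam); defined for s > -lam."""
    return lam / (lam + s)

h = 1e-4
m1 = -(L(h) - L(-h)) / (2.0 * h)              # -L'(0)  ~ E(X)   = 1/lam   = 0.5
m2 = (L(h) - 2.0 * L(0.0) + L(-h)) / h ** 2   #  L''(0) ~ E(X^2) = 2/lam^2 = 0.5
```

Both finite-difference values should agree with n!/λ^n for n = 1, 2 to many digits.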

Theorem 24 The Laplace-transform of a mixture distribution is the mixture of the corresponding Laplace-transforms.

Proof: Let

f_Y(x) = Σ_i p_i f_{X_i}(x).

Then

L_Y(s) = ∫_0^∞ e^{−sx} (Σ_i p_i f_{X_i}(x)) dx = Σ_i p_i ∫_0^∞ e^{−sx} f_{X_i}(x) dx = Σ_i p_i L_{X_i}(s).

Example 29 Find the Laplace-transform of the function g(t) = ((λt)^k/k!) e^{−λt}.

g*(s) = ∫_0^∞ e^{−st} ((λt)^k/k!) e^{−λt} dt = (λ^k/k!) ∫_0^∞ t^k e^{−(s+λ)t} dt
= (λ^k/k!) (1/(λ+s)) ∫_0^∞ t^k (s+λ) e^{−(s+λ)t} dt = (λ^k/k!) (1/(λ+s)) (k!/(λ+s)^k)
= (1/(λ+s)) (λ/(λ+s))^k,

where the last integral is E(X^k) = k!/(λ+s)^k for X ~ Exp(λ+s).

Example 30 Use the Laplace-transform to solve the system of differential equations

P′_0(t) = −λP_0(t)
P′_k(t) = −λP_k(t) + λP_{k−1}(t), k = 1, 2, ...

with initial conditions

P_k(0) = 1, if k = 0, and 0, if k ≠ 0.

By taking the Laplace-transform of both sides we have

(P′_0(t))*(s) = −λ(P_0(t))*(s)
(P′_k(t))*(s) = −λ(P_k(t))*(s) + λ(P_{k−1}(t))*(s), k = 1, 2, ...

Using integration by parts we get

∫_0^∞ e^{−st} P′_k(t) dt = [e^{−st} P_k(t)]_0^∞ + s ∫_0^∞ e^{−st} P_k(t) dt.

Assuming that P_k(t) is bounded, that is P_k(t) < K, then

[e^{−st} P_k(t)]_0^∞ = −P_k(0).

Consequently

(P′_k(t))*(s) = −P_k(0) + s P*_k(s).

Using the initial conditions we obtain

(P′_0)*(s) = −1 + s P*_0(s) and (P′_k)*(s) = s P*_k(s) for k ≥ 1.

After substitution we get

−1 + s P*_0(s) = −λ P*_0(s), thus P*_0(s) = 1/(λ+s).

Furthermore

s P*_k(s) = −λ P*_k(s) + λ P*_{k−1}(s), hence P*_k(s) = (λ/(λ+s)) P*_{k−1}(s),

and thus it is easy to see that

P*_k(s) = (1/(λ+s)) (λ/(λ+s))^k.

Keeping in mind the previous example finally we get the solution as

P_k(t) = ((λt)^k/k!) e^{−λt}, k = 0, 1, 2, ...


Chapter 4

Stochastic Systems

This chapter is devoted to the stochastic modeling of dynamic systems. The most well-known stochastic process, the Poisson process, is introduced, and its relationship to other distributions is shown. Simple stochastic systems are investigated, serving as building blocks for more complex ones. Different methods and approaches are used to get the main performance measures of these systems. A variety of examples helps the reader to understand the topic. The material is based mainly on the following books: Allen [], Ovcharov [7], Trivedi [4].

4.1 Poisson Process

Definition 12 Let τ_1, τ_2, ... be nonnegative, independent and identically distributed random variables. The random variable counting the number of events until time t, that is

ν(t) = max{n : τ_1 + ... + τ_n < t},

is called a renewal process, and its mean m(t) = Eν(t) is referred to as the renewal function.

Theorem 25 If τ_1, τ_2, ... are exponentially distributed with parameter λ then

P(ν(t) = k) = ((λt)^k/k!) e^{−λt}.

Proof: It can be seen from the construction that S_n = Σ_{i=1}^n τ_i is Erlang distributed with parameters (n, λ), thus its distribution function is

P(S_n < x) = F_{S_n}(x) = 1 − Σ_{i=0}^{n−1} ((λx)^i/i!) e^{−λx}.

In the proof we use that if an event A implies event B, denoted as A ⊂ B in probability theory, then P(B \ A) = P(B) − P(A). Clearly, in our case event A can be defined as {S_{n+1} < t} and event B as {S_n < t}.

It can easily be seen that exactly k events occur if and only if S_k < t and S_{k+1} ≥ t. Hence

P(ν(t) = k) = P(S_k < t, S_{k+1} ≥ t) = F_{S_k}(t) − F_{S_{k+1}}(t)
= (1 − Σ_{i=0}^{k−1} ((λt)^i/i!) e^{−λt}) − (1 − Σ_{i=0}^{k} ((λt)^i/i!) e^{−λt})
= ((λt)^k/k!) e^{−λt}.

As we can see, the number of events that happened until time t is Poisson distributed with parameter λt, and ν(t) is called a Poisson process with rate λ. It is not difficult to verify that

1. P(ν(h) = 0) = e^{−λh} = 1 − λh + o(h),
2. P(ν(h) = 1) = λh e^{−λh} = λh(1 − λh + o(h)) = λh − (λh)² + λh·o(h) = λh + o(h),
3. P(ν(h) ≥ 2) = 1 − [(1 − λh + o_1(h)) + λh + o_2(h)] = o(h).

Definition 13 Rarity condition:

lim_{h→0} P(ν(h) ≥ 2)/P(ν(h) = 1) = lim_{h→0} o(h)/(λh + o(h)) = lim_{h→0} (o(h)/h)/(λ + o(h)/h) = 0.

Let ν(t, t+h) denote the number of events (number of renewals) that occur in the interval (t, t+h). Due to the construction of the Poisson process and the memoryless property of the exponential distribution one can easily see that the distribution of ν(t, t+h) depends only on h, irrespective of the position of the interval (time-homogeneity). In addition, the numbers of renewals that happen during non-intersecting intervals are independent random variables (independent increments).

The Poisson process has been introduced as a counting process with probability distribution P_k(t) for the number of arrivals during a given interval of length t, namely we have

P(ν(t) = k) = ((λt)^k/k!) e^{−λt}.

Let us investigate the joint distribution of the arrival epochs during a given time interval of length t when it is known in advance that exactly k arrivals have occurred during that interval. Let us divide the interval (0, t) into 2k + 1 nonoverlapping intervals in the following way. The interval of length α_i always precedes the interval of length β_i, (i = 1, ..., k), the last interval is of length α_{k+1}, and in addition

Σ_{i=1}^{k+1} α_i + Σ_{i=1}^{k} β_i = t.

Let A_k denote the event that exactly one arrival occurs in each of the intervals β_i, (i = 1, 2, ..., k), and that no arrival occurs in any of the intervals α_i, (i = 1, 2, ..., k+1).

We would like to calculate the probability of event A_k given that exactly k arrivals have occurred in the interval (0, t). By the definition of the conditional probability we have

P(A_k | exactly k arrivals in (0, t)) = P(A_k and exactly k arrivals in (0, t)) / P(exactly k arrivals in (0, t)).

When the numbers of arrivals of a Poisson process during nonoverlapping intervals are considered, they can be viewed as independent random variables with Poisson distribution. Thus the probability of the joint event may be calculated as the product of the individual probabilities (the Poisson process has independent increments). Therefore

P(one arrival in an interval of length β_i) = λβ_i e^{−λβ_i},
P(no arrival in an interval of length α_i) = e^{−λα_i}.

By using these probabilities we immediately get

P(A_k | exactly k arrivals in (0, t)) = (λβ_1 λβ_2 ... λβ_k e^{−λβ_1} ... e^{−λβ_k})(e^{−λα_1} ... e^{−λα_{k+1}}) / (((λt)^k/k!) e^{−λt}) = β_1 β_2 ... β_k k!/t^k.

On the other hand, let us consider another process that selects k points in the interval (0, t) independently, where each point has uniform distribution over this interval. It can easily be verified that

P(A_k | exactly k arrivals in (0, t)) = (β_1/t)(β_2/t)...(β_k/t) k!,

where the term k! comes about because the permutations of the k points are not distinguished. Since these two conditional distributions are the same we can conclude that if in an interval of length t there are exactly k arrivals from a Poisson process, then the joint distribution of the moments when these arrivals have occurred is the same as the distribution of k points independently and uniformly distributed over the same interval.

Example 31 What is the rate (intensity) of the renewals?

lim_{t→∞} E(ν(t))/t = lim_{t→∞} λt/t = λ = 1/Eτ.
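The conditional-uniformity conclusion can be illustrated by simulation: conditioning a rate-1 Poisson process on exactly two arrivals in (0, 1), the sorted epochs should behave like the order statistics of two independent uniforms, whose means are 1/3 and 2/3. A minimal sketch with my own names and sample sizes:

```python
import math
import random

def poisson_epochs(lam, t):
    """Arrival epochs of a rate-lam Poisson process on (0, t)."""
    epochs, s = [], -math.log(1.0 - random.random()) / lam
    while s < t:
        epochs.append(s)
        s += -math.log(1.0 - random.random()) / lam
    return epochs

random.seed(5)
first, second = [], []
for _ in range(300_000):
    e = poisson_epochs(1.0, 1.0)
    if len(e) == 2:
        first.append(e[0])
        second.append(e[1])

m1 = sum(first) / len(first)     # order statistics of two U(0,1): E = 1/3
m2 = sum(second) / len(second)   #                                 E = 2/3
```

Roughly e^{−1}/2 ≈ 18% of the runs contain exactly two arrivals, which leaves plenty of conditioned samples for the two averages.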

Definition 14 (Stochastic convergence, convergence in probability) A sequence of random variables X_n is said to converge in probability to a random variable X if, for any ε > 0,

lim_{n→∞} P(|X_n − X| ≥ ε) = 0.

Theorem 26 ν(t)/t converges in probability to λ.

Proof: By applying the Chebyshev inequality and observing that

E(ν(t)/t) = λt/t = λ,    Var(ν(t)/t) = λt/t² = λ/t,

we get

P(|ν(t)/t − λ| ≥ ε) ≤ λ/(tε²),

which implies that

lim_{t→∞} P(|ν(t)/t − λ| ≥ ε) = 0.

Definition 15 The rate of renewals is defined as lim_{t→∞} m(t)/t.

Theorem 27 (The Elementary Renewal Theorem)

lim_{t→∞} m(t)/t = 1/Eτ.

Derivation of the system of differential equations for the Poisson process

Let P_k(t) denote the probability that until time t exactly k events occurred. Due to the properties of the Poisson process the following system of equations can be written:

P_0(t+h) = P_0(t)(1 − λh + o(h))
P_1(t+h) = P_1(t)(1 − λh + o(h)) + P_0(t)(λh + o(h))
P_k(t+h) = P_k(t)(1 − λh + o(h)) + P_{k−1}(t)(λh + o(h)) + Σ_{j=2}^{k} P_{k−j}(t) P(j events happened during h), where the last sum is o(h),
P_0(0) = 1.

The first equation can be rewritten as

P_0(t+h) − P_0(t) = −λh P_0(t) + o(h),

which implies

P′_0(t) = −λP_0(t).

Similarly, the other equations imply the following system of differential equations

P′_0(t) = −λP_0(t)
P′_k(t) = −λP_k(t) + λP_{k−1}(t), k = 1, 2, ...

with initial condition P_0(0) = 1. Notice that we have solved this system in Examples 23 and 30 and the solution is the Poisson distribution, that is

P_k(t) = ((λt)^k/k!) e^{−λt}, k = 0, 1, ...

4.2 Performance Analysis of Some Simple Systems

In this section several simple systems are investigated with the aim that, by understanding their performance, more complicated ones could be analyzed.

Example 32 Let us consider a component having two states (0 if it is operating, 1 if it is failed) and let us suppose that at time 0 it is operating. Find the probability that at time t it is failed, assuming that the operating times are exponentially distributed random variables with parameter λ and they are independent of the repair times, which are exponentially distributed random variables with parameter μ.

To formulate the problem in mathematical terms let us introduce the following notation. Let

X(t) = 0, if at time t the component is operating, and 1, if at time t the component is failed,

and let its distribution be denoted by P_i(t) = P(X(t) = i), i = 0, 1. Then by the help of the theorem of total probability and the memoryless property of the exponential distribution we get

P_0(t+h) = P_0(t)(1 − λh + o(h)) + P_1(t)(μh + o(h)) + o(h),
P_0(t) + P_1(t) = 1, which is called the normalizing condition,
P_0(0) = 1, which is the initial condition.

It is easy to see that after substitution and rearranging the terms we have

P_0(t+h) − P_0(t) = −λh P_0(t) + (1 − P_0(t)) μh + o(h) = −(λ+μ)h P_0(t) + μh + o(h).

Hence

lim_{h→0} (P_0(t+h) − P_0(t))/h = −(λ+μ)P_0(t) + μ,

and the calculations have reduced to

P′_0(t) + (λ+μ)P_0(t) = μ, P_0(0) = 1,

which is a first-order inhomogeneous linear differential equation with constant coefficients. Its solution can be obtained in different ways. In the next part we show how to get the solution by applying the Laplace-transform method. In the meantime we shall use the following property of the Laplace-transform:

(P′_0(t))*(s) = −P_0(0) + s P*_0(s),

and that the Laplace-transform of a constant c is c/s. Taking the Laplace-transform of both sides and keeping in mind these properties, the transformed differential equation can be written as

−1 + s P*_0(s) + (λ+μ)P*_0(s) = μ/s.

By using the method of partial fractions we get

P*_0(s) = (μ + s)/(s(s+λ+μ)) = A/s + B/(s+λ+μ) = ((A+B)s + A(λ+μ))/(s(s+λ+μ)),

resulting in

A + B = 1, A(λ+μ) = μ, thus A = μ/(λ+μ), B = λ/(λ+μ).

Therefore

P*_0(s) = (μ/(λ+μ))/s + (λ/(λ+μ))/(s+λ+μ).

Knowing that

(δe^{−δt})*(s) = δ/(s+δ), (e^{−δt})*(s) = 1/(s+δ),

inverting the terms we obtain the solution, namely

P_0(t) = μ/(λ+μ) + (λ/(λ+μ)) e^{−(λ+μ)t},

P_1(t) = λ/(λ+μ) − (λ/(λ+μ)) e^{−(λ+μ)t}.

If the initial condition is P_1(0) = 1, then the solution is

P_0(t) = μ/(λ+μ) − (μ/(λ+μ)) e^{−(λ+μ)t},
P_1(t) = λ/(λ+μ) + (μ/(λ+μ)) e^{−(λ+μ)t}.

Steady-state (stationary) distribution

Let P_i = lim_{t→∞} P_i(t), i = 0, 1. Then taking the limits in the corresponding equations it is easy to see that the solution is

at initial condition P_0(0) = 1: P_0 = μ/(λ+μ), P_1 = λ/(λ+μ),
at initial condition P_1(0) = 1: P_0 = μ/(λ+μ), P_1 = λ/(λ+μ).

Notice that we have the same distribution, that is, the initial condition has no effect on the limiting distribution.

Example 33 Find P_0, P_1 by the help of the steady-state balance equations.

Since in steady state the functions do not depend on time, their derivatives are zero. Hence from the corresponding differential equation and the normalizing condition we have

P_0 = μ/(λ+μ), P_1 = λ/(λ+μ).

Example 34 Find P_0 by the help of the expectations of the operating and repair times.

Let Y_i and X_i denote the operating times and the repair times of the component, respectively. Let us assume that all these times are independent of each other. As time goes on, the states of the component alternate, and Y_i + X_i creates a so-called cycle; the cycles are independent of each other. It can be proved that the stationary probability that the component is operating is the ratio of the mean operating time and the mean

cycle time. In the case of exponentially distributed times

P_0 = EY/(EY + EX) = (1/λ)/((1/λ) + (1/μ)) = μ/(λ+μ),

which is the same as we had in the previous Example.

In reliability theory the distribution of the time to the first system failure plays a very important role. Obviously, this distribution depends on the initial condition of the system. The aim of the next Example is to illustrate this topic.

Example 35 Let us consider a system consisting of two components having exponentially distributed operating times with parameter λ. A failed component is maintained by a single repairman and the repair times are supposed to be exponentially distributed random variables with parameter μ. If both components are failed the system is said to be failed and the whole operation stops. Assuming that at the beginning both components are operating and they are independent of each other, find the mean time to the first system failure.

As in the previous Example let i denote the number of failed components, i = 0, 1, 2. Since the process stops when it enters state 2, state 2 is an absorbing state. It is not difficult to see that the transition rates between the states can be illustrated as follows

Figure 4.1: Transition rates in Example 35

It can easily be verified that for the distribution of the system the following differential equations with initial condition can be written:

P′_0(t) = −2λP_0(t) + μP_1(t)
P′_1(t) = −(λ+μ)P_1(t) + 2λP_0(t)
P′_2(t) = λP_1(t)
P_0(0) = 1.

It is enough to solve for P_0(t) and P_1(t) since P_2(t) = 1 − (P_0(t) + P_1(t)).

Taking the Laplace-transform of both sides and using the initial condition we have

s P*_0(s) − 1 = −2λP*_0(s) + μP*_1(s)
s P*_1(s) = −(λ+μ)P*_1(s) + 2λP*_0(s).

From the second equation

P*_1(s) = (2λ/(s+λ+μ)) P*_0(s),

and substituting it into the first one

(s + 2λ)P*_0(s) − (2λμ/(s+λ+μ)) P*_0(s) = 1,
[(2λ + s)(s + λ + μ) − 2λμ]P*_0(s) = s + λ + μ.

Thus

P*_0(s) = (s+λ+μ)/(s² + (3λ+μ)s + 2λ²),
P*_1(s) = 2λ/(s² + (3λ+μ)s + 2λ²).

Distribution of the time to the first system failure

Let Y denote the time to the first system failure. It is easy to see that

P(Y < t) = P_2(t) = 1 − (P_0(t) + P_1(t)).

Thus

E(Y) = ∫_0^∞ P(Y > t) dt = ∫_0^∞ (P_0(t) + P_1(t)) dt = P*_0(0) + P*_1(0),

since P*_i(0) = ∫_0^∞ P_i(t) dt. Therefore

E(Y) = (λ+μ)/(2λ²) + 2λ/(2λ²) = (3λ+μ)/(2λ²) = 3/(2λ) + μ/(2λ²).

In particular, if μ = 0, which means that there is no repair, this formula simplifies to

E(Y) = 3/(2λ) = 1/(2λ) + 1/λ,

as we could see in the case of a parallel system. Of course, the density function of the time to the first system failure can also be obtained this way. Let us see how to get it.

Determination of the density function f_Y(t)

To get f_Y(t) let us notice that f_Y(t) = P′_2(t). Using the properties of the Laplace-transform this can be written as

f*_Y(s) = (P′_2)*(s) = s P*_2(s) − P_2(0)

58 ( ) sp2 (s) s( P (t) P (t)) (s) s s P (s) P (s). Alternatively, at balance equation taking the Laplace-transform we have f Y (s) λp (s) P 2(t) λp (t) 2λ 2 s 2 + (3λ + µ)s + 2λ 2 2λ2 α α 2 ( ) s + α 2 s + α where Thus Therefore E(Y ) s 2 + (3λ + µ)s + 2λ 2 (s + α )(s + α 2 ) α,2 (3λ + µ) ± λ 2 + 6λµ + µ 2. 2 f Y (t) yf Y (y) dy 2λ2 α α 2 (e α 2t e α t ). 2λ2 α α 2 [ α 2 2 α 2 ] 2λ2 (α + α 2 ) (α α 2 ) 2 2λ2 (3λ + µ) (2λ 2 ) 2 3 2λ + µ 2λ 2. Example 36 Modify the initial condition and let us assume that the system operation start with operating component. Find the mean time to the first system failure. Let us notice that we have the same system of differential equations just the initial condition has changed. That is P (t) 2λP (t) + µp (t) P (t) (λ + µ)p (t) + 2λP (t) P 2(t) λp (t) P (). Similarly to the previous solution we have sp (s) 2λP (s) + µp (s) sp (s) (λ + µ)p (s) + 2λP (s) P (s) s + 2λ µ P (s) 58

and substituting it into the second one

[(s + λ + μ) − 2λμ/(s+2λ)]P*_1(s) = 1,
[(s+λ+μ)(s+2λ) − 2λμ]P*_1(s) = s + 2λ.

Thus

P*_1(s) = (s+2λ)/(s² + (3λ+μ)s + 2λ²),
P*_0(s) = μ/(s² + (3λ+μ)s + 2λ²).

Hence the mean is

E(Y) = P*_0(0) + P*_1(0) = μ/(2λ²) + 2λ/(2λ²) = (2λ+μ)/(2λ²).

In particular, in the case of a non-maintained system, that is when μ = 0, it reduces to E(Y) = 1/λ, which shows that our calculation is correct.

Let E(Y_i) denote the mean time to the first system failure with initial state i. On the basis of the previous calculations we have

E(Y_0) = (3λ+μ)/(2λ²),    E(Y_1) = (2λ+μ)/(2λ²).

It is easy to see that E(Y_0) > E(Y_1), since

E(Y_0) − E(Y_1) = (3λ+μ)/(2λ²) − (2λ+μ)/(2λ²) = 1/(2λ) > 0,

which was expected: starting from state 0 the process must first spend a mean time of 1/(2λ) before reaching state 1.

Example 37 Find the steady-state probability that k components are operating in a system consisting of n independent components.

The key to this problem is the binomial distribution with parameters (n, μ/(λ+μ)), since in steady state the probability that a given component is operating is μ/(λ+μ). Since we have n independent components, the probability in question is

P_k = C(n, k) (μ/(λ+μ))^k (λ/(λ+μ))^{n−k}.
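The binomial steady-state distribution of Example 37, together with the parallel-system availability used below, can be written out directly; the parameter values here are illustrative choices of my own:

```python
from math import comb

def steady_state_operating(n, k, lam, mu):
    """P(exactly k of n independent components operating) in steady state."""
    a = mu / (lam + mu)          # P(one component is operating)
    return comb(n, k) * a ** k * (1.0 - a) ** (n - k)

n, lam, mu = 3, 1.0, 4.0
dist = [steady_state_operating(n, k, lam, mu) for k in range(n + 1)]
availability = 1.0 - (lam / (lam + mu)) ** n   # parallel system: up unless all fail
```

Note that `dist[0]`, the probability that no component operates, is exactly (λ/(λ+μ))^n, so the availability equals 1 − `dist[0]`.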

Mean operating time of a parallel system

Let A denote the steady-state availability coefficient of a system, that is, the steady-state probability that the system is operating. As we have seen, a parallel system is operating if there exists at least one operating component. In other words, it is failed if all the components are failed. In the case of a system consisting of n independent components having exponentially distributed operating times and repair times with parameters λ and μ, respectively, this probability is given by (λ/(λ+μ))^n. Thus

A = 1 − (λ/(λ+μ))^n.

Let E(S) denote the mean sojourn time of the system in the failed state and let E(O) denote the mean operating time of the system. Then

A = E(O)/(E(O) + E(S)).

Therefore A(E(O) + E(S)) = E(O), and

E(O) = (A/(1−A)) E(S).

In the case of a parallel system it has the form of

E(O) = [1 − (λ/(λ+μ))^n]/(λ/(λ+μ))^n · 1/(nμ).

The term 1/(nμ) is the mean time while all the components are failed. Since this time is the minimum of the repair times, it is exponentially distributed with parameter nμ because the repair times are exponentially distributed with parameter μ.

Example 38 Let us consider a system with two components and two repairmen. Let i denote the number of failed components. Assuming independent exponentially distributed operating and repair times, derive the corresponding set of differential equations.

Similarly as we have done earlier, it is easy to see that the transition rates can be written as illustrated, and hence the differential equations can easily be derived in the usual way, that is, we have

P′_0(t) = −2λP_0(t) + μP_1(t)
P′_1(t) = −(λ+μ)P_1(t) + 2λP_0(t) + 2μP_2(t)
P′_2(t) = −2μP_2(t) + λP_1(t)
P_0(t) + P_1(t) + P_2(t) = 1
P_0(0) = 1.

Figure 4.2: 2 components, 2 repairmen

Example 39 Let us consider the previous Example with a single repairman. Find the steady-state distribution, the mean operating time of the system, and the mean busy period of the repairman.

Of course we have to modify the repair rates, as illustrated, but after that the steady-state balance equations can easily be derived in the usual way as follows

Figure 4.3: 2 components, 1 repairman

2λP_0 = μP_1
(λ+μ)P_1 = 2λP_0 + μP_2
μP_2 = λP_1
P_0 + P_1 + P_2 = 1.

It is easy to verify that the solution is

P_1 = (2λ/μ) P_0,    P_2 = (λ/μ) P_1 = (2λ²/μ²) P_0,    P_0 = 1/(1 + 2λ/μ + 2λ²/μ²).

For the mean operating time we have the general formula, namely

E(O) = (A/(1−A)) E(S).

Since we have only a single repairman and the repair time is exponentially distributed, E(S) = 1/μ. The availability coefficient is A = 1 − P_2. To answer the third question let us introduce the following notation. Let E(i) be the mean idle time and E(δ) the mean busy period of the server. Since

P_0 = E(i)/(E(i) + E(δ)),

thus

E(δ) = ((1−P_0)/P_0) E(i).

This time E(i) = 1/(2λ), and in the case of n components it is E(i) = 1/(nλ), since the idle time is the minimum of the operating times of the components.

Example 40 Compare the mean operating times E(O_1) and E(O_2) of the systems with 1 and 2 repairmen.

In the case of two repairmen

E(O_2) = [1 − (λ/(λ+μ))²]/(λ/(λ+μ))² · 1/(2μ) = ((2λμ + μ²)/λ²) · 1/(2μ) = (2λ+μ)/(2λ²) = μ/(2λ²) + 1/λ.

As we have shown in Example 35, the mean time to the first system failure starting with two operating components is

T = (3λ+μ)/(2λ²).

It is easy to see that T > E(O_2), since T − E(O_2) = 1/(2λ) > 0.

In the case of a single repairman

E(O_1) = ((1−P_2)/P_2) · 1/μ, where P_2 = (2λ²/μ²) P_0, P_0 = μ²/(μ² + 2λμ + 2λ²).

Thus

P_2 = 2λ²/(μ² + 2λμ + 2λ²),

E(O_1) = ((μ² + 2λμ)/(2λ²)) · 1/μ = (2λ+μ)/(2λ²) = μ/(2λ²) + 1/λ.

Surprisingly, there is no difference between the two cases. Of course the steady-state distributions are different, but nevertheless the mean value is the same.

So far we have investigated systems with homogeneous components. In the next section we deal with systems with heterogeneous components, resulting in more complicated sets of differential equations and formulas.

Example 41 Let us consider a system with heterogeneous components and with 2 repairmen. The ith component has exponentially distributed operating times and repair times with parameters λ_i and μ_i, respectively, i = 1, 2. Assuming that the involved random variables are independent of each other, find the transient distribution of the system starting with 2 operating components. Furthermore, in steady state compute the mean operating time of this parallel system. What is the mean operating time without repair?

To describe the behavior of the system we need more sophisticated notation since we have to keep in mind the heterogeneity of the components. Thus let us denote by 0 the state when both components are operating, by 1 the state when the component with index 1 is failed, by 2 the state when the component with index 2 is failed, and finally by (1, 2) the state when both components are failed. The transition rates are illustrated in the following Figure, showing a more complicated situation. By the help of these rates the corresponding differential equations can be written in the traditional way.

Figure 4.4: Heterogeneous case with 2 repairmen

64 Transient distribution

P'_0(t) = −(λ_1 + λ_2)P_0(t) + μ_1 P_1(t) + μ_2 P_2(t),
P'_1(t) = −(λ_2 + μ_1)P_1(t) + λ_1 P_0(t) + μ_2 P_{1,2}(t),
P'_2(t) = −(λ_1 + μ_2)P_2(t) + λ_2 P_0(t) + μ_1 P_{1,2}(t),
P'_{1,2}(t) = −(μ_1 + μ_2)P_{1,2}(t) + λ_2 P_1(t) + λ_1 P_2(t),
P_0(0) = 1.

After elementary calculations we can verify that the solution is

P_0(t) = (μ_1/(λ_1+μ_1) + λ_1/(λ_1+μ_1) e^{−(λ_1+μ_1)t}) (μ_2/(λ_2+μ_2) + λ_2/(λ_2+μ_2) e^{−(λ_2+μ_2)t}),
P_1(t) = (λ_1/(λ_1+μ_1) − λ_1/(λ_1+μ_1) e^{−(λ_1+μ_1)t}) (μ_2/(λ_2+μ_2) + λ_2/(λ_2+μ_2) e^{−(λ_2+μ_2)t}),
P_2(t) = (μ_1/(λ_1+μ_1) + λ_1/(λ_1+μ_1) e^{−(λ_1+μ_1)t}) (λ_2/(λ_2+μ_2) − λ_2/(λ_2+μ_2) e^{−(λ_2+μ_2)t}),
P_{1,2}(t) = (λ_1/(λ_1+μ_1) − λ_1/(λ_1+μ_1) e^{−(λ_1+μ_1)t}) (λ_2/(λ_2+μ_2) − λ_2/(λ_2+μ_2) e^{−(λ_2+μ_2)t}).

Further performance measures are

R(t) = 1 − P_{1,2}(t), A = 1 − P_{1,2}, E(O) = (1 − P_{1,2})/P_{1,2} · 1/(μ_1 + μ_2).

Steady-state distribution

Let Q_i = P(i components are failed). As we have seen earlier,

P(ith component is operating) = μ_i/(λ_i + μ_i).

Thus it is easy to see that

Q_0 = μ_1/(λ_1+μ_1) · μ_2/(λ_2+μ_2),
Q_1 = λ_1/(λ_1+μ_1) · μ_2/(λ_2+μ_2) + μ_1/(λ_1+μ_1) · λ_2/(λ_2+μ_2),
Q_2 = λ_1/(λ_1+μ_1) · λ_2/(λ_2+μ_2).

Therefore the mean operating time of the system is

E(O) = (1 − Q_2)/Q_2 · E(S) = ((λ_1+μ_1)(λ_2+μ_2) − λ_1λ_2)/(λ_1λ_2) · 1/(μ_1+μ_2) = (λ_1μ_2 + μ_1λ_2 + μ_1μ_2)/(λ_1λ_2(μ_1+μ_2)).
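One way to convince ourselves that the product-form transient solution really solves the differential equations of this example is to integrate the Kolmogorov equations numerically and compare. A rough Euler-scheme sketch (the rates are arbitrary test values):

```python
import math

# Euler integration of the four Kolmogorov equations of the
# heterogeneous 2-repairmen model vs. the product-form solution.
l1, m1 = 0.5, 1.0   # arbitrary rates for component 1
l2, m2 = 0.8, 1.5   # arbitrary rates for component 2

def p_up(lam, mu, t):
    # probability that a component (initially up) is operating at time t
    r = lam + mu
    return mu / r + lam / r * math.exp(-r * t)

P = [1.0, 0.0, 0.0, 0.0]           # states: 0, 1, 2, (1,2)
h, T = 1e-5, 2.0
for _ in range(int(T / h)):
    d0 = -(l1 + l2) * P[0] + m1 * P[1] + m2 * P[2]
    d1 = -(l2 + m1) * P[1] + l1 * P[0] + m2 * P[3]
    d2 = -(l1 + m2) * P[2] + l2 * P[0] + m1 * P[3]
    d3 = -(m1 + m2) * P[3] + l2 * P[1] + l1 * P[2]
    P = [P[0] + h * d0, P[1] + h * d1, P[2] + h * d2, P[3] + h * d3]

u1, u2 = p_up(l1, m1, T), p_up(l2, m2, T)
closed = [u1 * u2, (1 - u1) * u2, u1 * (1 - u2), (1 - u1) * (1 - u2)]
err = max(abs(a - b) for a, b in zip(P, closed))
```

The agreement (up to the discretization error of the Euler step) also illustrates why the solution factorizes: with two repairmen the components evolve independently.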

65 Example 42 Let us consider the previous system with a single repairman. Find the steady-state performance measures under different service disciplines. Assuming that at the beginning both components are operating, find the mean time to the first system failure in the case of a parallel system.

First-In First-Out (FIFO) discipline

As usual, first we have to introduce the states of the system, keeping in mind the order of arrivals of the failed components. Thus

0 - there is no failed component
1 - the component with index 1 has failed
2 - the component with index 2 has failed
(1, 2) - both components have failed, but the component with index 1 arrived first
(2, 1) - both components have failed, but the component with index 2 arrived first

The transition rates are illustrated in the following Figure. The set of steady-state balance equations can be written as usual.

Figure 4.5: FIFO discipline

66 The equations and the normalizing condition are

(λ_1 + λ_2)P_0 = μ_1 P_1 + μ_2 P_2,
(μ_1 + λ_2)P_1 = λ_1 P_0 + μ_2 P_{2,1},
(μ_2 + λ_1)P_2 = λ_2 P_0 + μ_1 P_{1,2},
μ_1 P_{1,2} = λ_2 P_1,
μ_2 P_{2,1} = λ_1 P_2,
1 = P_0 + P_1 + P_2 + P_{1,2} + P_{2,1}.

After these, by elementary but lengthy calculations one can verify that the solution is

P_1 = λ_1(λ_1 + λ_2 + μ_2) / (λ_2μ_2 + μ_1(λ_1 + μ_2)) P_0,
P_2 = λ_2(λ_2 + λ_1 + μ_1) / (λ_1μ_1 + μ_2(λ_2 + μ_1)) P_0,
P_{1,2} = (λ_2/μ_1) P_1,
P_{2,1} = (λ_1/μ_2) P_2,
P_0 = (1 + (1 + λ_2/μ_1) · λ_1(λ_1 + λ_2 + μ_2)/(λ_2μ_2 + μ_1(λ_1 + μ_2)) + (1 + λ_1/μ_2) · λ_2(λ_2 + λ_1 + μ_1)/(λ_1μ_1 + μ_2(λ_2 + μ_1)))^{−1}.

Hence the distribution of the number of failed components can be computed as

Q_0 = P_0, Q_1 = P_1 + P_2, Q_2 = P_{1,2} + P_{2,1}.

It is easy to see that the main performance measures are

E(δ) = (1 − P_0)/P_0 · 1/(λ_1 + λ_2),
E(O) = (1 − Q_2)/Q_2 · (μ_1 P_{1,2}/Q_2 + μ_2 P_{2,1}/Q_2)^{−1}.

Processor Sharing (PS) discipline

Under this discipline the order of arrivals is not significant, and that is why the state in which both components are failed is denoted simply by (1, 2). However, in this state each repair intensity is halved. The transition rates are illustrated in the following Figure.

67 Figure 4.6: Processor Sharing discipline

The steady-state balance equations and the normalizing condition can be written as

(λ_1 + λ_2)P_0 = μ_1 P_1 + μ_2 P_2,
(λ_2 + μ_1)P_1 = λ_1 P_0 + (μ_2/2) P_{1,2},
(λ_1 + μ_2)P_2 = λ_2 P_0 + (μ_1/2) P_{1,2},
((μ_1 + μ_2)/2) P_{1,2} = λ_2 P_1 + λ_1 P_2,
1 = P_0 + P_1 + P_2 + P_{1,2}.

The solution is much simpler, namely

P_1 = (λ_1/μ_1) P_0, P_2 = (λ_2/μ_2) P_0, P_{1,2} = (2λ_1λ_2/(μ_1μ_2)) P_0,
P_0 = (1 + λ_1/μ_1 + λ_2/μ_2 + 2λ_1λ_2/(μ_1μ_2))^{−1},

and the distribution of the number of failed components is Q_0 = P_0, Q_1 = P_1 + P_2, Q_2 = P_{1,2}. The main performance measures are

E(δ) = (1 − P_0)/P_0 · 1/(λ_1 + λ_2), E(O) = (1 − Q_2)/Q_2 · 2/(μ_1 + μ_2).

Preemptive Priority discipline

Under this discipline the component with index 1 has preemptive priority over the component with index 2. This means that if component 1 fails while component 2 is under repair, the repair of component 2 stops and the repair of component 1 starts immediately. In other words, the repair of component 2 can be carried out only when component 1 is operating. It is

68 Figure 4.7: Preemptive Priority discipline

easy to see that the states remain the same, but the transition rates are different, as illustrated in the Figure. For the steady-state balance equations we have

(λ_1 + λ_2)P_0 = μ_1 P_1 + μ_2 P_2,
(λ_2 + μ_1)P_1 = λ_1 P_0,
(λ_1 + μ_2)P_2 = λ_2 P_0 + μ_1 P_{1,2},
μ_1 P_{1,2} = λ_2 P_1 + λ_1 P_2,
1 = P_0 + P_1 + P_2 + P_{1,2}.

The distribution of the number of failed components can be obtained as

Q_0 = P_0, Q_1 = P_1 + P_2, Q_2 = P_{1,2}.

It is not too difficult to verify that the solution is of the form

P_0 = (1 + λ_1/(μ_1 + λ_2) + (λ_2/μ_2) · (λ_1 + λ_2 + μ_1)/(λ_2 + μ_1) + (λ_1λ_2/(μ_1μ_2)) · (λ_1 + λ_2 + μ_1 + μ_2)/(λ_2 + μ_1))^{−1},

P_1 = λ_1/(μ_1 + λ_2) P_0,
P_2 = (λ_2/μ_2) · (λ_1 + λ_2 + μ_1)/(λ_2 + μ_1) P_0,
P_{1,2} = (λ_1λ_2/(μ_1μ_2)) · (λ_1 + λ_2 + μ_1 + μ_2)/(λ_2 + μ_1) P_0.

For the performance measures we get

E(δ) = (1 − P_0)/P_0 · 1/(λ_1 + λ_2), E(O) = (1 − Q_2)/Q_2 · 1/μ_1.
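Balance equations of this kind form a small linear system, so the steady-state probabilities can also be obtained numerically. A sketch for the FIFO case (rates chosen arbitrarily); one balance equation is replaced by the normalizing condition, and the omitted equation μ_2 P_{2,1} = λ_1 P_2 then holds automatically, serving as a consistency check:

```python
# Numerical solution of the FIFO balance equations (arbitrary rates).
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

l1, l2, m1, m2 = 0.5, 0.7, 1.2, 1.5
# unknowns P0, P1, P2, P12, P21; the balance equation of state (2,1)
# is replaced by the normalizing condition
A = [[-(l1 + l2), m1, m2, 0, 0],
     [l1, -(m1 + l2), 0, 0, m2],
     [l2, 0, -(m2 + l1), m1, 0],
     [0, l2, 0, -m1, 0],
     [1, 1, 1, 1, 1]]
P0, P1, P2, P12, P21 = solve(A, [0, 0, 0, 0, 1])
```

This numeric route is useful whenever the closed-form expressions become too unwieldy to manipulate by hand.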

69 Knowing the distribution under all 3 disciplines it is quite simple to get the distribution for the homogeneous case (λ_1 = λ_2 = λ, μ_1 = μ_2 = μ). Namely, we obtain

Q_1 = (2λ/μ) Q_0, Q_2 = (λ/μ) Q_1 = (2λ^2/μ^2) Q_0, Q_0 = (1 + 2λ/μ + 2λ^2/μ^2)^{−1},

as we have seen in the earlier Examples.

Mean time to the first system failure

To get the distribution of the time to the first system failure one can easily see that the service discipline has no effect on it if we have only two components. Omitting the tiresome Laplace-transform method, the mean time can be obtained relatively simply by probabilistic reasoning. To do so we need the following notation. Let E(T_i) denote the mean time to the first system failure starting from state i, i = 0, 1, 2. By the theorem of total expectation and the properties of the exponential distribution the following equations can be written:

E(T_1) = 1/(λ_2 + μ_1) + μ_1/(λ_2 + μ_1) E(T_0),
E(T_2) = 1/(λ_1 + μ_2) + μ_2/(λ_1 + μ_2) E(T_0),
E(T_0) = 1/(λ_1 + λ_2) + λ_1/(λ_1 + λ_2) E(T_1) + λ_2/(λ_1 + λ_2) E(T_2).

After elementary calculations we obtain

E(T_0) = (1 + λ_1/(λ_2 + μ_1) + λ_2/(λ_1 + μ_2)) / (λ_1 + λ_2 − λ_1μ_1/(λ_2 + μ_1) − λ_2μ_2/(λ_1 + μ_2)).

In particular, if μ_1 = μ_2 = 0, that is, when there is no repair, this formula reduces to the result of Example 6, that is

E(T_0) = (1 + λ_1/λ_2 + λ_2/λ_1) / (λ_1 + λ_2).
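The total-expectation equations for E(T_i) are linear and easy to evaluate numerically. The sketch below (with arbitrary rates) also checks the no-repair reduction against the direct formula E(max(X_1, X_2)) = 1/λ_1 + 1/λ_2 − 1/(λ_1 + λ_2):

```python
# Mean time to the first system failure from the total-expectation equations.
def mttf(l1, l2, m1, m2):
    s = l1 + l2
    num = 1 / s + l1 / s / (l2 + m1) + l2 / s / (l1 + m2)
    den = 1 - l1 / s * m1 / (l2 + m1) - l2 / s * m2 / (l1 + m2)
    return num / den

l1, l2, m1, m2 = 0.6, 0.9, 1.1, 1.4       # arbitrary rates
ET0 = mttf(l1, l2, m1, m2)
ET0_norepair = mttf(l1, l2, 0.0, 0.0)
Emax = 1 / l1 + 1 / l2 - 1 / (l1 + l2)    # E(max(X1, X2)), no repair
```

As expected, repair can only increase the mean time to the first system failure.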

70 By applying the theorem of total moments the second moments of these times can be calculated, and thus the variance of T can be obtained. In particular, if μ_1 = μ_2 = 0, that is, when there is no repair, the formula again reduces to the result of Example 6.

7 Chapter 5 Continuous-Time Markov Chains

The evolution of time-dependent systems can be described by different methods, the most commonly known one being the method of differential equations. If, in addition, the systems exhibit random behavior, the situation becomes even more complicated. It is not the aim of this chapter to deal with the theory of stochastic processes, since it has a wide literature and its mathematical level exceeds the level of this note. Omitting the precise mathematical treatment, we concentrate only on the simplest processes, which will be used later on in the performance modeling of queueing systems. There is a variety of sources on this topic in either digital or printed form, but for our purposes the following books fit best: Allen [], Kleinrock [5], Ovcharov [7], Sztrik [2], Tijms [3], Trivedi [4].

This section is devoted to one of the most commonly used stochastic processes, namely when at each time t we have a random variable X(t) taking values 0, 1, ... representing the states of the process. To know the evolution of the process we need the connection between the random variables at different times; in other words, we have to formulate mathematically how the future depends on the past. The simplest relationship is when the future depends on the past only through the present, which can be expressed as follows.

Definition 6 (Markov property) If for any n and states i_1, ..., i_{n+1}

P(X(t_{n+1}) = i_{n+1} | X(t_1) = i_1, ..., X(t_n) = i_n) = P(X(t_{n+1}) = i_{n+1} | X(t_n) = i_n),

then the process X(t) is called a Markov chain.

Let P_ij(t, t+h) = P(X(t+h) = j | X(t) = i) denote the transition probability of the chain, which in the time-homogeneous case is denoted by P_ij(h). It means that during time h the process changes its state from i to j, irrespective of the time instant t. It is easy to see that

Σ_j P_ij(t, t+h) = 1.

To see how the distribution of the states changes in time we have to know how the transition probabilities change. Thus we define

72 Definition 7 (Intensity matrix, rate matrix) The intensity matrix Q with elements q_ij is defined by

q_ij = lim_{h→0} P_ij(h)/h, if i ≠ j,
q_ii = lim_{h→0} (P_ii(h) − 1)/h,

that is, since P_ij(0) = 0 for i ≠ j and P_ii(0) = 1, q_ij is the derivative of P_ij(h) at h = 0. Hence the transition probabilities can be expressed as

P_ij(h) = q_ij h + o(h), i ≠ j,
P_ii(h) = 1 + q_ii h + o(h).

Let us introduce the distribution of the process at time t, that is

P_j(t) = P(X(t) = j), j = 0, 1, ....

Then for the balance equations we have

P_j(t+h) = P_j(t) P_jj(h) + Σ_{i≠j} P_i(t) P_ij(h), j = 0, 1, 2, ....

These can be written as

P_j(t+h) − P_j(t) = P_j(t)(P_jj(h) − 1) + Σ_{i≠j} P_i(t) P_ij(h),

(P_j(t+h) − P_j(t))/h = P_j(t)(P_jj(h) − 1)/h + Σ_{i≠j} P_i(t) P_ij(h)/h, j = 0, 1, 2, ....

Taking the limit as h → 0 the desired system of differential equations can be obtained, namely

P'_j(t) = q_jj P_j(t) + Σ_{i≠j} q_ij P_i(t), j = 0, 1, 2, ...,

Σ_j P_j(t) = 1 (normalizing condition),

P_j(0) = 1 if j = k, and P_j(0) = 0 if j ≠ k, k = 0, 1, 2, ... (initial condition).

Steady-state, stationary distribution

One can easily notice that all the systems treated in the previous problems are special cases of continuous-time Markov chains. As we have seen, it is rather difficult to get the transient solution even for quite simple systems. To obtain tractable formulas we are interested in the limiting distribution, which is called the steady-state, or stationary, distribution. Mathematically this can be written as

P_k = lim_{t→∞} P_k(t).

73 Then the steady-state balance equations can be obtained as

q_j P_j = Σ_{i≠j} q_ij P_i, Σ_j P_j = 1, where q_j = −q_jj.

In performance modeling of different systems we have to identify the stochastic process describing the dynamic behavior of the system. The most widely used, and consequently the most thoroughly investigated, class of stochastic processes is the class of Markov processes. Many practical problems can be treated with the help of a continuous-time Markov chain, which is why we state the following theorem without proof.

Theorem 28 A stochastic process X(t) is a continuous-time Markov chain if and only if the sojourn time in any state j is an exponentially distributed random variable with parameter q_j, j = 0, 1, ....

5.1 Birth-Death Processes

To simplify the investigations let us assume that the process moves to neighboring states only. This situation can be formulated as follows:

q_{i,i+1} = λ_i, P_{i,i+1}(h) = λ_i h + o(h), i = 0, 1, ...,
q_{i,i−1} = μ_i, P_{i,i−1}(h) = μ_i h + o(h), i = 1, 2, ...,
q_{ii} = −(λ_i + μ_i), P_{ii}(h) = 1 − (λ_i + μ_i)h + o(h), i = 0, 1, ...,
q_{ij} = 0, P_{ij}(h) = o(h) if |i − j| > 1, with μ_0 = 0.

In this case λ_i and μ_i are called birth and death intensities, respectively. Then the balance equations also simplify, namely

P'_j(t) = −(λ_j + μ_j)P_j(t) + λ_{j−1}P_{j−1}(t) + μ_{j+1}P_{j+1}(t), j = 0, 1, ...,

where λ_{−1} = 0. In steady state we have

0 = −(λ_j + μ_j)P_j + λ_{j−1}P_{j−1} + μ_{j+1}P_{j+1}, j = 0, 1, ....

To obtain the solution let us notice that for any j we get λ_j P_j = μ_{j+1} P_{j+1}, since with the notation D_j = λ_j P_j − μ_{j+1} P_{j+1} it is easy to see that

D_0 = λ_0 P_0 − μ_1 P_1 = 0,
D_j = λ_j P_j − μ_{j+1} P_{j+1} = λ_{j−1} P_{j−1} − μ_j P_j = D_{j−1} = 0, j = 1, 2, ....

74 By using this relationship it can be verified that

P_{j+1} = (λ_j / μ_{j+1}) P_j, (5.1)

thus

P_i = (λ_0 λ_1 ··· λ_{i−1}) / (μ_1 μ_2 ··· μ_i) P_0, i = 1, 2, ...,

P_0 = (1 + Σ_{i=1}^∞ (λ_0 ··· λ_{i−1}) / (μ_1 ··· μ_i))^{−1},

which is called the steady-state distribution of a birth-death process and later on plays an important role in modeling several queueing systems. In the case of an infinite state space the series appearing in the normalizing condition should be convergent. Conditions assuring the convergence are often referred to as stability conditions. It can be proved that under the stability condition the solution is unique. Let us consider the following simple example.

Example 43 Let λ_i = λ, i = 0, 1, ..., and μ_i = μ, i = 1, 2, .... Then

P_i = (λ/μ)^i P_0, P_0 = (Σ_{i=0}^∞ (λ/μ)^i)^{−1},

which is convergent iff λ < μ, and in this case P_i = (1 − λ/μ)(λ/μ)^i.

Pure Birth Processes

If λ_i = λ, i = 0, 1, ..., and μ_i = 0, i = 1, 2, ..., then we are speaking about a pure birth process, and the set of differential equations reduces to

P'_0(t) = −λP_0(t),
P'_j(t) = −λP_j(t) + λP_{j−1}(t), j = 1, 2, ...,
P_k(0) = 1 if k = 0, and P_k(0) = 0 if k ≠ 0,

which was obtained for the Poisson process.
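The product-form stationary distribution of a birth-death process is easy to evaluate numerically. The following sketch truncates the infinite chain far in the tail (where the remaining mass is negligible) and reproduces the geometric solution of Example 43:

```python
# Stationary distribution of a birth-death process, truncated at state n.
def bd_stationary(lams, mus):
    w = [1.0]
    for lam, mu in zip(lams, mus):
        w.append(w[-1] * lam / mu)   # product of lam_0..lam_{i-1}/mu_1..mu_i
    s = sum(w)
    return [x / s for x in w]

lam, mu, n = 0.6, 1.0, 60            # arbitrary rates; lam < mu for stability
P = bd_stationary([lam] * n, [mu] * n)
rho = lam / mu
geometric = [(1 - rho) * rho ** i for i in range(n + 1)]
err = max(abs(a - b) for a, b in zip(P[:20], geometric[:20]))
```

The same helper can be fed state-dependent rate lists, covering all the repairman models of the previous chapter.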

75 Part II Exercises


77 Chapter 6 Basic Concepts from Probability Theory 6. Discrete Probability Distributions Exercise Show that if X B(n, p) then EX np, and V ar(x) np( p). n ( ) n n ( ) n EX k p k ( p) n k n p p k ( p) n (k ) k k k k n ( ) n np p j ( p) n j np(p + p) n np. j j } {{ } Binomimial theorem EX 2 n ( n k 2 k n ( n k(k ) k k k2 ) p k ( p) n k n ( n (k(k ) + k) k n ( n k k k ) p k ( p) n k + k ) p k ( p) n k ) p k ( p) n k } {{ } np n ( ) n 2 n(n )p 2 p k 2 ( p) n 2 (k 2) + np k 2 k2 n 2 ( ) n 2 n(n )p 2 p j ( p) n 2 j + np j j n(n )p 2 + np np((n )p + ) np(np p + ). V ar(x) EX 2 E 2 X np(np p + ) (np) 2 (np) 2 np 2 + np (np) 2 np( p). Exercise 2 Show that if X P o(λ), then EX λ, and V ar(x) λ. 77

78 EX k k λk k! e λ e λ λ k λ k (k )! e λ λ j λ j j! e λ λe λ λ. EX 2 k k2 k 2 λk k! e λ λ 2 e λ (k(k ) + k) λk k k(k ) λk k! e λ + k2 k k λk k! e λ } {{ } λ k! e λ λ k 2 (k 2)! + λ λ2 e λ e λ + λ λ 2 + λ. V ar(x) EX 2 E 2 X λ 2 + λ λ 2 λ. Exercise 3 Show that if X Geo(p), then EX q, and V ar(x). p p 2 EX kpq k p kq k p k k k ( q) ( )q p p ( q) 2 p 2 p. ( (q k ) p k ) ( ) q q k p q We used the fact that in the case of absolute convergent series the summation and derivative are interchangeable. Thus EX 2 k k k 2 pq k p k 2 q k p (k(k ) + k)q k k p k(k )q k + p ( pq k 2( p) p 2 k kq k } {{ } p ) q k + ( q p pq q + p 2 2p + p p 2 V arx 2 p p 2 k p k(k )q k 2 q + p pq (q k ) + p k ) + ( p pq 2 p p 2. ( q) 2 ( ) 2 2 p q p p 2 p k ) + p pq 2 ( q) 3 + p

79 In the following we can show how these results can be obtained by using the property of the geometric distribution. E(X) kp( p) k ( p) (k )p( p) k 2 + k ( p)e(x) +. So E(X) p. Similarly E(X 2 ) k 2 p( p) k k ( p) k p( p) k k ((k ) 2 + 2k )p( p) k k (k ) 2 p( p) k 2 + 2E(X) k ( p)e(x 2 ) + 2 p. Thus Hence E(X 2 ) ( p)e(x 2 ) + 2 p p E(X 2 ) 2 p p 2. V ar(x) 2 p p 2 p 2 p p 2. Exercise 4 Find the mean and variance of a modified geometric distribution with success parameter p. As we know the modified geometric distribution is P (X k) pq k, k,... and X X, where X Geo(p). Hence E(X ) E(X ) EX p p p V ar(x ) V ar(x). q p, Exercise 5 Show that that the geometric distribution yields P (X k + l X > k) P (X l), that is the so-called memoryless property holds. 79

80 P (X k + l X > k) P (X k + l) P (X > k) p( p) k+l ( jk+ p( p)j p) k+l jk+ ( p)j ( p)k+l ( p) k +p ( p)k+l ( p) k p p( p)k+l ( p) k p( p) l P (X l). Exercise 6 Let X B(n, ρ), Y B(m, ρ) and independent random variables. Find that P (X i X + Y k). P (X i X + Y k) P (X i, X + Y k) P (X + Y k) P (X i, Y k i). P (X + Y k) Since X and Y are independent then the convolution of X, Y is also binomial so we have P (X i X + Y k) ( n k) p i ( p) n i( ) m k i p k i ( p) m k+i ) pk ( p) n+m k ( n+m k that is we obtain the hypergeometric distribution. ( n m ) i)( k i ) ( n+m k Exercise 7 Let X P o(λ) and Y P o(β) independent random variables. Find that )( λ λ + β } {{ } p P (X k X + Y n). P (X k, Y n k) P (X k)p (Y n k) P (X k X + Y n) P (X + Y n) P (X + Y n) λ k βn k e λ k! (n k)! e β λ k β n k ( n ) k! (n k)! k λ k β n k ( ) n λ k (λ+β) n e n! (λ+y ) (λ+β) n (λ + Y ) β n k n k (λ + β) k (λ + β) n k n! ( ) k ( ) n k ( ) n β n p k ( p) n k B(n, p). k k λ + β } {{ } p 8
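The conditional-distribution identity of Exercise 7 can be verified numerically for concrete parameters (the values below are arbitrary); the sum of independent Poisson variables is again Poisson, which gives the denominator directly:

```python
from math import comb, exp, factorial

# Check: P(X = k | X + Y = n) is Binomial(n, lam/(lam+beta)).
lam, beta, n = 2.0, 3.0, 10          # arbitrary parameters

def pois(mu, k):
    return mu ** k / factorial(k) * exp(-mu)

p = lam / (lam + beta)
err = max(abs(pois(lam, k) * pois(beta, n - k) / pois(lam + beta, n)
              - comb(n, k) * p ** k * (1 - p) ** (n - k))
          for k in range(n + 1))
```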

81 Exercise 8 Let us consider a supermarket at which customers arrive according to a Poisson distribution with parameter λ and choose the ith cashier with probability p i (i,..., n, i p i ). Find the distribution of the number of customers at cashier i. Let us perform a random experiment with N independent and identical trials. Let describe X i, i,..., n the number of ith outcome. As the joint distribution of (X,..., X n ) is a multinomial distribution with parameters N and p,..., p n we have P (X k,..., X n k n X X n N) Since X X n X P o(λ) N! k!... k n! pk... p kn n. P (X k,..., X n k n ) P (X k,..., X n k n X X n N)P (X N) N! λ N k!... k n! pk... p kn n N! e λ pk... p kn n k!... k n! λk +...+k n e λ( (p λ) k k! e λpn... (p nλ) kn e λpn. k n! n i p i) It follows that X i P o(λp i ), i,..., n, and are independent random variables. 6.2 Continuous Probability Distributions Exercise 9 Let Γ(α) t α e t dt, α > so-called complete Gamma function ( Γ(α) function ). Show that Γ(α) (α )Γ(α ), α >. Using integration by parts we have Γ(α) t α e t dt [ t α e t] + (α ) t α 2 e t dt (α )Γ(α ) since the value of the first part is zero. It can easily be proved by the help of the L Hospital rule. It is easy to see that Γ(), so Γ(n) (n )! that is Γ(α) can be considered as the generalization of the factorial function. 8
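The splitting property of Exercise 8 can also be checked numerically by summing out the total number of customers in the law of total probability; the Poisson tail is truncated at a large N_MAX where the remaining mass is negligible:

```python
from math import comb, exp, factorial

lam, p1 = 3.0, 0.4                   # arbitrary arrival rate and routing probability
N_MAX = 80                           # truncation of the Poisson tail

def pois(mu, k):
    return mu ** k / factorial(k) * exp(-mu)

# marginal of the count at cashier 1, via the law of total probability
marginal = [sum(pois(lam, N) * comb(N, k) * p1 ** k * (1 - p1) ** (N - k)
                for N in range(k, N_MAX))
            for k in range(15)]
err = max(abs(marginal[k] - pois(lam * p1, k)) for k in range(15))
```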

82 ( ) Exercise Show that Γ π. 2 Γ ( ) 2 Introducing the substitution t x2 2 Γ ( ) 2 t 2 e t dt we have dt dx e t dt. t x, thus 2 x 2 x e 2 xdx π e x2 2 dx π. 2π Exercise Find the mean, variance and the kth moment of the gamma distribution with parameters (α, λ). where E(X) Introducing the substitution u λx since Γ(α + ) αγ(α). Similarly E(X 2 ) E(X) Γ(α) u α e u Γ(α) x λ(λx)α e λx dx, Γ(α) t α e t dt. Γ(α + ) du λ λγ(α) x 2 λ(λx)α e λx dx Γ(α) λ λ 2 Γ(α) u α+ e u du 82 α λ, (λx) α+ e λx dx Γ(α) Γ(α + 2) λ 2 Γ(α) (α + )α λ 2.

83 Thus V ar(x) E(X 2 ) (E(X)) 2 That is the squared coefficient of variation is CX 2 than. Finally E(X k ) x k λ(λx)α e λx dx Γ(α) λ k λ k Γ(α) u k+α e u du (α + )α ( α ) 2 α λ 2 λ λ. 2 Γ(k + α) λ k Γ(α) /α, that can be less and greater (λx) k+α e λx dx Γ(α) α(α + )... (α + k ) λ k. In particular, in the case of α n we have the Erlang distribution with parameters (n, λ) and we obtain E(X k n(n + )... (n + k ) ). λ k In case of n it reduces to the exponential distribution, that is E(X k ) k! λ k. Exercise 2 Find the mean and variance of the Pareto distribution with parameters (k, α). Thus E(X) E(X 2 ) V ar(x) k xαk α x α dx [ αk α x α+ α k ] k k αk α x α dx kα, α > α, α k 2 α, α > 2 αk α x α+ α 2 dx, α 2 ( ) 2 k2 α kα α 2, α > 2 α 83

84 Exercise 3 Let X Exp(λ), and Y c e αx, where α, c >. Find the distribution function of Y. P (Y < x) P (ce αx < x) P ( ( x )) ( αx < ln P X < ( x ) ) c α ln c e λ α ln( x c ) e ln( c x) λ α ( c x) λ α, that is we obtain the Pareto distribution with parameters ( c, λ α). 84

85 Chapter 7 Fundamentals of Stochastic Modeling 7. Exponential Distribution and Related Distributions Exercise 4 Show that that the exponential distribution obeys P (X > x + y X > x) P (X > y), which is referred to as memoryless, or Markov property. P (X > x + y X > x) P (X > x + y) P (X > x) P (X < x + y) P (X < x) ( e λ(x+y) ) ( e λx ) e λy P (X > y). Exercise 5 Find the nth moment of an exponentially distributed random variable with parameter λ. E(X n ) x n λe λx dx [ x n e λx] + n λ x n λe λx dx. Using the L Hospital s rule it is easy to prove that the value of the first part is and thus E(X n ) n λ E(Xn ). Taking into account the recursion one can easily see that E(X n ) n! λ n. 85
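Both results of this page can be checked numerically (the parameter values below are arbitrary); the moments are evaluated by the trapezoidal rule:

```python
import math

lam = 1.7                            # arbitrary rate

# memoryless property: P(X > x+y | X > x) = P(X > y)
x, y = 0.8, 1.3
lhs = math.exp(-lam * (x + y)) / math.exp(-lam * x)
rhs = math.exp(-lam * y)

# E(X^n) = n!/lam^n, checked by numerical integration
def moment(n, h=1e-3, hi=50.0):
    f = lambda t: t ** n * lam * math.exp(-lam * t)
    m = int(hi / h)
    return h * (0.5 * f(0) + sum(f(i * h) for i in range(1, m)) + 0.5 * f(hi))

err = max(abs(moment(n) - math.factorial(n) / lam ** n) for n in (1, 2, 3))
```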

86 Exercise 6 Let us assume that at t two independent activities start. Their durations are denoted by X, Y and are supposed to be exponentially distributed random variables with parameters λ, µ, respectively. Let V minima(x, Y ), Z maxima(x, Y ), and W Z V. Find. The distribution and mean of V, 2. P (X < Y ), that is X completes first, 3. The distribution and mean of W, that is the distribution of the time between the first and second events, 4. The probability that at an arbitrary time t 5. P (W + V < t)), 6. P (X < t X < τ), 7. P (X < t X < Y ), 8. P (X < t Y < X). P (X < t < Y ), P (V < t < Z), P (X < t, Y < t),. Distribution of the first event P (V < t) P (V > t) P (X > t, Y > t) P (X > t)p (Y > t) e λt e µt e (λ+µ)t that is, V is exponentially distributed with parameter λ + µ, consequently E(V ) λ+µ, 2. X completes first P (X < Y ) y P (X < y)f Y (y) dy 3. distribution of the time between the first and second event y ( e λy )µe µy dy µ λ + µ P (W < t) P (W < t X < Y )P (X < Y ) + P (W < t X > Y )P (X > Y ) λ λ + µ, given (X < Y ), W represents the residual time of Y and similar argument is valid for X. Due to the memoryless property of Y, X Consequently P (W < t X < Y )P (X < Y ) + P (W < t X > Y )P (X > Y ) λ λ + µ ( e µt ) + µ λ + µ ( eλt ). E(W ) λ µ(λ + µ) + µ λ(λ + µ), 86

87 4. X has been completed, but Y is still running P (X < t < Y ) P (X < t)p (Y > t) ( e λt )e µt, the first event has been completed, but the second is still running P (V < t < Z) P (V < t < Z X < Y )P (X < Y ) + P (V < t < Z X > Y )P (X > Y ) P (X < t < Y X < Y )P (X < Y ) + P (Y < t < X X > Y )P (X > Y ) P (X < t < Y, X < Y ) + P (Y < t < X, X > Y ) P (X < t < Y ) + P (Y < t < X) ( e λt )e µt + ( e µt )e λt, both events have been completed P (X < t, Y < t) P (Z < t) P (X < t)p (Y < t) ( e λt )( e µt ), 5. distribution of the sum of W and V Since W, V are dependent random variables their convolution cannot be applied. However, it is easy to see that P (W + V < t) P (Z < t) ( e λt )( e µt ), 6. distribution of X given X < τ P (X < t X < τ) P (X < t, X < τ) P (X < τ) { P (X<t) e λt ha < t < τ P (X<τ) e λτ ha t > τ. 7. distribution of X given X < Y P (X < t, X < Y ) P (X < t X < Y ) P (X < Y ) t y P (X < y)f Y (y) dy P (X < Y ) e (λ+µ)t, y + P (X < t, X < y)f Y (y) dy yt P (X < Y ) P (X < t)f Y (y) dy P (X < Y ) that is, it follows an exponential distribution with parameter λ + µ, 8. distribution of X given X > Y P (X < t, X > Y ) P (X < t X > Y ) P (X < t, X > y)f y Y (y) dy P (X > Y ) P (X > Y ) t y P (y < X < t)f t Y (y) dy + (F y X(t) F X (y))f Y (y) dy P (X > Y ) P (X > Y ) e λt λ µ e µt ( e µt ). 87
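Several of these formulas are easy to confirm by simulation. A sketch (fixed seed, arbitrary rates) checking that P(X < Y) = λ/(λ + μ) and that E(V) = 1/(λ + μ) for V = min(X, Y):

```python
import random

random.seed(2024)
lam, mu = 1.0, 2.0                   # arbitrary rates
n = 200_000
wins, vsum = 0, 0.0
for _ in range(n):
    x = random.expovariate(lam)
    y = random.expovariate(mu)
    wins += x < y
    vsum += min(x, y)

p_hat = wins / n                     # estimate of P(X < Y)
v_hat = vsum / n                     # estimate of E(min(X, Y))
```

The tolerances below are loose compared with the standard errors of the estimates.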

88 Exercise 7 Find the probability that X i min{x,..., X n }, supposing that X k Exp(λ k ), k,..., n and are independent. By the law of total probability P (X i < min(x,..., X i, X i+,..., X n )) ( n ( e λix ) ( n n j,j i j,j i j,j i λ j ) e n j,j i λ jx dx λ j ) e λ ix e n j,j i λ jx dx P (X i < x)f min(x,...,x i,x i+,...,x n)(x) dx ( n ) λ j e n j λ jx λ j dx n j j λ j } {{ } ( n j,j i e n j λ jx λ j ) e n j,j i λ jx dx ( n j,j i n j,j i λ j n j λ j λ j ) dx λ i n j λ. j Exercise 8 Find the distribution and mean of a series system consisting of independent and exponentially distributed components. In the case of series system E(min(X,..., X n )) n j λ j n j min(ex,..., EX n ). EX j Exercise 9 Find the distribution, mean and variance of a parallel system consisting of independent and exponentially distributed components with the same failure rate, that is X i Exp(λ), i,..., n. P (max(x,..., X n ) < x) n P (X i < x) ( e λx ) n. i Apply the following useful relation. If X then EX Substitute t e λx then P (X x) dx 88 ( F (x)) dx.

89 E(max(X,..., X n )) ( ( e λx ) n dx ( t n ) λ t dt ( + t t n ) dt [ ] t + t2 λ λ tn [ + n λ ] n }{{} nλ (n )λ } {{ } }{{} λ. failure failure n. - n-. failure Due to the memoryless property of the exponential distribution the time between the consecutive failures are also exponentially distributed and are independent of each other. It is easy to see that the parameter of the time between (k ) th and kth failure is (n k + )λ, k,..., n. This fact can be used to calculate the mean and variance of the time to the kth failure. Hence E(time to the kth failure ) nλ (n k + )λ V ar(time to the kth failure) (nλ) ((n k + )λ) 2 k,..., n. In particular, the variance of the life time of a parallel system is the variance of the last failure, that is (nλ) λ 2. Exercise 2 Let X i, i,..., n, be independent random variables. Show that E(max(X,..., X n )) max(e(x ),..., E(X n )) E(min(X,..., X n )) min(e(x ),..., E(X n )). P (max(x,..., X n ) < x) n P (X i < x) P (X i < x) i hence P (max(x,..., X n ) x) P (X i x) 89

90 thus E(max(X,..., X n )) P (max(x,..., X n ) x)dx from which the statement follows. Similarly thus P (min(x,..., X n ) x) E(min(X,..., X n )) P (X i x)dx E(X i ), i,..., n, n P (X i x) P (X i x), i P (min(x,..., X n ) x)dx P (X i x)dx E(X i ), i,..., n, from which the statement follows. Exercise 2 Prove that the distribution function of the Erlang distribution with parameters (n, λ) is n (λx) j F Yn (x) e λx! j! j F Yn (x) x λ(λt) n (n )! e λt dt λn (n )! x t n e λt dt Using integration by parts, where g(x) t n, f(x) λ e λt we get λ n x t n e λt dt (n )! } {{ } I n(x) λxn (n )! e λx +... n j x λn (n )! ([ λ e λt t n ] x λ(λt) n 2 (n 2)! e λt dt } {{ } I n (x) (λx) j e λx. j! x ( λ e λt )(n )t n 2 dt) λxn (n )! e λx λxn 2 (n 2)! e λx + I n 2 (x) 9

91 Consequently (λx) j e λx j! jn x λ(λt) n (n )! e λt dt. Exercise 22 Let X Exp(λ) and Y Exp(µ) and independent random variables. Find their convolution. z z z f X+Y (z) λe λx µe µ(z x) dx λµ e λx µ(z x) dx λµe µz e x(λ µ) dx λµe µz [ λ µ e x(λ µ) ] z λµe µz (λ µ) (e z(λ µ) ) λµ µ λ e λz λµ µ λ e µz µ µ λ λe λz + λ λ µ µe µz. Exercise 23 Find the mean of the previous convolution by using the density function. E(X + Y ) µ µ λ (λ + µ)(µ λ)) λµ(µ λ) µ x( µ λ λe λx + λ λ µ µe µx ) dx xλe λx dx + λ λ µ λ + µ λµ λ + µ. xµe µx dx µ µ λ λ + λ λ µ µ µ2 λ 2 λµ(µ λ) Obviously E(X + Y ) E(X) + E(Y ) thus we could check the correctness of the density function. Exercise 24 Derive the density function of the Erlang distribution with parameters (2, λ) from the 2-phase hypoexponential distribution. As we have seen f X+Y (x) λ λ µ µe µx + µ µ λ λe λx. 9

92 Taking the limit as µ λ we get the desired result, that is λ lim µ λ λ µ µe µx + µ ( λµ e λx e µx) µ λ λe λx lim µ λ µ λ, therefore we apply the L Hospital s rule. Thus we obtain λ 2 xe λx, what is the density function we needed. Exercise 25 Find the distribution function of the 2-phase hypoexponential distribution. F X+Y (x) x f X+Y (y)dy x ( λ λ µ µe µx + µ ) µ λ λe λx dy λ ( ) e µx + µ ( ) e λx λ µ µ λ λ λe µx µ + µe λx λ µ + ( µe λx λe µx). λ µ To check its correctness let us take the limit as µ λ. Applying the L Hospital s rule we have e λx λxe λx which is exactly the distribution function of the Erlang distribution with parameters (2, λ). Exercise 26 Let X Exp(λ), Y Exp(µ) and independent random variables. Find the conditional density function f X X+Y (x y). f X X+Y (x y) f X(x) f Y (y x) f X+Y (y) e (λ µ)x λe λx µ e µ(y x) λµ λ µ (e µy e λy ) (λ µ), < x < y. e (λ µ)y Specially, if λ µ, then using the L Hospital s rule and taking substitution z λ µ we get lim z z e z x lim e z y z z lim e z y z y e z y y, 92

93 that is we have the uniform distribution. If at the beginning we assume that λ µ then f X X+Y (x y) λe λx λ e λ(y x) λ(λy)e λy y, since X + Y follows the Erlang distribution with parameters (2, λ). Exercise 27 Find the squared coefficient of variation of the Erlang distribution with parameters (n, λ). C Yn DY n EY n D(X X n ) E(X X n ) V ar(x ) V ar(x n ) EX EX n n λ 2 n λ n n n. Exercise 28 Verify the density function of the hyperexponential distribution. It is easy to see that it is nonnegative, furthermore f Yn (x) dx n p i λ i e λix dx i n p i λ i e λix dx } {{ } i n p i. i Exercise 29 Show that the squared coefficient of variation of the hyperexponential distribution is always at least To prove it, we need n CY 2 i n p i 2 ( n λ 2 i p i i λ i ) 2 ( n i p i λ i ) 2 n i p i λ 2 i ( n i ) 2 p i, λ i which follows from the Cauchy-Bunyakovszkij-Schwartz inequality with substitutions. y i p i, x i pi λ i 93

94 Exercise 3 Let X i W (λ i, α), i,..., n, independent random variables. Show that ( n ) min(x,..., X n ) W λ i, α. It is well-known that P (min(x,..., X n ) < x) e i n ( P (X i < x)) i n i λ i x α, n ( e λ i ) x α from which our statement follows. In particular, if α we obtain the relations valid for the exponential distribution. i 7.2 Basics of Reliability Theory Exercise 3 Find the hazard rate ( known also as failure rate, intensity, conditional failure rate) function of the hyperexponential distribution. h(t) n p i λ i e λ it i n p i e λ it which is monotone decreasing and its image is in the interval i, [ min(λ,..., λ n ), n p i λ i ]. It can be shown in the following way. If h (t) < on the interval [, ) then h(t) is monotone decreasing on it. Since by the rule of the derivative of a ratio the denominator is always positive it is enough to investigate the sign of the numerator of h (t). For the numerator we have ( n ) ( n ) ( n 2 p i λ 2 i e λ it p i e λ it + p i λ i e it) λ. i i i i Apply the well-known inequality ( n ) 2 a i b i i 94 n i a 2 i n i b 2 i

95 with substitutions a i p i e λ it, b i λ i pi e λ it thus h (t) <. It can also be seen that h(t) takes its maximum at, therefore h() n p i λ i. Furthermore, one can easily verify that h(t) min(λ,..., λ n ). i Exercise 32 Find the hazard rate function of the 2-phase hypoexponential distribution. λ λ 2 ( λ h(t) 2 λ e λ t e ) λ 2t ( λ2 / e λt λ ) e λ 2t λ 2 λ λ 2 λ λ 2 λ λ ( λ 2 e λ t e ) λ 2t, λ 2 e λ t λ e λ 2t which is monotone increasing and its image is in the interval [, min(λ, λ 2 )]. Similarly to the previous exercise the sign of h (t) is determined by ( λ e λ t + λ 2 e λ 2t ) 2 + (λ λ 2 e λ t λ λ 2 e λ 2t )(e λ t e λ 2t ) ( λ e λ t + λ 2 e λ 2t ) 2 + λ λ 2 (e λ t e λ 2t ) 2 > thus h(t) is monotone increasing and h(). If λ < λ 2, then h(t) λ λ 2 λ 2 λ. Similarly, if λ 2 < λ, then h(t) λ λ 2 λ λ 2. Exercise 33 Find the hazard rate function of the Erlang distribution with parameters (n, λ) and show that it is monotone increasing. Similarly to the previous exercises we deal with the numerator of h (t), that is we have λλ(λx) n 2 (n ) (n )! λ 2 (λx)n 2 (n 2)! n i ( n 2 (λx) i i! (λx) i i i! λ(λx)n (n )! n 2 i λ(λx) i i! + (λx)n (n )! λx n 2 n Its sign depends on the second term. Let us denote it by S n n 2 i (λx) i i! S 2 λx i ( λx ) + (λx)n n (n )! + λx >. 95 ) (λx) i. i!

96 If n 3, then S n n 3 i n 3 (λx) i i! (λx) i i! i It is easy to see that ( λx n ( λx n S 3 λx 2 ) + (λx)n 2 (n 2)! ) + (λx)n 2 (n 2)!. + λx 2 + λx 2 ( λx ) + (λx)n n (n )! >. Let us suppose that S n > then prove that S n+ > by induction. since S n >. S n+ n 2 i n 3 (λx) i i! (λx) i i! i n 3 (λx) i > i! i ( λx ) + (λx)n n (n )! ( λx ) + (λx)n 2 n ( λx n S n + (λx)n (n 2)! (n )n (n 2)! ( λx ) + (λx)n n (n )! ) + (λx)n 2 (n 2)! + (λx)n (n 2)! >, ( n n ) 7.3 Random Sums Exercise 34 Find the distribution of the mixture of Erlang distributions with parameters (i, λ) and geometric distribution with parameter p. f(x) i i λ(λx)i p( p) (i )! e λx pλe λx j i (λx)i ( p) (i )! pλe λx (( p)λ)x) j pλe λx e ( p)λx pλe pλx Exp(pλ). j! i Exercise 35 Find the distribution of the mixture of binomial and Poisson distributions. ( p k (i) ( ) i k p i ( p) i k, q i λi i! e λ )? 96

97 p k i (pλ)k e λ k! ( i )p k i k λi ( p) k i! e λ e λ p k λ k k! j (( p)λ) j j! (pλ)k k! ik (( p)λ)i k (i k)! e λ e ( p)λ (pλ)k e pλ P o(pλ). k! Exercise 36 Let X i Exp(λ), ν Geo(p) and independent random variables. Find the density function of the random sum Y ν. By the theorem of total probability f(x) f Yν (x) f Yk (x) P (ν k) } {{ } k (n,λ) Erlang i λ(λx)i p( p) (i )! e λx pλe λx i (λx)i ( p) (i )! i pλe λx (( p)λ)x) j pλe λx e ( p)λx pλe pλx Exp(pλ). j! j i Exercise 37 Let X i (A) be Bernoulli distributed with parameter p, ν P o(λ) and independent random variables. Find the distribution of Y ν X (A) X ν (A). p k ik (pλ)k e λ k! ( i )p k i k λi ( p) k i! e λ e λ p k λ k k! j (( p)λ) j j! (pλ)k k! ik (( p)λ)i k (i k)! e λ e ( p)λ (pλ)k e pλ P o(pλ). k! 97
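The collapse of the geometric mixture of Erlang densities to Exp(pλ) can be checked numerically; the Erlang terms are built iteratively to avoid evaluating large powers and factorials:

```python
import math

p, lam = 0.3, 2.0                    # arbitrary parameters

def mixture_density(x, terms=200):
    # sum_i p(1-p)^{i-1} * Erlang(i, lam) density, terms built iteratively
    s, erlang, weight = 0.0, lam * math.exp(-lam * x), p
    for i in range(1, terms + 1):
        s += weight * erlang
        erlang *= lam * x / i        # Erlang(i, lam) -> Erlang(i+1, lam) density
        weight *= 1 - p
    return s

err = max(abs(mixture_density(x) - p * lam * math.exp(-p * lam * x))
          for x in (0.1, 0.7, 2.5, 6.0))
```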


99 Chapter 8 Analytic Tools, Transforms 8. Generating Function Exercise 38 Find the generating function of the binomial distribution with parameters (n, p) and then its mean, variance and distribution. G X (s) n ( ) n s k p k ( p) n k k k n k ( ) n (sp) k ( p) n k (sp+( p)) n (+p(s )) n. k EX G X () n( + p(s )) n p s np( + p(s )) n s np. G X() (np( + p(s )) n ) s np(n )p( + p(s )) n 2 s n(n )p 2. V ar(x) n(n )p 2 + np (np) 2 np( p). G (k) X (s) n(n )(n 2)... (n k + )pk ( + p(s )) n k. G (k) X () n(n )(n 2)... (n k + )pk ( p)) n k. p k G(k) X () k! n(n )(n 2)... (n k + ) p k ( p) n k. } {{ k! } ( n k) Exercise 39 Find the generating function of the geometric distribution with parameter p. Furthermore, investigate for which s it will be convergent, then calculate the mean and variance. G X (s) s k p( p) k sp (( p)s) k k k sp, if s < ( p)s p. EX p( ( p)s) sp( ( p)) ( ( p)s) 2 s p2 p 2 + p ( + p) 2 p. 99

Exercise 40 Find the distribution with the help of the generating function G_X(s) = e^{-\lambda(1-s)}.

G^{(k)}_X(s) = \underbrace{\lambda \cdots \lambda}_{k} e^{-\lambda(1-s)} = \lambda^k e^{-\lambda(1-s)},

thus

p_k = \frac{G^{(k)}_X(0)}{k!} = \frac{\lambda^k}{k!} e^{-\lambda},

that is, G_X(s) is the generating function of the Poisson distribution with parameter λ.

Exercise 41 Find the mean and variance of the random sum with the help of the generating function.

As has been proved, the generating function of the random sum is

G_{Y_ν}(s) = G_ν(G_X(s)).

Hence, using G_X(1) = 1,

EY_ν = G'_{Y_ν}(1) = G'_ν(G_X(1)) G'_X(1) = Eν\, EX.

Furthermore,

G''_{Y_ν}(s) = G''_ν(G_X(s)) (G'_X(s))^2 + G'_ν(G_X(s)) G''_X(s),
G''_{Y_ν}(1) = G''_ν(1)(EX)^2 + Eν\, G''_X(1),

thus

E(Y_ν^2) = G''_{Y_ν}(1) + EY_ν = G''_ν(1)(EX)^2 + Eν\, G''_X(1) + Eν\, EX.

Therefore

Var(Y_ν) = G''_ν(1)(EX)^2 + Eν\, G''_X(1) + Eν\, EX - (Eν\, EX)^2
 = (Eν^2 - Eν)(EX)^2 + Eν(EX^2 - EX) + Eν\, EX - (Eν\, EX)^2
 = (EX)^2 (Eν^2 - (Eν)^2) + Eν (EX^2 - (EX)^2)
 = (EX)^2 Var(ν) + Eν\, Var(X).
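The variance formula can be cross-checked against the two random sums computed in the previous section, where the compound distribution is known exactly. The arithmetic below (with arbitrary parameter values) confirms that the formula reproduces the moments of Exp(pλ) and Po(pλ).

```python
lam, p = 3.0, 0.4

# nu ~ Geo(p) on 1, 2, ..., X ~ Exp(lam); the random sum is Exp(p*lam)
E_nu, Var_nu = 1 / p, (1 - p) / p ** 2
EX, VarX = 1 / lam, 1 / lam ** 2
mean1 = E_nu * EX
var1 = EX ** 2 * Var_nu + E_nu * VarX
assert abs(mean1 - 1 / (p * lam)) < 1e-12
assert abs(var1 - 1 / (p * lam) ** 2) < 1e-12

# nu ~ Po(lam), X ~ Bernoulli(p); the random sum is Po(p*lam)
E_nu, Var_nu = lam, lam
EX, VarX = p, p * (1 - p)
mean2 = E_nu * EX
var2 = EX ** 2 * Var_nu + E_nu * VarX
assert abs(mean2 - p * lam) < 1e-12
assert abs(var2 - p * lam) < 1e-12
```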

8.2 Laplace-Transform

Exercise 42 Find the mean and variance of the random sum with the help of the Laplace-transform.

Since L_{Y_ν}(s) = G_ν(L_X(s)) and L_X(0) = 1,

EY_ν = -L'_{Y_ν}(0) = -G'_ν(L_X(0)) L'_X(0) = Eν\, EX.

Furthermore,

L''_{Y_ν}(s) = G''_ν(L_X(s))(L'_X(s))^2 + G'_ν(L_X(s)) L''_X(s),

so

E(Y_ν^2) = L''_{Y_ν}(0) = G''_ν(1)(-EX)^2 + Eν\, EX^2 = (Eν^2 - Eν)(EX)^2 + Eν\, EX^2
 = Eν^2 (EX)^2 + Eν \underbrace{(EX^2 - (EX)^2)}_{Var(X)} = Eν^2 (EX)^2 + Eν\, Var(X).

Therefore

Var(Y_ν) = Eν^2 (EX)^2 + Eν\, Var(X) - (Eν\, EX)^2 = \underbrace{(Eν^2 - (Eν)^2)}_{Var(ν)} (EX)^2 + Eν\, Var(X)
 = Var(ν)(EX)^2 + Eν\, Var(X).

Exercise 43 Find the Laplace-transform of the Erlang distribution with parameters (n, λ), and then the mean and variance.

L_X(s) = \int_0^\infty e^{-sx} \frac{\lambda(\lambda x)^{n-1}}{(n-1)!} e^{-\lambda x}\, dx
 = \left(\frac{\lambda}{\lambda+s}\right)^n \underbrace{\int_0^\infty \frac{(\lambda+s)((\lambda+s)x)^{n-1}}{(n-1)!} e^{-(\lambda+s)x}\, dx}_{=1}
 = \left(\frac{\lambda}{\lambda+s}\right)^n,

since the last integrand is the Erlang (n, λ+s) density.

EX = -L'_X(0) = -\lambda^n \left((\lambda+s)^{-n}\right)' \big|_{s=0} = \lambda^n n \lambda^{-n-1} = \frac{n}{\lambda},
E(X^2) = L''_X(0) = \lambda^n n(n+1)(\lambda+s)^{-n-2} \big|_{s=0} = \frac{n^2+n}{\lambda^2}.

Therefore

Var(X) = \frac{n^2+n}{\lambda^2} - \left(\frac{n}{\lambda}\right)^2 = \frac{n}{\lambda^2}.
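The Erlang transform can be checked by direct numerical quadrature of the defining integral. The sketch below (arbitrary parameter values, crude trapezoidal rule over a long enough interval) compares the numerical transform with the closed form (λ/(λ+s))^n.

```python
import math

lam, n, s = 2.0, 4, 1.5

def erlang_pdf(x):
    return lam * (lam * x) ** (n - 1) / math.factorial(n - 1) * math.exp(-lam * x)

# trapezoidal approximation of L_X(s) = integral of exp(-s x) f(x) dx over [0, T]
T, m = 40.0, 100_000
h = T / m
vals = [math.exp(-s * i * h) * erlang_pdf(i * h) for i in range(m + 1)]
numeric = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
closed = (lam / (lam + s)) ** n
```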

Exercise 44 Find the mean of the hyperexponential distribution with the help of the Laplace-transform.

EX = -L'_X(0) = -\left(\sum_{i=1}^{n} p_i \frac{\lambda_i}{\lambda_i+s}\right)' \Big|_{s=0}
 = \sum_{i=1}^{n} p_i \frac{\lambda_i}{(\lambda_i+s)^2} \Big|_{s=0} = \sum_{i=1}^{n} \frac{p_i}{\lambda_i}.

Exercise 45 Find the Laplace-transform of the hypoexponential distribution.

By applying the properties of the Laplace-transform we have

L_{Y_n}(s) = \prod_{i=1}^{n} \frac{\lambda_i}{\lambda_i+s}.

Exercise 46 Find the Laplace-transform of the gamma distribution.

L_X(s) = \int_0^\infty e^{-st} \frac{\lambda(\lambda t)^{\alpha-1}}{\Gamma(\alpha)} e^{-\lambda t}\, dt
 = \left(\frac{\lambda}{\lambda+s}\right)^{\alpha} \int_0^\infty \frac{(\lambda+s)^{\alpha} t^{\alpha-1}}{\Gamma(\alpha)} e^{-(\lambda+s)t}\, dt
 = \left(\frac{\lambda}{\lambda+s}\right)^{\alpha} \int_0^\infty \frac{z^{\alpha-1} e^{-z}}{\Gamma(\alpha)}\, dz
 = \left(\frac{\lambda}{\lambda+s}\right)^{\alpha},

where z = (λ+s)t.

Exercise 47 Show that if X_i ~ Γ(α_i, λ), i = 1, ..., n, are independent random variables, then

Y = \sum_{i=1}^{n} X_i \sim \Gamma\left(\sum_{i=1}^{n} \alpha_i, \lambda\right).

Indeed,

L_Y(s) = \prod_{i=1}^{n} \left(\frac{\lambda}{\lambda+s}\right)^{\alpha_i} = \left(\frac{\lambda}{\lambda+s}\right)^{\sum_{i=1}^{n} \alpha_i},

which proves the statement.
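The closure of the gamma family under summation can also be seen by simulation. The sketch below (arbitrary shapes sharing one rate) uses the standard library's gamma sampler; note that random.gammavariate takes a shape and a scale, so rate λ corresponds to scale 1/λ. The sample mean and variance of the sum are compared with those of Γ(Σα_i, λ), namely Σα_i/λ and Σα_i/λ².

```python
import random

random.seed(5)
lam = 2.0
alphas = [0.5, 1.5, 2.0]      # shape parameters; all share the same rate lam
N = 100_000
total_alpha = sum(alphas)

# random.gammavariate(shape, scale); rate lam corresponds to scale 1/lam
ys = [sum(random.gammavariate(a, 1 / lam) for a in alphas) for _ in range(N)]
mean_y = sum(ys) / N
var_y = sum((y - mean_y) ** 2 for y in ys) / N
```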

Chapter 9

Stochastic Systems

9.1 Poisson Process

Exercise 48 Find the correlation coefficient of the Poisson process.

R(ν(t), ν(t+h)) = \frac{E(ν(t)ν(t+h)) - Eν(t)\, Eν(t+h)}{Dν(t)\, Dν(t+h)},

where DX = \sqrt{Var(X)}. To get E(ν(t)ν(t+h)) we need the following steps. Since ν(t) and the increment ν(t+h) - ν(t) are independent, and ν(t+h) - ν(t) is distributed as ν(h),

E(ν(t)(ν(t+h) - ν(t))) = E(ν(t)ν(t+h)) - Eν^2(t) = Eν(t)\, Eν(h).

Thus

E(ν(t)ν(t+h)) = Eν(t)\, Eν(h) + Eν^2(t).

After substitution we obtain

R(ν(t), ν(t+h)) = \frac{λt \cdot λh + (λt)^2 + λt - λt \cdot λ(t+h)}{\sqrt{λt}\sqrt{λ(t+h)}} = \frac{λt}{\sqrt{λt \cdot λ(t+h)}} = \sqrt{\frac{t}{t+h}}.

Exercise 49 Let us consider a service system in which the inter-arrival times of the customers are exponentially distributed with parameter λ and the service times are also exponentially distributed with parameter µ. Supposing that the involved times are independent of each other, find the distribution of the number of customers arriving during a service.

By the theorem of total probability

P(N_a(S) = k) = \int_0^\infty P(N_a(S) = k \mid S = x) f_S(x)\, dx.

If f_S(x) = µe^{-µx}, then

P(N_a(S) = k) = \int_0^\infty \frac{(λx)^k}{k!} e^{-λx} µe^{-µx}\, dx
 = \frac{λ^k µ}{k!} \underbrace{\int_0^\infty x^k e^{-(λ+µ)x}\, dx}_{\frac{k!}{(λ+µ)^{k+1}}}
 = \frac{µ}{λ+µ}\left(\frac{λ}{λ+µ}\right)^k,

which is a modified geometric distribution with parameter \frac{µ}{λ+µ}.

Exercise 50 Find the mean number of customers arriving during a service having a general distribution.

To solve the problem let us apply the properties of the generating function and the Laplace-transform. We get

G_{N_a(S)}(z) = \sum_{k=0}^{\infty} z^k \int_0^\infty \frac{(λx)^k}{k!} e^{-λx} f_S(x)\, dx
 = \int_0^\infty \sum_{k=0}^{\infty} \frac{(zλx)^k}{k!} e^{-λx} f_S(x)\, dx
 = \int_0^\infty e^{zλx} e^{-λx} f_S(x)\, dx
 = \int_0^\infty e^{-λ(1-z)x} f_S(x)\, dx = L_S(λ(1-z)),

that is

G_{N_a(S)}(z) = L_S(λ(1-z)).

Therefore

E(N_a(S)) = G'_{N_a(S)}(1) = \frac{d}{dz} L_S(λ(1-z))\Big|_{z=1} = -λ L'_S(0) = λ\, ES.

Exercise 51 Let the service times be Erlang distributed random variables with parameters (r, µ). Similarly to the previous problem, find the distribution of the number of customers arriving during a service time if the arrival process remains the same.

P(N_a(S) = k) = \int_0^\infty \frac{(λx)^k}{k!} e^{-λx} \frac{µ(µx)^{r-1}}{(r-1)!} e^{-µx}\, dx
 = \frac{λ^k µ^r}{k!(r-1)!} \underbrace{\int_0^\infty x^{k+r-1} e^{-(λ+µ)x}\, dx}_{\frac{(r+k-1)!}{(λ+µ)^{r+k}}}
 = \binom{r+k-1}{k} \left(\frac{λ}{λ+µ}\right)^k \left(\frac{µ}{λ+µ}\right)^r
 = \binom{r+k-1}{k} (1-p)^k p^r, \qquad where \ p = \frac{µ}{λ+µ},

that is, we get the negative binomial ( Pascal ) distribution with parameters (p, r).

9.2 Some Simple Systems

Exercise 52 Solve the following first-order inhomogeneous linear differential equation

P'_0(t) + (λ+µ)P_0(t) = µ

with initial condition P_0(0) = 1.

The homogeneous part is

P'_0(t) + (λ+µ)P_0(t) = 0,
\frac{P'_0(t)}{P_0(t)} = -(λ+µ),
\ln P_0(t) = -(λ+µ)t + \ln C,
P_0(t) = C e^{-(λ+µ)t}.

A particular solution of the inhomogeneous equation can be obtained by applying the method of variation of parameters ( or variation of constants ), that is, P_0(t) = c(t) e^{-(λ+µ)t}:

c'(t) e^{-(λ+µ)t} - (λ+µ) c(t) e^{-(λ+µ)t} + (λ+µ) c(t) e^{-(λ+µ)t} = µ,
c'(t) = µ e^{(λ+µ)t},
c(t) = \frac{µ}{λ+µ} e^{(λ+µ)t}.

Thus a particular solution is

P_0(t) = \frac{µ}{λ+µ} e^{(λ+µ)t} e^{-(λ+µ)t} = \frac{µ}{λ+µ}.

Hence for the general solution we have

P_0(t) = C e^{-(λ+µ)t} + \frac{µ}{λ+µ}.

Taking into account the initial condition P_0(0) = 1 we obtain C + \frac{µ}{λ+µ} = 1, thus C = \frac{λ}{λ+µ}. Therefore

P_0(t) = \frac{λ}{λ+µ} e^{-(λ+µ)t} + \frac{µ}{λ+µ},
P_1(t) = 1 - P_0(t) = \frac{λ}{λ+µ} - \frac{λ}{λ+µ} e^{-(λ+µ)t}.

If the initial condition is P_0(0) = 0, then the solution is

P_0(t) = \frac{µ}{λ+µ} - \frac{µ}{λ+µ} e^{-(λ+µ)t},
P_1(t) = \frac{µ}{λ+µ} e^{-(λ+µ)t} + \frac{λ}{λ+µ}.

Taking the limit as t → ∞, for the steady-state distribution we get

P_0 = \lim_{t→∞} P_0(t) = \frac{µ}{λ+µ}, \qquad P_1 = \lim_{t→∞} P_1(t) = \frac{λ}{λ+µ}.

Exercise 53 Find the probability that at time t exactly k components are operating, provided that at the beginning n components were operational and m were failed.

Each initially operating component is operating at time t with probability \frac{µ}{λ+µ} + \frac{λ}{λ+µ} e^{-(λ+µ)t}, and each initially failed component is operating at time t with probability \frac{µ}{λ+µ} - \frac{µ}{λ+µ} e^{-(λ+µ)t}. Hence

P_{1,k}(t) = \sum_{l} \binom{n}{l} \left(\frac{µ}{λ+µ} + \frac{λ}{λ+µ} e^{-(λ+µ)t}\right)^l \left(\frac{λ}{λ+µ} - \frac{λ}{λ+µ} e^{-(λ+µ)t}\right)^{n-l}
 \times \binom{m}{k-l} \left(\frac{µ}{λ+µ} - \frac{µ}{λ+µ} e^{-(λ+µ)t}\right)^{k-l} \left(\frac{λ}{λ+µ} + \frac{µ}{λ+µ} e^{-(λ+µ)t}\right)^{m-(k-l)}.

For the steady-state distribution we have

\lim_{t→∞} P_{1,k}(t) = \sum_{l} \binom{n}{l}\binom{m}{k-l} \left(\frac{µ}{λ+µ}\right)^k \left(\frac{λ}{λ+µ}\right)^{n+m-k}
 = \binom{n+m}{k} \left(\frac{µ}{λ+µ}\right)^k \left(\frac{λ}{λ+µ}\right)^{n+m-k},

using the Vandermonde identity \sum_l \binom{n}{l}\binom{m}{k-l} = \binom{n+m}{k}.
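The transient solution of Exercise 52 can be checked by integrating the differential equation numerically. The sketch below (arbitrary rates, forward Euler with a small step) compares the numerical value at t = 2 with the closed form obtained above.

```python
import math

lam, mu = 1.0, 3.0

def P0_exact(t):
    # closed-form solution with initial condition P0(0) = 1
    return mu / (lam + mu) + lam / (lam + mu) * math.exp(-(lam + mu) * t)

# forward Euler for P0'(t) = mu - (lam + mu) * P0(t)
h, steps = 1e-4, 20_000          # integrate up to t = 2
P0 = 1.0
for _ in range(steps):
    P0 += h * (mu - (lam + mu) * P0)

t_end = h * steps
```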

Cold reserve

Exercise 54 Let us consider a system containing a main component whose operating time is exponentially distributed with parameter λ. As soon as it fails, a reserve unit starts operating in the same probabilistic manner. There are 2 repairmen and the repair times are supposed to be exponentially distributed random variables with parameter µ. Assuming that the involved random variables are independent, find the main steady-state performance measures of the system.

It is easy to see that the transition rates are as shown in Figure 9.1.

Figure 9.1: Cold reserve

In the stationary case let us introduce the usual notation, that is, denote by P_i the probability that i components are failed. Then for the balance equations we have

λP_0 = µP_1,
(λ+µ)P_1 = λP_0 + 2µP_2,
2µP_2 = λP_1.

The solution can be obtained up to a multiplicative constant:

P_1 = \frac{λ}{µ} P_0, \qquad P_2 = \frac{λ}{2µ} P_1 = \frac{λ^2}{2µ^2} P_0,

which must satisfy the normalizing condition, so

P_0 = \frac{2µ^2}{2µ^2 + 2λµ + λ^2} = \frac{1}{1 + ϱ + \frac{ϱ^2}{2}}, \qquad where \ ϱ = \frac{λ}{µ}.

The mean operating time of the system is

E(O) = \frac{1 - P_2}{P_2} E(S) = \frac{1 - P_2}{2µ P_2},

since the mean repair period is E(S) = \frac{1}{2µ}.
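Closed forms like the one above are easy to sanity-check: the candidate probabilities must satisfy all balance equations and the normalizing condition. The sketch below does this for the cold-reserve system with arbitrary rates.

```python
lam, mu = 1.5, 2.0
rho = lam / mu

P0 = 1 / (1 + rho + rho ** 2 / 2)
P1 = rho * P0
P2 = rho ** 2 / 2 * P0

# the three balance equations and the normalizing condition
assert abs(lam * P0 - mu * P1) < 1e-12
assert abs((lam + mu) * P1 - (lam * P0 + 2 * mu * P2)) < 1e-12
assert abs(2 * mu * P2 - lam * P1) < 1e-12
assert abs(P0 + P1 + P2 - 1) < 1e-12
```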

Figure 9.2: Warm reserve

Warm reserve

Exercise 55 In this case both components can operate simultaneously, but the reserve's failure rate is λ_1 (λ_1 < λ). Find the usual performance measures.

In this case the balance equations are

(λ+λ_1)P_0 = µP_1,
(λ+µ)P_1 = (λ+λ_1)P_0 + 2µP_2,
2µP_2 = λP_1.

The solution is

P_1 = \frac{λ+λ_1}{µ} P_0, \qquad P_2 = \frac{λ}{2µ} P_1 = \frac{(λ+λ_1)λ}{2µ^2} P_0,

where

P_0 = \frac{1}{1 + \frac{λ+λ_1}{µ} + \frac{(λ+λ_1)λ}{2µ^2}}.

Finally

E(O) = \frac{1 - P_2}{2µ P_2}.

Exercise 56 Let us consider a component which, in case of failure, needs a detection time before repair can start. This time is supposed to be an exponentially distributed random variable with parameter ν. Find the steady-state distribution of the system.

Figure 9.3: Detection time

Similarly to the previous parts, it is easy to see that the steady-state balance equations are

λP_0 = µP_2, \qquad νP_1 = λP_0, \qquad µP_2 = νP_1.

Thus

P_1 = \frac{λ}{ν} P_0, \qquad P_2 = \frac{ν}{µ} P_1 = \frac{ν}{µ}\frac{λ}{ν} P_0 = \frac{λ}{µ} P_0.

Using the normalizing condition P_0 + P_1 + P_2 = 1 we have

P_0 = \frac{1}{1 + \frac{λ}{ν} + \frac{λ}{µ}} = \frac{νµ}{νµ + λ(µ+ν)}.

Exercise 57 Assume that we have a two-component parallel-redundant system with a single repair facility. The operating times of both components are supposed to be exponentially distributed random variables with parameter λ, and the repair times are also exponentially distributed with parameter µ. When both components have failed, the system is considered to have failed and no recovery is possible; in other words, it is a parallel system with repair. Let us denote by 0, 1, 2 the number of failed components and let the system start from state 0, that is, both components operating. Find the mean time to the first system failure, supposing that the involved random variables are independent of each other.

Figure 9.4: Parallel-redundant system

The transient probability distribution of the system can be obtained from the following equations

P_0(t+h) = P_0(t)(1 - 2λh + o(h)) + P_1(t)(µh + o(h)) + o(h),
P_1(t+h) = P_1(t)(1 - (λ+µ)h + o(h)) + P_0(t)(2λh + o(h)) + o(h),
P_2(t+h) = P_2(t) + P_1(t)(λh + o(h)) + o(h).

For the differential equations we have

P'_0(t) = -2λP_0(t) + µP_1(t),
P'_1(t) = -(λ+µ)P_1(t) + 2λP_0(t),
P'_2(t) = λP_1(t),

with the initial conditions P_0(0) = 1, P_1(0) = 0, P_2(0) = 0. In this case, however, we need the time-dependent solution, that is, we have to solve the system of differential equations. Using the Laplace-transform we obtain

sP^*_0(s) - 1 = -2λP^*_0(s) + µP^*_1(s),
sP^*_1(s) = -(λ+µ)P^*_1(s) + 2λP^*_0(s),
sP^*_2(s) = λP^*_1(s).

After simple calculations we get

P^*_1(s) = \frac{s}{λ} P^*_2(s), \qquad P^*_0(s) = \frac{s(s+λ+µ)}{2λ^2} P^*_2(s).

Since P_0(t) + P_1(t) + P_2(t) = 1, it is clear that P^*_0(s) + P^*_1(s) + P^*_2(s) = \frac{1}{s}, thus we have

P^*_2(s) = \frac{1}{s\left(1 + \frac{s}{λ} + \frac{s^2 + (λ+µ)s}{2λ^2}\right)} = \frac{2λ^2}{s(s^2 + (3λ+µ)s + 2λ^2)}.

By inversion P_2(t) can be obtained; this is the probability that at time t the system is failed, since there is no operating component. Let Y denote the time to the first system failure. Then P_2(t) is exactly the probability that the operating time of the system is less than t. Hence the reliability function of the system is R(t) = 1 - P_2(t), thus R'(t) = -P'_2(t), and

P(Y < t) = P_2(t), \qquad f_Y(t) = P'_2(t).

Using the properties of the Laplace-transform we get

(P'_2)^*(s) = sP^*_2(s) - P_2(0) = \frac{2λ^2}{s^2 + (3λ+µ)s + 2λ^2}.

The denominator can be written in the form (s + a_1)(s + a_2), where

a_{1,2} = \frac{(3λ+µ) ± \sqrt{λ^2 + 6λµ + µ^2}}{2},

so that a_1 a_2 = 2λ^2 and a_1 + a_2 = 3λ + µ. By the method of partial fractions

\frac{2λ^2}{(s + a_1)(s + a_2)} = \frac{2λ^2}{a_1 - a_2}\left(\frac{1}{s + a_2} - \frac{1}{s + a_1}\right).

Hence

f_Y(t) = \frac{2λ^2}{a_1 - a_2}\left(e^{-a_2 t} - e^{-a_1 t}\right),

since if f^*(s) = \frac{1}{s+a}, then f(t) = e^{-at}. The mean time to the first system failure can be computed in the following way

E(Y) = \int_0^\infty y f_Y(y)\, dy = \frac{2λ^2}{a_1 - a_2}\left(\int_0^\infty y e^{-a_2 y}\, dy - \int_0^\infty y e^{-a_1 y}\, dy\right)
 = \frac{2λ^2}{a_1 - a_2}\left(\frac{1}{a_2^2} - \frac{1}{a_1^2}\right) = \frac{2λ^2 (a_1 + a_2)}{(a_1 a_2)^2} = \frac{2λ^2 (3λ+µ)}{(2λ^2)^2} = \underbrace{\frac{3}{2λ}}_{without \ repair} + \underbrace{\frac{µ}{2λ^2}}_{increase}.

It should be noted that E(Y) can be determined without the density function, since its Laplace-transform is known. Thus

E(Y) = -\frac{d (P'_2)^*(s)}{ds}\Big|_{s=0} = \frac{2λ^2 (2s + 3λ + µ)}{(s^2 + (3λ+µ)s + 2λ^2)^2}\Big|_{s=0} = \frac{2λ^2 (3λ+µ)}{4λ^4} = \frac{3}{2λ} + \frac{µ}{2λ^2}.

In the case when the components are not repaired, that is when µ = 0, we get the parallel system treated earlier. After substitution we have a_1 = 2λ, a_2 = λ, and then

f^*_Y(s) = \frac{2λ^2}{(s+λ)(s+2λ)} = \frac{2λ}{s+2λ} \cdot \frac{λ}{s+λ}.

This can be interpreted as follows. The system failure time is the sum of the time to the first failure of a component and the residual operating time of the second component. The first failure time is exponentially distributed with parameter 2λ and the remaining time is exponentially distributed with parameter λ; furthermore, they are independent of each other.

Exercise 58 Let us modify the previous system in the following way. The repairs are carried out by 2 repairmen, and assume that repair starts only when both components have failed. Find the steady-state characteristics of the system.

Let us introduce the following notations

Figure 9.5: Exercise 58

0 - both components are operating
1 - one component is failed, there is no repair
2 - two components are failed, both are under repair
3 - one component is under repair, the other is operating.

It is easy to see that the steady-state balance equations are

2λP_0 = µP_3,
λP_1 = 2λP_0,
2µP_2 = λP_1 + λP_3,
(λ+µ)P_3 = 2µP_2.

For the solution we obtain

P_3 = \frac{2λ}{µ} P_0, \qquad P_1 = 2P_0, \qquad P_2 = \frac{λ+µ}{2µ} P_3 = \frac{(λ+µ)λ}{µ^2} P_0.

Using the normalizing condition we have

P_0 = \frac{1}{3 + \frac{2λ}{µ} + \frac{(λ+µ)λ}{µ^2}} = \frac{µ^2}{3µ^2 + λ^2 + 3λµ}.

The availability A of the system is

A = 1 - P_2 = \frac{3µ^2 + 2λµ}{3µ^2 + λ^2 + 3λµ}, \qquad since \ P_2 = \frac{(λ+µ)λ}{3µ^2 + λ^2 + 3λµ}.

Furthermore, for the mean operating time of the system we get

E(O) = \frac{1 - P_2}{2µ P_2} = \frac{2λ + 3µ}{2λ(λ+µ)}.
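As before, the steady-state probabilities and the derived measures can be verified mechanically: the candidate solution has to satisfy every balance equation, the normalization, and the closed forms for A and E(O). The sketch below does this with arbitrary rates.

```python
lam, mu = 1.2, 3.0
den = 3 * mu ** 2 + lam ** 2 + 3 * lam * mu

P0 = mu ** 2 / den
P1 = 2 * P0
P3 = 2 * lam / mu * P0
P2 = (lam + mu) * lam / mu ** 2 * P0

# balance equations and normalization
assert abs(2 * lam * P0 - mu * P3) < 1e-12
assert abs(2 * mu * P2 - (lam * P1 + lam * P3)) < 1e-12
assert abs((lam + mu) * P3 - 2 * mu * P2) < 1e-12
assert abs(P0 + P1 + P2 + P3 - 1) < 1e-12

# availability and mean operating time
A = 1 - P2
E_O = (1 - P2) / (2 * mu * P2)
assert abs(A - (3 * mu ** 2 + 2 * lam * mu) / den) < 1e-12
assert abs(E_O - (2 * lam + 3 * mu) / (2 * lam * (lam + mu))) < 1e-12
```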

Appendix

In this Appendix some properties of the generating function, sometimes called the z-transform, and of the Laplace-transform are listed. More properties can be found, for example, in Kleinrock [5].

Some properties of the generating function

Sequence → Generating function
1. f_n, n = 0, 1, 2, ... → G(z) = \sum_{n=0}^{\infty} f_n z^n
2. af_n + bg_n → aG(z) + bH(z)
3. a^n f_n → G(az)
4. f_{n/k}, n = 0, k, 2k, ... → G(z^k)
5. f_{n+k}, k > 0 → z^{-k}\left(G(z) - \sum_{i=0}^{k-1} f_i z^i\right)
6. f_{n-k}, k > 0 → z^k G(z)
7. n(n-1)\cdots(n-m+1) f_n → z^m \frac{d^m G(z)}{dz^m}
8. f_n * g_n = \sum_{k=0}^{n} f_{n-k} g_k → G(z)H(z)
9. f_n - f_{n-1} → (1-z)G(z)
10. \sum_{k=0}^{n} f_k, n = 0, 1, 2, ... → \frac{G(z)}{1-z}
11. a^n → \frac{1}{1-az}
12. Series sum property: G(1) = \sum_{n=0}^{\infty} f_n
13. Alternating sum property: G(-1) = \sum_{n=0}^{\infty} (-1)^n f_n
14. Initial value theorem: G(0) = f_0
15. Intermediate value theorem: f_n = \frac{1}{n!} \frac{d^n G(z)}{dz^n}\Big|_{z=0}
16. Final value theorem: \lim_{z→1} (1-z)G(z) = \lim_{n→∞} f_n
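Two of the listed properties, the convolution rule and the partial-sum rule, can be illustrated with small finite sequences. The sketch below evaluates the generating functions directly at an arbitrary point z.

```python
# convolution property: the generating function of f*g is G(z)H(z)
f = [1.0, 2.0, 3.0]
g = [4.0, 5.0]
conv = [sum(f[k] * g[n - k] for k in range(len(f)) if 0 <= n - k < len(g))
        for n in range(len(f) + len(g) - 1)]

z = 0.3
G = sum(c * z ** n for n, c in enumerate(f))
H = sum(c * z ** n for n, c in enumerate(g))
C = sum(c * z ** n for n, c in enumerate(conv))
assert abs(C - G * H) < 1e-12

# partial-sum property: the generating function of the cumulative sums is G(z)/(1-z)
# here the partial-sum sequence is 1, 3, 6, 6, 6, ... so its tail is geometric
partial = [sum(f[: n + 1]) for n in range(len(f))]
S = sum(c * z ** n for n, c in enumerate(partial)) + sum(f) * z ** 3 / (1 - z)
assert abs(S - G / (1 - z)) < 1e-12
```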

Some properties of the Laplace-transform

Function → Transform
1. f(t), t ≥ 0 → f^*(s) = \int_0^{\infty} f(t) e^{-st}\, dt
2. af(t) + bg(t) → af^*(s) + bg^*(s)
3. f(t/a), (a > 0) → a f^*(as)
4. f(t-a) → e^{-as} f^*(s)
5. e^{-at} f(t) → f^*(s+a)
6. t^n f(t) → (-1)^n \frac{d^n f^*(s)}{ds^n}
7. \frac{f(t)}{t} → \int_s^{\infty} f^*(u)\, du
8. \frac{f(t)}{t^n} → \int_s^{\infty} \int_{s_1}^{\infty} \cdots \int_{s_{n-1}}^{\infty} f^*(s_n)\, ds_n \cdots ds_2\, ds_1
9. f(t) * g(t) = \int_0^t f(t-x) g(x)\, dx → f^*(s) g^*(s)
10. \frac{df(t)}{dt} → s f^*(s) - f(0)
11. \frac{d^n f(t)}{dt^n} → s^n f^*(s) - s^{n-1} f(0) - s^{n-2} f'(0) - ... - f^{(n-1)}(0)
12. \frac{\partial f(t,a)}{\partial a} (a is a parameter) → \frac{\partial f^*(s,a)}{\partial a}
13. Integral property: f^*(0) = \int_0^{\infty} f(t)\, dt
14. Initial value theorem: \lim_{s→∞} s f^*(s) = \lim_{t→0} f(t)
15. Final value theorem: \lim_{s→0} s f^*(s) = \lim_{t→∞} f(t), if s f^*(s) is analytic for Re(s) ≥ 0
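The initial and final value theorems listed above can be illustrated with the transient solution P_0(t) of Exercise 52, whose transform follows term by term from the basic rules (arbitrary rates below; the transform of a constant c is c/s and that of c·e^{-(λ+µ)t} is c/(s+λ+µ)).

```python
lam, mu = 1.0, 3.0

def f_star(s):
    # transform of P0(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu) t)
    return (mu / (lam + mu)) / s + (lam / (lam + mu)) / (s + lam + mu)

# final value theorem: s f*(s) -> P0(infinity) = mu/(lam+mu) as s -> 0
small = 1e-8
final_value = small * f_star(small)

# initial value theorem: s f*(s) -> P0(0) = 1 as s -> infinity
big = 1e8
initial_value = big * f_star(big)
```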

Bibliography

[1] Allen, A. O. Probability, Statistics, and Queueing Theory with Computer Science Applications, 2nd ed. Academic Press, Boston, MA, 1990.

[2] Gnedenko, B., Belyayev, J., and Solovyev, A. Mathematical Methods of Reliability Theory. Academic Press, 1969.

[3] Jain, R. The Art of Computer Systems Performance Analysis. John Wiley and Sons, 1991.

[4] Karlin, S., and Taylor, H. An Introduction to Stochastic Modeling. Harcourt, New York, 1998.

[5] Kleinrock, L. Queueing Systems, Volume I: Theory. John Wiley and Sons, 1975.

[6] Kobayashi, H., and Mark, B. System Modeling and Analysis: Foundations of System Performance Evaluation. Pearson Education Inc., Upper Saddle River, 2008.

[7] Ovcharov, L., and Wentzel, E. Applied Problems in Probability Theory. Mir Publishers, Moscow, 1986.

[8] Ravichandran, N. Stochastic Methods in Reliability Theory. John Wiley and Sons, 1990.

[9] Rényi, A. Probability Theory. Dover Publications, 2007.

[10] Ross, S. M. Introduction to Probability Models. Academic Press, Boston, 1989.

[11] Stewart, W. Probability, Markov Chains, Queues, and Simulation. Princeton University Press, Princeton, 2009.

[12] Sztrik, J. Introduction to Theory of Queues and Its Applications (in Hungarian). Kossuth Egyetemi Kiadó, Debrecen, 2000.

[13] Tijms, H. A First Course in Stochastic Models. Wiley & Son, Chichester, 2003.

[14] Trivedi, K. Probability and Statistics with Reliability, Queueing and Computer Science Applications, 2nd ed. John Wiley & Sons, 2002.


1.1 Introduction, and Review of Probability Theory... 3. 1.1.1 Random Variable, Range, Types of Random Variables... 3. 1.1.2 CDF, PDF, Quantiles... MATH4427 Notebook 1 Spring 2016 prepared by Professor Jenny Baglivo c Copyright 2009-2016 by Jenny A. Baglivo. All Rights Reserved. Contents 1 MATH4427 Notebook 1 3 1.1 Introduction, and Review of Probability

More information

Statistics 100A Homework 8 Solutions

Statistics 100A Homework 8 Solutions Part : Chapter 7 Statistics A Homework 8 Solutions Ryan Rosario. A player throws a fair die and simultaneously flips a fair coin. If the coin lands heads, then she wins twice, and if tails, the one-half

More information

Stochastic Inventory Control

Stochastic Inventory Control Chapter 3 Stochastic Inventory Control 1 In this chapter, we consider in much greater details certain dynamic inventory control problems of the type already encountered in section 1.3. In addition to the

More information

1 Sufficient statistics

1 Sufficient statistics 1 Sufficient statistics A statistic is a function T = rx 1, X 2,, X n of the random sample X 1, X 2,, X n. Examples are X n = 1 n s 2 = = X i, 1 n 1 the sample mean X i X n 2, the sample variance T 1 =

More information

Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page

Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page Errata for ASM Exam C/4 Study Manual (Sixteenth Edition) Sorted by Page 1 Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page Practice exam 1:9, 1:22, 1:29, 9:5, and 10:8

More information

Probability Theory. Florian Herzog. A random variable is neither random nor variable. Gian-Carlo Rota, M.I.T..

Probability Theory. Florian Herzog. A random variable is neither random nor variable. Gian-Carlo Rota, M.I.T.. Probability Theory A random variable is neither random nor variable. Gian-Carlo Rota, M.I.T.. Florian Herzog 2013 Probability space Probability space A probability space W is a unique triple W = {Ω, F,

More information

1 The Brownian bridge construction

1 The Brownian bridge construction The Brownian bridge construction The Brownian bridge construction is a way to build a Brownian motion path by successively adding finer scale detail. This construction leads to a relatively easy proof

More information

Math 461 Fall 2006 Test 2 Solutions

Math 461 Fall 2006 Test 2 Solutions Math 461 Fall 2006 Test 2 Solutions Total points: 100. Do all questions. Explain all answers. No notes, books, or electronic devices. 1. [105+5 points] Assume X Exponential(λ). Justify the following two

More information

Performance Analysis of a Telephone System with both Patient and Impatient Customers

Performance Analysis of a Telephone System with both Patient and Impatient Customers Performance Analysis of a Telephone System with both Patient and Impatient Customers Yiqiang Quennel Zhao Department of Mathematics and Statistics University of Winnipeg Winnipeg, Manitoba Canada R3B 2E9

More information

Lectures on Stochastic Processes. William G. Faris

Lectures on Stochastic Processes. William G. Faris Lectures on Stochastic Processes William G. Faris November 8, 2001 2 Contents 1 Random walk 7 1.1 Symmetric simple random walk................... 7 1.2 Simple random walk......................... 9 1.3

More information

Sequences and Series

Sequences and Series Sequences and Series Consider the following sum: 2 + 4 + 8 + 6 + + 2 i + The dots at the end indicate that the sum goes on forever. Does this make sense? Can we assign a numerical value to an infinite

More information

Methods of Solution of Selected Differential Equations Carol A. Edwards Chandler-Gilbert Community College

Methods of Solution of Selected Differential Equations Carol A. Edwards Chandler-Gilbert Community College Methods of Solution of Selected Differential Equations Carol A. Edwards Chandler-Gilbert Community College Equations of Order One: Mdx + Ndy = 0 1. Separate variables. 2. M, N homogeneous of same degree:

More information

STAT 830 Convergence in Distribution

STAT 830 Convergence in Distribution STAT 830 Convergence in Distribution Richard Lockhart Simon Fraser University STAT 830 Fall 2011 Richard Lockhart (Simon Fraser University) STAT 830 Convergence in Distribution STAT 830 Fall 2011 1 / 31

More information

CHAPTER IV - BROWNIAN MOTION

CHAPTER IV - BROWNIAN MOTION CHAPTER IV - BROWNIAN MOTION JOSEPH G. CONLON 1. Construction of Brownian Motion There are two ways in which the idea of a Markov chain on a discrete state space can be generalized: (1) The discrete time

More information

6.041/6.431 Spring 2008 Quiz 2 Wednesday, April 16, 7:30-9:30 PM. SOLUTIONS

6.041/6.431 Spring 2008 Quiz 2 Wednesday, April 16, 7:30-9:30 PM. SOLUTIONS 6.4/6.43 Spring 28 Quiz 2 Wednesday, April 6, 7:3-9:3 PM. SOLUTIONS Name: Recitation Instructor: TA: 6.4/6.43: Question Part Score Out of 3 all 36 2 a 4 b 5 c 5 d 8 e 5 f 6 3 a 4 b 6 c 6 d 6 e 6 Total

More information

Chapter 2: Binomial Methods and the Black-Scholes Formula

Chapter 2: Binomial Methods and the Black-Scholes Formula Chapter 2: Binomial Methods and the Black-Scholes Formula 2.1 Binomial Trees We consider a financial market consisting of a bond B t = B(t), a stock S t = S(t), and a call-option C t = C(t), where the

More information

Aggregate Loss Models

Aggregate Loss Models Aggregate Loss Models Chapter 9 Stat 477 - Loss Models Chapter 9 (Stat 477) Aggregate Loss Models Brian Hartman - BYU 1 / 22 Objectives Objectives Individual risk model Collective risk model Computing

More information

Estimating the Degree of Activity of jumps in High Frequency Financial Data. joint with Yacine Aït-Sahalia

Estimating the Degree of Activity of jumps in High Frequency Financial Data. joint with Yacine Aït-Sahalia Estimating the Degree of Activity of jumps in High Frequency Financial Data joint with Yacine Aït-Sahalia Aim and setting An underlying process X = (X t ) t 0, observed at equally spaced discrete times

More information

Introduction. Appendix D Mathematical Induction D1

Introduction. Appendix D Mathematical Induction D1 Appendix D Mathematical Induction D D Mathematical Induction Use mathematical induction to prove a formula. Find a sum of powers of integers. Find a formula for a finite sum. Use finite differences to

More information

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include 2 + 5.

PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include 2 + 5. PUTNAM TRAINING POLYNOMIALS (Last updated: November 17, 2015) Remark. This is a list of exercises on polynomials. Miguel A. Lerma Exercises 1. Find a polynomial with integral coefficients whose zeros include

More information

SPARE PARTS INVENTORY SYSTEMS UNDER AN INCREASING FAILURE RATE DEMAND INTERVAL DISTRIBUTION

SPARE PARTS INVENTORY SYSTEMS UNDER AN INCREASING FAILURE RATE DEMAND INTERVAL DISTRIBUTION SPARE PARS INVENORY SYSEMS UNDER AN INCREASING FAILURE RAE DEMAND INERVAL DISRIBUION Safa Saidane 1, M. Zied Babai 2, M. Salah Aguir 3, Ouajdi Korbaa 4 1 National School of Computer Sciences (unisia),

More information

Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh

Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Modern Optimization Methods for Big Data Problems MATH11146 The University of Edinburgh Peter Richtárik Week 3 Randomized Coordinate Descent With Arbitrary Sampling January 27, 2016 1 / 30 The Problem

More information

RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS

RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS. DISCRETE RANDOM VARIABLES.. Definition of a Discrete Random Variable. A random variable X is said to be discrete if it can assume only a finite or countable

More information

Chapter 4 Lecture Notes

Chapter 4 Lecture Notes Chapter 4 Lecture Notes Random Variables October 27, 2015 1 Section 4.1 Random Variables A random variable is typically a real-valued function defined on the sample space of some experiment. For instance,

More information

9.2 Summation Notation

9.2 Summation Notation 9. Summation Notation 66 9. Summation Notation In the previous section, we introduced sequences and now we shall present notation and theorems concerning the sum of terms of a sequence. We begin with a

More information

Simple Markovian Queueing Systems

Simple Markovian Queueing Systems Chapter 4 Simple Markovian Queueing Systems Poisson arrivals and exponential service make queueing models Markovian that are easy to analyze and get usable results. Historically, these are also the models

More information

Probability density function : An arbitrary continuous random variable X is similarly described by its probability density function f x = f X

Probability density function : An arbitrary continuous random variable X is similarly described by its probability density function f x = f X Week 6 notes : Continuous random variables and their probability densities WEEK 6 page 1 uniform, normal, gamma, exponential,chi-squared distributions, normal approx'n to the binomial Uniform [,1] random

More information

CONTINUOUS COUNTERPARTS OF POISSON AND BINOMIAL DISTRIBUTIONS AND THEIR PROPERTIES

CONTINUOUS COUNTERPARTS OF POISSON AND BINOMIAL DISTRIBUTIONS AND THEIR PROPERTIES Annales Univ. Sci. Budapest., Sect. Comp. 39 213 137 147 CONTINUOUS COUNTERPARTS OF POISSON AND BINOMIAL DISTRIBUTIONS AND THEIR PROPERTIES Andrii Ilienko Kiev, Ukraine Dedicated to the 7 th anniversary

More information

Metric Spaces. Chapter 7. 7.1. Metrics

Metric Spaces. Chapter 7. 7.1. Metrics Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

More information

Survival Distributions, Hazard Functions, Cumulative Hazards

Survival Distributions, Hazard Functions, Cumulative Hazards Week 1 Survival Distributions, Hazard Functions, Cumulative Hazards 1.1 Definitions: The goals of this unit are to introduce notation, discuss ways of probabilistically describing the distribution of a

More information

Basic Queueing Theory

Basic Queueing Theory Basic Queueing Theory Dr. János Sztrik University of Debrecen, Faculty of Informatics Reviewers: Dr. József Bíró Doctor of the Hungarian Academy of Sciences, Full Professor Budapest University of Technology

More information

MATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform

MATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform MATH 433/533, Fourier Analysis Section 11, The Discrete Fourier Transform Now, instead of considering functions defined on a continuous domain, like the interval [, 1) or the whole real line R, we wish

More information

Single item inventory control under periodic review and a minimum order quantity

Single item inventory control under periodic review and a minimum order quantity Single item inventory control under periodic review and a minimum order quantity G. P. Kiesmüller, A.G. de Kok, S. Dabia Faculty of Technology Management, Technische Universiteit Eindhoven, P.O. Box 513,

More information

LogNormal stock-price models in Exams MFE/3 and C/4

LogNormal stock-price models in Exams MFE/3 and C/4 Making sense of... LogNormal stock-price models in Exams MFE/3 and C/4 James W. Daniel Austin Actuarial Seminars http://www.actuarialseminars.com June 26, 2008 c Copyright 2007 by James W. Daniel; reproduction

More information

Portfolio Distribution Modelling and Computation. Harry Zheng Department of Mathematics Imperial College [email protected]

Portfolio Distribution Modelling and Computation. Harry Zheng Department of Mathematics Imperial College h.zheng@imperial.ac.uk Portfolio Distribution Modelling and Computation Harry Zheng Department of Mathematics Imperial College [email protected] Workshop on Fast Financial Algorithms Tanaka Business School Imperial College

More information

RANDOM INTERVAL HOMEOMORPHISMS. MICHA L MISIUREWICZ Indiana University Purdue University Indianapolis

RANDOM INTERVAL HOMEOMORPHISMS. MICHA L MISIUREWICZ Indiana University Purdue University Indianapolis RANDOM INTERVAL HOMEOMORPHISMS MICHA L MISIUREWICZ Indiana University Purdue University Indianapolis This is a joint work with Lluís Alsedà Motivation: A talk by Yulij Ilyashenko. Two interval maps, applied

More information

Martingale Ideas in Elementary Probability. Lecture course Higher Mathematics College, Independent University of Moscow Spring 1996

Martingale Ideas in Elementary Probability. Lecture course Higher Mathematics College, Independent University of Moscow Spring 1996 Martingale Ideas in Elementary Probability Lecture course Higher Mathematics College, Independent University of Moscow Spring 1996 William Faris University of Arizona Fulbright Lecturer, IUM, 1995 1996

More information

a 1 x + a 0 =0. (3) ax 2 + bx + c =0. (4)

a 1 x + a 0 =0. (3) ax 2 + bx + c =0. (4) ROOTS OF POLYNOMIAL EQUATIONS In this unit we discuss polynomial equations. A polynomial in x of degree n, where n 0 is an integer, is an expression of the form P n (x) =a n x n + a n 1 x n 1 + + a 1 x

More information

Alternative Price Processes for Black-Scholes: Empirical Evidence and Theory

Alternative Price Processes for Black-Scholes: Empirical Evidence and Theory Alternative Price Processes for Black-Scholes: Empirical Evidence and Theory Samuel W. Malone April 19, 2002 This work is supported by NSF VIGRE grant number DMS-9983320. Page 1 of 44 1 Introduction This

More information

Generating Functions

Generating Functions Chapter 10 Generating Functions 10.1 Generating Functions for Discrete Distributions So far we have considered in detail only the two most important attributes of a random variable, namely, the mean and

More information

Message-passing sequential detection of multiple change points in networks

Message-passing sequential detection of multiple change points in networks Message-passing sequential detection of multiple change points in networks Long Nguyen, Arash Amini Ram Rajagopal University of Michigan Stanford University ISIT, Boston, July 2012 Nguyen/Amini/Rajagopal

More information

CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e.

CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. This chapter contains the beginnings of the most important, and probably the most subtle, notion in mathematical analysis, i.e.,

More information

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2014 Timo Koski () Mathematisk statistik 24.09.2014 1 / 75 Learning outcomes Random vectors, mean vector, covariance

More information

Numerical methods for American options

Numerical methods for American options Lecture 9 Numerical methods for American options Lecture Notes by Andrzej Palczewski Computational Finance p. 1 American options The holder of an American option has the right to exercise it at any moment

More information

Black-Scholes Option Pricing Model

Black-Scholes Option Pricing Model Black-Scholes Option Pricing Model Nathan Coelen June 6, 22 1 Introduction Finance is one of the most rapidly changing and fastest growing areas in the corporate business world. Because of this rapid change,

More information

Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations

Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations 56 Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations Stochastic Processes and Queueing Theory used in Cloud Computer Performance Simulations Florin-Cătălin ENACHE

More information

Mathematical Finance

Mathematical Finance Mathematical Finance Option Pricing under the Risk-Neutral Measure Cory Barnes Department of Mathematics University of Washington June 11, 2013 Outline 1 Probability Background 2 Black Scholes for European

More information