Modeling and Analysis of Information Technology Systems



Modeling and Analysis of Information Technology Systems Dr. János Sztrik University of Debrecen, Faculty of Informatics

Reviewers: Dr. József Bíró Doctor of the Hungarian Academy of Sciences, Full Professor Budapest University of Technology and Economics Dr. Zalán Heszberger PhD, Associate Professor Budapest University of Technology and Economics

This book is dedicated to my wife without whom this work could have been finished much earlier.

If anything can go wrong, it will. If you change queues, the one you have left will start to move faster than the one you are in now. Your queue always goes the slowest. Whatever queue you join, no matter how short it looks, it will always take the longest for you to get served. (Murphy's Laws on reliability and queueing)


Contents

Preface

Part I Modeling and Analysis of Information Technology Systems

1 Basic Concepts from Probability Theory
1.1 Brief Summary
1.2 Some Important Discrete Probability Distributions
1.3 Some Important Continuous Probability Distributions

2 Fundamentals of Stochastic Modeling
2.1 Distributions Related to the Exponential Distribution
2.2 Basics of Reliability Theory
2.3 Generation of Random Numbers
2.4 Random Sums

3 Analytic Tools, Transforms
3.1 Generating Function
3.2 Laplace-Transform

4 Stochastic Systems
4.1 Poisson Process
4.2 Performance Analysis of Some Simple Systems

5 Continuous-Time Markov Chains
5.1 Birth-Death Processes

Part II Exercises

6 Basic Concepts from Probability Theory
6.1 Discrete Probability Distributions
6.2 Continuous Probability Distributions

7 Fundamentals of Stochastic Modeling
7.1 Exponential Distribution and Related Distributions
7.2 Basics of Reliability Theory
7.3 Random Sums

8 Analytic Tools, Transforms
8.1 Generating Function
8.2 Laplace-Transform

9 Stochastic Systems
9.1 Poisson Process
9.2 Some Simple Systems

Appendix

Bibliography

Preface

Modern information technologies require innovations that are based on modeling, analyzing, designing and finally implementing new systems. The whole development process assumes well-organized team work of experts including engineers, computer scientists, mathematicians and physicists, just to mention some of them. Modern info-communication networks are among the most complex systems, where the reliability and efficiency of the components play a very important role. For a better understanding of the dynamic behavior of the involved processes one has to construct mathematical models which describe the stochastic service of randomly arriving requests. Queueing theory is one of the most commonly used mathematical tools for the performance evaluation of such systems.

The aim of the book is to present the basic methods and approaches, at a Markovian level, for the analysis of not too complicated systems. The main purpose is to understand how models can be constructed and how to analyze them. It is assumed that the reader has been exposed to a first course in probability theory; however, in the text I give a refresher and state the most important principles I need later on. My intention is to show what is behind the formulas and how we can derive them. It is also essential to know which kinds of questions are reasonable and then how to answer them. My experience and advice are: if possible, solve the same problem in different ways and compare the results. Sometimes very nice closed-form, analytic solutions are obtained, but the main problem is that we cannot compute them for higher values of the involved variables. In this case algorithmic or asymptotic approaches can be very useful. My intention is to find the balance between mathematical and practitioner needs. I feel that a satisfactory middle ground has been established for understanding and applying these tools to practical systems.

I hope that after understanding this book the reader will be able to create his or her own formulas if needed. It should be underlined that most of the models are based on the assumption that the involved random variables are exponentially distributed and independent of each other. We must confess that this assumption is artificial, since in practice the exponential distribution is not so frequent. However, the mathematical models based on the memoryless property of the exponential distribution greatly simplify the solution methods, resulting in computable formulas. By using these relatively simple formulas one can easily foresee the effect of a given parameter on the performance measures, and hence the trends can be forecast. Clearly, instead of the exponential distribution one can use other distributions, but in that case the mathematical models will be much more complicated. The analytic results can help us in validating the results obtained by stochastic simulation. This approach is quite general when analytic expressions cannot be expected. In this case not only the model construction but also the statistical analysis of the output is important.

The primary purpose of the book is to show how to create simple models for practical problems; that is why the general theory of stochastic processes is omitted. It uses only the most important concepts and sometimes states theorems without proofs, but each time the related references are cited. I must confess that the style of the following books greatly influenced me, even if they are at a different level and more comprehensive than this material: Allen [1], Jain [3], Kleinrock [5], Kobayashi and Mark [6], Stewart [11], Tijms [13], Trivedi [14].

This book is intended not only for students of computer science, engineering, operations research and mathematics, but also for those who study at business, management and planning departments, too. It covers more than one semester and has been tested by graduate students at Debrecen University over the years. It gives a very detailed analysis of the involved systems by giving the density function, distribution function, generating function and Laplace-transform, respectively. Furthermore, Java applets are provided to calculate the main performance measures immediately by using the pdf version of the book in a WWW environment. I have attempted to provide examples for better understanding, and a collection of exercises with detailed solutions helps the reader in deepening her/his knowledge. I am convinced that the book covers the basic topics in stochastic modeling of practical problems and will support students all over the world.

I am indebted to Professor József Bíró for his review, comments and suggestions which greatly improved the quality of the book. I am very grateful to Márk Kósa, Albert Barnák and Balázs Máté for their help in editing.

All comments and suggestions are welcome at:

sztrik.janos@inf.unideb.hu
http://irh.inf.unideb.hu/user/jsztrik

Debrecen, 2.

János Sztrik

Part I Modeling and Analysis of Information Technology Systems

Chapter 1

Basic Concepts from Probability Theory

Stochastic modeling of any type of system needs basic knowledge of probability theory. It is my experience that a brief summary of the most important concepts and theorems is very useful, because the readers might have been exposed to the theory of probability at different levels. The refresher concentrates only on those theorems and distributions which are closely related to this material. It should be noted that there are many good textbooks on the theory of probability all over the world. Moreover, a number of digital versions can be downloaded from the internet, too. I would recommend any of the following books: Allen [1], Gnedenko et al. [2], Jain [3], Kleinrock [5], Kobayashi and Mark [6], Ovcharov and Wentzel [7], Ravichandran [8], Rényi [9], Ross [10], Stewart [11], Tijms [13], Trivedi [14].

1.1 Brief Summary

Theorem 1 (Basic Forms of the Law of Total Probability) Let $B_1, B_2, \ldots$ be a set of mutually exclusive and exhaustive events with positive probabilities and let $A$ be any event. Then

(1.1)  $P(A) = \sum_i P(A \mid B_i) P(B_i)$.

Furthermore,

$F(x) = \sum_i F(x \mid B_i) P(B_i), \qquad f(x) = \sum_i f(x \mid B_i) P(B_i),$

$f_X(x) = \int_{-\infty}^{\infty} f_{X \mid Y}(x \mid y) f_Y(y)\,dy,$

$F_X(x) = \int_{-\infty}^{\infty} F_{X \mid Y}(x \mid y) f_Y(y)\,dy, \qquad P(A) = \int_{-\infty}^{\infty} P(A \mid Y = y) f_Y(y)\,dy,$

where $F(x)$ is a distribution function, $f(x)$ is a density function, $f(x, y)$ is a joint density function,

$f_{X \mid Y}(x \mid y) = \frac{f(x, y)}{f_Y(y)}$

is a conditional density function, and

$F_{X \mid Y}(x \mid y) = \int_{-\infty}^{x} f_{X \mid Y}(t \mid y)\,dt$

is a conditional distribution function.

Theorem 2 (Bayes Theorem or Bayes Rule) Let $B_1, B_2, \ldots$ be a set of mutually exclusive and exhaustive events with positive probabilities and let $A$ be any event of positive probability. Then

$P(B_i \mid A) = \frac{P(A \mid B_i) P(B_i)}{\sum_j P(A \mid B_j) P(B_j)}.$

Definition 1 Let $p_k = P(X = x_k)$, $k = 1, 2, \ldots$, be the distribution of a discrete random variable $X$. The mean (first moment, expectation, average) of $X$ is defined as $\sum_k p_k x_k$ if this series is absolutely convergent. That is, the mean of $X$ is

$EX = \sum_k p_k x_k.$

Definition 2 Let $f(x)$ be the density function of a continuous random variable $X$. If $\int_{-\infty}^{+\infty} |x| f(x)\,dx$ is finite, then the mean is defined by

$EX = \int_{-\infty}^{+\infty} x f(x)\,dx.$

Without proof, the main properties of the expectation are as follows. If $EX, EY < \infty$, then

1. $E(X + Y)$ exists and $E(X + Y) = EX + EY$,
2. $E(cX)$ exists and $E(cX) = cEX$,
3. $E(XY)$ exists and $E(XY) = EX \cdot EY$, provided $X$ and $Y$ are independent,

4. $E(aX + b)$ exists and $E(aX + b) = aEX + b$,
5. $(E(XY))^2$ exists and $(E(XY))^2 \le EX^2 \cdot EY^2$, if the second moments exist,
6. if $X \ge 0$, then $EX = \int_0^{\infty} (1 - F(x))\,dx$, and for a nonnegative integer-valued $X$, $EX = \sum_{k=1}^{\infty} P(X \ge k)$.

Theorem 3 (Theorem of Total Moments) The most commonly used forms are

$E(X^n) = \sum_i E(X^n \mid B_i) P(B_i),$

where $E(X^n \mid B_i)$ denotes the $n$th conditional moment. The continuous version is

$E(X^n) = \int_{-\infty}^{\infty} E(X^n \mid Y = y) f_Y(y)\,dy.$

In the case $n = 1$ we have the theorem of total expectation.

Definition 3 (Variance) Let $X$ be a random variable with a finite mean $EX = m$. Then $Var(X) = E(X - m)^2$ is called the variance of $X$, provided it is finite. The following properties hold.

1. If $Var(X) < \infty$, then $Var(X) = EX^2 - (EX)^2$.
2. $Var(aX + b) = a^2 Var(X)$ for any $a, b \in R$.
3. $Var(X) \ge 0$; $Var(X) = 0$ if and only if $P(X = EX) = 1$.

Definition 4 (Squared coefficient of variation) The coefficient

$C_X^2 = \frac{Var(X)}{(EX)^2}$

is defined as the squared coefficient of variation of the random variable $X$.

1.2 Some Important Discrete Probability Distributions

Binomial Distribution

A random variable $X$ is said to have a binomial distribution with parameters $n, p$ if its distribution is

$p_k = \binom{n}{k} p^k (1 - p)^{n - k}, \qquad k = 0, 1, \ldots, n.$

Notation: $X \sim B(n, p)$. It can be shown that

$EX = np, \qquad Var(X) = np(1 - p), \qquad C_X^2 = \frac{1 - p}{np}.$

If $n = 1$, then $X$ is Bernoulli distributed.

Poisson Distribution

A random variable $X$ is said to have a Poisson distribution with parameter $\lambda$ if

$p_k = \frac{\lambda^k}{k!} e^{-\lambda}, \qquad \lambda > 0, \quad k = 0, 1, \ldots$

Notation: $X \sim Po(\lambda)$. It is well-known that

$EX = \lambda, \qquad Var(X) = \lambda, \qquad C_X^2 = \frac{1}{\lambda}.$

It can be proved that

$\lim_{n \to \infty,\ p \to 0,\ np \to \lambda} \binom{n}{k} p^k (1 - p)^{n - k} = \frac{\lambda^k}{k!} e^{-\lambda}, \qquad k = 0, 1, \ldots,$

that is, the binomial distribution can be approximated by the Poisson distribution. The closer $p$ is to zero, the better the approximation. An acceptable rule of thumb is to use this procedure when $n \ge 20$ and $p \le 0.05$.

Geometric Distribution

A random variable $X$ is said to have a geometric distribution with parameter $p$ if

$p_k = p(1 - p)^{k - 1}, \qquad k = 1, 2, \ldots$

Notation: $X \sim Geo(p)$. It is easily verified that

$EX = \frac{1}{p}, \qquad Var(X) = \frac{1 - p}{p^2}, \qquad C_X^2 = 1 - p.$

The random variable $X^* = X - 1$ is called modified geometric. In this case

$P(X^* = k) = p(1 - p)^k, \qquad k = 0, 1, \ldots,$

$EX^* = \frac{1 - p}{p}, \qquad Var(X^*) = \frac{1 - p}{p^2}, \qquad C_{X^*}^2 = \frac{1}{1 - p}.$

Convolution

Definition 5 Let $X$ and $Y$ be independent random variables with distributions $P(X = i) = p_i$, $P(Y = j) = q_j$, $i, j = 0, 1, 2, \ldots$. Then the distribution of $Z = X + Y$, namely

$P(Z = k) = \sum_{j=0}^{k} p_j q_{k - j}, \qquad k = 0, 1, 2, \ldots,$

is called the convolution of $X$ and $Y$; that is, we have calculated the distribution of $X + Y$.
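The quality of the Poisson approximation to the binomial distribution is easy to check numerically. The following sketch (the values of n and p are illustrative, chosen to satisfy the rule of thumb above) compares the two probability mass functions term by term:

```python
from math import comb, exp, factorial

def binom_pmf(n, p, k):
    """Exact binomial probability P(X = k) for X ~ B(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    """Poisson probability P(X = k) for X ~ Po(lam)."""
    return lam**k / factorial(k) * exp(-lam)

# n = 100 >= 20 and p = 0.02 <= 0.05, so the rule of thumb applies
n, p = 100, 0.02
lam = n * p
max_diff = max(abs(binom_pmf(n, p, k) - poisson_pmf(lam, k)) for k in range(11))
print(max_diff)  # the two pmfs agree to within a few thousandths
```

Decreasing p (with np held fixed) makes max_diff shrink further, in line with the limit theorem above.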

Example 1 Show that if $X \sim B(n, p)$ and $Y \sim B(m, p)$ are independent random variables, then $X + Y \sim B(n + m, p)$.

$P(X + Y = l) = \sum_{k=0}^{l} \binom{n}{k} p^k (1 - p)^{n - k} \binom{m}{l - k} p^{l - k} (1 - p)^{m - l + k}$

$= p^l (1 - p)^{n + m - l} \sum_{k=0}^{l} \binom{n}{k} \binom{m}{l - k} = \binom{n + m}{l} p^l (1 - p)^{n + m - l},$

using the identity $\sum_{k=0}^{l} \binom{n}{k} \binom{m}{l - k} = \binom{n + m}{l}$.

Example 2 Verify that if $X \sim Po(\lambda)$ and $Y \sim Po(\beta)$ are independent random variables, then $X + Y \sim Po(\lambda + \beta)$.

$P(X + Y = l) = \sum_{k=0}^{l} \frac{\lambda^k}{k!} e^{-\lambda} \frac{\beta^{l - k}}{(l - k)!} e^{-\beta} = \frac{e^{-(\lambda + \beta)}}{l!} \sum_{k=0}^{l} \binom{l}{k} \lambda^k \beta^{l - k} = \frac{(\lambda + \beta)^l}{l!} e^{-(\lambda + \beta)}.$

Example 3 Customers arrive at a busy supermarket according to a Poisson distribution with parameter $\lambda$. Each of them, independently of the others, becomes a buyer with probability $p$. Find the distribution of the number of buyers.

Let $X \sim Po(\lambda)$ denote the number of customers and $Y$ the number of buyers. By virtue of the theorem of total probability we have

$P(Y = n) = \sum_{k=n}^{\infty} P(Y = n \mid X = k) P(X = k) = \sum_{k=n}^{\infty} \binom{k}{n} p^n (1 - p)^{k - n} \frac{\lambda^k}{k!} e^{-\lambda}$

$= \frac{p^n e^{-\lambda}}{n!} \lambda^n \sum_{l=0}^{\infty} \frac{((1 - p)\lambda)^l}{l!} = \frac{(\lambda p)^n}{n!} e^{-\lambda} e^{(1 - p)\lambda} = \frac{(\lambda p)^n}{n!} e^{-\lambda p}.$

That is, $Y \sim Po(\lambda p)$.
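The thinning result of Example 3 can be checked by simulation. The sketch below (parameter values are illustrative) draws Poisson counts with Knuth's multiplication method and thins each count with probability p; mean and variance of the thinned counts should both approach $\lambda p$, as for a Poisson random variable:

```python
import math
import random

random.seed(1)

def poisson_sample(lam):
    """Knuth's method: count uniforms whose running product stays above e^(-lam)."""
    limit = math.exp(-lam)
    k, prod = 0, random.random()
    while prod > limit:
        prod *= random.random()
        k += 1
    return k

lam, p, trials = 5.0, 0.3, 20000
# thin each Poisson(lam) count: every customer becomes a buyer with probability p
buyers = [sum(1 for _ in range(poisson_sample(lam)) if random.random() < p)
          for _ in range(trials)]
mean = sum(buyers) / trials
var = sum((b - mean) ** 2 for b in buyers) / trials
print(mean, var)  # both should be close to lam * p = 1.5
```

That the sample mean and variance agree is itself a hint that the thinned counts are Poisson.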

virtue of the theorem of total probability we have That is Y P o(λp). P (Y n) P (Y n X k) P (X k) kn kn p n e λ ( k )p n ( p) k n λk n k! e λ kn p n e λ kn p n e λ n! λn (λp)n e λ n! (λp)n e λp n! k! λk ( p)k n n!(k n)! k! n!(k n)! ( p)k n λ n λ k n l l (( p)λ)l l! (( p)λ) l l! (λp)n e λ e ( p)λ n!.3 Some Important Continuous Probability Distributions Uniform Distribution A random variable X is said to have a uniform distribution on the interval [a,b] if its density function is {, if a x b, b a f(x), otherwise. It is easy to see that its distribution function is, if x a, x a F (x), if a < x b, b a, if b < x. Notation: X U(a, b). It is not difficult to show EX a + b 2 (b a)2, V ar(x), CX 2 2 It is easy to verify that if X U(, ), than X a + (b a)x U(a, b). 6 (b a)2 3(a + b) 2.

To generate a random number with a given distribution one can successfully use the following procedure. If $F_X^{-1}$ exists, then $Y = F_X(X) \sim U(0, 1)$ and thus $X = F_X^{-1}(Y)$. It can be proved as follows:

$F_Y(x) = P(Y < x) = P(F_X(X) < x) = F_X(F_X^{-1}(x)) = x,$

that is, $Y \sim U(0, 1)$, and therefore $X = F_X^{-1}(Y)$.

Exponential Distribution

A random variable $X$ is said to have an exponential distribution with parameter $\lambda$ if its density function is given by

$f(x) = \begin{cases} 0, & \text{if } x < 0, \\ \lambda e^{-\lambda x}, & \text{if } x \ge 0. \end{cases}$

So its distribution function is

$F(x) = \begin{cases} 0, & \text{if } x < 0, \\ 1 - e^{-\lambda x}, & \text{if } x \ge 0, \end{cases}$

where $\lambda > 0$. Notation: $X \sim Exp(\lambda)$. It can be proved that

$EX = \frac{1}{\lambda}, \qquad Var(X) = \frac{1}{\lambda^2}, \qquad C_X^2 = 1.$

Erlang Distribution

A random variable $Y_n$ is said to have an Erlang distribution with parameters $(n, \lambda)$ if its density function is defined by

$f(x) = \begin{cases} 0, & \text{if } x < 0, \\ \lambda \frac{(\lambda x)^{n - 1}}{(n - 1)!} e^{-\lambda x}, & \text{if } x \ge 0. \end{cases}$

It can be shown that the distribution function is

$F(x) = \begin{cases} 0, & \text{if } x < 0, \\ 1 - \sum_{k=0}^{n - 1} \frac{(\lambda x)^k}{k!} e^{-\lambda x}, & \text{if } x \ge 0, \end{cases}$

where $n$ is a natural number and $\lambda > 0$. Notation: $X \sim Erl(n, \lambda)$, or $X \sim E_n(\lambda)$. It is easy to see that in the case $n = 1$ it reduces to the exponential distribution. It can be verified that

$E(Y_n) = \frac{n}{\lambda}, \qquad Var(Y_n) = \frac{n}{\lambda^2}, \qquad C_{Y_n}^2 = \frac{1}{n}.$
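The inverse-transform procedure above is easy to carry out for the exponential distribution, since $F^{-1}(u) = -\ln(1 - u)/\lambda$. A minimal sketch (the parameter values are illustrative):

```python
import math
import random

random.seed(42)

def exp_inverse_transform(lam):
    """Sample Exp(lam) via X = F^{-1}(U) = -ln(1 - U)/lam with U ~ U(0, 1)."""
    u = random.random()
    return -math.log(1.0 - u) / lam

lam, n = 2.0, 50000
samples = [exp_inverse_transform(lam) for _ in range(n)]
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
print(mean, var)  # should approach EX = 1/lam = 0.5 and Var(X) = 1/lam^2 = 0.25
```

The same recipe works for any distribution whose inverse distribution function is available in closed form, e.g. the Weibull and Pareto distributions below.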

Gamma Distribution

A random variable $X$ is said to have a gamma distribution with parameters $(\alpha, \lambda)$ if its density function is given by

$f(x) = \begin{cases} 0, & \text{if } x < 0, \\ \frac{\lambda (\lambda x)^{\alpha - 1} e^{-\lambda x}}{\Gamma(\alpha)}, & \text{if } x \ge 0, \end{cases}$

where $\lambda > 0$, $\alpha > 0$, and

$\Gamma(\alpha) = \int_0^{\infty} t^{\alpha - 1} e^{-t}\,dt$

is the so-called complete gamma function. Its distribution function cannot be obtained in an explicit form except for $\alpha = n$; in this case it reduces to the Erlang distribution. Notation: $X \sim \Gamma(\alpha, \lambda)$. It can be shown that

$E(X) = \frac{\alpha}{\lambda}, \qquad Var(X) = \frac{\alpha}{\lambda^2}, \qquad C_X^2 = \frac{1}{\alpha}.$

$\alpha$ is called the shape parameter, $\lambda$ is called the scale parameter.

Weibull Distribution

A random variable $X$ is said to have a Weibull distribution with parameters $(\lambda, \alpha)$ if its density function is given by

$f(x) = \begin{cases} 0, & \text{if } x < 0, \\ \lambda \alpha x^{\alpha - 1} e^{-\lambda x^{\alpha}}, & \text{if } x \ge 0. \end{cases}$

It is easy to see that

$F(x) = \begin{cases} 0, & \text{if } x < 0, \\ 1 - e^{-\lambda x^{\alpha}}, & \text{if } x \ge 0, \end{cases}$

where $\lambda > 0$ is called the scale parameter and $\alpha > 0$ is called the shape parameter. Specially, in the case $\alpha = 1$ it reduces to the exponential distribution.

Notation: $X \sim W(\lambda, \alpha)$. It can be shown that

$E(X) = \left(\frac{1}{\lambda}\right)^{1/\alpha} \Gamma\left(1 + \frac{1}{\alpha}\right),$

$Var(X) = \left(\frac{1}{\lambda}\right)^{2/\alpha} \left[\Gamma\left(1 + \frac{2}{\alpha}\right) - \Gamma^2\left(1 + \frac{1}{\alpha}\right)\right],$

$C_X^2 = \frac{2\alpha \Gamma\left(\frac{2}{\alpha}\right)}{\Gamma^2\left(\frac{1}{\alpha}\right)} - 1.$

Pareto Distribution

A random variable $X$ is said to have a Pareto distribution with parameters $(k, \alpha)$ if its density function is given by

$f(x) = \begin{cases} 0, & x < k, \\ \alpha k^{\alpha} x^{-\alpha - 1}, & x \ge k. \end{cases}$

Thus the distribution function is

$F(x) = \begin{cases} 0, & x < k, \\ 1 - \left(\frac{k}{x}\right)^{\alpha}, & x \ge k, \end{cases}$

where $\alpha, k > 0$. Notation: $X \sim Par(k, \alpha)$, where $k$ is called the location parameter and $\alpha$ is called the shape parameter. It can be proved that

$E(X) = \frac{k\alpha}{\alpha - 1}, \quad \alpha > 1, \qquad E(X^2) = \frac{k^2 \alpha}{\alpha - 2}, \quad \alpha > 2.$

Thus

$Var(X) = \frac{k^2 \alpha}{\alpha - 2} - \left(\frac{k\alpha}{\alpha - 1}\right)^2, \qquad C_X^2 = \frac{(\alpha - 1)^2}{\alpha(\alpha - 2)} - 1 = \frac{1}{\alpha(\alpha - 2)}, \quad \alpha > 2.$

Normal Distribution (Gaussian Distribution)

A random variable $X$ is said to have a normal distribution with parameters $(m, \sigma)$ if its density function is given by

$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x - m)^2}{2\sigma^2}}.$

For the distribution function we have

$F(x) = \int_{-\infty}^{x} f(t)\,dt,$

where $m \in R$, $\sigma > 0$. Notation: $X \sim N(m, \sigma)$. For $F(x)$ there is no closed-form expression. Specially, if $m = 0$, $\sigma = 1$, then $X \sim N(0, 1)$, which is the standard normal distribution. In this case the traditional notation for the density and distribution function is

$\varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{x^2}{2}}, \qquad \Phi(x) = \int_{-\infty}^{x} \varphi(t)\,dt.$

It can be proved that if $X \sim N(m, \sigma)$, then

$P(X < x) = \Phi\left(\frac{x - m}{\sigma}\right),$

furthermore $\Phi(-x) = 1 - \Phi(x)$. It is well-known that

$E(X) = m, \qquad Var(X) = \sigma^2, \qquad C_X^2 = \frac{\sigma^2}{m^2}.$

Lognormal Distribution

Let $Y \sim N(m, \sigma)$; then the random variable $X = e^Y$ is said to have a lognormal distribution with parameters $(m, \sigma)$. Notation: $X \sim LN(m, \sigma)$. It is not difficult to verify that

$P(X < x) = P(e^Y < x) = P(Y < \ln x),$

thus

$F_X(x) = \Phi\left(\frac{\ln x - m}{\sigma}\right), \qquad f_X(x) = \frac{1}{\sigma x}\, \varphi\left(\frac{\ln x - m}{\sigma}\right), \qquad x > 0.$

It can be shown that

$E(X) = e^{m + \frac{\sigma^2}{2}}, \qquad Var(X) = e^{2m + \sigma^2} (e^{\sigma^2} - 1), \qquad C_X^2 = e^{\sigma^2} - 1.$

Theorem 4 (Markov Inequality) Let $X$ be a nonnegative random variable with finite mean, that is, $EX < \infty$. Then for any $\delta > 0$

$P(X \ge \delta) \le \frac{EX}{\delta}.$

Theorem 5 (Chebyshev Inequality) Let $X$ be a random variable for which $Var(X) < \infty$, $EX = m$. Then for any $\varepsilon > 0$

$P(|X - m| \ge \varepsilon) \le \frac{Var(X)}{\varepsilon^2}.$

Theorem 6 (Central Limit Theorem) Let $X_1, X_2, \ldots$ be independent and identically distributed random variables for which $Var(X_i) = \sigma^2 < \infty$, $E(X_i) = m$. Then

$\lim_{n \to \infty} P\left(\frac{X_1 + \ldots + X_n - nm}{\sqrt{n}\,\sigma} < x\right) = \Phi(x).$

In particular, if the $X_i = \chi_i$ are Bernoulli indicators with parameter $p$, then $X_1 + \ldots + X_n \sim B(n, p)$ and thus

$P(X_1 + \ldots + X_n < x) = \sum_{k < x} \binom{n}{k} p^k (1 - p)^{n - k} \approx \Phi\left(\frac{x - np}{\sqrt{npq}}\right).$

The local form is

$\binom{n}{k} p^k (1 - p)^{n - k} \approx \frac{1}{\sqrt{2\pi n p (1 - p)}}\, e^{-\frac{(k - np)^2}{2np(1 - p)}}.$

Practical experiments have shown that if $n \ge 10$ and $\frac{9}{n + 9} \le p \le \frac{n}{n + 9}$, then the normal distribution provides a good approximation to the binomial one.
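The local form of the normal approximation can be checked directly against exact binomial probabilities. In the sketch below the values n = 100 and p = 0.4 are illustrative; p lies well inside the interval named in the rule of thumb above:

```python
from math import comb, exp, pi, sqrt

def binom_pmf(n, p, k):
    """Exact binomial probability P(X = k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def local_normal(n, p, k):
    """Local form of the normal approximation to the binomial pmf."""
    q = 1 - p
    return exp(-(k - n * p) ** 2 / (2 * n * p * q)) / sqrt(2 * pi * n * p * q)

n, p = 100, 0.4
errors = {k: abs(binom_pmf(n, p, k) - local_normal(n, p, k)) for k in (30, 40, 50)}
print(errors)  # absolute errors stay small near the mean np = 40
```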


Chapter 2

Fundamentals of Stochastic Modeling

This chapter is devoted to the most important distributions derived from the exponential distribution. The lifetimes of series and parallel systems are investigated, which play a crucial role in reliability theory. It is shown how to generate random numbers having a given distribution. Finally, random sums are treated, which occur in many practical situations. The material is based mainly on the following books: Allen [1], Gnedenko, Belyayev, Szolovjev [2], Kleinrock [5], Ovcharov [7], Ravichandran [8], Ross [10], Stewart [11], Tijms [13], Trivedi [14].

2.1 Distributions Related to the Exponential Distribution

Theorem 7 (Memoryless or Markov property) If $X \sim Exp(\lambda)$, then it satisfies the following, so-called memoryless, or Markov property:

$P(X < x + y \mid X \ge y) = P(X < x), \qquad x > 0,\ y > 0,$

or equivalently

$P(X \ge x + y \mid X \ge y) = P(X \ge x), \qquad x > 0,\ y > 0.$

Proof:

$P(X < x + y \mid X \ge y) = \frac{P(y \le X < x + y)}{P(X \ge y)} = \frac{F(x + y) - F(y)}{1 - F(y)}$

$= \frac{(1 - e^{-\lambda(x + y)}) - (1 - e^{-\lambda y})}{e^{-\lambda y}} = \frac{e^{-\lambda y}(1 - e^{-\lambda x})}{e^{-\lambda y}} = 1 - e^{-\lambda x} = F(x) = P(X < x).$

The proof of the second formula can be carried out in the same way.

Theorem 8 $e^{-\lambda h} = 1 - \lambda h + o(h)$, where $o(h)$ (little-o of $h$) is defined by $\lim_{h \to 0} \frac{o(h)}{h} = 0$.

Proof: As can be seen, the statement is equivalent to

$\lim_{h \to 0} \frac{1 - e^{-\lambda h}}{h} = \lambda,$

which can be proved by applying L'Hospital's rule. That is,

$\lim_{h \to 0} \frac{1 - e^{-\lambda h}}{h} = \lim_{h \to 0} \lambda e^{-\lambda h} = \lambda.$

Theorem 9 If $F(x)$ is the distribution function of a random variable $X$ for which $F(0) = 0$ and

$\frac{F(x + h) - F(x)}{1 - F(x)} = \lambda h + o(h), \qquad x \ge 0,$

then $F(x) = 1 - e^{-\lambda x}$, $x \ge 0$.

Proof: It can be seen from the conditions that

$\lim_{h \to 0} \frac{F(x + h) - F(x)}{h} \cdot \frac{1}{1 - F(x)} = \lim_{h \to 0} \frac{\lambda h + o(h)}{h} = \lambda,$

thus

$\frac{F'(x)}{1 - F(x)} = \lambda.$

Integrating both sides,

$-\ln(1 - F(x)) = \lambda x - \ln c, \qquad 1 - F(x) = c\,e^{-\lambda x},$

that is, $F(x) = 1 - c\,e^{-\lambda x}$. According to the initial condition $F(0) = 0$ we have $c = 1$, consequently $F(x) = 1 - e^{-\lambda x}$.

In many practical problems it is important to determine the distribution of the minimum of independent random variables.

Theorem 10 (Distribution of the lifetime of a series system) If $X_i \sim Exp(\lambda_i)$, $i = 1, 2, \ldots, n$, are independent random variables, then $Y = \min(X_1, \ldots, X_n)$ is also exponentially distributed, with parameter $\sum_{i=1}^{n} \lambda_i$.

Proof: By using the properties of probability and the independence of the events we have

$P(Y < x) = 1 - P(Y \ge x) = 1 - P(X_1 \ge x, \ldots, X_n \ge x)$

$= 1 - \prod_{i=1}^{n} P(X_i \ge x) = 1 - \prod_{i=1}^{n} e^{-\lambda_i x} = 1 - e^{-\left(\sum_{i=1}^{n} \lambda_i\right) x}.$
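Theorem 10 can be illustrated by simulation: the minimum of independent exponential lifetimes should behave like a single exponential with the summed rate. The component rates below are illustrative values:

```python
import math
import random

random.seed(7)

rates = [1.0, 2.0, 3.0]  # illustrative component failure rates
n = 40000
# lifetime of a series system = minimum of the component lifetimes
mins = [min(random.expovariate(lam) for lam in rates) for _ in range(n)]

mean = sum(mins) / n
tail = sum(1 for m in mins if m > 0.2) / n
print(mean)  # theory: Exp(6), so the mean should be near 1/6
print(tail)  # theory: P(Y > 0.2) = e^(-6 * 0.2) = e^(-1.2)
```

Checking the tail probability, not just the mean, gives evidence that the whole distribution (not merely its first moment) is exponential with parameter $\sum \lambda_i$.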

Example 4 Let $X, Y$ be independent exponentially distributed random variables with parameters $\lambda, \mu$, respectively. Find the probability that $X = \min(X, Y)$.

$X = \min(X, Y)$ if and only if $X < Y$. By the theorem of total probability we have

$P(X < Y) = \int_0^{\infty} P(X < x) f_Y(x)\,dx = \int_0^{\infty} (1 - e^{-\lambda x})\,\mu e^{-\mu x}\,dx$

$= \int_0^{\infty} \mu e^{-\mu x}\,dx - \frac{\mu}{\lambda + \mu} \int_0^{\infty} (\lambda + \mu) e^{-(\lambda + \mu)x}\,dx = 1 - \frac{\mu}{\lambda + \mu} = \frac{\lambda}{\lambda + \mu}.$

Example 5 (Distribution of the lifetime of a parallel system) Let $X_1, \ldots, X_n$ be independent random variables and $Y = \max(X_1, \ldots, X_n)$. Find the distribution of $Y$.

$P(Y < x) = P(X_1 < x, \ldots, X_n < x) = \prod_{i=1}^{n} P(X_i < x) = \prod_{i=1}^{n} F_{X_i}(x).$

If $X_i \sim Exp(\lambda_i)$, then $F_Y(x) = \prod_{i=1}^{n} (1 - e^{-\lambda_i x})$. In addition, if $\lambda_i = \lambda$, $i = 1, \ldots, n$, then $F_Y(x) = (1 - e^{-\lambda x})^n$.

Example 6 Find the mean lifetime of a parallel system with two independent and exponentially distributed components.

Let us solve the problem first according to the definition of the mean. In this case

$f_{\max(X_1, X_2)}(x) = \left[(1 - e^{-\lambda_1 x})(1 - e^{-\lambda_2 x})\right]' = \left[1 - e^{-\lambda_1 x} - e^{-\lambda_2 x} + e^{-(\lambda_1 + \lambda_2)x}\right]'$

$= \lambda_1 e^{-\lambda_1 x} + \lambda_2 e^{-\lambda_2 x} - (\lambda_1 + \lambda_2) e^{-(\lambda_1 + \lambda_2)x}.$

Thus

$E(\max(X_1, X_2)) = \int_0^{\infty} x\,f_{\max(X_1, X_2)}(x)\,dx = \frac{1}{\lambda_1} + \frac{1}{\lambda_2} - \frac{1}{\lambda_1 + \lambda_2}.$

This can be expressed as

$E(\max(X_1, X_2)) = \frac{\lambda_1^2 + \lambda_2^2 + \lambda_1 \lambda_2}{\lambda_1 \lambda_2 (\lambda_1 + \lambda_2)} = \frac{1}{\lambda_1 + \lambda_2} + \frac{\lambda_1}{\lambda_1 + \lambda_2} \cdot \frac{1}{\lambda_2} + \frac{\lambda_2}{\lambda_1 + \lambda_2} \cdot \frac{1}{\lambda_1}.$

Now let us show how this problem can be solved by probabilistic reasoning. At the beginning both components are operating, thus the mean time to the first failure is $\frac{1}{\lambda_1 + \lambda_2}$. The second failure happens when the remaining component fails, too. We have 2 cases, depending on which component failed first. It is easy to see that, by the memoryless property of the exponential distribution, the distribution of the residual lifetime of the remaining component is the same as it was at the beginning. Then, by using the theorem of total expectation, for the mean residual lifetime after the first failure we have

$\underbrace{\frac{\lambda_1}{\lambda_1 + \lambda_2} \cdot \frac{1}{\lambda_2}}_{\text{component 1 failed first}} + \underbrace{\frac{\lambda_2}{\lambda_1 + \lambda_2} \cdot \frac{1}{\lambda_1}}_{\text{component 2 failed first}}.$

Hence the mean operating time of a parallel system is

$\frac{1}{\lambda_1 + \lambda_2} + \frac{\lambda_1}{\lambda_1 + \lambda_2} \cdot \frac{1}{\lambda_2} + \frac{\lambda_2}{\lambda_1 + \lambda_2} \cdot \frac{1}{\lambda_1}.$

In the homogeneous case it reduces to

$\frac{1}{2\lambda} + \frac{1}{\lambda} = \frac{3}{2\lambda},$

as we will see in the next problem. It is easy to see that the second moment of the lifetime could be calculated in the same way, by using either the definition or the theorem of second moments, and thus the variance can be obtained. Of course these are much more complicated formulas, but in the homogeneous case they can be simplified, as we shall see in the next example.

Example 7 Find the mean and variance of the lifetime of a parallel system with homogeneous, independent and exponentially distributed components, that is, $X_i \sim Exp(\lambda)$, $i = 1, \ldots, n$.

$P(\max(X_1, \ldots, X_n) < x) = \prod_{i=1}^{n} P(X_i < x) = (1 - e^{-\lambda x})^n.$

As is well-known, if $X \ge 0$ then

$EX = \int_0^{\infty} P(X \ge x)\,dx = \int_0^{\infty} (1 - F(x))\,dx.$
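The two derivations of $E(\max(X_1, X_2))$ above can both be checked by a Monte Carlo estimate. The rates below are illustrative:

```python
import random

random.seed(3)

lam1, lam2 = 1.0, 2.0  # illustrative component rates
n = 100000
maxima = [max(random.expovariate(lam1), random.expovariate(lam2))
          for _ in range(n)]
estimate = sum(maxima) / n
theory = 1 / lam1 + 1 / lam2 - 1 / (lam1 + lam2)  # = 1 + 0.5 - 1/3
print(estimate, theory)  # the estimate should be close to 7/6
```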

Using the substitution $t = 1 - e^{-\lambda x}$ we get

$E(\max(X_1, \ldots, X_n)) = \int_0^{\infty} \left(1 - (1 - e^{-\lambda x})^n\right) dx = \frac{1}{\lambda} \int_0^1 \frac{1 - t^n}{1 - t}\,dt = \frac{1}{\lambda} \int_0^1 (1 + t + \ldots + t^{n - 1})\,dt$

$= \frac{1}{\lambda} \left[t + \frac{t^2}{2} + \ldots + \frac{t^n}{n}\right]_0^1 = \frac{1}{\lambda} \left[1 + \frac{1}{2} + \ldots + \frac{1}{n}\right] = \underbrace{\frac{1}{n\lambda}}_{\text{first failure}} + \underbrace{\frac{1}{(n - 1)\lambda}}_{\text{second failure}} + \ldots + \underbrace{\frac{1}{\lambda}}_{n\text{th failure}}.$

Due to the memoryless property of the exponential distribution it is easy to see that the time differences between consecutive failures are exponentially distributed. More precisely, the time between the $(k - 1)$th and $k$th failures is exponentially distributed with parameter $(n - k + 1)\lambda$, $k = 1, \ldots, n$. Moreover, these times are independent of each other. This fact can be used to get the mean and variance of the time of the $k$th failure. After these arguments it is clear that

$E(\text{time of the } k\text{th failure}) = \frac{1}{n\lambda} + \ldots + \frac{1}{(n - k + 1)\lambda},$

$Var(\text{time of the } k\text{th failure}) = \frac{1}{(n\lambda)^2} + \ldots + \frac{1}{((n - k + 1)\lambda)^2}, \qquad k = 1, \ldots, n.$

In particular, the variance of the lifetime of a parallel system is

$\frac{1}{(n\lambda)^2} + \ldots + \frac{1}{\lambda^2}.$

Definition 6 Let $X$ and $Y$ be independent random variables with density functions $f_X(x)$ and $f_Y(x)$, respectively. Then the density function of $Z = X + Y$ can be obtained as

$f_Z(z) = \int_{-\infty}^{\infty} f_X(x) f_Y(z - x)\,dx,$

which is said to be the convolution of $f_X(x)$ and $f_Y(x)$. In addition, if $X \ge 0$ and $Y \ge 0$, then

$f_Z(z) = \int_0^{z} f_X(x) f_Y(z - x)\,dx.$

Example 8 Let $X$ and $Y$ be independent and exponentially distributed random variables with parameter $\lambda$. Find their convolution.

After substitution we have

$f_{X + Y}(z) = \int_0^{z} \lambda e^{-\lambda x}\,\lambda e^{-\lambda(z - x)}\,dx = \int_0^{z} \lambda^2 e^{-\lambda z}\,dx = \lambda^2 z\,e^{-\lambda z} = \lambda (\lambda z) e^{-\lambda z},$

which shows that the sum of independent exponentially distributed random variables is not exponentially distributed.

Example 9 Let $X_1, \ldots, X_n$ be independent and exponentially distributed random variables with the same parameter $\lambda$. Show that

$f_{X_1 + \ldots + X_n}(z) = \lambda \frac{(\lambda z)^{n - 1}}{(n - 1)!} e^{-\lambda z}.$

To prove this we shall use induction. As we have seen, the statement is true for $n = 2$. Let us assume it is valid for $n - 1$ and let us see what happens for $n$:

$f_{X_1 + \ldots + X_{n - 1} + X_n}(z) = \int_0^{z} \lambda \frac{(\lambda x)^{n - 2}}{(n - 2)!} e^{-\lambda x}\,\lambda e^{-\lambda(z - x)}\,dx$

$= \frac{\lambda^2 e^{-\lambda z}}{(n - 2)!}\,\lambda^{n - 2} \int_0^{z} x^{n - 2}\,dx = \frac{\lambda^2 e^{-\lambda z}}{(n - 2)!}\,\lambda^{n - 2}\,\frac{z^{n - 1}}{n - 1} = \lambda \frac{(\lambda z)^{n - 1}}{(n - 1)!} e^{-\lambda z},$

which is exactly the density function of an Erlang distribution with parameters $(n, \lambda)$. This representation of the Erlang distribution helps us to compute its mean and variance in a very simple way, without using its density function.

The Erlang distribution is very useful for approximating the distribution of a random variable $X$ for which the squared coefficient of variation satisfies $C_X^2 \le 1$. In other words, if the first two moments of $X$ are given, then

$f_Y(t) = p\,\lambda \frac{(\lambda t)^{k - 2}}{(k - 2)!} e^{-\lambda t} + (1 - p)\,\lambda \frac{(\lambda t)^{k - 1}}{(k - 1)!} e^{-\lambda t}$

is the mixture of two Erlang distributions with parameters $(k - 1, \lambda)$ and $(k, \lambda)$, where

$p = \frac{1}{1 + C_X^2} \left[k C_X^2 - \sqrt{k(1 + C_X^2) - k^2 C_X^2}\right], \qquad \lambda = \frac{k - p}{E(X)}, \qquad \frac{1}{k} \le C_X^2 \le \frac{1}{k - 1},$

with the property that

$E(Y) = E(X), \qquad C_Y^2 = C_X^2.$
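The representation of the Erlang distribution as a sum of independent exponentials gives its mean and variance for free, and it also gives a trivial sampler. A minimal sketch (the parameter values are illustrative):

```python
import random

random.seed(11)

lam, k, n = 2.0, 4, 30000
# an Erlang(k, lam) variate is the sum of k independent Exp(lam) variates
samples = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]

mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
print(mean, var)  # theory: E = k/lam = 2, Var = k/lam^2 = 1, so C^2 = 1/k = 0.25
```

The estimated squared coefficient of variation var / mean**2 is close to 1/k, in line with the formulas for the Erlang distribution above.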

Such a random variable Y is denoted by E_{k−1,k}(λ) and it matches X in the first two moments.

Hypoexponential Distribution

Let X_i ~ Exp(λ_i) (i = 1,...,n) be independent exponentially distributed random variables with distinct parameters. The random variable Y_n = X_1 + ... + X_n is said to have a hypoexponential distribution. It can be shown that its density function is given by

f_{Y_n}(x) = 0, if x < 0,
f_{Y_n}(x) = (−1)^{n−1} ( ∏_{i=1}^n λ_i ) Σ_{j=1}^n e^{−λ_j x} / ∏_{k=1, k≠j}^n (λ_j − λ_k), if x ≥ 0.

It is easy to see that

E(Y_n) = Σ_{i=1}^n 1/λ_i,  Var(Y_n) = Σ_{i=1}^n 1/λ_i².

Thus for the squared coefficient of variation we have

C²_{Y_n} = Σ_{i=1}^n (1/λ_i²) / ( Σ_{i=1}^n 1/λ_i )² ≤ 1.

Hyperexponential Distribution

Let X_i ~ Exp(λ_i) (i = 1,...,n) and let p_1,...,p_n be a probability distribution. A random variable Y_n is said to have a hyperexponential distribution if its density function is given by

f_{Y_n}(x) = 0, if x < 0,
f_{Y_n}(x) = Σ_{i=1}^n p_i λ_i e^{−λ_i x}, if x ≥ 0.

Its distribution function is

F_{Y_n}(x) = 0, if x < 0,
F_{Y_n}(x) = 1 − Σ_{i=1}^n p_i e^{−λ_i x}, if x ≥ 0.

It is easy to see that

E(Y_n) = Σ_{i=1}^n p_i/λ_i,  E(Y_n²) = Σ_{i=1}^n 2p_i/λ_i².

It can be shown that

C²_{Y_n} = 2 Σ_{i=1}^n (p_i/λ_i²) / ( Σ_{i=1}^n p_i/λ_i )² − 1 ≥ 1.

In the case when for a random variable X we have C_X² > 1, the following two-moment fit is suggested:

f_Y(t) = pλ_1 e^{−λ_1 t} + (1 − p)λ_2 e^{−λ_2 t},

that is, Y is a 2-phase hyperexponentially distributed random variable. Since the density function of Y contains 3 parameters and the fit is based on the first two moments, the distribution is not uniquely determined. The most commonly used procedure is the balanced mean method, that is

p/λ_1 = (1 − p)/λ_2.

In this case

E(Y) = p/λ_1 + (1 − p)/λ_2 = E(X),  E(Y²) = 2p/λ_1² + 2(1 − p)/λ_2² = E(X²).

The solution is

p = (1/2) ( 1 + √( (C_X² − 1)/(C_X² + 1) ) ),  λ_1 = 2p/E(X),  λ_2 = 2(1 − p)/E(X).

If the fit is based on the first 3 moments m_1, m_2, m_3 then the condition m_3 ≥ (3/2) m_2²/m_1 is needed, and it gives a unique solution. It can be shown that the gamma and lognormal distributions satisfy this condition. The parameters of the resulting unique hyperexponential distribution are

λ_{1,2} = ( a_1 ± √(a_1² − 4a_2) ) / 2,  p = λ_1 (1 − λ_2 m_1) / (λ_1 − λ_2),

where

a_2 = (6m_1² − 3m_2) / ( (3/2) m_2² − m_1 m_3 ),  a_1 = ( 1 + (m_2/2) a_2 ) / m_1.
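The balanced-mean H2 fit can be coded directly from the formulas above. The Python sketch below (function names are illustrative) recovers p, λ_1, λ_2 from a mean and a squared coefficient of variation C_X² > 1, and then checks that the fitted hyperexponential indeed reproduces both moments.

```python
import math

def h2_balanced_fit(mean, c2):
    """Balanced-mean two-moment H2 fit (p/lam1 = (1-p)/lam2), valid for c2 > 1."""
    assert c2 > 1
    p = 0.5 * (1 + math.sqrt((c2 - 1) / (c2 + 1)))
    lam1 = 2 * p / mean
    lam2 = 2 * (1 - p) / mean
    return p, lam1, lam2

p, lam1, lam2 = h2_balanced_fit(mean=1.0, c2=4.0)

# First two moments of p*Exp(lam1) + (1-p)*Exp(lam2):
m = p / lam1 + (1 - p) / lam2
m2 = 2 * p / lam1 ** 2 + 2 * (1 - p) / lam2 ** 2
c2_fit = m2 / m ** 2 - 1
print(round(m, 6), round(c2_fit, 6))
```

Both moments are matched exactly by construction: the balanced-mean condition gives E(Y²) = E(Y)²/(2p(1−p)), and the chosen p makes 1/(2p(1−p)) = C_X² + 1.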

Mixture of Distributions

Definition 7 Let X_1, X_2,... be random variables and p_1, p_2,... be a probability distribution. The distribution function

F(x) = Σ_i p_i F_{X_i}(x)

is called the mixture of the distribution functions F_{X_i}(x) with weights p_i. Similarly, the density function

f(x) = Σ_i p_i f_{X_i}(x)

is called the mixture of the density functions f_{X_i}(x) with weights p_i.

It is easy to see that F(x) and f(x) are indeed a distribution function and a density function, respectively. Using this terminology we can say that the hyperexponential distribution is the mixture of exponential distributions.

2.2 Basics of Reliability Theory

Definition 8 Let a random variable X denote the lifetime or time to failure of a component. Then

R(t) = P(X > t) = 1 − F_X(t)

is called the reliability function of the component.

It can easily be seen that R'(t) = −f_X(t), and E(X) = ∫_0^∞ R(t) dt. The reliability function is very useful in reliability investigations of different complex systems. On the basis of the previous arguments we can formulate the reliability function of series and parallel systems, namely

Series system:  R_S(t) = ∏_{i=1}^n R_i(t),
Parallel system:  R_P(t) = 1 − ∏_{i=1}^n (1 − R_i(t)).

Another important function is the failure rate (hazard rate) function defined by

h(t) = lim_{Δt→0} P(X < t + Δt | X ≥ t)/Δt = lim_{Δt→0} (F(t + Δt) − F(t))/(Δt R(t)) = f(t)/R(t).

Let us show how R(t) can be expressed by the help of h(t). Namely,

∫_0^t h(x) dx = ∫_0^t −R'(x)/R(x) dx = [−ln R(x)]_0^t = −ln R(t),

since R(0) = 1. Thus

R(t) = e^{−∫_0^t h(x) dx}.

Let H(t) = ∫_0^t h(x) dx, which is called the cumulative failure rate (cumulative hazard rate) function. So we have

R(t) = e^{−H(t)}.

In the following R(t), h(t), H(t) are listed for some important distributions. These formulas can be computed by the definitions of the involved random variables. At the same time we show what the relationship is between the monotonicity of h(t) and C_X².

Exponential distribution

X ~ Exp(λ):  R(t) = e^{−λt},  h(t) = λ,  H(t) = λt,  C_X² = 1.

Erlang distribution

X ~ Erl(n, λ):

R(t) = Σ_{i=0}^{n−1} (λt)^i/i! e^{−λt},

h(t) = ( λ(λt)^{n−1}/(n−1)! e^{−λt} ) / ( Σ_{i=0}^{n−1} (λt)^i/i! e^{−λt} ) = λ(λt)^{n−1} / ( (n−1)! Σ_{i=0}^{n−1} (λt)^i/i! ),

which is a monotone increasing function with image in the interval [0, λ), and

C_X² = (n/λ²) / (n/λ)² = 1/n.

Weibull distribution

X ~ W(λ, α):  R(t) = e^{−λt^α},

h(t) = λαt^{α−1} e^{−λt^α} / e^{−λt^α} = λαt^{α−1}.

That is, h(t) is monotone increasing for α > 1 and monotone decreasing for α < 1; for α = 1, h(t) = λ. Furthermore, H(t) = λt^α and

C_X² = ( (1/λ)^{2/α} ( Γ(1 + 2/α) − Γ²(1 + 1/α) ) ) / ( (1/λ)^{1/α} Γ(1 + 1/α) )² = 2αΓ(2/α)/Γ²(1/α) − 1.

It can be shown that

C_X² > 1, if 0 < α < 1,
C_X² < 1, if α > 1.

Pareto distribution

X ~ Par(k, α), t ≥ k:

R(t) = (k/t)^α,  h(t) = α/t,  H(t) = α ln(t/k),

C_X² = ( k²α/(α−2) − (kα/(α−1))² ) / ( kα/(α−1) )² = (α−1)²/((α−2)α) − 1 = 1/((α−2)α),  α > 2.

Seeing the above examples we might expect that if h(t) is a monotone increasing (decreasing) function then C_X² < 1 (C_X² > 1). However, in the case of the Pareto distribution h(t) is monotone decreasing, but it is easy to show that C_X² < 1 if α > 1 + √2 and C_X² > 1 if 2 < α < 1 + √2.

2.3 Generation of Random Numbers

We have seen that the generation of random numbers can be carried out by the help of the following formula, sometimes called the inverse transformation method:

Y = F_X(X) ~ E(0, 1), and thus X = F_X^{−1}(Y),

where E(0, 1) denotes the uniform distribution on (0, 1). In the next examples we show how to generate random numbers having important continuous distributions.

Example 10 Generate an exponentially distributed random number with parameter λ.

If X ~ Exp(λ) and Y ~ E(0, 1) then 1 − e^{−λX} = Y, so if we can generate a random number Y uniformly distributed on (0, 1), then the exponentially distributed random number is

X = −(1/λ) ln(1 − Y).

Example 11 Generate a random number having Erlang distribution with parameters (n, λ).

Keeping in mind the representation of the Erlang distribution it is easy to see that

Y_n = X_1 + ... + X_n = −(1/λ) ln( ∏_{i=1}^n (1 − Y_i) ),

where Y_i ~ E(0, 1), i = 1,...,n, gives the desired random number.

Example 12 Generate a hypoexponentially distributed random number.

As the hypoexponential distribution is the generalization of the Erlang distribution, the following formula results in the desired random number:

Y_n = X_1 + ... + X_n = −(1/λ_1) ln(1 − Y_1) − ... − (1/λ_n) ln(1 − Y_n).

Example 13 Generate a hyperexponentially distributed random number.

In the first step let us generate a uniformly distributed random number Y on (0, 1) and then choose the index i for which

Σ_{j=1}^{i−1} p_j < Y ≤ Σ_{j=1}^{i} p_j.

In the second step let us generate an exponentially distributed random number with parameter λ_i as we discussed earlier.

Example 14 Generate a Weibull distributed random number.

Y = 1 − e^{−λX^α}, thus X = ( −(1/λ) ln(1 − Y) )^{1/α}.

Example 15 Generate a Pareto distributed random number.

Y = 1 − (k/X)^α, thus X = k(1 − Y)^{−1/α}.
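The inverse transformation method of Examples 10-15 translates almost verbatim into code. The sketch below (helper names are my own) implements several of the generators and checks two of them: the exponential one against its mean 1/λ, and the hyperexponential one against Σ p_i/λ_i.

```python
import math
import random

random.seed(2)

def exp_rv(lam, u):
    """Inverse-transform sample of Exp(lam) from a uniform u in (0,1)."""
    return -math.log(1.0 - u) / lam

def weibull_rv(lam, alpha, u):
    """F(x) = 1 - exp(-lam * x**alpha)."""
    return (-math.log(1.0 - u) / lam) ** (1.0 / alpha)

def pareto_rv(k, alpha, u):
    """F(x) = 1 - (k / x)**alpha, x >= k."""
    return k * (1.0 - u) ** (-1.0 / alpha)

def hyperexp_rv(ps, lams):
    """Pick phase i with probability ps[i], then draw Exp(lams[i])."""
    u, acc = random.random(), 0.0
    for prob, lam in zip(ps, lams):
        acc += prob
        if u <= acc:
            return exp_rv(lam, random.random())
    return exp_rv(lams[-1], random.random())

runs, lam = 200_000, 1.5
mean_exp = sum(exp_rv(lam, random.random()) for _ in range(runs)) / runs
mean_h2 = sum(hyperexp_rv([0.4, 0.6], [1.0, 2.0]) for _ in range(runs)) / runs
print(round(mean_exp, 3), round(mean_h2, 3))
```

Note that for α = 1 the Weibull generator reduces to the exponential one, as it should.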

2.4 Random Sums

Definition 9 Let ν ∈ {0, 1, 2, 3,...} be a random variable and let {X_i}, i ≥ 1, be independent identically distributed random variables that are independent of ν, too. The random variable

Y_ν = X_1 + ... + X_ν  (with Y_0 = 0)

is called a random sum.

The distribution of Y_ν can be obtained by using the theorem of total probability. Similarly, the moments of the random sum can be calculated by the help of the theorem of total moments.

Discrete case:

P(Y_ν = n) = Σ_k P(Y_k = n) P(ν = k),  E(Y_ν^l) = Σ_k E(Y_k^l) P(ν = k).

Continuous case:

f_{Y_ν}(x) = Σ_k f_{Y_k}(x) P(ν = k),  F_{Y_ν}(x) = Σ_k F_{Y_k}(x) P(ν = k),

E(Y_ν^l) = Σ_k ( ∫_0^∞ x^l f_{Y_k}(x) dx ) P(ν = k) = Σ_k E(Y_k^l) P(ν = k).

Example 16 Let f_{X_i}(x) = λe^{−λx}, i = 1, 2,..., and let ν be geometrically distributed with parameter p. Find the density function of Y_ν.

Notice that Y_k is Erlang distributed with parameters (k, λ), hence by substituting its density function we have

f_{Y_ν}(x) = Σ_{k=1}^∞ ( λ(λx)^{k−1}/(k−1)! ) e^{−λx} p(1−p)^{k−1} = λp e^{−λx} Σ_{j=0}^∞ (λx(1−p))^j/j! = λp e^{−λx} e^{λx(1−p)} = λp e^{−λpx}.

It means that Y_ν ~ Exp(λp).

Theorem 1 Mean of a random sum:

E(Y_ν) = EX_1 · Eν.

Proof: By the law of total expectation we have

E(Y_ν) = Σ_k E(Y_k) P(ν = k) = Σ_k k EX_1 P(ν = k) = EX_1 Σ_k k P(ν = k) = EX_1 Eν.

Theorem 2 Variance of a random sum:

Var(Y_ν) = Var(X_1) Eν + E²X_1 Var(ν).

Proof: By applying the theorem of total moments we get

E(Y_ν²) = Σ_k E(Y_k²) P(ν = k) = Σ_k E[(X_1 + ... + X_k)²] P(ν = k)
= Σ_k ( k Var(X_1) + k² E²X_1 ) P(ν = k)
= Var(X_1) Σ_k k P(ν = k) + E²X_1 Σ_k k² P(ν = k) = Var(X_1) Eν + E²X_1 Eν².

Thus

Var(Y_ν) = Var(X_1) Eν + E²X_1 Eν² − E²X_1 E²ν = Var(X_1) Eν + E²X_1 Var(ν).
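Theorems 1 and 2 can be verified by simulation. The sketch below (illustrative, not from the text) draws a geometric number ν of Exp(λ) summands, so by Example 16 the sum is Exp(λp); the empirical mean and variance are compared with EX_1 Eν and Var(X_1)Eν + E²X_1 Var(ν).

```python
import random

random.seed(3)

lam, p, runs = 2.0, 0.25, 200_000

def geometric(p):
    """Number of Bernoulli(p) trials up to and including the first success."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

totals = []
for _ in range(runs):
    nu = geometric(p)
    totals.append(sum(random.expovariate(lam) for _ in range(nu)))

sim_mean = sum(totals) / runs
sim_var = sum((t - sim_mean) ** 2 for t in totals) / runs

# Theorem 1: E(Y) = EX1 * Enu = (1/lam)*(1/p).  Theorem 2 with
# Var(X1)=1/lam^2, Enu=1/p, Var(nu)=(1-p)/p^2 gives Var(Y)=1/(lam*p)^2,
# consistent with Y ~ Exp(lam*p) from Example 16.
exact_mean = 1.0 / (lam * p)
exact_var = 1.0 / (lam * p) ** 2
print(round(sim_mean, 2), round(sim_var, 2))
```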

Chapter 3

Analytic Tools, Transforms

The concept of a transform appears naturally in the investigation of different problems in mathematics, physics, and the engineering sciences. The main reason to introduce transforms is that they greatly simplify the calculations. The type of transformation depends on the problem itself, which is why a variety of transforms occur, see for example the Z-transform, moment generating function, Laplace-transform, Fourier-transform, Mellin-transform, Hankel-transform, etc. Moreover, they may have different names as well, e.g., probability generating function, characteristic function. This chapter is devoted to the probability generating function and the Laplace-transform, which are closely related to discrete and continuous nonnegative random variables, respectively. Their usefulness will be illustrated by several examples. Of course, there are many books dealing with special transforms, but in this material I concentrate on our needs only, keeping in mind their applications in queueing theory. As basic sources I recommend the following books: Allen [1], Kleinrock [5], Trivedi [4].

3.1 Generating Function

Definition 10 Let X be a nonnegative discrete random variable having distribution P(X = n) = p_n, n = 0, 1, 2,.... Then the generating function G_X(s) of X is defined as

G_X(s) = Σ_{k=0}^∞ s^k p_k = E(s^X).

G_X(s) is defined whenever the series is convergent.

Theorem 3 The generating function has the following properties:

1. G_X(1) = 1,
2. |G_X(s)| ≤ 1, if |s| ≤ 1,
3. E(X) = G'_X(1),
4. E(X²) = G''_X(1) + G'_X(1),
5. p_k = G_X^{(k)}(0)/k!, k = 0, 1, 2,....

Proof:

1. G_X(1) = Σ_{k=0}^∞ 1^k p_k = Σ_{k=0}^∞ p_k = 1.

2. |G_X(s)| ≤ Σ_{k=0}^∞ |s|^k p_k ≤ Σ_{k=0}^∞ p_k = 1.

3. G'_X(s) = Σ_{k=0}^∞ (s^k)' p_k = Σ_{k=1}^∞ k s^{k−1} p_k, thus G'_X(1) = Σ_{k=1}^∞ k p_k = EX.

4. G''_X(s) = Σ_{k=1}^∞ (k s^{k−1})' p_k = Σ_{k=2}^∞ k(k−1) s^{k−2} p_k, thus G''_X(1) = Σ_k k² p_k − Σ_k k p_k = E(X²) − E(X).

Collecting the terms we get

Var(X) = G''_X(1) + G'_X(1) − (G'_X(1))².

Theorem 4 If X_1,...,X_n are independent then

G_{X_1+...+X_n}(s) = ∏_{i=1}^n G_{X_i}(s).

Proof: In the proof we use the theorem that if random variables are independent then the mean of their product is equal to the product of their means. Thus we can write

G_{X_1+...+X_n}(s) = E(s^{X_1+...+X_n}) = E(s^{X_1} ... s^{X_n}) = ∏_{i=1}^n E(s^{X_i}) = ∏_{i=1}^n G_{X_i}(s).

Theorem 5 Generating function of a random sum:

G_{Y_ν}(s) = G_ν(G_X(s)).

Proof: By the law of total expectation we get

G_{Y_ν}(s) = Σ_{n=0}^∞ E(s^{Y_n}) P(ν = n) = Σ_{n=0}^∞ (G_X(s))^n P(ν = n) = G_ν(G_X(s)).

The following two theorems play an important role in many applications and simplify the calculation of limiting distributions.

Theorem 6 (Continuity Theorem) Let X_1,...,X_n,... be a sequence of nonnegative, integer valued random variables. If the sequence of the corresponding distributions converges to a distribution, that is

lim_{n→∞} p_nk = p_k (k = 0, 1,...), and Σ_{k=0}^∞ p_k = 1, where p_nk = P(X_n = k), (k = 0, 1,..., n = 1, 2,...),

then the corresponding generating functions of X_n converge to the generating function of {p_k} at any point of [0, 1], that is

lim_{n→∞} G_n(s) = G(s), (0 ≤ s ≤ 1),

where

G_n(s) = Σ_{k=0}^∞ p_nk s^k = G_{X_n}(s),  G(s) = Σ_{k=0}^∞ p_k s^k.

If the limit lim_{n→∞} p_nk = p_k exists, but Σ_{k=0}^∞ p_k < 1, then the convergence of the generating functions holds only in [0, 1).

Comment 1 For illustration let us see the following example. Let X_n = n, that is p_nn = 1 and p_nk = 0 if k ≠ n. Then lim_{n→∞} p_nk = 0 (k = 0, 1,...). However,

lim_{n→∞} G_n(s) = lim_{n→∞} s^n = 0, if |s| < 1; = 1, if s = 1; and it does not exist for s = −1.

Theorem 7 (Continuity Theorem) If the sequence of generating functions of X_n converges to a function G(s) on 0 ≤ s ≤ 1, then the sequence of the corresponding distributions of X_n converges to a probability distribution with generating function G(s).

Comment 2 If we assume that lim_{n→∞} G_n(s) = G(s) exists only on 0 ≤ s < 1, then G(s) is not necessarily a generating function, as we see in the following example.

Example 17 If a random variable X_n takes the values 0 and n with the same probability then G_n(s) = (1 + s^n)/2 and thus lim_{n→∞} G_n(s) = 1/2 for |s| < 1. It is easy to see that Σ_k p_k = 1 is not valid since

lim_{n→∞} p_nk = p_k = 1/2, if k = 0, and 0 otherwise.

Generating Function of Important Distributions

Example 18 Find the generating function of a Bernoulli distribution, then the mean and variance.

G_{χ(A)}(s) = s^0 (1 − p) + s^1 p = 1 + p(s − 1),
G'_{χ(A)}(s) = p,  Eχ(A) = p,  E(χ(A)²) = 0 + p = p,  Var χ(A) = p − p² = p(1 − p).

Example 19 Find the generating function of a Poisson distribution with parameter λ, and then the mean and variance.

G_X(s) = Σ_{k=0}^∞ s^k (λ^k/k!) e^{−λ} = e^{−λ} Σ_{k=0}^∞ (λs)^k/k! = e^{−λ} e^{sλ} = e^{−λ(1−s)},

G'_X(s) = e^{−λ(1−s)} λ, so G'_X(1) = λ;  G''_X(s) = e^{−λ(1−s)} λ², so G''_X(1) = λ²;

Var(X) = λ² + λ − λ² = λ.

Example 20 By using the generating function approach find the convolution of two Poisson distributions.

Applying the properties of the generating function it is easy to get the generating function of the sum, and by taking into account the generating function of a Poisson distribution we have

G_{X+Y}(s) = e^{−λ(1−s)} e^{−µ(1−s)} = e^{−(λ+µ)(1−s)},

which is exactly the generating function of a Poisson distribution with parameter λ + µ.

Example 21 By the help of generating functions show that B(n, p) → Po(λ) if n → ∞, p → 0 such that np → λ. Use that if a_n → A then (1 + a_n/n)^n → e^A.

We are going to show that the generating function of the binomial distribution converges to the generating function of the Poisson distribution, that is

G_{X_n}(s) = (1 − p(1 − s))^n = ( 1 − np(1 − s)/n )^n → e^{−λ(1−s)},

which is the generating function of a Poisson distribution with parameter λ.

Example 22 Let X_i ~ χ(A) be independent and ν ~ Po(λ). Find the generating function of the random sum Y_ν.

By using the relationship for the generating function of a random sum and taking into account the form of the generating functions of the Bernoulli and Poisson distributions, after substitution we get the desired result. Therefore we can write

G_{Y_ν}(s) = G_ν(G_{X_i}(s)) = e^{−λ(1−(1+p(s−1)))} = e^{−λp(1−s)},

which shows that Y_ν ~ Po(λp).
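The moment properties of Theorem 3 can be illustrated numerically: differentiating the Poisson generating function G(s) = e^{−λ(1−s)} at s = 1 by finite differences recovers E(X) = λ and Var(X) = λ. The Python sketch below is only an illustration, with an ad hoc step size.

```python
import math

lam = 3.0
G = lambda s: math.exp(-lam * (1.0 - s))   # generating function of Po(lam)

h = 1e-5
g1 = (G(1 + h) - G(1 - h)) / (2 * h)            # central difference ~ G'(1) = E(X)
g2 = (G(1 + h) - 2 * G(1) + G(1 - h)) / h ** 2  # second difference  ~ G''(1)

mean = g1                      # property 3 of Theorem 3
var = g2 + g1 - g1 ** 2        # Var(X) = G''(1) + G'(1) - (G'(1))^2
print(round(mean, 4), round(var, 4))
```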

Example 23 Solve the following system of differential equations:

dP_0(t)/dt = −λP_0(t),
dP_k(t)/dt = −λP_k(t) + λP_{k−1}(t), k = 1, 2,...,

P_k(0) = 1, if k = 0, and 0, if k ≠ 0.

Multiplying both sides of the equations by the appropriate power of s we get

dP_0(t)/dt = −λP_0(t),
s dP_1(t)/dt = ( −λP_1(t) + λP_0(t) ) s,
...
s^k dP_k(t)/dt = ( −λP_k(t) + λP_{k−1}(t) ) s^k = −λ s^k P_k(t) + λs s^{k−1} P_{k−1}(t).

Let us introduce the generating function G(t, s) as

G(t, s) = Σ_{k=0}^∞ s^k P_k(t).

By adding both sides we have

∂G(t, s)/∂t = −λ Σ_{k=0}^∞ s^k P_k(t) + λs Σ_{k=1}^∞ s^{k−1} P_{k−1}(t) = −λ G(t, s) + λs G(t, s).

The initial condition is

G(0, s) = Σ_{k=0}^∞ s^k P_k(0) = 1.

Thus the system of differential equations reduces to a single differential equation, namely

∂G(t, s)/∂t = −λ(1 − s) G(t, s),

with initial condition G(0, s) = 1.

Rearranging the terms we get

( ∂G(t, s)/∂t ) / G(t, s) = −λ(1 − s),

and the solution is

ln G(t, s) = −λt(1 − s) + ln C.

Since G(t, s) = C e^{−λt(1−s)} and G(0, s) = 1, we have G(0, s) = C e^0 = C, that is C = 1. Hence

G(t, s) = e^{−λt(1−s)},

which shows that G(t, s) is the generating function of a Poisson distribution with parameter λt. Therefore the solution of the system of differential equations is

P_k(t) = (λt)^k/k! e^{−λt}, k = 0, 1,....

3.2 Laplace-Transform

Definition 11 Let X be a nonnegative random variable with density function f_X(x). The Laplace-transform L_X(s) of X is defined as

L_X(s) = ∫_0^∞ e^{−sx} f_X(x) dx = E(e^{−sX}) = f*_X(s).

Theorem 8 The Laplace-transform has the following properties:

1. L_X(0) = 1,
2. 0 ≤ L_X(s) ≤ 1, if s ≥ 0,
3. If X_1,...,X_n are independent random variables then

L_{X_1+...+X_n}(s) = ∏_{i=1}^n L_{X_i}(s),

4. E(X^n) = (−1)^n L_X^{(n)}(0).

Proof:

1. L_X(0) = ∫_0^∞ e^{0·x} f_X(x) dx = ∫_0^∞ f_X(x) dx = 1.

2. The lower bound of L_X(s) comes from the observation that e^{−sx} f_X(x) is nonnegative and thus the integral is also nonnegative. For the upper bound we have

L_X(s) = ∫_0^∞ e^{−sx} f_X(x) dx ≤ ∫_0^∞ f_X(x) dx = 1.

3.

L_{X_1+...+X_n}(s) = E(e^{−s(X_1+...+X_n)}) = E(e^{−sX_1} ... e^{−sX_n}) = ∏_{i=1}^n E(e^{−sX_i}) = ∏_{i=1}^n L_{X_i}(s),

because if X_1,...,X_n are independent then e^{−sX_1},...,e^{−sX_n} are also independent and hence the multiplicative law is valid.

4.

L_X^{(n)}(0) = ∫_0^∞ ( (e^{−sx})^{(n)} |_{s=0} ) f_X(x) dx = ∫_0^∞ (−x)^n f_X(x) dx = (−1)^n ∫_0^∞ x^n f_X(x) dx = (−1)^n E(X^n),

hence E(X^n) = (−1)^n L_X^{(n)}(0).

The main advantage of the Laplace-transform is that it can be used to solve differential equations. It should be noted that the Laplace-transform can be applied to any function defined on the nonnegative half-line. Since in the following we would like to solve some differential equations, we need

Theorem 9 The Laplace-transform has the following properties:

1. (af(x) + bg(x))*(s) = aF*(s) + bG*(s),
2. (f'(x))*(s) = sF*(s) − f(0), if lim_{x→∞} f(x) e^{−sx} = 0.

Proof:

1. (af(x) + bg(x))*(s) = ∫_0^∞ e^{−sx} ( af(x) + bg(x) ) dx = a ∫_0^∞ e^{−sx} f(x) dx + b ∫_0^∞ e^{−sx} g(x) dx = aF*(s) + bG*(s).

2. Using integration by parts,

(f'(x))*(s) = ∫_0^∞ e^{−sx} f'(x) dx = [f(x) e^{−sx}]_0^∞ + s ∫_0^∞ e^{−sx} f(x) dx = sF*(s) − f(0).

Theorem 20 Laplace-transform of a random sum:

L_{Y_ν}(s) = G_ν(L_X(s)).

Proof: By the theorem of total expectation we have

L_{Y_ν}(s) = E(e^{−sY_ν}) = Σ_{n=0}^∞ E(e^{−sY_n}) P(ν = n) = Σ_{n=0}^∞ (L_X(s))^n P(ν = n) = G_ν(L_X(s)).

Due to their practical importance we state the following theorems without proof.

Theorem 21 F*(s) yields the following limits:

Initial value theorem:  lim_{s→∞} sF*(s) = lim_{t→0} f(t),
Final value theorem:  lim_{s→0} sF*(s) = lim_{t→∞} f(t).

Theorem 22 (Post–Widder inversion formula) If f(y) is a continuous and bounded function on (0, ∞) then

f(y) = lim_{n→∞} ( (−1)^{n−1}/(n−1)! ) (n/y)^n ( d^{n−1} L(f)(s)/ds^{n−1} ) |_{s = n/y}.

Theorem 23 (Continuity Theorem) Let X_1,...,X_n,... be a sequence of random variables having distribution functions F_1(x), F_2(x),...,F_n(x),.... If lim_{n→∞} F_n(x) = F(x), where F(x) is the distribution function of some random variable X, then for the corresponding Laplace-transforms we have lim_{n→∞} E(e^{−sX_n}) = E(e^{−sX}); and conversely, if the sequence of Laplace-transforms converges to a function, then for the corresponding distribution functions we have lim_{n→∞} F_n(x) = F(x), and the limiting function is the Laplace-transform of some random variable X with distribution function F(x).

In the following let us find the Laplace-transform of some important distributions.

Example 24 Find the Laplace-transform if X ~ Exp(λ).

L_X(s) = ∫_0^∞ e^{−sx} λe^{−λx} dx = ( λ/(λ+s) ) ∫_0^∞ (λ+s) e^{−(λ+s)x} dx = λ/(λ+s),

where the remaining integral equals 1, being the integral of the Exp(λ+s) density.
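The closed form of Example 24 can be confirmed by direct numerical integration. The sketch below (an illustration only; the truncation point and step size are ad hoc) approximates the defining integral with a midpoint rule and compares it with λ/(λ+s).

```python
import math

lam, s = 2.0, 0.7

# Midpoint-rule approximation of L_X(s) = integral_0^inf e^{-s x} lam e^{-lam x} dx,
# truncated at upper (the integrand tail beyond it is negligible).
h, upper = 1e-4, 40.0
n = int(upper / h)
integral = sum(
    math.exp(-s * (i + 0.5) * h) * lam * math.exp(-lam * (i + 0.5) * h) * h
    for i in range(n)
)

exact = lam / (lam + s)
print(round(integral, 6), round(exact, 6))
```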

Example 25 Find the Laplace-transform if X ~ Erl(n, λ).

Since X is the sum of n independent exponentially distributed random variables with the same parameter, by applying the convolution property of the Laplace-transform we have

L_X(s) = ( λ/(λ+s) )^n.

Example 26 Find the Laplace-transform of a hypoexponential distribution.

Since a hypoexponentially distributed random variable is the sum of independent exponentially distributed random variables with different parameters, we obtain

L_{Y_n}(s) = ∏_{i=1}^n λ_i/(λ_i + s).

In the next example we show how the nth moment of an exponentially distributed random variable can be obtained in a simple way by using one of the properties of the Laplace-transform. This calculation is rather cumbersome by the density function approach.

Example 27 Using the Laplace-transform show that if X ~ Exp(λ), then E(X^n) = n!/λ^n.

E(X^n) = (−1)^n L_X^{(n)}(0) = (−1)^n ( λ/(λ+s) )^{(n)} |_{s=0} = (−1)^n λ ( (λ+s)^{−1} )^{(n)} |_{s=0}
= (−1)^n λ (−1)(−2)...(−n) (λ+s)^{−(n+1)} |_{s=0} = (−1)^{2n} λ n!/λ^{n+1} = n!/λ^n.

Example 28 Let ν be a geometrically distributed counting random variable and let the summands be X_i ~ Exp(λ). Find the distribution of the random sum.

Knowing that if ν ~ Geo(p) then G_ν(z) = zp/(1 − z(1−p)), we have

L_{Y_ν}(s) = G_ν(L_X(s)) = ( λp/(λ+s) ) / ( 1 − (1−p) λ/(λ+s) ) = λp/(λp + s),

which is exactly the Laplace-transform of Y_ν ~ Exp(λp).

Theorem 24 The Laplace-transform of a mixture distribution is the mixture of the corresponding Laplace-transforms.

Proof: Let

f_Y(x) = Σ_i p_i f_{X_i}(x).

Then

L_Y(s) = ∫_0^∞ e^{−sx} ( Σ_i p_i f_{X_i}(x) ) dx = Σ_i p_i ∫_0^∞ e^{−sx} f_{X_i}(x) dx = Σ_i p_i L_{X_i}(s).

Example 29 Find the Laplace-transform of the function g(t) = (λt)^k/k! e^{−λt}.

g*(s) = (λ^k/k!) ∫_0^∞ e^{−st} t^k e^{−λt} dt = (λ^k/k!) ( 1/(λ+s) ) ∫_0^∞ (s+λ) t^k e^{−(s+λ)t} dt
= (λ^k/k!) ( 1/(λ+s) ) ( k!/(λ+s)^k ) = ( 1/(λ+s) ) ( λ/(λ+s) )^k,

where we used that the remaining integral is the kth moment of an Exp(λ+s) random variable, that is k!/(λ+s)^k.

Example 30 Use the Laplace-transform to solve the system of differential equations

P'_0(t) = −λP_0(t),
P'_k(t) = −λP_k(t) + λP_{k−1}(t), k = 1, 2,...,

with initial conditions P_k(0) = 1, if k = 0, and 0, if k ≠ 0.

By taking the Laplace-transform of both sides we have

(P'_0(t))*(s) = −λ(P_0(t))*(s),
(P'_k(t))*(s) = −λ(P_k(t))*(s) + λ(P_{k−1}(t))*(s), k = 1, 2,....

Using integration by parts we get

∫_0^∞ e^{−st} P'_k(t) dt = [e^{−st} P_k(t)]_0^∞ + s ∫_0^∞ e^{−st} P_k(t) dt.

Assuming that P_k(t) is bounded, that is P_k(t) < K, then

[e^{−st} P_k(t)]_0^∞ = −P_k(0).

Consequently

(P'_k(t))*(s) = −P_k(0) + sP*_k(s).

Using the initial condition we obtain

(P'_0)*(s) = −1 + sP*_0(s), and (P'_k)*(s) = sP*_k(s) for k ≥ 1.

After substitution we get

−1 + sP*_0(s) = −λP*_0(s),

thus

P*_0(s) = 1/(λ + s).

Furthermore

sP*_k(s) = −λP*_k(s) + λP*_{k−1}(s),

hence

P*_k(s) = ( λ/(λ+s) ) P*_{k−1}(s),

and thus it is easy to see that

P*_k(s) = ( 1/(λ+s) ) ( λ/(λ+s) )^k.

Keeping in mind the previous example, finally we get the solution as

P_k(t) = (λt)^k/k! e^{−λt}, k = 0, 1, 2,....


Chapter 4

Stochastic Systems

This chapter is devoted to the stochastic modeling of dynamic systems. The most well-known stochastic process, the Poisson process, is introduced, and its relationship to other distributions is shown. Simple stochastic systems are investigated, serving as building blocks for more complex ones. Different methods and approaches are used to get the main performance measures of these systems. A variety of examples helps the reader to understand the topic. The material is based mainly on the following books: Allen [1], Ovcharov [7], Trivedi [4].

4.1 Poisson Process

Definition 12 Let τ_1, τ_2,... be nonnegative, independent and identically distributed random variables. The random variable counting the number of events until time t, that is

ν(t) = max{ n : Σ_{i=1}^n τ_i < t },

is called a renewal process, and its mean m(t) = Eν(t) is referred to as the renewal function.

Theorem 25 If τ_1, τ_2,... are exponentially distributed with parameter λ then

P(ν(t) = k) = (λt)^k/k! e^{−λt}.

Proof: It can be seen from the construction that S_n = Σ_{i=1}^n τ_i is Erlang distributed with parameters (n, λ), thus its distribution function is

P(S_n < x) = F_{S_n}(x) = 1 − Σ_{i=0}^{n−1} (λx)^i/i! e^{−λx}.

In the proof we use that if an event A implies event B, denoted by A ⊂ B in probability theory, then P(B \ A) = P(B) − P(A). Clearly, in our case event A can be defined as {S_{n+1} < t} and event B as {S_n < t}.

It can easily be seen that exactly k events occur iff S_k < t and S_{k+1} ≥ t. Hence

P(ν(t) = k) = P(S_k < t, S_{k+1} ≥ t) = F_{S_k}(t) − F_{S_{k+1}}(t)
= ( 1 − Σ_{i=0}^{k−1} (λt)^i/i! e^{−λt} ) − ( 1 − Σ_{i=0}^{k} (λt)^i/i! e^{−λt} ) = (λt)^k/k! e^{−λt}.

As we can see, the number of events that happened until time t is Poisson distributed with parameter λt, and the process is called a Poisson process with rate λ. It is not difficult to verify that

1. P(ν(h) = 0) = e^{−λh} = 1 − λh + o(h),
2. P(ν(h) = 1) = λh e^{−λh} = λh(1 − λh + o(h)) = λh − (λh)² + λh o(h) = λh + o(h),
3. P(ν(h) ≥ 2) = 1 − [ (1 − λh + o_1(h)) + (λh + o_2(h)) ] = o(h).

Definition 13 Rarity condition:

lim_{h→0} P(ν(h) ≥ 2)/P(ν(h) = 1) = lim_{h→0} o(h)/(λh + o(h)) = lim_{h→0} (o(h)/h)/(λ + o(h)/h) = 0.

Let ν(t, t+h) denote the number of events (number of renewals) that occur in the interval (t, t+h). Due to the construction of the Poisson process and the memoryless property of the exponential distribution one can easily see that the distribution of ν(t, t+h) depends only on h, irrespective of the position of the interval (time-homogeneity). In addition, the numbers of renewals that happen during non-intersecting intervals are independent random variables (independent increments).

The Poisson process has been introduced as a counting process with probability distribution P_k(t) for the number of arrivals during a given interval of length t, namely we have

P(ν(t) = k) = (λt)^k/k! e^{−λt}.

Let us investigate the joint distribution of the arrival epochs during a given time interval of length t when it is known in advance that exactly k arrivals have occurred during that interval. Let us divide the interval (0, t) into 2k + 1 nonoverlapping intervals in the following way. The interval of length α_i always precedes the interval of length β_i (i = 1,...,k), the last interval is of length α_{k+1}, and in addition

Σ_{i=1}^{k+1} α_i + Σ_{i=1}^{k} β_i = t.

Let A_k denote the event that exactly one arrival occurs in each of the intervals β_i (i = 1, 2,...,k), and that no arrival occurs in any of the intervals α_i (i = 1, 2,...,k+1).

We would like to calculate the probability of the event A_k given that exactly k arrivals have occurred in the interval (0, t). By the definition of the conditional probability we have

P(A_k | exactly k arrivals in (0, t)) = P(A_k and exactly k arrivals in (0, t)) / P(exactly k arrivals in (0, t)).

When the numbers of arrivals of a Poisson process during nonoverlapping intervals are considered, they can be viewed as independent random variables with Poisson distribution. Thus the probability of the joint events may be calculated as the product of the individual probabilities (the Poisson process has independent increments). Therefore

P(one arrival in an interval of length β_i) = λβ_i e^{−λβ_i},
P(no arrival in an interval of length α_i) = e^{−λα_i}.

By using these probabilities we immediately get

P(A_k | exactly k arrivals in (0, t)) = ( λβ_1 ... λβ_k e^{−λβ_1} ... e^{−λβ_k} )( e^{−λα_1} ... e^{−λα_{k+1}} ) / ( ((λt)^k/k!) e^{−λt} ) = β_1 β_2 ... β_k k! / t^k.

On the other hand, let us consider another process that selects k points in the interval (0, t) independently, where each point has uniform distribution over this interval. It can easily be verified that

P(A_k | exactly k arrivals in (0, t)) = (β_1/t)(β_2/t)...(β_k/t) k!,

where the term k! comes about because the permutations of the k points are not distinguished. Since these two conditional distributions are the same, we can conclude that if in an interval of length t there are exactly k arrivals from a Poisson process, then the joint distribution of the moments when these arrivals occurred is the same as the distribution of k points independently and uniformly distributed over the same interval.

Example 31 What is the rate (intensity) of the renewals?

lim_{t→∞} E(ν(t))/t = lim_{t→∞} λt/t = λ = 1/Eτ_1.
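Theorem 25 can be checked empirically by simulating the renewal process directly from its Exp(λ) interarrival times and comparing the empirical distribution of ν(t) with the Poisson probabilities. A small Python sketch (names are my own):

```python
import math
import random

random.seed(4)

def count_renewals(lam, t):
    """nu(t): number of Exp(lam) renewal epochs strictly before time t."""
    n, s = 0, random.expovariate(lam)
    while s < t:
        n += 1
        s += random.expovariate(lam)
    return n

lam, t, runs = 1.0, 2.0, 100_000
counts = [count_renewals(lam, t) for _ in range(runs)]

k = 2
sim_pk = counts.count(k) / runs
exact_pk = (lam * t) ** k / math.factorial(k) * math.exp(-lam * t)
print(round(sim_pk, 3), round(exact_pk, 3))
```

The empirical mean of the counts also approximates Eν(t) = λt, in line with Example 31.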

Definition 14 (Stochastic convergence, convergence in probability) A sequence of random variables X_n is said to converge in probability to a random variable X if, for any ε > 0,

lim_{n→∞} P(|X_n − X| ≥ ε) = 0.

Theorem 26 ν(t)/t converges in probability to λ.

Proof: By applying the Chebyshev inequality and observing that

E(ν(t)/t) = λt/t = λ,  Var(ν(t)/t) = λt/t² = λ/t,

we get

P( |ν(t)/t − λ| ≥ ε ) ≤ λ/(tε²),

which implies that

lim_{t→∞} P( |ν(t)/t − λ| ≥ ε ) = 0.

Definition 15 The rate of renewals is defined as lim_{t→∞} m(t)/t.

Theorem 27 (The Elementary Renewal Theorem)

lim_{t→∞} m(t)/t = 1/Eτ_1.

Derivation of a system of differential equations for the Poisson process

Let P_k(t) denote the probability that exactly k events occurred until time t. Due to the properties of the Poisson process the following system of equations can be written:

P_0(t + h) = P_0(t)(1 − λh + o(h)),
P_1(t + h) = P_1(t)(1 − λh + o(h)) + P_0(t)(λh + o(h)),
P_k(t + h) = P_k(t)(1 − λh + o(h)) + P_{k−1}(t)(λh + o(h)) + Σ_{j=2}^{k} P_{k−j}(t) P(j events happened during h),

where the last sum is o(h), with initial condition P_0(0) = 1.

The first equation can be rewritten as

P_0(t + h) − P_0(t) = −λhP_0(t) + o(h),

which implies

P'_0(t) = −λP_0(t).

Similarly, the other equations imply the following system of differential equations:

P'_0(t) = −λP_0(t),
P'_1(t) = −λP_1(t) + λP_0(t),
P'_k(t) = −λP_k(t) + λP_{k−1}(t),

with initial condition P_0(0) = 1. Notice that we have solved this system in Examples 23 and 30, and the solution is the Poisson distribution, that is

P_k(t) = (λt)^k/k! e^{−λt}, k = 0, 1,....

4.2 Performance Analysis of Some Simple Systems

In the next section several simple systems are investigated with the aim that, understanding their performance, more complicated ones can be analyzed.

Example 32 Let us consider a component having two states (operating or failed) and let us suppose that at time 0 it is operating. Find the probability that at time t it is failed, assuming that the operating times are exponentially distributed random variables with parameter λ and that they are independent of the repair times, which are exponentially distributed random variables with parameter µ.

To formulate the problem in mathematical terms let us introduce the following notation. Let

X(t) = 0, if at time t the component is operating,
X(t) = 1, if at time t the component is failed,

and let its distribution be denoted by P_i(t) = P(X(t) = i), i = 0, 1. Then by the help of the theorem of total probability and the memoryless property of the exponential distribution we get

P_0(t + h) = P_0(t)(1 − λh + o(h)) + P_1(t)(µh + o(h)) + o(h),
P_0(t) + P_1(t) = 1, which is called the normalizing condition,
P_0(0) = 1, which is the initial condition.

It is easy to see that after substitution and rearranging the terms we have

P_0(t + h) − P_0(t) = −λhP_0(t) + (1 − P_0(t))µh + o(h) = −(λ + µ)hP_0(t) + µh + o(h).

Hence the calculations reduce to

lim_{h→0} ( P_0(t + h) − P_0(t) )/h = −(λ + µ)P_0(t) + µ,

that is

P'_0(t) + (λ + µ)P_0(t) = µ, P_0(0) = 1,

which is a first-order inhomogeneous linear differential equation with constant coefficients. Its solution can be obtained in different ways. In the next part we show how to get the solution by applying the Laplace-transform method. In the meantime we shall use the following properties of the Laplace-transform:

(P'_0(t))*(s) = −P_0(0) + sP*_0(s),

and the Laplace-transform of a constant c is c/s. Taking the Laplace-transform of both sides and keeping in mind these properties, the transformed differential equation can be written as

−1 + sP*_0(s) + (λ + µ)P*_0(s) = µ/s,

that is

P*_0(s) = ( µ/s + 1 ) / ( s + λ + µ ).

By using the method of partial fractions we get

µ / ( s(s + λ + µ) ) = A/s + B/(s + λ + µ) = ( (A + B)s + A(λ + µ) ) / ( s(s + λ + µ) ),

resulting in

A + B = 0, A(λ + µ) = µ, thus A = µ/(λ + µ), B = −µ/(λ + µ).

Therefore

P*_0(s) = ( µ/(λ + µ) ) (1/s) + ( λ/(λ + µ) ) ( 1/(s + λ + µ) ).

Knowing that

(δe^{−δt})*(s) = δ/(s + δ), (e^{−δt})*(s) = 1/(s + δ),

and inverting the terms, we obtain the solution, namely

P_0(t) = µ/(λ + µ) + λ/(λ + µ) e^{−(λ+µ)t},

P_1(t) = λ/(λ + µ) − λ/(λ + µ) e^{−(λ+µ)t}.

If the initial condition is P_1(0) = 1, then the solution is

P_0(t) = µ/(λ + µ) − µ/(λ + µ) e^{−(λ+µ)t},
P_1(t) = λ/(λ + µ) + µ/(λ + µ) e^{−(λ+µ)t}.

Steady-state (stationary) distribution

Let P_i = lim_{t→∞} P_i(t), i = 0, 1. Then taking the limits in the corresponding equations it is easy to see that the solution is

at initial condition P_0(0) = 1:  P_0 = µ/(λ + µ), P_1 = λ/(λ + µ),
at initial condition P_1(0) = 1:  P_0 = µ/(λ + µ), P_1 = λ/(λ + µ).

Notice that we have the same distribution, so the initial condition has no effect on the limiting distribution.

Example 33 Find P_0, P_1 by the help of the steady-state balance equations.

Since in steady state the functions do not depend on time, their derivatives are zero. Hence from the corresponding differential equation and the normalizing condition we have

λP_0 = µP_1, P_0 + P_1 = 1, thus P_0 = µ/(λ + µ), P_1 = λ/(λ + µ).

Example 34 Find P_0 by the help of the expectations of the operating and repair times.

Let Y_i and X_i denote the operating times and repair times of the component, respectively. Let us assume that all these times are independent of each other. As time goes on, the states of the component alternate, and Y_i + X_i create so-called cycles which are independent of each other. It can be proved that the stationary probability that the component is operating is the ratio of the mean operating time and the mean

cycle time. In the case of exponentially distributed times

P_0 = EY/(EY + EX) = (1/λ) / ( (µ + λ)/(λµ) ) = µ/(λ + µ),

which is the same as we obtained in the previous example.

In reliability theory the distribution of the time to the first system failure plays a very important role. Obviously, this distribution depends on the initial condition of the system. The aim of the next example is to illustrate this topic.

Example 35 Let us consider a system consisting of two components having exponentially distributed operating times with parameter λ. The failed component is maintained by a single repairman and the repair times are supposed to be exponentially distributed random variables with parameter µ. If both components are failed, the system is said to be failed and the whole operation stops. Assuming that at the beginning both components are operating and they are independent of each other, find the mean time to the first system failure.

As in the previous example, let i denote the number of failed components, i = 0, 1, 2. Since the process stops once it enters state 2, it is an absorbing state. It is not difficult to see that the transition rates between the states can be illustrated as follows.

Figure 4.1: Transition rates in Example 35

It can easily be verified that for the distribution of the system the following differential equations with initial condition can be written:

P'_0(t) = −2λP_0(t) + µP_1(t),
P'_1(t) = −(λ + µ)P_1(t) + 2λP_0(t),
P'_2(t) = λP_1(t),
P_0(0) = 1.

It is enough to solve for P_0(t) and P_1(t), since P_2(t) = 1 − (P_0(t) + P_1(t)).

Taking the Laplace-transform of both sides and using the initial condition, we have

sP*_0(s) − 1 = −2λP*_0(s) + µP*_1(s),
sP*_1(s) = −(λ + µ)P*_1(s) + 2λP*_0(s),

and from the second equation

P*_1(s) = ( 2λ/(s + λ + µ) ) P*_0(s).

Substituting into the first equation,

(s + 2λ)P*_0(s) − ( 2λµ/(s + λ + µ) ) P*_0(s) = 1,
[ (2λ + s)(s + λ + µ) − 2λµ ] P*_0(s) = s + λ + µ,
[ (2λ + s)(s + λ) + sµ ] P*_0(s) = s + λ + µ.

Thus

P*_0(s) = (s + λ + µ) / ( s² + (3λ + µ)s + 2λ² ),
P*_1(s) = 2λ / ( s² + (3λ + µ)s + 2λ² ).

Distribution of the time to the first system failure

Let Y denote the time to the first system failure. It is easy to see that

P(Y < t) = P_2(t) = 1 − (P_0(t) + P_1(t)).

Thus

E(Y) = ∫_0^∞ P(Y > t) dt = ∫_0^∞ (P_0(t) + P_1(t)) dt = P*_0(0) + P*_1(0),

since P*_i(0) = ∫_0^∞ P_i(t) dt. Therefore

E(Y) = (λ + µ)/(2λ²) + 2λ/(2λ²) = (3λ + µ)/(2λ²) = 3/(2λ) + µ/(2λ²).

In particular, if µ = 0, which means that there is no repair, this formula simplifies to E(Y) = 3/(2λ) = 1/(2λ) + 1/λ, as we saw in the case of a parallel system. Of course, the density function of the time to the first system failure can also be obtained this way. Let us see how to get it.

Determination of the density function f_Y(t)

To get f_Y(t) let us notice that f_Y(t) = P'_2(t). Using the properties of the Laplace-transform this can be transformed to

f*_Y(s) = (P'_2)*(s) = sP*_2(s) − P_2(0)

( ) sp2 (s) s( P (t) P (t)) (s) s s P (s) P (s). Alternatively, at balance equation taking the Laplace-transform we have f Y (s) λp (s) P 2(t) λp (t) 2λ 2 s 2 + (3λ + µ)s + 2λ 2 2λ2 α α 2 ( ) s + α 2 s + α where Thus Therefore E(Y ) s 2 + (3λ + µ)s + 2λ 2 (s + α )(s + α 2 ) α,2 (3λ + µ) ± λ 2 + 6λµ + µ 2. 2 f Y (t) yf Y (y) dy 2λ2 α α 2 (e α 2t e α t ). 2λ2 α α 2 [ α 2 2 α 2 ] 2λ2 (α + α 2 ) (α α 2 ) 2 2λ2 (3λ + µ) (2λ 2 ) 2 3 2λ + µ 2λ 2. Example 36 Modify the initial condition and let us assume that the system operation start with operating component. Find the mean time to the first system failure. Let us notice that we have the same system of differential equations just the initial condition has changed. That is P (t) 2λP (t) + µp (t) P (t) (λ + µ)p (t) + 2λP (t) P 2(t) λp (t) P (). Similarly to the previous solution we have sp (s) 2λP (s) + µp (s) sp (s) (λ + µ)p (s) + 2λP (s) P (s) s + 2λ µ P (s) 58

[(s + λ + µ)( s + 2λ µ ) + 2λ]P (s) Thus P µ (s) (s + λ + µ)(s + 2λ) + 2λµ µ s 2 + (3λ + µ)s + 2λ 2 + 2λµ + 2λµ µ s 2 + (3λ + µ)s + 2λ(λ + 2µ) P (s) s + 2λ µ P s + 2λ (s) s 2 + (3λ + µ)s + 2λ(λ + 2µ). Hence the mean is E(Y ) P () + P () µ 2λ(λ + 2µ) + 2λ 2λ(λ + 2µ) 2λ + µ 2λ(λ + 2µ). In particular, in the case of non-maintained system, that in when µ it reduces to E(Y ), which shows that our calculation is correct. λ Let E(Y i ) denote the mean time to the first system failure with initial state i. On the basic of the previous calculations we have It is easy to see that E(Y ) > E(Y ), since E(Y ) 3λ + µ 2λ 2, E(Y ) 2λ + µ 2λ(λ + 2µ). E(Y ) E(Y ) 3λ+µ 2λ 2 2λ+µ 2λ(λ+2µ) (3λ + µ)(λ + 2µ) λ(2λ + µ) 3λ2 + 7λµ + 2µ 2 2λ 2 + λµ which was expected. + λ2 + 6λµ + 2µ 2 2λ 2 + λµ >, Example 37 Find the steady-state probability that k components are operating in a system containing of n independent components. µ The key to this problem is the binomial distribution with parameters (n, ) since in λ+µ µ steady-state the probability that a given component is operating is. Since we have n λ+µ components the probability in question is ( )( ) k ( ) n k n µ λ P k. k λ + µ λ + µ 59

Mean operation time of a parallel system Let A denote the steady-state availability coefficient of a system, that is the steadystate probability that the system is operating. As we have seen a parallel system is operating if there exists operating component. In other words it is failed if all the components are failed. In the case of a system containing of n independent components having exponentially distributed operating time, repair times with parameters λ, µ, respectively, this probability is given by ( λ λ+µ )n. Thus A ( ) n λ. λ + µ Let E(S) denote the mean sojourn time of the system in failed state and let E(O) denote the mean operating time of the system. Then ( λ ) n E(O) A E(O) + E(S) µ. + λ µ Therefore A(E(O) + E(S)) E(O), E(O) A A E(S). In the case of a parallel system it has the form of E(O) ( λ µ + λ µ ( λ µ + λ µ ) n ) n The term is the mean time while all the components are failed. Since this time is the nµ minimum of the repair times it is exponentially distributed with parameter nµ because the repair times are exponentially distributed with parameter µ. Example 38 Let us consider a system with two components and two repairmen. Let i denote the number of failed components. Assuming independent exponentially distributed operating and repair times derive the corresponding set of differential equations. Similarly as we have done earlier it is easy to see that the transition rates can be written as it is illustrated and hence the differential equations can easily be derived in the usual way, that is we have nµ. P (t) 2λP (t) + µp (t) P (t) (λ + µ)p (t) + 2λP (t) + 2µP 2 (t) P 2(t) 2µP 2 (t) + λp (t) P (t) + P (t) + P 2 (t) P (). 6
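The availability A = 1 − (λ/(λ + µ))^n and the mean operating period E(O) = (A/(1 − A)) · (1/(nµ)) are easy to evaluate numerically; the sketch below (arbitrary values n = 4, λ = 1, µ = 3) also confirms that A equals one minus the binomial probability from Example 37 that zero components operate:

```python
from math import comb

lam, mu, n = 1.0, 3.0, 4
p_up = mu / (lam + mu)                       # one component operates w.p. p_up
pk = [comb(n, k) * p_up ** k * (1 - p_up) ** (n - k) for k in range(n + 1)]
availability = 1 - (lam / (lam + mu)) ** n   # at least one component operating
mean_operating = availability / (1 - availability) / (n * mu)  # E(S) = 1/(n*mu)
total = sum(pk)
print(total, availability, mean_operating)
```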

Figure 4.2: 2 components, 2 repairmen Example 39 Let as consider the previous Example with a single repairman. Find the steady-state distribution, the mean operating time of the system, and the mean busy period of the repairman. Of course we have to modify the repair rates, as it is illustrated, but after that the steady-state balance equations can easily be derived in the usual way as follows Figure 4.3: 2 components, repairman It is easy to verify that the solution is 2λP µp (λ + µ)p 2λP + µp 2 µp 2 λp P + P + P 2 P 2λ µ P, P 2 λ µ P 2λ2 µ 2 P, P + 2λ µ + 2λ2 µ 2. For the mean operating time we have the general formula, namely E(O) A A E(S). 6
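One can confirm that the steady-state solution just obtained for Example 39 indeed satisfies all three balance equations together with the normalizing condition; a minimal check with arbitrary rates λ = 1, µ = 3:

```python
l, m = 1.0, 3.0
# steady-state solution of the 2-component, 1-repairman system
p0 = 1 / (1 + 2 * l / m + 2 * (l / m) ** 2)
p1 = 2 * l / m * p0
p2 = 2 * (l / m) ** 2 * p0
print(p0, p1, p2)
```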

Since we have only a single repairman and the repair time is exponentially distributed thus E(S) µ. The availability coefficient A P 2. To answer the third question let us introduce the following notations. Let E(i) be the mean idle time and E(δ) be the mean busy period of the server. Since thus P E(i) E(i) + E(δ), E(δ) P E(i). P This time E(i), and in the case of n components it is E(i), since the idle time 2λ nλ is the minimum of the operating times of the components. Example 4 Compare the mean operation time E(O ) and E(O 2 ) of the systems with and 2 repairmen. In the case of two repairmen As we have calculated earlier E(O 2 ) ( λ λ+µ )2 ( λ λ+µ )2 2µ µ 2λ + 2 λ. As we have shown in Example 35 the mean time to the first system failure starting with two operating components is It is easy to see that T 3λ + µ 2λ 2. T > E(O 2 ). In the case of a single repairman Thus E(O ) 2λ2 P µ 2 2λ 2 P µ 2 µ, wherep E(O ) 2λ2 µ 2 µ 2 µ 2 + 2λµ + 2λ 2 2λ 2 µ 2 µ 2 µ 2 + 2λµ + 2λ 2 µ2 + 2λµ + 2λ 2 2λ 2 2λ 2 µ µ 2λ 2 + λ. 62 + 2λ + 2λ2 µ µ 2 µ µ2 + 2λµ µ 2 µ 2 + 2λµ + 2λ 2. µ 2 (µ 2 + 2λµ + 2λ 2 ) 2λ 2 µ 2 µ 2 (µ 2 + 2λµ + 2λ 2 ) 2λ 2 µ 2 µ 2 (µ 2 + 2λµ + 2λ 2 ) 2λ 2 µ 2λ + µ 2λ 2 µ
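Both mean operating times follow from the general formula E(O) = (A/(1 − A)) · E(S); the sketch below evaluates them for a few arbitrary parameter pairs and checks that both coincide with the closed form (2λ + µ)/(2λ²):

```python
def mean_op_two_repairmen(l, m):
    # E(O2): availability A = 1 - (l/(l+m))^2, mean down period E(S) = 1/(2m)
    A = 1 - (l / (l + m)) ** 2
    return A / (1 - A) / (2 * m)

def mean_op_one_repairman(l, m):
    # E(O1): steady state of Example 39 with a single repairman, E(S) = 1/m
    p0 = 1 / (1 + 2 * l / m + 2 * (l / m) ** 2)
    p2 = 2 * (l / m) ** 2 * p0
    return (1 - p2) / p2 / m

pairs = [(1.0, 2.0), (0.3, 5.0), (2.0, 0.7)]
results = [(mean_op_one_repairman(l, m), mean_op_two_repairmen(l, m),
            (2 * l + m) / (2 * l ** 2)) for l, m in pairs]
print(results)
```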

Surprisingly there is no difference in the two cases. Of course the steady-state distributions are different, but nevertheless the mean value is the same. So far we have investigated systems with homogeneous components. In the next section we are dealing with systems with heterogeneous elements resulting more complicated set of differential equations and formulas. Example 4 Let us consider a system with heterogeneous components and with 2 repairman. The ith component has exponentially distributed operating times and repair times with parameter λ i and µ i, respectively, i,2. Assuming that the involved random variables are independent of each other find the transient distribution of the system starting with 2 operating components. Furthermore, in steady-state compute the mean operating time of this parallel system. What is the mean operating time without repair? To describe the behavior of the system we need more sophisticated notations since we have to keep in mind the heterogeneity of the components. Thus let us denote by the state when both components are operating, by when component with index is failed, by 2 when component with index 2 is failed, and finally by, 2 when both components are failed. The transition rates are illustrated in the following Figure, showing a more complicated situation. By the help of these rates the corresponding differential equations can be written in the traditional way. Figure 4.4: Heterogeneous case with 2 repairmen 63

Transient distribution P (t) (λ + λ 2 )P (t) + µ P (t) + µ 2 P 2 (t) P (t) (λ 2 + µ )P (t) + λ P (t) + µ 2 P,2 (t) P 2(t) (λ + µ 2 )P 2 (t) + λ 2 P (t) + µ P,2 (t) P,2(t) (µ + µ 2 )P,2 (t) + λ 2 P (t) + λ P 2 (t) P (). After elementary calculations we can verify that the solution is ( µ P (t) + λ ) ( e (λ +µ )t µ2 + λ ) 2 e (λ 2+µ 2 )t λ + µ λ + µ λ 2 + µ 2 λ 2 + µ ( 2 λ P (t) λ ) ( e (λ +µ )t µ2 + λ ) 2 e (λ 2+µ 2 )t λ + µ λ + µ λ 2 + µ 2 λ 2 + µ ( 2 µ P 2 (t) + λ ) ( e (λ +µ )t λ2 λ ) 2 e (λ 2+µ 2 )t λ + µ λ + µ λ 2 + µ 2 λ 2 + µ ( 2 λ P,2 (t) λ ) ( e (λ +µ )t λ2 λ ) 2 e (λ 2+µ 2 )t λ + µ λ + µ λ 2 + µ 2 λ 2 + µ 2 Further performance measures are R(t) P,2 (t), A P,2, E(O) P,2 P,2 µ + µ 2 Steady-state distribution Let Q i P ( i components are failed ) As we have seen earlier Thus it is easy to see that P (ith component is operating) Q µ λ + µ µ 2 λ 2 + µ 2 λ i λ i +. µ i Q λ λ + µ µ 2 λ 2 + µ 2 + µ λ + µ λ 2 λ 2 + µ 2 Q 2 λ λ 2. λ + µ λ 2 + µ 2 Therefore the mean operation time of the system is E(O) Q 2 Q 2 E(S) λ λ 2 λ + µ λ 2 + µ 2 λ λ 2 λ + µ λ 2 + µ 2 (λ + µ )(λ 2 + µ 2 ) λ λ 2 λ λ 2 µ + µ 2 λ λ 2 + λ µ 2 + µ λ 2 + µ µ 2 λ λ 2 λ λ 2 λ µ 2 + µ λ 2 + µ µ 2 λ λ 2 64 µ + µ 2. µ + µ 2 µ + µ 2
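Because there are two repairmen, the two components evolve independently, so the transient solution factorizes into per-component availabilities. The sketch below (arbitrary rates) rebuilds P_0, P_1, P_2, P_{1,2} from this product form and checks the normalization, the initial condition, and the first differential equation by a central difference:

```python
from math import exp

def state_probs(t, l1, m1, l2, m2):
    # a_i(t) = P(component i is operating at time t), starting operational
    a1 = m1 / (l1 + m1) + l1 / (l1 + m1) * exp(-(l1 + m1) * t)
    a2 = m2 / (l2 + m2) + l2 / (l2 + m2) * exp(-(l2 + m2) * t)
    return {'0': a1 * a2, '1': (1 - a1) * a2,
            '2': a1 * (1 - a2), '12': (1 - a1) * (1 - a2)}

l1, m1, l2, m2 = 1.0, 2.0, 0.5, 3.0
p = state_probs(0.7, l1, m1, l2, m2)
total = sum(p.values())
p0_at_zero = state_probs(0.0, l1, m1, l2, m2)['0']

# central-difference check of P'_0(t) = -(l1+l2) P_0 + m1 P_1 + m2 P_2
h = 1e-5
lhs = (state_probs(0.7 + h, l1, m1, l2, m2)['0']
       - state_probs(0.7 - h, l1, m1, l2, m2)['0']) / (2 * h)
rhs = -(l1 + l2) * p['0'] + m1 * p['1'] + m2 * p['2']
print(total, p0_at_zero, lhs - rhs)
```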

Example 42 Let as consider the previous system with a single repairman. Find the steady-state performance measures under different service disciplines. Assuming that at the beginning both components are operating find the mean time to the first system failure in the case of a parallel system. Fist-In First-Out (FIFO) discipline As usual, first we have to introduce the states of the system keeping in mind the order of arrivals of the failed components. Thus - there is no failed component - component with index is failed 2 - component with index 2 is failed, 2 - both components are failed, but component with index arrived first 2, - both components are failed, but component with index 2 arrived first The transition rates are illustrated in the following Figure. The set of steady-state balance equations can be written as usual. Figure 4.5: FIFO discipline 65

The equations and the normalizing condition are (λ + λ 2 )P µ P + µ 2 P 2 (µ + λ 2 )P λ P + µ 2 P 2, (µ 2 + λ )P 2 λ 2 P + µ P,2 µ P,2 λ 2 P µ 2 P 2, λ P 2 P + P + P 2 + P,2 + P 2, After these by elementary but lengthly calculations one can verify that the solution is P + λ (λ + λ 2 + µ 2 ) λ 2 µ 2 + µ (λ + µ 2 ) + λ 2(λ 2 + λ + µ ) λ µ + µ 2 (λ 2 + µ ) + λ 2 λ (λ + λ 2 + µ 2 ) µ λ 2 µ 2 + µ (λ + µ 2 ) + λ λ 2 (λ 2 + λ + µ ) µ 2 λ µ + µ 2 (λ 2 + µ ) P λ (λ + λ 2 + µ 2 ) λ 2 µ 2 + µ (λ + µ 2 ) P P 2 λ 2(λ 2 + λ + µ ) λ µ + µ 2 (λ 2 + µ ) P P,2 λ 2 µ P P 2, λ µ 2 P 2 Hence the distribution of the number of failed components can be computed as Q P, Q P + P 2, Q 2 P,2 + P 2,. It is easy to see that the main performance measures are E(δ) ( P ) P λ + λ 2 E(O) Q 2 Q 2 ( µ P,2 Q 2 + µ 2 P2, Q 2 ). Processor Sharing (PS) discipline Under this discipline the order of arrivals is not significant and that is why if both components are failed it is denoted by, 2. However, in this state the repair intensity is halved. The transition rates are illustrated in the following Figure 66
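Rather than verifying the lengthy FIFO closed forms by hand, the five balance equations can be reduced, after eliminating P_{1,2} and P_{2,1}, to a 2×2 system in P_1/P_0 and P_2/P_0 and solved directly. This sketch (arbitrary rates) does so and checks the remaining balance equation and the normalizing condition:

```python
def fifo_steady_state(l1, m1, l2, m2):
    """Solve the FIFO balance equations of Figure 4.5.
    Returns (P0, P1, P2, P12, P21); P12 means 'both failed,
    component 1 arrived first'."""
    # eliminate P12 = l2*P1/m1 and P21 = l1*P2/m2, then apply Cramer's rule
    det = (m1 + l2) * (m2 + l1) - l1 * l2
    r1 = (l1 * (m2 + l1) + l1 * l2) / det    # P1 / P0
    r2 = (l2 * (m1 + l2) + l1 * l2) / det    # P2 / P0
    r12, r21 = l2 * r1 / m1, l1 * r2 / m2
    z = 1 + r1 + r2 + r12 + r21
    return tuple(r / z for r in (1, r1, r2, r12, r21))

l1, m1, l2, m2 = 1.0, 2.0, 3.0, 4.0
p0, p1, p2, p12, p21 = fifo_steady_state(l1, m1, l2, m2)
balance_gap = (l1 + l2) * p0 - (m1 * p1 + m2 * p2)   # the unused equation
total = p0 + p1 + p2 + p12 + p21
print(total, balance_gap)
```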

Figure 4.6: Processor Sharing discipline The steady-state balance equations and the normalizing condition can be written as The solution is much simpler, namely (λ + λ 2 )P µ P + µ 2 P 2 (λ 2 + µ )P λ P + µ 2 2 P,2 (λ + µ 2 )P 2 λ 2 P + µ 2 P,2 ( µ 2 + µ ) 2 P,2 λ 2 P + λ 2 P 2 2 P + P + P 2 + P,2 Q P, Q P + P 2, Q 2 P,2 P λ µ P, P 2 λ 2 µ 2 P, P,2 2 λ λ 2 µ µ 2 P P + λ µ + λ 2 µ 2 + 2 λ λ 2 µ µ 2 The main performance measures are E(δ) ( P ), E(O) Q 2 2. P λ + λ 2 Q 2 µ + µ 2 Preemptive Priority discipline Under this discipline component with index has preemptive priority over component with index 2. This means if component fails when component 2 is under repair the service process stops and the service of component starts immediately. In other words, service of component 2 can be carried out only when component is operating. It is 67

Figure 4.7: Preemptive Priority discipline easy to see that the states remain the same but the transition rates are different as it is illustrated in the following Figure. For the steady-state balance equations we have (λ + λ 2 )P µ P + µ 2 P 2 (λ 2 + µ )P λ P (λ + µ 2 )P 2 λ 2 P + µ P,2 µ P,2 λ 2 P + λ P 2 P + P + P 2 + P,2. The distribution of the number of failed components can be obtained as Q P, Q P + P 2, Q 2 P,2. It is not too difficult to verify that the solution is of the form P + λ µ + λ 2 + λ 2 µ 2 λ + λ 2 + µ λ 2 + µ + 2 λ λ 2 µ µ 2 λ + λ 2 + µ + µ 2 λ 2 + µ P λ µ + λ 2 P P 2 λ 2 µ 2 λ + λ 2 + µ λ 2 + µ P P,2 2 λ λ 2 µ µ 2 λ + λ 2 + µ + µ 2 λ 2 + µ P. For the performance measures we get E(δ) P P λ + λ 2, E(O) Q 2 Q 2 µ. 68

Knowing the distribution in all 3 disciplines it is quite simple to get the distribution for the homogeneous case. Namely, we obtain Q 2λ µ P, as we have seen in the earlier Examples. Q 2 λ µ P 2λ2 µ 2 P, Q P + 2λ µ + 2λ2 µ 2, Mean time to the first system failure To get the distribution of the time to the first system failure one can easily see that the service discipline has no effect on it if we have only two components. Omitting the tiresome Laplace-transform method the mean time can be obtained relative simple by probabilistic reasoning. To do so we need the following notation. Let E(T i ) denote the mean time to the first system failure starting from state i, i,, 2. By the theorem of total expectation and the properties of the exponential distribution the following equations can be written E(T ) λ 2 λ 2 µ + λ 2 + µ µ + λ 2 E(T ), E(T 2 ) λ λ µ 2 + λ + µ 2 µ 2 + λ E(T ) E(T ) λ + λ 2 + λ λ + λ 2 E(T ) + λ 2 λ + λ 2 E(T 2 ). After elementary calculations we obtain E(T ) µ + λ 2 + µ E(T ) µ + λ 2 E(T ), E(T 2 ) ( + λ λ 2 + µ + λ 2 λ + µ 2 λ + λ 2 ( / λ µ λ 2 µ 2 λ + λ 2 µ + λ λ + λ 2 µ 2 + λ 2 + µ 2 E(T ) µ 2 + λ µ 2 + λ ) / ). In particular, if µ µ 2, that is when there is no repair this formula reduces to the result of Example 6, that is ( E(T ) + λ + λ ) 2. λ + λ 2 λ 2 λ 69
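The three equations obtained from the theorem of total expectation are linear in E(T_0), so they can be solved mechanically. The sketch below does so and checks the two limiting cases mentioned in the text: the homogeneous single-repairman system of Example 35 and the no-repair case:

```python
def mean_time_to_failure(l1, m1, l2, m2):
    """E(T0) from the total-expectation equations for the
    two-component system with a single repairman."""
    a = 1.0 / (l1 + l2)
    c1, c2 = l1 * a, l2 * a                 # jump probabilities out of state 0
    # substitute E(T1), E(T2) into the equation for E(T0) and solve
    coef = 1 - c1 * m1 / (m1 + l2) - c2 * m2 / (m2 + l1)
    rhs = a + c1 / (m1 + l2) + c2 / (m2 + l1)
    return rhs / coef

homog = mean_time_to_failure(1.0, 2.0, 1.0, 2.0)      # lambda = 1, mu = 2
no_repair = mean_time_to_failure(1.0, 0.0, 3.0, 0.0)  # lambda_1 = 1, lambda_2 = 3
print(homog, no_repair)
```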

By applying the theorem of total moments the second moment of these times could be calculated, and thus the variance of T obtained. In particular, if µ1 = µ2 = 0, that is, when there is no repair, this formula again reduces to the result of Example 6.

Chapter 5

Continuous-Time Markov Chains

The evolution of time-dependent systems can be described by different methods, the most commonly known of which is the method of differential equations. If, in addition, the systems exhibit random behavior the situation becomes even more complicated. It is not the aim of this chapter to deal with the theory of stochastic processes since it has a wide literature and its mathematical level exceeds the level of these notes. Omitting the precise mathematical treatment we concentrate only on the simplest processes, which will be used later on in the performance modeling of queueing systems. There is a variety of sources on this topic in either digital or printed form, but for our purposes the following books fit best: Allen [], Kleinrock [5], Ovcharov [7], Sztrik [2], Tijms [3], Trivedi [4].

This section is devoted to one of the most commonly used stochastic processes, namely the case when at each time t we have a random variable X(t) taking values 0, 1, ... representing the states of the process. To know the evolution of the process we need the connection between the random variables at different times; in other words, we have to formulate mathematically how the future depends on the past. The simplest relationship is when the future depends on the past only through the present, which can be expressed as follows

Definition 6 (Markov property) If for any n, times t_1 < ... < t_{n+1} and states i_1, ..., i_{n+1}

P(X(t_{n+1}) = i_{n+1} | X(t_1) = i_1, ..., X(t_n) = i_n) = P(X(t_{n+1}) = i_{n+1} | X(t_n) = i_n),

then the process X(t) is called a Markov chain.

Let P_ij(t, t + h) = P(X(t + h) = j | X(t) = i) denote the transition probability of the chain, which in the time-homogeneous case is denoted by P_ij(h). This means that the probability that the process changes its state from i to j during time h does not depend on the time t at which it is observed. It is easy to see that

Σ_j P_ij(t, t + h) = 1.

To see how the distribution of the states changes in time we have to know how the transition probabilities change. Thus we define

Definition 7 (Intensity matrix, rate matrix) The intensity matrix Q with elements q_ij is defined by

q_ij = lim_{h→0} P_ij(h)/h, if i ≠ j,
q_ii = lim_{h→0} (P_ii(h) − 1)/h.

Since P_ij(0) = 1 if i = j and P_ij(0) = 0 if i ≠ j, this means that q_ij is the derivative of P_ij(h) at h = 0, that is

q_ij = lim_{h→0} (P_ij(h) − P_ij(0))/h.

Hence the transition probabilities can be expressed as

P_ij(h) = q_ij h + o(h), i ≠ j,
P_ii(h) = 1 + q_ii h + o(h).

Let us introduce the distribution of the process at time t, that is

P_j(t) = P(X(t) = j), j = 0, 1, 2, ....

Then for the balance equations we have

P_j(t + h) = P_j(t) P_jj(h) + Σ_{i≠j} P_i(t) P_ij(h), j = 0, 1, 2, ...

These can be written as

P_j(t + h) − P_j(t) = P_j(t)(P_jj(h) − 1) + Σ_{i≠j} P_i(t) P_ij(h), j = 0, 1, 2, ...

(P_j(t + h) − P_j(t))/h = P_j(t)(P_jj(h) − 1)/h + Σ_{i≠j} P_i(t) P_ij(h)/h, j = 0, 1, 2, ...

Taking the limit as h → 0 the desired system of differential equations can be obtained, namely

P'_j(t) = q_jj P_j(t) + Σ_{i≠j} q_ij P_i(t), j = 0, 1, 2, ...

Σ_j P_j(t) = 1, (normalizing condition)

P_j(0) = 1 if j = k, and 0 if j ≠ k, k = 0, 1, 2, ... (initial condition).

Steady-state, stationary distribution

One can easily notice that all the systems treated in the previous problems are special cases of continuous-time Markov chains. As we have seen it is rather difficult to get the transient solution even for quite simple systems. To obtain tractable formulas we are interested in the limiting distribution, which is called the steady-state, or stationary, distribution. Mathematically this can be written as

P_k = lim_{t→∞} P_k(t).

Then the steady-state balance equations can be obtained as

q_j P_j = Σ_{i≠j} q_ij P_i, Σ_j P_j = 1, where q_j = −q_jj.

In performance modeling of different systems we have to identify the stochastic process describing the dynamic behavior of the system. The most widely used, and consequently the most thoroughly investigated, class of stochastic processes is the Markov process. Many practical problems can be treated by the help of a continuous-time Markov chain, and that is why without proof we state the following theorem

Theorem 28 A stochastic process X(t) is a continuous-time Markov chain if and only if the sojourn time in any state j is an exponentially distributed random variable with parameter q_j, j = 0, 1, 2, ....

5.1 Birth-Death Processes

To simplify the investigations let us assume that the process moves to neighboring states only. This situation can be formulated as follows

q_{i,i+1} = λ_i, P_{i,i+1}(h) = λ_i h + o(h), i = 0, 1, ...
q_{i,i−1} = µ_i, P_{i,i−1}(h) = µ_i h + o(h), i = 1, ...
q_{ii} = −(λ_i + µ_i), P_{ii}(h) = 1 − (λ_i + µ_i)h + o(h), i = 0, 1, ...
q_{ij} = 0, P_{ij}(h) = o(h) if |i − j| > 1, i, j = 0, 1, ...
µ_0 = 0.

In this case λ_i and µ_i are called birth and death intensities, respectively. Then the balance equations are also simplified, namely

P'_j(t) = −(λ_j + µ_j)P_j(t) + λ_{j−1}P_{j−1}(t) + µ_{j+1}P_{j+1}(t), j = 0, 1, ...

In steady-state we have

0 = −(λ_j + µ_j)P_j + λ_{j−1}P_{j−1} + µ_{j+1}P_{j+1}, j = 0, 1, ...
Σ_j P_j = 1.

To obtain the solution let us notice that for any j ≥ 0 we get

λ_j P_j = µ_{j+1} P_{j+1},

since with D_j = λ_j P_j − µ_{j+1} P_{j+1} it is easy to see that

D_0 = λ_0 P_0 − µ_1 P_1 = 0,
D_j = λ_j P_j − µ_{j+1} P_{j+1} = λ_{j−1} P_{j−1} − µ_j P_j = D_{j−1}, j = 1, ....

By using this relationship it can be verified that

P_{j+1} = (λ_j / µ_{j+1}) P_j, (5.1)

thus

P_i = (λ_0 · · · λ_{i−1}) / (µ_1 · · · µ_i) P_0, i = 1, 2, ...,

P_0 = 1 / (1 + Σ_{i=1}^∞ λ_0 · · · λ_{i−1} / (µ_1 · · · µ_i)),

which is called the steady-state distribution of a birth-death process and later on plays an important role in modeling several queueing systems. In the case of an infinite state space the series in the normalizing condition should be convergent. Many times conditions assuring the convergence are referred to as stability conditions. It can be proved that under the stability condition the solution is unique.

Let us consider the following simple example

Example 43 Let λ_i = λ, i = 0, 1, ... and µ_i = µ, i = 1, 2, .... Then

P_i = (λ/µ)^i P_0, P_0 = 1 / Σ_{i=0}^∞ (λ/µ)^i,

which is convergent iff λ < µ.

Pure Birth Processes

If λ_i = λ, i = 0, 1, ... and µ_i = 0, i = 1, 2, ..., then we are speaking about a pure birth process and hence the set of differential equations reduces to

P'_0(t) = −λP_0(t),
P'_j(t) = −λP_j(t) + λP_{j−1}(t), j = 1, 2, ...,

P_k(0) = 1 if k = 0, and 0 if k ≠ 0,

which was obtained for the Poisson process.
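The product-form solution translates directly into a few lines of code. The sketch below builds the distribution of a finite birth-death chain and, for the constant-rate case of Example 43 (arbitrary values λ = 1, µ = 2, truncated at 60 states), confirms that it is geometric with ratio λ/µ and satisfies the local balance λ_j P_j = µ_{j+1} P_{j+1}:

```python
def birth_death_dist(lams, mus):
    """Steady-state distribution of a finite birth-death chain.
    lams[j] is the birth rate in state j, mus[j] the death rate in state j+1."""
    w = [1.0]
    for l, m in zip(lams, mus):
        w.append(w[-1] * l / m)   # P_{j+1} = (lam_j / mu_{j+1}) P_j, eq. (5.1)
    z = sum(w)
    return [x / z for x in w]

n, lam, mu = 60, 1.0, 2.0
p = birth_death_dist([lam] * (n - 1), [mu] * (n - 1))
rho = lam / mu
geometric_gap = max(abs(p[i] - (1 - rho) * rho ** i) for i in range(10))
balance_gap = max(abs(lam * p[i] - mu * p[i + 1]) for i in range(n - 1))
print(geometric_gap, balance_gap)
```

The truncation at 60 states is harmless here because the neglected tail mass (λ/µ)^60 is negligible.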

Part II

Exercises

Chapter 6 Basic Concepts from Probability Theory 6. Discrete Probability Distributions Exercise Show that if X B(n, p) then EX np, and V ar(x) np( p). n ( ) n n ( ) n EX k p k ( p) n k n p p k ( p) n (k ) k k k k n ( ) n np p j ( p) n j np(p + p) n np. j j } {{ } Binomimial theorem EX 2 n ( n k 2 k n ( n k(k ) k k k2 ) p k ( p) n k n ( n (k(k ) + k) k n ( n k k k ) p k ( p) n k + k ) p k ( p) n k ) p k ( p) n k } {{ } np n ( ) n 2 n(n )p 2 p k 2 ( p) n 2 (k 2) + np k 2 k2 n 2 ( ) n 2 n(n )p 2 p j ( p) n 2 j + np j j n(n )p 2 + np np((n )p + ) np(np p + ). V ar(x) EX 2 E 2 X np(np p + ) (np) 2 (np) 2 np 2 + np (np) 2 np( p). Exercise 2 Show that if X P o(λ), then EX λ, and V ar(x) λ. 77
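Both moment identities, EX = np and Var(X) = np(1 − p) for the binomial case of Exercise 1, and EX = Var(X) = λ for the Poisson case just stated in Exercise 2, can be confirmed by direct numerical summation of the probability mass functions. A sketch with arbitrary n, p and λ (the Poisson sum is truncated where the remaining mass is negligible):

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

n, p, lam = 12, 0.3, 4.0
b = [binom_pmf(k, n, p) for k in range(n + 1)]
b_mean = sum(k * q for k, q in enumerate(b))
b_var = sum(k * k * q for k, q in enumerate(b)) - b_mean ** 2

po = [poisson_pmf(k, lam) for k in range(120)]   # truncated Poisson pmf
p_mean = sum(k * q for k, q in enumerate(po))
p_var = sum(k * k * q for k, q in enumerate(po)) - p_mean ** 2
print(b_mean, b_var, p_mean, p_var)
```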

EX k k λk k! e λ e λ λ k λ k (k )! e λ λ j λ j j! e λ λe λ λ. EX 2 k k2 k 2 λk k! e λ λ 2 e λ (k(k ) + k) λk k k(k ) λk k! e λ + k2 k k λk k! e λ } {{ } λ k! e λ λ k 2 (k 2)! + λ λ2 e λ e λ + λ λ 2 + λ. V ar(x) EX 2 E 2 X λ 2 + λ λ 2 λ. Exercise 3 Show that if X Geo(p), then EX q, and V ar(x). p p 2 EX kpq k p kq k p k k k ( q) ( )q p p ( q) 2 p 2 p. ( (q k ) p k ) ( ) q q k p q We used the fact that in the case of absolute convergent series the summation and derivative are interchangeable. Thus EX 2 k k k 2 pq k p k 2 q k p (k(k ) + k)q k k p k(k )q k + p ( pq k 2( p) p 2 k kq k } {{ } p ) q k + ( q p pq q + p 2 2p + p p 2 V arx 2 p p 2 k p k(k )q k 2 q + p pq (q k ) + p k ) + ( p pq 2 p p 2. ( q) 2 ( ) 2 2 p q p p 2 p. 2 78 k ) + p pq 2 ( q) 3 + p

In the following we can show how these results can be obtained by using the property of the geometric distribution. E(X) kp( p) k ( p) (k )p( p) k 2 + k ( p)e(x) +. So E(X) p. Similarly E(X 2 ) k 2 p( p) k k ( p) k p( p) k k ((k ) 2 + 2k )p( p) k k (k ) 2 p( p) k 2 + 2E(X) k ( p)e(x 2 ) + 2 p. Thus Hence E(X 2 ) ( p)e(x 2 ) + 2 p p E(X 2 ) 2 p p 2. V ar(x) 2 p p 2 p 2 p p 2. Exercise 4 Find the mean and variance of a modified geometric distribution with success parameter p. As we know the modified geometric distribution is P (X k) pq k, k,... and X X, where X Geo(p). Hence E(X ) E(X ) EX p p p V ar(x ) V ar(x). q p, Exercise 5 Show that that the geometric distribution yields P (X k + l X > k) P (X l), that is the so-called memoryless property holds. 79

P (X k + l X > k) P (X k + l) P (X > k) p( p) k+l ( jk+ p( p)j p) k+l jk+ ( p)j ( p)k+l ( p) k +p ( p)k+l ( p) k p p( p)k+l ( p) k p( p) l P (X l). Exercise 6 Let X B(n, ρ), Y B(m, ρ) and independent random variables. Find that P (X i X + Y k). P (X i X + Y k) P (X i, X + Y k) P (X + Y k) P (X i, Y k i). P (X + Y k) Since X and Y are independent then the convolution of X, Y is also binomial so we have P (X i X + Y k) ( n k) p i ( p) n i( ) m k i p k i ( p) m k+i ) pk ( p) n+m k ( n+m k that is we obtain the hypergeometric distribution. ( n m ) i)( k i ) ( n+m k Exercise 7 Let X P o(λ) and Y P o(β) independent random variables. Find that )( λ λ + β } {{ } p P (X k X + Y n). P (X k, Y n k) P (X k)p (Y n k) P (X k X + Y n) P (X + Y n) P (X + Y n) λ k βn k e λ k! (n k)! e β λ k β n k ( n ) k! (n k)! k λ k β n k ( ) n λ k (λ+β) n e n! (λ+y ) (λ+β) n (λ + Y ) β n k n k (λ + β) k (λ + β) n k n! ( ) k ( ) n k ( ) n β n p k ( p) n k B(n, p). k k λ + β } {{ } p 8

Exercise 8 Let us consider a supermarket at which customers arrive according to a Poisson distribution with parameter λ and choose the ith cashier with probability p i (i,..., n, i p i ). Find the distribution of the number of customers at cashier i. Let us perform a random experiment with N independent and identical trials. Let describe X i, i,..., n the number of ith outcome. As the joint distribution of (X,..., X n ) is a multinomial distribution with parameters N and p,..., p n we have P (X k,..., X n k n X +... + X n N) Since X +... + X n X P o(λ) N! k!... k n! pk... p kn n. P (X k,..., X n k n ) P (X k,..., X n k n X +... + X n N)P (X N) N! λ N k!... k n! pk... p kn n N! e λ pk... p kn n k!... k n! λk +...+k n e λ( (p λ) k k! e λpn... (p nλ) kn e λpn. k n! n i p i) It follows that X i P o(λp i ), i,..., n, and are independent random variables. 6.2 Continuous Probability Distributions Exercise 9 Let Γ(α) t α e t dt, α > so-called complete Gamma function ( Γ(α) function ). Show that Γ(α) (α )Γ(α ), α >. Using integration by parts we have Γ(α) t α e t dt [ t α e t] + (α ) t α 2 e t dt (α )Γ(α ) since the value of the first part is zero. It can easily be proved by the help of the L Hospital rule. It is easy to see that Γ(), so Γ(n) (n )! that is Γ(α) can be considered as the generalization of the factorial function. 8

( ) Exercise Show that Γ π. 2 Γ ( ) 2 Introducing the substitution t x2 2 Γ ( ) 2 t 2 e t dt we have dt dx e t dt. t x, thus 2 x 2 x e 2 xdx π e x2 2 dx π. 2π Exercise Find the mean, variance and the kth moment of the gamma distribution with parameters (α, λ). where E(X) Introducing the substitution u λx since Γ(α + ) αγ(α). Similarly E(X 2 ) E(X) Γ(α) u α e u Γ(α) x λ(λx)α e λx dx, Γ(α) t α e t dt. Γ(α + ) du λ λγ(α) x 2 λ(λx)α e λx dx Γ(α) λ λ 2 Γ(α) u α+ e u du 82 α λ, (λx) α+ e λx dx Γ(α) Γ(α + 2) λ 2 Γ(α) (α + )α λ 2.

Thus V ar(x) E(X 2 ) (E(X)) 2 That is the squared coefficient of variation is CX 2 than. Finally E(X k ) x k λ(λx)α e λx dx Γ(α) λ k λ k Γ(α) u k+α e u du (α + )α ( α ) 2 α λ 2 λ λ. 2 Γ(k + α) λ k Γ(α) /α, that can be less and greater (λx) k+α e λx dx Γ(α) α(α + )... (α + k ) λ k. In particular, in the case of α n we have the Erlang distribution with parameters (n, λ) and we obtain E(X k n(n + )... (n + k ) ). λ k In case of n it reduces to the exponential distribution, that is E(X k ) k! λ k. Exercise 2 Find the mean and variance of the Pareto distribution with parameters (k, α). Thus E(X) E(X 2 ) V ar(x) k xαk α x α dx [ αk α x α+ α k ] k k αk α x α dx kα, α > α, α k 2 α, α > 2 αk α x α+ α 2 dx, α 2 ( ) 2 k2 α kα α 2, α > 2 α 83

Exercise 3 Let X Exp(λ), and Y c e αx, where α, c >. Find the distribution function of Y. P (Y < x) P (ce αx < x) P ( ( x )) ( αx < ln P X < ( x ) ) c α ln c e λ α ln( x c ) e ln( c x) λ α ( c x) λ α, that is we obtain the Pareto distribution with parameters ( c, λ α). 84

Chapter 7

Fundamentals of Stochastic Modeling

7.1 Exponential Distribution and Related Distributions

Exercise 14 Show that the exponential distribution obeys

P(X > x + y | X > x) = P(X > y),

which is referred to as the memoryless, or Markov property.

P(X > x + y | X > x) = P(X > x + y) / P(X > x) = (1 − P(X < x + y)) / (1 − P(X < x)) = (1 − (1 − e^{−λ(x+y)})) / (1 − (1 − e^{−λx})) = e^{−λy} = P(X > y).

Exercise 15 Find the nth moment of an exponentially distributed random variable with parameter λ.

E(X^n) = ∫_0^∞ x^n λe^{−λx} dx = [−x^n e^{−λx}]_0^∞ + (n/λ) ∫_0^∞ x^{n−1} λe^{−λx} dx.

Using L'Hospital's rule it is easy to prove that the value of the first term is 0 and thus

E(X^n) = (n/λ) E(X^{n−1}).

Taking into account the recursion one can easily see that

E(X^n) = n! / λ^n.

Exercise 6 Let us assume that at t two independent activities start. Their durations are denoted by X, Y and are supposed to be exponentially distributed random variables with parameters λ, µ, respectively. Let V minima(x, Y ), Z maxima(x, Y ), and W Z V. Find. The distribution and mean of V, 2. P (X < Y ), that is X completes first, 3. The distribution and mean of W, that is the distribution of the time between the first and second events, 4. The probability that at an arbitrary time t 5. P (W + V < t)), 6. P (X < t X < τ), 7. P (X < t X < Y ), 8. P (X < t Y < X). P (X < t < Y ), P (V < t < Z), P (X < t, Y < t),. Distribution of the first event P (V < t) P (V > t) P (X > t, Y > t) P (X > t)p (Y > t) e λt e µt e (λ+µ)t that is, V is exponentially distributed with parameter λ + µ, consequently E(V ) λ+µ, 2. X completes first P (X < Y ) y P (X < y)f Y (y) dy 3. distribution of the time between the first and second event y ( e λy )µe µy dy µ λ + µ P (W < t) P (W < t X < Y )P (X < Y ) + P (W < t X > Y )P (X > Y ) λ λ + µ, given (X < Y ), W represents the residual time of Y and similar argument is valid for X. Due to the memoryless property of Y, X Consequently P (W < t X < Y )P (X < Y ) + P (W < t X > Y )P (X > Y ) λ λ + µ ( e µt ) + µ λ + µ ( eλt ). E(W ) λ µ(λ + µ) + µ λ(λ + µ), 86

4. X has been completed, but Y is still running P (X < t < Y ) P (X < t)p (Y > t) ( e λt )e µt, the first event has been completed, but the second is still running P (V < t < Z) P (V < t < Z X < Y )P (X < Y ) + P (V < t < Z X > Y )P (X > Y ) P (X < t < Y X < Y )P (X < Y ) + P (Y < t < X X > Y )P (X > Y ) P (X < t < Y, X < Y ) + P (Y < t < X, X > Y ) P (X < t < Y ) + P (Y < t < X) ( e λt )e µt + ( e µt )e λt, both events have been completed P (X < t, Y < t) P (Z < t) P (X < t)p (Y < t) ( e λt )( e µt ), 5. distribution of the sum of W and V Since W, V are dependent random variables their convolution cannot be applied. However, it is easy to see that P (W + V < t) P (Z < t) ( e λt )( e µt ), 6. distribution of X given X < τ P (X < t X < τ) P (X < t, X < τ) P (X < τ) { P (X<t) e λt ha < t < τ P (X<τ) e λτ ha t > τ. 7. distribution of X given X < Y P (X < t, X < Y ) P (X < t X < Y ) P (X < Y ) t y P (X < y)f Y (y) dy P (X < Y ) e (λ+µ)t, y + P (X < t, X < y)f Y (y) dy yt P (X < Y ) P (X < t)f Y (y) dy P (X < Y ) that is, it follows an exponential distribution with parameter λ + µ, 8. distribution of X given X > Y P (X < t, X > Y ) P (X < t X > Y ) P (X < t, X > y)f y Y (y) dy P (X > Y ) P (X > Y ) t y P (y < X < t)f t Y (y) dy + (F y X(t) F X (y))f Y (y) dy P (X > Y ) P (X > Y ) e λt λ µ e µt ( e µt ). 87
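Two of the results of Exercise 16, that V = min(X, Y) is Exp(λ + µ) and that P(X < Y) = λ/(λ + µ), are easy to check by simulation. This Monte Carlo sketch uses the arbitrary values λ = 1, µ = 2, so both the probability that X finishes first and the mean of V should be close to 1/3:

```python
import random

lam, mu = 1.0, 2.0
rng = random.Random(7)
n = 200_000
first = 0          # number of runs in which X completes before Y
vsum = 0.0         # accumulates min(X, Y)
for _ in range(n):
    x, y = rng.expovariate(lam), rng.expovariate(mu)
    first += x < y
    vsum += min(x, y)
p_est, v_est = first / n, vsum / n
print(p_est, v_est)   # close to lam/(lam+mu) and 1/(lam+mu)
```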

Exercise 7 Find the probability that X i min{x,..., X n }, supposing that X k Exp(λ k ), k,..., n and are independent. By the law of total probability P (X i < min(x,..., X i, X i+,..., X n )) ( n ( e λix ) ( n n j,j i j,j i j,j i λ j ) e n j,j i λ jx dx λ j ) e λ ix e n j,j i λ jx dx P (X i < x)f min(x,...,x i,x i+,...,x n)(x) dx ( n ) λ j e n j λ jx λ j dx n j j λ j } {{ } ( n j,j i e n j λ jx λ j ) e n j,j i λ jx dx ( n j,j i n j,j i λ j n j λ j λ j ) dx λ i n j λ. j Exercise 8 Find the distribution and mean of a series system consisting of independent and exponentially distributed components. In the case of series system E(min(X,..., X n )) n j λ j n j min(ex,..., EX n ). EX j Exercise 9 Find the distribution, mean and variance of a parallel system consisting of independent and exponentially distributed components with the same failure rate, that is X i Exp(λ), i,..., n. P (max(x,..., X n ) < x) n P (X i < x) ( e λx ) n. i Apply the following useful relation. If X then EX Substitute t e λx then P (X x) dx 88 ( F (x)) dx.

E(max(X,..., X n )) ( ( e λx ) n dx ( t n ) λ t dt ( + t +... + t n ) dt [ ] t + t2 λ λ 2 +... + tn [ + n λ 2 +... + ] n + +... +. }{{} nλ (n )λ } {{ } }{{} λ. failure 2. -. failure n. - n-. failure Due to the memoryless property of the exponential distribution the time between the consecutive failures are also exponentially distributed and are independent of each other. It is easy to see that the parameter of the time between (k ) th and kth failure is (n k + )λ, k,..., n. This fact can be used to calculate the mean and variance of the time to the kth failure. Hence E(time to the kth failure ) nλ +... + (n k + )λ V ar(time to the kth failure) (nλ) +... + 2 ((n k + )λ) 2 k,..., n. In particular, the variance of the life time of a parallel system is the variance of the last failure, that is (nλ) 2 +... + λ 2. Exercise 2 Let X i, i,..., n, be independent random variables. Show that E(max(X,..., X n )) max(e(x ),..., E(X n )) E(min(X,..., X n )) min(e(x ),..., E(X n )). P (max(x,..., X n ) < x) n P (X i < x) P (X i < x) i hence P (max(x,..., X n ) x) P (X i x) 89

thus E(max(X,..., X n )) P (max(x,..., X n ) x)dx from which the statement follows. Similarly thus P (min(x,..., X n ) x) E(min(X,..., X n )) P (X i x)dx E(X i ), i,..., n, n P (X i x) P (X i x), i P (min(x,..., X n ) x)dx P (X i x)dx E(X i ), i,..., n, from which the statement follows. Exercise 2 Prove that the distribution function of the Erlang distribution with parameters (n, λ) is n (λx) j F Yn (x) e λx! j! j F Yn (x) x λ(λt) n (n )! e λt dt λn (n )! x t n e λt dt Using integration by parts, where g(x) t n, f(x) λ e λt we get λ n x t n e λt dt (n )! } {{ } I n(x) λxn (n )! e λx +... n j x λn (n )! ([ λ e λt t n ] x λ(λt) n 2 (n 2)! e λt dt } {{ } I n (x) (λx) j e λx. j! x ( λ e λt )(n )t n 2 dt) λxn (n )! e λx λxn 2 (n 2)! e λx + I n 2 (x) 9

Consequently (λx) j e λx j! jn x λ(λt) n (n )! e λt dt. Exercise 22 Let X Exp(λ) and Y Exp(µ) and independent random variables. Find their convolution. z z z f X+Y (z) λe λx µe µ(z x) dx λµ e λx µ(z x) dx λµe µz e x(λ µ) dx λµe µz [ λ µ e x(λ µ) ] z λµe µz (λ µ) (e z(λ µ) ) λµ µ λ e λz λµ µ λ e µz µ µ λ λe λz + λ λ µ µe µz. Exercise 23 Find the mean of the previous convolution by using the density function. E(X + Y ) µ µ λ (λ + µ)(µ λ)) λµ(µ λ) µ x( µ λ λe λx + λ λ µ µe µx ) dx xλe λx dx + λ λ µ λ + µ λµ λ + µ. xµe µx dx µ µ λ λ + λ λ µ µ µ2 λ 2 λµ(µ λ) Obviously E(X + Y ) E(X) + E(Y ) thus we could check the correctness of the density function. Exercise 24 Derive the density function of the Erlang distribution with parameters (2, λ) from the 2-phase hypoexponential distribution. As we have seen f X+Y (x) λ λ µ µe µx + µ µ λ λe λx. 9

Taking the limit as \mu \to \lambda we get the desired result. Since

\frac{\lambda}{\lambda-\mu}\mu e^{-\mu x}+\frac{\mu}{\mu-\lambda}\lambda e^{-\lambda x} = \frac{\lambda\mu\bigl(e^{-\mu x}-e^{-\lambda x}\bigr)}{\lambda-\mu}

and both numerator and denominator tend to 0 as \mu \to \lambda, we apply L'Hospital's rule (differentiating with respect to \mu). Thus we obtain

\lim_{\mu\to\lambda}\frac{\lambda\mu\bigl(e^{-\mu x}-e^{-\lambda x}\bigr)}{\lambda-\mu} = \lim_{\mu\to\lambda}\frac{\lambda\bigl(e^{-\mu x}-e^{-\lambda x}\bigr)-\lambda\mu x e^{-\mu x}}{-1} = \lambda^2 x e^{-\lambda x},

which is the density function we needed.

Exercise 25 Find the distribution function of the 2-phase hypoexponential distribution.

F_{X+Y}(x) = \int_0^x f_{X+Y}(y)\,dy = \int_0^x\Bigl(\frac{\lambda}{\lambda-\mu}\mu e^{-\mu y}+\frac{\mu}{\mu-\lambda}\lambda e^{-\lambda y}\Bigr)dy
= \frac{\lambda}{\lambda-\mu}\bigl(1-e^{-\mu x}\bigr)+\frac{\mu}{\mu-\lambda}\bigl(1-e^{-\lambda x}\bigr) = 1+\frac{\lambda e^{-\mu x}-\mu e^{-\lambda x}}{\mu-\lambda}.

To check its correctness let us take the limit as \mu \to \lambda. Applying L'Hospital's rule (in \mu) we have

\lim_{\mu\to\lambda}F_{X+Y}(x) = 1-e^{-\lambda x}-\lambda x e^{-\lambda x},

which is exactly the distribution function of the Erlang distribution with parameters (2, \lambda).

Exercise 26 Let X \sim Exp(\lambda) and Y \sim Exp(\mu) be independent random variables. Find the conditional density function f_{X|X+Y}(x \mid y).

f_{X|X+Y}(x\mid y) = \frac{f_X(x)\,f_Y(y-x)}{f_{X+Y}(y)} = \frac{\lambda e^{-\lambda x}\,\mu e^{-\mu(y-x)}}{\frac{\lambda\mu}{\mu-\lambda}\bigl(e^{-\lambda y}-e^{-\mu y}\bigr)} = \frac{(\lambda-\mu)\,e^{-(\lambda-\mu)x}}{1-e^{-(\lambda-\mu)y}}, \qquad 0 < x < y.

Specially, if \lambda = \mu, then using L'Hospital's rule with the substitution z = \lambda-\mu we get

\lim_{z\to 0}\frac{z\,e^{-z x}}{1-e^{-z y}} = \lim_{z\to 0}\frac{e^{-z x}-z x\,e^{-z x}}{y\,e^{-z y}} = \frac{1}{y},

that is, we have the uniform distribution on (0, y). If at the beginning we assume that \lambda = \mu, then

f_{X|X+Y}(x\mid y) = \frac{\lambda e^{-\lambda x}\,\lambda e^{-\lambda(y-x)}}{\lambda(\lambda y)e^{-\lambda y}} = \frac{1}{y},

since X+Y follows the Erlang distribution with parameters (2, \lambda).

Exercise 27 Find the squared coefficient of variation of the Erlang distribution with parameters (n, \lambda).

C^2_{Y_n} = \frac{Var(Y_n)}{(EY_n)^2} = \frac{Var(X_1+\dots+X_n)}{\bigl(E(X_1+\dots+X_n)\bigr)^2} = \frac{Var(X_1)+\dots+Var(X_n)}{(EX_1+\dots+EX_n)^2} = \frac{n/\lambda^2}{(n/\lambda)^2} = \frac{1}{n}.

Exercise 28 Verify the density function of the hyperexponential distribution. It is easy to see that it is nonnegative; furthermore,

\int_0^\infty f(x)\,dx = \sum_{i=1}^n p_i\underbrace{\int_0^\infty \lambda_i e^{-\lambda_i x}\,dx}_{=1} = \sum_{i=1}^n p_i = 1.

Exercise 29 Show that the squared coefficient of variation of the hyperexponential distribution is always at least 1. To prove it, we need

C^2 = \frac{2\sum_{i=1}^n \frac{p_i}{\lambda_i^2}-\Bigl(\sum_{i=1}^n \frac{p_i}{\lambda_i}\Bigr)^2}{\Bigl(\sum_{i=1}^n \frac{p_i}{\lambda_i}\Bigr)^2} \ge 1, \qquad\text{that is}\qquad \sum_{i=1}^n \frac{p_i}{\lambda_i^2} \ge \Bigl(\sum_{i=1}^n \frac{p_i}{\lambda_i}\Bigr)^2,

which follows from the Cauchy–Bunyakovsky–Schwarz inequality with the substitutions y_i = \sqrt{p_i}, x_i = \frac{\sqrt{p_i}}{\lambda_i}.
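Several results of this section lend themselves to quick numeric verification. The following Python sketch (the helper names are illustrative, not from the text) checks the harmonic-sum formula for the mean of the maximum of i.i.d. exponentials, the mean 1/\lambda + 1/\mu of the 2-phase hypoexponential density of Exercise 23, and the fact from Exercise 29 that the hyperexponential SCV is at least 1, using a simple trapezoidal rule:

```python
import math

def trapezoid(fun, upper, steps):
    # simple trapezoidal rule on [0, upper]
    h = upper / steps
    total = 0.5 * (fun(0.0) + fun(upper))
    for i in range(1, steps):
        total += fun(i * h)
    return total * h

def mean_max_exp(n, lam):
    # E(max) = integral of the tail 1 - (1 - e^{-lam x})^n
    return trapezoid(lambda x: 1.0 - (1.0 - math.exp(-lam * x)) ** n,
                     60.0 / lam, 200_000)

def harmonic(n, lam):
    # (1/lam)(1 + 1/2 + ... + 1/n)
    return sum(1.0 / k for k in range(1, n + 1)) / lam

def hypo_pdf(z, lam, mu):
    # 2-phase hypoexponential density, lam != mu
    return (mu / (mu - lam)) * lam * math.exp(-lam * z) \
         + (lam / (lam - mu)) * mu * math.exp(-mu * z)

def hypo_mean(lam, mu):
    return trapezoid(lambda z: z * hypo_pdf(z, lam, mu), 80.0, 200_000)

def hyper_scv(p, lam):
    # squared coefficient of variation of a hyperexponential mixture
    m1 = sum(pi / li for pi, li in zip(p, lam))           # E X
    m2 = sum(2.0 * pi / li ** 2 for pi, li in zip(p, lam))  # E X^2
    return m2 / m1 ** 2 - 1.0
```

With one mixture component `hyper_scv` reduces to the exponential case and returns exactly 1; any genuinely mixed choice of rates pushes it above 1.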

Exercise 30 Let X_i \sim W(\lambda_i, \alpha), i = 1,\dots,n, be independent random variables. Show that

\min(X_1,\dots,X_n) \sim W\Bigl(\sum_{i=1}^n \lambda_i,\ \alpha\Bigr).

It is well known that

P(\min(X_1,\dots,X_n) < x) = 1-\prod_{i=1}^n\bigl(1-P(X_i < x)\bigr) = 1-\prod_{i=1}^n e^{-\lambda_i x^\alpha} = 1-e^{-\sum_{i=1}^n \lambda_i x^\alpha},

from which our statement follows. In particular, if \alpha = 1 we obtain the relation valid for the exponential distribution.

7.2 Basics of Reliability Theory

Exercise 31 Find the hazard rate (known also as failure rate, intensity, conditional failure rate) function of the hyperexponential distribution.

h(t) = \frac{\sum_{i=1}^n p_i\lambda_i e^{-\lambda_i t}}{\sum_{i=1}^n p_i e^{-\lambda_i t}},

which is monotone decreasing and its image lies in the interval \bigl[\min(\lambda_1,\dots,\lambda_n),\ \sum_{i=1}^n p_i\lambda_i\bigr]. It can be shown in the following way. If h'(t) < 0 on [0, \infty), then h(t) is monotone decreasing on it. Since by the quotient rule the denominator of h'(t) is always positive, it is enough to investigate the sign of its numerator. For the numerator we have

-\Bigl(\sum_{i=1}^n p_i\lambda_i^2 e^{-\lambda_i t}\Bigr)\Bigl(\sum_{i=1}^n p_i e^{-\lambda_i t}\Bigr)+\Bigl(\sum_{i=1}^n p_i\lambda_i e^{-\lambda_i t}\Bigr)^2.

Apply the well-known inequality

\Bigl(\sum_{i=1}^n a_i b_i\Bigr)^2 \le \sum_{i=1}^n a_i^2\ \sum_{i=1}^n b_i^2

with the substitutions a_i = \sqrt{p_i}\,e^{-\lambda_i t/2}, b_i = \lambda_i\sqrt{p_i}\,e^{-\lambda_i t/2}; thus h'(t) \le 0. It can also be seen that h(t) takes its maximum at 0, therefore

h(0) = \sum_{i=1}^n p_i\lambda_i.

Furthermore, one can easily verify that

\lim_{t\to\infty} h(t) = \min(\lambda_1,\dots,\lambda_n).

Exercise 32 Find the hazard rate function of the 2-phase hypoexponential distribution.

h(t) = \frac{\frac{\lambda_1\lambda_2}{\lambda_2-\lambda_1}\bigl(e^{-\lambda_1 t}-e^{-\lambda_2 t}\bigr)}{\frac{1}{\lambda_2-\lambda_1}\bigl(\lambda_2 e^{-\lambda_1 t}-\lambda_1 e^{-\lambda_2 t}\bigr)} = \frac{\lambda_1\lambda_2\bigl(e^{-\lambda_1 t}-e^{-\lambda_2 t}\bigr)}{\lambda_2 e^{-\lambda_1 t}-\lambda_1 e^{-\lambda_2 t}},

which is monotone increasing and its image is the interval [0, \min(\lambda_1,\lambda_2)). Similarly to the previous exercise, the sign of h'(t) is determined by

\bigl(-\lambda_1 e^{-\lambda_1 t}+\lambda_2 e^{-\lambda_2 t}\bigr)\bigl(\lambda_2 e^{-\lambda_1 t}-\lambda_1 e^{-\lambda_2 t}\bigr)+\lambda_1\lambda_2\bigl(e^{-\lambda_1 t}-e^{-\lambda_2 t}\bigr)^2 = (\lambda_1-\lambda_2)^2\,e^{-(\lambda_1+\lambda_2)t} > 0,

thus h(t) is monotone increasing and h(0) = 0. If \lambda_1 < \lambda_2, then

\lim_{t\to\infty}h(t) = \frac{\lambda_1\lambda_2}{\lambda_2} = \lambda_1;

similarly, if \lambda_2 < \lambda_1, then the limit is \lambda_2.

Exercise 33 Find the hazard rate function of the Erlang distribution with parameters (n, \lambda) and show that it is monotone increasing.

Using the density and the distribution function of the Erlang distribution,

h(t) = \frac{\frac{\lambda(\lambda t)^{n-1}}{(n-1)!}e^{-\lambda t}}{\sum_{j=0}^{n-1}\frac{(\lambda t)^j}{j!}e^{-\lambda t}} = \frac{\frac{\lambda(\lambda t)^{n-1}}{(n-1)!}}{\sum_{j=0}^{n-1}\frac{(\lambda t)^j}{j!}}.

Its reciprocal is

\frac{1}{h(t)} = \frac{1}{\lambda}\sum_{j=0}^{n-1}\frac{(n-1)!}{j!}(\lambda t)^{\,j-(n-1)}.

Every exponent j-(n-1) is nonpositive, so each term is nonincreasing in t (strictly decreasing for j < n-1); hence 1/h(t) is monotone decreasing and h(t) is monotone increasing. Moreover, for n \ge 2, 1/h(t) \to \infty as t \to 0 and 1/h(t) \to 1/\lambda as t \to \infty, so h(t) increases from h(0) = 0 to \lim_{t\to\infty} h(t) = \lambda.

7.3 Random Sums

Exercise 34 Find the distribution of the mixture of Erlang distributions with parameters (i, \lambda), i = 1, 2, \dots, and geometric weights p(1-p)^{i-1}.

f(x) = \sum_{i=1}^\infty p(1-p)^{i-1}\,\frac{\lambda(\lambda x)^{i-1}}{(i-1)!}e^{-\lambda x} = p\lambda e^{-\lambda x}\sum_{j=0}^\infty \frac{\bigl((1-p)\lambda x\bigr)^j}{j!} = p\lambda e^{-\lambda x}\,e^{(1-p)\lambda x} = p\lambda e^{-p\lambda x} \sim Exp(p\lambda).

Exercise 35 Find the distribution of the mixture of binomial and Poisson distributions

\Bigl(p_k(i) = \binom{i}{k}p^k(1-p)^{i-k}, \qquad q_i = \frac{\lambda^i}{i!}e^{-\lambda}\Bigr).

p_k = \sum_{i=k}^\infty\binom{i}{k}p^k(1-p)^{i-k}\,\frac{\lambda^i}{i!}e^{-\lambda} = e^{-\lambda}\frac{(p\lambda)^k}{k!}\sum_{i=k}^\infty\frac{\bigl((1-p)\lambda\bigr)^{i-k}}{(i-k)!} = e^{-\lambda}\frac{(p\lambda)^k}{k!}\,e^{(1-p)\lambda} = \frac{(p\lambda)^k}{k!}e^{-p\lambda} \sim Po(p\lambda).

Exercise 36 Let X_i \sim Exp(\lambda), \nu \sim Geo(p) be independent random variables. Find the density function of the random sum Y_\nu = X_1+\dots+X_\nu. By the theorem of total probability,

f_{Y_\nu}(x) = \sum_{k=1}^\infty\underbrace{f_{Y_k}(x)}_{\text{Erlang}(k,\lambda)}P(\nu = k) = \sum_{k=1}^\infty\frac{\lambda(\lambda x)^{k-1}}{(k-1)!}e^{-\lambda x}\,p(1-p)^{k-1} = p\lambda e^{-\lambda x}\,e^{(1-p)\lambda x} = p\lambda e^{-p\lambda x} \sim Exp(p\lambda).

Exercise 37 Let X_i(A) be Bernoulli distributed with parameter p, \nu \sim Po(\lambda), all independent. Find the distribution of Y_\nu = X_1(A)+\dots+X_\nu(A).

P(Y_\nu = k) = \sum_{i=k}^\infty\binom{i}{k}p^k(1-p)^{i-k}\,\frac{\lambda^i}{i!}e^{-\lambda} = e^{-\lambda}\frac{(p\lambda)^k}{k!}\sum_{i=k}^\infty\frac{\bigl((1-p)\lambda\bigr)^{i-k}}{(i-k)!} = \frac{(p\lambda)^k}{k!}e^{-p\lambda} \sim Po(p\lambda).


Chapter 8

Analytic Tools, Transforms

8.1 Generating Function

Exercise 38 Find the generating function of the binomial distribution with parameters (n, p) and then its mean, variance and distribution.

G_X(s) = \sum_{k=0}^n s^k\binom{n}{k}p^k(1-p)^{n-k} = \sum_{k=0}^n\binom{n}{k}(sp)^k(1-p)^{n-k} = \bigl(sp+(1-p)\bigr)^n = \bigl(1+p(s-1)\bigr)^n.

EX = G'_X(1) = np\bigl(1+p(s-1)\bigr)^{n-1}\Big|_{s=1} = np.
G''_X(1) = n(n-1)p^2\bigl(1+p(s-1)\bigr)^{n-2}\Big|_{s=1} = n(n-1)p^2.
Var(X) = G''_X(1)+G'_X(1)-\bigl(G'_X(1)\bigr)^2 = n(n-1)p^2+np-(np)^2 = np(1-p).

G^{(k)}_X(s) = n(n-1)\cdots(n-k+1)\,p^k\bigl(1+p(s-1)\bigr)^{n-k}, \qquad G^{(k)}_X(0) = n(n-1)\cdots(n-k+1)\,p^k(1-p)^{n-k},

p_k = \frac{G^{(k)}_X(0)}{k!} = \underbrace{\frac{n(n-1)\cdots(n-k+1)}{k!}}_{\binom{n}{k}}\,p^k(1-p)^{n-k}.

Exercise 39 Find the generating function of the geometric distribution with parameter p, that is P(X = k) = p(1-p)^{k-1}, k = 1, 2, \dots. Furthermore, investigate for which s it is convergent, then calculate the mean and variance.

G_X(s) = \sum_{k=1}^\infty s^k p(1-p)^{k-1} = sp\sum_{k=1}^\infty\bigl((1-p)s\bigr)^{k-1} = \frac{sp}{1-(1-p)s}, \qquad\text{if } |s| < \frac{1}{1-p}.

EX = G'_X(1) = \frac{p\bigl(1-(1-p)s\bigr)+sp(1-p)}{\bigl(1-(1-p)s\bigr)^2}\Big|_{s=1} = \frac{p}{p^2} = \frac{1}{p}.

Furthermore, G''_X(s) = \frac{2p(1-p)}{\bigl(1-(1-p)s\bigr)^3}, so G''_X(1) = \frac{2(1-p)}{p^2} and

Var(X) = G''_X(1)+G'_X(1)-\bigl(G'_X(1)\bigr)^2 = \frac{2(1-p)}{p^2}+\frac{1}{p}-\frac{1}{p^2} = \frac{1-p}{p^2}.
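The closed form of the binomial generating function, and the fact that its derivative at s = 1 gives the mean np, can be checked numerically; a sketch (the helper names are illustrative, not from the text):

```python
import math

def gf_binomial_direct(s, n, p):
    # G_X(s) = sum_k C(n,k) (sp)^k (1-p)^{n-k}, term by term
    return sum(math.comb(n, k) * (s * p) ** k * (1 - p) ** (n - k)
               for k in range(n + 1))

def gf_binomial_closed(s, n, p):
    # closed form (1 + p(s-1))^n
    return (1.0 + p * (s - 1.0)) ** n

def gf_mean(g, h=1e-5):
    # E X = G'(1), approximated by a central difference
    return (g(1.0 + h) - g(1.0 - h)) / (2.0 * h)
```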

Exercise 40 Find the distribution belonging to the generating function G_X(s) = e^{-\lambda(1-s)}. Since

G^{(k)}_X(s) = \underbrace{\lambda\cdots\lambda}_{k}\,e^{-\lambda(1-s)} = \lambda^k e^{-\lambda(1-s)},

p_k = \frac{G^{(k)}_X(0)}{k!} = \frac{\lambda^k}{k!}e^{-\lambda},

that is, G_X(s) is the generating function of the Poisson distribution with parameter \lambda.

Exercise 41 Find the mean and variance of the random sum by the help of the generating function. As it has been proved, the generating function of the random sum is

G_{Y_\nu}(s) = G_\nu\bigl(G_X(s)\bigr).

Hence

EY_\nu = G'_\nu\bigl(G_X(s)\bigr)G'_X(s)\Big|_{s=1} = G'_\nu(1)\,G'_X(1) = E\nu\,EX_1,

using G_X(1) = 1. Furthermore,

G''_{Y_\nu}(1) = \Bigl(G''_\nu\bigl(G_X(s)\bigr)\bigl(G'_X(s)\bigr)^2+G'_\nu\bigl(G_X(s)\bigr)G''_X(s)\Bigr)\Big|_{s=1} = G''_\nu(1)(EX_1)^2+E\nu\,G''_X(1),

thus

E(Y_\nu^2) = G''_\nu(1)(EX_1)^2+E\nu\,G''_X(1)+E\nu\,EX_1.

Therefore

Var(Y_\nu) = G''_\nu(1)(EX_1)^2+E\nu\,G''_X(1)+E\nu\,EX_1-(E\nu\,EX_1)^2
= (E\nu^2-E\nu)(EX_1)^2+E\nu\bigl(EX_1^2-EX_1\bigr)+E\nu\,EX_1-(E\nu\,EX_1)^2
= (EX_1)^2\bigl(E\nu^2-(E\nu)^2\bigr)+E\nu\bigl(EX_1^2-(EX_1)^2\bigr)
= (EX_1)^2\,Var(\nu)+E\nu\,Var(X_1).
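The compound-variance formula can be sanity-checked by composing generating functions numerically. Below \nu \sim Po(\lambda) and X \sim Bernoulli(p), so that Y_\nu \sim Po(\lambda p) and Var(Y_\nu) = \lambda p; the derivatives at s = 1 are approximated by central differences (a sketch with parameter choices of my own):

```python
import math

def gf_compound(s, lam, p):
    # G_Y(s) = G_nu(G_X(s)) with nu ~ Poisson(lam), X ~ Bernoulli(p)
    gx = 1.0 - p + p * s              # G_X(s)
    return math.exp(lam * (gx - 1.0)) # G_nu evaluated at G_X(s)

def gf_moments(g, h=1e-4):
    # mean and variance from G'(1) and G''(1) via central differences
    g0, gp, gm = g(1.0), g(1.0 + h), g(1.0 - h)
    d1 = (gp - gm) / (2.0 * h)        # G'(1)  = E Y
    d2 = (gp - 2.0 * g0 + gm) / h**2  # G''(1) = E Y(Y-1)
    return d1, d2 + d1 - d1 ** 2      # (mean, variance)
```

For \lambda = 4, p = 0.25 the formula (EX_1)^2 Var(\nu) + E\nu\,Var(X_1) = 0.0625\cdot 4 + 4\cdot 0.1875 = 1 agrees with the numeric variance.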

8.2 Laplace-Transform

Exercise 42 Find the mean and variance of the random sum by the help of the Laplace transform. Since L_{Y_\nu}(s) = G_\nu\bigl(L_X(s)\bigr) and L_X(0) = 1,

EY_\nu = -L'_{Y_\nu}(0) = -G'_\nu\bigl(L_X(0)\bigr)L'_X(0) = E\nu\,EX_1.

Furthermore,

L''_{Y_\nu}(s) = G''_\nu\bigl(L_X(s)\bigr)\bigl(L'_X(s)\bigr)^2+G'_\nu\bigl(L_X(s)\bigr)L''_X(s),
E(Y_\nu^2) = L''_{Y_\nu}(0) = G''_\nu(1)(EX_1)^2+E\nu\,EX_1^2 = (E\nu^2-E\nu)(EX_1)^2+E\nu\,EX_1^2 = E\nu^2(EX_1)^2+E\nu\underbrace{\bigl(EX_1^2-(EX_1)^2\bigr)}_{Var(X_1)}.

Therefore

Var(Y_\nu) = E\nu^2(EX_1)^2+E\nu\,Var(X_1)-(E\nu\,EX_1)^2 = \underbrace{\bigl(E\nu^2-(E\nu)^2\bigr)}_{Var(\nu)}(EX_1)^2+E\nu\,Var(X_1) = Var(\nu)(EX_1)^2+E\nu\,Var(X_1).

Exercise 43 Find the Laplace transform of the Erlang distribution with parameters (n, \lambda), and then the mean and variance.

L_X(s) = \int_0^\infty e^{-sx}\frac{\lambda(\lambda x)^{n-1}}{(n-1)!}e^{-\lambda x}\,dx = \Bigl(\frac{\lambda}{\lambda+s}\Bigr)^n\underbrace{\int_0^\infty\frac{(\lambda+s)\bigl((\lambda+s)x\bigr)^{n-1}}{(n-1)!}e^{-(\lambda+s)x}\,dx}_{=1} = \Bigl(\frac{\lambda}{\lambda+s}\Bigr)^n.

EX = -L'_X(0) = -\lambda^n\bigl(-n(\lambda+s)^{-n-1}\bigr)\Big|_{s=0} = \frac{n}{\lambda},
E(X^2) = L''_X(0) = \lambda^n\,n(n+1)(\lambda+s)^{-n-2}\Big|_{s=0} = \frac{n^2+n}{\lambda^2}.

Therefore

Var(X) = \frac{n^2+n}{\lambda^2}-\Bigl(\frac{n}{\lambda}\Bigr)^2 = \frac{n}{\lambda^2}.
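The moment formulas for the Erlang transform can be checked by differentiating L_X(s) = (\lambda/(\lambda+s))^n numerically at s = 0; a sketch (the function names are mine):

```python
def erlang_lt(s, n, lam):
    # Laplace transform (lam/(lam+s))^n of the Erlang(n, lam) distribution
    return (lam / (lam + s)) ** n

def lt_mean_var(n, lam, h=1e-4):
    # E X = -L'(0) and E X^2 = L''(0), via central differences
    l0 = erlang_lt(0.0, n, lam)
    lp = erlang_lt(h, n, lam)
    lm = erlang_lt(-h, n, lam)
    m1 = -(lp - lm) / (2.0 * h)
    m2 = (lp - 2.0 * l0 + lm) / h ** 2
    return m1, m2 - m1 ** 2   # (mean, variance)
```

For n = 4, \lambda = 2 this reproduces EX = n/\lambda = 2 and Var(X) = n/\lambda^2 = 1 up to discretization error.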

Exercise 44 Find the mean of the hyperexponential distribution by the help of the Laplace transform.

EX = -L'_X(0) = -\Bigl(\sum_{i=1}^n p_i\frac{\lambda_i}{\lambda_i+s}\Bigr)'\Big|_{s=0} = \sum_{i=1}^n p_i\frac{\lambda_i}{(\lambda_i+s)^2}\Big|_{s=0} = \sum_{i=1}^n\frac{p_i}{\lambda_i}.

Exercise 45 Find the Laplace transform of the hypoexponential distribution. By applying the convolution property of the Laplace transform we have

L_{Y_n}(s) = \prod_{i=1}^n\frac{\lambda_i}{\lambda_i+s}.

Exercise 46 Find the Laplace transform of the gamma distribution.

L_X(s) = \int_0^\infty e^{-st}\frac{\lambda(\lambda t)^{\alpha-1}}{\Gamma(\alpha)}e^{-\lambda t}\,dt = \Bigl(\frac{\lambda}{\lambda+s}\Bigr)^\alpha\frac{1}{\Gamma(\alpha)}\int_0^\infty(\lambda+s)^\alpha\,t^{\alpha-1}e^{-(\lambda+s)t}\,dt = \Bigl(\frac{\lambda}{\lambda+s}\Bigr)^\alpha\underbrace{\int_0^\infty\frac{z^{\alpha-1}e^{-z}}{\Gamma(\alpha)}\,dz}_{=1} = \Bigl(\frac{\lambda}{\lambda+s}\Bigr)^\alpha,

where z = (\lambda+s)t.

Exercise 47 Show that if X_i \sim \Gamma(\alpha_i, \lambda), i = 1,\dots,n, are independent random variables, then

Y = \sum_{i=1}^n X_i \sim \Gamma\Bigl(\sum_{i=1}^n\alpha_i,\ \lambda\Bigr).

L_Y(s) = \prod_{i=1}^n\Bigl(\frac{\lambda}{\lambda+s}\Bigr)^{\alpha_i} = \Bigl(\frac{\lambda}{\lambda+s}\Bigr)^{\sum_{i=1}^n\alpha_i},

which proves the statement.
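Exercise 44's result EX = \sum_i p_i/\lambda_i can be confirmed by numerically differentiating the hyperexponential transform at s = 0 (a sketch; helper names are illustrative):

```python
def hyper_lt(s, p, lam):
    # L_X(s) = sum_i p_i * lam_i / (lam_i + s)
    return sum(pi * li / (li + s) for pi, li in zip(p, lam))

def lt_mean(L, h=1e-6):
    # E X = -L'(0), approximated by a central difference
    return -(L(h) - L(-h)) / (2.0 * h)
```

For p = (0.4, 0.6), \lambda = (2, 5) the expected mean is 0.4/2 + 0.6/5 = 0.32.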

Chapter 9

Stochastic Systems

9.1 Poisson Process

Exercise 48 Find the correlation coefficient of the Poisson process (h > 0).

R\bigl(\nu(t),\nu(t+h)\bigr) = \frac{E\bigl(\nu(t)\nu(t+h)\bigr)-E\nu(t)\,E\nu(t+h)}{D\nu(t)\,D\nu(t+h)},

where DX = \sqrt{Var(X)}. To get E(\nu(t)\nu(t+h)) we need the following steps:

E\bigl(\nu(t)\underbrace{(\nu(t+h)-\nu(t))}_{\sim\,\nu(h)}\bigr) = E\bigl(\nu(t)\nu(t+h)\bigr)-E\nu^2(t),
E\bigl(\nu(t)(\nu(t+h)-\nu(t))\bigr) = E\nu(t)\,E\nu(h),

since \nu(t) and \nu(t+h)-\nu(t) are independent. Thus

E\bigl(\nu(t)\nu(t+h)\bigr) = E\nu(t)\,E\nu(h)+E\nu^2(t).

After substitution (E\nu(t) = \lambda t, E\nu^2(t) = \lambda t+(\lambda t)^2, D\nu(t) = \sqrt{\lambda t}) we obtain

R = \frac{\lambda t\cdot\lambda h+\lambda t+(\lambda t)^2-\lambda t\cdot\lambda(t+h)}{\sqrt{\lambda t}\,\sqrt{\lambda(t+h)}} = \frac{\lambda t}{\sqrt{\lambda t}\,\sqrt{\lambda(t+h)}} = \sqrt{\frac{t}{t+h}}.

Exercise 49 Let us consider a service system at which the inter-arrival times of the customers are exponentially distributed with parameter \lambda and the service times are also exponentially distributed with parameter \mu. Supposing that the involved times are independent of each other, find the distribution of the number of customers arriving during a service.

By the theorem of total probability,

P\bigl(N_a(S) = k\bigr) = \int_0^\infty P\bigl(N_a(S) = k \mid S = x\bigr)f_S(x)\,dx.

If f_S(x) = \mu e^{-\mu x}, then

P\bigl(N_a(S) = k\bigr) = \int_0^\infty\frac{(\lambda x)^k}{k!}e^{-\lambda x}\,\mu e^{-\mu x}\,dx = \frac{\lambda^k\mu}{k!\,(\lambda+\mu)}\underbrace{\int_0^\infty x^k(\lambda+\mu)e^{-(\lambda+\mu)x}\,dx}_{=\frac{k!}{(\lambda+\mu)^k}} = \frac{\mu}{\lambda+\mu}\Bigl(\frac{\lambda}{\lambda+\mu}\Bigr)^k,

which is a modified geometric distribution with parameter \frac{\mu}{\lambda+\mu}.

Exercise 50 Find the mean number of customers arriving during a service having a general distribution. To solve the problem let us apply the properties of the generating function and the Laplace transform. So we get

G_{N_a(S)}(z) = \int_0^\infty\sum_{k=0}^\infty z^k\frac{(\lambda x)^k}{k!}e^{-\lambda x}f_S(x)\,dx = \int_0^\infty e^{z\lambda x}e^{-\lambda x}f_S(x)\,dx = \int_0^\infty e^{-\lambda x(1-z)}f_S(x)\,dx = L_S\bigl(\lambda(1-z)\bigr),

that is, G_{N_a(S)}(z) = L_S(\lambda(1-z)). Therefore

E\bigl(N_a(S)\bigr) = G'_{N_a(S)}(1) = \frac{d}{dz}L_S\bigl(\lambda(1-z)\bigr)\Big|_{z=1} = -\lambda L'_S(0) = \lambda\,ES.

Exercise 51 Let the service times be Erlang distributed random variables with parameters (r, \mu). Similarly to the previous problem, find the distribution of the number of customers arriving during a service time if the arrival process remains the same.

P\bigl(N_a(S) = k\bigr) = \int_0^\infty\frac{(\lambda x)^k}{k!}e^{-\lambda x}\,\frac{\mu(\mu x)^{r-1}}{(r-1)!}e^{-\mu x}\,dx = \frac{\lambda^k}{k!}\,\frac{\mu^r}{(r-1)!}\underbrace{\int_0^\infty x^{k+r-1}e^{-(\lambda+\mu)x}\,dx}_{=\frac{(k+r-1)!}{(\lambda+\mu)^{k+r}}}
= \binom{r+k-1}{k}\Bigl(\frac{\lambda}{\lambda+\mu}\Bigr)^k\Bigl(\frac{\mu}{\lambda+\mu}\Bigr)^r = \binom{r+k-1}{k}(1-p)^k p^r, \qquad\text{where } p = \frac{\mu}{\lambda+\mu},

that is, we get the negative binomial (Pascal) distribution with parameters (p, r).

9.2 Some Simple Systems

Exercise 52 Solve the following first-order inhomogeneous linear differential equation:

P_1'(t)+(\lambda+\mu)P_1(t) = \mu, \qquad P_1(0) = 1.

The homogeneous part is

P_1'(t)+(\lambda+\mu)P_1(t) = 0 \implies \frac{P_1'(t)}{P_1(t)} = -(\lambda+\mu) \implies \ln P_1(t) = -(\lambda+\mu)t+\ln C \implies P_1(t) = Ce^{-(\lambda+\mu)t}.

A particular solution of the inhomogeneous equation can be obtained by applying the method of variation of parameters (variation of the constant), that is, P_1(t) = c(t)e^{-(\lambda+\mu)t}:

c'(t)e^{-(\lambda+\mu)t}-c(t)(\lambda+\mu)e^{-(\lambda+\mu)t}+(\lambda+\mu)c(t)e^{-(\lambda+\mu)t} = \mu \implies c'(t) = \mu e^{(\lambda+\mu)t} \implies c(t) = \frac{\mu}{\lambda+\mu}e^{(\lambda+\mu)t},

Thus a particular solution is

P_p(t) = \frac{\mu}{\lambda+\mu}e^{(\lambda+\mu)t}\,e^{-(\lambda+\mu)t} = \frac{\mu}{\lambda+\mu}.

Hence for the general solution we have

P_1(t) = Ce^{-(\lambda+\mu)t}+\frac{\mu}{\lambda+\mu}.

Taking into account the initial condition P_1(0) = 1 we obtain C+\frac{\mu}{\lambda+\mu} = 1, thus C = \frac{\lambda}{\lambda+\mu}. Therefore

P_1(t) = \frac{\lambda}{\lambda+\mu}e^{-(\lambda+\mu)t}+\frac{\mu}{\lambda+\mu},
P_0(t) = 1-P_1(t) = \frac{\lambda}{\lambda+\mu}-\frac{\lambda}{\lambda+\mu}e^{-(\lambda+\mu)t}.

If the initial condition is P_1(0) = 0, then the solution is

P_1(t) = \frac{\mu}{\lambda+\mu}-\frac{\mu}{\lambda+\mu}e^{-(\lambda+\mu)t},
P_0(t) = \frac{\mu}{\lambda+\mu}e^{-(\lambda+\mu)t}+\frac{\lambda}{\lambda+\mu}.

Taking the limit as t \to \infty, for the steady-state distribution we get

P_1 = \lim_{t\to\infty}P_1(t) = \frac{\mu}{\lambda+\mu}, \qquad P_0 = \lim_{t\to\infty}P_0(t) = \frac{\lambda}{\lambda+\mu}.

Exercise 53 Find the probability that at time t exactly k components are operating, provided that at the beginning n components were operational and m were failed.

P_{\cdot,k}(t) = \sum_{l=0}^k\binom{n}{l}\Bigl(\frac{\lambda}{\lambda+\mu}e^{-(\lambda+\mu)t}+\frac{\mu}{\lambda+\mu}\Bigr)^l\Bigl(\frac{\lambda}{\lambda+\mu}-\frac{\lambda}{\lambda+\mu}e^{-(\lambda+\mu)t}\Bigr)^{n-l}\binom{m}{k-l}\Bigl(\frac{\mu}{\lambda+\mu}-\frac{\mu}{\lambda+\mu}e^{-(\lambda+\mu)t}\Bigr)^{k-l}\Bigl(\frac{\mu}{\lambda+\mu}e^{-(\lambda+\mu)t}+\frac{\lambda}{\lambda+\mu}\Bigr)^{m-(k-l)}.

For the steady-state distribution we have

\lim_{t\to\infty}P_{\cdot,k}(t) = \sum_{l=0}^k\binom{n}{l}\Bigl(\frac{\mu}{\lambda+\mu}\Bigr)^l\Bigl(\frac{\lambda}{\lambda+\mu}\Bigr)^{n-l}\binom{m}{k-l}\Bigl(\frac{\mu}{\lambda+\mu}\Bigr)^{k-l}\Bigl(\frac{\lambda}{\lambda+\mu}\Bigr)^{m-(k-l)} = \binom{n+m}{k}\Bigl(\frac{\mu}{\lambda+\mu}\Bigr)^k\Bigl(\frac{\lambda}{\lambda+\mu}\Bigr)^{n+m-k},

using the Vandermonde identity \sum_l\binom{n}{l}\binom{m}{k-l} = \binom{n+m}{k}.
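The closed-form solution of Exercise 52 can be verified against a direct numerical integration of the differential equation; a minimal sketch using the explicit Euler method (the function names are mine):

```python
import math

def p1_closed(t, lam, mu):
    # solution of P1' + (lam+mu) P1 = mu with P1(0) = 1
    return (lam / (lam + mu)) * math.exp(-(lam + mu) * t) + mu / (lam + mu)

def p1_euler(t, lam, mu, steps=100_000):
    # explicit Euler integration of the same ODE from P1(0) = 1
    p, h = 1.0, t / steps
    for _ in range(steps):
        p += h * (mu - (lam + mu) * p)
    return p
```

For large t both converge to the steady-state value \mu/(\lambda+\mu).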

Cold reserve

Exercise 54 Let us consider a system containing a main component having an exponentially distributed operating time with parameter \lambda. As soon as it fails, a reserve unit starts operating in the same probabilistic manner. There are 2 repairmen and the repair times are supposed to be exponentially distributed random variables with parameter \mu. Assuming that the involved random variables are independent, find the main steady-state performance measures of the system.

It is easy to see that the transition rates are the following.

Figure 9.1: Cold reserve

In the stationary case let us introduce the usual notation, that is, denote by P_i the probability that i components are failed. Then for the balance equations we have

\lambda P_0 = \mu P_1,
(\lambda+\mu)P_1 = \lambda P_0+2\mu P_2,
2\mu P_2 = \lambda P_1.

The solution can be obtained up to a multiplicative constant:

P_1 = \frac{\lambda}{\mu}P_0, \qquad P_2 = \frac{\lambda}{2\mu}P_1 = \frac{\lambda^2}{2\mu^2}P_0,

which must satisfy the normalizing condition, that is,

P_0 = \frac{1}{1+\frac{\lambda}{\mu}+\frac{\lambda^2}{2\mu^2}} = \frac{2\mu^2}{2\mu^2+2\lambda\mu+\lambda^2} = \frac{1}{1+\varrho+\frac{\varrho^2}{2}}, \qquad\text{where } \varrho = \frac{\lambda}{\mu}.

The mean operating time of the system is

E(O) = \frac{1-P_2}{P_2}\,E(S) = \frac{1-P_2}{P_2}\cdot\frac{1}{2\mu}.
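The cold-reserve steady state can be checked in a few lines: compute the unnormalized solution, normalize, and verify the balance equations (a sketch; the function name is mine):

```python
def cold_reserve_steady(lam, mu):
    # P1 = (lam/mu) P0, P2 = lam^2/(2 mu^2) P0, then normalize
    p0 = 1.0
    p1 = lam / mu
    p2 = lam ** 2 / (2.0 * mu ** 2)
    s = p0 + p1 + p2
    return p0 / s, p1 / s, p2 / s
```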

Warm reserve

Figure 9.2: Warm reserve

Exercise 55 In this case both components can operate simultaneously but the reserve's failure rate is \lambda' (\lambda' < \lambda). Find the usual performance measures.

In this case the balance equations are

(\lambda+\lambda')P_0 = \mu P_1,
(\lambda+\mu)P_1 = (\lambda+\lambda')P_0+2\mu P_2,
2\mu P_2 = \lambda P_1.

The solution is

P_1 = \frac{\lambda+\lambda'}{\mu}P_0, \qquad P_2 = \frac{\lambda}{2\mu}P_1 = \frac{\lambda+\lambda'}{\mu}\cdot\frac{\lambda}{2\mu}P_0,

where

P_0 = \frac{1}{1+\frac{\lambda+\lambda'}{\mu}+\frac{\lambda+\lambda'}{\mu}\cdot\frac{\lambda}{2\mu}}.

Finally,

E(O) = \frac{1-P_2}{P_2}\cdot\frac{1}{2\mu}.

Exercise 56 Let us consider a component which in case of failure needs a detection time before repairing. This time is supposed to be an exponentially distributed random variable with parameter \nu. Find the steady-state distribution of the system.

Figure 9.3: Detection time

Similarly to the previous parts it is easy to see that the steady-state balance equations are

\lambda P_0 = \mu P_2, \qquad \nu P_1 = \lambda P_0, \qquad \mu P_2 = \nu P_1.

Thus

P_1 = \frac{\lambda}{\nu}P_0, \qquad P_2 = \frac{\nu}{\mu}P_1 = \frac{\lambda}{\mu}P_0.

Using the normalizing condition P_0+P_1+P_2 = 1 we have

P_0 = \frac{1}{1+\frac{\lambda}{\nu}+\frac{\lambda}{\mu}} = \frac{\nu\mu}{\nu\mu+\lambda(\mu+\nu)}.

Exercise 57 Assume that we have a two-component parallel-redundant system with a single repair facility. The operating times of both components are supposed to be exponentially distributed random variables with parameter \lambda and the repair times are also exponentially distributed with parameter \mu. When both components have failed, the system is considered to have failed and no recovery is possible; in other words, it is a parallel system with repairs. Let us denote by 0, 1, 2 the number of failed components and let the system start from state 0, that is, both components are operating. Find the mean time to the first system failure, supposing that the involved random variables are independent of each other.

Figure 9.4: Parallel-redundant system

The transient probability distribution of the system can be obtained from the following balance equations:

P_0(t+h) = P_0(t)\bigl(1-2\lambda h+o(h)\bigr)+P_1(t)\bigl(\mu h+o(h)\bigr)+o(h),
P_1(t+h) = P_1(t)\bigl(1-(\lambda+\mu)h+o(h)\bigr)+P_0(t)\bigl(2\lambda h+o(h)\bigr)+o(h),
P_2(t+h) = P_2(t)+P_1(t)\bigl(\lambda h+o(h)\bigr)+o(h).

For the differential equations we have

P_0'(t) = -2\lambda P_0(t)+\mu P_1(t),
P_1'(t) = -(\lambda+\mu)P_1(t)+2\lambda P_0(t),
P_2'(t) = \lambda P_1(t),

with the initial conditions P_0(0) = 1, P_1(0) = 0, P_2(0) = 0. In this case, however, we need the time-dependent solution, that is, we have to solve the system of differential equations. Using the Laplace transform we obtain

sP_0^*(s)-1 = -2\lambda P_0^*(s)+\mu P_1^*(s),
sP_1^*(s) = -(\lambda+\mu)P_1^*(s)+2\lambda P_0^*(s),
sP_2^*(s) = \lambda P_1^*(s).

After simple calculations we get

P_1^*(s) = \frac{s}{\lambda}P_2^*(s), \qquad P_0^*(s) = \frac{\lambda+\mu+s}{2\lambda}P_1^*(s) = P_2^*(s)\,\frac{s^2+(\lambda+\mu)s}{2\lambda^2}.

Since P_0(t)+P_1(t)+P_2(t) = 1, it is clear that P_0^*(s)+P_1^*(s)+P_2^*(s) = \frac{1}{s}, thus we have

P_2^*(s) = \frac{1}{s\Bigl(1+\frac{s}{\lambda}+\frac{s^2+(\lambda+\mu)s}{2\lambda^2}\Bigr)} = \frac{2\lambda^2}{s\bigl(s^2+(3\lambda+\mu)s+2\lambda^2\bigr)}.

By inversion P_2(t) can be obtained; P_2(t) is the probability that at time t the system is failed, since there is no operating component. Let Y denote the time to the first system failure. Then P_2(t) = P(Y < t), hence the reliability function of the system is R(t) = 1-P_2(t), so R'(t) = -P_2'(t) and f_Y(t) = P_2'(t). Using the properties of the Laplace transform we get

(P_2')^*(s) = sP_2^*(s)-P_2(0) = \frac{2\lambda^2}{s^2+(3\lambda+\mu)s+2\lambda^2}.

The denominator can be written in the form (s+a_1)(s+a_2), and thus we can use the method of partial fractions:

\frac{2\lambda^2}{(s+a_1)(s+a_2)} = \frac{2\lambda^2}{a_1-a_2}\Bigl(\frac{1}{s+a_2}-\frac{1}{s+a_1}\Bigr),

where

a_{1,2} = \frac{(3\lambda+\mu)\pm\sqrt{\lambda^2+6\lambda\mu+\mu^2}}{2} \qquad\text{and}\qquad a_1 a_2 = 2\lambda^2.

Hence

f_Y(t) = \frac{2\lambda^2}{a_1-a_2}\bigl(e^{-a_2 t}-e^{-a_1 t}\bigr),

since if f^*(s) = \frac{1}{s+a}, then f(t) = e^{-at}. The mean time to the first system failure can be computed in the following way:

E(Y) = \int_0^\infty y f_Y(y)\,dy = \frac{2\lambda^2}{a_1-a_2}\Bigl(\frac{1}{a_2^2}-\frac{1}{a_1^2}\Bigr) = \frac{2\lambda^2(a_1+a_2)}{(a_1 a_2)^2} = \frac{2\lambda^2(3\lambda+\mu)}{4\lambda^4} = \underbrace{\frac{3}{2\lambda}}_{\text{without repair}}+\underbrace{\frac{\mu}{2\lambda^2}}_{\text{increase}}.

It should be noted that E(Y) can be determined without the density function, since its Laplace transform is known. Thus

E(Y) = -\frac{d(P_2')^*(s)}{ds}\Big|_{s=0} = \frac{2\lambda^2(2s+3\lambda+\mu)}{\bigl(s^2+(3\lambda+\mu)s+2\lambda^2\bigr)^2}\Big|_{s=0} = \frac{2\lambda^2(3\lambda+\mu)}{4\lambda^4} = \frac{3\lambda+\mu}{2\lambda^2} = \frac{3}{2\lambda}+\frac{\mu}{2\lambda^2}.

In the case when the components are not repaired, that is, when \mu = 0, we get the parallel system treated earlier. After substitution we have a_1 = 2\lambda, a_2 = \lambda, and then

f_Y^*(s) = \frac{2\lambda^2}{(s+\lambda)(s+2\lambda)} = \frac{2\lambda}{s+2\lambda}\cdot\frac{\lambda}{s+\lambda}.

This can be interpreted as follows. The system failure time is the sum of the time of the first failure of the components and the residual operating time of the second component. The first failure is exponentially distributed with parameter 2\lambda and the remaining time is also exponentially distributed with parameter \lambda; furthermore, they are independent of each other.

Exercise 58 Let us modify the previous system in the following way. The repairs are carried out by 2 repairmen, and assume that the repair starts only when both components are failed. Find the steady-state characteristics of the system. Let us introduce the following notations:

Figure 9.5: Exercise 58

0 – both components are operating,
1 – one component is failed, there is no repair,
2 – two components are failed (both under repair),
3 – one component is failed, the other is under repair.

It is easy to see that the steady-state balance equations are

2\lambda P_0 = \mu P_3,
\lambda P_1 = 2\lambda P_0,
2\mu P_2 = \lambda P_1+\lambda P_3,
(\lambda+\mu)P_3 = 2\mu P_2.

For the solution we obtain

P_3 = \frac{2\lambda}{\mu}P_0, \qquad P_1 = 2P_0, \qquad P_2 = \frac{\lambda+\mu}{2\mu}P_3 = \frac{(\lambda+\mu)\lambda}{\mu^2}P_0.

Using the normalizing condition we have

P_0 = \frac{1}{3+\frac{(\lambda+\mu)\lambda}{\mu^2}+\frac{2\lambda}{\mu}} = \frac{\mu^2}{3\mu^2+\lambda^2+3\lambda\mu}.

The availability A of the system is

A = 1-P_2 = \frac{3\mu^2+2\lambda\mu}{3\mu^2+\lambda^2+3\lambda\mu}, \qquad\text{since } P_2 = \frac{(\lambda+\mu)\lambda}{3\mu^2+\lambda^2+3\lambda\mu}.

Furthermore, for the mean operating time of the system we get

E(O) = \frac{1-P_2}{P_2}\cdot\frac{1}{2\mu} = \frac{2\lambda+3\mu}{2\lambda(\lambda+\mu)}.
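The results of the last two exercises can both be checked with small linear-algebra computations. For Exercise 57, the mean time to absorption starting from state 0 solves the 2x2 system over the transient states; for Exercise 58, the unnormalized steady-state solution is normalized and compared with the closed forms. The function names below are mine, not from the text:

```python
def mttf_parallel(lam, mu):
    # Exercise 57: mean time to absorption from state 0; transient states {0, 1}:
    #   -2*lam*m0 + 2*lam*m1     = -1
    #    mu *m0  - (lam+mu)*m1   = -1
    a, b, c, d = -2.0 * lam, 2.0 * lam, mu, -(lam + mu)
    det = a * d - b * c           # = 2 lam^2
    return (-d + b) / det         # m0 by Cramer's rule; equals (3 lam + mu)/(2 lam^2)

def delayed_repair_steady(lam, mu):
    # Exercise 58: unnormalized P0 = 1, P1 = 2, P3 = 2 lam/mu, P2 = (lam+mu) lam/mu^2
    p0, p1 = 1.0, 2.0
    p3 = 2.0 * lam / mu
    p2 = (lam + mu) * lam / mu ** 2
    s = p0 + p1 + p2 + p3
    return [p0 / s, p1 / s, p2 / s, p3 / s]
```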

Appendix

In this Appendix some properties of the generating function, sometimes called the z-transform, and of the Laplace transform are listed. More properties can be found, for example, in Kleinrock [5].

Some properties of the generating function

Sequence — Generating function

1. f_n, n = 0, 1, 2, \dots — G(z) = \sum_{n=0}^\infty f_n z^n
2. af_n+bg_n — aG(z)+bH(z)
3. a^n f_n — G(az)
4. f_{n/k}, n = 0, k, 2k, \dots — G(z^k)
5. f_{n+k}, k > 0 — \frac{1}{z^k}\Bigl(G(z)-\sum_{i=0}^{k-1}f_i z^i\Bigr)
6. f_{n-k}, k > 0 — z^k G(z)
7. n(n-1)\cdots(n-m+1)f_n — z^m\,\frac{d^m G(z)}{dz^m}
8. f_n * g_n = \sum_k f_{n-k}g_k — G(z)H(z)
9. f_n-f_{n-1} — (1-z)G(z)
10. \sum_{k=0}^n f_k, n = 0, 1, 2, \dots — \frac{G(z)}{1-z}
11. a^n — \frac{1}{1-az}
12. Series sum property: G(1) = \sum_{n=0}^\infty f_n
13. Alternating sum property: G(-1) = \sum_{n=0}^\infty(-1)^n f_n
14. Initial value theorem: G(0) = f_0
15. Intermediate value theorem: f_n = \frac{1}{n!}\,\frac{d^n G(z)}{dz^n}\Big|_{z=0}
16. Final value theorem: \lim_{z\to 1}(1-z)G(z) = \lim_{n\to\infty}f_n
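The convolution and series-sum properties of the generating function are easy to verify on finite sequences, treating G(z) as a polynomial (a sketch; the helper names are mine):

```python
def gf_eval(seq, z):
    # G(z) = sum_n f_n z^n for a finite sequence
    return sum(f * z ** n for n, f in enumerate(seq))

def convolve(f, g):
    # h_n = sum_k f_{n-k} g_k
    h = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h
```

Evaluating at any |z| < 1 confirms that the transform of the convolution is the product G(z)H(z), and evaluating at z = 1 confirms the series-sum property.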

Some properties of the Laplace transform

Function — Transform

1. f(t), t \ge 0 — f^*(s) = \int_0^\infty f(t)e^{-st}\,dt
2. af(t)+bg(t) — af^*(s)+bg^*(s)
3. f(t/a), a > 0 — af^*(as)
4. f(t-a) — e^{-as}f^*(s)
5. e^{-at}f(t) — f^*(s+a)
6. t^n f(t) — (-1)^n\,\frac{d^n f^*(s)}{ds^n}
7. \frac{f(t)}{t} — \int_s^\infty f^*(s_1)\,ds_1
8. \frac{f(t)}{t^n} — \int_s^\infty\int_{s_1}^\infty\cdots\int_{s_{n-1}}^\infty f^*(s_n)\,ds_n\cdots ds_1
9. f(t)*g(t) = \int_0^t f(t-x)g(x)\,dx — f^*(s)g^*(s)
10. \frac{df(t)}{dt} — sf^*(s)-f(0)
11. f^{(n)}(t) — s^n f^*(s)-s^{n-1}f(0)-s^{n-2}f'(0)-\dots-f^{(n-1)}(0)
12. \frac{\partial f(t,a)}{\partial a}, a a parameter — \frac{\partial f^*(s,a)}{\partial a}
13. Integral property: f^*(0) = \int_0^\infty f(t)\,dt
14. Initial value theorem: \lim_{s\to\infty}sf^*(s) = \lim_{t\to 0}f(t)
15. Final value theorem: \lim_{s\to 0}sf^*(s) = \lim_{t\to\infty}f(t), if sf^*(s) is analytic for Re(s) \ge 0

Bibliography

[1] Allen, A. O. Probability, Statistics, and Queueing Theory with Computer Science Applications, 2nd ed. Academic Press, Boston, MA, 1990.

[2] Gnedenko, B., Belyayev, J., and Solovyev, A. Mathematical Methods of Reliability Theory. Academic Press, 1969.

[3] Jain, R. The Art of Computer Systems Performance Analysis. John Wiley and Sons, 1991.

[4] Karlin, S., and Taylor, H. An Introduction to Stochastic Modeling. Harcourt, New York, 1998.

[5] Kleinrock, L. Queueing Systems, Volume I: Theory. John Wiley and Sons, 1975.

[6] Kobayashi, H., and Mark, B. System Modeling and Analysis: Foundations of System Performance Evaluation. Pearson Education, Upper Saddle River, 2008.

[7] Ovcharov, L., and Wentzel, E. Applied Problems in Probability Theory. Mir Publishers, Moscow, 1986.

[8] Ravichandran, N. Stochastic Methods in Reliability Theory. John Wiley and Sons, 1990.

[9] Rényi, A. Probability Theory. Dover Publications, 2007.

[10] Ross, S. M. Introduction to Probability Models. Academic Press, Boston, 1989.

[11] Stewart, W. Probability, Markov Chains, Queues, and Simulation. Princeton University Press, Princeton, 2009.

[12] Sztrik, J. Introduction to Theory of Queues and Its Applications (in Hungarian). Kossuth Egyetemi Kiadó, Debrecen. http://irh.inf.unideb.hu/user/jsztrik/education/enotes.htm

[13] Tijms, H. A First Course in Stochastic Models. Wiley & Sons, Chichester, 2003.

[14] Trivedi, K. Probability and Statistics with Reliability, Queueing and Computer Science Applications, 2nd ed. John Wiley & Sons, 2002.