Conditional Tail Expectations for Multivariate Phase Type Distributions


Jun Cai
Department of Statistics and Actuarial Science
University of Waterloo
Waterloo, ON N2L 3G1, Canada

Haijun Li
Department of Mathematics
Washington State University
Pullman, WA 99164, U.S.A.

April 2005
Abstract

The conditional tail expectation in risk analysis describes the expected amount of risk that can be experienced given that a potential risk exceeds a threshold value, and provides an important measure for right-tail risk. In this paper, we study the convolution and extreme values of dependent risks that follow a multivariate phase type distribution, and derive explicit formulas for several conditional tail expectations of the convolution and extreme values of such dependent risks. Utilizing the underlying Markovian property of these distributions, our method not only reveals structural insight, but also yields some new distributional properties of multivariate phase type distributions.

Keywords: univariate and multivariate phase type distributions, continuous-time Markov chain, convolution, extreme value, Marshall-Olkin distribution, conditional tail expectation (CTE), excess loss, residual lifetime

Mathematics Subject Classification: Primary 62E15; Secondary 62P05, 91B30
1 Introduction

Let risk X be a nonnegative random variable with cumulative distribution F, where X may refer to a claim for an insurance company or a loss on an investment portfolio. The conditional expectation of X given that X > t, denoted by CTE_X(t) = E(X | X > t), is called the conditional tail expectation (CTE) of X at t. Observe that CTE_X(t) = t + E(X − t | X > t), where the random variable (X − t | X > t) is known as the residual lifetime in reliability (Shaked and Shanthikumar 1994) and as the excess loss or excess risk in insurance and finance (Embrechts et al. 1997). Since (d/dt)(t + E(X − t | X > t)) ≥ 0 for any continuous risk X (see page 45 of Shaked and Shanthikumar 1994), the CTE function CTE_X(t) is increasing in t ≥ 0.

The CTE is an important measure of right-tail risks, which are frequently encountered in insurance and financial investment. It is known that the CTE satisfies all the desirable properties of a coherent risk measure (Artzner et al. 1999), and that the CTE provides a more conservative measure of risk than the value at risk at the same level of confidence (Landsman and Valdez 2003). Hence, the CTE is preferable in many applications, and has recently received growing attention in the insurance and finance literature.

To analyze dependent risks, several CTE measures emerge; the most popular ones are the CTEs of the total risk and of the extreme risks. Let X = (X_1, ..., X_n) be a risk vector, where X_i denotes the risk (claim or loss) in subportfolio i for i = 1, ..., n. Then S = X_1 + ... + X_n is the aggregate risk, or total risk, of a portfolio that consists of the n subportfolios, and X_(1) = min{X_1, ..., X_n} and X_(n) = max{X_1, ..., X_n} are extreme risks in the portfolio.
In risk analysis for such a portfolio, we are interested not only in the CTE of each risk X_i, but also in the CTEs of the statistics S, X_(1), and X_(n), which are denoted by

    CTE_S(t) = E(S | S > t),          (1.1)
    CTE_{X_(1)}(t) = E(X_(1) | X_(1) > t),          (1.2)
    CTE_{X_(n)}(t) = E(X_(n) | X_(n) > t).          (1.3)

The risk measures related to these statistics also include, among others,

    CTE_{X_i|S}(t) = E(X_i | S > t),          (1.4)
    CTE_{X_i|X_(1)}(t) = E(X_i | X_(1) > t),          (1.5)
    CTE_{X_i|X_(n)}(t) = E(X_i | X_(n) > t),          (1.6)

for i = 1, 2, ..., n.
All these risk measures have useful interpretations in insurance, finance and other fields. For instance, CTE_{X_i|S}(t) represents the contribution of the ith risk X_i to the aggregate risk S, since CTE_S(t) = Σ_{i=1}^n CTE_{X_i|S}(t). The CTE E(X_(1) | X_(1) > t) (respectively, E(X_(n) | X_(n) > t)) describes the expected minimal (maximal) risk over all the subportfolios given that the minimal (maximal) risk exceeds some threshold t. More interestingly, E(X_i | X_(1) > t) represents the average contribution of the ith risk given that all the risks exceed some value t, whereas E(X_i | X_(n) > t) represents the average contribution of the ith risk given that at least one risk exceeds a certain value t.

Besides their interpretations in risk analysis, the CTEs in (1.2)-(1.6) also have interpretations in life insurance. For instance, in a group life insurance, let X_i, 1 ≤ i ≤ n, be the lifetime of the ith member of a group that consists of n members. Then X_(1) is the joint-life status and X_(n) is the last-survivor status (Bowers et al. 1997). As such, E(X_i | X_(1) > t) is the expected lifetime of member i given that all members are alive at time t, and E(X_i | X_(n) > t) is the expected lifetime of member i given that at least one member is alive at time t.

Landsman and Valdez (2003) obtained explicit formulas for CTE_S(t) and CTE_{X_i|S}(t) for multivariate elliptical distributions, which include the multivariate normal, stable, and Student-t distributions, among others. The focus of this paper is to derive explicit formulas for various CTEs, such as CTE_S(t), CTE_{X_(1)}(t), CTE_{X_(n)}(t), E(X_(n) | X_(1) > t), and E(X_(n) | X_i > t) for i = 1, 2, ..., n, for multivariate phase type distributions. Univariate phase type distributions (see Section 2 for the definition) have been widely used in queueing and reliability modeling (Asmussen 2003 and Neuts 1981), and in risk management and finance (Asmussen 2000 and Rolski et al. 1999).
Multivariate phase type distributions have also been introduced and studied in Assaf et al. (1984). The multivariate phase type distributions include, as special cases, many well-known multivariate distributions, such as the Marshall-Olkin distribution (Marshall and Olkin 1967), and also retain many desirable properties similar to those in the univariate case. For example, the set of n-dimensional phase type distributions is dense in the set of all distributions on [0, ∞)^n, and hence any nonnegative n-dimensional distribution can be approximated by n-dimensional phase type distributions. Furthermore, Kulkarni (1989) showed that the sum of the random variables that have a multivariate phase type distribution follows a (univariate) phase type distribution. Due to their complex structure, however, the applications of multivariate phase type distributions have been limited. Cai and Li (2005) employed Kulkarni's method to derive an explicit phase type representation for the convolution, and applied multivariate phase type distributions to ruin theory in a multidimensional risk model. In this paper, we further utilize the underlying Markovian structure to explore the right-tail distributional
properties of phase type distributions that are relevant to explicit calculations of the CTE risk measures. Our method, in a unified fashion, yields explicit expressions for most of the above-mentioned CTE functions for phase type distributions, and it also gives some new distributional properties of multivariate phase type distributions.

This paper is organized as follows. After a brief introduction of phase type distributions, Section 2 discusses the CTE for univariate phase type distributions. Section 3 details various CTE measures that involve multivariate phase type distributions. Section 4 concludes the paper with some illustrative examples. Throughout this paper, we denote by X =_st Y the fact that two random variables X and Y are identically distributed. The vector e denotes a column vector of 1's with an appropriate dimension, and the vector 0 denotes a row vector of zeros with an appropriate dimension. Note that the entries of all the probability vectors (subvectors) and matrices are indexed according to the state space of a Markov chain.

2 CTE for Univariate Phase Type Distributions

A nonnegative random variable X, or its distribution F, is said to be of phase type (PH) with representation (α, A, d) if X is the time to absorption into the absorbing state 0 of a finite Markov chain {X(t), t ≥ 0} with state space {0, 1, ..., d}, initial distribution β = (0, α), and infinitesimal generator

    Q = [ 0    0
          −Ae  A ],          (2.1)

where 0 = (0, ..., 0) is the d-dimensional row vector of zeros, e = (1, ..., 1)^T is the d-dimensional column vector of 1's, the sub-generator A is a d × d nonsingular matrix, and α = (α_1, ..., α_d). Thus, a nonnegative random variable X is of phase type with representation (α, A, d) if X = inf{t ≥ 0 : X(t) = 0}, where {X(t), t ≥ 0} is the underlying Markov chain for X. Let F̄(x) = 1 − F(x) denote the survival function. Then the random variable X is of phase type with representation (α, A, d) if and only if

    F̄(x) = Pr{X(x) ∈ {1, ..., d}} = α e^{xA} e,  x ≥ 0.          (2.2)
Thus,

    E X^k = ∫_0^∞ x^k dF(x) = (−1)^k k! (α A^{−k} e),  k = 1, 2, ....          (2.3)

See, for example, Rolski et al. (1999) for details. The CTE for a univariate phase type distribution has an explicit expression, which follows from the following proposition.
Proposition 2.1. If X has a PH distribution with representation (α, A, d), then for any t > 0, the excess risk (X − t | X > t) has a PH distribution with representation (α_t, A, d), where

    α_t = α e^{tA} / (α e^{tA} e).          (2.4)

Proof. The survival function of (X − t | X > t) is given by

    Pr{X − t > x | X > t} = Pr{X > t + x | X > t} = F̄(t + x) / F̄(t).

It follows from (2.2) that

    Pr{X − t > x | X > t} = α e^{(t+x)A} e / (α e^{tA} e) = α_t e^{xA} e.

Hence (X − t | X > t) has a PH distribution with representation (α_t, A, d).

Corollary 2.2. If the random variable X has a PH distribution with representation (α, A, d), then for any t > 0,

    CTE_X(t) = t − α A^{−1} e^{tA} e / (α e^{tA} e).          (2.5)

Proof. It follows from Proposition 2.1 and (2.3) that

    CTE_X(t) = t + E(X − t | X > t) = t − α_t A^{−1} e = t − α e^{tA} A^{−1} e / (α e^{tA} e) = t − α A^{−1} e^{tA} e / (α e^{tA} e),

where the last equality holds due to the fact that A^{−1} e^{tA} = e^{tA} A^{−1}.

As one application of Corollary 2.2, we can immediately obtain an explicit expression for the conditional expectation E(X | X ≤ t), as follows.

Proposition 2.3. If X has a PH distribution with representation (α, A, d), then for any t > 0,

    E(X | X ≤ t) = (−α A^{−1} e − t α e^{tA} e + α A^{−1} e^{tA} e) / (1 − α e^{tA} e),          (2.6)

    E(t − X | X ≤ t) = (t + α A^{−1} (I − e^{tA}) e) / (1 − α e^{tA} e).          (2.7)

Proof. Formula (2.6) follows from E(X) = E(X | X > t) P(X > t) + E(X | X ≤ t) P(X ≤ t), E(X) = −α A^{−1} e, P(X ≤ t) = 1 − α e^{tA} e, P(X > t) = α e^{tA} e, and (2.5). Formula (2.7) is obtained from E(t − X | X ≤ t) = t − E(X | X ≤ t) and (2.6).

The conditional expectation E(X | X ≤ t) will be used later in the paper. In risk analysis, E(t − X | X ≤ t) describes the surplus beyond the risk that has been experienced. It is also called the expected inactivity time in reliability theory.
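The univariate formulas (2.2), (2.3) and (2.5) are straightforward to evaluate numerically with a matrix exponential. The following sketch is our own illustration, not from the paper; the Erlang-2 representation and all function names are our choices, and SciPy's expm is used for e^{tA}:

```python
import math

import numpy as np
from scipy.linalg import expm

def ph_survival(alpha, A, x):
    """Survival function (2.2): F-bar(x) = alpha e^{xA} e."""
    return alpha @ expm(x * A) @ np.ones(len(alpha))

def ph_moment(alpha, A, k):
    """Moments (2.3): E X^k = (-1)^k k! alpha A^{-k} e."""
    Ak = np.linalg.matrix_power(np.linalg.inv(A), k)
    return (-1) ** k * math.factorial(k) * (alpha @ Ak @ np.ones(len(alpha)))

def ph_cte(alpha, A, t):
    """CTE (2.5): t - alpha e^{tA} A^{-1} e / (alpha e^{tA} e)."""
    w = alpha @ expm(t * A)
    e = np.ones(len(alpha))
    return t - (w @ np.linalg.inv(A) @ e) / (w @ e)

# Erlang(2) with rate 2 as a PH distribution: two exponential phases in series.
alpha = np.array([1.0, 0.0])
A = np.array([[-2.0, 2.0],
              [0.0, -2.0]])
```

For this representation, ph_survival(alpha, A, x) equals e^{−2x}(1 + 2x) and ph_moment(alpha, A, 1) equals 1; for a one-phase (exponential) representation, ph_cte reduces to the memoryless value t + 1/λ, which is a convenient sanity check.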
3 CTE for Multivariate Phase Type Distributions

Let {X(t), t ≥ 0} be a right-continuous, continuous-time Markov chain on a finite state space E with generator Q. Let E_i, i = 1, ..., n, be n nonempty stochastically closed subsets of E such that ∩_{i=1}^n E_i is a proper subset of E. (A subset of the state space is said to be stochastically closed if once the process {X(t), t ≥ 0} enters it, {X(t), t ≥ 0} never leaves.) We assume that absorption into ∩_{i=1}^n E_i is certain. Since we are interested in the process only until it is absorbed into ∩_{i=1}^n E_i, we may assume, without loss of generality, that ∩_{i=1}^n E_i consists of one state, which we shall denote by Δ. Thus, without loss of generality, we may write E = (∪_{i=1}^n E_i) ∪ E_0 for some subset E_0 ⊆ E with E_0 ∩ E_j = ∅ for 1 ≤ j ≤ n. The states in E are enumerated in such a way that Δ is the first element of E. Thus, the generator of the chain has the form

    Q = [ 0    0
          −Ae  A ],          (3.1)

where 0 = (0, ..., 0) is the d-dimensional row vector of zeros, e = (1, ..., 1)^T is the d-dimensional column vector of 1's, the sub-generator A is a d × d nonsingular matrix, and d = |E| − 1. Let β = (0, α) be an initial probability vector on E such that β(Δ) = 0. We define

    X_i = inf{t ≥ 0 : X(t) ∈ E_i},  i = 1, ..., n.          (3.2)

As in Assaf et al. (1984), for simplicity, we shall assume that P(X_1 > 0, ..., X_n > 0) = 1, which means that the underlying Markov chain {X(t), t ≥ 0} starts within E_0 almost surely. The joint distribution of (X_1, ..., X_n) is called a multivariate phase type distribution (MPH) with representation (α, A, E, E_1, ..., E_n), and (X_1, ..., X_n) is called a phase type random vector. When n = 1, the distribution in (3.2) reduces to the univariate PH distribution introduced in Neuts (1981) (see Section 2). Examples of MPH distributions include, among many others, the well-known Marshall-Olkin distribution (Marshall and Olkin 1967). The MPH distributions, their properties, and some related applications in reliability theory were discussed in Assaf et al. (1984).
As in the univariate case, MPH distributions (and their densities, Laplace transforms and moments) can be written in a closed form. The set of n-dimensional MPH distributions is dense in the set of all distributions on [0, ∞)^n. It is also shown in Assaf et al. (1984) and in Kulkarni (1989) that MPH distributions are closed under marginalization, finite mixture, convolution, and the formation of coherent systems. The sum S = X_1 + ... + X_n and the extreme values X_(1) = min{X_1, ..., X_n} and X_(n) = max{X_1, ..., X_n} are all of phase type if (X_1, ..., X_n) has a multivariate phase type distribution. Thus, Corollary 2.2 will yield explicit expressions of the CTEs for S, X_(1), and X_(n) if we
can provide the phase type representations of S, X_(1), and X_(n). In the following subsections, we will discuss these representations. We will also discuss the phase type representation of the random vector (X_(1), X_i, X_(n)), i = 1, 2, ..., n, and then obtain the related CTEs.

3.1 CTE of Sums

Cai and Li (2005) derived an explicit representation for the convolution distribution of S = X_1 + ... + X_n. To state this result, we partition the state space as follows. For any D ⊆ {1, ..., n}, let

    Γ^n = E_0,
    Γ^{n−1}_i = E_i − ∪_{k≠i} (E_i ∩ E_k),  i = 1, ..., n,
    Γ^{n−2}_{ij} = (E_i ∩ E_j) − ∪_{k≠i, k≠j} (E_i ∩ E_j ∩ E_k),  i ≠ j,
    ...
    Γ^{n−|D|}_D = ∩_{i∈D} E_i − ∪_{k∉D} ((∩_{i∈D} E_i) ∩ E_k),
    ...
    Γ^0 = {Δ}.

In other words, Γ^{n−|D|}_D contains the states that are in E_i for all i ∈ D, but not in any other E_j, j ∉ D. Note that these Γ's form a partition of E. For each state i ∈ E, define

    k(i) = the number of indexes in {j : i ∉ E_j, 1 ≤ j ≤ n}.          (3.3)

For example, k(i) = n for all i ∈ Γ^n, k(i) = 0 for all i ∈ Γ^0, and in general, k(i) = n − |D| for all i ∈ Γ^{n−|D|}_D.

Lemma 3.1. (Cai and Li 2005) Let (X_1, ..., X_n) be a phase type vector whose distribution has representation (α, A, E, E_1, ..., E_n), where A = (a_{i,j}). Then Σ_{i=1}^n X_i has a phase type distribution with representation (α, T, |E| − 1), where T = (t_{i,j}) is given by

    t_{i,j} = a_{i,j} / k(i),          (3.4)

that is, t_{i,j} = a_{i,j} / m if i ∈ Γ^m_D for some D ⊆ {1, ..., n}.

Theorem 3.2. Let (X_1, ..., X_n) be a phase type vector whose distribution has representation (α, A, E, E_1, ..., E_n). Then the CTE of S = X_1 + ... + X_n is given by, for any t > 0,

    CTE_S(t) = t − α T^{−1} e^{tT} e / (α e^{tT} e),          (3.5)

where T is defined by (3.4).

Proof. (3.5) follows from the phase type representation (α, T, |E| − 1) of S in Lemma 3.1 and (2.5).
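Lemma 3.1 turns the convolution of a dependent phase type vector into a single matrix computation: divide row i of A by k(i) and apply (2.5). The following numerical check is our own sketch; the bivariate Marshall-Olkin chain with transient states ordered (2, 1, 0) and shock rates λ_1 = λ_2 = 1.5, λ_12 = 1 is borrowed from Example 4.1 below as an assumed test case:

```python
import numpy as np
from scipy.linalg import expm

# Bivariate Marshall-Olkin chain, transient states ordered (2, 1, 0):
# shocks with rates lam1, lam2 kill one component each; lam12 kills both.
lam1, lam2, lam12 = 1.5, 1.5, 1.0
A = np.array([
    [-(lam12 + lam1), 0.0,             0.0],
    [0.0,             -(lam12 + lam2), 0.0],
    [lam2,            lam1,            -(lam1 + lam2 + lam12)],
])
alpha = np.array([0.0, 0.0, 1.0])        # start with both components alive
k = np.array([1.0, 1.0, 2.0])            # k(i): number of risks still running in state i

T = A / k[:, None]                       # (3.4): t_{ij} = a_{ij} / k(i)
e = np.ones(3)

def cte_sum(t):
    """(3.5): CTE_S(t) = t - alpha T^{-1} e^{tT} e / (alpha e^{tT} e)."""
    w = alpha @ expm(t * T)
    return t - (alpha @ np.linalg.inv(T) @ expm(t * T) @ e) / (w @ e)

mean_S = -alpha @ np.linalg.inv(T) @ e   # E S, which must equal E X_1 + E X_2
```

For these rates, mean_S equals E X_1 + E X_2 = 0.4 + 0.4 = 0.8, and cte_sum(0) returns the same value, since CTE_S(0) = E S.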
3.2 CTE of Order Statistics

Let F̄(x_1, ..., x_n) and F(x_1, ..., x_n) denote, respectively, the joint survival and distribution functions of a phase type random vector (X_1, ..., X_n). Then we have (see Assaf et al. 1984), for x_1 ≥ x_2 ≥ ... ≥ x_n ≥ 0,

    F̄(x_1, ..., x_n) = Pr{X_1 > x_1, ..., X_n > x_n}
                     = α e^{x_n A} g_n e^{(x_{n−1} − x_n)A} g_{n−1} ... e^{(x_1 − x_2)A} g_1 e,          (3.6)

    F(x_1, ..., x_n) = Pr{X_1 ≤ x_1, ..., X_n ≤ x_n}
                     = β e^{x_n Q} h_n e^{(x_{n−1} − x_n)Q} h_{n−1} ... e^{(x_1 − x_2)Q} h_1 e,          (3.7)

where, for k = 1, ..., n, g_k is a diagonal d × d matrix whose ith diagonal element, i = 1, ..., d, equals 1 if i ∈ E − E_k and zero otherwise, and h_k is a diagonal (d+1) × (d+1) matrix whose ith diagonal element, i = 1, ..., d+1, equals 1 if i ∈ E_k and zero otherwise.

For the matrix A in (3.1), we now introduce two Markov chains. Let E − {Δ} = S ∪ S′, where S ∩ S′ = ∅. The matrix Q in (3.1) can be partitioned as follows:

    Q = [ 0                          0        0
          −(A_S e + A_{SS′} e)       A_S      A_{SS′}
          −(A_{S′S} e + A_{S′} e)    A_{S′S}  A_{S′} ],          (3.8)

where A_S (A_{S′}) is the submatrix of A obtained by deleting the sth row and sth column of A for all s ∉ S (s ∉ S′), and A_{SS′} (A_{S′S}) is the submatrix of A obtained by deleting the sth row and s′th column of A for all s ∉ S, s′ ∉ S′ (s ∉ S′, s′ ∉ S). In particular, we have A_{E−{Δ}} = A.

1. The matrix

    Q_S = [ 0       0
            −A_S e  A_S ],          (3.9)

is the generator of a Markov chain with state space S ∪ {Δ} and absorbing state Δ. This Markov chain combines all the states of Q outside S into the absorbing state Δ.

2. Let A_{[S]} = A_S + D(A_{SS′}), where D(A_{SS′}) is the diagonal matrix whose sth diagonal entry is the sth entry of A_{SS′} e. The matrix

    Q_{[S]} = [ 0          0
                −A_{[S]} e A_{[S]} ],          (3.10)

is the generator of another Markov chain with state space S ∪ {Δ} and absorbing state Δ. This Markov chain removes all the transition rates of Q from S to S′.
For any d-dimensional probability vector α and any subset S ⊆ E − {Δ}, we denote by α_S the |S|-dimensional subvector of α obtained by deleting its sth entry for all s ∉ S. The vector I(S) denotes the column vector whose sth entry is 1 if s ∈ S and zero otherwise. Furthermore, for any S ⊆ E − {Δ}, we write α_t(S) for the following |S|-dimensional row vector:

    α_t(S) = α_S e^{t A_S} / (α_S e^{t A_S} e).          (3.11)

Note that α_t(E − {Δ}) = α_t, where α_t is given by (2.4).

For any phase type random vector (X_1, ..., X_n), Assaf et al. (1984) showed that the extreme values X_(1) = min{X_1, ..., X_n} and X_(n) = max{X_1, ..., X_n} are also of phase type. Their representations can be obtained from (3.6) and (3.7).

Lemma 3.3. Let (X_1, ..., X_n) be of phase type with representation (α, A, E, E_1, ..., E_n). Then

1. X_(1) is of phase type with representation (α_{E_0} / (α_{E_0} e), A_{E_0}, |E_0|), where A_{E_0} is defined as in (3.8).

2. X_(n) is of phase type with representation (α, A, |E| − 1).

Proof. By (3.6), the survival function of X_(1) is given by, for x ≥ 0,

    F̄_{X_(1)}(x) = Pr{X_(1) > x} = F̄(x, ..., x) = α e^{xA} g_n ... g_1 e.          (3.12)

Note that g_n ... g_1 e = I(E_0). Since the E_i, 1 ≤ i ≤ n, are all stochastically closed, we have

    F̄_{X_(1)}(x) = α e^{xA} I(E_0) = (α_{E_0} / (α_{E_0} e)) e^{x A_{E_0}} e,

which implies that X_(1) is of phase type with representation (α_{E_0} / (α_{E_0} e), A_{E_0}, |E_0|). Similarly, by (3.7), the distribution function of X_(n) is given by, for x ≥ 0,

    F_{X_(n)}(x) = Pr{X_(n) ≤ x} = F(x, ..., x) = β e^{xQ} h_n ... h_1 e = β e^{xQ} I({Δ}) = 1 − α e^{xA} e.

Thus, X_(n) is of phase type with representation (α, A, |E| − 1).

In fact, X_(1) is the exit time of {X(t), t ≥ 0} from E_0 and X_(n) is the exit time of {X(t), t ≥ 0} from E − {Δ}, and thus X_(1) and X_(n) are of phase type with the representations given in Lemma 3.3. In general, the kth order statistic X_(k) is the exit time of {X(t), t ≥ 0} from E − ∪_{{i_1, i_2, ..., i_k}} (∩_{j=1}^k E_{i_j}), where the union is over all subsets {i_1, ..., i_k} of size k, and is thus also of phase type. Hence, we obtain the following lemma.
Lemma 3.4. Let X_(k), 1 ≤ k ≤ n, be the kth smallest among X_1, ..., X_n, which are of phase type with representation (α, A, E, E_1, ..., E_n). Then we have

1. (X_(1), ..., X_(n)) is of phase type with representation (α, A, E, O_1, ..., O_n), where O_k = ∪_{{i_1, i_2, ..., i_k}} (∩_{j=1}^k E_{i_j}), 1 ≤ k ≤ n.

2. X_(k) is of phase type with representation (α_{E−O_k} / (α_{E−O_k} e), A_{E−O_k}, |E − O_k|), k = 1, ..., n.

Lemmas 3.3 and 3.4, together with (2.5), immediately yield the CTE expressions of the extreme values X_(1) and X_(n), as follows.

Theorem 3.5. Let (X_1, ..., X_n) be of phase type with representation (α, A, E, E_1, ..., E_n).

1. The excess loss (X_(1) − t | X_(1) > t) of a subportfolio with the least risk is of phase type with representation (α_t(E_0), A_{E_0}, |E_0|), and the CTE of X_(1) is given by

    CTE_{X_(1)}(t) = t − α_t(E_0) A_{E_0}^{−1} e.          (3.13)

2. The excess loss (X_(n) − t | X_(n) > t) of the riskiest subportfolio is of phase type with representation (α_t, A, |E| − 1), and the CTE of X_(n) is given by

    CTE_{X_(n)}(t) = t − α_t A^{−1} e,  where α_t = α e^{tA} / (α e^{tA} e).          (3.14)

3. In general, the excess loss (X_(k) − t | X_(k) > t) of the (n − k + 1)th riskiest subportfolio is of phase type with representation (α_t(E − O_k), A_{E−O_k}, |E − O_k|), and the CTE of X_(k) is given by

    CTE_{X_(k)}(t) = t − α_t(E − O_k) A_{E−O_k}^{−1} e,          (3.15)

where O_k = ∪_{{i_1, i_2, ..., i_k}} (∩_{j=1}^k E_{i_j}), k = 1, ..., n.

3.3 CTE Involving Different Subportfolios

The Markovian method also leads to expressions for CTEs among different subportfolios. Hereafter, for any d-dimensional probability vector α and any subset S ⊆ E − {Δ}, we denote, with a slight abuse of notation, by (0, α_S) the d-dimensional vector whose sth entry is the sth entry of α if s ∈ S, and zero otherwise. For example, (0, α_t(S)) denotes the d-dimensional vector whose sth entry is the sth entry of α_t(S) (see (3.11)) if s ∈ S, and zero otherwise.
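The extreme-value formulas (3.13) and (3.14) of Theorem 3.5 reduce to small matrix computations. The sketch below is our own check, again using the bivariate Marshall-Olkin chain of Example 4.1 (rates λ_1 = λ_2 = 1.5, λ_12 = 1) as an assumed test case; here E_0 = {0} is a single state, so A_{E_0} is 1 × 1:

```python
import numpy as np
from scipy.linalg import expm

lam1, lam2, lam12 = 1.5, 1.5, 1.0
# Transient states ordered (2, 1, 0); E_0 = {0} is the single all-alive state.
A = np.array([
    [-(lam12 + lam1), 0.0,             0.0],
    [0.0,             -(lam12 + lam2), 0.0],
    [lam2,            lam1,            -(lam1 + lam2 + lam12)],
])
alpha = np.array([0.0, 0.0, 1.0])
e = np.ones(3)

A_E0 = A[2:, 2:]                  # restriction of A to E_0 (a 1x1 block here)

def cte_min(t):
    """(3.13); alpha_t(E_0) is the trivial 1-dimensional probability vector."""
    return t - np.ones(1) @ np.linalg.inv(A_E0) @ np.ones(1)

def cte_max(t):
    """(3.14): t - alpha_t A^{-1} e, with alpha_t = alpha e^{tA} / (alpha e^{tA} e)."""
    w = alpha @ expm(t * A)
    return t - (w / (w @ e)) @ np.linalg.inv(A) @ e
```

For these rates, X_(1) is exponential with rate λ_1 + λ_2 + λ_12 = 4, so cte_min(t) = t + 0.25 for every t, while cte_max(0) equals E X_(2) = 0.55.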
Theorem 3.6. If (X_1, ..., X_n) is of phase type with representation (α, A, E, E_1, ..., E_n), then the random vector [(X_1 − t, ..., X_n − t) | X_(1) > t] is of phase type with representation ((0, α_t(E_0)), A, E, E_1, ..., E_n).

Proof. Let {X(t), t ≥ 0} be the underlying Markov chain for (X_1, ..., X_n). Then

    {X_(1) > t} = {X_1 > t, ..., X_n > t} = {X(t) ∈ E_0}.

It follows from the Markov property that

    [(X_1 − t, ..., X_n − t) | X_(1) > t] = [(X_1 − t, ..., X_n − t) | X(t) ∈ E_0]
        =_st (inf{s > 0 : X′(s) ∈ E_1}, ..., inf{s > 0 : X′(s) ∈ E_n}),

where {X′(s), s ≥ 0} is a Markov chain with the same state space and generator as those of {X(t), t ≥ 0}, but with initial probability vector (0, α_t(E_0)), and

    α_t(E_0) = (Pr{X(t) = i}, i ∈ E_0) / Pr{X(t) ∈ E_0} = α_{E_0} e^{t A_{E_0}} / (α_{E_0} e^{t A_{E_0}} e).

Thus, the random vector [(X_1 − t, ..., X_n − t) | X_(1) > t] is of phase type with representation ((0, α_t(E_0)), A, E, E_1, ..., E_n).

Hence, any marginal distribution of [(X_1 − t, ..., X_n − t) | X_(1) > t] is also of phase type. Thus, the risk contribution of the ith subportfolio given that all the risks exceed a threshold value can be calculated.

Corollary 3.7. Let (X_1, ..., X_n) be of phase type with representation (α, A, E, E_1, ..., E_n). Then the excess loss (X_i − t | X_(1) > t) is of phase type with representation ((0, α_t(E_0)), A_{E−E_i}, |E − E_i|), and

    CTE_{X_i|X_(1)}(t) = t − (0, α_t(E_0)) A_{E−E_i}^{−1} e.          (3.16)

Proof. By Theorem 3.6, [(X_1 − t, ..., X_n − t) | X_(1) > t] has the phase type representation ((0, α_t(E_0)), A, E, E_1, ..., E_n). Hence, (X_i − t | X_(1) > t) has the phase type representation ((0, α_t(E_0)), A_{E−E_i}, |E − E_i|). Thus, (3.16) follows from (2.5).

In fact, from a direct calculation, we have

    CTE_{X_i|X_(1)}(t) = E(X_i | X_(1) > t) = t + ∫_t^∞ F̄(t, ..., t, x, t, ..., t) dx / F̄(t, t, ..., t).          (3.17)
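Formula (3.16) is easy to evaluate for the bivariate Marshall-Olkin chain of Example 4.1 (our assumed test case, rates λ_1 = λ_2 = 1.5, λ_12 = 1): on E − E_1 = {2, 0} the vector (0, α_t(E_0)) puts all mass on state 0 for every t, and given both risks alive the residual X_1 is exponential with rate λ_1 + λ_12, giving the check value t + 1/(λ_1 + λ_12). A sketch, with our own variable names:

```python
import numpy as np

lam1, lam2, lam12 = 1.5, 1.5, 1.0
# Sub-chain on E - E_1 = {2, 0}: state 2 (component 2 down, 1 alive), state 0 (both alive);
# absorption of this sub-chain means entering E_1 (component 1 down).
A_sub = np.array([
    [-(lam12 + lam1), 0.0],
    [lam2,            -(lam1 + lam2 + lam12)],
])
# (0, alpha_t(E_0)) restricted to E - E_1: mass 1 on state 0 for every t,
# because E_0 = {0} is a single state.
v = np.array([0.0, 1.0])

def cte_x1_given_min(t):
    """(3.16): E(X_1 | X_(1) > t) = t - (0, alpha_t(E_0)) A_{E-E_1}^{-1} e."""
    return t - v @ np.linalg.inv(A_sub) @ np.ones(2)
```

Here cte_x1_given_min(t) evaluates to t + 0.4 for every t, the expected value of the Marshall-Olkin marginal residual given joint survival.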
Since the random vector (X_i, X_1, ..., X_{i−1}, X_{i+1}, ..., X_n) has an MPH distribution with representation (α, A, E, E_i, E_1, ..., E_{i−1}, E_{i+1}, ..., E_n), it follows from (3.6) that, for x > t,

    F̄(t, ..., t, x, t, ..., t) = Pr{X_i > x, X_1 > t, ..., X_{i−1} > t, X_{i+1} > t, ..., X_n > t}
                              = α e^{tA} [Π_{k=1, k≠i}^n g_k] e^{(x−t)A} g_i e.          (3.18)

Therefore, we obtain that for any 1 ≤ i ≤ n,

    CTE_{X_i|X_(1)}(t) = t + α e^{tA} [Π_{k=1, k≠i}^n g_k] [∫_t^∞ e^{(x−t)A} dx] g_i e / (α e^{tA} [Π_{k=1}^n g_k] e)
                      = t − α e^{tA} [Π_{k=1, k≠i}^n g_k] A^{−1} g_i e / (α e^{tA} [Π_{k=1}^n g_k] e).          (3.19)

The expression (3.19) provides another formula for CTE_{X_i|X_(1)}(t). Note that (3.19) yields the same result as Corollary 3.7, due to the fact that

    α e^{tA} [Π_{k=1, k≠i}^n g_k] A^{−1} g_i e / (α e^{tA} [Π_{k=1}^n g_k] e)
        = (0, α_{E_0} e^{t A_{E_0}}) A_{E−E_i}^{−1} e / (α_{E_0} e^{t A_{E_0}} e)
        = (0, α_t(E_0)) A_{E−E_i}^{−1} e.

To develop a general scheme, we derive an expression for E(X_i | X_k > t), i, k = 1, ..., n, where (X_1, ..., X_n) is of phase type with representation (α, A, E, E_1, ..., E_n) and underlying Markov chain {X(t), t ≥ 0}. Let q_k = Pr{X_k > t}. Since E_k is stochastically closed, we have

    q_k = Pr{X(t) ∈ E − E_k} = α e^{tA} I(E − E_k) = α_{E−E_k} e^{t A_{E−E_k}} e.

We have, for any x ≥ 0,

    Pr{X_i > x | X_k > t} = (1/q_k) Pr{X_i > x, X_k > t}
        = (1/q_k) Pr{X_i > x, X(t) ∈ (E − E_i − E_k) ∪ (E_i − E_i ∩ E_k)}
        = p_{1,k} Pr{X_i > x | X(t) ∈ E − E_i − E_k} + p_{2,k} Pr{X_i > x | X(t) ∈ E_i − E_i ∩ E_k},

where

    p_{1,k} = Pr{X(t) ∈ E − E_i − E_k} / q_k  and  p_{2,k} = Pr{X(t) ∈ E_i − E_i ∩ E_k} / q_k.
Clearly p_{1,k} + p_{2,k} = 1. Thus,

    E(X_i | X_k > t) = p_{1,k} E(X_i | X(t) ∈ E − E_i − E_k) + p_{2,k} E(X_i | X(t) ∈ E_i − E_i ∩ E_k).          (3.20)

Since

    Pr{X(t) ∈ E − E_i − E_k} = α_{E−E_i−E_k} e^{t A_{E−E_i−E_k}} e,
    Pr{X(t) ∈ E_i − E_i ∩ E_k} = α e^{tA} I(E_i − E_i ∩ E_k),

we have

    p_{1,k} = α_{E−E_i−E_k} e^{t A_{E−E_i−E_k}} e / (α_{E−E_k} e^{t A_{E−E_k}} e),          (3.21)

    p_{2,k} = α e^{tA} I(E_i − E_i ∩ E_k) / (α e^{tA} I(E − E_k)).          (3.22)

To calculate the two conditional expectations in (3.20), we need the following lemma.

Lemma 3.8. Let (X_1, ..., X_n) be MPH with representation (α, A, E, E_1, ..., E_n).

1. For any i, k, if Pr{X(t) ∈ E − E_i − E_k} > 0, then

    E(X_i | X(t) ∈ E − E_i − E_k) = t − (0, α_t(E − E_i − E_k)) A_{E−E_i}^{−1} e.          (3.23)

2. For any i ≠ k, if Pr{X(t) ∈ E_i − E_i ∩ E_k} > 0, then

    E(X_i | X(t) ∈ E_i − E_i ∩ E_k)
        = (−(0, ᾱ) A_{[E−E_k]}^{−1} e − t (0, ᾱ) e^{t A_{[E−E_k]}} e + (0, ᾱ) A_{[E−E_k]}^{−1} e^{t A_{[E−E_k]}} e) / (1 − (0, ᾱ) e^{t A_{[E−E_k]}} e),          (3.24)

where ᾱ = α_{E−E_i−E_k} / (α_{E−E_i−E_k} e).

Proof. To calculate E(X_i | X(t) ∈ E − E_i − E_k), it follows from the Markov property that

    E(X_i | X(t) ∈ E − E_i − E_k) = t + E(inf{s > 0 : X′(s) = Δ}),

where {X′(t), t ≥ 0} is a Markov chain with state space (E − E_i) ∪ {Δ}, sub-generator A_{E−E_i}, and initial probability vector (0, α_t(E − E_i − E_k)), with

    α_t(E − E_i − E_k) = (Pr{X(t) = j}, j ∈ E − E_i − E_k) / Pr{X(t) ∈ E − E_i − E_k}
                       = α_{E−E_i−E_k} e^{t A_{E−E_i−E_k}} / (α_{E−E_i−E_k} e^{t A_{E−E_i−E_k}} e).

Since inf{s > 0 : X′(s) = Δ} is of phase type, we have

    E(X_i | X(t) ∈ E − E_i − E_k) = t − (0, α_t(E − E_i − E_k)) A_{E−E_i}^{−1} e.          (3.25)
To calculate E(X_i | X(t) ∈ E_i − E_i ∩ E_k), consider {X(t) ∈ E_i − E_i ∩ E_k} = {X_i ≤ t < X_k}. We define a new Markov chain {X̃(t), t ≥ 0} with state space E − E_k as follows. The set of absorbing states is E_i − E_i ∩ E_k, and the initial probability vector of {X̃(t), t ≥ 0} is (0, ᾱ), where ᾱ = α_{E−E_i−E_k} / (α_{E−E_i−E_k} e). The generator of {X̃(t), t ≥ 0} is given by A_{[E−E_k]} (see (3.10)). Let X̃_i = inf{s > 0 : X̃(s) ∈ E_i − E_i ∩ E_k}, which has a phase type distribution. Then, from (2.6),

    E(X_i | X(t) ∈ E_i − E_i ∩ E_k) = E(X̃_i | X̃_i ≤ t)
        = (−(0, ᾱ) A_{[E−E_k]}^{−1} e − t (0, ᾱ) e^{t A_{[E−E_k]}} e + (0, ᾱ) A_{[E−E_k]}^{−1} e^{t A_{[E−E_k]}} e) / (1 − (0, ᾱ) e^{t A_{[E−E_k]}} e).

Thus, (3.24) holds.

Observe that if E_i ⊆ E_k, then X_i ≥ X_k almost surely. It follows from (3.20) that

    E(X_i | X_k > t) = E(X_i | X(t) ∈ E − E_k),  if E_i ⊆ E_k.          (3.26)

Thus, Lemma 3.8 (1) implies that (X_i − t | X_k > t) is of phase type with representation ((0, α_t(E − E_k)), A_{E−E_i}, |E − E_i|). This leads to the following corollaries.

Corollary 3.9. Let (X_1, ..., X_n) be MPH with representation (α, A, E, E_1, ..., E_n), and let X_(k), 1 ≤ k ≤ n, be the kth order statistic of (X_1, ..., X_n).

1. For any k ≤ i, (X_(i) − t | X_(k) > t) is of phase type with representation ((0, α_t(E − O_k)), A_{E−O_i}, |E − O_i|), and

    E(X_(i) | X_(k) > t) = t − (0, α_t(E − O_k)) A_{E−O_i}^{−1} e.          (3.27)

2. In particular, (X_(n) − t | X_(1) > t) is of phase type with representation ((0, α_t(E_0)), A, |E| − 1), and

    E(X_(n) | X_(1) > t) = t − (0, α_t(E_0)) A^{−1} e.          (3.28)
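Formula (3.28) conditions the maximum on all risks surviving t. For the bivariate Marshall-Olkin chain of Example 4.1 (our assumed test case, rates λ_1 = λ_2 = 1.5, λ_12 = 1), (0, α_t(E_0)) is the full-length vector with mass 1 on the all-alive state for every t, so E(X_(n) | X_(1) > t) is t plus the expected absorption time from that state. A sketch:

```python
import numpy as np

lam1, lam2, lam12 = 1.5, 1.5, 1.0
# Transient states ordered (2, 1, 0).
A = np.array([
    [-(lam12 + lam1), 0.0,             0.0],
    [0.0,             -(lam12 + lam2), 0.0],
    [lam2,            lam1,            -(lam1 + lam2 + lam12)],
])
v = np.array([0.0, 0.0, 1.0])   # (0, alpha_t(E_0)): mass 1 on the all-alive state, any t

def e_max_given_min(t):
    """(3.28): E(X_(n) | X_(1) > t) = t - (0, alpha_t(E_0)) A^{-1} e."""
    return t - v @ np.linalg.inv(A) @ np.ones(3)
```

For these rates, e_max_given_min(t) evaluates to t + 0.55, matching Case 2 of Example 4.2 below.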
Proof. It follows from Lemma 3.4 that (X_(k), X_(i)) is of phase type with representation (α, A, E, O_k, O_i). Since O_i ⊆ O_k, that is, X_(i) ≥ X_(k), the corollary follows from Lemma 3.8 (1), A_{E−{Δ}} = A, and |E − {Δ}| = |E| − 1.

Corollary 3.10. Let (X_1, ..., X_n) be MPH with representation (α, A, E, E_1, ..., E_n). Then (X_(n) − t | X_i > t) is of phase type with representation ((0, α_t(E − E_i)), A, |E| − 1), and

    E(X_(n) | X_i > t) = t − (0, α_t(E − E_i)) A^{−1} e.          (3.29)

Proof. If (X_1, ..., X_n) is MPH with representation (α, A, E, E_1, ..., E_n), then (X_i, X_(n)) is of phase type with representation (α, A, E, E_i, {Δ}). The results then follow from (3.26), Lemma 3.8, and the fact that X_(n) ≥ X_i almost surely for any i.

The expression for E(X_i | X_(n) > t) is cumbersome, but can be obtained from (3.20), (3.21), (3.22), and Lemma 3.8.

4 CTE of Marshall-Olkin Distributions

In this section, we illustrate our results using the multivariate Marshall-Olkin distribution, and also show some interesting effects of different parameters on the CTEs.

Let {E_S, S ⊆ {1, ..., n}} be a sequence of independent, exponentially distributed random variables, with E_S having mean 1/λ_S. Let

    X_j = min{E_S : S ∋ j},  j = 1, ..., n.          (4.1)

The joint distribution of (X_1, ..., X_n) is called the Marshall-Olkin distribution with parameters {λ_S, S ⊆ {1, ..., n}} (Marshall and Olkin 1967). In the reliability context, X_1, ..., X_n can be viewed as the lifetimes of n components operating in a random shock environment, where a fatal shock governed by a Poisson process {N_S(t), t ≥ 0} with rate λ_S destroys all the components with indexes in S ⊆ {1, ..., n} simultaneously. Assuming that these Poisson shock arrival processes are independent, we have

    X_j = inf{t : N_S(t) ≥ 1 for some S ∋ j},  j = 1, ..., n.          (4.2)

Let {M_S(t), t ≥ 0}, S ⊆ {1, ..., n}, be independent Markov chains, each with an absorbing state Δ_S and each representing the exponential distribution with parameter λ_S.
It follows from (4.2) that (X_1, ..., X_n) is of phase type, with the underlying Markov chain defined on the product space of these independent Markov chains and with absorbing classes

    E_j = {(e_S) : e_S = Δ_S for some S ∋ j},  1 ≤ j ≤ n.

It is also easy to verify that the marginal distribution of the jth component of the Marshall-Olkin distributed random vector is exponential with mean 1/Σ_{S: S∋j} λ_S.
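The product-space construction can be carried out mechanically on the collapsed state space of "failed sets" used in the next section. The routine below is our own sketch (function and variable names are not from the paper): it builds the sub-generator A from a dictionary of shock rates {λ_S} and recovers the exponential marginals as a check:

```python
import itertools

import numpy as np

def mo_subgenerator(n, rates):
    """Sub-generator of the Marshall-Olkin chain whose state is the set of failed
    components; `rates` maps frozen shock sets S to their rates lambda_S.
    Transient states are the proper subsets of {1, ..., n}, the empty set first."""
    full = frozenset(range(1, n + 1))
    states = [frozenset(c)
              for r in range(n)
              for c in itertools.combinations(range(1, n + 1), r)]
    index = {S: i for i, S in enumerate(states)}
    A = np.zeros((len(states), len(states)))
    for S in states:
        for L, lam in rates.items():
            S_new = S | L
            if S_new == S:
                continue                    # this shock changes nothing from state S
            A[index[S], index[S]] -= lam    # total rate of leaving S
            if S_new != full:
                A[index[S], index[S_new]] += lam   # jump to the enlarged failure set
    return states, A

# Bivariate example: shocks {1}, {2}, {1,2} with rates 1.5, 1.5, 1.0
rates = {frozenset({1}): 1.5, frozenset({2}): 1.5, frozenset({1, 2}): 1.0}
states, A = mo_subgenerator(2, rates)
alpha = np.array([1.0 if S == frozenset() else 0.0 for S in states])

mean_max = -alpha @ np.linalg.inv(A) @ np.ones(len(states))   # E X_(n): absorption time
idx = [i for i, S in enumerate(states) if 1 not in S]         # sub-chain avoiding E_1
mean_x1 = -alpha[idx] @ np.linalg.inv(A[np.ix_(idx, idx)]) @ np.ones(len(idx))
```

For these rates, mean_x1 recovers the exponential marginal mean 1/(λ_1 + λ_12) = 0.4, and mean_max equals E X_1 + E X_2 − E X_(1) = 0.55.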
To calculate the CTEs, we simplify the underlying Markov chain for the Marshall-Olkin distribution and obtain its phase type representation. Let {X(t), t ≥ 0} be a Markov chain with state space E = {S : S ⊆ {1, ..., n}} = {Δ, e_1, ..., e_d}, starting at ∅ almost surely, where a state records the set of components destroyed so far. The index set {1, ..., n} itself is the absorbing state Δ, and

    E_0 = {∅},  E_j = {S : S ∋ j},  j = 1, ..., n.

It follows from (4.2) that the sub-generator is given by A = (a_{i,j}), where

    a_{i,j} = Σ_{L: L∪S = S′} λ_L,  if e_i = S, e_j = S′ and S ⊊ S′,
    a_{i,i} = −Σ_{L: L⊄S} λ_L,  if e_i = S,

and zero otherwise. Using the results of Sections 2-3 and these parameters, we can calculate the CTEs. To illustrate the results, we consider the bivariate case.

Example 4.1. In the Marshall-Olkin distribution, let n = 2, with state space E = {12, 2, 1, 0} and E_j = {12, j}, j = 1, 2, where 12 is the absorbing state. Note that 0 in this and the next example abbreviates the state ∅. Furthermore, let the initial probability vector be (0, α) with α = (0, 0, 1). Then the sub-generator A for the two-dimensional Marshall-Olkin distribution is given by

    A = [ −λ_12 − λ_1    0              0
          0              −λ_12 − λ_2    0
          λ_2            λ_1            −Λ + λ ],

where Λ = λ_12 + λ_2 + λ_1 + λ.

To study the effect of dependence on the CTEs, we calculate CTE_S(t), CTE_{X_(1)}(t), and CTE_{X_(n)}(t), respectively, under several different sets of model parameters. The analytic forms of CTE_S(t), CTE_{X_(1)}(t), and CTE_{X_(n)}(t) in the following three cases, and the numerical values in Table 1, were easily produced from (3.5), (3.13), and (3.14) by Mathematica. The first column of Table 1 lists several values of t, and the next several columns list the values of these CTEs in the following three cases.

1. Case 1: λ_12 = 0, λ_1 = λ_2 = 2.5, λ = 0. In this case, X_1 and X_2 are independent, and

    CTE_S(t) = t + (0.8 + t) / (1 + 2.5t),
    CTE_{X_(1)}(t) = t + 0.2,
    CTE_{X_(n)}(t) = (2(0.4 + t) − (0.2 + t)e^{−2.5t}) / (2 − e^{−2.5t}).
2. Case 2: λ_12 = 1, λ_1 = λ_2 = 1.5, λ = 1. In this case, X_1 and X_2 are positively dependent, and

    CTE_S(t) = t + (2 − 1.2e^{−0.5t}) / (4 − 3e^{−0.5t}),
    CTE_{X_(1)}(t) = t + 0.25,
    CTE_{X_(n)}(t) = (2(0.4 + t)e^{1.5t} − (0.25 + t)) / (2e^{1.5t} − 1).

3. Case 3: λ_12 = 2.5, λ_1 = λ_2 = 0, λ = 2.5. This is the comonotone case, where X_1 = X_2, and so the vector (X_1, X_2) has the strongest positive dependence. In this case,

    CTE_S(t) = t + 0.8,
    CTE_{X_(1)}(t) = CTE_{X_(n)}(t) = t + 0.4.

Table 1: Effects of Dependence on the CTEs of S, X_(1), and X_(n)

In all three cases, (X_1, X_2) has the same marginal distributions; namely, X_1 and X_2 have exponential distributions with means 1/(λ_12 + λ_1) and 1/(λ_12 + λ_2), respectively. The only difference among the cases is the correlation between X_1 and X_2. It can easily be verified directly that the correlation coefficient of (X_1, X_2) in Case 1 is smaller than that in Case 2, which, in turn, is smaller than that in Case 3. In fact, it follows from Proposition 5.5 in Li and Xu (2000) that the random vector in Case 1 is less dependent than that in Case 2, which, in turn, is less dependent than that in Case 3, all in the supermodular dependence order.

Table 1 shows that CTE_S(t) becomes larger as the correlation grows. The effect of dependence on CTE_{X_(1)}(t) is the same as that on CTE_S(t). However, the effect of dependence on CTE_{X_(n)}(t) is different from those on CTE_S(t) and CTE_{X_(1)}(t). Indeed, CTE_{X_(n)}(t) is neither increasing nor decreasing as the correlation grows.
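The dependence comparison of Example 4.1 can be reproduced numerically from (3.5), (3.13) and (3.14). The sketch below uses our own helper names; the three rate triples are Cases 1-3, all with marginal rate λ_12 + λ_i = 2.5:

```python
import numpy as np
from scipy.linalg import expm

def bivariate_mo(lam1, lam2, lam12):
    """Sub-generator A on transient states (2, 1, 0), and the sum's generator T."""
    A = np.array([
        [-(lam12 + lam1), 0.0,             0.0],
        [0.0,             -(lam12 + lam2), 0.0],
        [lam2,            lam1,            -(lam1 + lam2 + lam12)],
    ])
    T = A / np.array([1.0, 1.0, 2.0])[:, None]   # Lemma 3.1: divide row i by k(i)
    return A, T

alpha, e = np.array([0.0, 0.0, 1.0]), np.ones(3)

def cte(M, t):
    """Generic PH CTE, formula (2.5), for the representation (alpha, M)."""
    w = alpha @ expm(t * M)
    return t - (w @ np.linalg.inv(M) @ e) / (w @ e)

cases = [(2.5, 2.5, 0.0), (1.5, 1.5, 1.0), (0.0, 0.0, 2.5)]   # Cases 1, 2, 3
t = 1.0
cte_S, cte_Xn, cte_X1 = [], [], []
for lam1, lam2, lam12 in cases:
    A, T = bivariate_mo(lam1, lam2, lam12)
    cte_S.append(cte(T, t))                          # CTE of the sum, via (3.5)
    cte_Xn.append(cte(A, t))                         # CTE of the maximum, via (3.14)
    cte_X1.append(t + 1.0 / (lam1 + lam2 + lam12))   # the minimum is exponential
```

Across the three cases, cte_S and cte_X1 increase with the dependence, while cte_Xn does not move monotonically, in line with the discussion of Table 1.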
Example 4.2. We now calculate E(X_(n) | X_(1) > t), E(X_(n) | X_1 > t), and CTE_{X(n)}(t) = E(X_(n) | X_(n) > t) under the same sets of model parameters used in Example 4.1. The analytic forms of E(X_(n) | X_(1) > t), E(X_(n) | X_1 > t), and E(X_(n) | X_(n) > t) in the following three cases and the numerical values in Table 2 were easily produced from (3.28), (3.29), and (3.14) by Mathematica. The first column of Table 2 lists several values of t, and the next several columns list values of these conditional expectations in the three cases corresponding to those in Example 4.1.

1. Case 1: λ_12 = 0, λ_1 = λ_2 = 2.5. In this case,

E(X_(n) | X_(1) > t) = t + 0.6,
E(X_(n) | X_1 > t) = t + 0.4 + 0.2e^{-2.5t},
E(X_(n) | X_(n) > t) = (0.8 + 2t - (0.2 + t)e^{-2.5t}) / (2 - e^{-2.5t}).

2. Case 2: λ_12 = 1, λ_1 = λ_2 = 1.5. In this case,

E(X_(n) | X_(1) > t) = t + 0.55,
E(X_(n) | X_1 > t) = t + 0.4 + 0.15e^{-1.5t},
E(X_(n) | X_(n) > t) = (0.8 + 2t - (0.25 + t)e^{-1.5t}) / (2 - e^{-1.5t}).

3. Case 3: λ_12 = 2.5, λ_1 = λ_2 = 0. In this case,

E(X_(n) | X_(1) > t) = E(X_(n) | X_1 > t) = E(X_(n) | X_(n) > t) = t + 0.4.

Table 2: Effects of Dependence and Different Conditions on the Maximal Risk. (Columns: t; E(X_(n) | X_(1) > t), E(X_(n) | X_1 > t), and E(X_(n) | X_(n) > t), each under Cases 1-3.)

Table 2 shows that E(X_(n) | X_(1) > t) ≥ E(X_(n) | X_1 > t) ≥ E(X_(n) | X_(n) > t) for the values of t in all three cases. However, neither E(X_(n) | X_1 > t) nor E(X_(n) | X_(n) > t) exhibits any monotonicity property as the correlation grows.
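To see the conditioning effects concretely, the bivariate Marshall-Olkin vector can be simulated straight from its shock construction: three independent exponential shock times T_1, T_2, T_12, with X_1 = min(T_1, T_12) and X_2 = min(T_2, T_12). The Monte Carlo sketch below is ours, for illustration only; the variable names and the choice t = 0.4 are arbitrary. It estimates the three conditional expectations of the maximum in Case 2 and reproduces their ordering, with the strongest condition (both risks exceeding t) yielding the largest conditional maximum.

```python
import random

random.seed(7)
lam1, lam2, lam12 = 1.5, 1.5, 1.0  # Case 2 shock rates
t, n = 0.4, 400_000

sums = {"min": [0.0, 0], "x1": [0.0, 0], "max": [0.0, 0]}
for _ in range(n):
    # Independent exponential shock arrival times T_1, T_2, T_12.
    t1 = random.expovariate(lam1)
    t2 = random.expovariate(lam2)
    t12 = random.expovariate(lam12)
    x1, x2 = min(t1, t12), min(t2, t12)  # Marshall-Olkin construction
    mx = max(x1, x2)
    for key, cond in (("min", min(x1, x2) > t), ("x1", x1 > t), ("max", mx > t)):
        if cond:
            sums[key][0] += mx
            sums[key][1] += 1

est = {k: s / c for k, (s, c) in sums.items()}
# Exact values at t = 0.4 (derived from the model):
#   E(X_(n) | X_(1) > t) = t + 0.55 = 0.95
#   E(X_(n) | X_1 > t)   = t + 0.4 + 0.15 e^{-0.6} ~ 0.882
#   E(X_(n) | X_(n) > t) ~ 0.857, the smallest of the three
assert est["min"] >= est["x1"] >= est["max"]
```

Conditioning on the minimum exceeding t forces both components to be large, so the estimated conditional maximum is largest under that condition; conditioning only on the event X_(n) > t is the weakest of the three and gives the smallest value.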
5 Concluding Remarks

Using the Markovian method, we have derived explicit expressions for various conditional tail expectations (CTEs) of multivariate phase type distributions in a unified fashion. These CTEs can be applied to measure right-tail risks for a financial portfolio consisting of several stochastically dependent subportfolios. We have focused on the total risk, minimal risk, and maximal risk of the portfolio, and, as we illustrated in the numerical examples, our CTE formulas for these risks can be easily implemented.

Some CTE function values in our numerical examples increase as the correlation among the subportfolios grows. This demonstrates that merely increasing the correlation, while fixing the marginal risk of each subportfolio, can add more risk to the entire portfolio. In fact, since the minimum statistic X_(1) of a Marshall-Olkin distributed random vector (X_1, ..., X_n) has an exponential distribution, it is easy to verify directly that the CTE of X_(1) increases as (X_1, ..., X_n) becomes more dependent in the sense of the supermodular dependence order. However, whether or not a given CTE risk measure exhibits a certain monotonicity property as the correlation among the subportfolios grows remains open and, as an important question, needs further study.

Another problem that we have not addressed in this paper is an explicit expression for the CTE E(X_i | X_1 + ··· + X_n > t), which is of interest in the study of risk allocation for the total risk. Using the Markovian method, we can obtain an expression for this CTE, but the expression is too cumbersome to be of practical use. Alternatively, some kind of recursive algorithm defined on the underlying Markov structure would offer a promising computational option.

Acknowledgments: We thank a referee for her/his careful reading of the paper and her/his helpful suggestions that improved the presentation of the paper.
The research was supported in part by the NSERC grant RGPIN.

References

[1] Asmussen, S. (2000) Ruin Probabilities. World Scientific, Singapore.
[2] Asmussen, S. (2003) Applied Probability and Queues, Second Edition. Springer, New York.
[3] Artzner, P., Delbaen, F., Eber, J.-M. and Heath, D. (1999) Coherent measures of risk. Mathematical Finance 9,
[4] Assaf, D., Langberg, N., Savits, T. and Shaked, M. (1984) Multivariate phase-type distributions. Operations Research 32,
[5] Bowers, N., Gerber, H., Hickman, J., Jones, D. and Nesbitt, C. (1997) Actuarial Mathematics. The Society of Actuaries, Schaumburg, Illinois.
[6] Cai, J. and Li, H. (2005) Multivariate risk model of phase type. Insurance: Mathematics and Economics 36,
[7] Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997) Modelling Extremal Events for Insurance and Finance. Springer, Berlin.
[8] Kulkarni, V. G. (1989) A new class of multivariate phase type distributions. Operations Research 37,
[9] Landsman, Z. and Valdez, E. (2003) Tail conditional expectations for elliptical distributions. North American Actuarial Journal 7,
[10] Li, H. and Xu, S. H. (2000) On the dependence structure and bounds of correlated parallel queues and their applications to synchronized stochastic systems. Journal of Applied Probability 37,
[11] Marshall, A. W. and Olkin, I. (1967) A multivariate exponential distribution. Journal of the American Statistical Association 62,
[12] Neuts, M. F. (1981) Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach. The Johns Hopkins University Press, Baltimore.
[13] Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. (1999) Stochastic Processes for Insurance and Finance. Wiley, New York.
[14] Shaked, M. and Shanthikumar, J. G. (1994) Stochastic Orders and Their Applications. Academic Press, New York.