Non-Life Insurance Mathematics. Christel Geiss and Stefan Geiss, Department of Mathematics and Statistics, University of Jyväskylä


July 29, 2015
Contents

1 Introduction
  1.1 Some facts about probability
2 Claim number process models
  2.1 The homogeneous Poisson process
  2.2 The renewal process
  2.3 The inhomogeneous Poisson process
3 The total claim amount process S(t)
  3.1 The Cramér-Lundberg model
  3.2 The renewal model
  3.3 Properties of S(t)
4 Premium calculation principles
  4.1 Used principles
5 Claim size distributions
  5.2 Examples
  5.3 The QQ-plot
6 Modern premium calculation principles
  6.1 The exponential principle
  6.2 The quantile principle
  6.3 The Esscher principle
7 The distribution of S(t)
  7.2 Mixture distributions
  7.3 Applications in insurance
  7.4 The Panjer recursion
  7.5 Approximation of $F_{S(t)}$
  Monte Carlo approximations of $F_{S(t)}$
8 Reinsurance treaties
9 Probability of ruin
  The risk process
  Bounds for the ruin probability
  An asymptotics for the ruin probability
Problems
A The Lebesgue-Stieltjes integral
  A.1 The Riemann-Stieltjes integral
  A.2 The Lebesgue-Stieltjes integral
1. Introduction

Insurance mathematics is sometimes divided into life insurance, health insurance and non-life insurance. Life insurance includes, for instance, life insurance contracts and pensions, where long terms are covered. Non-life insurance comprises insurance against fire, water damage, earthquake or industrial catastrophes, as well as car insurance, for example. Non-life insurance contracts in general cover a year or some other fixed time period. Health insurance is special because it is organized differently in each country.

The course material is based on the textbook Non-Life Insurance Mathematics by Thomas Mikosch [6].

The problem to solve

We will consider the following situation.

1. Insurance contracts (or policies) are sold. This is the income of the insurance company.
2. At times $T_i$, $0 \le T_1 \le T_2 \le \dots$, claims happen. The times $T_i$ are called the claim arrival times.
3. The $i$-th claim, arriving at time $T_i$, causes the claim size $X_i$.

Task: Find a stochastic model for the $T_i$'s and $X_i$'s to compute or estimate how much an insurance company should demand for its contracts, and how much initial capital of the insurance company is required to keep the probability of ruin below a certain level.
1.1 Some facts about probability

We shortly recall some definitions and facts from probability theory which we need in this course. For more information see [9] or [2], for example.

(i) A probability space is a triple $(\Omega, \mathcal{F}, \mathbb{P})$, where $\Omega$ is a non-empty set, $\mathcal{F}$ is a $\sigma$-algebra consisting of subsets of $\Omega$, and $\mathbb{P}$ is a probability measure on $(\Omega, \mathcal{F})$.

(ii) A function $f: \Omega \to \mathbb{R}$ is called a random variable if and only if for all intervals $(a,b)$, $-\infty < a < b < \infty$, the preimage
$$f^{-1}((a,b)) := \{\omega \in \Omega : a < f(\omega) < b\} \in \mathcal{F}.$$

(iii) The random variables $f_1,\dots,f_n$ are independent if and only if
$$\mathbb{P}(f_1 \in B_1,\dots,f_n \in B_n) = \mathbb{P}(f_1 \in B_1)\cdots\mathbb{P}(f_n \in B_n)$$
for all $B_k \in \mathcal{B}(\mathbb{R})$, $k=1,\dots,n$. (Here $\mathcal{B}(\mathbb{R})$ denotes the Borel $\sigma$-algebra.) If the $f_i$'s have discrete values, i.e. $f_i : \Omega \to \{x_1,x_2,x_3,\dots\}$, then the random variables $f_1,\dots,f_n$ are independent if and only if
$$\mathbb{P}(f_1 = k_1,\dots,f_n = k_n) = \mathbb{P}(f_1=k_1)\cdots\mathbb{P}(f_n=k_n)$$
for all $k_i \in \{x_1,x_2,x_3,\dots\}$.

(iv) If $f_1,\dots,f_n$ are independent random variables such that $f_i$ has the density function $h_i(x)$, i.e. $\mathbb{P}(f_i \in (a,b)) = \int_a^b h_i(x)\,dx$, then
$$\mathbb{P}((f_1,\dots,f_n) \in B) = \int_{\mathbb{R}^n} 1\!\!1_B(x_1,\dots,x_n)\, h_1(x_1)\cdots h_n(x_n)\, dx_1\cdots dx_n$$
for all $B \in \mathcal{B}(\mathbb{R}^n)$. The $\sigma$-algebra $\mathcal{B}(\mathbb{R}^n)$ is the Borel $\sigma$-algebra on $\mathbb{R}^n$, i.e. the smallest $\sigma$-algebra containing all open rectangles $(a_1,b_1)\times\cdots\times(a_n,b_n)$. The function $1\!\!1_B(x)$ is the indicator function of the set $B$, defined as
$$1\!\!1_B(x) := \begin{cases} 1 & \text{if } x \in B, \\ 0 & \text{if } x \notin B.\end{cases}$$
(v) A random variable $f:\Omega\to\{0,1,2,\dots\}$ is Poisson distributed with parameter $\lambda>0$ if and only if
$$\mathbb{P}(f=k) = e^{-\lambda}\frac{\lambda^k}{k!}, \quad k=0,1,2,\dots$$
This is often written as $f \sim \mathrm{Pois}(\lambda)$.

(vi) A random variable $g:\Omega\to[0,\infty)$ is exponentially distributed with parameter $\lambda>0$ if and only if for all $a<b$
$$\mathbb{P}(g\in(a,b)) = \lambda\int_a^b 1\!\!1_{[0,\infty)}(x)\, e^{-\lambda x}\,dx.$$

[Figure: the density $\lambda 1\!\!1_{[0,\infty)}(x)e^{-\lambda x}$ for $\lambda=3$.]
2. Models for the claim number process N(t)

In the following we will introduce three processes which are used as claim number processes: the homogeneous Poisson process, the renewal process and the inhomogeneous Poisson process.

2.1 The homogeneous Poisson process with parameter $\lambda>0$

Definition (homogeneous Poisson process). A stochastic process $N=(N(t))_{t\in[0,\infty)}$ is a Poisson process if the following conditions are fulfilled:

(P1) $N(0)=0$ a.s. (almost surely), i.e. $\mathbb{P}(\{\omega\in\Omega : N(0,\omega)=0\})=1$.

(P2) $N$ has independent increments, i.e. if $0 \le t_0 < t_1 < \dots < t_n$ ($n\ge 1$), then
$$N(t_n)-N(t_{n-1}),\ N(t_{n-1})-N(t_{n-2}),\ \dots,\ N(t_1)-N(t_0)$$
are independent.

(P3) For any $s\ge 0$ and $t>0$ the random variable $N(t+s)-N(s)$ is Poisson distributed with parameter $\lambda t$, i.e.
$$\mathbb{P}(N(t+s)-N(s)=m) = e^{-\lambda t}\frac{(\lambda t)^m}{m!}, \quad m=0,1,2,\dots$$

(P4) The paths of $N$, i.e. the functions $(N(t,\omega))_{t\in[0,\infty)}$ for fixed $\omega$, are almost surely right-continuous and have left limits. One says $N$ has càdlàg (continue à droite, limite à gauche) paths.
Lemma. Assume $W_1, W_2, \dots$ are independent and exponentially distributed with parameter $\lambda>0$. Then, for any $x>0$, we have
$$\mathbb{P}(W_1+\dots+W_n \le x) = 1 - e^{-\lambda x}\sum_{k=0}^{n-1}\frac{(\lambda x)^k}{k!}.$$
This means the sum of independent exponentially distributed random variables is a Gamma distributed random variable.

Proof: Exercise.

Definition. Let $W_1, W_2,\dots$ be independent and exponentially distributed with parameter $\lambda>0$. Define
$$T_n := W_1+\dots+W_n \quad\text{and}\quad \hat N(t,\omega) := \#\{i\ge 1 : T_i(\omega)\le t\}, \quad t\ge 0.$$

Lemma. For each $n=0,1,2,\dots$ and for all $t>0$ it holds
$$\mathbb{P}(\{\omega\in\Omega : \hat N(t,\omega)=n\}) = e^{-\lambda t}\frac{(\lambda t)^n}{n!},$$
i.e. $\hat N(t)$ is Poisson distributed with parameter $\lambda t$.

Proof: From the definition of $\hat N$ it can be concluded that
$$\{\omega : \hat N(t,\omega)=n\} = \{\omega : T_n(\omega)\le t < T_{n+1}(\omega)\} = \{\omega : T_n(\omega)\le t\}\setminus\{\omega : T_{n+1}(\omega)\le t\}.$$
Because of $T_n \le T_{n+1}$ we have the inclusion $\{T_{n+1}\le t\}\subseteq\{T_n\le t\}$. This implies
$$\mathbb{P}(\hat N(t)=n) = \mathbb{P}(T_n\le t)-\mathbb{P}(T_{n+1}\le t) = \left(1-e^{-\lambda t}\sum_{k=0}^{n-1}\frac{(\lambda t)^k}{k!}\right) - \left(1-e^{-\lambda t}\sum_{k=0}^{n}\frac{(\lambda t)^k}{k!}\right) = e^{-\lambda t}\frac{(\lambda t)^n}{n!}.$$
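As a quick numerical sanity check (not part of the original notes; the parameters $\lambda=3$, $t=2$ are illustrative), one can simulate $\hat N(t)$ by summing exponential waiting times and compare the empirical mean and variance with the Poisson value $\lambda t$:

```python
import random

def poisson_count(lam, t, rng):
    """Count how many partial sums T_i of i.i.d. Exp(lam) waiting times fall in [0, t]."""
    total, n = 0.0, 0
    while True:
        total += rng.expovariate(lam)  # next waiting time W_i
        if total > t:
            return n
        n += 1

rng = random.Random(0)
lam, t, runs = 3.0, 2.0, 20000
samples = [poisson_count(lam, t, rng) for _ in range(runs)]

mean = sum(samples) / runs
var = sum((s - mean) ** 2 for s in samples) / runs
# For a Pois(lam * t) variable both the mean and the variance equal lam * t = 6.
print(mean, var)
```

Both empirical moments should be close to $\lambda t = 6$, matching the lemma.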
Theorem. (a) $(\hat N(t))_{t\in[0,\infty)}$ is a Poisson process with parameter $\lambda>0$.

(b) Any Poisson process $N(t)$ with parameter $\lambda>0$ can be written as
$$N(t)=\#\{i\ge 1 : T_i\le t\},\quad t\ge 0,$$
where $T_n=W_1+\dots+W_n$, $n\ge 1$, and $W_1,W_2,\dots$ are independent and exponentially distributed with parameter $\lambda>0$.

Proof: (a) We check the properties of the Definition.

(P1) From (vi) of Section 1.1 we get
$$\mathbb{P}(W_1>0)=\mathbb{P}(W_1\in(0,\infty))=\lambda\int_0^\infty e^{-\lambda y}\,dy=1.$$
This implies $\hat N(0,\omega)=0$ if only $0<T_1(\omega)=W_1(\omega)$, but $W_1>0$ holds almost surely. Hence $\hat N(0)=0$ a.s.

(P2) We only show that $\hat N(s)$ and $\hat N(t)-\hat N(s)$ are independent for $0\le s<t$, i.e.
$$\mathbb{P}(\hat N(s)=l,\,\hat N(t)-\hat N(s)=m)=\mathbb{P}(\hat N(s)=l)\,\mathbb{P}(\hat N(t)-\hat N(s)=m) \quad (1)$$
for $l,m\ge 0$. The general case can be shown similarly. It holds
$$\mathbb{P}(\hat N(s)=l,\,\hat N(t)-\hat N(s)=m)=\mathbb{P}(\hat N(s)=l,\,\hat N(t)=m+l)=\mathbb{P}(T_l\le s<T_{l+1},\,T_{l+m}\le t<T_{l+m+1}).$$
By defining functions $f_1,f_2,f_3,f_4$ as
$$f_1:=T_l,\qquad f_2:=W_{l+1},\qquad f_3:=W_{l+2}+\dots+W_{l+m},\qquad f_4:=W_{l+m+1},$$
and $h_1,\dots,h_4$ as the respective densities, it follows that
$$\mathbb{P}(T_l\le s<T_{l+1},\,T_{l+m}\le t<T_{l+m+1}) = \mathbb{P}\big(0\le f_1\le s,\ s-f_1<f_2,\ f_3\le t-f_1-f_2,\ t-(f_1+f_2+f_3)<f_4\big)$$
$$= \underbrace{\int_0^s\underbrace{\int_{s-x_1}^\infty\underbrace{\int_0^{t-x_1-x_2}\underbrace{\int_{t-x_1-x_2-x_3}^\infty h_4(x_4)\,dx_4}_{=:I_4(x_1,x_2,x_3)}\,h_3(x_3)\,dx_3}_{=:I_3(x_1,x_2)}\,h_2(x_2)\,dx_2}_{=:I_2(x_1)}\,h_1(x_1)\,dx_1}_{=:I_1}.$$
By direct computation, rewriting the density function of $f_4=W_{l+m+1}$,
$$I_4(x_1,x_2,x_3)=\int_{t-x_1-x_2-x_3}^\infty \lambda e^{-\lambda x_4}\,1\!\!1_{[0,\infty)}(x_4)\,dx_4 = e^{-\lambda(t-x_1-x_2-x_3)}.$$
Here we used $t-x_1-x_2-x_3>0$. This is true because the integration w.r.t. $x_3$ implies $0<x_3<t-x_1-x_2$. The density of $f_3=W_{l+2}+\dots+W_{l+m}$ is
$$h_3(x_3)=\frac{\lambda^{m-1}x_3^{m-2}}{(m-2)!}\,1\!\!1_{[0,\infty)}(x_3)\,e^{-\lambda x_3}.$$
Therefore
$$I_3(x_1,x_2)=\int_0^{t-x_1-x_2}\frac{\lambda^{m-1}x_3^{m-2}}{(m-2)!}\,e^{-\lambda x_3}\,e^{-\lambda(t-x_1-x_2-x_3)}\,dx_3 = 1\!\!1_{[0,t-x_1)}(x_2)\,e^{-\lambda(t-x_1-x_2)}\,\frac{\lambda^{m-1}(t-x_1-x_2)^{m-1}}{(m-1)!}.$$
The density of $f_2=W_{l+1}$ is $h_2(x_2)=1\!\!1_{[0,\infty)}(x_2)\,\lambda e^{-\lambda x_2}$. This implies
$$I_2(x_1)=\int_{s-x_1}^\infty 1\!\!1_{[0,t-x_1)}(x_2)\,e^{-\lambda(t-x_1-x_2)}\,\frac{\lambda^{m-1}(t-x_1-x_2)^{m-1}}{(m-1)!}\,\lambda e^{-\lambda x_2}\,dx_2 = \lambda^m e^{-\lambda(t-x_1)}\,\frac{(t-s)^m}{m!}.$$
Finally, with $h_1(x_1)=\frac{\lambda^l x_1^{l-1}}{(l-1)!}\,1\!\!1_{[0,\infty)}(x_1)\,e^{-\lambda x_1}$, we conclude
$$I_1=\int_0^s \lambda^m e^{-\lambda(t-x_1)}\,\frac{(t-s)^m}{m!}\,\frac{\lambda^l x_1^{l-1}}{(l-1)!}\,e^{-\lambda x_1}\,dx_1 = \lambda^m e^{-\lambda t}\,\frac{(t-s)^m}{m!}\,\frac{\lambda^l s^l}{l!} = \Big(e^{-\lambda s}\frac{(\lambda s)^l}{l!}\Big)\Big(e^{-\lambda(t-s)}\frac{(\lambda(t-s))^m}{m!}\Big)$$
$$= \mathbb{P}(\hat N(s)=l)\,\mathbb{P}(\hat N(t-s)=m).$$
If we sum
$$\mathbb{P}(\hat N(s)=l,\,\hat N(t)-\hat N(s)=m)=\mathbb{P}(\hat N(s)=l)\,\mathbb{P}(\hat N(t-s)=m)$$
over $l\in\mathbb{N}$ we get
$$\mathbb{P}(\hat N(t)-\hat N(s)=m)=\mathbb{P}(\hat N(t-s)=m) \quad (2)$$
and hence (1).

(P3) follows from the Lemma and (2).

(P4) is clear from the construction.

(b) The proof is an exercise.
[Figure: simulated paths of the Poisson process for two values of $\lambda$.]

2.2 The renewal process

To model windstorm claims, for example, it is not good to use the Poisson process, because windstorm claims happen rarely, sometimes with years in between. The Pareto distribution, for example, which has the distribution function
$$F(x)=1-\left(\frac{\kappa}{\kappa+x}\right)^{\alpha}$$
with parameters $\alpha,\kappa>0$, would fit better. For a Pareto distributed random variable it is more likely to have large values than for an exponentially distributed random variable.

Definition (Renewal process). Assume that $W_1, W_2,\dots$ are i.i.d. (independent and identically distributed) random variables such that $W_1>0$ a.s. Then
$$T_0 := 0, \qquad T_n := W_1+\dots+W_n, \quad n\ge 1,$$
is a renewal sequence and
$$N(t) := \#\{i\ge 1 : T_i\le t\}, \quad t\ge 0,$$
is the renewal process.

In order to study the limit behavior of $N$ we need the Strong Law of Large Numbers (SLLN):

Theorem (SLLN). If the random variables $X_1,X_2,\dots$ are i.i.d. with $\mathbb{E}|X_1|<\infty$, then
$$\frac{X_1+X_2+\dots+X_n}{n}\longrightarrow \mathbb{E}X_1 \quad\text{a.s. as } n\to\infty.$$

Theorem (SLLN for renewal processes). Assume $N(t)$ is a renewal process. If $\mathbb{E}W_1<\infty$, then
$$\lim_{t\to\infty}\frac{N(t)}{t} = \frac{1}{\mathbb{E}W_1}\quad\text{a.s.}$$

Proof: Because of
$$\{\omega\in\Omega : N(t)(\omega)=n\} = \{\omega\in\Omega : T_n(\omega)\le t< T_{n+1}(\omega)\},\quad n\in\mathbb{N},$$
we have for $N(t)(\omega)>0$
$$\frac{T_{N(t)(\omega)}(\omega)}{N(t)(\omega)} \le \frac{t}{N(t)(\omega)} < \frac{T_{N(t)(\omega)+1}(\omega)}{N(t)(\omega)} = \frac{T_{N(t)(\omega)+1}(\omega)}{N(t)(\omega)+1}\cdot\frac{N(t)(\omega)+1}{N(t)(\omega)}. \quad (3)$$
Note that
$$\{\omega\in\Omega : T_1(\omega)<\infty\}=\Big\{\omega\in\Omega : \sup_{t\ge 0}N(t)>0\Big\}.$$
The SLLN implies that
$$\frac{T_n}{n}\longrightarrow \mathbb{E}W_1 \quad (4)$$
holds on a set $\Omega_0$ with $\mathbb{P}(\Omega_0)=1$. Hence $\lim_{n\to\infty}T_n=\infty$ on $\Omega_0$, and by the definition of $N$ also $\lim_{t\to\infty}N(t)=\infty$ on $\Omega_0$. From (4) we get
$$\lim_{t\to\infty}\frac{T_{N(t)(\omega)}(\omega)}{N(t)(\omega)} = \mathbb{E}W_1 \quad\text{for } \omega\in\Omega_0.$$
Finally, (3) implies
$$\lim_{t\to\infty}\frac{t}{N(t)(\omega)} = \mathbb{E}W_1\quad\text{for }\omega\in\Omega_0.$$

In the following we will investigate the behavior of $\mathbb{E}N(t)$ as $t\to\infty$.

Theorem (Elementary renewal theorem). Assume the above setting, i.e. $N(t)$ is a renewal process. If $\mathbb{E}W_1<\infty$, then
$$\lim_{t\to\infty}\frac{\mathbb{E}N(t)}{t}=\frac{1}{\mathbb{E}W_1}. \quad (5)$$

Remark. If the $W_i$'s are exponentially distributed with parameter $\lambda>0$, $W_i\sim\mathrm{Exp}(\lambda)$, $i=1,2,\dots$, then $N(t)$ is a Poisson process. Consequently, $\mathbb{E}N(t)=\lambda t$. Since $\mathbb{E}W_i=\frac{1}{\lambda}$, it follows that for all $t>0$
$$\frac{\mathbb{E}N(t)}{t}=\frac{1}{\mathbb{E}W_1}. \quad (6)$$
If the $W_i$'s are not exponentially distributed, then equation (6) holds only in the limit $t\to\infty$.
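A small simulation (illustrative, not from the notes; the waiting times are Uniform(0,2), so $\mathbb{E}W_1=1$) suggests how $\mathbb{E}N(t)/t$ approaches $1/\mathbb{E}W_1$ for non-exponential waiting times as well:

```python
import random

def renewal_count(t, rng):
    """Renewal count N(t) for i.i.d. Uniform(0, 2) waiting times (EW_1 = 1)."""
    total, n = 0.0, 0
    while True:
        total += rng.uniform(0.0, 2.0)
        if total > t:
            return n
        n += 1

rng = random.Random(1)
t, runs = 500.0, 2000
mean_count = sum(renewal_count(t, rng) for _ in range(runs)) / runs
# Elementary renewal theorem: EN(t)/t -> 1/EW_1 = 1 for large t.
ratio = mean_count / t
print(ratio)
```

The empirical ratio should be close to $1/\mathbb{E}W_1 = 1$, even though equation (6) is not exact here for finite $t$.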
In order to prove the Elementary renewal theorem we formulate the following Lemma of Fatou type:

Lemma. Let $Z=(Z_t)_{t\in[0,\infty)}$ be a stochastic process such that $Z_t:\Omega\to[0,\infty)$ for all $t\ge 0$. Then
$$\mathbb{E}\liminf_{t\to\infty}Z_t \le \liminf_{t\to\infty}\mathbb{E}Z_t.$$

Proof: By monotone convergence, since $t\mapsto\inf_{s\ge t}Z_s$ is non-decreasing, we have
$$\mathbb{E}\lim_{t\to\infty}\inf_{s\ge t}Z_s = \lim_{t\to\infty}\mathbb{E}\inf_{s\ge t}Z_s.$$
Obviously, $\mathbb{E}\inf_{s\ge t}Z_s \le \mathbb{E}Z_u$ for all $u\ge t$, which allows us to write
$$\mathbb{E}\inf_{s\ge t}Z_s \le \inf_{u\ge t}\mathbb{E}Z_u.$$
This implies the assertion.

Proof of the Elementary renewal theorem: Let $c:=\frac{1}{\mathbb{E}W_1}$. From the SLLN for renewal processes we conclude
$$c=\lim_{t\to\infty}\frac{N(t)}{t}=\lim_{t\to\infty}\inf_{s\ge t}\frac{N(s)}{s}\quad\text{a.s.}$$
Since $Z_t:=\inf_{s\ge t}\frac{N(s)}{s}$ fulfils the requirements of the Lemma, we have
$$c=\mathbb{E}\liminf_{t\to\infty}\inf_{s\ge t}\frac{N(s)}{s}\le\liminf_{t\to\infty}\mathbb{E}\inf_{s\ge t}\frac{N(s)}{s}\le\liminf_{t\to\infty}\mathbb{E}\frac{N(t)}{t}.$$
We only have to show that
$$\limsup_{t\to\infty}\mathbb{E}\frac{N(t)}{t}\le c.$$
For $b>0$ we define the truncated waiting times
$$W_i^b:=W_i\wedge b=\min(W_i,b)\quad\text{and}\quad T_i^b:=W_1^b+\dots+W_i^b\le T_i,\quad i=1,2,\dots
$$
Since $N^{(b)}(t):=\#\{i\ge 1: T_i^b\le t\}\ge N(t)$, we obtain
$$\limsup_{t\to\infty}\frac{\mathbb{E}N(t)}{t}\le\limsup_{t\to\infty}\frac{\mathbb{E}N^{(b)}(t)}{t}.$$
Assume we could show that
$$\limsup_{t\to\infty}\frac{\mathbb{E}N^{(b)}(t)}{t}\le\frac{1}{\mathbb{E}W_1^b}. \quad (7)$$
Then $\mathbb{E}W_1^b\to\mathbb{E}W_1$ for $b\to\infty$ implies
$$\limsup_{t\to\infty}\frac{\mathbb{E}N(t)}{t}\le\frac{1}{\mathbb{E}W_1}=c.$$
We start showing (7). Let
$$\tau(\omega):=N^{(b)}(t)(\omega)+1$$
and
$$\mathcal{F}_n:=\sigma(W_1,\dots,W_n),\ n\ge 1,\qquad \mathcal{F}_0:=\{\emptyset,\Omega\}.$$
The random variable $\tau$ is a stopping time w.r.t. $(\mathcal{F}_n)$, i.e.
$$\{\tau=n\}=\{N^{(b)}(t)+1=n\}\in\mathcal{F}_n.$$
Hence it follows by Wald's identity that
$$\mathbb{E}T^b_{N^{(b)}(t)+1}=\mathbb{E}\sum_{i=1}^{\tau}W_i^b=\mathbb{E}\tau\,\mathbb{E}W_1^b.$$
This implies
$$\limsup_{t\to\infty}\frac{\mathbb{E}N^{(b)}(t)}{t} \le \limsup_{t\to\infty}\frac{\mathbb{E}N^{(b)}(t)+1}{t} = \limsup_{t\to\infty}\frac{\mathbb{E}\tau}{t} = \limsup_{t\to\infty}\frac{\mathbb{E}T^b_{N^{(b)}(t)+1}}{t\,\mathbb{E}W_1^b} \le \limsup_{t\to\infty}\frac{t+b}{t\,\mathbb{E}W_1^b} = \frac{1}{\mathbb{E}W_1^b},$$
where we used
$$T^b_{N^{(b)}(t)+1}=T^b_{N^{(b)}(t)}+W^b_{N^{(b)}(t)+1}\le t+b.$$
2.3 The inhomogeneous Poisson process and the mixed Poisson process

Definition. Let $\mu:[0,\infty)\to[0,\infty)$ be a function such that

1. $\mu(0)=0$,
2. $\mu$ is non-decreasing, i.e. $s\le t \Rightarrow \mu(s)\le\mu(t)$,
3. $\mu$ is càdlàg.

Then the function $\mu$ is called a mean-value function.

[Figures: examples of mean-value functions, $\mu(t)=t$ and a continuous $\mu(t)$.]
[Figure: a càdlàg mean-value function $\mu(t)$.]

Definition (Inhomogeneous Poisson process). A stochastic process $N=(N(t))_{t\in[0,\infty)}$ is an inhomogeneous Poisson process if and only if it has the following properties:

(P1) $N(0)=0$ a.s.

(P2) $N$ has independent increments, i.e. if $0\le t_0<t_1<\dots<t_n$ ($n\ge 1$), it holds that
$$N(t_n)-N(t_{n-1}),\ N(t_{n-1})-N(t_{n-2}),\ \dots,\ N(t_1)-N(t_0)$$
are independent.

(P3$_{\mathrm{inh}}$) There exists a mean-value function $\mu$ such that for $0\le s<t$
$$\mathbb{P}(N(t)-N(s)=m) = e^{-(\mu(t)-\mu(s))}\frac{(\mu(t)-\mu(s))^m}{m!},\quad m=0,1,2,\dots$$

(P4) The paths of $N$ are càdlàg a.s.

Theorem (Time change for the Poisson process). If $\mu$ denotes the mean-value function of an inhomogeneous Poisson process $N$ and $\tilde N$ is a homogeneous Poisson process with $\lambda=1$, then

(1) $(N(t))_{t\in[0,\infty)} \stackrel{d}{=} (\tilde N(\mu(t)))_{t\in[0,\infty)}$;

(2) if $\mu$ is continuous, increasing and $\lim_{t\to\infty}\mu(t)=\infty$, then $(N(\mu^{-1}(t)))_{t\in[0,\infty)} \stackrel{d}{=} (\tilde N(t))_{t\in[0,\infty)}$.
Here $\mu^{-1}(t)$ denotes the inverse function of $\mu$, and $f\stackrel{d}{=}g$ means that the two random variables $f$ and $g$ have the same distribution (but one cannot conclude that $f(\omega)=g(\omega)$ for $\omega\in\Omega$).

Definition (Mixed Poisson process). Let $\hat N$ be a homogeneous Poisson process with intensity $\lambda=1$ and let $\mu$ be a mean-value function. Let $\theta:\Omega\to\mathbb{R}$ be a random variable such that $\theta>0$ a.s. and $\theta$ is independent of $\hat N$. Then
$$N(t) := \hat N(\theta\mu(t)),\quad t\ge 0,$$
is a mixed Poisson process with mixing variable $\theta$.

Proposition. It holds
$$\mathrm{var}(\hat N(\theta\mu(t))) = \mathbb{E}\hat N(\theta\mu(t))\left(1+\frac{\mathrm{var}(\theta)}{\mathbb{E}\theta}\,\mu(t)\right).$$

Proof: We recall that $\mathbb{E}\hat N(t)=\mathrm{var}(\hat N(t))=t$ and therefore $\mathbb{E}\hat N(t)^2=t+t^2$. We conclude
$$\mathrm{var}(\hat N(\theta\mu(t))) = \mathbb{E}\hat N(\theta\mu(t))^2 - \big[\mathbb{E}\hat N(\theta\mu(t))\big]^2 = \mathbb{E}\big(\theta\mu(t)+\theta^2\mu(t)^2\big) - (\mathbb{E}\theta\,\mu(t))^2 = \mu(t)\big(\mathbb{E}\theta + \mathrm{var}(\theta)\,\mu(t)\big).$$

The property $\mathrm{var}(N(t))>\mathbb{E}N(t)$ is called overdispersion. If $N$ is an inhomogeneous Poisson process, then
$$\mathrm{var}(N(t))=\mathbb{E}N(t).$$
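The overdispersion of the mixed Poisson process can be checked exactly for a two-point mixing variable; this sketch (with illustrative values $\theta\in\{1,3\}$, each with probability $1/2$, and $\mu(t)=t$) computes $\mathrm{var}(N(t))$ by conditioning on $\theta$ and compares it with the formula of the proposition:

```python
# theta takes values 1 and 3 with probability 1/2 each; mu(t) = t with t = 2.
thetas, probs = [1.0, 3.0], [0.5, 0.5]
mu = 2.0

e_theta = sum(p * th for th, p in zip(thetas, probs))
var_theta = sum(p * th ** 2 for th, p in zip(thetas, probs)) - e_theta ** 2

# Exact moments of N(t) = \hat N(theta * mu) by conditioning on theta,
# using E \hat N(s) = s and E \hat N(s)^2 = s + s^2.
mean_N = sum(p * th * mu for th, p in zip(thetas, probs))
second_N = sum(p * (th * mu + (th * mu) ** 2) for th, p in zip(thetas, probs))
var_N = second_N - mean_N ** 2

# Proposition: var N(t) = E N(t) * (1 + var(theta)/E theta * mu(t)).
formula = mean_N * (1.0 + var_theta / e_theta * mu)
print(var_N, formula, mean_N)
```

Here $\mathrm{var}(N(t))=8$ while $\mathbb{E}N(t)=4$, so the overdispersion $\mathrm{var}(N(t))>\mathbb{E}N(t)$ is visible directly.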
3. The total claim amount process S(t)

3.1 The Cramér-Lundberg model

Definition. The Cramér-Lundberg model considers the following setting:

1. Claims happen at the claim arrival times $0<T_1<T_2<\dots$ of a Poisson process $N(t)=\#\{i\ge 1: T_i\le t\}$, $t\ge 0$.
2. At time $T_i$ the claim size $X_i$ happens, and it holds that the sequence $(X_i)_{i=1}^\infty$ is i.i.d. with $X_i\ge 0$.
3. The sequences $(T_i)_{i=1}^\infty$ and $(X_i)_{i=1}^\infty$ are independent.

Remark: Are $N$ and $(X_i)_{i=1}^\infty$ independent?

3.2 The renewal model

Definition. The renewal model (or Sparre-Andersen model) considers the following setting:

1. Claims happen at the claim arrival times $0\le T_1\le T_2\le\dots$ of a renewal process $N(t)=\#\{i\ge 1: T_i\le t\}$, $t\ge 0$.
2. At time $T_i$ the claim size $X_i$ happens, and it holds that the sequence $(X_i)_{i=1}^\infty$ is i.i.d. with $X_i\ge 0$.
3. The sequences $(T_i)_{i=1}^\infty$ and $(X_i)_{i=1}^\infty$ are independent.

3.3 Properties of the total claim amount process S(t)

Definition. The total claim amount process is defined as
$$S(t) := \sum_{i=1}^{N(t)} X_i,\quad t\ge 0.$$

The insurance company needs information about $S(t)$ in order to determine a premium which covers the losses represented by $S(t)$. In general, the distribution of $S(t)$, i.e. $\mathbb{P}(\{\omega\in\Omega : S(t,\omega)\le x\})$, $x\ge 0$, can only be approximated by numerical methods or simulations, while $\mathbb{E}S(t)$ and $\mathrm{var}(S(t))$ are easy to compute exactly. One can establish principles which use only $\mathbb{E}S(t)$ and $\mathrm{var}(S(t))$ to calculate the premium. This will be done in Chapter 4.

Proposition. (a) For the Cramér-Lundberg model it holds

(i) $\mathbb{E}S(t)=\lambda t\,\mathbb{E}X_1$,
(ii) $\mathrm{var}(S(t))=\lambda t\,\mathbb{E}X_1^2$.

(b) Assume the renewal model. Let $\mathbb{E}W_1=\frac{1}{\lambda}\in(0,\infty)$ and $\mathbb{E}X_1<\infty$.

(i) Then $\lim_{t\to\infty}\frac{\mathbb{E}S(t)}{t}=\lambda\,\mathbb{E}X_1$.
(ii) If $\mathrm{var}(W_1)<\infty$ and $\mathrm{var}(X_1)<\infty$, then
$$\lim_{t\to\infty}\frac{\mathrm{var}(S(t))}{t} = \lambda\big(\mathrm{var}(X_1)+\mathrm{var}(W_1)\,\lambda^2(\mathbb{E}X_1)^2\big).$$
Proof: Since
$$1 = 1\!\!1_\Omega(\omega) = \sum_{k=0}^\infty 1\!\!1_{\{N(t)=k\}}(\omega),$$
we get by direct computation
$$\mathbb{E}S(t) = \mathbb{E}\sum_{i=1}^{N(t)}X_i = \sum_{k=0}^\infty \mathbb{E}\Big(\Big(\sum_{i=1}^k X_i\Big)1\!\!1_{\{N(t)=k\}}\Big) = \sum_{k=0}^\infty \underbrace{\mathbb{E}(X_1+\dots+X_k)}_{=k\,\mathbb{E}X_1}\,\underbrace{\mathbb{E}1\!\!1_{\{N(t)=k\}}}_{=\mathbb{P}(N(t)=k)} = \mathbb{E}X_1\sum_{k=0}^\infty k\,\mathbb{P}(N(t)=k) = \mathbb{E}X_1\,\mathbb{E}N(t).$$
In the Cramér-Lundberg model we have $\mathbb{E}N(t)=\lambda t$. For the general case we use the Elementary Renewal Theorem to get the assertion. We continue with
$$\mathbb{E}S(t)^2 = \mathbb{E}\Big(\sum_{i=1}^{N(t)}X_i\Big)^2 = \sum_{k=0}^\infty\mathbb{E}\Big(\Big(\sum_{i=1}^k X_i\Big)^2 1\!\!1_{\{N(t)=k\}}\Big) = \sum_{k=0}^\infty\sum_{i,j=1}^k\mathbb{E}\big(X_iX_j\, 1\!\!1_{\{N(t)=k\}}\big)$$
$$= \mathbb{E}X_1^2\sum_{k=0}^\infty k\,\mathbb{P}(N(t)=k) + (\mathbb{E}X_1)^2\sum_{k=1}^\infty k(k-1)\,\mathbb{P}(N(t)=k) = \mathbb{E}X_1^2\,\mathbb{E}N(t) + (\mathbb{E}X_1)^2\big(\mathbb{E}[N(t)^2]-\mathbb{E}N(t)\big)$$
$$= \mathrm{var}(X_1)\,\mathbb{E}N(t)+(\mathbb{E}X_1)^2\,\mathbb{E}[N(t)^2].$$
It follows that
$$\mathrm{var}(S(t)) = \mathbb{E}S(t)^2-(\mathbb{E}S(t))^2 = \mathbb{E}S(t)^2-(\mathbb{E}X_1)^2(\mathbb{E}N(t))^2 = \mathrm{var}(X_1)\,\mathbb{E}N(t)+(\mathbb{E}X_1)^2\,\mathrm{var}(N(t)).$$
For the Cramér-Lundberg model it holds $\mathbb{E}N(t)=\mathrm{var}(N(t))=\lambda t$, hence we have
$$\mathrm{var}(S(t)) = \lambda t\big(\mathrm{var}(X_1)+(\mathbb{E}X_1)^2\big) = \lambda t\,\mathbb{E}X_1^2.$$
For the renewal model we get
$$\lim_{t\to\infty}\frac{\mathrm{var}(X_1)\,\mathbb{E}N(t)}{t}=\mathrm{var}(X_1)\,\lambda.$$
The relation
$$\lim_{t\to\infty}\frac{\mathrm{var}(N(t))}{t}=\frac{\mathrm{var}(W_1)}{(\mathbb{E}W_1)^3}$$
is shown in [5, Theorem 2.5.2].

Theorem. The Strong Law of Large Numbers (SLLN) and the Central Limit Theorem (CLT) for $(S(t))$ in the renewal model can be stated as follows:

(i) SLLN for $(S(t))$: If $\mathbb{E}W_1=\frac{1}{\lambda}<\infty$ and $\mathbb{E}X_1<\infty$, then
$$\lim_{t\to\infty}\frac{S(t)}{t}=\lambda\,\mathbb{E}X_1\quad\text{a.s.}$$

(ii) CLT for $(S(t))$: If $\mathrm{var}(W_1)<\infty$ and $\mathrm{var}(X_1)<\infty$, then
$$\sup_{x\in\mathbb{R}}\left|\mathbb{P}\left(\frac{S(t)-\mathbb{E}S(t)}{\sqrt{\mathrm{var}(S(t))}}\le x\right)-\Phi(x)\right|\longrightarrow 0 \quad\text{as } t\to\infty,$$
where $\Phi$ is the distribution function of the standard normal distribution,
$$\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^x e^{-\frac{y^2}{2}}\,dy.$$

Proof: (i) We follow the proof in [6]. We have shown that
$$\lim_{t\to\infty}\frac{N(t)}{t}=\lambda\quad\text{a.s.}$$
and it holds $\lim_{t\to\infty}N(t)=\infty$ a.s. Because of $S(t)=X_1+X_2+\dots+X_{N(t)}$ and, by the SLLN,
$$\lim_{n\to\infty}\frac{X_1+\dots+X_n}{n}=\mathbb{E}X_1\quad\text{a.s.},$$
we get
$$\lim_{t\to\infty}\frac{S(t)}{t}=\lim_{t\to\infty}\frac{S(t)}{N(t)}\cdot\frac{N(t)}{t}=\lambda\,\mathbb{E}X_1\quad\text{a.s.}$$

(ii) See [4].
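The moment formulas $\mathbb{E}S(t)=\lambda t\,\mathbb{E}X_1$ and $\mathrm{var}(S(t))=\lambda t\,\mathbb{E}X_1^2$ of the Cramér-Lundberg model can be illustrated by a simulation sketch; the parameter choices below are arbitrary:

```python
import random

rng = random.Random(2)
lam, t = 2.0, 5.0          # Poisson claim-arrival intensity and time horizon
alpha = 1.0                # claim sizes X_i ~ Exp(alpha): EX_1 = 1, EX_1^2 = 2

def total_claim(rng):
    """One sample of S(t) = sum_{i <= N(t)} X_i in the Cramer-Lundberg model."""
    arrival = rng.expovariate(lam)
    s = 0.0
    while arrival <= t:
        s += rng.expovariate(alpha)
        arrival += rng.expovariate(lam)
    return s

runs = 40000
samples = [total_claim(rng) for _ in range(runs)]
mean_s = sum(samples) / runs
var_s = sum((x - mean_s) ** 2 for x in samples) / runs
# Proposition: ES(t) = lam*t*EX_1 = 10 and var S(t) = lam*t*EX_1^2 = 20.
print(mean_s, var_s)
```

The empirical mean and variance should be close to $10$ and $20$, respectively.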
4. Classical premium calculation principles

The standard problem for the insurance companies is to determine the amount of premium such that the losses $S(t)$ are covered. On the other hand, the price should be low enough to be competitive and attract customers. A first approximation of $S(t)$ is given by $\mathbb{E}S(t)$. For the premium income $p(t)$ this implies:

- $p(t)<\mathbb{E}S(t)$: the insurance company loses on average,
- $p(t)>\mathbb{E}S(t)$: the insurance company gains on average.

A reasonable solution would be $p(t)=(1+\rho)\,\mathbb{E}S(t)$, where $\rho>0$ is the safety loading. The Proposition of Section 3.3 tells us that in the renewal model with $\mathbb{E}W_1=\frac{1}{\lambda}$ it holds $\mathbb{E}S(t)\approx\lambda t\,\mathbb{E}X_1$ for large $t$.

4.1 Used principles

(1) The net principle
$$p_{\mathrm{NET}}(t) = \mathbb{E}S(t)$$
defines the premium to be a fair market premium. This, however, can be very risky for the company, which one can conclude from the Central Limit Theorem for $S(t)$.

(2) The expected value principle
$$p_{\mathrm{EV}}(t) = (1+\rho)\,\mathbb{E}S(t),$$
which is motivated by the Strong Law of Large Numbers.
(3) The variance principle
$$p_{\mathrm{VAR}}(t) = \mathbb{E}S(t)+\alpha\,\mathrm{var}(S(t)),\quad \alpha>0.$$
In the renewal model this principle is asymptotically the same as $p_{\mathrm{EV}}(t)$, since by the Proposition of Section 3.3 we have that
$$\lim_{t\to\infty}\frac{p_{\mathrm{EV}}(t)-p_{\mathrm{VAR}}(t)}{t}$$
is a constant. This means that $\alpha$ plays the role of a safety loading $\rho$.

(4) The standard deviation principle
$$p_{\mathrm{SD}}(t) = \mathbb{E}S(t)+\alpha\sqrt{\mathrm{var}(S(t))},\quad \alpha>0.$$
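A hypothetical numerical example (the input values are illustrative, not from the notes) shows how the four principles compare for a Cramér-Lundberg portfolio, using $\mathbb{E}S(t)=\lambda t\,\mathbb{E}X_1$ and $\mathrm{var}(S(t))=\lambda t\,\mathbb{E}X_1^2$:

```python
import math

# Illustrative Cramer-Lundberg inputs:
lam, t = 2.0, 1.0        # claim intensity and a one-year horizon
ex1, ex1_sq = 1.5, 4.0   # EX_1 and EX_1^2 of the claim size distribution
rho, alpha = 0.1, 0.05   # safety loading and loading factor

es = lam * t * ex1               # ES(t) = 3.0
var_s = lam * t * ex1_sq         # var S(t) = 8.0 in the CL model

p_net = es                            # net principle
p_ev = (1.0 + rho) * es               # expected value principle
p_var = es + alpha * var_s            # variance principle
p_sd = es + alpha * math.sqrt(var_s)  # standard deviation principle
print(p_net, p_ev, p_var, p_sd)
```

With these inputs $p_{\mathrm{NET}}=3.0$, $p_{\mathrm{EV}}=3.3$, $p_{\mathrm{VAR}}=3.4$ and $p_{\mathrm{SD}}\approx 3.14$; every loaded premium exceeds the risky net premium.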
5. Claim size distributions

Which distributions should one choose to model the claim sizes $(X_i)$? If one analyzes data of claim sizes that have happened in the past, for example by a histogram or a QQ-plot, it turns out that the distribution is often heavy-tailed.

Definition. Let $F(x)$ be the distribution function of $X_1$, i.e.
$$F(x)=\mathbb{P}(\{\omega\in\Omega : X_1(\omega)\le x\}).$$
$F$ is called light-tailed if
$$\limsup_{x\to\infty}\frac{1-F(x)}{e^{-\lambda x}}<\infty \quad\text{for some }\lambda>0.$$
$F$ is called heavy-tailed if
$$\liminf_{x\to\infty}\frac{1-F(x)}{e^{-\lambda x}}>0 \quad\text{for all }\lambda>0.$$

5.2 Examples

(1) The exponential distribution $\mathrm{Exp}(\alpha)$ is light-tailed for all $\alpha>0$, since the distribution function is $F(x)=1-e^{-\alpha x}$, $x>0$, so
$$\frac{1-F(x)}{e^{-\lambda x}}=\frac{e^{-\alpha x}}{e^{-\lambda x}}=e^{(\lambda-\alpha)x},$$
and by choosing $0<\lambda<\alpha$,
$$\sup_{x\ge n}e^{(\lambda-\alpha)x}=e^{(\lambda-\alpha)n}\longrightarrow 0 \quad\text{as } n\to\infty.$$
(2) The Pareto distribution is heavy-tailed. The distribution function is
$$F(x)=1-\frac{\kappa^\alpha}{(\kappa+x)^\alpha},\quad x\ge 0,\ \alpha>0,\ \kappa>0,$$
or
$$F(x)=1-\frac{b^a}{x^a},\quad x\ge b>0,\ a>0.$$

5.3 The QQ-plot

A quantile is the inverse of the distribution function. We take the left inverse if the distribution function is not strictly increasing and continuous, which is defined by
$$F^{\leftarrow}(t) := \inf\{x\in\mathbb{R} : F(x)\ge t\},\quad 0<t<1,$$
and the empirical distribution function of the data $X_1,\dots,X_n$ as
$$F_n(x) := \frac{1}{n}\sum_{i=1}^n 1\!\!1_{(-\infty,x]}(X_i),\quad x\in\mathbb{R}.$$
It can be shown that if $X_1\sim F$ and $(X_i)_{i=1}^\infty$ is i.i.d., then
$$\lim_{n\to\infty}F_n(t)=F(t)$$
almost surely for all continuity points $t$ of $F$. Hence, if $X_1\sim F$, then the plot of the points $(F_n^{\leftarrow}(t), F^{\leftarrow}(t))$ should give almost the straight line $y=x$.
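The light/heavy tail dichotomy of the examples above can be observed numerically: the ratio $(1-F(x))/e^{-\lambda x}$ blows up for the Pareto distribution and vanishes for a lighter exponential distribution (all parameter values below are illustrative):

```python
import math

def pareto_tail(x, alpha=2.0, kappa=1.0):
    """1 - F(x) for the Pareto distribution of Section 5.2."""
    return (kappa / (kappa + x)) ** alpha

def exp_tail(x, a=3.0):
    """1 - F(x) for the Exp(a) distribution."""
    return math.exp(-a * x)

lam = 1.0  # the heavy-tail statement holds for every lambda > 0
ratios_pareto = [pareto_tail(x) / math.exp(-lam * x) for x in (10, 20, 40)]
ratios_exp = [exp_tail(x) / math.exp(-lam * x) for x in (10, 20, 40)]
print(ratios_pareto, ratios_exp)
```

The Pareto ratios grow without bound while the exponential ratios ($=e^{-2x}$ here) decay to zero, matching the definition of heavy and light tails.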
[Figure: a distribution function $F(x)$ and its left inverse.]
6. About modern premium calculation principles

6.1 The exponential principle

The exponential principle is defined as
$$p_{\exp}(t) := \frac{1}{\delta}\log\mathbb{E}e^{\delta S(t)}$$
for some $\delta>0$, where $\delta$ is the risk aversion constant. The function $p_{\exp}(t)$ is defined via the so-called utility theory.

6.2 The quantile principle

Suppose $F(x)=\mathbb{P}(\{\omega : S(t)\le x\})$, $x\in\mathbb{R}$, is the distribution function of $S(t)$. In Section 5.3 we defined the left inverse of the distribution function $F$ by
$$F^{\leftarrow}(y) := \inf\{x\in\mathbb{R} : F(x)\ge y\},\quad 0<y<1.$$
Then the $(1-\varepsilon)$-quantile principle is defined as
$$p_{\mathrm{quant}}(t) = F^{\leftarrow}(1-\varepsilon),$$
where the expression $F^{\leftarrow}(1-\varepsilon)$ converges for $\varepsilon\downarrow 0$ to the probable maximal loss. This setting is related to the theory of Value at Risk.

6.3 The Esscher principle

The Esscher principle is defined as
$$p_{\mathrm{Ess}}(t) = \frac{\mathbb{E}\,S(t)\,e^{\delta S(t)}}{\mathbb{E}\,e^{\delta S(t)}},\quad \delta>0.$$
In all the above principles an expected value $\mathbb{E}g(S(t))$ needs to be computed for a certain function $g(x)$ in order to compute $p(t)$. This means it is not enough to know $\mathbb{E}S(t)$ and $\mathrm{var}(S(t))$; the distribution of $S(t)$ is needed as well.
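For a compound Poisson $S(t)$ with $\mathrm{Exp}(a)$ claims the expectation in the exponential principle is available in closed form: $\mathbb{E}e^{\delta S(t)}=\exp\big(\lambda t\,(m_{X_1}(\delta)-1)\big)$ with $m_{X_1}(\delta)=a/(a-\delta)$ for $\delta<a$. The sketch below (illustrative parameters, not from the notes) compares this closed form with a Monte Carlo estimate:

```python
import math
import random

rng = random.Random(3)
lam, t = 1.0, 1.0    # Poisson intensity and horizon
a = 2.0              # claim sizes X_i ~ Exp(a)
delta = 0.5          # risk aversion; delta < a is needed for Ee^{delta X_1} < inf

# Closed form: p_exp = (1/delta) * log Ee^{delta S} = lam*t*(m_X(delta) - 1)/delta.
m_x = a / (a - delta)
p_exp_exact = lam * t * (m_x - 1.0) / delta

def sample_S(rng):
    """One Cramer-Lundberg sample of S(t)."""
    arrival = rng.expovariate(lam)
    s = 0.0
    while arrival <= t:
        s += rng.expovariate(a)
        arrival += rng.expovariate(lam)
    return s

runs = 40000
mc = sum(math.exp(delta * sample_S(rng)) for _ in range(runs)) / runs
p_exp_mc = math.log(mc) / delta
print(p_exp_exact, p_exp_mc)
```

With these values $p_{\exp}(t)=2/3$ exactly, and the Monte Carlo estimate should land nearby; note that for $\delta$ close to $a$ the estimator's variance blows up.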
7. The distribution of the total claim amount S(t)

Theorem. Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space.

(a) The distribution of a random variable $f:\Omega\to\mathbb{R}$ can be uniquely described by its distribution function $F:\mathbb{R}\to[0,1]$,
$$F(x) := \mathbb{P}(\{\omega\in\Omega : f(\omega)\le x\}),\quad x\in\mathbb{R}.$$

(b) Especially, it holds for $g:\mathbb{R}\to\mathbb{R}$ such that $g^{-1}(B)\in\mathcal{B}(\mathbb{R})$ for all $B\in\mathcal{B}(\mathbb{R})$, that
$$\mathbb{E}g(f)=\int_{\mathbb{R}}g(x)\,dF(x)$$
(in the sense that if either side of this expression exists, so does the other, and then they are equal; see [7]).

(c) The distribution of $f$ can also be determined by its characteristic function (see [9])
$$\varphi_f(u) := \mathbb{E}e^{iuf},\quad u\in\mathbb{R},$$
or by its moment generating function
$$m_f(h) := \mathbb{E}e^{hf},\quad h\in(-h_0,h_0),$$
provided that $\mathbb{E}e^{h_0|f|}<\infty$ for some $h_0>0$.

Remember: for independent random variables $f$ and $g$ it holds
$$\varphi_{f+g}(u)=\varphi_f(u)\,\varphi_g(u).$$
7.2 Mixture distributions

Definition (Mixture distributions). Let $F_i$, $i=1,\dots,n$, be distribution functions and $p_i\in[0,1]$ such that $\sum_{i=1}^n p_i=1$. Then
$$G(x)=p_1F_1(x)+\dots+p_nF_n(x),\quad x\in\mathbb{R},$$
is called the mixture distribution of $F_1,\dots,F_n$.

Lemma. Let $f_1,\dots,f_n$ be random variables with distribution functions $F_1,\dots,F_n$, respectively. Assume that $J:\Omega\to\{1,\dots,n\}$ is independent from $f_1,\dots,f_n$ and $\mathbb{P}(J=i)=p_i$. Then the random variable
$$Z=1\!\!1_{\{J=1\}}f_1+\dots+1\!\!1_{\{J=n\}}f_n$$
has the mixture distribution function $G$.

Definition (Compound Poisson random variable). Let $N_\lambda\sim\mathrm{Pois}(\lambda)$ and let $(X_i)_{i=1}^\infty$ be i.i.d. random variables, independent from $N_\lambda$. Then
$$Z := \sum_{i=1}^{N_\lambda}X_i$$
is called a compound Poisson random variable.

Proposition. The sum of independent compound Poisson random variables is a compound Poisson random variable: let $S_1,\dots,S_n$, given by
$$S_k=\sum_{j=1}^{N_k}X_j^{(k)},\quad k=1,\dots,n,$$
be independent compound Poisson random variables such that $N_k\sim\mathrm{Pois}(\lambda_k)$, $\lambda_k>0$, $(X_j^{(k)})_{j\ge 1}$ i.i.d., and $N_k$ is independent from $(X_j^{(k)})_{j\ge 1}$ for all $k=1,\dots,n$. Then $S:=S_1+\dots+S_n$ is a compound Poisson random variable with representation
$$S\stackrel{d}{=}\sum_{l=1}^{N_\lambda}Y_l,\qquad N_\lambda\sim\mathrm{Pois}(\lambda),\quad \lambda=\lambda_1+\dots+\lambda_n,$$
where $(Y_l)_{l\ge 1}$ is an i.i.d. sequence, independent from $N_\lambda$, and
$$Y_1\stackrel{d}{=}\sum_{k=1}^n 1\!\!1_{\{J=k\}}X_1^{(k)},\qquad \mathbb{P}(J=k)=\frac{\lambda_k}{\lambda},$$
with $J$ independent of $(X_1^{(k)})_k$.

Proof: From the Theorem of Section 7.1 we know that it is sufficient to show that $S$ and $\sum_{l=1}^{N_\lambda}Y_l$ have the same characteristic function. We start with the characteristic function of $S_k$:
$$\varphi_{S_k}(u) = \mathbb{E}e^{iuS_k} = \mathbb{E}e^{iu\sum_{j=1}^{N_k}X_j^{(k)}} = \sum_{m=0}^\infty \mathbb{E}\Big(\underbrace{e^{iuX_1^{(k)}}\cdots e^{iuX_m^{(k)}}\,1\!\!1_{\{N_k=m\}}}_{\text{all of these are independent}}\Big) = \sum_{m=0}^\infty\Big(\mathbb{E}e^{iuX_1^{(k)}}\Big)^m\,\mathbb{P}(N_k=m)$$
$$= \sum_{m=0}^\infty\big(\varphi_{X_1^{(k)}}(u)\big)^m\,e^{-\lambda_k}\frac{\lambda_k^m}{m!} = e^{-\lambda_k\big(1-\varphi_{X_1^{(k)}}(u)\big)}.$$
Then
$$\varphi_S(u) = \mathbb{E}e^{iu(S_1+\dots+S_n)} = \mathbb{E}e^{iuS_1}\cdots\mathbb{E}e^{iuS_n} = \varphi_{S_1}(u)\cdots\varphi_{S_n}(u) = e^{-\lambda_1\big(1-\varphi_{X_1^{(1)}}(u)\big)}\cdots e^{-\lambda_n\big(1-\varphi_{X_1^{(n)}}(u)\big)}$$
$$= \exp\Big(-\lambda\Big(1-\sum_{k=1}^n\frac{\lambda_k}{\lambda}\,\varphi_{X_1^{(k)}}(u)\Big)\Big).$$
Let $\xi=\sum_{l=1}^{N_\lambda}Y_l$. Then, by the same computation as we have done for $\varphi_{S_k}(u)$, we get
$$\varphi_\xi(u)=\mathbb{E}e^{iu\xi}=e^{-\lambda(1-\varphi_{Y_1}(u))}.$$
Finally,
$$\varphi_{Y_1}(u) = \mathbb{E}e^{iu\sum_{k=1}^n 1\!\!1_{\{J=k\}}X_1^{(k)}} = \sum_{l=1}^n\mathbb{E}\Big(e^{iu\sum_{k=1}^n 1\!\!1_{\{J=k\}}X_1^{(k)}}\,1\!\!1_{\{J=l\}}\Big) = \sum_{l=1}^n\mathbb{E}\Big(e^{iuX_1^{(l)}}\,1\!\!1_{\{J=l\}}\Big) = \sum_{l=1}^n\varphi_{X_1^{(l)}}(u)\,\frac{\lambda_l}{\lambda}.$$

7.3 Applications in insurance

First application

Assume that the claims arrive according to an inhomogeneous Poisson process, i.e. $N(t)-N(s)\sim\mathrm{Pois}(\mu(t)-\mu(s))$. The total claim amount in year $l$ is
$$S_l=\sum_{j=N(l-1)+1}^{N(l)}X_j^{(l)},\quad l=1,\dots,n.$$
Now it can be seen that
$$S_l\stackrel{d}{=}\sum_{j=1}^{N(l)-N(l-1)}X_j^{(l)},\quad l=1,\dots,n,$$
and $S_l$ is compound Poisson distributed. The Proposition of Section 7.2 implies that the total claim amount of the first $n$ years is again compound Poisson distributed,
$$S(n) := S_1+\dots+S_n \stackrel{d}{=} \sum_{i=1}^{N_\lambda}Y_i,$$
where
$$N_\lambda\sim\mathrm{Pois}(\mu(n)),\qquad Y_i\stackrel{d}{=}1\!\!1_{\{J=1\}}X_1^{(1)}+\dots+1\!\!1_{\{J=n\}}X_1^{(n)},\qquad \mathbb{P}(J=i)=\frac{\mu(i)-\mu(i-1)}{\mu(n)}.$$
Hence the total claim amount $S(n)$ in the first $n$ years (with possibly different claim size distributions in each year) has a representation as a compound Poisson random variable.

Second application

We can interpret the random variables
$$S_i=\sum_{j=1}^{N_i}X_j^{(i)},\qquad N_i\sim\mathrm{Pois}(\lambda_i),\quad i=1,\dots,n,$$
as the total claim amounts of $n$ independent portfolios for the same fixed period of time. The $(X_j^{(i)})_{j\ge 1}$ in the $i$-th portfolio are i.i.d., but the distributions may differ from portfolio to portfolio (one particular type of car insurance, for example). Then
$$S(n)=S_1+\dots+S_n\stackrel{d}{=}\sum_{i=1}^{N_\lambda}Y_i$$
is again compound Poisson distributed with
$$N_\lambda\sim\mathrm{Pois}(\lambda_1+\dots+\lambda_n),\qquad Y_i\stackrel{d}{=}1\!\!1_{\{J=1\}}X_1^{(1)}+\dots+1\!\!1_{\{J=n\}}X_1^{(n)},\qquad \mathbb{P}(J=l)=\frac{\lambda_l}{\lambda}.$$
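The aggregation result can be verified exactly on a small example via probability generating functions: for claim sizes on $\{0,1,2,\dots\}$ the characteristic-function identity of the proof becomes $\prod_k \exp(-\lambda_k(1-f_k(s)))=\exp(-\lambda(1-f_Y(s)))$ for the generating functions $f_k$. The two portfolios below are an illustrative choice:

```python
import math

# Two portfolios; the claim-size distributions are illustrative.
lam1, lam2 = 2.0, 3.0
f1 = {1: 0.5, 2: 0.5}            # distribution of X^(1) on {1, 2}
f2 = {1: 0.2, 2: 0.3, 3: 0.5}    # distribution of X^(2) on {1, 2, 3}

def pgf(dist, s):
    """Probability generating function sum_k P(X = k) s^k."""
    return sum(p * s ** k for k, p in dist.items())

lam = lam1 + lam2
# Mixture claim-size distribution of Y_1: P(J = k) = lam_k / lam.
fY = lambda s: (lam1 * pgf(f1, s) + lam2 * pgf(f2, s)) / lam

for s in (0.1, 0.5, 0.9):
    lhs = math.exp(-lam1 * (1 - pgf(f1, s))) * math.exp(-lam2 * (1 - pgf(f2, s)))
    rhs = math.exp(-lam * (1 - fY(s)))
    assert abs(lhs - rhs) < 1e-12
print("compound Poisson aggregation identity verified")
```

The identity holds for every $s$ because $\lambda_1(1-f_1(s))+\lambda_2(1-f_2(s))=\lambda(1-f_Y(s))$ by the definition of $f_Y$.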
7.4 The Panjer recursion: an exact numerical procedure to calculate $F_{S(t)}$

Let
$$S=\sum_{i=1}^N X_i,$$
where $N:\Omega\to\{0,1,\dots\}$, $(X_i)_{i\ge 1}$ is i.i.d., and $N$ and $(X_i)$ are independent. Then, setting $S_0:=0$ and $S_n:=X_1+\dots+X_n$, $n\ge 1$, yields
$$\mathbb{P}(S\le x) = \sum_{n=0}^\infty\mathbb{P}(S\le x,\,N=n) = \sum_{n=0}^\infty\mathbb{P}(S\le x\,|\,N=n)\,\mathbb{P}(N=n) = \sum_{n=0}^\infty\mathbb{P}(S_n\le x)\,\mathbb{P}(N=n) = \sum_{n=0}^\infty F_{X_1}^{*n}(x)\,\mathbb{P}(N=n),$$
where $F_{X_1}^{*n}(x)$ is the $n$-fold convolution of $F_{X_1}$, i.e.
$$F_{X_1}^{*2}(x) = \mathbb{P}(X_1+X_2\le x) = \mathbb{E}1\!\!1_{\{X_1+X_2\le x\}} \stackrel{X_1,X_2\ \text{independent}}{=} \int_{\mathbb{R}}\int_{\mathbb{R}}1\!\!1_{\{x_1+x_2\le x\}}(x_1,x_2)\,dF_{X_1}(x_1)\,dF_{X_2}(x_2)$$
$$= \int_{\mathbb{R}}\int_{\mathbb{R}}1\!\!1_{\{x_1\le x-x_2\}}(x_1,x_2)\,dF_{X_1}(x_1)\,dF_{X_2}(x_2) = \int_{\mathbb{R}}F_{X_1}(x-x_2)\,dF_{X_2}(x_2),$$
and by recursion, using $F_{X_1}=F_{X_2}$,
$$F_{X_1}^{*(n+1)}(x) := \int_{\mathbb{R}}F_{X_1}^{*n}(x-y)\,dF_{X_1}(y).$$
But the computation of $F_{X_1}^{*n}(x)$ is numerically difficult. However, there is a recursion formula for $\mathbb{P}(S=x)$ that holds under certain conditions:
Theorem (Panjer recursion scheme). Assume the following conditions:

(C1) $X_i:\Omega\to\{0,1,\dots\}$,

(C2) for $N$ it holds that
$$q_n := \mathbb{P}(N=n) = \Big(a+\frac{b}{n}\Big)q_{n-1},\quad n=1,2,\dots,$$
for some $a,b\in\mathbb{R}$.

Then, for $p_n := \mathbb{P}(S=n)$, $n=0,1,2,\dots$,
$$p_0 = \begin{cases} q_0, & \text{if } \mathbb{P}(X_1=0)=0, \\ \mathbb{E}\big(\mathbb{P}(X_1=0)^N\big), & \text{otherwise,}\end{cases} \quad (1)$$
$$p_n = \frac{1}{1-a\,\mathbb{P}(X_1=0)}\sum_{i=1}^n\Big(a+\frac{bi}{n}\Big)\,\mathbb{P}(X_1=i)\,p_{n-i},\quad n\ge 1. \quad (2)$$

Proof: For $p_0$,
$$p_0 = \mathbb{P}(S=0) = \mathbb{P}(S=0,\,N=0)+\mathbb{P}(S=0,\,N>0) = \underbrace{\mathbb{P}(N=0)}_{=q_0}+\sum_{k=1}^\infty\underbrace{\mathbb{P}(X_1+\dots+X_k=0,\,N=k)}_{=\mathbb{P}(X_1=0)^k\,\mathbb{P}(N=k)}$$
$$= \sum_{k=0}^\infty \mathbb{P}(X_1=0)^k\,q_k = \mathbb{E}\big(\mathbb{P}(X_1=0)^N\big).$$
This implies (1). For $p_n$, $n\ge 1$,
$$p_n = \mathbb{P}(S=n) = \sum_{k=1}^\infty\mathbb{P}(S_k=n)\,q_k$$
$$\stackrel{(C2)}{=}\sum_{k=1}^\infty\mathbb{P}(S_k=n)\Big(a+\frac{b}{k}\Big)q_{k-1}. \quad (3)$$
Assume $\mathbb{P}(S_k=n)>0$. Now, because $Q:=\mathbb{P}(\,\cdot\,|\,S_k=n)$ is a probability measure, the following holds:
$$\sum_{l=0}^n\Big(a+\frac{bl}{n}\Big)\underbrace{\mathbb{P}(X_1=l\,|\,S_k=n)}_{=Q(X_1=l)} = a+\frac{b}{n}\,\mathbb{E}_QX_1 = a+\frac{b}{nk}\,\mathbb{E}_Q(X_1+\dots+X_k) = a+\frac{b}{nk}\,\underbrace{\mathbb{E}_QS_k}_{=n} = a+\frac{b}{k}, \quad (4)$$
where the last equation follows from the fact that $Q(S_k=n)=1$. On the other hand, we can express the term $a+\frac{b}{k}$ also by
$$\sum_{l=0}^n\Big(a+\frac{bl}{n}\Big)\,\mathbb{P}(X_1=l\,|\,S_k=n) = \sum_{l=0}^n\Big(a+\frac{bl}{n}\Big)\frac{\mathbb{P}(X_1=l,\,S_k-X_1=n-l)}{\mathbb{P}(S_k=n)} = \sum_{l=0}^n\Big(a+\frac{bl}{n}\Big)\frac{\mathbb{P}(X_1=l)\,\mathbb{P}(S_{k-1}=n-l)}{\mathbb{P}(S_k=n)}. \quad (5)$$
Thanks to (4) we can now replace the term $a+\frac{b}{k}$ in (3) by the right-hand side of (5), which yields
$$p_n = \sum_{k=1}^\infty\sum_{l=0}^n\Big(a+\frac{bl}{n}\Big)\,\mathbb{P}(X_1=l)\,\mathbb{P}(S_{k-1}=n-l)\,q_{k-1} = \sum_{l=0}^n\Big(a+\frac{bl}{n}\Big)\,\mathbb{P}(X_1=l)\underbrace{\sum_{k=1}^\infty\mathbb{P}(S_{k-1}=n-l)\,q_{k-1}}_{=\mathbb{P}(S=n-l)}$$
$$= a\,\mathbb{P}(X_1=0)\,\mathbb{P}(S=n)+\sum_{l=1}^n\Big(a+\frac{bl}{n}\Big)\,\mathbb{P}(X_1=l)\,\mathbb{P}(S=n-l) = a\,\mathbb{P}(X_1=0)\,p_n+\sum_{l=1}^n\Big(a+\frac{bl}{n}\Big)\,\mathbb{P}(X_1=l)\,p_{n-l},$$
which gives equation (2):
$$p_n = \frac{1}{1-a\,\mathbb{P}(X_1=0)}\sum_{l=1}^n\Big(a+\frac{bl}{n}\Big)\,\mathbb{P}(X_1=l)\,p_{n-l}.$$

Remark.

- The Panjer recursion only works for distributions of the $X_i$ on $\{0,1,2,\dots\}$, i.e. $\sum_{k=0}^\infty\mathbb{P}(X_i=k)=1$ (or, by scaling, on a lattice $\{0,d,2d,\dots\}$ for $d>0$ fixed). Traditionally, the distributions used to model the $X_i$ have a density, so that $\int_{\{0,1,2,\dots\}}h_{X_i}(x)\,dx=0$. But, on the other hand, claim sizes are expressed in terms of prices, so they do take values on a lattice. The density $h_{X_i}(x)$ could be approximated by a distribution on a lattice, but how large would the approximation error then be?
- $N$ can only be Poisson, binomially or negative binomially distributed.

7.5 Approximation of $F_{S(t)}$ using the Central Limit Theorem

Assume that the renewal model is used and that
$$S(t)=\sum_{i=1}^{N(t)}X_i,\quad t\ge 0.$$
In the CLT of Section 3.3 it was stated that if $\mathrm{var}(W_1)<\infty$ and $\mathrm{var}(X_1)<\infty$, then
$$\sup_{x\in\mathbb{R}}\left|\mathbb{P}\left(\frac{S(t)-\mathbb{E}S(t)}{\sqrt{\mathrm{var}(S(t))}}\le x\right)-\Phi(x)\right|\longrightarrow 0 \quad\text{as } t\to\infty.$$
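Returning to Section 7.4, a minimal sketch of the Panjer scheme for Poisson $N$ (where $q_n=\frac{\lambda}{n}q_{n-1}$, i.e. $a=0$, $b=\lambda$), checked against a brute-force convolution; the claim-size distribution below is an arbitrary illustrative choice:

```python
import math

lam = 2.0                                # N ~ Pois(lam): a = 0, b = lam in (C2)
fx = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}    # illustrative claim sizes on {0,1,2,3}
a, b = 0.0, lam

nmax = 30
# p_0 = E[P(X_1 = 0)^N] = exp(-lam * (1 - P(X_1 = 0))) for Poisson N.
p = [math.exp(-lam * (1.0 - fx.get(0, 0.0)))]
for n in range(1, nmax + 1):
    s = sum((a + b * i / n) * fx.get(i, 0.0) * p[n - i] for i in range(1, n + 1))
    p.append(s / (1.0 - a * fx.get(0, 0.0)))

# Brute force: P(S = n) = sum_k P(N = k) * P(X_1 + ... + X_k = n).
def convolve(d1, d2):
    out = {}
    for i, pi in d1.items():
        for j, pj in d2.items():
            out[i + j] = out.get(i + j, 0.0) + pi * pj
    return out

brute = {}
conv = {0: 1.0}                # distribution of S_0 = 0
for k in range(61):            # truncate N at 60; the remaining mass is negligible
    qk = math.exp(-lam) * lam ** k / math.factorial(k)
    for n, pn in conv.items():
        brute[n] = brute.get(n, 0.0) + qk * pn
    conv = convolve(conv, fx)

err = max(abs(p[n] - brute.get(n, 0.0)) for n in range(nmax + 1))
print(err)
```

The recursion reproduces the convolution values up to the truncation error, while needing only $O(n_{\max}^2)$ operations instead of repeated full convolutions.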
More informationLEAST ANGLE REGRESSION. Stanford University
The Annals of Statistics 2004, Vol. 32, No. 2, 407 499 Institute of Mathematical Statistics, 2004 LEAST ANGLE REGRESSION BY BRADLEY EFRON, 1 TREVOR HASTIE, 2 IAIN JOHNSTONE 3 AND ROBERT TIBSHIRANI 4 Stanford
More informationFixed Point Theorems and Applications
Fixed Point Theorems and Applications VITTORINO PATA Dipartimento di Matematica F. Brioschi Politecnico di Milano vittorino.pata@polimi.it Contents Preface 2 Notation 3 1. FIXED POINT THEOREMS 5 The Banach
More informationLevel sets and extrema of random processes and fields. JeanMarc Azaïs Mario Wschebor
Level sets and extrema of random processes and fields. JeanMarc Azaïs Mario Wschebor Université de Toulouse UPS CNRS : Institut de Mathématiques Laboratoire de Statistiques et Probabilités 118 Route de
More informationFoundations of Data Science 1
Foundations of Data Science John Hopcroft Ravindran Kannan Version /4/204 These notes are a first draft of a book being written by Hopcroft and Kannan and in many places are incomplete. However, the notes
More informationIEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 50, NO. 5, MAY 2005 577. Least Mean Square Algorithms With Markov RegimeSwitching Limit
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 50, NO. 5, MAY 2005 577 Least Mean Square Algorithms With Markov RegimeSwitching Limit G. George Yin, Fellow, IEEE, and Vikram Krishnamurthy, Fellow, IEEE
More informationFoundations of Data Science 1
Foundations of Data Science John Hopcroft Ravindran Kannan Version 2/8/204 These notes are a first draft of a book being written by Hopcroft and Kannan and in many places are incomplete. However, the notes
More informationMeanVariance Optimal Adaptive Execution
MeanVariance Optimal Adaptive Execution Julian Lorenz and Robert Almgren January 23, 2011 Abstract Electronic trading of equities and other securities makes heavy use of arrival price algorithms, that
More informationANALYTIC NUMBER THEORY LECTURE NOTES BASED ON DAVENPORT S BOOK
ANALYTIC NUMBER THEORY LECTURE NOTES BASED ON DAVENPORT S BOOK ANDREAS STRÖMBERGSSON These lecture notes follow to a large extent Davenport s book [5], but with things reordered and often expanded. The
More informationRobust MeanCovariance Solutions for Stochastic Optimization
OPERATIONS RESEARCH Vol. 00, No. 0, Xxxxx 0000, pp. 000 000 issn 0030364X eissn 15265463 00 0000 0001 INFORMS doi 10.1287/xxxx.0000.0000 c 0000 INFORMS Robust MeanCovariance Solutions for Stochastic
More informationRESAMPLING FEWER THAN n OBSERVATIONS: GAINS, LOSSES, AND REMEDIES FOR LOSSES
Statistica Sinica 7(1997), 131 RESAMPLING FEWER THAN n OBSERVATIONS: GAINS, LOSSES, AND REMEDIES FOR LOSSES P. J. Bickel, F. Götze and W. R. van Zwet University of California, Berkeley, University of
More informationHow to Gamble If You Must
How to Gamble If You Must Kyle Siegrist Department of Mathematical Sciences University of Alabama in Huntsville Abstract In red and black, a player bets, at even stakes, on a sequence of independent games
More informationNew insights on the meanvariance portfolio selection from de Finetti s suggestions. Flavio Pressacco and Paolo Serafini, Università di Udine
New insights on the meanvariance portfolio selection from de Finetti s suggestions Flavio Pressacco and Paolo Serafini, Università di Udine Abstract: In this paper we offer an alternative approach to
More informationSET THEORY AND UNIQUENESS FOR TRIGONOMETRIC SERIES. Alexander S. Kechris* Department of Mathematics Caltech Pasadena, CA 91125
SET THEORY AND UNIQUENESS FOR TRIGONOMETRIC SERIES Alexander S. Kechris* Department of Mathematics Caltech Pasadena, CA 91125 Dedicated to the memory of my friend and colleague Stelios Pichorides Problems
More informationControllability and Observability of Partial Differential Equations: Some results and open problems
Controllability and Observability of Partial Differential Equations: Some results and open problems Enrique ZUAZUA Departamento de Matemáticas Universidad Autónoma 2849 Madrid. Spain. enrique.zuazua@uam.es
More information