Paid-Incurred Chain Claims Reserving Method


Michael Merz (University of Hamburg, Department of Business Administration, 20146 Hamburg, Germany)
Mario V. Wüthrich (ETH Zurich, Department of Mathematics, 8092 Zurich, Switzerland)

Abstract

We present a novel stochastic model for claims reserving that allows us to combine claims payments and incurred losses information. The main idea is to combine two claims reserving models, Hertig's model [11] and Gogol's model [8], leading to a log-normal paid-incurred chain (PIC) model. Using a Bayesian point of view for the parameter modelling, we derive in this Bayesian PIC model the full predictive distribution of the outstanding loss liabilities. On the one hand, this allows for an analytical calculation of the claims reserves and the corresponding conditional mean square error of prediction. On the other hand, simulation algorithms provide any other statistic and risk measure on these claims reserves.

Keywords. Claims Reserving, Outstanding Loss Liabilities, Ultimate Loss, Claims Payments, Claims Incurred, Incurred Losses, Prediction Uncertainty.

1 Paid-Incurred Chain Model

1.1 Introduction

The main task of reserving actuaries is to predict ultimate loss ratios and outstanding loss liabilities. In general, this prediction is based on past information that comes from different sources. In order to get a unified prediction of the outstanding loss liabilities, one needs to rank these information channels by assigning credibility weights to the available information; often this is a difficult task. Therefore, most classical claims reserving methods are based on one information channel only, for instance, claims payments or incurred losses data.

Halliwell [9, 10] was probably one of the first to investigate the problem of combining claims payments and incurred losses data for claims reserving from a statistical point of view. The analysis of Halliwell [9, 10], as well as that of Venter [20], is done in a regression framework.

A second approach to unifying claims prediction based on claims payments and incurred losses is the Munich chain ladder (MCL) method. The MCL method was introduced by Quarg-Mack [17]; its aim is to reduce the gap between the two chain ladder (CL) predictions that are based on claims payments and incurred losses data, respectively. The idea is to adjust the CL factors with incurred-paid ratios to reduce the gap between the two predictions (see Quarg-Mack [17], Verdier-Klinger [22], Merz-Wüthrich [14] and Liu-Verrall [12]). The difficulty with the MCL method is that it involves several parameter estimates whose precision is difficult to quantify within a stochastic model framework.

A third approach was presented in Dahms [4]. Dahms considers the complementary loss ratio method (CLRM), where the underlying volume measures are the case reserves, which form the basis for the regression and CL analysis. Dahms' CLRM can also be applied to incomplete data, and he derives an estimator for the prediction uncertainty.

In this paper we present a novel claims reserving method based on the combination of Hertig's log-normal claims reserving model [11] for claims payments and Gogol's Bayesian claims reserving model [8] for incurred losses data. The idea is to use Hertig's model for the prior ultimate loss distribution needed in Gogol's model, which leads to a paid-incurred chain (PIC) claims reserving method. Using basic properties of multivariate Gaussian distributions, we obtain a mathematically rigorous and consistent model for the combination of the two information channels, claims payments and incurred losses data. The analysis will attach credibility weights to these sources of information, and it will also involve incurred-paid ratios similar to the MCL method (see Quarg-Mack [17]).
Our PIC model provides one single estimate of the claims reserves based on both information channels and, moreover, has the advantage that we can quantify the prediction uncertainty and that it allows for complete model simulations. This means that the PIC model allows for the derivation of the full predictive distribution of the outstanding loss liabilities. Endowed with the simulated predictive distribution, one is not only able to calculate estimators of the first two moments but can also calculate any other risk measure, such as Value-at-Risk or expected shortfall.

Posthuma et al. [16] were probably the first to study a PIC model. Under the assumption of multivariate normality, they formed a claims development chain for the increments of claims payments and incurred losses. Their model was then treated in the spirit of generalized linear models, similar to Venter [20]. Our model will be analyzed in the spirit of the Bayesian chain ladder link ratio models (see Bühlmann et al. [2]).

1.2 Notation and Model Assumptions

For the PIC model we consider two channels of information: (i) claims payments, which refer to the payments made for reported claims; (ii) incurred losses, which correspond to the reported claim amounts. Often, the difference between incurred losses and claims payments is called case reserves for reported claims. Ultimately, claims payments and incurred losses must reach the same value when all claims are settled.

In many cases, the statistical analysis of claims payments and incurred losses data is done by accident years and development years, which leads to the so-called claims development triangles (see Table 1, and Chapter 1 in Wüthrich-Merz [24]). In the following, we denote accident years by $i \in \{0, \ldots, J\}$ and development years by $j \in \{0, \ldots, J\}$. We assume that all claims are settled after the $J$-th development year. Cumulative claims payments in accident year $i$ after $j$ development periods are denoted by $P_{i,j}$, and the corresponding incurred losses by $I_{i,j}$. Moreover, for the ultimate loss we assume $P_{i,J} = I_{i,J}$ with probability 1, which means that ultimately (at time $J$) they reach the same value. For an illustration we refer to Table 1.

[Table 1: Left-hand side: cumulative claims payments $P_{i,j}$ development triangle; right-hand side: incurred losses $I_{i,j}$ development triangle; both lead to the same ultimate loss $P_{i,J} = I_{i,J}$.]

We define for $j \in \{0, \ldots, J\}$ the sets

$B_j^P = \{P_{i,l} : 0 \le i \le J,\ 0 \le l \le j\}$,
$B_j^I = \{I_{i,l} : 0 \le i \le J,\ 0 \le l \le j\}$,
$B_j = B_j^P \cup B_j^I$,

the paid, incurred, and joint paid and incurred data, respectively, up to development year $j$.

Model Assumptions 1.1 (Log-Normal PIC Model). Conditionally, given $\Theta = (\Phi_0, \ldots, \Phi_J, \Psi_0, \ldots, \Psi_{J-1}, \sigma_0, \ldots, \sigma_J, \tau_0, \ldots, \tau_{J-1})$, we have:

- the random vector $(\xi_{0,0}, \ldots, \xi_{J,J}, \zeta_{0,0}, \ldots, \zeta_{J,J-1})$ has a multivariate Gaussian distribution with uncorrelated components given by
  $\xi_{i,j} \sim N(\Phi_j, \sigma_j^2)$ for $i \in \{0, \ldots, J\}$ and $j \in \{0, \ldots, J\}$,
  $\zeta_{k,l} \sim N(\Psi_l, \tau_l^2)$ for $k \in \{0, \ldots, J\}$ and $l \in \{0, \ldots, J-1\}$;
- cumulative payments $P_{i,j}$ are given by the recursion $P_{i,j} = P_{i,j-1} \exp\{\xi_{i,j}\}$, with initial value $P_{i,0} = \exp\{\xi_{i,0}\}$;
- incurred losses $I_{i,j}$ are given by the backwards recursion $I_{i,j-1} = I_{i,j} \exp\{-\zeta_{i,j-1}\}$, with initial value $I_{i,J} = P_{i,J}$.

The components of $\Theta$ are independent, and $\sigma_j, \tau_j > 0$ for all $j$.

Remarks. This PIC model combines cumulative payments and incurred losses data to get a unified predictor of the ultimate loss that is based on both sources of information. Thereby, the model assumption $I_{i,J} = P_{i,J}$ guarantees that the ultimate loss coincides for claims payments and incurred losses data. This means that in this PIC model there is no gap between the two predictors based on cumulative payments and incurred losses, respectively. This is similar to Section 4 in Posthuma et al. [16] and to the CLRM (see Dahms [4]), but different from the MCL method (see Quarg-Mack [17]).

The cumulative payments $P_{i,j}$ satisfy Hertig's model [11], conditional on the parameters $\Theta$. The model assumption $I_{i,J} = P_{i,J}$ also implies that

$E[P_{i,J} \mid \Theta] = E[I_{i,J} \mid \Theta] = \exp\left\{ \sum_{m=0}^{J} \left( \Phi_m + \sigma_m^2/2 \right) \right\}$,   (1.1)

see also (2.2). Henceforth, incurred losses $I_{i,j}$ satisfy Gogol's model [8] with prior ultimate loss mean $E[P_{i,J} \mid \Theta]$.

The assumption I i,j = P i,j means that all claims are settled after J development years and there is no so-called tail development factor. If there is a claims development beyond development year J, then one can extend the PIC model for the estimation of a tail development factor. Because this inclusion of a tail development factor requires rather extended derivations and discussions we provide the details in Merz-Wüthrich [15]. We assume conditional independence between all ξ i, s and ζ k,l s. One may question this assumption, especially, because Quarg-Mack [17] found high correlations between incurredpaid ratios. In the example section see Section 5 we calculate the implied posterior correlation between the Φ s and the Ψ l s see Table 1. Our findings are that these correlations for our data set are fairly small except in regions where we have only a few observations. Therefore, we refrain from introducing dependence between the components. However, this dependence could be implemented but then the solutions can only be found numerically and, moreover, the estimation of the correlation matrix is not obvious. We choose a log-normal PIC model. This has the advantage that the conditional distributions of P i,j, given Θ and B, B P or B I, respectively, can be calculated explicitly. Other distributional assumptions only allow for numerical solutions using simulations with missing data see, for instance, van Dyk-Meng [1]. Organisation. In the next section we are going to give the model properties and first model interpretations conditional on the knowledge of Θ. In Section 3 we discuss the estimation of the underlying model parameters Θ. In Section 4 we discuss prediction uncertainty and in Section 5 we provide an example. All proofs of the statements are given in the appendix. 
2 Simultaneous Payments and Incurred Losses Consideration

2.1 Cumulative Payments

Our first observation is that, given $\Theta$, cumulative payments $P_{i,j}$ satisfy the assumptions of Hertig's [11] log-normal CL model (see also Section 5.1 in Wüthrich-Merz [24]). That is, conditional on $\Theta$, we have for $j \ge 0$

$\log \left( P_{i,j} / P_{i,j-1} \right) \mid_{\{B_{j-1}^P, \Theta\}} \sim N(\Phi_j, \sigma_j^2)$,

where we have set $P_{i,-1} = 1$. This gives the CL property (see also Lemma 5.2 in Wüthrich-Merz [24])

$E[P_{i,j} \mid B_{j-1}^P, \Theta] = P_{i,j-1} \exp\{\Phi_j + \sigma_j^2/2\}$.   (2.1)

The tower property for conditional expectations (see, for example, Williams [23], 9.7(i)) then implies for the expected ultimate loss, given $\{B_j^P, \Theta\}$,

$E[P_{i,J} \mid B_j^P, \Theta] = P_{i,j} \prod_{l=j+1}^{J} \exp\{\Phi_l + \sigma_l^2/2\}$.   (2.2)

2.2 Incurred Losses

The model properties of incurred losses $I_{i,j}$ are in the spirit of Gogol's [8] model. Namely, given $\Theta$, the ultimate loss $I_{i,J} = P_{i,J}$ has a log-normal distribution and, conditional on $I_{i,J}$ and $\Theta$, the incurred losses $I_{i,j}$ also have a log-normal distribution. This then allows for Bayesian inference on $I_{i,J}$, given $B_j^I$, similar to Lemma 4.1 in Wüthrich-Merz [24]. The key lemma is the following well-known property of multivariate Gaussian distributions (see, e.g., Appendix A in Posthuma et al. [16]):

Lemma 2.1. Assume $(X_1, \ldots, X_n)$ is multivariate Gaussian distributed with mean $(m_1, \ldots, m_n)$ and positive definite covariance matrix $\Sigma$. Then we have for the conditional distribution

$X_1 \mid_{\{X_2, \ldots, X_n\}} \sim N\left( m_1 + \Sigma_{1,2} \, \Sigma_{2,2}^{-1} \left( X_{(2)} - m_{(2)} \right), \ \Sigma_{1,1} - \Sigma_{1,2} \, \Sigma_{2,2}^{-1} \, \Sigma_{2,1} \right)$,

where $X_{(2)} = (X_2, \ldots, X_n)'$ is multivariate Gaussian with mean $m_{(2)} = (m_2, \ldots, m_n)'$ and positive definite covariance matrix $\Sigma_{2,2}$, $\Sigma_{1,1}$ is the variance of $X_1$, and $\Sigma_{1,2} = \Sigma_{2,1}'$ is the covariance vector between $X_1$ and $X_{(2)}$.

Lemma 2.1 gives the following proposition, whose proof is provided in the appendix.

Proposition 2.2. Under Model Assumptions 1.1 we obtain for $0 \le j < j+l \le J$

$\log I_{i,j+l} \mid_{\{B_j^I, \Theta\}} \sim N\left( \mu_{j+l} + \frac{v_{j+l}^2}{v_j^2} \left( \log I_{i,j} - \mu_j \right), \ v_{j+l}^2 \left( 1 - v_{j+l}^2 / v_j^2 \right) \right)$,

where the parameters are given by (an empty sum is set equal to 0)

$\mu_j = \sum_{m=0}^{J} \Phi_m - \sum_{n=j}^{J-1} \Psi_n$ and $v_j^2 = \sum_{m=0}^{J} \sigma_m^2 + \sum_{n=j}^{J-1} \tau_n^2$.

Note that $\mu_J = \sum_{m=0}^{J} \Phi_m$ and $v_J^2 = \sum_{m=0}^{J} \sigma_m^2$. Henceforth, we have the Markov property and we obtain the following corollary:

Corollary.3 Under Model Assumptions 1.1 we obtain for the expected ultimate loss I i,j, given B I, Θ}, E [I i,j B I, Θ ] = I 1 α i, exp 1 α Ψ l + α µj + vj/ l= = I i, exp Ψ l + τl / exp α µ log I i, τl /, l= l= with credibility weight α = 1 v J v = 1 v τl. l= Remark. Compare the statement of Corollary.3 with formula.. We see that under Model Assumptions 1.1 cumulative payments P i, fulfill the classical CL assumption.1 whereas incurred losses I i, do in general not satisfy the CL assumption, given Θ. This is different from the MCL method where one assumes that both cumulative payments and incurred losses satisfy the CL assumption see Quarg-Mack [17]. At this stage one may even raise the question about interesting stochastic models such that cumulative payments P i, and incurred losses I i, simultaneously fulfill the CL assumption. Our Model 1.1 does not fall into that class. In our model, the classical CL factor gets a correction term exp α µ log I i, τl /, } which adusts the CL factor exp l= Ψ l + τl / to the actual claims experience with credibility weight α. The smaller the development year the bigger is the credibility weight α. On the other hand we could also rewrite the right-hand side of Corollary.3 as E [I i,j B I, Θ ] = exp 1 α J log I i, + Ψ l + α Φ m exp α vj/ }, the first factor on the right-hand side shows that we consider a credibility weighted average between incurred losses log I i, + l= Ψ l and cumulative payments µ J = J m=0 Φ m. l= l= m=0.3 Cumulative Payments and Incurred Losses Finally, we would like to predict the ultimate loss P i,j = I i,j when we ointly consider payments and incurred losses information B. We therefore apply the full PIC model, given the model parameters Θ. 7

Theorem.4 Under Model Assumptions 1.1 we obtain for the ultimate loss P i,j = I i,j, given B, Θ}, 0 < J, log P i,j B,Θ} N µ J + 1 β log P i, η + β log I i, µ, 1 β vj w, where the parameters are given by η = Φ m and w = σm, and the credibility weight is given by m=0 m=0 β = v J w v w > 0. Remarks. The conditional distribution of log P i,j, given B, Θ}, changes accordingly to the observations log P i, and log I i,. Thereby is the prior expectation µ J for the ultimate loss P i,j = I i,j updated by a credibility weighted average between the paid residual log P i, η and the incurred residual log I i, µ, where the credibility weight is given by β = J m=+1 σ m J m=+1 σ m + n= τ. n Analogously, the prior variance v J w is reduced by the credibility weight 1 β, this is typical in credibility theory, see for example Theorem 4.3 in Bühlmann-Gisler [3]. Theorem.4 shows that in our log-normal PIC model we can calculate analytically the posterior distribution of the ultimate loss, given B and conditional on Θ. Henceforth, we can calculate the conditionally expected ultimate loss, see Corollary.5 below. Corollary.5 PIC Ultimate Loss Prediction Under Model Assumptions 1.1 we obtain for the expected ultimate loss I i,j = P i,j, given B, Θ}, J E [P i,j B, Θ] = P i, exp Φ l + σ l exp β log I J i, σ l µ η P i, l=+1 l=+1 = I i, exp Ψ l exp 1 β log P J i, σ l η µ + I i, l= l=+1 J = exp 1 β log P i, + Φ l + β log I i, + Ψ l l=+1 exp 1 β v J w / }. l= 8

Remark. Henceforth, if we consider claims payments and incurred losses information simultaneously, we obtain the correction term

$\exp\left\{ \beta_j \left( \log \frac{I_{i,j}}{P_{i,j}} - (\mu_j - \eta_j) - \sum_{l=j+1}^{J} \sigma_l^2/2 \right) \right\}$   (2.3)

to the classical CL predictor $E[P_{i,J} \mid B_j^P, \Theta]$. This adjustment factor compares incurred-paid ratios and corresponds to the observed residuals $\log(I_{i,j}/P_{i,j}) - (\mu_j - \eta_j)$. For example, a large incurred-paid ratio $I_{i,j}/P_{i,j}$ gives a large correction term (2.3) to the classical CL predictor $E[P_{i,J} \mid B_j^P, \Theta]$. This is a similar mechanism as in the MCL method, which also adjusts the predictors according to incurred-paid ratios (see Quarg-Mack [17]). The last formula in the statement of Corollary 2.5 shows that we can also understand the PIC ultimate loss predictor as a credibility weighted average between claims payments and incurred losses information.

3 Parameter Estimation

So far, all considerations were made for known parameters $\Theta$. However, in general, they are not known and need to be estimated from the observations. Assume that we are at time $J$ and that we have observations (see also Table 1)

$D_J^P = \{P_{i,j} : i+j \le J\}$, $D_J^I = \{I_{i,j} : i+j \le J\}$ and $D_J = D_J^P \cup D_J^I$.

We estimate the parameters in a Bayesian framework. Therefore we define the following model:

Model Assumptions 3.1 (Bayesian PIC Model). Assume Model Assumptions 1.1 hold true with deterministic $\sigma_0, \ldots, \sigma_J$ and $\tau_0, \ldots, \tau_{J-1}$ and

$\Phi_m \sim N(\phi_m, s_m^2)$ for $m \in \{0, \ldots, J\}$,
$\Psi_n \sim N(\psi_n, t_n^2)$ for $n \in \{0, \ldots, J-1\}$.

In a full Bayesian approach one chooses an appropriate prior distribution for the whole parameter vector $\Theta$. We will only use a prior distribution for $\Phi_m$ and $\Psi_n$ and assume that $\sigma_m$ and $\tau_n$ are known. This has the advantage that we can analytically calculate the posterior distributions, which allows for explicit model calculations and interpretations.
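The analytical tractability claimed here rests on normal–normal conjugacy: a Gaussian prior on the mean of Gaussian observations yields a Gaussian posterior whose mean is a credibility weighted average. A generic sketch (Python; generic notation, not the paper's):

```python
def normal_posterior(obs, prior_mean, prior_var, obs_var):
    """Posterior of a Gaussian mean with known observation variance.

    Prior N(prior_mean, prior_var); observations i.i.d. N(mean, obs_var).
    Returns the posterior mean (a credibility weighted average of the sample
    mean and the prior mean) and the posterior variance.
    """
    n = len(obs)
    credibility = n / (n + obs_var / prior_var)   # weight on the data
    sample_mean = sum(obs) / n
    post_mean = credibility * sample_mean + (1 - credibility) * prior_mean
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    return post_mean, post_var
```

Letting `prior_var` grow large gives a non-informative prior: the credibility weight tends to 1 and the posterior mean tends to the sample mean, which is exactly the effect exploited in the example of Section 5.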

3.1 Cumulative Payments

For claims payments we only need the parameters $\Phi = (\Phi_0, \ldots, \Phi_J)'$. The posterior density of $\Phi$, given $D_J^P$, is given by (we set $P_{i,-1} = 1$)

$u(\Phi \mid D_J^P) \propto \prod_{j=0}^{J} \left[ \prod_{i=0}^{J-j} \exp\left\{ -\frac{1}{2\sigma_j^2} \left( \Phi_j - \log \frac{P_{i,j}}{P_{i,j-1}} \right)^2 \right\} \right] \exp\left\{ -\frac{1}{2 s_j^2} \left( \Phi_j - \phi_j \right)^2 \right\}$.

This immediately provides the next theorem:

Theorem 3.2. Under Model Assumptions 3.1 the posterior distribution of $\Phi$, given $D_J^P$, has independent components with

$\Phi_j \mid_{D_J^P} \sim N\left( \phi_j^{P,\mathrm{post}}, \left( s_j^{P,\mathrm{post}} \right)^2 \right)$,

where

$\phi_j^{P,\mathrm{post}} = \gamma_j^P \, \frac{1}{J-j+1} \sum_{i=0}^{J-j} \log \frac{P_{i,j}}{P_{i,j-1}} + \left( 1 - \gamma_j^P \right) \phi_j$, $\left( s_j^{P,\mathrm{post}} \right)^2 = \left( \frac{1}{s_j^2} + \frac{J-j+1}{\sigma_j^2} \right)^{-1}$,

with credibility weight

$\gamma_j^P = \frac{J-j+1}{J-j+1 + \sigma_j^2 / s_j^2}$.

Henceforth, the posterior mean is a credibility weighted average between the prior mean $\phi_j$ and the empirical mean $\hat{\phi}_j = \frac{1}{J-j+1} \sum_{i=0}^{J-j} \log (P_{i,j}/P_{i,j-1})$, see also formula (5.3) in Wüthrich-Merz [24].

The posterior distribution of $P_{i,J}$, given $D_J^P$, is now completely determined. Moments can be calculated in closed form, and Monte Carlo simulation provides the empirical posterior distribution of the ultimate loss vector $(P_{1,J}, \ldots, P_{J,J}) \mid_{D_J^P} = (I_{1,J}, \ldots, I_{J,J}) \mid_{D_J^P}$. In view of (2.2) and Theorem 3.2, the ultimate loss predictor, given $D_J^P$, is given by

$E[P_{i,J} \mid D_J^P] = P_{i,J-i} \prod_{l=J-i+1}^{J} \exp\left\{ \phi_l^{P,\mathrm{post}} + \sigma_l^2/2 + \left( s_l^{P,\mathrm{post}} \right)^2/2 \right\}$.   (3.1)

3.2 Incurred Losses

In this subsection we concentrate on the parameter estimation given the data $D_J^I$ of incurred losses. We define the underlying parameter for $I_{i,J}$ by (see also (1.1))

$\Psi_J = \mu_J = \sum_{j=0}^{J} \Phi_j \sim N\left( \psi_J = \sum_{j=0}^{J} \phi_j, \ t_J^2 = \sum_{j=0}^{J} s_j^2 \right)$,

which is independent of $\Psi_0, \ldots, \Psi_{J-1}$. For incurred losses we then only need the parameters $\Psi = (\Psi_0, \ldots, \Psi_J)'$. The posterior density of $\Psi$, given $D_J^I$, is given by

$u(\Psi \mid D_J^I) \propto \prod_{i=0}^{J} \exp\left\{ -\frac{1}{2 v_{J-i}^2} \left( \sum_{n=J-i}^{J-1} \Psi_n + \log I_{i,J-i} - \Psi_J \right)^2 \right\}$   (3.2)
$\times \prod_{j=0}^{J-1} \left[ \prod_{i=0}^{J-j-1} \exp\left\{ -\frac{1}{2 \tau_j^2} \left( \Psi_j + \log \frac{I_{i,j}}{I_{i,j+1}} \right)^2 \right\} \right] \prod_{j=0}^{J} \exp\left\{ -\frac{1}{2 t_j^2} \left( \Psi_j - \psi_j \right)^2 \right\}$.

This immediately provides the next theorem:

Theorem 3.3. Under Model Assumptions 3.1, the posterior distribution of $\Psi$, given $D_J^I$, is a multivariate Gaussian distribution with posterior mean $\psi^{\mathrm{post}}(D_J^I)$ and posterior covariance matrix $\Sigma(D_J^I)$. The inverse covariance matrix $\Sigma(D_J^I)^{-1} = (a_{n,m}^I)_{0 \le n,m \le J}$ is given by

$a_{n,m}^I = \left( \frac{1}{t_n^2} + \frac{J-n}{\tau_n^2} \right) 1_{\{n=m\}} + \sum_{l=0}^{n \wedge m} v_l^{-2}$ for $0 \le n, m \le J-1$,
$a_{n,J}^I = a_{J,n}^I = -\sum_{l=0}^{n} v_l^{-2}$ for $0 \le n \le J-1$,
$a_{J,J}^I = \frac{1}{t_J^2} + \sum_{l=0}^{J} v_l^{-2}$.

The posterior mean $\psi^{\mathrm{post}}(D_J^I) = (\psi_0^{I,\mathrm{post}}, \ldots, \psi_J^{I,\mathrm{post}})'$ is obtained by

$\psi^{\mathrm{post}}(D_J^I) = \Sigma(D_J^I) \left( b_0^I, \ldots, b_J^I \right)'$,

with vector $(b_0^I, \ldots, b_J^I)$ given by

$b_j^I = \frac{\psi_j}{t_j^2} - \frac{1}{\tau_j^2} \sum_{i=0}^{J-j-1} \log \frac{I_{i,j}}{I_{i,j+1}} - \sum_{i=0}^{j} v_i^{-2} \log I_{J-i,i}$ for $0 \le j \le J-1$,
$b_J^I = \frac{\psi_J}{t_J^2} + \sum_{i=0}^{J} v_i^{-2} \log I_{J-i,i}$.

Observe that $b_j^I$, $j < J$, can be rewritten so that it involves a credibility weighted average between the prior mean $\psi_j$ and the incurred losses observations, namely

$b_j^I = \left( \frac{1}{t_j^2} + \frac{J-j}{\tau_j^2} \right) \left[ \gamma_j^I \psi_j + \left( 1 - \gamma_j^I \right) \hat{\psi}_j \right] - \sum_{i=0}^{j} v_i^{-2} \log I_{J-i,i}$,   (3.3)

with credibility weight

$\gamma_j^I = \left( 1 + \frac{(J-j) \, t_j^2}{\tau_j^2} \right)^{-1}$

and empirical mean

$\hat{\psi}_j = \frac{1}{J-j} \sum_{i=0}^{J-j-1} \log \frac{I_{i,j+1}}{I_{i,j}}$.

The posterior distribution of $P_{i,J} = I_{i,J}$, given $D_J^I$, is now completely determined. We obtain for the ultimate loss predictor, given $D_J^I$,

$E[P_{i,J} \mid D_J^I] = I_{i,J-i}^{1-\alpha_{J-i}} \exp\left\{ \left( 1 - \alpha_{J-i} \right) \sum_{l=J-i}^{J-1} \psi_l^{I,\mathrm{post}} + \alpha_{J-i} \, \psi_J^{I,\mathrm{post}} + \left( \alpha_{J-i} v_J^2 + s_i^{I,\mathrm{post}} \right)/2 \right\}$,   (3.4)

where $s_i^{I,\mathrm{post}} = e_i^I \, \Sigma(D_J^I) \, (e_i^I)'$, with

$e_i^I = \left( 0, \ldots, 0, 1-\alpha_{J-i}, \ldots, 1-\alpha_{J-i}, \alpha_{J-i} \right) \in \mathbb{R}^{J+1}$.

3.3 Cumulative Payments and Incurred Losses

Similar to the last section, we determine the posterior distribution of $\Theta$, given $D_J$. Observe that

$\log \left( I_{i,j} / P_{i,j} \right) \mid_{\{B_j^P, \Theta\}} \sim N\left( \mu_j - \eta_j, \ v_j^2 - w_j^2 \right) = N\left( \sum_{m=j+1}^{J} \Phi_m - \sum_{n=j}^{J-1} \Psi_n, \ \sum_{m=j+1}^{J} \sigma_m^2 + \sum_{n=j}^{J-1} \tau_n^2 \right)$.

This implies that the joint likelihood function of the data $D_J$ is given by (we set $P_{i,-1} = 1$)

$l_{D_J}(\Theta) = \prod_{j=0}^{J} \prod_{i=0}^{J-j} \frac{1}{\sqrt{2\pi} \, \sigma_j \, P_{i,j}} \exp\left\{ -\frac{1}{2\sigma_j^2} \left( \Phi_j - \log \frac{P_{i,j}}{P_{i,j-1}} \right)^2 \right\}$
$\times \prod_{i=1}^{J} \frac{1}{\sqrt{2\pi \left( v_{J-i}^2 - w_{J-i}^2 \right)} \, I_{i,J-i}} \exp\left\{ -\frac{1}{2 \left( v_{J-i}^2 - w_{J-i}^2 \right)} \left( \mu_{J-i} - \eta_{J-i} - \log \frac{I_{i,J-i}}{P_{i,J-i}} \right)^2 \right\}$   (3.5)
$\times \prod_{j=0}^{J-1} \prod_{i=0}^{J-j-1} \frac{1}{\sqrt{2\pi} \, \tau_j \, I_{i,j}} \exp\left\{ -\frac{1}{2\tau_j^2} \left( \Psi_j + \log \frac{I_{i,j}}{I_{i,j+1}} \right)^2 \right\}$.

Under Model Assumptions 3.1, the posterior distribution $u(\Theta \mid D_J)$ of $\Theta$, given $D_J$, is given by

$u(\Theta \mid D_J) \propto l_{D_J}(\Theta) \prod_{m=0}^{J} \exp\left\{ -\frac{1}{2 s_m^2} \left( \Phi_m - \phi_m \right)^2 \right\} \prod_{n=0}^{J-1} \exp\left\{ -\frac{1}{2 t_n^2} \left( \Psi_n - \psi_n \right)^2 \right\}$.   (3.6)

This immediately implies the following theorem:

Theorem 3.4. Under Model Assumptions 3.1, the posterior distribution $u(\Theta \mid D_J)$ is a multivariate Gaussian distribution with posterior mean $\theta^{\mathrm{post}}(D_J)$ and posterior covariance matrix $\Sigma(D_J)$. The inverse covariance matrix $\Sigma(D_J)^{-1} = (a_{n,m})_{0 \le n,m \le 2J}$ is given by

$a_{n,m} = \left( \frac{1}{s_n^2} + \frac{J-n+1}{\sigma_n^2} \right) 1_{\{n=m\}} + \sum_{l=0}^{(n \wedge m)-1} \left( v_l^2 - w_l^2 \right)^{-1}$ for $0 \le n, m \le J$,
$a_{J+1+n, J+1+m} = \left( \frac{1}{t_n^2} + \frac{J-n}{\tau_n^2} \right) 1_{\{n=m\}} + \sum_{l=0}^{n \wedge m} \left( v_l^2 - w_l^2 \right)^{-1}$ for $0 \le n, m \le J-1$,
$a_{n, J+1+m} = a_{J+1+m, n} = -\sum_{l=0}^{(n-1) \wedge m} \left( v_l^2 - w_l^2 \right)^{-1}$ for $0 \le n \le J$, $0 \le m \le J-1$.

The posterior mean $\theta^{\mathrm{post}}(D_J) = (\phi_0^{\mathrm{post}}, \ldots, \phi_J^{\mathrm{post}}, \psi_0^{\mathrm{post}}, \ldots, \psi_{J-1}^{\mathrm{post}})'$ is obtained by

$\theta^{\mathrm{post}}(D_J) = \Sigma(D_J) \left( c_0, \ldots, c_J, b_0, \ldots, b_{J-1} \right)'$,

with vector $(c_0, \ldots, c_J, b_0, \ldots, b_{J-1})$ given by

$c_j = \frac{\phi_j}{s_j^2} + \frac{1}{\sigma_j^2} \sum_{i=0}^{J-j} \log \frac{P_{i,j}}{P_{i,j-1}} + \sum_{i=J-j+1}^{J} \left( v_{J-i}^2 - w_{J-i}^2 \right)^{-1} \log \frac{I_{i,J-i}}{P_{i,J-i}}$,

$b_j = \frac{\psi_j}{t_j^2} - \frac{1}{\tau_j^2} \sum_{i=0}^{J-j-1} \log \frac{I_{i,j}}{I_{i,j+1}} - \sum_{i=J-j}^{J} \left( v_{J-i}^2 - w_{J-i}^2 \right)^{-1} \log \frac{I_{i,J-i}}{P_{i,J-i}}$.

Henceforth, this implies for the expected ultimate loss in the Bayesian PIC model, given $D_J$ (see also Corollary 2.5),

$E[P_{i,J} \mid D_J] = P_{i,J-i}^{1-\beta_{J-i}} \, I_{i,J-i}^{\beta_{J-i}} \exp\left\{ \left( 1 - \beta_{J-i} \right) \sum_{l=J-i+1}^{J} \phi_l^{\mathrm{post}} + \beta_{J-i} \sum_{l=J-i}^{J-1} \psi_l^{\mathrm{post}} \right\}$
$\times \exp\left\{ \left( 1 - \beta_{J-i} \right) \left( v_J^2 - w_{J-i}^2 \right)/2 + s_i^{\mathrm{post}}/2 \right\}$,   (3.7)

where

$s_i^{\mathrm{post}} = e_i \, \Sigma(D_J) \, e_i'$, with $e_i = \left( 0, \ldots, 0, 1-\beta_{J-i}, \ldots, 1-\beta_{J-i}, 0, \ldots, 0, \beta_{J-i}, \ldots, \beta_{J-i} \right) \in \mathbb{R}^{2J+1}$.

4 Prediction Uncertainty

The ultimate loss $P_{i,J} = I_{i,J}$ is now predicted by its conditional expectation $E[P_{i,J} \mid D_J^P]$, $E[P_{i,J} \mid D_J^I]$ or $E[P_{i,J} \mid D_J]$, depending on the available information $D_J^P$, $D_J^I$ or $D_J$ (see (3.1), (3.4) and (3.7)). With Theorems 3.2, 3.3 and 3.4, all posterior distributions in the Bayesian PIC Model 3.1 are given analytically. Therefore, any risk measure for the prediction uncertainty can be calculated with a simple Monte Carlo simulation approach. Here, we consider the conditional mean square error of prediction (MSEP) as risk measure. The conditional MSEP is probably the most popular risk measure in claims reserving practice and has the advantage that it is analytically tractable in our context. We derive it for the posterior distribution, given $D_J$; the cases $D_J^P$ and $D_J^I$ are completely analogous. The conditional MSEP is given by

$\mathrm{msep}_{\sum_i P_{i,J} \mid D_J} \left( E\left[ \textstyle\sum_{i=1}^{J} P_{i,J} \,\middle|\, D_J \right] \right) = E\left[ \left( \textstyle\sum_{i=1}^{J} P_{i,J} - E\left[ \textstyle\sum_{i=1}^{J} P_{i,J} \,\middle|\, D_J \right] \right)^2 \,\middle|\, D_J \right] = \mathrm{Var}\left( \textstyle\sum_{i=1}^{J} P_{i,J} \,\middle|\, D_J \right)$,

see Wüthrich-Merz [24], Section 3.1. For the conditional MSEP, given the observations $D_J$, we obtain:

Theorem 4.1. Under Model Assumptions 3.1 we have, using information $D_J$,

$\mathrm{msep}_{\sum_i P_{i,J} \mid D_J} \left( E\left[ \textstyle\sum_{i=1}^{J} P_{i,J} \,\middle|\, D_J \right] \right) = \sum_{1 \le i,k \le J} \left( \exp\left\{ \left( 1 - \beta_{J-i} \right) \left( v_J^2 - w_{J-i}^2 \right) 1_{\{i=k\}} + e_i \, \Sigma(D_J) \, e_k' \right\} - 1 \right) E[P_{i,J} \mid D_J] \, E[P_{k,J} \mid D_J]$,

with $E[P_{i,J} \mid D_J]$ given by (3.7).

Similarly, we obtain for the conditional MSEP w.r.t. $D_J^I$ and $D_J^P$, respectively:

Theorem 4.2. Under Model Assumptions 3.1 we have, using information $D_J^I$,

$\mathrm{msep}_{\sum_i P_{i,J} \mid D_J^I} \left( E\left[ \textstyle\sum_{i=1}^{J} P_{i,J} \,\middle|\, D_J^I \right] \right) = \sum_{1 \le i,k \le J} \left( \exp\left\{ \alpha_{J-i} v_J^2 \, 1_{\{i=k\}} + e_i^I \, \Sigma(D_J^I) \, (e_k^I)' \right\} - 1 \right) E[P_{i,J} \mid D_J^I] \, E[P_{k,J} \mid D_J^I]$,

with $E[P_{i,J} \mid D_J^I]$ given by (3.4). Using information $D_J^P$ we obtain

$\mathrm{msep}_{\sum_i P_{i,J} \mid D_J^P} \left( E\left[ \textstyle\sum_{i=1}^{J} P_{i,J} \,\middle|\, D_J^P \right] \right) = \sum_{1 \le i,k \le J} \left( \exp\left\{ \left( v_J^2 - w_{J-i}^2 \right) 1_{\{i=k\}} + e_i^P \, \Sigma(D_J^P) \, (e_k^P)' \right\} - 1 \right) E[P_{i,J} \mid D_J^P] \, E[P_{k,J} \mid D_J^P]$,

with $E[P_{i,J} \mid D_J^P]$ given by (3.1), $e_i^P = (0, \ldots, 0, 1, \ldots, 1) \in \mathbb{R}^{J+1}$ and $\Sigma(D_J^P) = \mathrm{diag}\left( (s_j^{P,\mathrm{post}})^2 \right)$.

5 Example

We revisit the first example given in Dahms [4] and Dahms et al. [5] (see Tables 10 and 11). We do a first analysis of the data under Model Assumptions 3.1, where we assume that $\sigma_j$ and $\tau_j$ are deterministic parameters (using plug-in estimates). In a second analysis we also model these parameters in a Bayesian framework.

5.1 Data Analysis in the Bayesian PIC Model 3.1

Because we do not have prior parameter information, and because we would like to compare our results to Dahms' results [4], we choose non-informative priors for $\Phi_m$ and $\Psi_n$. This means that we let $s_m^2 \to \infty$ and $t_n^2 \to \infty$. This then implies for the credibility weights $\gamma_m^P = 1 - \gamma_m^I = 1$, which means that our claims payments prediction is based on $D_J^P$ only (see Theorem 3.2) and our incurred losses prediction is based on $D_J^I$ only (see (3.3)), i.e. no prior knowledge $\phi_m$ and $\psi_n$

is used. Similarly, for the joint PIC prediction no prior knowledge is needed under non-informative priors, because the prior values $\phi_m$ and $\psi_n$ then disappear from $c_m$ and $b_n$ as $s_m^2, t_n^2 \to \infty$ (see Theorem 3.4).

Henceforth, it remains to provide the values for $\sigma_j$ and $\tau_j$. For the choice of these parameters we take, in this subsection, an empirical Bayesian point of view and use the standard plug-in estimators (sample standard deviation estimators, see e.g. (5.3) in Wüthrich-Merz [24]). Since the last variance parameters cannot be estimated from the data, due to the lack of observations, we use the extrapolation applied in Mack [13]. This gives the parameter choices provided in Table 2.

Table 2: Standard deviation parameter choices for $\sigma_j$ and $\tau_j$.

j       | 0      | 1      | 2      | 3      | 4     | 5     | 6      | 7     | 8      | 9
sigma_j | 0.1393 | 0.0650 | 0.0731 | 0.0640 | 0.064 | 0.071 | 0.0405 | 0.07  | 0.0494 | 0.07
tau_j   | 0.0633 | 0.0459 | 0.0415 | 0.01   | 0.0083| 0.0017| 0.0019 | 0.0011| 0.0006 |

The expected outstanding loss liabilities then provide the PIC claims reserves:

$R_i(D_J) = E[P_{i,J} \mid D_J] - P_{i,J-i}$,

if the ultimate loss prediction is based on the whole information $D_J$, and similarly $R_i(D_J^P)$ and $R_i(D_J^I)$ if it is based on $D_J^P$ and $D_J^I$, respectively. This gives the claims reserves provided in Table 3.

Table 3: Claims reserves in the Bayesian PIC Model 3.1.

i     | R_i(D_J^P) | R_i(D_J^I) | R_i(D_J)
1     | 115 470    | 337 994    | 337 799
2     | 48 7       | 31 56      | 31 686
3     | 64 664     | 331 56     | 331 890
4     | 79 344     | 1 018 94   | 1 018 308
5     | 1 84 545   | 1 10 580   | 1 104 816
6     | 1 183 781  | 1 869 84   | 1 84 669
7     | 1 69 63    | 1 990 60   | 1 953 767
8     | 407 438    | 1 465 661  | 1 60 9
9     | 07 45      | 548 4      | 40 946
total | 10 511 390 | 10 695 996 | 10 66 108

We observe that for all accident years the PIC reserves $R_i(D_J)$, based on the whole information $D_J$, lie between the estimates based on $D_J^P$ and $D_J^I$. The deviations from $R_i(D_J^I)$ are comparably small, which comes from the fact that the $\tau_j$ are small compared to the $\sigma_j$.
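Once the posterior quantities are available, the conditional MSEP of Theorem 4.1 is a plain double sum over accident years. A sketch (Python/NumPy; the arguments are placeholders for the quantities appearing in Theorem 4.1, to be filled with the model's posterior output):

```python
import numpy as np

def msep_total(E_ult, process_var, cov_quad):
    """Conditional MSEP of Theorem 4.1 for the total over accident years.

    E_ult[i]        : ultimate loss predictor E[P_{i,J} | D_J], cf. (3.7)
    process_var[i]  : (1 - beta_{J-i}) (v_J^2 - w_{J-i}^2), entering only on i = k
    cov_quad[i][k]  : quadratic form e_i Sigma(D_J) e_k' (parameter uncertainty)
    """
    E_ult = np.asarray(E_ult)
    n = len(E_ult)
    total = 0.0
    for i in range(n):
        for k in range(n):
            exponent = cov_quad[i][k] + (process_var[i] if i == k else 0.0)
            total += (np.exp(exponent) - 1.0) * E_ult[i] * E_ult[k]
    return total
```

For a single accident year this reduces to the familiar log-normal variance $(e^{\text{var}} - 1) \, E[P_{i,J} \mid D_J]^2$; the square root of the returned value is the msep$^{1/2}$ figure reported in the comparison tables.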

Table 4: Claims reserves from the CL method for claims payments and incurred losses (see Mack [13]), from the CLRM (see Dahms [4]), and from the MCL method for claims payments and incurred losses (see Quarg-Mack [17]).

i     | CL paid (Mack [13]) | CL incurred (Mack [13]) | CLRM (Dahms [4]) | MCL paid (Quarg-Mack [17]) | MCL incurred (Quarg-Mack [17])
1     | 114 086    | 337 984    | 314 90     | 104 606    | 338 00
2     | 394 11     | 31 884     | 66 994     | 457 484    | 30 850
3     | 608 749    | 331 436    | 359 384    | 664 871    | 330 05
4     | 697 74     | 1 018 350  | 981 883    | 615 436    | 1 01 361
5     | 1 34 157   | 1 103 98   | 1 115 768  | 1 71 110   | 1 10 396
6     | 1 138 63   | 1 868 664  | 1 786 947  | 919 10     | 1 894 861
7     | 1 638 793  | 1 997 651  | 1 94 518   | 1 498 163  | 00 310
8     | 359 939    | 1 418 779  | 1 569 657  | 3 181 319  | 1 30 49
9     | 1 979 401  | 556 61     | 590 718    | 1 60 089   | 703 4
total | 10 165 61  | 10 665 87  | 10 78 771  | 10 314 181 | 10 761 918

In Table 4 we provide the claims reserves estimates of other popular methods. We observe that the reserves $R_i(D_J^P)$ are close to the ones from the CL paid method (the differences can partly be explained by the variance terms $\sigma_l^2/2$ in the expected values of log-normal distributions, see (2.2)). The reserves $R_i(D_J^I)$ are very close to the ones from the CL incurred method. The PIC reserves $R_i(D_J)$ from the combined information look similar to the ones from the CLRM and the MCL incurred method. We also mention that for our data set the MCL does not really reduce the gap between the two predictions; this has also been observed previously for other data sets.

Table 5: Total claims reserves and prediction uncertainty.

method        | claims reserves | msep^{1/2}
R(D_J^P)      | 10 511 390      | 1 559 8
R(D_J^I)      | 10 695 996      | 41 98
R(D_J)        | 10 66 108       | 389 496
CL paid       | 10 165 61       | 1 517 480
CL incurred   | 10 665 87       | 455 794
CLRM paid     | 10 78 771       | 467 814
CLRM incurred | 10 78 771       | 471 873
MCL paid      | 10 314 181      | not available
MCL incurred  | 10 761 918      | not available

In Table 5 we compare the corresponding prediction uncertainties, measured by the square root of the conditional MSEP. Under Model Assumptions 3.1 these are calculated analytically with Theorems 3.2, 3.3, 3.4, 4.1 and 4.2.
First of all, we observe that in our model the PIC predictor R(D_J) has a smaller prediction uncertainty compared to R(D^P_J) and R(D^I_J). This is clear, because increasing the set of information reduces the uncertainty. Therefore, the PIC predictor R(D_J) should be preferred within Model Assumptions 3.1. Furthermore, our prediction uncertainties are comparable to the ones from the other models. We would also like to mention that in the CLRM there are two values for the prediction uncertainty, due to the fact that one can use different parameter estimators in the CLRM, see Corollary 4.4 in Dahms [4]; the claims reserves coincide.

As mentioned in Section 4, we can not only calculate the conditional MSEP, but Theorem 3.4 also allows for a Monte Carlo simulation approach to the full predictive distribution of the outstanding loss liabilities in the Bayesian PIC Model 3.1. This is done as follows: Firstly, we generate multivariate Gaussian samples Θ^(t) with mean θ_post(D_J) and covariance matrix Σ(D_J) according to Theorem 3.4. Secondly, we generate samples I^(t)_{1,J}, ..., I^(t)_{J,J}, given Θ^(t), according to Theorem 2.4.

Table 6 (figure): Empirical density from 20 000 simulations and a comparison to the Gaussian density.

In Table 6 we provide the empirical density for the outstanding loss liabilities from 20 000 simulations. Moreover, we compare it to the Gaussian density with the same mean and standard deviation (see also Table 5, line R(D_J)). We observe that these densities are very similar, the Gaussian density having slightly more probability mass in the lower tail and slightly less probability mass in the upper tail (it is less heavy tailed). To further investigate this issue we provide the Q-Q-plot in Table 7. We only observe differences in the tails as described above. The lower panel in Table 7 gives the upper tail for values above the 90% quantile. There we see that a Gaussian approximation underestimates the true risks. However, the differences are comparably small.
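The two-step simulation scheme can be sketched as follows; the posterior mean, covariance and conditional variance below are small made-up values, not the estimates from Theorem 3.4, and a single accident year stands in for the full portfolio:

```python
import numpy as np

# Two-step Monte Carlo for the predictive distribution:
# (1) draw Theta from its Gaussian posterior (Theorem 3.4),
# (2) draw the log-normal ultimate given Theta (Theorem 2.4).
rng = np.random.default_rng(0)
theta_post = np.array([0.10, 0.05])            # toy posterior mean of (Phi, Psi)
Sigma = np.array([[0.004, 0.001],
                  [0.001, 0.002]])             # toy posterior covariance Sigma(D_J)
cond_var = 0.01                                # toy Var(log P_{i,J} | D_J, Theta)
paid_to_date = 3600.0

n_sim = 20_000
theta = rng.multivariate_normal(theta_post, Sigma, size=n_sim)   # step (1)
log_mean = np.log(paid_to_date) + theta.sum(axis=1)              # toy conditional mean
ultimates = np.exp(rng.normal(log_mean, np.sqrt(cond_var)))      # step (2)
outstanding = ultimates - paid_to_date

print(outstanding.mean(), outstanding.std())
```

The empirical density and Q-Q comparisons of Tables 6 and 7 are then read off the simulated `outstanding` sample, and any risk measure (value-at-risk, expected shortfall, ...) can be evaluated on it directly.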
In Table 3 we have observed that the resulting PIC reserves R(D_J) are close to the claims reserves R(D^I_J) from incurred losses information only. The reason for this is that the standard deviation parameters τ_j for incurred losses are rather small.

Table 7 (figure): Q-Q-plot from 20 000 simulations of the outstanding loss liabilities against the Gaussian distribution. Upper panel: full Q-Q-plot; lower panel: Q-Q-plot of the upper tail above the 90% quantile.

We now calculate a second example where we double these standard deviation parameters, i.e. τ̃_j = 2 τ_j. The results are presented in Table 8. Firstly, we observe that the conditional MSEP using the information D^I_J and D_J strongly increases. This is clear, because doubling the standard deviation parameters increases the uncertainty. More interestingly, we observe that the PIC reserves for the accident years i = 2, ..., J are now closer to the claims reserves from cumulative payments, especially for young accident years. The reason for this is that increasing the τ_j parameters means that we give less credibility weight to the incurred losses observations D^I_J and more credibility weight to the claims payments observations D^P_J.

Finally, in Table 12 we provide the posterior correlation matrix of Θ = (Φ_0, ..., Φ_9, Ψ_0, ..., Ψ_8), given D_J, which can directly be calculated from the posterior covariance matrix Σ(D_J), see Theorem 3.4. We observe only small posterior correlations, which may justify the assumption of prior uncorrelatedness.

5. Full Bayesian PIC Model

A full Bayesian approach suggests that one also models the standard deviation parameters σ_j and τ_j stochastically. We therefore modify Model Assumptions 3.1 as follows: Assume that the σ_j

       res. R(D^P_J)  res. R(D^I_J)  PIC res. R(D_J)  res. R(D^P_J)  res. R(D^I_J)  PIC res. R(D_J)
       τ_j            τ_j            τ_j              τ̃_j = 2τ_j     τ̃_j = 2τ_j     τ̃_j = 2τ_j
1      115 470        337 994        337 799          115 470        338 05         337 46
2      48 7           31 56          31 686           48 7           31 574         3 1
3      64 664         331 56         331 890          64 664         331 580        333 08
4      79 344         1 018 94       1 018 308        79 344         1 019 091      1 016 637
5      1 84 545       1 10 580       1 104 816        1 84 545       1 101 948      1 110 585
6      1 183 781      1 869 84       1 84 669         1 183 781      1 869 904      1 774 059
7      1 69 63        1 990 60       1 953 767        1 69 63        1 981 419      1 88 341
8      407 438        1 465 661      1 60 9           407 438        1 581 1        1 903 155
9      07 45          548 4          40 946           07 45          549 115        4 048
total  10 511 390     10 695 996     10 66 108        10 511 390     10 803 778     10 631 310
msep^{1/2}  1 559 8   41 98          389 496          1 559 8        741 89         614 453

Table 8: Total claims reserves and prediction uncertainty for τ_j and τ̃_j = 2 τ_j.

and the τ_j have independent gamma distributions with σ_j ~ Γ(γ_{σ_j}, c_{σ_j}) and τ_j ~ Γ(γ_{τ_j}, c_{τ_j}). Of course, the choice of gamma distributions for the standard deviation parameters is rather arbitrary, and any other positive distribution would also fit. Then the posterior distribution of the parameters Θ = (Φ_0, ..., Φ_J, Ψ_0, ..., Ψ_{J−1}, σ_0, ..., σ_J, τ_0, ..., τ_{J−1}) is given by

u(Θ | D_J) ∝ l_{D_J}(Θ) ∏_{m=0}^{J} exp{ −(Φ_m − φ_m)² / (2 s_m²) } ∏_{m=0}^{J} σ_m^{γ_{σ_m} − 1} exp{ −c_{σ_m} σ_m }
             × ∏_{n=0}^{J−1} exp{ −(Ψ_n − ψ_n)² / (2 t_n²) } ∏_{n=0}^{J−1} τ_n^{γ_{τ_n} − 1} exp{ −c_{τ_n} τ_n },    (5.1)

with l_{D_J}(Θ) given in (3.5). This distribution can no longer be handled analytically, because the normalizing constant takes a non-trivial form. But because we can write down the likelihood function up to the normalizing constant, we can still apply the Markov chain Monte Carlo (MCMC) simulation methodology. MCMC methods are very popular for this kind of problem. For an introduction and overview of MCMC methods we refer to Gilks et al. [6], Asmussen-Glynn [1] and Scollnik [19]. Because MCMC methods are widely used, we refrain from describing them in detail. We will use the Metropolis-Hastings algorithm as described in Section 4.4 of Wüthrich-Merz [24]. The aim is to construct a Markov chain (Θ^(t))_{t≥0} whose stationary distribution is u(Θ | D_J).
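A random-walk Metropolis-Hastings sampler of the kind just described can be sketched in a few lines; the target below is a stand-in one-dimensional log-density, not the unnormalised PIC posterior (5.1), and the step size is an arbitrary assumption one would tune toward the desired acceptance rate:

```python
import math
import random

# Stand-in unnormalised log-posterior (standard normal); in the PIC application
# this would be the logarithm of (5.1) up to the normalizing constant.
def log_post(x):
    return -0.5 * x * x

def metropolis(n_steps, step=5.0, seed=1):
    """Random-walk Metropolis: propose x' = x + N(0, step^2), accept with
    probability min(1, u(x')/u(x)) (the proposal is symmetric, so q cancels)."""
    random.seed(seed)
    x, accepted, chain = 0.0, 0, []
    for _ in range(n_steps):
        prop = x + random.gauss(0.0, step)
        if math.log(random.random()) < log_post(prop) - log_post(x):
            x, accepted = prop, accepted + 1
        chain.append(x)
    return chain, accepted / n_steps

chain, rate = metropolis(50_000)
print(rate)  # large steps give low acceptance rates, small steps high ones
```

In practice one adjusts `step` (or a full proposal covariance) until the average acceptance rate is near the target quoted below, and discards an initial burn-in portion of `chain` before using the samples.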
Then, we run this Markov chain sufficiently long, so that we obtain approximate samples Θ^(t) from that stationary distribution. This is achieved by defining an acceptance

probability

α(Θ^(t), Θ*) = min{ 1, [ u(Θ* | D_J) q(Θ*, Θ^(t)) ] / [ u(Θ^(t) | D_J) q(Θ^(t), Θ*) ] }

for the next step in the Markov chain, i.e. for the move from Θ^(t) to Θ^(t+1). Thereby, the proposal distribution q is chosen in such a way that we obtain an average acceptance rate of roughly 24%, because this satisfies certain optimal mixing properties for Markov chains, see Roberts et al. [18], Corollary 1.2. We apply this algorithm for different coefficients of variation Vco(σ_j) = γ_{σ_j}^{−1/2} and Vco(τ_j) = γ_{τ_j}^{−1/2} of σ_j and τ_j, respectively. Moreover, we keep the means E[σ_j] = γ_{σ_j}/c_{σ_j} and E[τ_j] = γ_{τ_j}/c_{τ_j} fixed and choose them equal to the deterministic values provided in Table 2. The results are provided in Table 9.

                                    Vco(σ_j) = Vco(τ_j) =   0%          10%          100%
claims reserves R(D_J)                                      10 66 108   10 589 180   10 701 455
prediction uncertainty msep^{1/2}                           389 496     39 83        47 449

Table 9: Total claims reserves and prediction uncertainty in the Full Bayesian PIC model for different coefficients of variation for σ_j and τ_j.

We observe, as expected, an increase of the prediction uncertainty. The increase from Model 3.1 with deterministic σ_j's and τ_j's to a coefficient of variation of 10% is moderate, but the uncertainty starts to increase strongly for larger coefficients of variation.

6. Conclusions

We have defined a stochastic PIC model that simultaneously considers claims payments information and incurred losses information for the prediction of the outstanding loss liabilities, by assigning appropriate credibility weights to these different channels of information.
The benefits of our method are the following:

- it combines two different channels of information to obtain a unified ultimate loss prediction;
- for claims payments observations the CL structure is preserved, using credibility weighted correction terms to the CL factors;
- for deterministic standard deviation parameters we can calculate both the claims reserves and the conditional MSEP analytically;
- the full predictive distribution of the outstanding loss liabilities is obtained from Monte Carlo simulations, which allows one to consider any risk measure;

- for stochastic standard deviation parameters, all key figures and the full predictive distribution of the outstanding loss liabilities are obtained from the MCMC method;
- a model extension allows the inclusion of tail development factors, for details see Merz-Wüthrich [15].

A Appendix: Proofs

In this appendix we prove all the statements. The proof of Theorem 3.2 easily follows from its likelihood function and it is a special version of Theorem 6.4 in Gisler-Wüthrich [7]; therefore we omit it.

Proof of Proposition 2.2. Note that we only consider conditional distributions, given the parameter Θ, and for these conditional distributions claims in different accident years are independent. Therefore we can restrict ourselves to one fixed accident year i. The vector (log I_{i,j+l}, log I_{i,j}, log I_{i,j−1}, ..., log I_{i,0}), given Θ, has a multivariate Gaussian distribution with mean (µ_{j+l}, µ_j, µ_{j−1}, ..., µ_0) and covariance matrix Σ with elements given by, for n ≥ m and n, m ∈ {j+l, j, j−1, ..., 0},

Cov(log I_{i,n}, log I_{i,m} | Θ) = v_n².

Hence, we can apply Lemma 2.1 to the random variable log I_{i,j+l}, given {B^I_j, Θ}, with parameters m_1 = µ_{j+l}, m_2 = (µ_j, ..., µ_0), Σ_{1,1} = v²_{j+l}, Σ_{2,2} the covariance matrix of X = (log I_{i,j}, ..., log I_{i,0}), given Θ, and Σ_{1,2} = (v²_{j+l}, ..., v²_{j+l}) ∈ R^{j+1}. We obtain from Lemma 2.1 a Gaussian distribution, and there remains the calculation of the explicit parameters of this Gaussian distribution. Note that the covariance matrix Σ_{2,2} has the following form

Σ_{2,2} = ( v²_{(j+1−n) ∨ (j+1−m)} )_{1 ≤ n,m ≤ j+1},

where (j+1−n) ∨ (j+1−m) = max{j+1−n, j+1−m}. Hence, Σ_{2,2} has a fairly simple structure, which gives a nice form for its inverse

Σ_{2,2}^{−1} = ( b_{n,m} )_{1 ≤ n,m ≤ j+1},

with diagonal elements

b_{1,1} = v²_{j−1} / ( v²_j (v²_{j−1} − v²_j) ),
b_{n,n} = ( v²_{j−n} − v²_{j+2−n} ) / ( (v²_{j+1−n} − v²_{j+2−n}) (v²_{j−n} − v²_{j+1−n}) )   for n ∈ {2, ..., j},
b_{j+1,j+1} = 1 / ( v²_0 − v²_1 ),

and off-diagonal elements 0, except for the side diagonals

b_{n,n+1} = −1 / ( v²_{j−n} − v²_{j+1−n} )   for n ∈ {1, ..., j},

and its symmetric counterpart b_{n,n−1} for n ∈ {2, ..., j+1}. This matrix has the following property

Σ_{2,2}^{−1} Σ_{2,1} = ( v²_{j+l}/v²_j, 0, ..., 0 )′ ∈ R^{j+1},

from which the claim follows.

Proof of Corollary 2.3. Proposition 2.2 implies for the conditional expectation

E[I_{i,J} | B^I_j, Θ] = exp{ µ_J + (v²_J/v²_j)(log I_{i,j} − µ_j) + (1 − v²_J/v²_j) v²_J/2 }
                     = exp{ µ_J + (1 − α_j)(log I_{i,j} − µ_j) + α_j v²_J/2 }
                     = I_{i,j} exp{ Σ_{l=j}^{J−1} Ψ_l } exp{ −α_j (log I_{i,j} − µ_j) + α_j v²_J/2 }.

Finally, observe that

α_j = 1 − v²_J/v²_j = Σ_{l=j}^{J−1} τ_l² / v²_j,

because v²_j = v²_J + Σ_{l=j}^{J−1} τ_l². This completes the proof.

Proof of Theorem 2.4. The proof is similar to the one of Proposition 2.2 and uses Lemma 2.1. Again, we only consider conditional distributions, given the parameter Θ; therefore we can restrict ourselves to one fixed accident year i. Using the Markov property of cumulative payments, we see that it suffices to consider the vector (log I_{i,J}, log P_{i,j}, log I_{i,j}, ..., log I_{i,0}), given Θ,

which has a multivariate Gaussian distribution with mean (µ_J, η_j, µ_j, ..., µ_0) and covariance matrix similar to the one in Proposition 2.2, but with an additional column and row for

Cov(log P_{i,j}, log I_{i,l} | Θ) = w²_j   for l ∈ {J, j, ..., 0}.

Hence, we can apply Lemma 2.1 to the random variable log I_{i,J}, given {B_j, Θ}, with parameters m_1 = µ_J, m_2 = (η_j, µ_j, ..., µ_0), Σ_{1,1} = v²_J, Σ_{2,2} the covariance matrix of X = (log P_{i,j}, log I_{i,j}, ..., log I_{i,0}), given Θ, and Σ_{1,2} = (w²_j, v²_J, ..., v²_J) ∈ R^{j+2}. Thus, there remains the calculation of the explicit parameters of the Gaussian distribution. Note that the covariance matrix Σ_{2,2} now has the following form: w²_j for the elements in the first column and the first row, and v²_{(j+2−n) ∨ (j+2−m)} for the elements in the remaining right lower square n, m ≥ 2. Therefore, Σ_{2,2} has again a simple form, whose inverse can easily be calculated and has a similar structure to the one given in Proposition 2.2. We obtain

Σ_{2,2}^{−1} Σ_{2,1} = ( (v²_j − v²_J)/(v²_j − w²_j), (v²_J − w²_j)/(v²_j − w²_j), 0, ..., 0 )′ = ( 1 − β_j, β_j, 0, ..., 0 )′ ∈ R^{j+2}.

This implies (note that v²_j > v²_J > w²_j)

m_1 + Σ_{1,2} Σ_{2,2}^{−1} (X − m_2) = µ_J + (1 − β_j)(log P_{i,j} − η_j) + β_j (log I_{i,j} − µ_j).

Moreover, we have

Σ_{1,1} − Σ_{1,2} Σ_{2,2}^{−1} Σ_{2,1} = v²_J − (1 − β_j) w²_j − β_j v²_J = (1 − β_j)(v²_J − w²_j),

from which the claim follows.

Proof of Corollary 2.5. Theorem 2.4 implies

E[I_{i,J} | B_j, Θ] = exp{ µ_J + (1 − β_j)(log P_{i,j} − η_j) + β_j (log I_{i,j} − µ_j) + (1 − β_j)(v²_J − w²_j)/2 }
                   = P_{i,j} exp{ Σ_{l=j+1}^{J} (Φ_l + σ_l²/2) } exp{ β_j ( η_j − log P_{i,j} + log I_{i,j} − µ_j − (v²_J − w²_j)/2 ) }.
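The Gaussian conditioning step (Lemma 2.1) used in both of the proofs above can be checked numerically; the matrices below are arbitrary illustrative values, not quantities from the model:

```python
import numpy as np

# For a joint Gaussian (X1, X2), the law of X1 | X2 = x2 is Gaussian with
# mean m1 + S12 S22^{-1} (x2 - m2) and variance S11 - S12 S22^{-1} S21.
m1, m2 = 1.0, np.array([0.0, 0.0])
S11 = 2.0
S12 = np.array([0.5, 0.3])
S22 = np.array([[1.0, 0.2],
                [0.2, 1.0]])
x2 = np.array([0.4, -0.1])

S22_inv = np.linalg.inv(S22)
cond_mean = m1 + S12 @ S22_inv @ (x2 - m2)
cond_var = S11 - S12 @ S22_inv @ S12
print(cond_mean, cond_var)  # 1.1625 and 41/24 = 1.7083...
```

In the proofs, the special "max" structure of Σ_{2,2} is what makes the product Σ_{2,2}^{−1} Σ_{2,1} collapse to a vector with only one or two non-zero entries.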

Proof of Theorem 3.3. From the likelihood (3.2) it immediately follows that the posterior distribution of Ψ, given D^I_J, is Gaussian. Hence, there remains the calculation of the posterior parameters. First, we square out all the terms in order to collect the Ψ_j² and Ψ_j Ψ_n terms of the posterior distribution; this provides the inverse covariance matrix Σ(D^I_J)^{−1} and hence Σ(D^I_J). The posterior mean ψ_post(D^I_J) is then obtained by solving the posterior maximum likelihood equations for the Ψ_j. They are given by

∂ log u(Ψ | D^I_J) / ∂Ψ_j = ψ_j / t_j² − (1/τ_j²) Σ_{i=0}^{J−j−1} log( I_{i,j} / I_{i,j+1} ) − Σ_{i=J−j}^{J} log I_{i,J−i} / v²_{J−i} − Σ_{n=0}^{J−1} a^I_{j,n} Ψ_n  = 0,

from which the claim follows.

Proof of Theorem 3.4. The proof is similar to the proof of Theorem 3.3. From (3.6) it is straightforward that the posterior distribution is again a multivariate Gaussian distribution. Hence, there remains the calculation of the first two moments. The inverse covariance matrix Σ(D_J)^{−1} is obtained by collecting all the terms Φ_m², Ψ_n² and Φ_m Ψ_n in the posterior distribution u(Θ | D_J), similar to the proof of Theorem 3.3. The posterior mean is obtained by solving the posterior maximum likelihood equations for Φ_j and Ψ_j. They are given by

∂ log u(Θ | D_J) / ∂Φ_j = φ_j / s_j² + (1/σ_j²) Σ_{i=0}^{J−j} log( P_{i,j} / P_{i,j−1} ) + Σ_{i=J−j+1}^{J} ( v²_{J−i} − w²_{J−i} )^{−1} log( I_{i,J−i} / P_{i,J−i} ) − Σ_{m=0}^{J} Φ_m a_{j,m} − Σ_{m=0}^{J−1} Ψ_m a_{j,J+1+m}  = 0

and

∂ log u(Θ | D_J) / ∂Ψ_j = ψ_j / t_j² − (1/τ_j²) Σ_{i=0}^{J−j−1} log( I_{i,j} / I_{i,j+1} ) − Σ_{i=J−j}^{J} ( v²_{J−i} − w²_{J−i} )^{−1} log( I_{i,J−i} / P_{i,J−i} ) − Σ_{m=0}^{J} Φ_m a_{J+1+j,m} − Σ_{m=0}^{J−1} Ψ_m a_{J+1+j,J+1+m}  = 0.

Hence, this implies

( c_0, ..., c_J, b_0, ..., b_{J−1} )′ = Σ(D_J)^{−1} ( Φ_0, ..., Φ_J, Ψ_0, ..., Ψ_{J−1} )′,

from which the claim follows.

Proof of Theorem 4.1. We use the usual decomposition for the variance, given by

Var( Σ_{i=1}^{J} P_{i,J} | D_J ) = Var( E[ Σ_{i=1}^{J} P_{i,J} | D_J, Θ ] | D_J ) + E[ Var( Σ_{i=1}^{J} P_{i,J} | D_J, Θ ) | D_J ].

For the second term we use that the accident years are independent, conditionally given Θ. Using Theorem 2.4 this leads to

E[ Var( Σ_{i=1}^{J} P_{i,J} | D_J, Θ ) | D_J ] = Σ_{i=1}^{J} E[ Var( P_{i,J} | D_J, Θ ) | D_J ]
= Σ_{i=1}^{J} E[ E[ P_{i,J} | D_J, Θ ]² ( e^{(1−β_{J−i})(v²_J − w²_{J−i})} − 1 ) | D_J ]
= Σ_{i=1}^{J} ( e^{(1−β_{J−i})(v²_J − w²_{J−i})} − 1 ) E[ E[ P_{i,J} | D_J, Θ ]² | D_J ].

For the first term we obtain

Var( E[ Σ_{i=1}^{J} P_{i,J} | D_J, Θ ] | D_J ) = Var( Σ_{i=1}^{J} E[ P_{i,J} | D_J, Θ ] | D_J )
= Σ_{1 ≤ i,k ≤ J} ( E[ E[ P_{i,J} | D_J, Θ ] E[ P_{k,J} | D_J, Θ ] | D_J ] − E[ P_{i,J} | D_J ] E[ P_{k,J} | D_J ] ).

Hence, this provides

Var( Σ_{i=1}^{J} P_{i,J} | D_J ) = Σ_{i=1}^{J} e^{(1−β_{J−i})(v²_J − w²_{J−i})} E[ E[ P_{i,J} | D_J, Θ ]² | D_J ]
+ 2 Σ_{1 ≤ i < k ≤ J} E[ E[ P_{i,J} | D_J, Θ ] E[ P_{k,J} | D_J, Θ ] | D_J ] − Σ_{1 ≤ i,k ≤ J} E[ P_{i,J} | D_J ] E[ P_{k,J} | D_J ].

Using Theorem 3.4 provides the claim, since

E[ E[ P_{i,J} | D_J, Θ ] E[ P_{k,J} | D_J, Θ ] | D_J ] = E[ P_{i,J} | D_J ] E[ P_{k,J} | D_J ] exp{ e_i′ Σ(D_J) e_k }.

The proof of Theorem 4.2 is similar to the proof of Theorem 4.1.
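The decomposition at the start of this proof is the law of total variance; a quick Monte Carlo check on a toy log-normal model (all values illustrative, not the paper's) might look like:

```python
import numpy as np

# Check Var(Y) = Var(E[Y | Theta]) + E[Var(Y | Theta)] for Y = exp(theta + eps),
# theta ~ N(0.1, 0.05^2) playing the role of the parameter, eps ~ N(0, 0.2^2).
rng = np.random.default_rng(42)
n = 200_000
theta = rng.normal(0.1, 0.05, size=n)
y = np.exp(theta + rng.normal(0.0, 0.2, size=n))

# log-normal conditional moments, analogous to the e^{...} - 1 factor above:
# E[Y|theta] = exp(theta + 0.02), Var(Y|theta) = exp(2*theta + 0.04)*(e^{0.04} - 1)
var_of_mean = np.var(np.exp(theta + 0.02))
mean_of_var = np.mean(np.exp(2 * theta + 0.04) * (np.exp(0.04) - 1))
print(np.var(y), var_of_mean + mean_of_var)  # the two sides agree up to MC error
```

In the theorem the same split is applied to the sum over accident years, with the cross terms handled via the posterior covariance Σ(D_J).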

References

[1] Asmussen, S., Glynn, P.W. (2007). Stochastic Simulation. Springer.
[2] Bühlmann, H., De Felice, M., Gisler, A., Moriconi, F., Wüthrich, M.V. (2009). Recursive credibility formula for chain ladder factors and the claims development result. Astin Bulletin 39/1, 275-306.
[3] Bühlmann, H., Gisler, A. (2005). A Course in Credibility Theory and its Applications. Springer Universitext.
[4] Dahms, R. (2008). A loss reserving method for incomplete claim data. Bulletin Swiss Association of Actuaries 2008, 127-148.
[5] Dahms, R., Merz, M., Wüthrich, M.V. (2009). Claims development result for combined claims incurred and claims paid data. Bulletin Français d'Actuariat 9/18, 5-39.
[6] Gilks, W.R., Richardson, S., Spiegelhalter, D.J. (1996). Markov Chain Monte Carlo in Practice. Chapman & Hall.
[7] Gisler, A., Wüthrich, M.V. (2008). Credibility for the chain ladder reserving method. Astin Bulletin 38/2, 565-600.
[8] Gogol, D. (1993). Using expected loss ratios in reserving. Insurance: Math. Econom. 12/3, 297-299.
[9] Halliwell, L.J. (1997). Conjoint prediction of paid and incurred losses. CAS Forum Summer 1997, 241-379.
[10] Halliwell, L.J. (2009). Modeling paid and incurred losses together. CAS E-Forum Spring 2009.
[11] Hertig, J. (1985). A statistical approach to the IBNR-reserves in marine insurance. Astin Bulletin 15/2, 171-183.
[12] Liu, H., Verrall, R.J. (2008). Bootstrap estimation of the predictive distributions of reserves using paid and incurred claims. Conference paper, 38th Astin Colloquium 2008, Manchester, UK.
[13] Mack, T. (1993). Distribution-free calculation of the standard error of chain ladder reserve estimates. Astin Bulletin 23/2, 213-225.
[14] Merz, M., Wüthrich, M.V. (2006). A credibility approach to the Munich chain-ladder method. Blätter DGVFM, Band XXVII, 619-628.
[15] Merz, M., Wüthrich, M.V. (2010). Estimation of tail factors in the paid-incurred chain reserving method. Submitted preprint.
[16] Posthuma, B., Cator, E.A., Veerkamp, W., van Zwet, E.W. (2008). Combined analysis of paid and incurred losses. CAS E-Forum Fall 2008, 7-93.
[17] Quarg, G., Mack, T. (2004). Munich chain ladder. Blätter DGVFM, Band XXVI, 597-630.
[18] Roberts, G.O., Gelman, A., Gilks, W.R. (1997). Weak convergence and optimal scaling of random walk Metropolis algorithms. Annals of Applied Probability 7, 110-120.
[19] Scollnik, D.P.M. (2001). Actuarial modeling with MCMC and BUGS. North American Actuarial J. 5/2, 96-125.
[20] Venter, G.G. (2008). Distribution and value of reserves using paid and incurred triangles. CAS E-Forum Fall 2008, 348-375.
[21] van Dyk, D.A., Meng, X.-L. (2001). The art of data augmentation. J. Computational and Graphical Statistics 10/1, 1-50.
[22] Verdier, B., Klinger, A. (2005). JAB Chain: a model-based calculation of paid and incurred loss development factors. Conference paper, 36th Astin Colloquium 2005, Zürich, Switzerland.
[23] Williams, D. (1991). Probability with Martingales. Cambridge Mathematical Textbooks.
[24] Wüthrich, M.V., Merz, M. (2008). Stochastic Claims Reserving Methods in Insurance. Wiley.

Data

i\j  0  1  2  3  4  5  6  7  8  9

0 1 16 63 1 347 07 1 786 877 81 606 656 4 909 307 3 83 388 3 587 549 3 754 403 3 91 58 1 798 94 1 051 91 1 15 785 1 349 939 1 655 31 1 96 10 13 833 87 311 567 056 1 115 636 1 387 387 1 930 867 177 00 513 171 931 930 3 047 368 3 18 511 3 1 05 161 1 31 06 1 700 13 1 971 303 98 349 645 113 3 003 45 4 808 864 1 09 53 1 9 66 1 590 338 1 84 66 150 351 5 1 016 86 1 51 40 1 698 05 105 143 385 339 6 948 31 1 108 791 1 315 54 1 487 577 7 917 530 1 08 46 1 484 405 8 1 001 38 1 376 14 9 841 930

Table 10: Observed cumulative payments P_{i,j}.

i\j  0  1  2  3  4  5  6  7  8  9

0 3 36 115 5 17 43 4 754 900 4 381 677 4 136 883 4 094 140 4 018 736 3 971 591 3 941 391 3 91 58 1 640 443 4 643 860 3 869 954 3 48 558 3 10 00 3 019 980 976 064 946 941 919 955 879 697 4 785 531 4 045 448 3 467 8 3 377 540 3 341 934 3 83 98 3 57 87 3 933 345 5 99 146 4 451 963 3 700 809 3 553 391 3 469 505 3 413 91 4 768 181 4 658 933 3 936 455 3 51 735 3 385 19 3 98 998 5 3 8 439 5 71 304 4 484 946 3 798 384 3 70 47 6 97 033 5 067 768 4 066 56 3 704 113 7 3 083 49 4 790 944 4 408 097 8 761 163 4 13 757 9 3 045 376

Table 11: Observed incurred losses I_{i,j}.

     Φ0    Φ1    Φ2    Φ3    Φ4    Φ5    Φ6    Φ7    Φ8    Φ9    Ψ0    Ψ1    Ψ2    Ψ3    Ψ4    Ψ5    Ψ6    Ψ7    Ψ8
Φ0   100%  0%    0%    0%    0%    0%    0%    0%    0%    0%    0%    0%    0%    0%    0%    0%    0%    0%    0%
Φ1   0%    100%  -2%   -1%   -1%   0%    -1%   0%    -1%   0%    1%    1%    1%    0%    0%    0%    0%    0%    0%
Φ2   0%    -2%   100%  -4%   -2%   -1%   -2%   -1%   -2%   -1%   2%    3%    3%    1%    0%    0%    0%    0%    0%
Φ3   0%    -1%   -4%   100%  -3%   -3%   -4%   -2%   -4%   -1%   1%    3%    5%    1%    1%    0%    0%    0%    0%
Φ4   0%    -1%   -2%   -3%   100%  -2%   -3%   -2%   -4%   -1%   0%    1%    2%    1%    1%    0%    0%    0%    0%
Φ5   0%    0%    -1%   -3%   -2%   100%  -6%   -3%   -7%   -2%   0%    1%    2%    1%    1%    0%    0%    0%    0%
Φ6   0%    -1%   -2%   -4%   -3%   -6%   100%  -8%   -18%  -6%   1%    1%    2%    2%    2%    1%    1%    0%    0%
Φ7   0%    0%    -1%   -2%   -2%   -3%   -8%   100%  -18%  -6%   0%    1%    1%    1%    1%    0%    1%    0%    0%
Φ8   0%    -1%   -2%   -4%   -4%   -7%   -18%  -18%  100%  -34%  1%    1%    3%    2%    2%    1%    1%    2%    1%
Φ9   0%    0%    -1%   -1%   -1%   -2%   -6%   -6%   -34%  100%  0%    0%    1%    1%    1%    0%    1%    1%    2%
Ψ0   0%    1%    2%    1%    0%    0%    1%    0%    1%    0%    100%  -1%   -1%   0%    0%    0%    0%    0%    0%
Ψ1   0%    1%    3%    3%    1%    1%    1%    1%    1%    0%    -1%   100%  -2%   0%    0%    0%    0%    0%    0%
Ψ2   0%    1%    3%    5%    2%    2%    2%    1%    3%    1%    -1%   -2%   100%  -1%   -1%   0%    0%    0%    0%
Ψ3   0%    0%    1%    1%    1%    1%    2%    1%    2%    1%    0%    0%    -1%   100%  0%    0%    0%    0%    0%
Ψ4   0%    0%    0%    1%    1%    1%    2%    1%    2%    1%    0%    0%    -1%   0%    100%  0%    0%    0%    0%
Ψ5   0%    0%    0%    0%    0%    0%    1%    0%    1%    0%    0%    0%    0%    0%    0%    100%  0%    0%    0%
Ψ6   0%    0%    0%    0%    0%    0%    1%    1%    1%    1%    0%    0%    0%    0%    0%    0%    100%  0%    0%
Ψ7   0%    0%    0%    0%    0%    0%    0%    0%    2%    1%    0%    0%    0%    0%    0%    0%    0%    100%  0%
Ψ8   0%    0%    0%    0%    0%    0%    0%    0%    1%    2%    0%    0%    0%    0%    0%    0%    0%    0%    100%

Table 12: Posterior correlation matrix corresponding to Σ(D_J).