Bonus-malus systems and Markov chains

Dutch car insurance bonus-malus system

                                     new class after # claims
    class   % of basic premium      0      1      2     ≥3
      14          30               14      9      5      1
      13          32.5             14      8      4      1
      12          35               13      8      4      1
      11          37.5             12      7      3      1
      10          40               11      7      3      1
       9          45               10      6      2      1
       8          50                9      5      1      1
       7          55                8      4      1      1
       6          60                7      3      1      1
       5          70                6      1      1      1
       4          80                5      1      1      1
       3          90                4      1      1      1
       2         100                3      1      1      1
       1         120                2      1      1      1

New policyholders enter at the 100% level (class 2). The second column gives the premium as a percentage of the basic premium, i.e. the increase or decrease relative to the basic premium, which itself is determined by rating factors such as the price of the car, its horsepower, etc.

Definition and basic properties of Markov chains

Definition: A stochastic process X = {X_t, t = 0, 1, 2, ...} in discrete time is called a (first-order) Markov chain if

    P(X_{t+1} = j | X_t = i, X_{t-1} = i_{t-1}, ..., X_0 = i_0) = P(X_{t+1} = j | X_t = i)

for all j, i, i_{t-1}, ..., i_0 ∈ E, where E is the state space of the Markov chain.

The probability p_{ij}(t) = P(X_{t+1} = j | X_t = i) is called the (one-step) transition probability.

The matrix P(t) = (p_{ij}(t)) is called the transition matrix or matrix of transition probabilities. It is a stochastic matrix, since all rows sum to one. If p_{ij}(t) ≡ p_{ij} (and therefore P(t) ≡ P), the Markov chain is called homogeneous.

The probabilities

    p_{ij}^{(h)}(t) = P(X_{t+h} = j | X_t = i)

are called h-step transition probabilities; in particular, p_{ij}(t) ≡ p_{ij}^{(1)}(t). Similarly, the matrix

    P^{(h)}(t) = (p_{ij}^{(h)}(t))

is called the h-step transition matrix.

Remarks:
1. The distribution of X_{t+1} is conditionally independent of the past, given the current value X_t.
2. The above definition is only valid for discrete-valued random variables, i.e. if the state space E is countable. We will only consider discrete Markov chains.
3. Markov chains can be generalised to continuous time and are then called (discrete) Markov processes.

Distribution of a Markov chain: Given the transition probabilities p_{ij} and a starting distribution p_i^{(0)} = P(X_0 = i), the distribution of a Markov chain is uniquely determined.

The h-step transition probabilities are determined by the Chapman-Kolmogorov equations

    P^{(h+l)} = P^{(h)} P^{(l)},

and therefore P^{(h)} = P^h = P · ... · P for a homogeneous chain. In terms of the individual transition probabilities, the Chapman-Kolmogorov equations read

    p_{ij}^{(h+l)} = \sum_{k ∈ E} p_{ik}^{(h)} p_{kj}^{(l)}.

Based on the h-step transition probabilities, we obtain

    P(X_{t_n} = i_n, X_{t_{n-1}} = i_{n-1}, ..., X_{t_1} = i_1, X_0 = i_0)
        = p_{i_0}^{(0)} p_{i_0 i_1}^{(t_1)} p_{i_1 i_2}^{(t_2 - t_1)} · ... · p_{i_{n-1} i_n}^{(t_n - t_{n-1})}.
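As a numerical illustration of the Chapman-Kolmogorov equations, consider the following minimal Python sketch. The 3×3 transition matrix is an arbitrary example chosen only for illustration (it is not one of the bonus-malus systems discussed in these notes); any stochastic matrix works the same way.

```python
import numpy as np

# Arbitrary 3-state transition matrix, for illustration only (rows sum to one).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])
assert np.allclose(P.sum(axis=1), 1.0)   # stochastic matrix

# Chapman-Kolmogorov for a homogeneous chain: P^(h+l) = P^(h) P^(l), hence P^(h) = P^h.
h, l = 2, 3
lhs = np.linalg.matrix_power(P, h + l)
rhs = np.linalg.matrix_power(P, h) @ np.linalg.matrix_power(P, l)
assert np.allclose(lhs, rhs)

# Distribution after t steps from a starting distribution p^(0):  p^(t) = p^(0) P^t.
p0 = np.array([1.0, 0.0, 0.0])           # start in state 1 with probability one
print(p0 @ np.linalg.matrix_power(P, 5))
```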

Example: Bonus-malus system

Bonus-malus systems can be considered as homogeneous Markov chains with a finite state space E of bonus-malus classes. Suppose that the l classes are ordered such that the corresponding premiums are decreasing, i.e. π_1 ≥ π_2 ≥ ... ≥ π_l. The first class is sometimes called the super malus class and the last class the super bonus class.

Frequently a Poisson distribution is used to model the transition probabilities within a bonus-malus system. To be more specific, the Poisson distribution describes the number of claims for an individual, and the transition probabilities are determined from this claim frequency distribution.

German bonus-malus system (up to 2002)

                                new class after # claims
    class   premium rate (%)     0      1      2      3     ≥4
      18         200            13     18     18     18     18
      17         200            13     18     18     18     18
      16         175            13     17     18     18     18
      15         175            13     16     17     18     18
      14         125            13     16     17     18     18
      13         100            12     14     16     17     18
      12          85            11     13     14     16     18
      11          70            10     13     14     16     18
      10          65             9     12     13     14     18
       9          60             8     11     13     14     18
       8          55             7     11     13     14     18
       7          50             6     11     13     14     18
       6          45             5     11     13     14     18
       5          40             4     10     12     13     18
       4          40             3      9     11     13     18
       3          40             2      8     11     13     18
       2          40             1      7     11     13     18
       1          40             1      7     11     13     18
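The transitions in the table above are driven entirely by the probabilities of 0, 1, 2, 3 and ≥ 4 claims in a year. A minimal Python sketch of these claim-count probabilities; the claim frequency λ = 0.1 is an assumed value chosen purely for illustration.

```python
from math import exp, factorial

lam = 0.1   # annual claim frequency; value assumed here only for illustration

def p_claims(k, lam=lam):
    """P(N = k) for a Poisson(lam) number of claims N in one year."""
    return lam ** k * exp(-lam) / factorial(k)

def p_claims_at_least(k, lam=lam):
    """P(N >= k) = 1 - P(N <= k - 1)."""
    return 1.0 - sum(p_claims(i, lam) for i in range(k))

# Probabilities of exactly 0, 1, 2, 3 claims and of >= 4 claims,
# i.e. the events that determine the class transitions in the table above.
print([round(p_claims(k), 6) for k in range(4)], round(p_claims_at_least(4), 8))
```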

Transition matrix for the German bonus-malus system (up to 2002)

 i/j    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17   18
   1  {0}    .    .    .    .    .  {1}    .    .    .  {2}    .  {3}    .    .    .    .  {4,5,...}
   2  {0}    .    .    .    .    .  {1}    .    .    .  {2}    .  {3}    .    .    .    .  {4,5,...}
   3    .  {0}    .    .    .    .    .  {1}    .    .  {2}    .  {3}    .    .    .    .  {4,5,...}
   4    .    .  {0}    .    .    .    .    .  {1}    .  {2}    .  {3}    .    .    .    .  {4,5,...}
   5    .    .    .  {0}    .    .    .    .    .  {1}    .  {2}  {3}    .    .    .    .  {4,5,...}
   6    .    .    .    .  {0}    .    .    .    .    .  {1}    .  {2}  {3}    .    .    .  {4,5,...}
   7    .    .    .    .    .  {0}    .    .    .    .  {1}    .  {2}  {3}    .    .    .  {4,5,...}
   8    .    .    .    .    .    .  {0}    .    .    .  {1}    .  {2}  {3}    .    .    .  {4,5,...}
   9    .    .    .    .    .    .    .  {0}    .    .  {1}    .  {2}  {3}    .    .    .  {4,5,...}
  10    .    .    .    .    .    .    .    .  {0}    .    .  {1}  {2}  {3}    .    .    .  {4,5,...}
  11    .    .    .    .    .    .    .    .    .  {0}    .    .  {1}  {2}    .  {3}    .  {4,5,...}
  12    .    .    .    .    .    .    .    .    .    .  {0}    .  {1}  {2}    .  {3}    .  {4,5,...}
  13    .    .    .    .    .    .    .    .    .    .    .  {0}    .  {1}    .  {2}  {3}  {4,5,...}
  14    .    .    .    .    .    .    .    .    .    .    .    .  {0}    .    .  {1}  {2}  {3,4,...}
  15    .    .    .    .    .    .    .    .    .    .    .    .  {0}    .    .  {1}  {2}  {3,4,...}
  16    .    .    .    .    .    .    .    .    .    .    .    .  {0}    .    .    .  {1}  {2,3,...}
  17    .    .    .    .    .    .    .    .    .    .    .    .  {0}    .    .    .    .  {1,2,...}
  18    .    .    .    .    .    .    .    .    .    .    .    .  {0}    .    .    .    .  {1,2,...}

where

    1. a dot (.) stands for the transition probability 0,
    2. {k} = p_k = (λ^k / k!) e^{-λ}, the Poisson probability of exactly k claims,
    3. {k, k+1, ...} = \sum_{i=k}^{∞} p_i.
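In code, this matrix can be assembled directly from the reclassification table and the Poisson claim-count probabilities. A minimal sketch in Python with numpy; λ = 0.1 is again an assumed value used only for illustration.

```python
import numpy as np
from math import exp, factorial

lam = 0.1  # assumed claim frequency, chosen only for illustration

# new_class[i] = class reached from class i after 0, 1, 2, 3 and >= 4 claims
# (reclassification rules of the German bonus-malus system up to 2002).
new_class = {
    18: (13, 18, 18, 18, 18), 17: (13, 18, 18, 18, 18), 16: (13, 17, 18, 18, 18),
    15: (13, 16, 17, 18, 18), 14: (13, 16, 17, 18, 18), 13: (12, 14, 16, 17, 18),
    12: (11, 13, 14, 16, 18), 11: (10, 13, 14, 16, 18), 10: (9, 12, 13, 14, 18),
     9: (8, 11, 13, 14, 18),   8: (7, 11, 13, 14, 18),   7: (6, 11, 13, 14, 18),
     6: (5, 11, 13, 14, 18),   5: (4, 10, 12, 13, 18),   4: (3, 9, 11, 13, 18),
     3: (2, 8, 11, 13, 18),    2: (1, 7, 11, 13, 18),    1: (1, 7, 11, 13, 18),
}

def p_k(k):
    """{k}: Poisson probability of exactly k claims."""
    return lam ** k * exp(-lam) / factorial(k)

P = np.zeros((18, 18))
for i, targets in new_class.items():
    for k in range(4):                                   # exactly 0, 1, 2, 3 claims
        P[i - 1, targets[k] - 1] += p_k(k)
    P[i - 1, targets[4] - 1] += 1.0 - sum(p_k(k) for k in range(4))   # {4, 5, ...}

assert np.allclose(P.sum(axis=1), 1.0)                   # every row sums to one
```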

Remarks:
1. The classes with premium rate ≤ 100% are called bonus classes; the classes with premium rate > 100% are malus classes.
2. In the German bonus-malus system, the classes are called Schadenfreiheitsklassen (claim-free classes).
3. Since 2003 Germany has a new bonus-malus system with 23 bonus classes and 3 malus classes (and with the order of the classes reversed, as in the Dutch example above).

Stationary distribution of a Markov chain

Under regularity conditions on the transition matrix P, the linear system of equations

    μ = μ P

has a unique, strictly positive solution μ = (μ_j, j ∈ E) with \sum_{j ∈ E} μ_j = 1. This solution μ is called the stationary distribution of the Markov chain, since

    μ_j = lim_{t → ∞} p_{ij}^{(t)}

for every starting state i, and for any starting distribution p^{(0)}

    lim_{t → ∞} p^{(0)} P^t = μ.

Remarks:
1. If p^{(0)} = μ, then p^{(t)} = p^{(0)} P^t = μ for all t, i.e. the state probabilities are constant over time. This is the reason why μ is called the stationary distribution.
2. The stationary distribution can be computed as

       μ = 𝟙 (I − P + Q)^{-1},

   where 𝟙 = (1, ..., 1) is the row vector of ones, I is the identity matrix and Q is the matrix with all entries equal to one.
3. In a bonus-malus system, the stationary distribution represents the percentage of drivers in the different classes after the bonus-malus system has been run for a long time.
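As an illustration of Remark 2, a minimal Python sketch of the formula μ = 𝟙 (I − P + Q)^{-1}. The 3×3 matrix below is an arbitrary example; the same function can be applied to the 18×18 matrix P built in the earlier sketch to obtain the long-run distribution of drivers over the bonus-malus classes.

```python
import numpy as np

def stationary_distribution(P):
    """Solve mu = mu P via  mu = ones (I - P + Q)^{-1},  Q the all-ones matrix."""
    n = P.shape[0]
    Q = np.ones((n, n))
    return np.ones(n) @ np.linalg.inv(np.eye(n) - P + Q)

# Arbitrary regular stochastic matrix, for illustration only.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

mu = stationary_distribution(P)
assert np.allclose(mu, mu @ P) and np.isclose(mu.sum(), 1.0)
print(np.round(mu, 4))   # long-run share of time spent in each state
```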