Model Risk Within Operational Risk




Model Risk Within Operational Risk
Santiago Carrillo, QRR; Ali Samad-Khan, Stamford Risk Analytics
June 4, 2015. Reinventing Risk Management

Challenges in Operational Risk Modelling
Santiago Carrillo Menéndez, Alberto Suárez González (RiskLab-Madrid, Quantitative Risk Research)
International Model Risk Management Conference, Toronto, June 4, 2015
Outline: A Non Well Defined Framework; The Loss Distribution Approach Framework; The Percentile 99.9%; Threshold Effect; Conclusions

A Non Well Defined Framework
The definition of the framework for the AMA approach in Basel II is quite bare.
- The four components: "Any risk measurement system must have certain key features to meet the supervisory soundness standard set out in this section. These elements must include the use of internal data, relevant external data, scenario analysis and factors reflecting the business environment and internal control systems (BEICFs)."
- External data: "A bank's operational risk measurement system must use relevant external data..., especially when there is reason to believe that the bank is exposed to infrequent, yet potentially severe, losses.... A bank must have a systematic process for determining the situations for which external data must be used and the methodologies used to incorporate the data..."
- Fat tails and the 99.9th percentile: "A bank must be able to demonstrate that its approach captures potentially severe tail loss events. Whatever approach is used, a bank must demonstrate that its operational risk measure meets a soundness standard comparable to that of the internal ratings based approach for credit risk (i.e. comparable to a one year holding period and a 99.9th percentile confidence interval)."

The Loss Distribution Approach I
With time, it became clear that "advanced models" means, essentially, the LDA (loss distribution approach). That is:
- Define a severity distribution and an (independent) frequency distribution.
- Combine them into the (yearly) aggregate loss distribution.
- The risk measure is the 99.9th percentile of this aggregate loss distribution.
(A minimal simulation sketch follows.)
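As an illustration only (not part of the original slides), here is a minimal Monte Carlo sketch of the LDA calculation, assuming the Poisson(λ = 200) frequency and lognormal(µ = 10, σ = 2.5) severity used in the examples later in the deck; the 99.9th percentile of the simulated annual aggregate loss is the OpVaR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters, taken from the examples later in the deck
lam, mu, sigma = 200, 10.0, 2.5     # Poisson frequency, lognormal severity
n_years = 100_000                   # simulated years (more iterations reduce simulation error)

# Aggregate loss per simulated year: sum of N lognormal severities, N ~ Poisson(lam)
counts = rng.poisson(lam, size=n_years)
agg = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

# The LDA risk measure is the 99.9th percentile of the annual aggregate loss
print(f"Simulated 99.9% OpVaR: {np.quantile(agg, 0.999):.3e}")
```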

The Loss Distribution Approach II
In this context, model risk is the risk of having a model which provides a wrong estimate of this 99.9th percentile. Nevertheless, several issues have appeared:
- Scarcity of data: loss events are not a time series.
- How to weight the different sources of data?
- How to use external data? Scaled or not scaled? If not scaled, there is additional complexity in choosing the weights.
- No specification of how to use scenarios.
In addition, the use of thresholds may induce serious biases in capital estimation. Each of the steps above is a potential source of model risk. It is necessary to remove these inconveniences in order to have sound AMA approaches with all the benefits they bring to risk management. It is feasible: many financial institutions have done a lot of work in this direction.

The Percentile 99.9%
One of the biggest issues in the LDA approach is the accurate calculation of the 99.9th percentile of the aggregate loss distribution. This requires a precise fit of the tail of the severity distribution. In practice:
- Data are sparse in the tails.
- Different distributions may offer similar goodness of fit to the data, with very different results in terms of capital: lognormals with high sigma, g-and-h distributions, the Pareto distribution.
- The goal is to extrapolate the shape of the severity distribution far into the tail, based on knowledge of part of the body/tail.
- It may be very difficult (for lack of sufficient data far in the tail) to distinguish between the different candidates.
Model error is a real threat, even more so when high thresholds are used.

Severity Uncertainty I
Because of the sensitivity of capital to the tail of the severity distribution, we need the best possible choice for the tail. There may not be enough empirical evidence to select among model distributions with very different asymptotic behavior. In particular, we may have distributions with a good fit to the empirical data which assign non-negligible probabilities to losses bigger than, e.g., ten times the total assets of the bank (or more). Does it make sense to even consider such distributions? Let us consider, for example:
- a lognormal (µ = 10, σ = 2.5) (the histogram in the following figure);
- a piecewise-defined distribution with a lognormal body and a g-and-h tail (a = 0.5, b = 5 × 10^4, g = 2.25, h = 0.25; 15% of the data, u_0 = 3 × 10^5);
- a piecewise-defined distribution with a lognormal body and a generalized Pareto tail (u = u_0, β = 5 × 10^5, ξ = 1; 15% of the data, u_0 = 3 × 10^5).
We are considering really heavy-tailed distributions.

Severity Uncertainty II
In the following figures we compare the lognormal distribution (histogram) with the lognormal + g-and-h (left) and lognormal + generalized Pareto (right) alternatives. These distributions are very similar except very far in the tails, yet their asymptotic behaviors are very different. The resulting OpVaR values associated with these distributions (λ = 200) are 1.42 × 10^9 (lognormal), 6.21 × 10^9 (g-and-h) and 1.54 × 10^10 (generalized Pareto). A simulation sketch of this comparison follows.
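A minimal sketch (my illustration, not the presenters' code) of how such a comparison can be run, assuming the parameters quoted above: the spliced severity keeps lognormal draws below u_0 = 3 × 10^5 and replaces draws above u_0 (roughly 15% of them) by a generalized Pareto excess over u_0. Estimates at the 99.9% level are very noisy for such heavy tails, so many more simulated years would be needed for stable figures.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)

lam, mu, sigma = 200, 10.0, 2.5     # Poisson frequency and lognormal body, as on the slides
u0, xi, beta = 3e5, 1.0, 5e5        # splice threshold and generalized Pareto tail parameters
n_years = 50_000                    # 99.9% estimates remain noisy at this simulation size

def annual_totals(spliced_tail: bool) -> np.ndarray:
    totals = np.empty(n_years)
    for i, n in enumerate(rng.poisson(lam, size=n_years)):
        x = rng.lognormal(mu, sigma, size=n)
        if spliced_tail:
            # losses above u0 (about 15% of draws) become u0 plus a generalized Pareto excess
            m = x > u0
            x[m] = u0 + genpareto.rvs(c=xi, scale=beta, size=m.sum(), random_state=rng)
        totals[i] = x.sum()
    return totals

for label, spliced in [("lognormal", False), ("lognormal body + GPD tail", True)]:
    print(f"{label:26s} 99.9% OpVaR ~ {np.quantile(annual_totals(spliced), 0.999):.2e}")
```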

Reasons for the Use of Truncated Distributions
How can this issue be solved? Economic and regulatory capital should make economic sense. It seems reasonable that the sequence (now or in the future) Basic Approach, Standard Approach, Advanced Models Approach should lead to a reduction in capital requirements, reflecting the substantial work done to improve management. The use of heavy-tailed distributions in an LDA framework may lead to results with no economic interpretation: infinite expected losses, very unstable estimates of CaR values, diverging estimates of conditional CaR values. The losses of a bank cannot be arbitrarily large. The use of right-truncated distributions makes it possible to overcome these problems. It is worth pointing out that the right truncation level may be very high. The determination of this level is an open question; we propose to treat it as an economic parameter for the fine tuning of models.

The Framework
We reconsider the previous model. The frequency of losses is Poisson with an average of 200 losses per year (λ = 200). For the severity, we use the three distributions previously indicated (same parameters):
- a lognormal;
- a piecewise-defined distribution with a lognormal body and a g-and-h tail;
- a piecewise-defined distribution with a lognormal body and a generalized Pareto tail.
Several levels of right-truncation are considered, from 2 million to 2 billion. This approach has several benefits:
- faster convergence to the true parameters (see the appendix figures);
- a more robust estimation of capital;
- smaller differences between the different models, that is, smaller differences induced by unobserved, virtual losses.
A sketch of sampling from a right-truncated severity follows.
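A minimal sketch (my illustration, not from the slides) of sampling from a right-truncated severity by inverse-CDF conditioning: a uniform draw is rescaled to [0, F(T)] before applying the quantile function, so no simulated loss exceeds the truncation level T.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(2)

mu, sigma = 10.0, 2.5                          # lognormal severity from the slides
sev = lognorm(s=sigma, scale=np.exp(mu))

def sample_right_truncated(n: int, T: float) -> np.ndarray:
    """Draw n losses from the severity conditioned on being <= T."""
    u = rng.uniform(0.0, sev.cdf(T), size=n)   # uniform on [0, F(T)]
    return sev.ppf(u)                          # invert the CDF

x = sample_right_truncated(100_000, T=2e9)     # truncation level of 2 billion
print(x.max() <= 2e9, x.mean())
```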

Summary
These results are summarized in the table (capital as a function of the right-truncation threshold).

Threshold       LN            LN + GH       LN + GP
2 × 10^6        4.01 × 10^7   4.03 × 10^7   4.07 × 10^7
4 × 10^6        5.86 × 10^7   5.90 × 10^7   5.82 × 10^7
1 × 10^7        9.25 × 10^7   9.45 × 10^7   8.93 × 10^7
2 × 10^7        1.27 × 10^8   1.35 × 10^8   1.23 × 10^8
4 × 10^7        1.71 × 10^8   1.88 × 10^8   1.68 × 10^8
1 × 10^8        2.54 × 10^8   2.92 × 10^8   2.67 × 10^8
2 × 10^8        3.50 × 10^8   4.22 × 10^8   3.89 × 10^8
4 × 10^8        4.91 × 10^8   6.11 × 10^8   5.78 × 10^8
1 × 10^9        8.76 × 10^8   1.08 × 10^9   1.06 × 10^9
2 × 10^9        1.23 × 10^9   1.84 × 10^9   1.89 × 10^9
No truncation   1.42 × 10^9   6.21 × 10^9   1.54 × 10^10

Recapitulation
- The use of right-truncated distributions allows more stable estimates of economic/regulatory capital, and the differences between the various models are less pronounced.
- For very heavy-tailed distributions, the capital estimates are less sensitive to variations in the truncation level than to sampling fluctuations when there is no truncation.
- The fact that they imply less economic capital suggests that a careful analysis is needed in order to determine a reasonable truncation level. One possibility is to attach a multiplying factor to this level of capital; this level could be tied, in particular, to the quality of management (controls) and of results (real empirical data).
- In the case of business lines/risk types, one possibility is to take the capital required for the business line under the Standard Approach as a cap: a bank having a greater loss in its database would not be allowed a smaller amount of capital. However, this procedure by itself does not guarantee that the capital is smaller than under the Standard Approach, which is not unusual.

Critical Issues when Using the Pareto Distribution (I): ξ = 0.6
When fitting a Pareto distribution to actual loss data, it is usual to get high values of ξ (even greater than 1). The parameter estimates in the Pareto fit are unstable, and the (absolute) fluctuations of economic capital are very large. To illustrate this, we generate 30 Pareto-distributed (ξ = 0.6) events greater than €10,000 each quarter and refit the Pareto at the end of each quarter using all accumulated data. It does not seem to be an acceptable solution.

Critical Issues when Using the Pareto Distribution (II): ξ = 1.1
For ξ > 1, the situation is even more dramatic: the expected value of the losses is infinite, so one should expect consistency problems in the calculation of economic capital. With real data it is very easy to obtain extremely unrealistic amounts of capital. In general, Pareto fits tend to overestimate capital-at-risk. A simulation sketch of this fitting instability follows.
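A minimal sketch (my illustration of the experiment described on the previous slide) of the instability: 30 generalized Pareto excesses over €10,000 arrive each quarter, and the shape parameter ξ is re-estimated on all accumulated data at the end of each quarter. The scale parameter beta is an assumption; the slide only fixes ξ.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)

xi_true, beta, u = 0.6, 10_000.0, 10_000.0    # beta is an assumed scale; the slide only fixes xi
losses = np.empty(0)

for quarter in range(1, 21):                   # five years of quarterly refits
    new = u + genpareto.rvs(c=xi_true, scale=beta, size=30, random_state=rng)
    losses = np.concatenate([losses, new])
    # refit a generalized Pareto to the excesses over u using all accumulated data
    xi_hat, _, beta_hat = genpareto.fit(losses - u, floc=0.0)
    print(f"Q{quarter:02d}: n={losses.size:4d}  xi_hat={xi_hat:5.2f}  beta_hat={beta_hat:9.0f}")
```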

Threshold Effect
To illustrate the effect of varying the left (collection) threshold on model error, we generate 10,000 lognormal random numbers for different values of σ (µ = 10), assuming λ = 2,000. Two threshold levels are chosen (3,000 and 10,000). We present the best-fitting distribution in each case (judged on the worse of the KS and AD p-values).

σ      Best fit, u = 3,000   Best fit, u = 10,000
1.00   LN-gamma              Gamma mixture
1.25   g-and-h               g-and-h
1.50   g-and-h               Burr
1.75   LN-gamma              LN-gamma
2.00   LN                    Lognormal
2.25   LN                    LN mixture
2.50   LN mixture            g-and-h
2.75   Burr                  Burr
3.00   Burr                  Burr

What could happen with less well defined data?

Impact on Capital
The impact on capital depends on the frequency of events. The frequency distribution must be corrected to take into account the probability mass of the losses below the left censoring threshold (see the sketch after the table). For high frequencies, the impact on capital may be very large:

       u = 3,000               u = 10,000
σ      CaR        error        CaR        error
1.00   8.07E+07    -0.01%      8.27E+07     2.37%
1.25   1.15E+08     0.29%      1.15E+08    -0.26%
1.50   1.85E+08     3.04%      2.31E+08    28.52%
1.75   2.73E+08   -12.08%      2.74E+08   -11.70%
2.00   7.17E+08     0.43%      7.18E+08     0.50%
2.25   1.99E+09     6.94%      1.85E+09    -0.74%
2.50   3.25E+09   -30.54%      3.42E+09   -26.92%
2.75   9.59E+10   462.20%      4.66E+10   173.29%
3.00   1.89E+11   201.73%      2.24E+11   257.60%
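A minimal sketch (my illustration) of the frequency correction mentioned above: if only losses above a collection threshold u are recorded, the observed Poisson rate understates the true rate by the factor 1 − F(u), so the rate used in the capital simulation must be scaled back up. The lognormal parameters are one of the (µ, σ) combinations used in the slide's experiment.

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma = 10.0, 2.0          # lognormal severity (one of the sigma values in the table)
lam_full = 2000.0              # true annual frequency of all losses, as in the experiment
u = 10_000.0                   # left collection threshold

sev = lognorm(s=sigma, scale=np.exp(mu))
p_above = sev.sf(u)                     # P(loss > u)
lam_observed = lam_full * p_above       # rate of recorded (above-threshold) losses

# Going the other way: recover the full rate from the observed one
lam_recovered = lam_observed / p_above
print(f"P(loss > u) = {p_above:.3f}")
print(f"observed rate = {lam_observed:.1f}, corrected full rate = {lam_recovered:.1f}")
```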

The Body Effect I
Should single-loss events determine the economic capital? In a subexponential framework, high percentiles of the aggregate loss distribution are explained by a single extreme loss or a small number of large losses. The value 99.9% is a very high percentile. Is it high enough to make the body (the part under the threshold) of the distribution irrelevant for the CaR calculation? To answer this question, we perform the following simulation. Random values of the loss severity are generated from a lognormal distribution (µ = 5, σ = 2). Different thresholds (u), determined by the tail probability (p), are chosen. Three cases are considered:
- case 0: the empirical data;
- case 1: the losses under the threshold u are all set equal to 0;
- case 2: the losses under the threshold u are all set equal to u.

The Body Effect II
The results for the CaR (in thousands) are displayed in the following table; for conditional CaR the tendencies are similar. A simulation sketch of this experiment follows the table.

λ        p     u      VaR_0    VaR_1    VaR_2    (VaR_2 − VaR_1)/VaR_0
200      0.50  149     1,253    1,249    1,263     1.14%
200      0.20  797     1,249    1,223    1,349    10.09%
200      0.10  1,946   1,256    1,204    1,557    28.05%
2,000    0.50  149     4,909    4,858    5,010     3.10%
2,000    0.20  797     4,896    4,624    5,897    25.98%
2,000    0.10  1,946   4,911    4,399    7,903    71.36%
20,000   0.50  149    28,655   28,126   29,653     5.32%
20,000   0.20  797    28,567   25,853   38,660    44.83%
20,000   0.10  1,946  28,727   23,519   58,630   122.22%
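A minimal sketch (my illustration of the experiment just described, at a smaller simulation size): losses are lognormal(µ = 5, σ = 2), and the body of each year's losses is either kept (case 0), zeroed out (case 1), or set to the threshold u (case 2) before computing the 99.9% quantile of the annual total.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(4)

mu, sigma = 5.0, 2.0
sev = lognorm(s=sigma, scale=np.exp(mu))
lam, p = 200, 0.20
u = sev.ppf(1.0 - p)                  # threshold with tail probability p
n_years = 50_000

caps = {0: np.empty(n_years), 1: np.empty(n_years), 2: np.empty(n_years)}
for i in range(n_years):
    x = rng.lognormal(mu, sigma, size=rng.poisson(lam))
    body = x < u
    x1 = np.where(body, 0.0, x)       # case 1: body losses set to 0
    x2 = np.where(body, u, x)         # case 2: body losses set to u
    caps[0][i], caps[1][i], caps[2][i] = x.sum(), x1.sum(), x2.sum()

v0, v1, v2 = (np.quantile(caps[k], 0.999) for k in (0, 1, 2))
print(f"VaR0={v0:.0f}  VaR1={v1:.0f}  VaR2={v2:.0f}  (VaR2-VaR1)/VaR0={(v2 - v1) / v0:.2%}")
```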

The Body Effect III
Conclusions:
- For low frequencies, the contribution of the body of the distribution is not decisive. Nevertheless, the higher the frequency, the larger the contribution of the body of the distribution.
- The asymptotic (single-loss) approximation works well for small frequencies.
- Note, however, that in these experiments the probability mass of the losses under the threshold is the same in all cases. If the probability mass of the body were extrapolated from the tail fit, these figures would show a much larger variation.
From a practical point of view, this suggests collecting data with a low truncation threshold, or at least having a realistic estimate of the probability mass under the threshold. This may pay off handsomely in the management of expected losses and in the negotiation of insurance premiums, i.e. in cost reduction.

Conclusions
We have indicated some of the possible causes of model risk in operational risk, and proposed techniques which help to mitigate this type of risk. They are probably not the only possibilities, but they are solutions that make AMA models more robust and accurate. Thank you.

Part I: Appendix. Stability of the fitting of truncated distributions.

[Figure: the lognormal distribution, truncated case, at truncation levels 2 × 10^8 and 2 × 10^9.]

[Figure: the lognormal distribution, non-truncated case.]

[Figure: the lognormal + g-and-h distribution, truncated case, at truncation levels 2 × 10^8 and 2 × 10^9.]

[Figure: the lognormal + g-and-h distribution, non-truncated case.]

[Figure: the lognormal + generalized Pareto distribution, truncated case, at truncation levels 2 × 10^8 and 2 × 10^9.]

[Figure: the lognormal + generalized Pareto distribution, non-truncated case.]

Model Risk Within Operational Risk: Issues and Challenges
International Model Risk Management Conference, June 2-4, 2015, Toronto, Canada
Ali Samad-Khan, ali.samad-khan@stamfordrisk.com, www.stamfordrisk.com
Reinventing Risk Management

OpRisk Modeling Challenges: Key Issues
- Can operational risk be measured accurately at the 99.9% confidence level, with a one year time horizon?
- Should internal loss data be the primary data source for modeling operational risk?
- Is it feasible to use loss scenarios and their corresponding probabilities to estimate operational risk under the AMA Scenario Based Approach?
- Is it practical, or even feasible, to weight the four AMA data elements (Internal Loss Data, External Loss Data, Scenario Analysis and BEICFs) in arriving at an operational risk capital estimate?

Background: Understanding the difference between aggregate and single loss exceedence curves.
[Exhibit: individual loss events are organized into a risk matrix (Basel business lines by event types, with the number, mean and standard deviation of losses in each cell), from which intermediate and final loss distributions are built; the annual event frequency distribution and the single event severity distribution (LEC) are shown alongside.]

One can use a pair of frequency and severity distributions to estimate an annual aggregate loss distribution.
[Exhibit: an annual event frequency distribution and a single event severity distribution (LEC) are combined via Monte Carlo simulation into the annual aggregate loss distribution.]

One can also use a pair of frequency and severity distributions to estimate an Annualized (Single) Loss Exceedence Curve, or ALEC.
[Exhibit: the single event severity distribution (LEC) and the annual event frequency distribution are combined to form the ALEC.]
A base ALEC can be estimated through the formula ALEC(x) = λ × LEC(x): the annual frequency of events exceeding the loss threshold x equals the average annual event frequency times the probability that a single loss exceeds x. (A short calculation sketch follows.)
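A minimal sketch (my illustration) of the base ALEC formula, assuming a lognormal LEC with the parameters quoted later in the deck and an all-loss frequency λ0 implied by the λ = 0.2 rate at the $1,000 threshold quoted there: the annual exceedence frequency at a threshold is λ0 times the severity exceedence probability, and under a Poisson frequency the probability of one or more exceedences in a year is 1 − exp(−ALEC(x)).

```python
import numpy as np
from scipy.stats import lognorm

# Assumed inputs: lognormal severity and implied Poisson frequency at the zero threshold
mu, sigma = 9.102649, 1.416347     # severity parameters quoted later in the deck
lam0 = 0.2129                      # implied annual frequency of losses of any size (assumption)

sev = lognorm(s=sigma, scale=np.exp(mu))
for x in (1_000, 10_000, 25_000):
    alec = lam0 * sev.sf(x)                    # base ALEC: annual frequency of exceedences of x
    p_one_or_more = 1.0 - np.exp(-alec)        # Poisson probability of >= 1 exceedence per year
    print(f"x = {x:>6,}: ALEC = {alec:.4f}/yr, P(>=1 exceedence in a year) = {p_one_or_more:.2%}")
```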

What is model risk in an operational risk context?
In order to estimate the degree of model risk one must know the true (actual) level of risk. Model risk is the risk that the perceived level of risk may be different from the actual level of risk.*
[Exhibit: an annualized loss exceedence curve (probability of an exceedence vs. loss amount in $ billions, log scale) showing the gap at the 99.9% level between the perceived risk curve and the actual risk curve, labeled the degree of model risk.]
*Actual risk is a nebulous concept. For the purposes of this discussion we define actual risk as the most precise estimate of risk that can be made based on the best theoretical framework/model and data available at that time.

What are the sources of model risk within operational risk models?
[Exhibit: Impact of Model Error on Model Risk, with the magnitude of model error plotted by type of model error. Conceptual flaws include a fundamental misconception of the business problem, an inappropriate theoretical framework and inappropriate severity distributions used; precision errors include insufficient simulation iterations, imprecise distribution fitting and data quality issues.]

Accuracy and Precision: Conceptual Flaws vs. Precision Errors
[Exhibit: conceptual flaws and precision errors contrasted in terms of accuracy and precision.]

Banks are required to estimate operational risk capital at the 99.9% confidence level, with a one year time horizon.
"...a bank using the AMA must demonstrate to the satisfaction of its primary Federal supervisor that its systems for managing and measuring operational risk meet established standards, including producing an estimate of operational risk exposure that meets a one-year, 99.9th percentile soundness standard." (Federal Register / Vol. 72, No. 235 / Friday, December 7, 2007 / Rules and Regulations)

OpRisk Modeling Challenges: Key Questions
- Why a 99.9% confidence level? Because the safety and soundness of the banking sector is critically important to the smooth operation of the global economy, banks must be held to a higher safety standard. Under this standard, assuming there are 1,000 banks in the industry and each bank is like every other bank, then, on average, only 1 in 1,000 banks will exceed its capital reserve in any given year.
- What does it mean to model risk at the 99.9% confidence level, with a one year time horizon? A 0.1% probability of exceedence in the typical year, or a 0.1% probability of exceedence in any given year (a 1 in 1,000 year event)?
- How should one determine data sufficiency where the goal is to model risk at the 99.9% confidence level, with a one year time horizon?
- What is the most important source of loss data for modeling risk at the 99.9% confidence level, with a one year time horizon: internal data or external data?

Suppose you want to build a house at the beach. How far up the beach should you build this house if you are willing to tolerate minimal risk of loss at the 99.9% level, with a one year time horizon? Question: what is the height of the 1 in 1,000 year wave?
[Exhibit: ten years of empirical data (1,000,000 incidents), annotated with VaR 99.9%, TVaR 99.9% and Stress VaR 99.9%.]

Wind driven waves are irrelevant to this analysis; the only relevant data are tsunami waves. But suppose a tsunami wave takes place, on average, once every ten years. Then in an average ten year sample you have only one relevant data point. It is difficult to estimate the height of the 1 in 1,000 year wave with only ten years of data, because these data will contain only wind driven waves and perhaps one or two tsunamis, which will be viewed as black swan events or outliers.
Where the data are heterogeneous, what is a better measure of data sufficiency: the number of data points, or the number of years in the data collection period?
[Exhibit: wind driven waves vs. tsunami waves, with the 99.9% level VaR for the typical year contrasted with the actual 99.9% level VaR for any given year.]

The trade-off between accuracy and precision.
[Exhibit: the 99.9% VaR with no tsunamis (precision) versus the 99.9% VaR including tsunamis (accuracy).]

If you want to base your analysis on ten years of data you must believe that fraud data are i.i.d., in other words that junior staff frauds have the same properties as all other types of frauds, including senior manager frauds.
Where the data are heterogeneous, modeling risk at the 99.9% confidence level, with a one year time horizon, essentially means modeling the large, infrequent events. If you had 1,000 years of data, you would not need a model to estimate risk at the 99.9% confidence level, with a one year time horizon: the largest loss would be a reasonable approximation for the 1 in 1,000 year event.
[Exhibit: a severity distribution with junior staff frauds in the body and senior manager frauds in the tail; the 99.9% confidence level describes only a rough location on the x-axis, with a confidence interval around it.]

If you want to base your analysis on ten years of data you must believe that fraud data are i.i.d., in other words that junior staff frauds have the same properties as all other types of frauds, including senior manager frauds.
The reason one needs external data is to determine more precisely how often a $10M, $50M and $100M loss exceedence takes place. Using data collected over 100 years, one can observe how often large loss exceedences actually take place:
$10,000,000: 20 events (1 in 5 years)
$50,000,000: 10 events (1 in 10 years)
$100,000,000: 5 events (1 in 20 years)
One can then fit these data to a curve and extrapolate the 1 in 1,000 year event.
[Exhibit: junior staff frauds vs. senior manager frauds; the 99.9% confidence level describes a rough location on the x-axis, with a confidence interval around it.]

If you want to base your analysis on ten years of data you must believe that fraud data are i.i.d., in other words that junior staff frauds have the same properties as all other types of frauds, including senior manager frauds.
To model operational risk at the 99.9% level, one must follow a two-step process:
- Since one needs about 100 years of loss data, one must pool one's internal loss data with peer (external) loss data to create an industry loss data set (10 firms x 10 years = 100 firm years). Using such data one can estimate the risk profile of the average peer firm.
- After that one can use objective metrics, such as the ratio of the mean internal loss to the mean peer loss (per matrix cell), to estimate a firm-specific VaR.
Relevant external loss data does not mean relevant individual data points. Relevant data are all data (for each matrix cell) from all firms that have a similar risk profile (peer firms), without any modification whatsoever.

Modeling operational risk at the 99.9% confidence level, with a one year time horizon, with internal data only, produces results with a very large confidence interval.
[Exhibit: annualized loss exceedence curves (probability of an exceedence vs. loss amount in $ billions, log scale) fitted to internal data with no outliers and to internal data with some outliers, bracketing the actual risk with a wide confidence interval.]

Modeling operational risk at the 99.9% confidence level, with a one year time horizon, with industry data (internal + external data), produces results with a smaller confidence interval.
[Exhibit: annualized loss exceedence curves fitted to industry data with few outliers and to industry data with many outliers, bracketing the actual risk with a narrower confidence interval.]

How can one model operational risk at the 99.9% confidence level, with a one year time horizon, where the data are truncated, heterogeneous, subject to clustering and contain outliers?

The Traditional Property & Casualty Loss Modeling Approach
For severity fitting, the best-practices approach is to fit one's loss data to a range of theoretical distributions using maximum likelihood estimation (MLE).
[Exhibit: raw loss event data (100 years) are independently fit to a frequency distribution and a severity distribution, which are then combined by Monte Carlo simulation in the VaR calculation.]

Where the data are truncated, a modified form of MLE can be used to estimate parameters for distributions specified from zero.
Maximum Likelihood Estimation (MLE) is a process used to fit empirical data to a theoretical distribution. Given a pre-specified theoretical distribution, MLE is used to find the set of parameters that have the maximum likelihood (probability) of describing the empirical data set. Generally, the likelihood function is the density function, but where loss data are censored or truncated (not collected above or below a certain threshold), an adjustment is required. In order to accommodate truncated data, the likelihood function must be modified to describe the conditional likelihood. For left-truncated data, we do this by taking the original density function, in this case the lognormal, and dividing it by the probability of the data being drawn from above the threshold, as shown below:
f_u(x; µ, σ) = f(x; µ, σ) / (1 − F(u; µ, σ)), for x > u,
where f and F are the lognormal density and cumulative distribution function and u is the collection threshold. (A fitting sketch follows.)
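A minimal sketch (my illustration, not the presenter's implementation) of fitting a lognormal to left-truncated data by maximizing the conditional likelihood above; the threshold u and the "true" parameters are arbitrary assumptions for the demonstration.

```python
import numpy as np
from scipy.stats import lognorm
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Simulate losses from a "true" lognormal and keep only those above the collection threshold u
mu_true, sigma_true, u = 9.0, 2.0, 10_000.0
full = rng.lognormal(mu_true, sigma_true, size=20_000)
obs = full[full > u]

def neg_log_likelihood(params: np.ndarray) -> float:
    mu, log_sigma = params
    sigma = np.exp(log_sigma)                       # keep sigma positive
    d = lognorm(s=sigma, scale=np.exp(mu))
    # conditional (left-truncated) log-likelihood: log f(x) - log(1 - F(u))
    return -(d.logpdf(obs) - np.log(d.sf(u))).sum()

res = minimize(neg_log_likelihood, x0=np.array([np.log(obs).mean(), 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"true (mu, sigma) = ({mu_true}, {sigma_true}); fitted = ({mu_hat:.3f}, {sigma_hat:.3f})")
```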

However, where the data are heterogeneous, subject to clustering and contain outliers, severity fitting at high loss thresholds using the modified form of MLE can be problematic.

The ALEC Method is a newly invented computational technology that allows one to fit frequency and severity distributions simultaneously.
[Exhibit: raw loss event data (100 years) are expressed as annualized loss exceedence curve values at multiple thresholds; the combined best-fit frequency and severity distribution parameters are then derived simultaneously and used in the Monte Carlo VaR calculation. Patent pending.]

Where the data are truncated, heterogeneous, subject to clustering and contain outliers, the ALEC method, when implemented correctly, appears to be a more reliable means of fitting loss data to theoretical distributions.
Frequency and severity distributions are interdependent. In other words, if the severity fit is not precise, then λ_0 (the implied frequency at the zero threshold) will also not be precise. Therefore, fitting frequency and severity simultaneously (both together) produces more reliable results than fitting frequency and severity independently or sequentially (one at a time). This is particularly relevant where the data are truncated, heterogeneous, subject to clustering and contain outliers. (Patent pending.)

The Scenario Based Approach to Modeling Operational Risk
"Scenario analysis under the advanced approaches rule is a systematic process of obtaining expert opinions from business managers and risk-management experts to derive reasoned assessments of the likelihood and loss impact of plausible, high-severity operational losses. Scenario analysis may include the well-reasoned evaluation and use of external operational loss event data adjusted, as appropriate, to ensure relevance to a bank's operational-risk profile and control structure. Scenario analysis provides a forward-looking view of operational risk that complements historical internal and external data. The scenario analysis process and its output are key risk-management tools that are especially relevant for assessing potential risks to which the bank may be exposed. Scenarios are typically developed through workshops that produce multiple scenarios at both the line of business and enterprise levels. Scenario development exercises allow subject matter experts to identify potential operational events and their impacts. Such exercises allow those experts to better prepare to identify and manage the risk exposures through business decisions, risk mitigation efforts, and capital planning. Inclusion of scenario data with other data elements in internal risk-management reporting can support development of a comprehensive operational risk profile of the bank." (INTERAGENCY GUIDANCE ON THE ADVANCED MEASUREMENT APPROACHES FOR OPERATIONAL RISK, June 3, 2011)

The Scenario Based Approach is based on the assumption that a risk curve, such as an LEC, consists of points that comprise individual loss scenarios and corresponding probabilities, which is false.
Scenarios describe individual loss incidents (e.g., a senior manager embezzles $10,000,000). However, risk curve fitting involves estimating the probability of exceedence at specified loss thresholds (e.g., $10,000,000) for a type of event (e.g., fraud, which includes all fraud incidents irrespective of scenarios). Each scenario (a specific point on a curve) has a zero probability of occurrence: 1 out of infinity.
Many scenario workshop procedures call for business experts to identify plausible large loss scenarios and then guess their corresponding probabilities. These are not only the wrong questions (the questions are based on several false premises), the answers are unknowable.

To estimate a risk curve one must estimate the probability of loss exceedence at multiple loss thresholds for a type of event (risk class). These loss thresholds have nothing to do with scenarios.
Relevant question: What is the probability of an event exceeding the $1,000, $10,000 and $25,000 loss thresholds from any type of fraud loss? For example, 20%, 10% and 5%, respectively. (But how does one estimate these probabilities?)
[Exhibit: curve fitting for fraud risk, plotting probability against loss amount ($ thousands), with the 20%, 10% and 5% exceedence probabilities marked at the $1,000, $10,000 and $25,000 thresholds; an individual scenario is just one point on the curve.]

Based on industry data one can estimate the number of event exceedences at multiple loss thresholds, then calculate the corresponding 1 in N year exceedences, and subsequently the probabilities.

Loss Threshold (T)   Expected Event Exceedences   1 in N Year      Probability of One or More Losses
                     per 100 Years                Frequency (N)    Exceeding T in Any Given Year
$1,000               20                           5                18.13%
$10,000              10                           10               9.52%
$25,000              5                            20               4.88%

To convert frequency into probability, one must assume a frequency distribution and then calculate the corresponding probability. The table gives, for an event that occurs on average once every N years, the mean annual frequency (1/N), the likelihood of exactly one event in a single year (Prob. of 1 event) and the likelihood of one or more events in a single year (1 − Prob. of 0 events). (A short calculation sketch follows the table.)

Years (N)   Mean Frequency (1/N)   P(exactly 1 event in a year)   P(1 or more events in a year)
1000        0.001                  0.000999                       0.001000
500         0.002                  0.001996                       0.001998
200         0.005                  0.004975                       0.004988
100         0.010                  0.009900                       0.009950
75          0.013                  0.013157                       0.013245
50          0.020                  0.019604                       0.019801
40          0.025                  0.024383                       0.024690
30          0.033                  0.032241                       0.032784
25          0.040                  0.038432                       0.039211
20          0.050                  0.047561                       0.048771
10          0.10                   0.090484                       0.095163
5           0.20                   0.163746                       0.181269
4           0.25                   0.194700                       0.221199
3           0.33                   0.238844                       0.283469
2.5         0.4                    0.268128                       0.329680
2           0.5                    0.303265                       0.393469
1           1                      0.367879                       0.632121
0.5         2                      0.270671                       0.864665
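A minimal sketch (my illustration) reproducing a few rows of the table; the numbers in the table are consistent with a Poisson frequency assumption, under which the likelihood of exactly one event in a year is λ·exp(−λ) and of one or more events is 1 − exp(−λ), with λ = 1/N.

```python
import math

for n_years in (1000, 100, 20, 5, 1):
    lam = 1.0 / n_years                        # mean annual frequency for a 1-in-N-year event
    p_exactly_one = lam * math.exp(-lam)       # Poisson probability of exactly one event
    p_one_or_more = 1.0 - math.exp(-lam)       # Poisson probability of one or more events
    print(f"N={n_years:>5}: 1/N={lam:.3f}  P(exactly 1)={p_exactly_one:.6f}  P(>=1)={p_one_or_more:.6f}")
```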

But what type of curve should we be trying to fit?
- The LEC, which is a severity distribution; it describes, if a loss were to take place, the probability that that loss will exceed $X. A severity distribution has no time element.
- The ALEC, which is a combined frequency-severity distribution; it describes the probability of a loss (one or more events) exceeding $X in any given year.
No one has any intuition for LEC probabilities. However, with industry data one can estimate annual event exceedences, and then calculate 1 in N year exceedences and subsequently probabilities.

It is theoretically possible to estimate an entire ALEC curve using as few as three loss exceedence inputs (points on the curve). (However, additional inputs will likely produce more precise results.)
[Exhibit: an annualized loss exceedence curve (probability of one or more exceedences vs. loss amount, $ thousands) fitted through three points: 20 events > $1,000 every 100 years (a 1-in-5 year event), 10 events > $10,000 every 100 years (a 1-in-10 year event), and 5 events > $25,000 every 100 years (a 1-in-20 year event). Patent pending.]
(A fitting sketch follows.)
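A minimal sketch (my illustration, not the patented ALEC implementation) of recovering a Poisson frequency and a lognormal severity from the three exceedence inputs above, by solving λ0 · (1 − F(T)) = annual exceedence frequency at each threshold; with these inputs it reproduces, up to numerical precision, the parameters quoted on a later slide.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import lognorm

# Three ALEC inputs: (loss threshold, annual frequency of exceedences)
thresholds = np.array([1_000.0, 10_000.0, 25_000.0])
freqs = np.array([0.20, 0.10, 0.05])     # 20, 10 and 5 exceedences per 100 years

def residuals(params: np.ndarray) -> np.ndarray:
    log_lam0, mu, log_sigma = params
    sev = lognorm(s=np.exp(log_sigma), scale=np.exp(mu))
    # model: annual exceedence frequency at T is lam0 * P(single loss > T)
    model = np.exp(log_lam0) * sev.sf(thresholds)
    return np.log(model) - np.log(freqs)

fit = least_squares(residuals, x0=np.array([np.log(0.25), 9.0, np.log(1.5)]))
lam0, mu_hat, sigma_hat = np.exp(fit.x[0]), fit.x[1], np.exp(fit.x[2])
print(f"lambda_0 = {lam0:.4f}, lognormal mu = {mu_hat:.6f}, sigma = {sigma_hat:.6f}")
```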

After deriving the combined best-fit frequency and severity parameters, one can extrapolate and/or simulate the relevant risk metrics at the desired confidence level.

In order to derive the actual risk parameters by fitting the LEC directly, one would have to correctly estimate the (highly unintuitive) severity probabilities.
ALEC: Poisson λ = 0.2 at T = 1,000; Lognormal (µ, σ) = (9.102649, 1.416347). LEC: Lognormal (µ, σ) = (9.102649, 1.416347).

Loss Threshold (T)   Probability of a Loss Exceeding T   If a Loss Were to Take Place, the Probability
                     in Any Given Year (ALEC)            of that Loss Exceeding T (LEC)
$1,000               18.13%                              93.9392%
$10,000              9.52%                               46.9699%
$25,000              4.88%                               23.4848%

(A short consistency check follows.)
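A minimal sketch (my illustration) tying the two columns together, assuming the lognormal describes the full severity from zero: the LEC column is the lognormal survival probability, the implied all-loss frequency is λ0 = 0.2 / S(1,000), and the ALEC column is 1 − exp(−λ0 · S(T)).

```python
import math
from scipy.stats import lognorm

mu, sigma = 9.102649, 1.416347
sev = lognorm(s=sigma, scale=math.exp(mu))

lam_at_1000 = 0.2
lam0 = lam_at_1000 / sev.sf(1_000)            # implied annual frequency of losses of any size

for t in (1_000, 10_000, 25_000):
    lec = sev.sf(t)                           # severity exceedence probability (LEC)
    alec = 1.0 - math.exp(-lam0 * lec)        # annual probability of >= 1 exceedence (ALEC)
    print(f"T = {t:>6,}: LEC = {lec:.4%}, ALEC = {alec:.2%}")
```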

It is feasible to model operational risk at the 99.9% level, with a one year time horizon, using industry data. If done correctly, the results should be accurate, even though they may not be precise. However, it is not practical to manage risk exclusively at the 1-in-1,000 year event level.
"...a bank using the AMA must demonstrate to the satisfaction of its primary Federal supervisor that its systems for managing and measuring operational risk meet established standards, including producing an estimate of operational risk exposure that meets a one-year, 99.9th percentile soundness standard." (Federal Register / Vol. 72, No. 235 / Friday, December 7, 2007 / Rules and Regulations)
"The bank's internal operational risk measurement system must be closely integrated into the day-to-day risk management processes of the bank. Its output must be an integral part of the process of monitoring and controlling the bank's operational risk profile. For instance, this information must play a prominent role in risk reporting, management reporting, internal capital allocation, and risk analysis. The bank must have techniques for allocating operational risk capital to major business lines and for creating incentives to improve the management of operational risk throughout the firm." (Basel Committee on Banking Supervision, International Convergence of Capital Measurement and Capital Standards, June 2006)

Internal data is a key element for modeling operational risk at the 99.9% confidence level, with a one year time horizon, but not the most important element.
"While the advanced approaches rule provides flexibility in a bank's use and weighting of the four data elements, internal data is a key element in operational risk management and quantification." (INTERAGENCY GUIDANCE ON THE ADVANCED MEASUREMENT APPROACHES FOR OPERATIONAL RISK, June 3, 2011)
"One common misconception in ORM is that internal modeling means modeling primarily with internal loss data. External data are almost always necessary for modeling..." (Society of Actuaries, A New Approach for Managing Operational Risk: Addressing the Issues Underlying the 2008 Global Financial Crisis, updated July 2010)
A ten-year internal data set cannot be used to reliably extrapolate the 1-in-1,000 year event where the data are heterogeneous, truncated, subject to clustering and contain outliers.

The Scenario Based Approach has nothing to do with loss scenarios and their corresponding probabilities.
"Scenario analysis under the advanced approaches rule is a systematic process of obtaining expert opinions from business managers and risk-management experts to derive reasoned assessments of the likelihood and loss impact of plausible, high-severity operational losses. Scenario analysis may include the well-reasoned evaluation and use of external operational loss event data adjusted, as appropriate, to ensure relevance to a bank's operational-risk profile and control structure. Scenario analysis provides a forward-looking view of operational risk that complements historical internal and external data. The scenario analysis process and its output are key risk-management tools that are especially relevant for assessing potential risks to which the bank may be exposed." (INTERAGENCY GUIDANCE ON THE ADVANCED MEASUREMENT APPROACHES FOR OPERATIONAL RISK, June 3, 2011)
To estimate risk curves, the starting point should be good quality industry data. Modeling experts must first transform industry data (internal + external data) into event exceedences at specified loss thresholds (irrespective of scenarios), and then calculate the corresponding 1 in N year exceedences and subsequently the corresponding probabilities. After that, business managers can be brought in to determine whether the risk curve should be objectively and systematically modified based on observed changes in Business and Internal Control Environment Factors (BEICFs).

It is not practical, or even feasible, to literally weight the four required AMA data elements.
"The advanced approaches rule requires that a bank's operational risk data and assessment systems capture operational risks to which the firm is exposed. These systems must include credible, transparent, systematic, and verifiable processes on an ongoing basis that incorporate the four required AMA data elements of (1) internal operational loss event data, (2) external operational loss event data, (3) scenario analysis, and (4) BEICFs. The advanced approaches rule also requires that a bank's operational risk quantification systems generate estimates of the bank's operational risk exposure using the bank's operational risk data and assessment systems, and include a credible, transparent, systematic, and verifiable approach for weighting each of the four elements." (INTERAGENCY GUIDANCE ON THE ADVANCED MEASUREMENT APPROACHES FOR OPERATIONAL RISK, June 3, 2011)
Weighting model elements is almost invariably an arbitrary process. An operational risk model should follow a two-step process. The initial results should be based on industry loss data (which contain more precise information on tail frequency); here, the inputs can be based on raw external data or expert opinion ("scenarios") informed by such data. Objective metrics (e.g., the ratio of mean internal to external loss) should then be used to bring these results, which would represent the historical risk profile of the average peer institution, into line with the current risk profile of the bank as part of the BEICF process.

Operational risk modeling practices have been converging, but not necessarily in the right direction.
"As new methods and tools are developed, the agencies anticipate that the operational risk discipline will continue to mature and converge toward a narrower range of effective risk management and measurement practices." (INTERAGENCY GUIDANCE ON THE ADVANCED MEASUREMENT APPROACHES FOR OPERATIONAL RISK, June 3, 2011)
The source of innovation has not been the major banks. Many large banks have developed operational risk models which are conceptually flawed and add no tangible value. Nevertheless, because the large banks are generally deemed to be the experts in operational risk modeling, many have mistakenly assumed that these highly questionable models represent the standard for industry best practices. The time has come for the regulators to comprehensively examine what works and what does not work, and subsequently to recraft several vague and potentially misleading regulations so as to encourage the use of sound operational risk measurement and management practices and, more importantly, to discourage the use of unsound ones.

What is the moral of this story? Before you can find the solution, you must first understand the problem!

Biographical Information: Ali Samad-Khan
Ali Samad-Khan is Founder and President of Stamford Risk Analytics. He has over 15 years of experience in risk management and more than 25 years of experience in financial services and consulting. Ali has advised more than 100 of the world's leading banks; insurance, energy and transportation companies; and national and international regulators on a full range of risk management issues. Key elements of his Modern ERM/ORM framework and methodology have been adopted by leading institutions around the world. Recognized globally as a thought leader in the risk management space, Ali's provocative articles and white papers have served as a catalyst for change in the way organizations manage risk. For his pioneering work in this field, Ali was named one of the 100 most influential people in finance by Treasury & Risk Management magazine. Ali is also a charter member of Who's Who in Risk Management.
Prior to founding Stamford Risk Analytics, Ali served as a Principal in the ERM Practice at Towers Perrin (now Towers Watson), where he was also Global Head of Operational Risk Management Consulting. Previously, Ali was Founder and President of OpRisk Analytics LLC, a software provider, which was acquired by SAS. Before that Ali worked at PricewaterhouseCoopers in New York, where he headed the Operational Risk Group within the Financial Risk Management Practice. Prior to that, he led the Strategic Risk Initiatives Group in the Operational Risk Management Department at Bankers Trust. He has also worked as a policy analyst at the Federal Reserve Bank of New York and at the World Bank. Ali holds a B.A. in Quantitative Economics from Stanford University and an M.B.A. in Finance from Yale University.