Extreme Value Theory with Applications in Quantitative Risk Management


Extreme Value Theory with Applications in Quantitative Risk Management

Henrik Skaarup Andersen and David Sloth Pedersen

Master's Thesis, Master of Science in Finance
Supervisor: David Skovmand

Department of Business Studies, Aarhus School of Business, Aarhus University, 2010

...or how we learned to stop worrying and love 'em fat tails

Abstract

In this thesis we investigate extreme value theory and its potential in financial risk management. In the first part of the thesis, we provide a thorough and rigorous exposition of extreme value theory (EVT). We describe the theoretical foundation of the theory, covering the fundamental theorems and results. In relation to this, we explicitly emphasize the statistical issues and limitations of the theory with applications in financial risk management in mind. Moreover, we discuss how the theory may be applied to financial data and the specific issues that may arise in such applications. We also approach the issue of working with multivariate risk factors using copula theory and discuss some copula results in multivariate extreme value theory. In the second part of the thesis, we conduct an empirical study of the performance of EVT-based risk measurement methods based on an equally weighted portfolio composed of three Danish stocks. The performance of the methods is evaluated by their ability to accurately estimate well-known risk measures such as Value at Risk (VaR) and Expected Shortfall (ES). Treating the portfolio value as a single risk factor, we consider a univariate EVT method, HS-CONDEVT, which combines GARCH-type modeling of volatility with the fitting of a generalized Pareto distribution to the tails of the underlying distribution. The empirical results demonstrate that HS-CONDEVT outperforms alternative univariate methods such as historical simulation (HS) and HS combined with a GARCH-type model assuming normally distributed innovations. Moreover, HS-CONDEVT is found to be a viable alternative to filtered HS and to HS combined with a GARCH-type model assuming t distributed innovations.
Treating the three stocks in the portfolio as risk factors, we consider a multivariate EVT-based method, MCONDEVT, which combines a copula with margins based on GARCH-type modeling and GPD fitting. MCONDEVT is implemented in three variants using three different copulas: a Gaussian, a t, and a Gumbel copula. Comparatively, we find that the variants of the MCONDEVT method outperform other multivariate methods such as variance-covariance (VC), VC combined with a multivariate EWMA model, multivariate GARCH based on a constant conditional correlation structure, and multivariate GARCH based on a dynamic conditional correlation structure. Finally, comparing the performance of univariate and multivariate methods altogether, we find that the implemented variants of the MCONDEVT method are among the top performing methods. In particular, MCONDEVT based on a t copula appears to have the best overall performance among the competing methods.

Contents

1 Introduction
  1.1 Purpose and Research Questions
  1.2 Delimitations
  1.3 Structure
2 Theoretical Framework
  2.1 Quantitative Risk Modeling and Measurement: Essential Concepts and Methods
    2.1.1 Empirical Properties of Financial Return Data
    2.1.2 Risk Factors and Loss Distribution
    2.1.3 Risk Measures
    2.1.4 Quantitative Methods for Risk Modeling
  2.2 Extreme Value Theory
    2.2.1 Modeling of Extremal Events I: Block Maxima Models
    2.2.2 Modeling of Extremal Events II: Peaks over Threshold Models
  2.3 Copula Theory and Multivariate Extremes
    2.3.1 Copulas and Dependence Modeling
    2.3.2 Copula Results in Multivariate Extreme Value Theory
3 Methodology
  3.1 Data Selection
  3.2 Method Selection and Implementation
    3.2.1 Univariate Methods
    3.2.2 Multivariate Methods
    3.2.3 Dynamic Models for Changing Volatility
  3.3 Backtesting Methodology
    3.3.1 Backtesting Value at Risk
    3.3.2 Backtesting Expected Shortfall
4 Empirical Results
  4.1 Univariate Risk Measurement Methods
    4.1.1 Preliminary Data Analysis and Descriptive Statistics
    4.1.2 Dynamic Model Selection
    4.1.3 Relative Performance of the Methods
  4.2 Multivariate Risk Measurement Methods
    4.2.1 Preliminary Data Analysis and Descriptive Statistics
    4.2.2 Dynamic Model Selection
    4.2.3 Relative Performance of the Methods
  4.3 Overall Comparison of Backtest Results
5 Reflections
  5.1 Implications and Limitations of the Results
  5.2 Applicability in Practice
  5.3 Ideas for Further Research
6 Conclusion
Bibliography
A Derivations (A.1–A.9)
B Tables

List of Tables

4.1 Descriptive statistics for the portfolio losses
4.2 Parameter estimates and descriptive statistics for the standardized residuals
4.3 1-day VaR-based backtest results: Right tail
4.4 1-day VaR-based backtest results: Left tail
4.5 10-day VaR-based backtest results
4.6 1-day ES-based backtest results
4.7 10-day ES-based backtest results
4.8 Sensitivity analysis: VaR-based backtest results
4.9 Sensitivity analysis: ES-based backtest results
4.10 Descriptive statistics for the risk factor return series
4.11 Parameter estimates and descriptive statistics for the standardized residuals
4.12 1-day VaR-based backtest results: Right tail
4.13 1-day VaR-based backtest results: Left tail
4.14 10-day VaR-based backtest results
4.15 1-day ES-based backtest results
4.16 10-day ES-based backtest results
4.17 Sensitivity analysis: VaR-based backtest results
4.18 Sensitivity analysis: ES-based backtest results
B.1 Information criteria values for the fitted dynamic models
B.2 1-day VaR-based backtest results: Right tail
B.3 1-day VaR-based backtest results: Left tail
B.4 1-day ES-based backtest results
B.5 10-day VaR-based backtest results
B.6 10-day ES-based backtest results

List of Figures

4.1 Time series of portfolio losses
4.2 Correlograms for the in-sample raw portfolio losses (A) and their squared values (B), as well as for the total sample raw portfolio losses (C) and their squared values (D)
4.3 Information criteria values for the fitted models. AR1 denotes a first-order autoregressive mean specification, t and n denote the distribution assumption, and GPQ denotes a GARCH-type variance specification with order numbers P and Q
4.4 Correlograms for the in-sample raw standardized residuals (A) and their squared values (B), as well as for the total sample raw standardized residuals (C) and their squared values (D), extracted from the AR(1)-GJR(1,1) model fitted with the QML estimation method
4.5 Correlograms for the in-sample raw standardized residuals (A) and their squared values (B), as well as for the total sample raw standardized residuals (C) and their squared values (D), extracted from the AR(1)-GJR(1,1) model fitted with ML under the assumption of t distributed innovations
4.6 QQ-plots for the in-sample standardized residuals versus a normal distribution (A) and a t distribution (B), as well as for the total sample standardized residuals versus a normal distribution (C) and a t distribution (D)
4.7 1-day VaR(α = 99%) estimates plotted against the portfolio losses
4.8 10-day VaR(α = 99%) estimates plotted against the 10-day portfolio losses
4.9 Time series of risk-factor returns: log-returns on (A) NOVO B, (B) CARLS B, and (C) DANSKE
4.10 1-day VaR(α = 99%) estimates plotted against the portfolio losses
4.11 10-day VaR(α = 99%) estimates plotted against the 10-day portfolio losses

Chapter 1
Introduction

"In the last fifty years, the ten most extreme days in the financial markets represent half the returns." [Taleb, 2007, p. 275]

Lately, the financial crisis has exposed major shortcomings of the traditional risk assessment methodologies in terms of capturing the risk of rare but damaging events, which has made the search for better approaches to risk modeling and measurement more crucial than ever. The above quote by Taleb from his famous book The Black Swan captures the essence of what we are up against. By its very nature, the risk of extreme events (e.g. very large losses) is related to the tails of the distribution of the underlying data generating process. Thus, a crucial challenge in obtaining good risk measure estimates is to estimate the tail of the underlying distribution as accurately as possible. Since the pioneering works of Mandelbrot [1963] and Fama [1965], several studies have documented that financial return series have more mass in the tail areas than would be predicted by a normal distribution. In other words, the return distributions have fat tails, causing the probability of extreme values to be higher than under a normal distribution. To capture this phenomenon, early studies tried to model the distribution of financial returns using stable distributions like the Cauchy distribution. However, because financial theory almost always requires finite second moments of returns, and often higher moments as well, these distributions have lost their popularity [Campbell et al., 1997]. Instead, more recent studies have resorted to some kind of mixture distribution, e.g. the Normal-Inverse-Gaussian or the

Variance-Gamma distributions, which are more tractable as moments of all orders exist. For the purpose of measuring financial risk, however, our practical interest is concentrated on the tails. So, instead of forcing a single distribution onto the entire return series, one might investigate only the tails of the returns using some kind of limit laws. This is where extreme value theory may become the star of the show by providing statistically well-grounded results on the limiting behavior of the underlying distribution. Pioneered by Fisher and Tippett [1928], Gnedenko [1943], and Gumbel [1958], and later by Balkema and de Haan [1974] and Pickands [1975], extreme value theory has been around for quite some time as a discipline within probability and statistics. Applications of the theory have since appeared in diverse areas such as hydrology and wind engineering. Only recently, though, has extreme value theory seen the light of day within the realms of finance, the first comprehensive volume on the theory completely devoted to finance and insurance being Embrechts et al. [1997]. However, since its introduction to finance, the body of research on financial applications of extreme value theory has grown considerably. With respect to tail estimation and risk measurement, two crucial properties make extreme value theory particularly attractive. First, it is based on well-established and sound statistical theory. Second, it offers a parametric form for the tail, allowing us to model rare and extreme phenomena that lie outside the range of available observations. Thus, extreme value theory may provide the means to obtain more accurate risk measure estimates that are true to the extremal or fat-tailed behavior of the underlying distribution. In this thesis, we wish to investigate this possibility further.

1.1 Purpose and Research Questions

The purpose of this thesis is to investigate extreme value theory and its potential in financial risk management.
We will provide a thorough and rigorous exposition of the theoretical foundation of extreme value theory. In relation to this, we will explicitly emphasize the statistical issues and limitations of the theory with applications in financial risk management in mind. Moreover, we will discuss how the theory may be applied to financial data and the specific issues that may arise in such applications. Using return data on three Danish stocks, we conduct an empirical study of the performance of risk measurement methods based on extreme value theory. The methods are evaluated by their ability to accurately estimate well-known risk measures such as Value at Risk (VaR) and Expected Shortfall (ES), compared to the performance of various alternative methods for risk measurement. Furthermore, most studies on extreme value theory in finance have focused solely on a univariate setting where only one risk factor is

accounted for; this thesis will also investigate the performance of extreme value theory based methods in a more realistic, multivariate setting where we deal with more than one risk factor. In conclusion, the thesis seeks to provide answers to the following three research questions:

1. What is the theoretical foundation of extreme value theory?
2. How can extreme value theory be applied to financial risk measurement, and what kind of issues arise?
3. Compared to alternative risk measurement methods, how do methods based on extreme value theory perform with respect to estimating Value at Risk and Expected Shortfall?

1.2 Delimitations

In our discussions of financial risk measurement as well as in the empirical study, we concentrate on market risk. Market risk is the risk of a movement in the value of a financial position due to changes in the value of the underlying components on which the position depends, such as stock, bond, and commodity prices, exchange rates, etc. However, banks and other financial institutions are also exposed to other categories of risk. One such category is credit risk: the risk of not receiving promised repayments on outstanding investments such as loans and bonds because of the default of the borrower. Another category is operational risk: the risk of losses resulting from inadequate or failed internal processes, people, and systems, or from external events [McNeil et al., 2005]. Even though we consider these kinds of risk equally important, they will not be covered further in this thesis. Furthermore, in the empirical study, we only consider the market risk associated with a financial position in stocks, specifically three stocks listed on the Danish C20 index (OMXC20). We also refrain from discussing or applying risk measures other than Value at Risk and Expected Shortfall. We acknowledge that other risk measures may have merit.
However, since Value at Risk was sanctioned by the Basel Committee in 1996 for market risk capital requirements, it has become the standard measure of financial market risk [Wong, 2007]. Expected Shortfall is closely related to Value at Risk but so far less used in practice. Nonetheless, it addresses some of the deficiencies of Value at Risk that critics have pointed out. For these reasons, we concentrate on these two measures of risk. Further delimitations will be made throughout the thesis when appropriate.

1.3 Structure

The main part of the thesis is structured in three chapters covering the theoretical framework of the thesis, the methodology of the empirical study, and ultimately a presentation of the empirical results. This is followed by a short chapter on the implications and limitations of our study, leading up to a conclusion in the final chapter. Thus, the structure of the thesis is as follows.

Chapter 2 presents the theoretical framework of the thesis. We first review some essential concepts and methods within quantitative risk management. Following this, we turn to the primary topic of the thesis, extreme value theory. We provide a thorough exposition of the theory and its main results with applications in financial risk management in mind. Finally, we approach the issue of working with multivariate risk factors using copula theory and discuss some copula results in multivariate extreme value theory.

Chapter 3 outlines the methodology of the empirical study. We first discuss the selection of financial data for our empirical investigation. We then turn to the selection and implementation of the different risk measurement methods, giving special emphasis to the statistical issues involved. Lastly, we describe the methodology used for backtesting and performance evaluation of the implemented risk measurement methods.

Chapter 4 presents the results of our empirical study. We evaluate the relative performance of the risk measurement methods, those based on extreme value theory as well as alternative methods, with respect to estimating Value at Risk and Expected Shortfall. To this end, we use both statistical tests and more qualitative assessments.

Chapter 5 discusses the implications and limitations of our study and main results. In this connection, ideas for further research within extreme value theory and its applications in financial risk management are proposed.

Chapter 6 summarizes our main findings and concludes the study.

Chapter 2
Theoretical Framework

In this chapter we present the theoretical framework of the thesis. The concepts, theories, and results discussed in the following sections constitute the foundation of the empirical study. In Section 2.1, we discuss some essential concepts and methods within quantitative risk modeling and measurement, which will be used throughout the thesis. After this, we turn to the primary topic of the thesis in Section 2.2, namely extreme value theory, giving a thorough and rigorous account and discussion of the theory and its central results with applications in financial risk management in mind. Finally, in Section 2.3, we approach the issue of working with multivariate risk factors using copula theory and we discuss some copula results in multivariate extreme value theory.

2.1 Quantitative Risk Modeling and Measurement: Essential Concepts and Methods

In this section we introduce a series of concepts and definitions within the discipline of quantitative finance which will be used throughout the thesis. We start by describing some general empirical properties of financial return series data in Section 2.1.1. This is followed by a discussion of the concepts of financial loss distributions and risk factors in Section 2.1.2; here we especially dwell on the difference between unconditional and conditional loss distributions. In Section 2.1.3, we discuss well-known measures of financial risk such as Value at Risk and Expected Shortfall. Finally, we outline the three main quantitative methods for modeling financial risk and their limitations in Section 2.1.4.

2.1.1 Empirical Properties of Financial Return Data

Since stock prices are mostly non-stationary (usually integrated of order 1), it is common to model relative changes of prices, i.e. the log-return series

[Cont, 2001]. In this section, we give an overview of some typical properties of daily financial return data, which have become known as stylized facts. These properties often also extend to return series at both longer (weekly or monthly) and shorter (intra-day) time intervals [McNeil et al., 2005]. Financial return series are not independently and identically distributed (iid). They tend to exhibit temporal dependence in the second moment. In other words, while return series show little serial correlation, absolute or squared returns are highly serially correlated, reflecting time-varying volatility and volatility clustering. Volatility clustering is the tendency for large returns (of either sign) to be followed by more large returns (of either sign) [Campbell et al., 1997, McNeil et al., 2005]. Also, Black [1976] found that negative innovations to stock returns tend to increase volatility more than positive innovations of similar magnitude. This phenomenon has become known as the leverage effect. Furthermore, Fama [1965] found that financial returns appear to have heavy-tailed or leptokurtic distributions. Compared to the normal or Gaussian distribution, return distributions tend to exhibit excess kurtosis (i.e. kurtosis larger than 3), indicating that returns have more mass in the tail areas than predicted by the normal distribution [Campbell et al., 1997]. In mathematical terms, the tails display a slow, power-law type of decay, different from the faster, exponential type of decay displayed by the normal distribution [Cont, 2001]. When we deal with multivariate return series, we have similar stylized facts. While multivariate return series show little evidence of cross-correlation (except for contemporaneous returns), absolute returns of such series tend to exhibit cross-correlation. In addition, the contemporaneous correlation between returns appears to be time varying.
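The excess-kurtosis stylized fact above is easy to reproduce numerically. The following sketch is illustrative only: it uses a Student t distribution as a stand-in for daily returns (not the thesis data) and compares its sample excess kurtosis with that of a Gaussian sample.

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: E[(x - mean)^4] / Var(x)^2 - 3."""
    c = x - x.mean()
    return np.mean(c**4) / np.mean(c**2) ** 2 - 3

rng = np.random.default_rng(42)
n = 200_000

# Stand-in for daily returns: a t distribution with 10 degrees of
# freedom has theoretical excess kurtosis 6 / (10 - 4) = 1.
t_returns = rng.standard_t(df=10, size=n)
normal_returns = rng.standard_normal(n)

print(excess_kurtosis(t_returns))       # noticeably above 0
print(excess_kurtosis(normal_returns))  # close to 0
```

The t sample shows clearly positive excess kurtosis while the Gaussian sample sits near zero, mirroring the leptokurtosis observed in empirical return series.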
Also, multivariate return series tend to exhibit tail or extremal dependence, i.e. extreme returns in different return series often tend to coincide. Moreover, extremal dependence seems to be asymmetric; joint negative returns show more tail dependence than joint positive returns [McNeil et al., 2005]. The last two stylized facts correspond to the phenomenon that correlations observed in calm periods differ from correlations observed during financial turmoil.

2.1.2 Risk Factors and Loss Distribution

Consider a portfolio of financial assets and let $V_t$ denote its current value. The portfolio value is assumed to be observable at time $t$. The portfolio loss over the time interval from $t$ to $t+1$ is written as

$$ L_{t+1} = -(V_{t+1} - V_t). \qquad (2.1) $$

Because $V_{t+1}$ is unknown to us, $L_{t+1}$ is random from the perspective of time $t$. The distribution of $L_{t+1}$ will be referred to as the loss distribution. Note that the definition of a loss presented here implicitly assumes that the portfolio

composition is constant over the considered time interval. The portfolio value $V_t$ will be modeled by a function of time and a set of $d$ underlying risk factors. We write

$$ V_t = f(t, Z_t), \qquad (2.2) $$

for some measurable function $f : \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}$, where $Z_t = (Z_{t,1}, \ldots, Z_{t,d})'$ denotes a $d$-dimensional vector of risk factors. We define the time series process of risk factor changes $\{X_t\}_{t \in \mathbb{N}}$, where $X_t := Z_t - Z_{t-1}$. Using the function $f$ we can relate the risk factor changes to the changes in the portfolio value as

$$ L_{t+1} = -\big(f(t+1, Z_t + X_{t+1}) - f(t, Z_t)\big). \qquad (2.3) $$

Given realizations $z_t$ of $Z_t$, we define the loss operator $l_{[t]} : \mathbb{R}^d \to \mathbb{R}$ at time $t$ as

$$ l_{[t]}(x) := -\big(f(t+1, z_t + x) - f(t, z_t)\big), \quad x \in \mathbb{R}^d, \qquad (2.4) $$

and we can write $L_{t+1} = l_{[t]}(X_{t+1})$ as shorthand notation for the portfolio loss. In practice, it is often convenient to work with the so-called delta approximation. Assuming that the mapping $f$ is differentiable, we may use a first-order approximation of the loss operator instead of the true operator. We define the linearized loss operator as

$$ l^{\Delta}_{[t]}(x) := -\Big(f_t(t, z_t) + \sum_{i=1}^{d} f_{z_i}(t, z_t)\, x_i\Big), \qquad (2.5) $$

where the terms $f_t$ and $f_{z_i}$ are the partial derivatives of $f$ with respect to time and risk factor $i$. The linear approximation makes the problem of modeling $l_{[t]}$ simpler to handle analytically by representing it as a linear function of the risk-factor changes. The quality of the approximation is influenced by the length of the time interval and the size of the second-order derivatives. It works best for short time horizons and if the portfolio value is approximately linear in the risk factors. As $Z_t$ is observable from the perspective of time $t$, the loss distribution is determined entirely by the distribution of $X_{t+1}$. If we assume that $\{X_t\}_{t \in \mathbb{N}}$ follows a stationary time series, we have to make a distinction between the conditional and unconditional loss distribution. If we assume instead that $\{X_t\}_{t \in \mathbb{N}}$ is an iid series, the two distributions coincide.
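As an illustration (not taken from the thesis), the loss operator and its delta approximation can be sketched for a stock portfolio whose risk factors are log-prices, so that $f(t, z) = \sum_i w_i e^{z_i}$ with share holdings $w_i$; the holdings and shocks below are hypothetical:

```python
import numpy as np

def loss_operator(weights, z_t, x):
    """Exact loss l_[t](x) for a stock portfolio whose risk factors
    are log-prices, i.e. f(t, z) = sum_i w_i * exp(z_i)."""
    v_now = np.sum(weights * np.exp(z_t))
    v_next = np.sum(weights * np.exp(z_t + x))
    return -(v_next - v_now)

def linearized_loss_operator(weights, z_t, x):
    """Delta approximation: here f_t = 0 and f_{z_i}(t, z) = w_i * exp(z_i),
    so l^D_[t](x) = -sum_i w_i * exp(z_i) * x_i."""
    return -np.sum(weights * np.exp(z_t) * x)

# hypothetical example: 3 stocks, small log-return shocks
w = np.array([100.0, 50.0, 75.0])           # number of shares held
z = np.log(np.array([120.0, 80.0, 60.0]))   # current log-prices
x = np.array([-0.01, 0.002, -0.015])        # risk-factor changes

exact = loss_operator(w, z, x)
approx = linearized_loss_operator(w, z, x)
```

For small risk-factor changes like these, the two values agree to within a fraction of a percent, illustrating why the delta approximation works well over short horizons.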
Let $\mathcal{F}_t = \sigma(\{X_s : s \le t\})$ be the $\sigma$-field representing all information on the risk factor developments up to the present. This leads us to the formal definitions below.

Definition 1 (Unconditional Loss Distribution). The unconditional loss distribution $F_{L_{t+1}}$ is the distribution of $l_{[t]}(X)$ under the stationary distribution $F_X$, where $F_X$ denotes the unconditional distribution of $X$ assuming stationarity.

Definition 2 (Conditional Loss Distribution). The conditional loss distribution $F_{L_{t+1} \mid \mathcal{F}_t}$ is the distribution of $l_{[t]}(X_{t+1})$ under $F_{X_{t+1} \mid \mathcal{F}_t}$, where $F_{X_{t+1} \mid \mathcal{F}_t}$ denotes the conditional distribution of $X_{t+1}$ given $\mathcal{F}_t$.

Conditional risk measurement focuses on modeling the dynamics of $\{X_t\}_{t \in \mathbb{N}}$ in order to make risk forecasts. If we do not model any dynamics, we basically assume that $X$ forms a stationary time series with a stationary distribution $F_X$ on $\mathbb{R}^d$. We will mainly consider conditional risk measurement methods, as they appear most suitable for market risk management and shorter time intervals. The worries of market risk managers center on the possible size of the short-term (e.g. one-day or two-week) loss caused by unfavorable shifts in market values. Thus, they are concerned about the tails of the conditional loss distribution, given the current volatility background [McNeil and Frey, 2000]. The unconditional loss distribution is more relevant when interest centers on possible worst-case scenarios over longer periods (e.g. one year or more) and is more frequently used in credit risk management [McNeil et al., 2005].

2.1.3 Risk Measures

In this section we discuss statistical summaries of the loss distribution that quantify the portfolio risk. We call these summaries risk measures. First, we introduce the so-called axioms of coherence, which are properties deemed desirable for measures of risk. Hereafter, we discuss two widely used measures of financial risk: Value at Risk and Expected Shortfall. Both risk measures consider only the downside risk, i.e. the right tail of the loss distribution. Artzner et al. [1999] argue that a good measure of risk should satisfy a set of properties termed the axioms of coherence. Let financial risks be represented by a set $\mathcal{M}$, interpreted here as portfolio losses, i.e. $L \in \mathcal{M}$. Risk measures are real-valued functions $\varrho : \mathcal{M} \to \mathbb{R}$. The amount $\varrho(L)$ represents the capital required to cover a position facing a loss $L$.
The risk measure $\varrho$ is coherent if it satisfies the following four axioms:

1. Monotonicity: $L_1 \le L_2 \Rightarrow \varrho(L_1) \le \varrho(L_2)$.
2. Positive homogeneity: $\varrho(\lambda L) = \lambda \varrho(L)$, $\lambda > 0$.
3. Translation invariance: $\varrho(L + l) = \varrho(L) + l$, $l \in \mathbb{R}$.
4. Subadditivity: $\varrho(L_1 + L_2) \le \varrho(L_1) + \varrho(L_2)$.

Monotonicity states that positions which lead to higher losses in every state of the world require more risk capital. Positive homogeneity implies that the capital required to cover a position is proportional to the size of that position. Translation invariance states that if a deterministic amount $l$ is added to the position, the capital needed to cover $L$ is changed by precisely

that amount. Subadditivity reflects the intuitive property that risk should be reduced, or at least not increased, by diversification, i.e. the amount of capital needed to cover two combined portfolios should not be greater than the capital needed to cover the portfolios evaluated separately. In the following discussion of Value at Risk and Expected Shortfall, we put aside the distinction between $l_{[t]}$ and $l^{\Delta}_{[t]}$ and also between unconditional and conditional loss distributions, assuming that the choice of focus has been made from the outset of the analysis. Also, we denote the distribution function of the loss $L_{t+1} := L$ by $F_L$, so that $F_L(x) = P(L \le x)$, $x \in \mathbb{R}$.

Value at Risk

Value at Risk (VaR) is the maximum loss over a given period that is not exceeded with a high probability. We begin with a formal definition of the concept.

Definition 3 (Value at Risk). The Value at Risk (VaR) at confidence level $\alpha \in (0,1)$ is defined as the smallest value $x$ such that the probability of $L$ exceeding $x$ is no larger than $1-\alpha$:

$$ \mathrm{VaR}_\alpha := \inf\{x \in \mathbb{R} : P(L > x) \le 1-\alpha\} = \inf\{x \in \mathbb{R} : F_L(x) \ge \alpha\}. \qquad (2.6) $$

Using the concepts of generalized inverse and quantile functions given in Definition 4, it is clear that VaR is simply the $\alpha$-quantile of the loss distribution $F_L$. Consequently, (2.6) can be written as

$$ \mathrm{VaR}_\alpha := q_\alpha(F_L) = F_L^{\leftarrow}(\alpha). \qquad (2.7) $$

Definition 4 (Generalized Inverse and Quantile Function). 1. Given some increasing function $F : \mathbb{R} \to \mathbb{R}$, the generalized inverse of $F$ is defined as $F^{\leftarrow}(y) = \inf\{x \in \mathbb{R} : F(x) \ge y\}$, where we set $\inf\{\emptyset\} = \infty$. 2. At any confidence level $\alpha \in (0,1)$, the $\alpha$-quantile of a distribution function $F$ is defined as $q_\alpha(F) = \inf\{x \in \mathbb{R} : F(x) \ge \alpha\} = F^{\leftarrow}(\alpha)$.

$\mathrm{VaR}_\alpha$ has been adopted into the regulatory Basel framework for banks as the major determinant of the risk capital required for covering potential losses arising from market risks [Basel Committee on Banking Supervision, 2004].
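To make Definitions 3 and 4 concrete, here is a minimal sketch (illustrative, not part of the thesis) of the two standard ways of evaluating $\mathrm{VaR}_\alpha$: the closed-form quantile of a normal loss distribution and the generalized-inverse quantile of an empirical one.

```python
import numpy as np
from statistics import NormalDist

def var_normal(mu, sigma, alpha):
    """VaR_alpha for a normal loss L ~ N(mu, sigma^2):
    simply the alpha-quantile mu + sigma * Phi^{-1}(alpha)."""
    return mu + sigma * NormalDist().inv_cdf(alpha)

def var_empirical(losses, alpha):
    """Generalized-inverse quantile of the empirical distribution:
    the smallest observed loss x with F_n(x) >= alpha."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * len(losses))) - 1  # index of the alpha-quantile
    return losses[k]
```

For example, `var_normal(0, 1, 0.99)` returns roughly 2.33, the familiar 99% quantile of the standard normal distribution.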
A major advantage of $\mathrm{VaR}_\alpha$ is that it does not depend on a specific kind of distribution and therefore, in theory, can be applied to any kind of financial asset [Danielsson, 2007]. In addition, $\mathrm{VaR}_\alpha$ is intuitively appealing because of its ability to describe the financial risk of a portfolio in a single figure. Its simplicity makes it an attractive risk measure because it is easily comprehended and communicated to the quantitative novice, compared to other risk measures.

However, by definition $\mathrm{VaR}_\alpha$ gives no information about the size of the losses which occur with probability smaller than $1-\alpha$, i.e. the measure does not tell how bad it gets if things go wrong. Moreover, Artzner et al. [1999] make the observation that $\mathrm{VaR}_\alpha$ fails to satisfy the axiom of subadditivity in all cases,[1] implying that the $\mathrm{VaR}_\alpha$ of a portfolio is not necessarily bounded above by the $\mathrm{VaR}_\alpha$ values of the individual portfolio components added together. This is very unfortunate, as non-subadditive risk measures can lead to misleading conclusions and wrong incentives, e.g. to avoid portfolio diversification and to split entire companies up into separate legal entities to reduce regulatory capital requirements. This conceptual deficiency has led to much debate and criticism of $\mathrm{VaR}_\alpha$ as a risk measure. Given these problems with $\mathrm{VaR}_\alpha$, we seek an alternative measure which satisfies the axioms of coherence.

[1] McNeil et al. [2005] demonstrate that the non-subadditivity of $\mathrm{VaR}_\alpha$ can occur when the dependence structure is of a highly asymmetric form or when the portfolio components have highly asymmetric loss distributions.

Expected Shortfall

The second risk measure we consider is Expected Shortfall (ES). Again, we begin with a formal definition of the concept.

Definition 5 (Expected Shortfall). For a loss $L$ with $E(|L|) < \infty$ and distribution function $F_L$, the Expected Shortfall (ES) at confidence level $\alpha \in (0,1)$ is defined as

$$ \mathrm{ES}_\alpha := \frac{1}{1-\alpha} \int_\alpha^1 q_\varphi(F_L)\, d\varphi = \frac{1}{1-\alpha} \int_\alpha^1 \mathrm{VaR}_\varphi(L)\, d\varphi, \qquad (2.8) $$

where $q_\varphi(F_L) = F_L^{\leftarrow}(\varphi)$ is the quantile function of $F_L$.

If the loss distribution $F_L$ is continuous, $\mathrm{ES}_\alpha$ can be thought of as the average loss given that $\mathrm{VaR}_\alpha$ is exceeded. That is,

$$ \mathrm{ES}_\alpha := \frac{E\big(L\, I_{\{L \ge \mathrm{VaR}_\alpha\}}\big)}{1-\alpha} = E(L \mid L \ge \mathrm{VaR}_\alpha), \qquad (2.9) $$

where $I_{\{L \ge \mathrm{VaR}_\alpha\}}$ is a binary violation indicator. $\mathrm{ES}_\alpha$ may be considered superior to $\mathrm{VaR}_\alpha$ for two reasons. First, in contrast to $\mathrm{VaR}_\alpha$, $\mathrm{ES}_\alpha$ gives an idea of how bad things can get, i.e. it informs about the probable size of the worst losses, which occur with probability $1-\alpha$. Second, Artzner et al. [1999] find that $\mathrm{ES}_\alpha$ satisfies the axioms of coherence, including subadditivity (for a formal proof that ES is a coherent risk measure, see McNeil et al. [2005], p. 243).

2.1.4 Quantitative Methods for Risk Modeling

Statistical methods for modeling the distribution of a loss $L_{t+1} = l_{[t]}(X_{t+1})$ can be divided into three main classes: Variance-Covariance, Historical

Simulation, and Monte Carlo Simulation. We present the basics of each method, discuss its limitations, and suggest possible extensions.

Variance-Covariance Method

We begin by presenting the unconditional version of the variance-covariance (VC) method. In contrast to historical simulation and Monte Carlo methods, VC provides an analytical solution to the risk measure estimation problem which requires no simulation. The method is based on the following two assumptions:

1. The vector of risk factor changes $X_{t+1}$ has an (unconditional) multivariate normal distribution, denoted $X_{t+1} \sim N_d(\mu, \Sigma)$, where $\mu$ is the mean vector and $\Sigma$ is the covariance matrix.
2. The linearized loss in terms of risk factors, $L^{\Delta}_{t+1} := l^{\Delta}_{[t]}(X_{t+1})$, is a sufficiently accurate approximation of $L_{t+1}$.

The second assumption allows us to estimate risk measures based on the distribution of $L^{\Delta}_{t+1}$ instead of $L_{t+1}$, which makes the estimation problem analytically tractable. Taken together, the assumptions ensure that the loss distribution is linear in the risk factor changes and univariate normal. Specifically, the linearized loss operator has the form $l^{\Delta}_{[t]}(x) = -(c_t + b_t' x)$, and since the multivariate normal distribution is stable under affine transformations, we have

$$ l^{\Delta}_{[t]}(X_{t+1}) \sim N\big(-c_t - b_t'\mu,\; b_t'\Sigma b_t\big), \qquad (2.10) $$

where $c_t$ and $b_t$ denote a constant and a constant vector known at time $t$, respectively. The mean vector $\mu$ and the covariance matrix $\Sigma$ are estimated from the risk factor change data $X_{t-n+1}, \ldots, X_t$. Estimates of the risk measures VaR and ES are calculated from the estimated moments of the distribution. The assumptions underlying the method may have some undesirable consequences. The linearized loss can be a poor approximation for portfolios with nonlinear instruments such as options, or if risk is measured over long time horizons, as the first-order approximation only works well with small risk factor changes.
However, the most serious disadvantage is the unconditional normality assumption, which may lead to underestimation of the risk exposure due to the small probability assigned to large losses [Hull and White, 1998].

A conditional version of the VC method is obtained if we alter the first assumption and instead assume that the vector $X_{t+1}$ follows a conditional multivariate normal distribution, i.e. $X_{t+1} \mid \mathcal{F}_t \sim N_d(\mu_{t+1}, \Sigma_{t+1})$. In consequence, the conditional loss distribution has conditional mean $E(L^{\Delta}_{t+1} \mid \mathcal{F}_t) = -(c_t + b_t'\mu_{t+1})$ and conditional variance $\mathrm{Var}(L^{\Delta}_{t+1} \mid \mathcal{F}_t) = b_t'\Sigma_{t+1} b_t$. The conditional moments can be estimated using a multivariate dynamic model, and risk measure estimates can then be calculated from these estimated moments.
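Since the (linearized) loss is univariate normal, both risk measures follow in closed form from the moments in (2.10). The sketch below is our own illustration, not part of the thesis; it assumes numpy/scipy, and the helper name `vc_var_es` and the example inputs are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def vc_var_es(mu, Sigma, b, c, alpha=0.99):
    """VaR and ES of the linearized loss -(c + b'X) with X ~ N(mu, Sigma)."""
    loss_mean = -(c + b @ mu)            # mean of the univariate normal loss
    loss_sd = np.sqrt(b @ Sigma @ b)     # standard deviation sqrt(b' Sigma b)
    z = norm.ppf(alpha)
    var = loss_mean + loss_sd * z
    # Closed-form ES of a normal loss: mean + sd * phi(z_alpha) / (1 - alpha).
    es = loss_mean + loss_sd * norm.pdf(z) / (1 - alpha)
    return var, es

# Illustration with two risk factors; in practice mu and Sigma would be
# estimated from the risk factor change data X_{t-n+1}, ..., X_t.
mu_hat = np.array([0.0, 0.0])
Sigma_hat = np.array([[0.0001, 0.00005],
                      [0.00005, 0.0002]])
b_t = np.array([500.0, 500.0])
var99, es99 = vc_var_es(mu_hat, Sigma_hat, b_t, c=0.0, alpha=0.99)
```

The conditional version replaces `mu_hat` and `Sigma_hat` by one-step-ahead forecasts $\mu_{t+1}$, $\Sigma_{t+1}$ from a dynamic model.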

Historical Simulation

In the Historical Simulation (HS) method, the distribution of the loss $L_{t+1}$ is estimated under the empirical distribution of the historical data $X_{t-n+1}, \ldots, X_t$. Thus, the method does not rely on any parametric model assumptions. It does, however, rely on stationarity of $X$ to ensure convergence of the empirical loss distribution to the true loss distribution. The historically simulated loss series is generated by applying the loss operator to recent historical risk factor changes. The univariate dataset of historically simulated losses is given by

$$\{\tilde{L}_s = l_{[t]}(X_s) : s = t-n+1, \ldots, t\}, \qquad (2.11)$$

where the values $\tilde{L}_s$ represent the losses that would occur if the historical risk-factor returns on day $s$ reoccurred at time $t+1$. Statistical inference about the loss distribution and risk measures can be made using the historically simulated data $\tilde{L}_{t-n+1}, \ldots, \tilde{L}_t$.

To ensure sufficient estimation precision, HS requires large amounts of relevant and synchronized data for all risk factors. However, it is not always practically feasible to obtain such large appropriate samples of data. Even if data is available, the history of appropriate data may contain only a few, if any, extreme observations. Additionally, the unconditional nature of the method makes it likely to miss periods of temporarily elevated volatility, which can result in clusters of VaR violations [Jorion, 2001].

We can combine HS with a univariate time series model calibrated to the historical simulation data and thereby estimate a conditional loss distribution. In principle, we are then not estimating the conditional loss distribution defined in Section 2.1.2; we are estimating the conditional loss distribution $F_{L_{t+1} \mid \mathcal{G}_t}$, where $\mathcal{G}_t = \sigma(\{\tilde{L}_s : s \le t\})$. Even though we are working with a reduced information set, this simple method may work well in practice [McNeil et al., 2005].
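The unconditional HS estimator amounts to taking empirical quantiles of the simulated loss sample in (2.11). The following sketch is ours, not the thesis's code; the linear loss operator and the simulated "historical" data are hypothetical stand-ins.

```python
import numpy as np

def hs_var_es(losses, alpha=0.99):
    """Empirical VaR and ES from a sample of historically simulated losses."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)       # empirical alpha-quantile
    es = losses[losses >= var].mean()      # mean of losses beyond the VaR level
    return var, es

# Apply today's loss operator to each historical risk factor change to
# obtain the loss sample of (2.11); here a toy linear portfolio.
rng = np.random.default_rng(1)
X_hist = rng.normal(0.0, 0.01, size=1000)       # stand-in for real data
loss_operator = lambda x: -1_000_000 * x        # hypothetical linear operator
sim_losses = np.array([loss_operator(x) for x in X_hist])
var99, es99 = hs_var_es(sim_losses, alpha=0.99)
```

A filtered/conditional variant would first fit a time series model to `sim_losses` and apply the same quantile logic to its standardized residuals.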
Monte Carlo Simulation

The main idea of the Monte Carlo (MC) method is to estimate the distribution of $L_{t+1} = l_{[t]}(X_{t+1})$ under some explicit parametric model for $X_{t+1}$. Unlike VC, we make no use of the linearized loss operator to make the estimation problem analytically tractable. Instead, we make inference about the loss distribution by simulating new risk factor change data.

MC is essentially a three-step method. First, we set up a data generating process (DGP) by calibrating the parameters of a suitable parametric model to historical risk factor change data $X_{t-n+1}, \ldots, X_t$. Second, we simulate a large set of $m$ independent future realizations of $\{X_t\}_{t \in \mathbb{N}}$, denoted $X^{(1)}_{t+1}, \ldots, X^{(m)}_{t+1}$. Third, we construct Monte Carlo simulated loss data by applying the loss operator to each realization:

$$\{\tilde{L}^{(i)}_{t+1} = l_{[t]}(X^{(i)}_{t+1}) : i = 1, \ldots, m\}. \qquad (2.12)$$

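The three steps just described can be sketched in code. This is our own deliberately simplified illustration, assuming scipy, a single risk factor with iid Student-t changes as the DGP, and a hypothetical linear loss operator; any real application would use a richer calibrated model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Step 1: calibrate a DGP to historical risk factor change data. As a toy
# stand-in for real data, we generate a "history" and fit a Student-t to it.
x_hist = 0.01 * rng.standard_t(df=4, size=1000)
df_hat, loc_hat, scale_hat = stats.t.fit(x_hist)

# Step 2: simulate m independent future realizations of the risk factor change.
m = 100_000
x_sim = stats.t.rvs(df_hat, loc=loc_hat, scale=scale_hat,
                    size=m, random_state=rng)

# Step 3: apply the loss operator (here a hypothetical linear portfolio) to
# each realization, giving the Monte Carlo loss sample of (2.12).
mc_losses = -1_000_000 * x_sim

# Inference on the loss distribution, e.g. an estimate of the 99% VaR.
var99 = np.quantile(mc_losses, 0.99)
```

The heavy-tailed t model lets the simulation produce extreme scenarios that a short historical sample may lack.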
Statistical inference about the loss distribution and risk measures is made using the simulated losses $\tilde{L}^{(1)}_{t+1}, \ldots, \tilde{L}^{(m)}_{t+1}$. In contrast to HS, the method avoids the problem of having insufficient synchronized historical data. Also, we can address heavy tails and extreme scenarios through the pre-specified stochastic risk factor change process. For large portfolios, however, the computational costs of using MC can be large, especially if the loss operator is difficult to evaluate. This is the case when the portfolio holds complex instruments, e.g. derivatives for which no closed-form price solution is available. The same critique applies to HS, but to a smaller degree, since the sample size $n$ representing the number of historical simulations is usually smaller than the number of simulations $m$ in MC.

An alternative or supplement to MC is to use bootstrapping.2 Where MC simulates new data by setting up a DGP and generating random numbers from a hypothetical distribution, bootstrapping simulates new data by vector-wise random sampling from $X = (X_{t-n+1}, \ldots, X_t)$ with replacement as many times as needed [Efron and Tibshirani, 1993]. However, a large sample size is needed to ensure that the bootstrapped distribution is a good approximation of the true one. A further drawback is that any pattern of time variation in $X$ is broken by the random sampling. This can be circumvented by combining MC and bootstrapping: we would then set up a DGP without assuming a theoretical innovation distribution, instead applying bootstrapping to the standardized residuals.

2.2 Extreme Value Theory

The purpose of this section is to give a thorough and self-contained account and discussion of extreme value theory (EVT) with applications in financial risk management in mind, but without losing mathematical rigor. Within the context of EVT, there are roughly two approaches to modeling extremal events.
One of them is the direct modeling of the distribution of maximum (or minimum) realizations. These kinds of models are known as block maxima models. The other approach is the modeling of exceedances of a particular threshold. Models based on this approach are known as peaks over threshold models. Today, it is generally acknowledged that the latter approach uses data more efficiently, and it is therefore considered the most useful for practical applications [McNeil et al., 2005].

2 For an introduction to bootstrap methods, see Efron and Tibshirani [1993].

EVT rests on the assumption of independently and identically distributed (iid) data. In this thesis, however, we are concerned with financial time series data, and a stylized fact of financial log-returns is that they tend to exhibit dependence in the second moment, i.e. while they are seemingly uncorrelated, the autocorrelation of the squared (or absolute) log-returns is significant. Consequently, when EVT is applied to financial time series data

we need to take temporal dependence into account. If we do not, we will produce estimators with non-optimal performance [Brodin and Klüppelberg, 2006].

Sections 2.2.1 and 2.2.2 describe the block maxima and the peaks over threshold models, respectively, and are organized as follows. First, we present the mathematical concepts and results that constitute the theoretical foundation of the two extreme value modeling approaches. Second, we present the models, their assumptions, limitations, and statistical estimation based on maximum likelihood (ML). Third, we discuss how the models can be generalized and applied to financial time series data and the subtleties this involves. And finally, we discuss how to estimate quantiles and risk measures.

2.2.1 Modeling of Extremal Events I: Block Maxima Models

Suppose that $\{X_i\}_{i \in \mathbb{N}}$ is a sequence of iid non-degenerate random variables representing financial losses with common distribution function $F(x) = P(X_i \le x)$.

The Generalized Extreme Value Distribution

Let $M_n = \bigvee_{i=1}^{n} X_i = \max(X_1, \ldots, X_n)$ denote the sample maximum of the iid random variables. In classical EVT we are interested in the limiting distribution of affinely transformed (normalized) maxima. The mathematical foundation is the class of extreme value limit laws originally derived by Fisher and Tippett [1928] and summarized in Theorem 1.

Theorem 1 (Fisher and Tippett [1928], Gnedenko [1943]). If there exist norming constants $c_n > 0$ and $d_n \in \mathbb{R}$ such that

$$c_n^{-1}(M_n - d_n) \xrightarrow{d} H \qquad (2.13)$$

for some non-degenerate3 distribution function $H$, then $H$ belongs to one of the following three families of distributions:

Gumbel: $\Lambda(x) = \exp\{-e^{-x}\}$, $x \in \mathbb{R}$.

Fréchet: $\Phi_\alpha(x) = 0$ for $x \le 0$, and $\Phi_\alpha(x) = \exp\{-x^{-\alpha}\}$ for $x > 0$, with $\alpha > 0$.

Weibull: $\Psi_\alpha(x) = \exp\{-(-x)^{\alpha}\}$ for $x \le 0$, and $\Psi_\alpha(x) = 1$ for $x > 0$, with $\alpha > 0$.

A rigorous proof of the theorem can be found in Gnedenko [1943].

3 A non-degenerate distribution function is a limiting distribution function that is not concentrated on a single point [McNeil et al., 2005].

The distribution functions $\Lambda$, $\Phi_\alpha$ and $\Psi_\alpha$ are called standard extreme value distributions. In accordance with von Mises [1936] and Jenkinson [1955], we can obtain a one-parameter representation of the three standard distributions. This representation is known as the standard generalized extreme value (GEV) distribution.

Definition 6 (Generalized Extreme Value Distribution). The distribution function of the standard GEV distribution is given by

$$H_\xi(x) = \begin{cases} \exp\{-(1+\xi x)^{-1/\xi}\}, & \xi \neq 0, \\ \exp\{-e^{-x}\}, & \xi = 0, \end{cases} \qquad (2.14)$$

where $1 + \xi x > 0$ and $\xi$ is the shape parameter.

The related location-scale family $H_{\xi,\mu,\sigma}$ can be introduced by replacing the argument $x$ above by $(x-\mu)/\sigma$ for $\mu \in \mathbb{R}$, $\sigma > 0$; that is, $H_{\xi,\mu,\sigma}(x) := H_\xi\left(\frac{x-\mu}{\sigma}\right)$. The support has to be adjusted accordingly. Moreover, due to its crucial role in determining the likelihood function when fitting the GEV distribution, we calculate the density function of the three-parameter GEV distribution, obtained by differentiating $H_{\xi,\mu,\sigma}(x)$ with respect to $x$:

$$h_{\xi,\mu,\sigma}(x) = \begin{cases} \dfrac{1}{\sigma}\left(1+\xi\dfrac{x-\mu}{\sigma}\right)^{-1/\xi-1} \exp\left\{-\left(1+\xi\dfrac{x-\mu}{\sigma}\right)^{-1/\xi}\right\}, & \xi \neq 0, \\[2ex] \dfrac{1}{\sigma}\exp\left\{-\dfrac{x-\mu}{\sigma}\right\} \exp\left\{-e^{-(x-\mu)/\sigma}\right\}, & \xi = 0. \end{cases} \qquad (2.15)$$

Theorem 1 shows that affinely transformed maxima converge in distribution to the GEV distribution $H_\xi$, and convergence of type (see Embrechts et al. [1997], p. 121 and p. 554) ensures that the limiting distribution is uniquely determined up to affine transformations.4

Under the iid assumption, the exact distribution function of the maximum $M_n$ is

$$P(M_n \le x) = P(X_1 \le x, \ldots, X_n \le x) = F^n(x), \qquad x \in \mathbb{R}, \; n \in \mathbb{N}. \qquad (2.16)$$

As a result of (2.16) and the fact that the extreme value distribution functions are continuous on $\mathbb{R}$, $c_n^{-1}(M_n - d_n) \xrightarrow{d} H$ is equivalent to

$$\lim_{n\to\infty} P(M_n \le c_n x + d_n) = \lim_{n\to\infty} F^n(c_n x + d_n) = H(x), \qquad (2.17)$$

or equivalently, by taking logarithms and using $-\ln(1-y) \sim y$ as $y \to 0$,

$$\lim_{n\to\infty} n\left(1 - F(c_n x + d_n)\right) = \lim_{n\to\infty} n \bar{F}(c_n x + d_n) = -\ln H(x). \qquad (2.18)$$

4 Using the identity $\min(X_1, \ldots, X_n) = -\max(-X_1, \ldots, -X_n)$, it can be shown that the appropriate limiting distributions for minima are of the type $1 - H_\xi(-x)$; see McNeil et al. [2005].
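The convergence in (2.17) can be checked numerically. This short sketch is our own illustration: the standard exponential distribution lies in the Gumbel domain of attraction with norming constants $c_n = 1$, $d_n = \ln n$, so $F^n(x + \ln n)$ should approach $\Lambda(x)$ as $n$ grows.

```python
import numpy as np

# F is the standard exponential df, F(x) = 1 - exp(-x), which belongs to
# MDA(Lambda) with norming constants c_n = 1 and d_n = ln(n).
F = lambda x: 1.0 - np.exp(-x)
gumbel = lambda x: np.exp(-np.exp(-x))       # Lambda(x)

x = 1.0
for n in [10, 100, 10_000]:
    exact = F(x + np.log(n)) ** n            # P(M_n <= c_n x + d_n) = F^n(...)
    print(n, exact, gumbel(x))               # exact approaches Lambda(1)
```

For $n = 10{,}000$ the exact probability and the Gumbel limit already agree to several decimal places.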

In fact, we have the following more general equivalence: for $0 \le \tau \le \infty$ and any sequence $u_n$ of real numbers,

$$\lim_{n\to\infty} n \bar{F}(u_n) = \tau \iff \lim_{n\to\infty} P(M_n \le u_n) = e^{-\tau}, \qquad (2.19)$$

where $\bar{F} := 1 - F$ denotes the tail of $F$.

Definition 7 (Maximum Domain of Attraction). If (2.17) holds for some norming constants $c_n > 0$, $d_n \in \mathbb{R}$ and non-degenerate distribution function $H$, we say that the distribution function $F$ belongs to the maximum domain of attraction of the extreme value distribution $H$, and we write $F \in \mathrm{MDA}(H)$.

Consequently, we can restate Theorem 1:

Theorem 2 (Fisher-Tippett-Gnedenko Theorem Restated). If $F$ is in the maximum domain of attraction of some non-degenerate distribution function $H$ ($F \in \mathrm{MDA}(H)$), then $H$ must be a GEV distribution, i.e. $H$ belongs to the distribution family $H_\xi$.

The Fisher-Tippett-Gnedenko Theorem essentially says that the GEV distribution is the only possible limiting distribution for normalized maxima. If $\xi = \alpha^{-1} > 0$, $F$ is said to be in the maximum domain of attraction of the Fréchet distribution $\Phi_\alpha$. Distributions in this class include the Pareto, t, Burr, log-gamma, and Cauchy distributions. If $\xi = 0$, $F$ is in the maximum domain of attraction of the Gumbel distribution $\Lambda$, which includes distributions such as the normal, log-normal, and gamma distributions. Finally, if $\xi = -\alpha^{-1} < 0$, $F$ is in the maximum domain of attraction of the Weibull distribution $\Psi_\alpha$, which includes distributions such as the uniform and beta distributions [McNeil et al., 2005, Embrechts et al., 1997].

Maximum Domain of Attraction

In the following we investigate which underlying distributions $F$ give rise to which limit laws by characterizing the maximum domain of attraction of each of the three extreme value distributions.

For the Fréchet distribution, the maximum domain of attraction consists of distribution functions $F$ whose tails are regularly varying5 with negative index of variation. Regularly varying functions are functions which can be represented by power functions multiplied by slowly varying6 functions.

5 A Lebesgue-measurable function $\psi: \mathbb{R}_+ \to \mathbb{R}_+$ is regularly varying at $\infty$ with index $\rho \in \mathbb{R}$ if $\lim_{x\to\infty} \psi(tx)/\psi(x) = t^\rho$ for all $t > 0$, and we write $\psi \in RV_\rho$; see Resnick [2007].

6 A Lebesgue-measurable function $L: \mathbb{R}_+ \to \mathbb{R}_+$ is slowly varying at $\infty$ if $\lim_{x\to\infty} L(tx)/L(x) = 1$ for all $t > 0$, and we write $L \in RV_0$; see Resnick [2007].

Thus, the distribution function $F$ belongs to the maximum domain of attraction of $\Phi_\alpha$, $\alpha > 0$, if and only if $\bar{F}(x) = x^{-\alpha} L(x)$ for some slowly varying function $L$. That is,

$$F \in \mathrm{MDA}(\Phi_\alpha) \iff \bar{F} \in RV_{-\alpha},$$

where $\alpha = 1/\xi$ is called the tail index of the distribution. This class of distribution functions contains very heavy-tailed distributions in the sense that $E[X^k] = \infty$ for $k > \alpha$ for a non-negative random variable $X$ with distribution function $F \in \mathrm{MDA}(\Phi_\alpha)$, which makes these distributions particularly attractive for modeling large fluctuations in log-returns and other financial applications.

The maximum domain of attraction of the Weibull distribution consists of distribution functions $F$ with support bounded to the right, i.e. they have a finite right endpoint $x_F = \sup\{x \in \mathbb{R} : F(x) < 1\} < \infty$. The distribution function $F$ belongs to the maximum domain of attraction of $\Psi_\alpha$, $\alpha > 0$, if and only if $x_F < \infty$ and $\bar{F}(x_F - x^{-1}) = x^{-\alpha} L(x)$ for some slowly varying function $L$. That is,

$$F \in \mathrm{MDA}(\Psi_\alpha) \iff x_F < \infty \text{ and } \bar{F}(x_F - x^{-1}) \in RV_{-\alpha}.$$

The fact that $x_F < \infty$ renders this class of distributions the least appropriate for modeling extremal events in finance. In practice, financial losses clearly have an upper limit, but distributions with $x_F = \infty$ are often favored since they allow for arbitrarily large losses in a sample.

The maximum domain of attraction of the Gumbel distribution consists of the so-called von Mises distribution functions and their tail-equivalent distribution functions. A distribution function $F$ is called a von Mises function if there exists $z < x_F$ such that $\bar{F}$ has the representation

$$\bar{F}(x) = c \exp\left\{-\int_z^x \frac{1}{a(t)}\,dt\right\}, \qquad z < x < x_F,$$

where $c$ is a positive constant and $a(\cdot)$ is a positive, absolutely continuous function with density $a'$ satisfying $\lim_{x \to x_F} a'(x) = 0$. Furthermore, two distribution functions $F$ and $G$ are called tail-equivalent if they have the same right endpoint, i.e. $x_F = x_G$, and $\lim_{x \to x_F} \bar{F}(x)/\bar{G}(x) = c$ for some constant $0 < c < \infty$ [Embrechts et al., 1997].
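The regular-variation characterization of the Fréchet class can be illustrated numerically. This sketch is ours, using scipy: the Student-t distribution with $\nu$ degrees of freedom lies in $\mathrm{MDA}(\Phi_\nu)$, i.e. its tail index equals $\nu$, so the tail ratio $\bar{F}(tx)/\bar{F}(x)$ should approach $t^{-\nu}$ for large $x$.

```python
from scipy import stats

nu = 3.0      # Student-t with nu degrees of freedom: tail index alpha = nu
mult = 2.0    # the multiplier t in the regular-variation limit

# F-bar(mult * x) / F-bar(x) should approach mult^(-alpha) as x grows,
# since F-bar is regularly varying with index -alpha.
for x in [5.0, 50.0, 500.0]:
    ratio = stats.t.sf(mult * x, df=nu) / stats.t.sf(x, df=nu)
    print(x, ratio, mult ** -nu)     # ratio tends to 2^(-3) = 0.125
```

The same experiment with a normal distribution would show the ratio collapsing to zero, reflecting its faster-than-any-power-law tail decay.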
$\mathrm{MDA}(\Lambda)$ contains a large variety of distributions with very different tails, ranging from light-tailed distributions (e.g. the exponential or Gaussian distributions) to moderately heavy-tailed distributions (e.g. the log-normal or heavy-tailed Weibull distributions), which makes the Gumbel class interesting for financial applications alongside the Fréchet class [McNeil et al., 2005]. However, the tails of the distributions in the Gumbel class decrease to zero much faster than any power law, and thus faster than the regularly varying power tails of the Fréchet class.

A non-negative random variable $X$ with distribution function $F \in \mathrm{MDA}(\Lambda)$ has finite moments of any positive order, i.e. $E[X^k] < \infty$ for every $k > 0$. Also, the distributions in the Gumbel class can have both finite and infinite right endpoints $x_F$ [Embrechts et al., 1997].

Method for Block Maxima Modeling

Based on the theoretical results presented in the previous sections, we are now ready to present the practical and statistical application of the block maxima model. Assume that we have iid data from an underlying distribution with distribution function $F \in \mathrm{MDA}(H_\xi)$. We know from the previous sections, and Theorem 1 in particular, that the true distribution of the maximum $M_n$ can be approximated by a GEV distribution for large $n$. In practice, we do not know the true distribution of losses and can therefore not determine the norming constants $c_n$ and $d_n$; thus we use the three-parameter specification $H_{\xi,\mu,\sigma}$, where $c_n$ and $d_n$ have been replaced by $\sigma > 0$ and $\mu$ [McNeil et al., 2005].

The implementation of the method is relatively straightforward. First, we divide the data into $m$ blocks of size $n$ and collect the maximum value in each block, denoting the block maximum of the $j$th block by $M_n^{(j)}$. This, of course, requires that the data can be divided in some natural way. Assuming that we are dealing with daily return data (or, similarly, daily losses), we could e.g. divide the data into monthly, quarterly or yearly blocks.7 However, to avoid seasonality, it might be preferable to choose yearly periods [Gilli and Kellezi, 2006].

Next, we fit the three-parameter GEV distribution to the $m$ block maximum observations $M_n^{(1)}, \ldots, M_n^{(m)}$. One estimation procedure is the theoretically well-established maximum likelihood (ML) method [Prescott and Walden, 1983, Hosking, 1985], which allows us to give estimates of the statistical error of the parameter estimates. However, alternative methods do exist; e.g. Hosking et al.
[1985] propose the method of probability-weighted moments (PWM), but the theoretical justification for this method is less well-established [Embrechts et al., 1997].

Assuming that the block size $n$ is large enough that the $m$ block maximum observations can be treated as independent, regardless of whether the underlying data are dependent, the likelihood function based on the data $M_n^{(1)}, \ldots, M_n^{(m)}$ is given by

$$L(\xi, \mu, \sigma; M_n^{(1)}, \ldots, M_n^{(m)}) = \prod_{i=1}^{m} h_{\xi,\mu,\sigma}(M_n^{(i)}),$$

where $h_{\xi,\mu,\sigma}$ is the density function of the GEV distribution given in (2.15).

7 We ignore that the exact number of days in each block will differ slightly.
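The two steps of the method, forming block maxima and maximizing the GEV likelihood, can be carried out with standard numerical software. The sketch below is our own illustration on simulated data, not the thesis's implementation; it uses scipy, whose `genextreme` distribution parameterizes the shape as $c = -\xi$ relative to $H_{\xi,\mu,\sigma}$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy data: 20 "years" of 250 daily iid t-distributed losses; in a real
# application the rows would be yearly blocks of observed daily losses.
daily_losses = rng.standard_t(df=4, size=(20, 250))

# Step 1: collect the block maximum M_n^(j) of each of the m = 20 blocks.
block_maxima = daily_losses.max(axis=1)

# Step 2: maximum likelihood fit of the three-parameter GEV. Note that
# scipy's genextreme uses the shape convention c = -xi.
c_hat, mu_hat, sigma_hat = stats.genextreme.fit(block_maxima)
xi_hat = -c_hat
```

Since the t distribution with 4 degrees of freedom lies in $\mathrm{MDA}(\Phi_4)$, the estimated shape should, for enough blocks, be near the true value $\xi = 1/4$; with only 20 block maxima the estimate is, of course, noisy.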


More information

Maximum Likelihood Estimation

Maximum Likelihood Estimation Math 541: Statistical Theory II Lecturer: Songfeng Zheng Maximum Likelihood Estimation 1 Maximum Likelihood Estimation Maximum likelihood is a relatively simple method of constructing an estimator for

More information

1 Short Introduction to Time Series

1 Short Introduction to Time Series ECONOMICS 7344, Spring 202 Bent E. Sørensen January 24, 202 Short Introduction to Time Series A time series is a collection of stochastic variables x,.., x t,.., x T indexed by an integer value t. The

More information

Example: Credit card default, we may be more interested in predicting the probabilty of a default than classifying individuals as default or not.

Example: Credit card default, we may be more interested in predicting the probabilty of a default than classifying individuals as default or not. Statistical Learning: Chapter 4 Classification 4.1 Introduction Supervised learning with a categorical (Qualitative) response Notation: - Feature vector X, - qualitative response Y, taking values in C

More information

CITIGROUP INC. BASEL II.5 MARKET RISK DISCLOSURES AS OF AND FOR THE PERIOD ENDED MARCH 31, 2013

CITIGROUP INC. BASEL II.5 MARKET RISK DISCLOSURES AS OF AND FOR THE PERIOD ENDED MARCH 31, 2013 CITIGROUP INC. BASEL II.5 MARKET RISK DISCLOSURES AS OF AND FOR THE PERIOD ENDED MARCH 31, 2013 DATED AS OF MAY 15, 2013 Table of Contents Qualitative Disclosures Basis of Preparation and Review... 3 Risk

More information

LDA at Work: Deutsche Bank s Approach to Quantifying Operational Risk

LDA at Work: Deutsche Bank s Approach to Quantifying Operational Risk LDA at Work: Deutsche Bank s Approach to Quantifying Operational Risk Workshop on Financial Risk and Banking Regulation Office of the Comptroller of the Currency, Washington DC, 5 Feb 2009 Michael Kalkbrener

More information

How to Model Operational Risk, if You Must

How to Model Operational Risk, if You Must How to Model Operational Risk, if You Must Paul Embrechts ETH Zürich (www.math.ethz.ch/ embrechts) Based on joint work with V. Chavez-Demoulin, H. Furrer, R. Kaufmann, J. Nešlehová and G. Samorodnitsky

More information

Copulas in Financial Risk Management Dr Jorn Rank Department for Continuing Education University of Oxford A thesis submitted for the diploma in Mathematical Finance 4 August 2000 Diploma in Mathematical

More information

Multivariate Normal Distribution

Multivariate Normal Distribution Multivariate Normal Distribution Lecture 4 July 21, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2 Lecture #4-7/21/2011 Slide 1 of 41 Last Time Matrices and vectors Eigenvalues

More information

ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE

ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE YUAN TIAN This synopsis is designed merely for keep a record of the materials covered in lectures. Please refer to your own lecture notes for all proofs.

More information

Booth School of Business, University of Chicago Business 41202, Spring Quarter 2015, Mr. Ruey S. Tsay. Solutions to Midterm

Booth School of Business, University of Chicago Business 41202, Spring Quarter 2015, Mr. Ruey S. Tsay. Solutions to Midterm Booth School of Business, University of Chicago Business 41202, Spring Quarter 2015, Mr. Ruey S. Tsay Solutions to Midterm Problem A: (30 pts) Answer briefly the following questions. Each question has

More information

An analysis of the dependence between crude oil price and ethanol price using bivariate extreme value copulas

An analysis of the dependence between crude oil price and ethanol price using bivariate extreme value copulas The Empirical Econometrics and Quantitative Economics Letters ISSN 2286 7147 EEQEL all rights reserved Volume 3, Number 3 (September 2014), pp. 13-23. An analysis of the dependence between crude oil price

More information

How To Know Market Risk

How To Know Market Risk Chapter 6 Market Risk for Single Trading Positions Market risk is the risk that the market value of trading positions will be adversely influenced by changes in prices and/or interest rates. For banks,

More information

Implications of Alternative Operational Risk Modeling Techniques *

Implications of Alternative Operational Risk Modeling Techniques * Implications of Alternative Operational Risk Modeling Techniques * Patrick de Fontnouvelle, Eric Rosengren Federal Reserve Bank of Boston John Jordan FitchRisk June, 2004 Abstract Quantification of operational

More information

VaR-x: Fat tails in nancial risk management

VaR-x: Fat tails in nancial risk management VaR-x: Fat tails in nancial risk management Ronald Huisman, Kees G. Koedijk, and Rachel A. J. Pownall To ensure a competent regulatory framework with respect to value-at-risk (VaR) for establishing a bank's

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.cs.toronto.edu/~rsalakhu/ Lecture 6 Three Approaches to Classification Construct

More information

A Uniform Asymptotic Estimate for Discounted Aggregate Claims with Subexponential Tails

A Uniform Asymptotic Estimate for Discounted Aggregate Claims with Subexponential Tails 12th International Congress on Insurance: Mathematics and Economics July 16-18, 2008 A Uniform Asymptotic Estimate for Discounted Aggregate Claims with Subexponential Tails XUEMIAO HAO (Based on a joint

More information

Statistics in Retail Finance. Chapter 6: Behavioural models

Statistics in Retail Finance. Chapter 6: Behavioural models Statistics in Retail Finance 1 Overview > So far we have focussed mainly on application scorecards. In this chapter we shall look at behavioural models. We shall cover the following topics:- Behavioural

More information

Validating Market Risk Models: A Practical Approach

Validating Market Risk Models: A Practical Approach Validating Market Risk Models: A Practical Approach Doug Gardner Wells Fargo November 2010 The views expressed in this presentation are those of the author and do not necessarily reflect the position of

More information

Spatial Statistics Chapter 3 Basics of areal data and areal data modeling

Spatial Statistics Chapter 3 Basics of areal data and areal data modeling Spatial Statistics Chapter 3 Basics of areal data and areal data modeling Recall areal data also known as lattice data are data Y (s), s D where D is a discrete index set. This usually corresponds to data

More information

Time Series Analysis

Time Series Analysis JUNE 2012 Time Series Analysis CONTENT A time series is a chronological sequence of observations on a particular variable. Usually the observations are taken at regular intervals (days, months, years),

More information

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

More information

APPLYING COPULA FUNCTION TO RISK MANAGEMENT. Claudio Romano *

APPLYING COPULA FUNCTION TO RISK MANAGEMENT. Claudio Romano * APPLYING COPULA FUNCTION TO RISK MANAGEMENT Claudio Romano * Abstract This paper is part of the author s Ph. D. Thesis Extreme Value Theory and coherent risk measures: applications to risk management.

More information

Pricing Alternative forms of Commercial Insurance cover

Pricing Alternative forms of Commercial Insurance cover Pricing Alternative forms of Commercial Insurance cover Prepared by Andrew Harford Presented to the Institute of Actuaries of Australia Biennial Convention 23-26 September 2007 Christchurch, New Zealand

More information

Monte Carlo and Empirical Methods for Stochastic Inference (MASM11/FMS091)

Monte Carlo and Empirical Methods for Stochastic Inference (MASM11/FMS091) Monte Carlo and Empirical Methods for Stochastic Inference (MASM11/FMS091) Magnus Wiktorsson Centre for Mathematical Sciences Lund University, Sweden Lecture 5 Sequential Monte Carlo methods I February

More information

Chapter 1 INTRODUCTION. 1.1 Background

Chapter 1 INTRODUCTION. 1.1 Background Chapter 1 INTRODUCTION 1.1 Background This thesis attempts to enhance the body of knowledge regarding quantitative equity (stocks) portfolio selection. A major step in quantitative management of investment

More information

Business Valuation under Uncertainty

Business Valuation under Uncertainty Business Valuation under Uncertainty ONDŘEJ NOWAK, JIŘÍ HNILICA Department of Business Economics University of Economics Prague W. Churchill Sq. 4, 130 67 Prague 3 CZECH REPUBLIC ondrej.nowak@vse.cz http://kpe.fph.vse.cz

More information

Pricing of a worst of option using a Copula method M AXIME MALGRAT

Pricing of a worst of option using a Copula method M AXIME MALGRAT Pricing of a worst of option using a Copula method M AXIME MALGRAT Master of Science Thesis Stockholm, Sweden 2013 Pricing of a worst of option using a Copula method MAXIME MALGRAT Degree Project in Mathematical

More information

FORWARDS AND EUROPEAN OPTIONS ON CDO TRANCHES. John Hull and Alan White. First Draft: December, 2006 This Draft: March 2007

FORWARDS AND EUROPEAN OPTIONS ON CDO TRANCHES. John Hull and Alan White. First Draft: December, 2006 This Draft: March 2007 FORWARDS AND EUROPEAN OPTIONS ON CDO TRANCHES John Hull and Alan White First Draft: December, 006 This Draft: March 007 Joseph L. Rotman School of Management University of Toronto 105 St George Street

More information

This PDF is a selection from a published volume from the National Bureau of Economic Research. Volume Title: The Risks of Financial Institutions

This PDF is a selection from a published volume from the National Bureau of Economic Research. Volume Title: The Risks of Financial Institutions This PDF is a selection from a published volume from the National Bureau of Economic Research Volume Title: The Risks of Financial Institutions Volume Author/Editor: Mark Carey and René M. Stulz, editors

More information

LOGNORMAL MODEL FOR STOCK PRICES

LOGNORMAL MODEL FOR STOCK PRICES LOGNORMAL MODEL FOR STOCK PRICES MICHAEL J. SHARPE MATHEMATICS DEPARTMENT, UCSD 1. INTRODUCTION What follows is a simple but important model that will be the basis for a later study of stock prices as

More information

Affine-structure models and the pricing of energy commodity derivatives

Affine-structure models and the pricing of energy commodity derivatives Affine-structure models and the pricing of energy commodity derivatives Nikos K Nomikos n.nomikos@city.ac.uk Cass Business School, City University London Joint work with: Ioannis Kyriakou, Panos Pouliasis

More information

The Dangers of Using Correlation to Measure Dependence

The Dangers of Using Correlation to Measure Dependence ALTERNATIVE INVESTMENT RESEARCH CENTRE WORKING PAPER SERIES Working Paper # 0010 The Dangers of Using Correlation to Measure Dependence Harry M. Kat Professor of Risk Management, Cass Business School,

More information

Forecasting model of electricity demand in the Nordic countries. Tone Pedersen

Forecasting model of electricity demand in the Nordic countries. Tone Pedersen Forecasting model of electricity demand in the Nordic countries Tone Pedersen 3/19/2014 Abstract A model implemented in order to describe the electricity demand on hourly basis for the Nordic countries.

More information

NEXT GENERATION RISK MANAGEMENT and PORTFOLIO CONSTRUCTION

NEXT GENERATION RISK MANAGEMENT and PORTFOLIO CONSTRUCTION STONYBROOK UNIVERSITY CENTER FOR QUANTITATIVE FINANCE EXECUTIVE EDUCATION COURSE NEXT GENERATION RISK MANAGEMENT and PORTFOLIO CONSTRUCTION A four-part series LED BY DR. SVETLOZAR RACHEV, DR. BORYANA RACHEVA-IOTOVA,

More information

Black-Scholes-Merton approach merits and shortcomings

Black-Scholes-Merton approach merits and shortcomings Black-Scholes-Merton approach merits and shortcomings Emilia Matei 1005056 EC372 Term Paper. Topic 3 1. Introduction The Black-Scholes and Merton method of modelling derivatives prices was first introduced

More information

Measurement of Banks Exposure to Interest Rate Risk and Principles for the Management of Interest Rate Risk respectively.

Measurement of Banks Exposure to Interest Rate Risk and Principles for the Management of Interest Rate Risk respectively. INTEREST RATE RISK IN THE BANKING BOOK Over the past decade the Basel Committee on Banking Supervision (the Basel Committee) has released a number of consultative documents discussing the management and

More information

1 Prior Probability and Posterior Probability

1 Prior Probability and Posterior Probability Math 541: Statistical Theory II Bayesian Approach to Parameter Estimation Lecturer: Songfeng Zheng 1 Prior Probability and Posterior Probability Consider now a problem of statistical inference in which

More information

Market Risk Capital Disclosures Report. For the Quarter Ended March 31, 2013

Market Risk Capital Disclosures Report. For the Quarter Ended March 31, 2013 MARKET RISK CAPITAL DISCLOSURES REPORT For the quarter ended March 31, 2013 Table of Contents Section Page 1 Morgan Stanley... 1 2 Risk-based Capital Guidelines: Market Risk... 1 3 Market Risk... 1 3.1

More information

INTERNATIONAL COMPARISON OF INTEREST RATE GUARANTEES IN LIFE INSURANCE

INTERNATIONAL COMPARISON OF INTEREST RATE GUARANTEES IN LIFE INSURANCE INTERNATIONAL COMPARISON OF INTEREST RATE GUARANTEES IN LIFE INSURANCE J. DAVID CUMMINS, KRISTIAN R. MILTERSEN, AND SVEIN-ARNE PERSSON Abstract. Interest rate guarantees seem to be included in life insurance

More information

Evaluating the Lead Time Demand Distribution for (r, Q) Policies Under Intermittent Demand

Evaluating the Lead Time Demand Distribution for (r, Q) Policies Under Intermittent Demand Proceedings of the 2009 Industrial Engineering Research Conference Evaluating the Lead Time Demand Distribution for (r, Q) Policies Under Intermittent Demand Yasin Unlu, Manuel D. Rossetti Department of

More information

Quantitative Inventory Uncertainty

Quantitative Inventory Uncertainty Quantitative Inventory Uncertainty It is a requirement in the Product Standard and a recommendation in the Value Chain (Scope 3) Standard that companies perform and report qualitative uncertainty. This

More information

Generating Random Samples from the Generalized Pareto Mixture Model

Generating Random Samples from the Generalized Pareto Mixture Model Generating Random Samples from the Generalized Pareto Mixture Model MUSTAFA ÇAVUŞ AHMET SEZER BERNA YAZICI Department of Statistics Anadolu University Eskişehir 26470 TURKEY mustafacavus@anadolu.edu.tr

More information

EXTREMES ON THE DISCOUNTED AGGREGATE CLAIMS IN A TIME DEPENDENT RISK MODEL

EXTREMES ON THE DISCOUNTED AGGREGATE CLAIMS IN A TIME DEPENDENT RISK MODEL EXTREMES ON THE DISCOUNTED AGGREGATE CLAIMS IN A TIME DEPENDENT RISK MODEL Alexandru V. Asimit 1 Andrei L. Badescu 2 Department of Statistics University of Toronto 100 St. George St. Toronto, Ontario,

More information

Benchmark Rates for XL Reinsurance Revisited: Model Comparison for the Swiss MTPL Market

Benchmark Rates for XL Reinsurance Revisited: Model Comparison for the Swiss MTPL Market Benchmark Rates for XL Reinsurance Revisited: Model Comparison for the Swiss MTPL Market W. Hürlimann 1 Abstract. We consider the dynamic stable benchmark rate model introduced in Verlaak et al. (005),

More information

Using simulation to calculate the NPV of a project

Using simulation to calculate the NPV of a project Using simulation to calculate the NPV of a project Marius Holtan Onward Inc. 5/31/2002 Monte Carlo simulation is fast becoming the technology of choice for evaluating and analyzing assets, be it pure financial

More information

Optimization under uncertainty: modeling and solution methods

Optimization under uncertainty: modeling and solution methods Optimization under uncertainty: modeling and solution methods Paolo Brandimarte Dipartimento di Scienze Matematiche Politecnico di Torino e-mail: paolo.brandimarte@polito.it URL: http://staff.polito.it/paolo.brandimarte

More information

Introduction to General and Generalized Linear Models

Introduction to General and Generalized Linear Models Introduction to General and Generalized Linear Models General Linear Models - part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby

More information

Bootstrapping Big Data

Bootstrapping Big Data Bootstrapping Big Data Ariel Kleiner Ameet Talwalkar Purnamrita Sarkar Michael I. Jordan Computer Science Division University of California, Berkeley {akleiner, ameet, psarkar, jordan}@eecs.berkeley.edu

More information

Risk Measures, Risk Management, and Optimization

Risk Measures, Risk Management, and Optimization Risk Measures, Risk Management, and Optimization Prof. Dr. Svetlozar (Zari) Rachev Frey Family Foundation Chair-Professor, Applied Mathematics and Statistics, Stony Brook University Chief Scientist, FinAnalytica

More information

Financial Development and Macroeconomic Stability

Financial Development and Macroeconomic Stability Financial Development and Macroeconomic Stability Vincenzo Quadrini University of Southern California Urban Jermann Wharton School of the University of Pennsylvania January 31, 2005 VERY PRELIMINARY AND

More information

Measuring Portfolio Value at Risk

Measuring Portfolio Value at Risk Measuring Portfolio Value at Risk Chao Xu 1, Huigeng Chen 2 Supervisor: Birger Nilsson Department of Economics School of Economics and Management, Lund University May 2012 1 saintlyjinn@hotmail.com 2 chenhuigeng@gmail.com

More information

Contributions to Modeling Extreme Events on Financial and Electricity Markets

Contributions to Modeling Extreme Events on Financial and Electricity Markets Contributions to Modeling Extreme Events on Financial and Electricity Markets Inauguraldissertation zur Erlangung des Doktorgrades der Wirtschafts- und Sozialwissenschaftlichen Fakultät der Universität

More information

Charles University, Faculty of Mathematics and Physics, Prague, Czech Republic.

Charles University, Faculty of Mathematics and Physics, Prague, Czech Republic. WDS'09 Proceedings of Contributed Papers, Part I, 148 153, 2009. ISBN 978-80-7378-101-9 MATFYZPRESS Volatility Modelling L. Jarešová Charles University, Faculty of Mathematics and Physics, Prague, Czech

More information