The Skewed t Distribution for Portfolio Credit Risk
Wenbo Hu, Bell Trading
Alec N. Kercheval, Florida State University

Abstract. Portfolio credit derivatives, such as basket credit default swaps (basket CDS), require for their pricing an estimate of the dependence structure of defaults, which is known to exhibit tail dependence as reflected in observed default contagion. A popular model with this property is the (Student's) t copula; unfortunately there is no fast method to calibrate the degree of freedom parameter. In this paper, within the framework of Schönbucher's copula-based trigger-variable model for basket CDS pricing, we propose instead to calibrate the full multivariate t distribution. We describe a version of the EM algorithm that provides very fast calibration speeds compared to the current copula-based alternatives. The algorithm generalizes easily to the more flexible skewed t distributions. To our knowledge, we are the first to use the skewed t distribution in this context.

Journal of Economic Literature Classification Codes: C16, G12.

Key words: Portfolio Credit Risk, Basket Credit Default Swaps, Skewed t Distribution, t Distribution, t Copula.

Wenbo Hu: Bell Trading, [email protected]. Alec N. Kercheval (corresponding author): Department of Mathematics, Florida State University, Tallahassee, FL, [email protected].
1 Introduction

For portfolio risk modeling and basket derivative pricing, it is essential to understand the dependence structure of prices, default times, or other asset-related variables. This structure is completely described by the second moments (the covariance matrix) for jointly Normal variables, so practitioners often use the covariance matrix as a simple proxy for multivariate dependence. However, it is widely acknowledged that prices, returns, and other financial variables are not Normally distributed. They have fat tails and exhibit tail dependence (see Section 4), in which correlations are observed to rise during extreme events. Therefore there is a need for practical ways to use more general multivariate distributions to model joint price behavior. This raises the question of how to choose these distributions and, once chosen, how to calibrate them efficiently to data.

In this paper we look at the multivariate (Student's) t distribution, which has become a popular choice because of its heavy tails and non-zero tail dependence, and its generalization, the skewed t distribution, described, for example, by Demarta and McNeil (2005); see Section 2 below.

It has become popular and useful to isolate the dependence structure of a distribution from the individual marginal distributions by looking at its copula (see Section 3). Copulas that come from known distributions inherit their names: e.g., we have the Gaussian copulas, the t copulas, etc.
There are now many financial applications of copulas. For example, Di Clemente and Romano (2003b) used copulas to minimize expected shortfall (ES) in modeling operational risk. Di Clemente and Romano (2004) applied the same framework to the portfolio optimization of credit default swaps. Masala et al. (working paper) used the t copula and a transition matrix with a gamma-distributed hazard rate and a beta-distributed recovery rate to compute the efficient frontier for credit portfolios by minimizing ES.

The success of copulas depends both on good algorithms for calibrating the copula itself, and on the availability of a fast algorithm to calculate the cumulative distribution functions (CDF) and quantiles of the corresponding one-dimensional marginal distributions. The calibration of a t copula is very fast if we fix the degree of freedom parameter $\nu$, which in turn is optimized by maximizing a log likelihood; however, the latter is slow. Detailed algorithms for calibrating t copulas can be found in the work of many researchers, such as Di Clemente and Romano (2003a), Demarta and McNeil (2005), Mashal and Naldi (2002), and Galiani (2003).

The calibration of a t copula is (by definition) separate from the calibration of the marginal distributions. It is generally suggested to use the empirical distributions to fit the margins, but empirical distributions tend to have poor performance in the tails. A hybrid of the parametric and non-parametric methods uses the empirical distribution in the center and a generalized Pareto distribution (GPD) in the tails. Some use a Gaussian
distribution in the center. To model multivariate losses, Di Clemente and Romano (2003a) used a t copula with margins built from a Gaussian distribution in the center and left tail and a GPD in the right tail. We will be able to avoid these issues because we can effectively calibrate the full distribution directly by using t or skewed t distributions.

In this paper, the primary application we have in mind is portfolio credit risk, specifically the pricing of multiname credit derivatives such as kth-to-default basket credit default swaps (basket CDS). For this problem, the most important issue is the correlation structure among the obligors as described by the copula of their default times. Unfortunately, defaults are rarely observed, so it is difficult to calibrate their correlations directly. In this paper we follow Cherubini et al. (2004) and use the distribution of daily equity prices to proxy the dependence structure of default times. See Section 6.2 below.

Several groups have discussed the pricing of basket CDS and CDOs via copulas, such as Galiani (2003), Mashal and Naldi (2002), and Meneguzzo and Vecchiato (2002), among others. However, in this paper, we find that calibrating the full joint distribution is much faster than calibrating the copula separately, because of the availability of the EM algorithm discussed below.

In Hu (2005), we looked at the large family of generalized hyperbolic distributions to model multivariate equity returns by using the EM algorithm (see Section 2). We showed that the skewed t has better performance and faster convergence than other generalized hyperbolic distributions. Furthermore,
for the t distribution, we have greatly simplified formulas and an even faster algorithm. For the t copula, there is still no good method to calibrate the degree of freedom $\nu$ except to find it by direct search. The calibration of a t copula takes days, while the calibration of a skewed t or t distribution via the EM algorithm takes minutes. To our knowledge, we are the first to directly calibrate the skewed t or t distributions to price basket credit default swaps.

This paper is organized as follows. In Section 2, we introduce the skewed t distribution from the normal mean-variance mixture family and provide a version of the EM algorithm to calibrate it, including the limiting t distribution. We give an introduction to copulas in Section 3, and review rank correlation and tail dependence in Section 4. In Section 5, we follow Rutkowski (1999) to review the reduced form approach to single name credit risk. In Section 6, we follow Schönbucher (2003) to provide our model setup for calculating default probabilities for the k-th to default using a copula-based trigger-variable method. There we also discuss the calibration problem. In Section 7 we apply all the previous ideas to describe a method for pricing basket credit default swaps. We illustrate how selecting model copulas with different tail dependence coefficients influences the relative probabilities of first and last to default. We then argue that calibrating the skewed t distribution is the best and fastest approach among the common alternatives.
2 Skewed t distributions and the EM algorithm

2.1 Skewed t and t distributions

Definition 2.1 (Inverse Gamma Distribution). The random variable $X$ has an inverse gamma distribution, written $X \sim \mathrm{InverseGamma}(\alpha, \beta)$, if its probability density function is

(2.1) $f(x) = \frac{\beta^{\alpha} x^{-\alpha-1} e^{-\beta/x}}{\Gamma(\alpha)}, \qquad x > 0,\ \alpha > 0,\ \beta > 0,$

where $\Gamma$ is the usual gamma function. We have the following standard formulas:

(2.2) $E(X) = \frac{\beta}{\alpha - 1}, \quad \text{if } \alpha > 1,$

(2.3) $\mathrm{Var}(X) = \frac{\beta^2}{(\alpha-1)^2(\alpha-2)}, \quad \text{if } \alpha > 2,$

(2.4) $E(\log X) = \log(\beta) - \psi(\alpha),$

where

(2.5) $\psi(x) = \frac{d}{dx}\log\Gamma(x)$

is the digamma function.

The skewed t distribution is a subfamily of the generalized hyperbolic distributions; see McNeil et al. (2005), who suggested the name skewed t. It can be represented as a normal mean-variance mixture, where the mixture variable is inverse gamma distributed.
Definition 2.2 (Normal Mean-Variance Mixture Representation of the Skewed t Distribution). Let $\mu$ and $\gamma$ be parameter vectors in $\mathbb{R}^d$, $\Sigma$ a $d \times d$ real positive semidefinite matrix, and $\nu > 2$. The $d$-dimensional skewed t distributed random vector $X$, denoted $X \sim \mathrm{SkewedT}_d(\nu, \mu, \Sigma, \gamma)$, is a multivariate normal mean-variance mixture variable with distribution given by

(2.6) $X \stackrel{d}{=} \mu + W\gamma + \sqrt{W}\,Z,$

where

1. $Z \sim N_d(0, \Sigma)$, the multivariate normal with mean $0$ and covariance $\Sigma$,
2. $W \sim \mathrm{InverseGamma}(\nu/2, \nu/2)$, and
3. $W$ is independent of $Z$.

Here, $\mu$ are location parameters, $\gamma$ are skewness parameters, and $\nu$ is the degree of freedom. From the definition, we can see that

(2.7) $X \mid W \sim N_d(\mu + W\gamma,\ W\Sigma).$

This is also why it is called a normal mean-variance mixture distribution. We can get the following moment formulas easily from the mixture definition:

(2.8) $E(X) = \mu + E(W)\gamma,$
(2.9) $\mathrm{COV}(X) = E(W)\Sigma + \mathrm{var}(W)\gamma\gamma',$

when the mixture variable $W$ has finite variance $\mathrm{var}(W)$.

Definition 2.3. Setting $\gamma$ equal to zero in Definition 2.2 defines the multivariate t distribution,

(2.10) $X \stackrel{d}{=} \mu + \sqrt{W}\,Z.$

For convenience we next give the density functions of these distributions. Denote by $K_\lambda(x)$, $x > 0$, the modified Bessel function of the third kind with index $\lambda$:

$K_\lambda(x) = \frac{1}{2}\int_0^\infty y^{\lambda-1} e^{-\frac{x}{2}(y + y^{-1})}\, dy.$

The following formula may be computed using (2.7), and is given in McNeil et al. (2005).

Proposition 2.4 (Skewed t Distribution). Let $X$ be skewed t distributed, and define

(2.11) $\rho(x) = (x - \mu)' \Sigma^{-1} (x - \mu).$

Then the joint density function of $X$ is given by

(2.12) $f(x) = c\, \frac{K_{\frac{\nu+d}{2}}\!\left(\sqrt{(\nu + \rho(x))\, \gamma'\Sigma^{-1}\gamma}\right)\, e^{(x-\mu)'\Sigma^{-1}\gamma}}{\left(\sqrt{(\nu + \rho(x))\, \gamma'\Sigma^{-1}\gamma}\right)^{-\frac{\nu+d}{2}} \left(1 + \frac{\rho(x)}{\nu}\right)^{\frac{\nu+d}{2}}},$

where the normalizing constant is

$c = \frac{2^{\frac{2-(\nu+d)}{2}}}{\Gamma\!\left(\frac{\nu}{2}\right)(\pi\nu)^{\frac{d}{2}} |\Sigma|^{\frac{1}{2}}}.$
The mean and covariance of a skewed t distributed random vector $X$ are

(2.13) $E(X) = \mu + \gamma\,\frac{\nu}{\nu-2},$

(2.14) $\mathrm{COV}(X) = \frac{\nu}{\nu-2}\Sigma + \gamma\gamma'\,\frac{2\nu^2}{(\nu-2)^2(\nu-4)},$

where the covariance matrix is only defined when $\nu > 4$, and the expectation only when $\nu > 2$. Furthermore, in the limit as $\gamma \to 0$, we get the joint density function of the t distribution:

(2.15) $f(x) = \frac{\Gamma\!\left(\frac{\nu+d}{2}\right)}{\Gamma\!\left(\frac{\nu}{2}\right)(\pi\nu)^{\frac{d}{2}} |\Sigma|^{\frac{1}{2}}}\left(1 + \frac{\rho(x)}{\nu}\right)^{-\frac{\nu+d}{2}}.$

The mean and covariance of a t distributed random vector $X$ are

(2.16) $E(X) = \mu,$

(2.17) $\mathrm{COV}(X) = \frac{\nu}{\nu-2}\Sigma.$
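Because everything above flows from the mixture representation (2.6), it may help to see it in code. The following is a minimal sketch (not the authors' implementation) of sampling from the skewed t distribution by drawing the inverse gamma mixing variable and the Gaussian component separately; the parameter values at the bottom are purely illustrative.

```python
import numpy as np
from scipy.stats import invgamma

def sample_skewed_t(n, nu, mu, Sigma, gamma, seed=None):
    """Draw n samples of X = mu + W*gamma + sqrt(W)*Z, cf. (2.6), with
    W ~ InverseGamma(nu/2, nu/2) independent of Z ~ N_d(0, Sigma)."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    # scipy's invgamma(a, scale=b) has exactly the density (2.1) with alpha=a, beta=b
    W = invgamma.rvs(a=nu / 2.0, scale=nu / 2.0, size=n, random_state=rng)
    Z = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    return mu + W[:, None] * gamma + np.sqrt(W)[:, None] * Z

# illustrative parameters: bivariate, nu = 6, mild negative skewness
X = sample_skewed_t(100_000, nu=6.0, mu=np.zeros(2),
                    Sigma=np.array([[1.0, 0.5], [0.5, 1.0]]),
                    gamma=np.array([-0.2, -0.1]), seed=0)
print(X.mean(axis=0))   # should be close to mu + gamma * nu / (nu - 2), cf. (2.13)
```

Setting the skewness vector to zero in the same sketch produces samples from the multivariate t distribution of Definition 2.3.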
2.2 Calibration of t and Skewed t Distributions Using the EM Algorithm

The mean-variance mixture representation of the skewed t distribution has a great advantage: the so-called EM algorithm can be applied to it. See McNeil et al. (2005) for a general discussion of this algorithm for calibrating generalized hyperbolic distributions.

The EM (Expectation-Maximization) algorithm is a two-step iterative process in which (the E-step) an expected log likelihood function is calculated using current parameter values, and then (the M-step) this function is maximized to produce updated parameter values. After each E and M step, the log likelihood is increased, and the method converges to a maximum likelihood estimate of the distribution parameters. What helps this along is that the skewed t distribution can be represented as a conditional normal distribution, so most of the parameters ($\Sigma$, $\mu$, $\gamma$) can be calibrated, conditional on $W$, like a Gaussian distribution. We give a brief summary of our version of the EM algorithm for the skewed t and t distributions here. Detailed derivations, along with comparisons to other versions, can be found in Hu (2005).

To explain the idea, suppose we have i.i.d. data $x_1, \ldots, x_n \in \mathbb{R}^d$ that we want to fit to a skewed t distribution. We seek parameters $\theta = (\nu, \mu, \Sigma, \gamma)$ that maximize the log likelihood

$\log L(\theta; x_1, x_2, \ldots, x_n) = \sum_{i=1}^n \log f(x_i; \theta),$

where $f(\cdot; \theta)$ denotes the skewed t density function. The method is motivated by the observation that if the latent variables $w_1, \ldots, w_n$ were observable, our optimization would be straightforward. We define the augmented log-likelihood function

$\log L(\theta; x_1, \ldots, x_n, w_1, \ldots, w_n) = \sum_{i=1}^n \log f_{X,W}(x_i, w_i; \theta)$
$= \sum_{i=1}^n \log f_{X \mid W}(x_i \mid w_i; \mu, \Sigma, \gamma) + \sum_{i=1}^n \log h_W(w_i; \nu),$

where $f_{X \mid W}(\cdot \mid w; \mu, \Sigma, \gamma)$ is the conditional normal density $N(\mu + w\gamma, w\Sigma)$ and $h_W(\cdot; \nu)$ is the density of $\mathrm{InverseGamma}(\nu/2, \nu/2)$. These two terms could be maximized separately if the latent variables were observable. Since they are not, the method is instead to maximize the expected value of the augmented log-likelihood $L$ conditional on the data and on a guess for the parameters $\theta$. We must condition on the parameters because the distribution of the latent variables depends on the parameters. This produces an updated guess for the parameters, which we then use to repeat the process until convergence.

To be more explicit, suppose we have a step $k$ parameter estimate $\theta^{[k]}$. We carry out the following steps.

E-step: compute the objective function

$Q(\theta; \theta^{[k]}) = E\left(\log L(\theta; x_1, \ldots, x_n, W_1, \ldots, W_n) \mid x_1, \ldots, x_n; \theta^{[k]}\right).$

This can be done analytically and requires formulas for quantities like $E(W_i \mid x_i, \theta^{[k]})$, $E(1/W_i \mid x_i, \theta^{[k]})$, and $E(\log W_i \mid x_i, \theta^{[k]})$, which can all be explicitly derived from the definitions.

M-step: Maximize $Q$ to find $\theta^{[k+1]}$.

Using our explicit formulas for the skewed t distribution, we can compute the expectation and the subsequent maximizer $\theta$ explicitly. Below we
summarize the resulting formulas needed for directly implementing this algorithm. We use a superscript in square brackets to denote the iteration counter.

Given, at the $k$th step, parameter estimates $\nu^{[k]}$, $\Sigma^{[k]}$, $\mu^{[k]}$, and $\gamma^{[k]}$, let, for $i = 1, \ldots, n$,

$\rho_i^{[k]} = (x_i - \mu^{[k]})' (\Sigma^{[k]})^{-1} (x_i - \mu^{[k]}).$

Define the auxiliary variables $\theta_i^{[k]}$, $\eta_i^{[k]}$, and $\xi_i^{[k]}$ by

(2.18) $\theta_i^{[k]} = \left(\frac{\rho_i^{[k]} + \nu^{[k]}}{\gamma^{[k]\prime}(\Sigma^{[k]})^{-1}\gamma^{[k]}}\right)^{-\frac{1}{2}} \frac{K_{\frac{\nu+d+2}{2}}\!\left(\sqrt{(\rho_i^{[k]} + \nu^{[k]})\,\gamma^{[k]\prime}(\Sigma^{[k]})^{-1}\gamma^{[k]}}\right)}{K_{\frac{\nu+d}{2}}\!\left(\sqrt{(\rho_i^{[k]} + \nu^{[k]})\,\gamma^{[k]\prime}(\Sigma^{[k]})^{-1}\gamma^{[k]}}\right)},$

(2.19) $\eta_i^{[k]} = \left(\frac{\rho_i^{[k]} + \nu^{[k]}}{\gamma^{[k]\prime}(\Sigma^{[k]})^{-1}\gamma^{[k]}}\right)^{\frac{1}{2}} \frac{K_{\frac{\nu+d-2}{2}}\!\left(\sqrt{(\rho_i^{[k]} + \nu^{[k]})\,\gamma^{[k]\prime}(\Sigma^{[k]})^{-1}\gamma^{[k]}}\right)}{K_{\frac{\nu+d}{2}}\!\left(\sqrt{(\rho_i^{[k]} + \nu^{[k]})\,\gamma^{[k]\prime}(\Sigma^{[k]})^{-1}\gamma^{[k]}}\right)},$

(2.20) $\xi_i^{[k]} = \frac{1}{2}\log\!\left(\frac{\rho_i^{[k]} + \nu^{[k]}}{\gamma^{[k]\prime}(\Sigma^{[k]})^{-1}\gamma^{[k]}}\right) - \left.\frac{\partial}{\partial\alpha}\,\frac{K_{\frac{\nu+d}{2}+\alpha}\!\left(\sqrt{(\rho_i^{[k]} + \nu^{[k]})\,\gamma^{[k]\prime}(\Sigma^{[k]})^{-1}\gamma^{[k]}}\right)}{K_{\frac{\nu+d}{2}}\!\left(\sqrt{(\rho_i^{[k]} + \nu^{[k]})\,\gamma^{[k]\prime}(\Sigma^{[k]})^{-1}\gamma^{[k]}}\right)}\right|_{\alpha=0}.$

These are the conditional expectations $\theta_i^{[k]} = E(W_i^{-1} \mid x_i; \theta^{[k]})$, $\eta_i^{[k]} = E(W_i \mid x_i; \theta^{[k]})$, and $\xi_i^{[k]} = E(\log W_i \mid x_i; \theta^{[k]})$ needed in the E-step.

In the special case of the multivariate t distribution, these formulas take simpler forms:

(2.21) $\theta_i^{[k]} = \frac{\nu^{[k]} + d}{\nu^{[k]} + \rho_i^{[k]}},$
(2.22) $\eta_i^{[k]} = \frac{\rho_i^{[k]} + \nu^{[k]}}{\nu^{[k]} + d - 2},$

(2.23) $\xi_i^{[k]} = \log\!\left(\frac{\rho_i^{[k]} + \nu^{[k]}}{2}\right) - \psi\!\left(\frac{d + \nu^{[k]}}{2}\right).$

Let us denote

(2.24) $\bar{\theta} = \frac{1}{n}\sum_{i=1}^n \theta_i, \qquad \bar{\eta} = \frac{1}{n}\sum_{i=1}^n \eta_i, \qquad \bar{\xi} = \frac{1}{n}\sum_{i=1}^n \xi_i.$

Algorithm 2.5 (EM algorithm for calibrating the t and skewed t distributions).

1. Set the iteration counter $k = 1$. Select starting values for $\nu^{[1]}$, $\gamma^{[1]}$, $\mu^{[1]}$ and $\Sigma^{[1]}$. Reasonable starting values for the mean and dispersion matrix are the sample mean and sample covariance matrix.

2. Calculate $\theta_i^{[k]}$, $\eta_i^{[k]}$, and $\xi_i^{[k]}$ and their averages $\bar{\theta}^{[k]}$, $\bar{\eta}^{[k]}$ and $\bar{\xi}^{[k]}$.

3. Update $\gamma$, $\mu$ and $\Sigma$ according to

(2.25) $\gamma^{[k+1]} = \frac{n^{-1}\sum_{i=1}^n \theta_i^{[k]}(\bar{x} - x_i)}{\bar{\theta}^{[k]}\bar{\eta}^{[k]} - 1},$

(2.26) $\mu^{[k+1]} = \frac{n^{-1}\sum_{i=1}^n \theta_i^{[k]} x_i - \gamma^{[k+1]}}{\bar{\theta}^{[k]}},$

(2.27) $\Sigma^{[k+1]} = \frac{1}{n}\sum_{i=1}^n \theta_i^{[k]}(x_i - \mu^{[k+1]})(x_i - \mu^{[k+1]})' - \bar{\eta}^{[k]}\gamma^{[k+1]}\gamma^{[k+1]\prime}.$

4. Compute $\nu^{[k+1]}$ by numerically solving the equation

(2.28) $-\psi\!\left(\frac{\nu}{2}\right) + \log\!\left(\frac{\nu}{2}\right) + 1 - \bar{\xi}^{[k]} - \bar{\theta}^{[k]} = 0.$

5. Set the counter $k := k+1$ and go back to step 2 unless the relative increment of the log likelihood is small, in which case we terminate the iteration.
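As an illustration of Algorithm 2.5, here is a minimal sketch of the EM iteration for the plain t distribution (the special case $\gamma = 0$, using the simplified formulas (2.21) and (2.23) and the updates (2.26)-(2.28)). It is not the authors' code; the starting value of $\nu$, the grid bounds for the $\nu$ update, and the convergence tolerance are arbitrary choices.

```python
import numpy as np
from scipy.special import digamma, gammaln
from scipy.optimize import brentq

def t_loglik(x, nu, mu, Sigma):
    """Log likelihood under the multivariate t density (2.15)."""
    n, d = x.shape
    diff = x - mu
    rho = np.einsum('ij,ij->i', diff @ np.linalg.inv(Sigma), diff)
    _, logdet = np.linalg.slogdet(Sigma)
    c = gammaln((nu + d) / 2) - gammaln(nu / 2) - 0.5 * d * np.log(np.pi * nu) - 0.5 * logdet
    return np.sum(c - 0.5 * (nu + d) * np.log1p(rho / nu))

def fit_t_em(x, nu0=8.0, tol=1e-8, max_iter=1000):
    n, d = x.shape
    mu, Sigma, nu = x.mean(axis=0), np.cov(x, rowvar=False), nu0
    old = -np.inf
    for _ in range(max_iter):
        # E-step: rho_i, then theta_i = E(1/W_i | x_i) and xi_i = E(log W_i | x_i)
        diff = x - mu
        rho = np.einsum('ij,ij->i', diff @ np.linalg.inv(Sigma), diff)
        theta = (nu + d) / (nu + rho)                                  # (2.21)
        xi = np.log((rho + nu) / 2.0) - digamma((d + nu) / 2.0)        # (2.23)
        # M-step: closed-form mu and Sigma updates ((2.26)-(2.27) with gamma = 0)
        mu = (theta[:, None] * x).sum(axis=0) / theta.sum()
        diff = x - mu
        Sigma = (theta[:, None, None] * diff[:, :, None] * diff[:, None, :]).mean(axis=0)
        # nu update: solve -psi(nu/2) + log(nu/2) + 1 - xi_bar - theta_bar = 0, eq. (2.28)
        f = lambda v: -digamma(v / 2.0) + np.log(v / 2.0) + 1.0 - xi.mean() - theta.mean()
        if f(2.01) < 0:          # root below 2: extremely heavy tails
            nu = 2.01
        elif f(500.0) > 0:       # root above 500: essentially Gaussian
            nu = 500.0
        else:
            nu = brentq(f, 2.01, 500.0)
        # stop when the relative increment of the log likelihood is small (step 5)
        ll = t_loglik(x, nu, mu, Sigma)
        if abs(ll - old) < tol * (abs(old) + 1.0):
            break
        old = ll
    return nu, mu, Sigma
```

For the skewed t case the E-step quantities (2.18)-(2.20) involve Bessel functions (scipy.special.kv) and a numerical derivative with respect to the order for $\xi_i$, but the structure of the iteration is identical.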
The result of this algorithm is an estimate of the maximum likelihood parameter values for the given data.

3 Copulas

Copulas are used to describe the dependence structure of a multivariate distribution. A good general reference is Nelsen (1999). One of the definitions can be found in Li (1999), the first to use copulas to price portfolio credit risk.

Definition 3.1 (Copula Functions). $U$ is a uniform random variable if it has a uniform distribution on the interval $[0, 1]$. For $d$ uniform random variables $U_1, U_2, \ldots, U_d$, the joint distribution function $C$, defined as

$C(u_1, u_2, \ldots, u_d) = P[U_1 \le u_1, U_2 \le u_2, \ldots, U_d \le u_d],$

is called a copula function.

Proposition 3.2 (Sklar's Theorem). Let $F$ be a joint distribution function with margins $F_1, F_2, \ldots, F_d$. Then there exists a copula $C$ such that for all $(x_1, x_2, \ldots, x_d) \in \mathbb{R}^d$,

(3.1) $F(x_1, x_2, \ldots, x_d) = C(F_1(x_1), F_2(x_2), \ldots, F_d(x_d)).$

If $F_1, F_2, \ldots, F_d$ are continuous, then $C$ is unique. Conversely, if $C$ is a copula and $F_1, F_2, \ldots, F_d$ are distribution functions, then the function $F$ defined by equation (3.1) is a joint distribution function with margins $F_1, F_2, \ldots, F_d$.
Corollary 3.3. If $F_1, F_2, \ldots, F_m$ are continuous, then, for any $(u_1, \ldots, u_m) \in [0,1]^m$, we have

(3.2) $C(u_1, u_2, \ldots, u_m) = F(F_1^{-1}(u_1), F_2^{-1}(u_2), \ldots, F_m^{-1}(u_m)),$

where $F_i^{-1}(u_i)$ denotes the inverse of the cumulative distribution function, namely, for $u_i \in [0,1]$, $F_i^{-1}(u_i) = \inf\{x : F_i(x) \ge u_i\}$.

The name copula means a function that couples a joint distribution function to its marginal distributions. If $X_1, X_2, \ldots, X_d$ are random variables with distributions $F_1, F_2, \ldots, F_d$, respectively, and a joint distribution $F$, then the corresponding copula $C$ is also called the copula of $X_1, X_2, \ldots, X_d$, and $(U_1, U_2, \ldots, U_d) = (F_1(X_1), F_2(X_2), \ldots, F_d(X_d))$ also has copula $C$. We will use this property to price basket credit default swaps later.

We often assume the marginal distributions to be empirical distributions. Suppose that the sample data is $x_i = (x_{i,1}, \ldots, x_{i,d})$, where $i = 1, \ldots, n$; then we may take the empirical estimator of the $j$th marginal distribution function to be

(3.3) $\hat{F}_j(x) = \frac{\sum_{i=1}^n I\{x_{i,j} \le x\}}{n+1}.$

(Demarta and McNeil (2005) suggested dividing by $n+1$ to keep the estimate away from the boundary 1.) By using different copulas and empirical or other margins, we can create a rich family of multivariate distributions. It is not required that the margins and joint distribution be the same type of distribution.
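A short sketch of the empirical-margin construction (3.3): the division by $n+1$ keeps the pseudo-uniforms strictly inside $(0,1)$.

```python
import numpy as np
from scipy.stats import rankdata

def pseudo_uniforms(x):
    """Map each column of the (n, d) sample x to (0, 1) via its empirical CDF, cf. (3.3)."""
    n = x.shape[0]
    return np.apply_along_axis(rankdata, 0, x) / (n + 1.0)
```

These pseudo-uniforms are exactly the inputs needed when a copula is calibrated separately from the margins, as in the copula approach of Section 7.4 below.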
Two types of copulas are widely used: Archimedean copulas and elliptical copulas. Archimedean copulas form a rich family of examples of bivariate copulas, including the well-known Frank, Gumbel and Clayton copulas. These have only one parameter and are easy to calibrate. However, the usefulness of Archimedean copulas of more than two variables is quite limited: they have only one or two parameters, and enforce a lot of symmetry in the dependence structure, such as bivariate exchangeability, that is unrealistic for a portfolio of heterogeneous firms.

Therefore we now restrict attention to the elliptical copulas, which are created from multivariate elliptical distributions, such as the Gaussian and t distributions, and their immediate generalizations, such as the skewed t copula.

Definition 3.4 (Multivariate Gaussian Copula). Let $R$ be a positive semidefinite matrix with $\mathrm{diag}(R) = 1$ and let $\Phi_R$ be the standardized multivariate normal distribution function with correlation matrix $R$. Then the multivariate Gaussian copula is defined as

(3.4) $C(u_1, u_2, \ldots, u_m; R) = \Phi_R(\Phi^{-1}(u_1), \Phi^{-1}(u_2), \ldots, \Phi^{-1}(u_m)),$

where $\Phi^{-1}(u)$ denotes the inverse of the standard univariate normal cumulative distribution function.

Definition 3.5 (Multivariate t Copula). Let $R$ be a positive semidefinite matrix with $\mathrm{diag}(R) = 1$ and let $T_{R,\nu}$ be the standardized multivariate t
distribution function with correlation matrix $R$ and $\nu$ degrees of freedom. Then the multivariate t copula is defined as

(3.5) $C(u_1, u_2, \ldots, u_m; R, \nu) = T_{R,\nu}(T_\nu^{-1}(u_1), T_\nu^{-1}(u_2), \ldots, T_\nu^{-1}(u_m)),$

where $T_\nu^{-1}(u)$ denotes the inverse of the standard univariate t cumulative distribution function.

4 Measures of Dependence

All dependence information is contained in the copula of a distribution. However, it is helpful to have real-valued measures of the dependence of two variables. The most familiar example of this is Pearson's linear correlation coefficient; however, it does not have the nice properties we will see below.

4.1 Rank Correlation

Definition 4.1 (Kendall's Tau). Kendall's tau rank correlation for the bivariate random vector $(X, Y)$ is defined as

(4.1) $\tau(X, Y) = P\big((X - \hat{X})(Y - \hat{Y}) > 0\big) - P\big((X - \hat{X})(Y - \hat{Y}) < 0\big),$

where $(\hat{X}, \hat{Y})$ is an independent copy of $(X, Y)$.

As suggested by Meneguzzo and Vecchiato (2002), a consistent sample estimator of Kendall's tau is given by

(4.2) $\hat{\tau} = \frac{\sum_{1 \le i < j \le n} \mathrm{sign}\big[(x_i - x_j)(y_i - y_j)\big]}{n(n-1)/2},$

where $\mathrm{sign}(x) = 1$ if $x > 0$, $-1$ if $x < 0$, and $0$ if $x = 0$, and $n$ is the number of observations.
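A direct implementation of the estimator (4.2), checked against scipy's built-in version on simulated data (the Gaussian sample below is purely illustrative, not the data used later in the paper).

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_tau_hat(x, y):
    """Sample Kendall's tau as in (4.2): signed concordance over all pairs i < j."""
    x, y = np.asarray(x), np.asarray(y)
    i, j = np.triu_indices(len(x), k=1)
    concordance = np.sign((x[i] - x[j]) * (y[i] - y[j]))
    return concordance.sum() / (len(x) * (len(x) - 1) / 2.0)

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=2000)
print(kendall_tau_hat(z[:, 0], z[:, 1]), kendalltau(z[:, 0], z[:, 1])[0])  # should agree
```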
In the case of elliptical distributions, Lindskog et al. (2003) showed that

(4.3) $\tau(X, Y) = \frac{2}{\pi}\arcsin(\rho),$

where $\rho$ is Pearson's linear correlation coefficient between the random variables $X$ and $Y$. However, Kendall's tau is more useful in discussions of dependence structure because it depends in general only on the copula of $(X, Y)$ (Nelsen, 1999):

(4.4) $\tau(X, Y) = 4\int_{[0,1]^2} C(u, v)\, dC(u, v) - 1.$

It has nothing to do with the marginal distributions. Sometimes we may need the following formula:

(4.5) $\tau(X, Y) = 1 - 4\int_{[0,1]^2} C_u(u, v)\, C_v(u, v)\, du\, dv,$

where $C_u$ denotes the partial derivative of $C(u, v)$ with respect to $u$ and $C_v$ denotes the partial derivative of $C(u, v)$ with respect to $v$.

Proposition 4.2 (Copula of Transformations; Nelsen, 1999). Let $X$ and $Y$ be continuous random variables with copula $C_{XY}$. If both $\alpha(x)$ and $\beta(y)$ are strictly increasing on $\mathrm{Ran}X$ and $\mathrm{Ran}Y$ respectively, then $C_{\alpha(X)\beta(Y)} = C_{XY}$. If both $\alpha(x)$ and $\beta(y)$ are strictly decreasing on $\mathrm{Ran}X$ and $\mathrm{Ran}Y$ respectively, then $C_{\alpha(X)\beta(Y)}(u, v) = u + v - 1 + C_{XY}(1-u, 1-v)$.
Corollary 4.3 (Invariance of Kendall's Tau Under Monotone Transformations). Let $X$ and $Y$ be continuous random variables with copula $C_{XY}$. If both $\alpha(x)$ and $\beta(y)$ are strictly increasing, or both strictly decreasing, on $\mathrm{Ran}X$ and $\mathrm{Ran}Y$ respectively, then $\tau_{\alpha(X)\beta(Y)} = \tau_{XY}$.

Proof: We just need to show the second part. If both $\alpha(x)$ and $\beta(y)$ are strictly decreasing, then $C_{\alpha(X)\beta(Y)}(u, v) = u + v - 1 + C_{XY}(1-u, 1-v)$. From equation (4.5), we have

$\tau_{\alpha(X)\beta(Y)} = 1 - 4\int_{[0,1]^2}\big(1 - C_1(1-u, 1-v)\big)\big(1 - C_2(1-u, 1-v)\big)\, du\, dv,$

where $C_i$ denotes the partial derivative with respect to the $i$th variable, to avoid confusion. Replacing $1-u$ by $x$ and $1-v$ by $y$, we have

$\tau_{\alpha(X)\beta(Y)} = 1 - 4\int_{[0,1]^2}\big(1 - C_1(x, y)\big)\big(1 - C_2(x, y)\big)\, dx\, dy.$

Since

$\int_{[0,1]^2} C_1(x, y)\, dx\, dy = \int_{[0,1]} y\, dy = 0.5,$

and similarly for $C_2$, expanding the product and applying (4.5) again gives $\tau_{\alpha(X)\beta(Y)} = \tau_{XY}$.

These results are the foundation of the modeling of default correlation in the pricing of portfolio credit risk. From now on, when we talk about correlation, we will mean Kendall's tau rank correlation.

4.2 Tail Dependence

Corresponding to the heavy tail property of univariate distributions, tail dependence is used to model the co-occurrence of extreme events. For credit
risk, this is the phenomenon of default contagion. Realistic portfolio credit risk models should exhibit positive tail dependence, as defined next.

Definition 4.4 (Tail Dependence Coefficient (TDC)). Let $(X_1, X_2)$ be a bivariate vector of continuous random variables with marginal distribution functions $F_1$ and $F_2$. The coefficients of upper tail dependence $\lambda_U$ and lower tail dependence $\lambda_L$ are given, respectively, by

(4.6) $\lambda_U = \lim_{u \to 1} P\big[X_2 > F_2^{-1}(u) \mid X_1 > F_1^{-1}(u)\big],$

(4.7) $\lambda_L = \lim_{u \to 0} P\big[X_2 \le F_2^{-1}(u) \mid X_1 \le F_1^{-1}(u)\big].$

If $\lambda_U > 0$, then the two random variables $(X_1, X_2)$ are said to be asymptotically dependent in the upper tail; if $\lambda_U = 0$, they are asymptotically independent in the upper tail. Similarly for $\lambda_L$ and the lower tail.

Joe (1997) gave the copula version of the TDC:

(4.8) $\lambda_U = \lim_{u \to 1} \frac{1 - 2u + C(u, u)}{1 - u},$

(4.9) $\lambda_L = \lim_{u \to 0} \frac{C(u, u)}{u}.$

For elliptical copulas, $\lambda_U = \lambda_L$, denoted simply by $\lambda$. Embrechts et al. (2001) showed that for a Gaussian copula $\lambda = 0$, and for a t copula,

(4.10) $\lambda = 2 - 2\, t_{\nu+1}\!\left(\sqrt{\frac{(\nu+1)(1-\rho)}{1+\rho}}\right),$

where $\rho$ is the Pearson correlation coefficient. We can see that $\lambda$ is an increasing function of $\rho$ and a decreasing function of the degree of freedom $\nu$.
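A two-line sketch of the tail dependence coefficient (4.10) of the t copula; the $(\nu, \rho)$ values are illustrative.

```python
import numpy as np
from scipy.stats import t as student_t

def t_copula_tdc(nu, rho):
    """Tail dependence coefficient of the t copula, eq. (4.10)."""
    return 2.0 - 2.0 * student_t.cdf(np.sqrt((nu + 1.0) * (1.0 - rho) / (1.0 + rho)), df=nu + 1.0)

print(t_copula_tdc(4.0, 0.5), t_copula_tdc(25.0, 0.5))   # lambda shrinks as nu grows
```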
The t copula is a tail dependent copula. We can see the difference in tail dependence between Gaussian copulas and t copulas in Figure 1.

FIGURE 1 ABOUT HERE

5 Single Name Credit Risk

Before looking at the dependence structure of defaults for a portfolio, we first review the so-called reduced form approach to single firm credit risk, sometimes called stochastic intensity modeling. We follow the approach of Rutkowski (1999).

5.1 Defaultable Bond Pricing

Suppose that $\tau$ is the default time of a firm. Let $H_t = I_{\{\tau \le t\}}$, and let $\mathcal{H}_t = \sigma(H_s : s \le t)$ denote the default time information filtration. We denote by $F$ the right-continuous cumulative distribution function of $\tau$, i.e., $F(t) = P(\tau \le t)$.

Definition 5.1 (Hazard Function). The function $\Gamma : \mathbb{R}_+ \to \mathbb{R}_+$ given by

(5.1) $\Gamma(t) = -\log(1 - F(t)), \qquad t \in \mathbb{R}_+,$

is called the hazard function. If $F$ is absolutely continuous, i.e., $F(t) = \int_0^t f(u)\, du$, where $f$ is the probability density function of $\tau$, then so is $\Gamma(t)$, and we define the intensity function $\lambda(t) = \Gamma'(t)$.
It is easy to check that

(5.2) $F(t) = 1 - e^{-\int_0^t \lambda(u)\, du},$

and

(5.3) $f(t) = \lambda(t) S(t),$

where $S(t) = 1 - F(t) = e^{-\int_0^t \lambda(u)\, du}$ is called the survival function.

For simplicity, we suppose the risk-free short interest rate $r(t)$ is a nonnegative deterministic function, so that the price at time $t$ of a unit default-free zero coupon bond with maturity $T$ equals $B(t, T) = e^{-\int_t^T r(u)\, du}$.

Suppose now we have a defaultable zero-coupon bond that pays $c$ at maturity $T$ if there is no default, or pays a recovery amount $h(\tau)$ if there is a default at time $\tau < T$. The time-$t$ present value of the bond's payoff is therefore

$Y_t = I_{\{t < \tau \le T\}}\, h(\tau)\, e^{-\int_t^\tau r(u)\, du} + I_{\{\tau > T\}}\, c\, e^{-\int_t^T r(u)\, du}.$

When the only information is contained in the default filtration $\mathcal{H}_t$, we have the following pricing formula.

Proposition 5.2 (Rutkowski, 1999). Assume that $t \le T$, and $Y_t$ is defined as above. If $\Gamma(t)$ is absolutely continuous, then

(5.4) $E(Y_t \mid \mathcal{H}_t) = I_{\{\tau > t\}}\left(\int_t^T h(u)\lambda(u)\, e^{-\int_t^u \hat{r}(v)\, dv}\, du + c\, e^{-\int_t^T \hat{r}(u)\, du}\right),$

where $\hat{r}(v) = r(v) + \lambda(v)$.
The first term is the price of the default payment; the second is the price of the survival payment. Note that in the first term we have used equation (5.3) to express the probability density function of $\tau$. In the case of zero recovery, the formula tells us that a defaultable bond can be valued as if it were default free by replacing the interest rate with the sum of the interest rate and the default intensity, which can be interpreted as a credit spread. We use this proposition to price basket credit default swaps.

5.2 Credit Default Swaps

A credit default swap (CDS) is a contract that provides insurance against the risk of default of a particular company. The buyer of a CDS contract obtains the right to sell a particular bond issued by the company for its par value once a default occurs. The buyer makes periodic payments to the seller, at times $t_1, \ldots, t_n$, as a fraction $q$ of the nominal value $M$, until the maturity of the contract $T = t_n$ or until a default at time $\tau < T$ occurs. If a default occurs, the buyer still needs to pay the payment accrued from the last payment time to the default time. There are $1/\theta$ payments a year (for semiannual payments, $\theta = 1/2$), and every payment is $\theta q M$.

5.3 Valuation of Credit Default Swaps

Set the current time $t_0 = 0$. Let us suppose the only information available is the default information, interest rates are deterministic, the recovery rate $R$ is a constant, and the expectation operator $E(\cdot)$ is taken relative to a risk neutral
measure. We use Proposition 5.2 to get the premium leg $PL$, the accrued payment $AP$, and the default leg $DL$. $PL$ is the present value of the periodic payments and $AP$ is the present value of the amount accrued from the last payment date to the default time. The default leg $DL$ is the present value of the net gain to the buyer in case of default. We have

(5.5) $PL = M\theta q \sum_{i=1}^n E\big(B(0, t_i) I\{\tau > t_i\}\big) = M\theta q \sum_{i=1}^n B(0, t_i)\, e^{-\int_0^{t_i} \lambda(u)\, du},$

(5.6) $AP = M\theta q \sum_{i=1}^n E\left(\frac{\tau - t_{i-1}}{t_i - t_{i-1}}\, B(0, \tau) I\{t_{i-1} < \tau \le t_i\}\right) = M\theta q \sum_{i=1}^n \int_{t_{i-1}}^{t_i} \frac{u - t_{i-1}}{t_i - t_{i-1}}\, B(0, u)\lambda(u)\, e^{-\int_0^u \lambda(s)\, ds}\, du,$

(5.7) $DL = M(1-R)\, E\big(B(0, \tau) I\{\tau \le T\}\big) = M(1-R)\int_0^T B(0, u)\lambda(u)\, e^{-\int_0^u \lambda(s)\, ds}\, du.$

The spread price $q^*$ is the value of $q$ such that the value of the credit default swap is zero:

(5.8) $PL(q^*) + AP(q^*) = DL.$
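The following is a minimal numerical sketch of (5.5)-(5.8) under the simplifications used below (piecewise-constant intensity, constant risk-free rate, constant recovery); it is not the authors' implementation, and the integrals are approximated on a fine grid rather than evaluated in closed form. The intensity values in the example are hypothetical.

```python
import numpy as np

def survival(t, knots, c):
    """S(t) = exp(-int_0^t lambda(u) du) for lambda(u) = c[i] on (knots[i], knots[i+1]]."""
    t = np.atleast_1d(t).astype(float)
    cumhaz_at_knots = np.concatenate(([0.0], np.cumsum(np.diff(knots) * c)))
    idx = np.clip(np.searchsorted(knots, t, side='right') - 1, 0, len(c) - 1)
    return np.exp(-(cumhaz_at_knots[idx] + c[idx] * (t - knots[idx])))

def par_spread(T, theta, r, R, knots, c, m=20000):
    """Solve PL(q*) + AP(q*) = DL, i.e. eq. (5.8), for the spread q* per unit notional."""
    pay = np.arange(theta, T + 1e-9, theta)
    disc = np.exp(-r * pay)
    annuity = theta * np.sum(disc * survival(pay, knots, c))          # PL / (M q), eq. (5.5)
    # default "density" B(0,u) lambda(u) S(u), integrated by a midpoint rule
    u = (np.arange(m) + 0.5) * T / m
    du = T / m
    idx = np.clip(np.searchsorted(knots, u, side='right') - 1, 0, len(c) - 1)
    dens = np.exp(-r * u) * c[idx] * survival(u, knots, c)
    prev_pay = theta * (np.ceil(u / theta) - 1.0)                     # last payment date before u
    accrual = np.sum((u - prev_pay) * dens) * du                      # AP / (M q), eq. (5.6)
    default_leg = (1.0 - R) * np.sum(dens) * du                       # DL / M, eq. (5.7)
    return default_leg / (annuity + accrual)

# illustrative 5-year CDS: hypothetical step intensities, r = 4.5%, R = 0.4, semiannual
knots = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
c = np.array([0.010, 0.012, 0.014, 0.016, 0.018])
print(par_spread(T=5.0, theta=0.5, r=0.045, R=0.4, knots=knots, c=c))
```

Solving (5.8) reduces to a single division here because both $PL$ and $AP$ are linear in $q$.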
5.4 Calibration of Default Intensity: Illustration

As Hull (2002) points out, the credit default swap market is so liquid that we can use credit default swap spread data to calibrate the default intensity using equation (5.8). In Table 1, we have credit default swap spread prices for five companies on 07/02/2004 from GFI. The spread price is quoted in basis points; it is the annualized payment made by the buyer of the CDS per dollar of nominal value. The mid price is the average of the bid price and the ask price.

TABLE 1 ABOUT HERE

We denote the maturities of the CDS contracts as $(T_1, \ldots, T_5) = (1, 2, 3, 4, 5)$. It is usually assumed that the default intensity is a step function with step size of 1 year, expressed in the following form (where $T_0 = 0$):

(5.9) $\lambda(t) = \sum_{i=1}^5 c_i\, I_{(T_{i-1}, T_i]}(t).$

We can get $c_1$ by using the 1-year CDS spread price first. Knowing $c_1$, we can estimate $c_2$ using the 2-year CDS spread price. Following this procedure, we can estimate all the constants $c_i$ of the default intensity. In our calibration, we assume a recovery rate $R$ of 0.4, a constant risk-free interest rate of 0.045, and semiannual payments ($\theta = 1/2$). In this setting, we can compute $PL$, $AP$ and $DL$ explicitly. The calibrated default intensities are shown in Table 2.
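A sketch of the bootstrap just described, reusing par_spread from the previous snippet. Since the quotes in Table 1 are not reproduced here, the spreads below are hypothetical placeholders rather than the values used in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def bootstrap_intensity(quotes, theta=0.5, r=0.045, R=0.4):
    """quotes: par spreads (as decimals) for maturities 1, ..., len(quotes) years."""
    knots = np.arange(0.0, len(quotes) + 1.0)
    c = np.zeros(len(quotes))
    for i, q in enumerate(quotes):
        def mismatch(ci):
            c[i] = ci                          # trial value for the i-th step
            return par_spread(T=i + 1.0, theta=theta, r=r, R=R,
                              knots=knots[:i + 2], c=c[:i + 1]) - q
        # earlier steps stay fixed; the bracket is chosen for these illustrative inputs
        c[i] = brentq(mismatch, 1e-8, 5.0)
    return c

# hypothetical upward-sloping quotes: 60, 70, 80, 90, 100 basis points
print(bootstrap_intensity([0.006, 0.007, 0.008, 0.009, 0.010]))
```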
TABLE 2 ABOUT HERE

6 Portfolio Credit Risk

6.1 Setup

Our setup for portfolio credit risk is to use default trigger variables for the survival functions (Schönbucher and Schubert, 2001), as a means of introducing default dependencies through a specified copula. Suppose we are standing at time $t = 0$.

Model Setup and Assumptions. Suppose there are $d$ firms. For each obligor $1 \le i \le d$, we define:

1. The default intensity $\lambda_i(t)$: a deterministic function, usually assumed to be a step function.

2. The survival function $S_i(t)$:

(6.1) $S_i(t) := \exp\left(-\int_0^t \lambda_i(u)\, du\right).$

3. The default trigger variables $U_i$: uniform random variables on $[0, 1]$. The $d$-dimensional vector $U = (U_1, U_2, \ldots, U_d)$ is distributed according to the $d$-dimensional copula $C$ (see Definition 3.1).

4. The time of default $\tau_i$ of obligor $i$, where $i = 1, \ldots, d$:

(6.2) $\tau_i := \inf\{t : S_i(t) \le 1 - U_i\}.$
The copula $C$ of $U$ is also called the survival copula of $1 - U$. (See Georges et al. (2001) for more details about survival copulas.) From equation (6.2), we can see that the default time $\tau_i$ is an increasing function of the uniform random variable $U_i$, so the Kendall's tau rank correlation between default times is the same as the Kendall's tau between the uniform random variables, and the copula of $\tau$ equals the copula of $U$. Equivalently, the copula of $1 - U$ is the survival copula of $\tau$.

Define the default function $F_i(t) = 1 - S_i(t)$.

Theorem 6.1 (Joint Default Probabilities). The joint default probabilities of $(\tau_1, \tau_2, \ldots, \tau_d)$ are given by

(6.3) $P[\tau_1 \le T_1, \tau_2 \le T_2, \ldots, \tau_d \le T_d] = C(F_1(T_1), \ldots, F_d(T_d)).$

Proof: From the definition of default in equation (6.2), we have

$P[\tau_1 \le T_1, \ldots, \tau_d \le T_d] = P[1 - U_1 \ge S_1(T_1), \ldots, 1 - U_d \ge S_d(T_d)] = P[U_1 \le F_1(T_1), \ldots, U_d \le F_d(T_d)].$

By the definition of the copula $C$ of $U$ we have

$P[\tau_1 \le T_1, \ldots, \tau_d \le T_d] = C(F_1(T_1), \ldots, F_d(T_d)).$
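A small sketch of the trigger construction (6.2) for a piecewise-constant intensity: the cumulative hazard is piecewise linear, so $\tau = \inf\{t : S(t) \le 1 - U\}$ can be inverted in closed form. Uniforms implying default beyond the last knot are returned as infinity, meaning no default before the modeled horizon; the intensities are assumed strictly positive.

```python
import numpy as np

def default_time(U, knots, c):
    """Map uniforms U in (0,1) to default times for intensity c[i] on (knots[i], knots[i+1]]."""
    U = np.atleast_1d(U).astype(float)
    target = -np.log1p(-U)                       # need Lambda(tau) = -log(1 - U)
    cumhaz = np.concatenate(([0.0], np.cumsum(np.diff(knots) * c)))
    idx = np.searchsorted(cumhaz, target, side='right') - 1
    tau = np.full(U.shape, np.inf)
    inside = idx < len(c)                        # default occurs before the last knot
    j = idx[inside]
    tau[inside] = knots[j] + (target[inside] - cumhaz[j]) / c[j]
    return tau

# example: flat 5% intensity over 5 years; U = 0.9 implies no default before the horizon
print(default_time(np.array([0.1, 0.9]), np.arange(6.0), np.full(5, 0.05)))
```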
6.2 Calibration

In the preceding setup, two kinds of quantities need to be calibrated: the default intensities $\lambda_i(t)$ and the default time copula $C$. Calibration of the default intensities can be accomplished using the single name credit default spreads visible in the market, as described below in Section 7. However, calibration of the default time copula $C$ is difficult. Indeed, it is a central and fundamental problem of portfolio credit risk modeling to properly calibrate the correlations of default times.

The trouble is that data is scarce: for example, a given basket of blue chips may not have any defaults at all in recent history. On the other hand, calibration using market prices of basket CDS is hampered by the lack of a liquid market with observable prices. Even if frequently traded basket CDS prices were observable, we would need many different basket combinations in order to extract full correlation information among all the names.

Therefore in the modeling process we need to choose some way of proxying the required data. McNeil et al. (2005) report that asset price correlations are commonly used as a proxy for default time correlations. This is also the approach taken by Cherubini et al. (2004), who remark that it is consistent with most market practice. From the perspective of Merton-style value threshold models of default, it makes sense to use firm value correlations, since downward value co-movements will be associated with co-defaults. However, firm values are frequently not available, so asset prices can be used instead, even if, as Schönbucher (2003) points out, liquidity effects may lead to higher correlations for asset prices than for firm values.

Another way to simplify this calibration problem is to restrict to a family
of copulas with only a small number of parameters, such as the Archimedean copulas. Because this introduces too much symmetry among the assets, we choose instead to use asset price correlations as a proxy for default time correlations in this paper. This specific choice does not affect our conclusions, which apply to calibrating the copula of any asset-specific data set chosen to represent default time dependence.

A good choice of copula family for calibration is the t copula, because it naturally incorporates default contagion through tail dependence, which is not present in the Gaussian copula. An even better choice is the skewed t copula, for which the upper and lower tail dependence need not be equal.

When applying this copula approach, a direct calibration of the t copula or skewed t copula is time-consuming because there is no fast method of finding the degree of freedom $\nu$ except by looping. Instead, we will show that it is much faster to find the copula by calibrating the full multivariate distribution and extracting the implied copula, as in equation (3.5). This may seem counterintuitive, since the full distribution also contains the marginals as well as the dependence structure. However, for calibrating the full distribution function we have at our disposal the fast EM algorithm; we know of no corresponding algorithm for the copula alone. Moreover, we will see that the marginals are needed anyway to construct uniform variates. If they are not provided as a by-product of calibrating the full distribution, they need to be separately estimated.
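To make the last point concrete, here is a sketch (ours, not the paper's code) of how a calibrated multivariate t distribution hands us both pieces at once: its margins are univariate t distributions with location $\mu_j$ and scale $\sqrt{\Sigma_{jj}}$, so applying the univariate t CDF coordinate-wise produces uniforms whose copula is exactly the t copula of (3.5).

```python
import numpy as np
from scipy.stats import t as student_t

def implied_t_copula_uniforms(x, nu, mu, Sigma):
    """Transform (n, d) data x to uniforms under a fitted t_d(nu, mu, Sigma) distribution."""
    scale = np.sqrt(np.diag(Sigma))
    return student_t.cdf((x - mu) / scale, df=nu)
```

The same transform applied to simulated draws from the fitted distribution (for instance from the sampling sketch of Section 2 with the skewness set to zero) generates the copula samples $U$ needed for the trigger-variable method above.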
7 Pricing of Basket Credit Default Swaps: Elliptical Copulas vs. the Skewed t Distribution

7.1 Basket CDS contracts

We now address the problem of basket CDS pricing. For ease of illustration we will look at a 5-year basket CDS, where the basket contains the five firms used in Section 5.4; other maturities and basket sizes are treated in the same way. All the settings are the same as for the single name CDS except that the default event is triggered by the $k$-th default in the basket, where $k$ is the seniority level of this structure, specified in the contract. The seller of the basket CDS will face the default payment upon the $k$-th default, and the buyer will pay the spread price until the $k$-th default or until maturity $T$.

Let $(\tau^1, \ldots, \tau^5)$ denote the ordered default times. The premium leg, accrued payment, and default leg are

(7.1) $PL = M\theta q \sum_{i=1}^n E\big(B(0, t_i) I\{\tau^k > t_i\}\big),$

(7.2) $AP = M\theta q \sum_{i=1}^n E\left(\frac{\tau^k - t_{i-1}}{t_i - t_{i-1}}\, B(0, \tau^k) I\{t_{i-1} < \tau^k \le t_i\}\right),$

(7.3) $DL = M(1-R)\, E\big(B(0, \tau^k) I\{\tau^k \le T\}\big).$
The spread price $q^*$ is the value of $q$ such that the value of the basket credit default swap is zero, i.e.,

(7.4) $PL(q^*) + AP(q^*) = DL.$

7.2 Pricing Method

To solve this equation, we need the distribution of $\tau^k$, the time of the $k$-th default in the basket, so that we can evaluate the foregoing expectations. To do this, we need all the preceding tools of this paper. Here is a summary of the steps.

1. Select firm-specific critical variables $X$ whose dependence structure will proxy for the dependence structure of default times. (In the study below we use equity prices.)

2. Calibrate the copula $C$ of $X$ from a selected parametric family of copulas or distributions, such as the t copula or the skewed t distribution. In the distribution case, use the EM algorithm.

3. Separately, calibrate deterministic default intensities from single name CDS spread quotes, as in Section 5.4.

4. Use the default intensities to calculate survival functions $S_i(t)$ for each of the firms, using equation (6.1).

5. Using the copula $C$, develop the distribution of kth-to-default times by Monte Carlo sampling of many scenarios, as follows. In each scenario,
choose a sample value of $U$ from the copula $C$. Use equation (6.2) to determine the default time of each firm in this scenario. Order these times from first to last to define $\tau^1, \ldots, \tau^5$. By repeating this simulation over many scenarios, we can develop a simulated unconditional distribution of each of the kth-to-default times $\tau^k$.

6. Use these distributions to compute the expectations in equation (7.4) in order to solve for the basket CDS spread price $q^*$.

7.3 The distribution of kth-to-default times

Before describing our empirical results for this basket CDS pricing method, we elaborate a little on item 5 above, and examine via some experiments how the distributions depend on the choice of copula, comparing four commonly used bivariate copulas: Gaussian, t, Clayton, and Gumbel.

To simplify the picture, we assume there are two idealized firms, with Kendall's $\tau = 0.5$ for all copulas. We take a five year horizon and set the default intensity of the first firm to be a constant 0.05, and that of the second firm to be 0.03. We want to look at the first to default (FTD) and last to default (LTD) probabilities at different times before maturity.

7.3.1 Algorithm

We calculated the $k$-th to default probabilities using the following procedure.

1. Use the Matlab(TM) copula toolbox 1.0 to simulate uniform variables $u_{i,j}$ from the Gaussian, t, Clayton and Gumbel copulas with the same Kendall's tau
correlation, where $i = 1, 2$, $j = 1, \ldots, n$, and $n$ is the number of samples.

2. From equation (6.2), obtain the default times $\tau_{i,j}$ and sort within each scenario. The $k$-th row is then a series of $k$-th to default times $\tau^k_j$.

3. Divide the interval from year 0 to year 5 into 500 small sub-intervals. Count the number of $\tau^k_j$ values that fall into each sub-interval and divide by the number of samples to get the default probability for each small sub-interval, and hence an approximate probability density function.

In the following, we illustrate results for FTD and LTD using $n = 1{,}000{,}000$ samples.

7.3.2 Empirical Probabilities of Last to Default (LTD) and First to Default (FTD)

First, we recall that the t copula is both upper and lower tail dependent, the Clayton copula is lower tail dependent but upper tail independent, the Gumbel copula is the reverse, and the Gaussian copula is tail independent in both tails.

FIGURE 2 ABOUT HERE

We can see from Figure 2 that a copula function with lower tail dependence (the Clayton copula) leads to the highest default probabilities for LTD, while a copula function with upper tail dependence (the Gumbel copula) leads to
the lowest default probabilities. The tail dependent t copula leads to higher LTD default probabilities than the tail independent Gaussian copula.

Default events tend to happen when the uniform random variables $U$ are small (close to 0). Since the LTD requires that both uniform variables in the basket are small, a lower tail dependent copula will lead to higher LTD probabilities than a copula without lower tail dependence.

FIGURE 3 ABOUT HERE

In Figure 3, we see that the Clayton copula, with only lower tail dependence, leads to the lowest FTD probabilities, while the Gumbel copula, with only upper tail dependence, leads to the highest FTD probabilities. These results illustrate the sometimes unexpected relationships between tail dependence and FTD probabilities.
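For readers who wish to reproduce the flavor of these experiments without the Matlab toolbox, here is a sketch of the procedure of Section 7.3.1 for the t copula case (two firms, constant intensities 0.05 and 0.03, Kendall's $\tau = 0.5$); the degrees of freedom and the sample size are illustrative choices, so the resulting histograms only approximate Figures 2 and 3.

```python
import numpy as np
from scipy.stats import t as student_t, invgamma

rng = np.random.default_rng(1)
n, nu, tau = 200_000, 4.0, 0.5
rho = np.sin(np.pi * tau / 2.0)                  # invert (4.3) for elliptical copulas
R = np.array([[1.0, rho], [rho, 1.0]])

# t copula uniforms: sample a bivariate t via its mixture, then apply the univariate t CDF
W = invgamma.rvs(a=nu / 2, scale=nu / 2, size=n, random_state=rng)
Z = rng.multivariate_normal(np.zeros(2), R, size=n)
U = student_t.cdf(np.sqrt(W)[:, None] * Z, df=nu)

# constant intensities: tau_i = -log(1 - U_i) / lambda_i, from (6.2)
lam = np.array([0.05, 0.03])
times = -np.log1p(-U) / lam
ftd, ltd = times.min(axis=1), times.max(axis=1)

# default probabilities per sub-interval on [0, 5] years (500 bins, as in the paper)
bins = np.linspace(0.0, 5.0, 501)
p_ftd, _ = np.histogram(ftd, bins=bins)
p_ltd, _ = np.histogram(ltd, bins=bins)
print(p_ftd.sum() / n, p_ltd.sum() / n)          # P(FTD <= 5y), P(LTD <= 5y)
```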
7.4 Empirical Basket CDS Pricing Comparison

We now use the method of Section 7.2 to compare two approaches to the calibration of the copula $C$. The first approach, popular in the literature, is to directly calibrate a t copula. Since this copula has tail dependence, it provides a way to introduce default contagion explicitly into the model. In order to get uniform variates, we will still need to specify marginal distributions, which we will take to be the empirical distributions.

The second approach is to calibrate the skewed t distribution using the EM algorithm described earlier. Calibrating the full distribution frees us from the need to separately estimate the marginals. Also, the skewed t distribution has heavier tails than the t distribution, and does not suffer from the bivariate exchangeability of the t copula, which some argue is an unrealistic symmetry in the dependence structure of defaults.

In this experiment we use as our critical variables the equity prices of the same five underlying stocks as used above: AT&T, Bell South, Century Tel, SBC, Sprint. We obtained the adjusted daily closing prices from finance.yahoo.com for the period 07/02/1998 to 07/02/2004.

7.4.1 Copula approach

We first use the empirical distribution to model the marginal distributions and transform the equity prices into uniform variables. Then we can calibrate the t copula using those variates. For comparison, we also calibrate a Gaussian copula.

If we fix in advance the degree of freedom $\nu$, the calibration of the t copula is fast; see Di Clemente and Romano (2003a), Demarta and McNeil (2005), and Galiani (2003). However, we know of no good method to calibrate the degree of freedom $\nu$ other than direct search. With this data, we find the degree of freedom to be 7.406, found by maximizing the log likelihood using direct search, looping $\nu$ over a fine grid up to 20. Each step takes about 5 seconds, and the full calibration takes about 24 hours (2005 vintage laptop running Windows XP).
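A sketch of this direct search: the correlation matrix is first estimated from pairwise Kendall's tau via (4.3), and the t copula log likelihood is then profiled over a grid of $\nu$ values. The grid and the use of scipy's multivariate t density are our illustrative choices, not the paper's original setup.

```python
import numpy as np
from scipy.stats import t as student_t, multivariate_t, kendalltau

def t_copula_loglik(u, nu, P):
    """t copula log likelihood of pseudo-uniforms u, cf. (3.5)."""
    x = student_t.ppf(u, df=nu)                        # map uniforms to t margins
    joint = multivariate_t(loc=np.zeros(u.shape[1]), shape=P, df=nu).logpdf(x)
    margins = student_t.logpdf(x, df=nu).sum(axis=1)
    return np.sum(joint - margins)

def fit_t_copula_by_search(u, grid):
    d = u.shape[1]
    tau = np.array([[kendalltau(u[:, i], u[:, j])[0] for j in range(d)] for i in range(d)])
    P = np.sin(np.pi * tau / 2.0)                      # invert (4.3); in high dimensions this
    np.fill_diagonal(P, 1.0)                           # estimate may need a PSD correction
    logliks = [t_copula_loglik(u, nu, P) for nu in grid]
    return grid[int(np.argmax(logliks))], P

# usage: nu_hat, P_hat = fit_t_copula_by_search(u, np.arange(2.5, 20.0, 0.5)),
# where u are pseudo-uniforms built as in (3.3)
```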
The maximum log likelihood for the t copula was substantially better than that of the Gaussian copula. After calibration, we follow the remaining steps of Section 7.2 and report the results in the table below. Demarta and McNeil (2005) also suggest using the skewed t copula, but we were not able to calibrate it directly for this study.

7.4.2 Distribution approach

We calibrate the multivariate t and skewed t distributions using the EM algorithm described in Section 2. The calibration is fast compared to the copula calibration: with the same data and equipment, it takes less than one minute, compared to 24 hours for the looping search of $\nu$. The calibrated degree of freedom is the same for the t and the skewed t distributions, and their log likelihoods are almost identical.

Spread prices for the $k$-th to default basket CDS are reported in Table 3. We can see that the lower tail dependent t copula, compared to the Gaussian, leads to a higher default probability for LTD and a lower probability for FTD, and thus to a higher spread price for LTD and a lower spread price for FTD. The t distribution has almost the same log likelihood and almost the same $k$-th to default spread prices as the skewed t distribution. Both distributions lead to higher spread prices for LTD and lower spread prices for FTD than the t copula.

The calibration of the t distribution is a superior approach, both because there is no extra requirement to assume a form for the marginals, and because the EM algorithm has tremendous speed advantages. Basket credit
default swaps or collateralized debt obligations usually involve a large number of securities. For example, a synthetic CDO called EuroStoxx50, issued on May 18, 2001, references 50 single name credit default swaps on 50 credits belonging to the DJ EuroStoxx50 equity index. In such cases, the calibration of a t copula will be extremely slow.

TABLE 3 ABOUT HERE

8 Summary and Concluding Remarks

We follow Rutkowski's single name credit risk modeling and Schönbucher and Schubert's portfolio credit risk copula approach to price basket credit default swaps. The t copula is widely used in the pricing of basket credit default swaps for its lower tail dependence. However, it requires specifying the marginal distributions first and calibrating the marginal distributions and the copula separately. In addition, there is no good (fast) method to calibrate the degree of freedom $\nu$. Instead, we suggest using the fast EM algorithm for t distribution and skewed t distribution calibration, where all the parameters are calibrated together. To our knowledge, we are the first to suggest calibrating the full multivariate distribution to price basket credit default swaps with this trigger-variable approach.

As compared to the Gaussian copula, the t copula leads to higher default probabilities and spread prices for basket LTD credit default swaps, and lower
default probabilities and spread prices for FTD, because of the introduction of tail dependence to model default contagion. Both the t distribution and the skewed t distribution lead to yet higher spread prices for basket LTD credit default swaps and lower spread prices for FTD than the t copula. This is suggestive of a higher tail dependence of default times than is reflected in the pure copula approach. Because default contagion has shown itself to be pronounced during extreme events, we suspect that this is a more useful model of real default outcomes.

We feel the skewed t distribution has the potential to become a powerful tool for quantitative analysts doing rich-cheap analysis of credit derivatives.

References

Cherubini, U., E. Luciano, and W. Vecchiato. 2004. Copula Methods in Finance, John Wiley and Sons.

Demarta, S. and A. McNeil. 2005. The t copula and related copulas, International Statistical Review, to appear.

Di Clemente, A. and C. Romano. 2003a. Beyond Markowitz: building the optimal portfolio using non-elliptical asset return distributions, Rome, Italy, working paper.

Di Clemente, A. and C. Romano. 2003b. A copula extreme value theory approach for modeling operational risk, Rome, Italy, working paper.

Di Clemente, A. and C. Romano. 2004. Measuring and optimizing portfolio credit risk: a copula-based approach, Rome, Italy, working paper.

Embrechts, P., F. Lindskog, and A. McNeil. 2001. Modelling dependence with copulas and applications to risk management, Zurich.

Galiani, Stefano. 2003. Copula functions and their applications in pricing and risk managing multiname credit derivative products, Master's thesis, Department of Mathematics, King's College London.
Georges, P., A. Lamy, E. Nicolas, G. Quibel, and T. Roncalli. 2001. Multivariate survival modelling: a unified approach with copulas.

Hu, Wenbo. 2005. Calibration of Multivariate Generalized Hyperbolic Distributions Using the EM Algorithm, with Applications in Risk Management, Portfolio Optimization and Portfolio Credit Risk, PhD dissertation, Florida State University, Tallahassee, FL.

Hull, John. 2002. Options, Futures & Other Derivatives, 5th ed., Prentice Hall.

Joe, H. 1997. Multivariate Models and Dependence Concepts, Monographs on Statistics and Applied Probability 73.

Li, David. 1999. On default correlation: a copula function approach, working paper.

Masala, G., M. Menzietti, and M. Micocci. Optimization of conditional VaR in an actuarial model for credit risk assuming a Student t copula dependence structure, University of Cagliari and University of Rome, working paper.

Mashal, R. and M. Naldi. 2002. Pricing multiname credit derivatives: heavy tailed hybrid approach, working paper.

McNeil, A., R. Frey, and P. Embrechts. 2005. Quantitative Risk Management: Concepts, Techniques and Tools, Princeton University Press.

Meneguzzo, D. and W. Vecchiato. 2002. Copula sensitivity in collateralized debt obligations and basket default swaps pricing and risk monitoring, preprint.

Nelsen, Roger. 1999. An Introduction to Copulas, Springer-Verlag.

Rutkowski, M. 1999. On models of default risk by R. Elliott, M. Jeanblanc and M. Yor, Warsaw University of Technology, working paper.

Schönbucher, P. J. and D. Schubert. 2001. Copula-dependent default risk in intensity models, working paper.

Schönbucher, P. 2003. Credit Derivatives and Pricing Models: Models, Pricing and Implementation, John Wiley and Sons.
Figure 1: 1000 samples of the Gaussian copula and the t copula with Kendall's $\tau = 0.5$ (axes $U$ and $V$). There are more points in both corners for the t copula.
Table 1: Credit default swap mid-price quotes (in basis points) for AT&T, Bell South, Century Tel, SBC, and Sprint at maturities of 1 to 5 years.
Table 2: Calibrated default intensities for AT&T, Bell South, Century Tel, SBC, and Sprint over years 1 to 5.
Figure 2: Default probabilities of LTD over time (years) for the Clayton, Gaussian, t, and Gumbel copulas.
Figure 3: Default probabilities of FTD over time (years) for the Clayton, Gaussian, t, and Gumbel copulas.
Table 3: Spread prices for the $k$-th to default (FTD, 2TD, 3TD, 4TD, LTD) under the Gaussian copula, the t copula, the t distribution, and the skewed t distribution.
Java Modules for Time Series Analysis
Java Modules for Time Series Analysis Agenda Clustering Non-normal distributions Multifactor modeling Implied ratings Time series prediction 1. Clustering + Cluster 1 Synthetic Clustering + Time series
1 Prior Probability and Posterior Probability
Math 541: Statistical Theory II Bayesian Approach to Parameter Estimation Lecturer: Songfeng Zheng 1 Prior Probability and Posterior Probability Consider now a problem of statistical inference in which
Understanding the Impact of Weights Constraints in Portfolio Theory
Understanding the Impact of Weights Constraints in Portfolio Theory Thierry Roncalli Research & Development Lyxor Asset Management, Paris [email protected] January 2010 Abstract In this article,
Statistical Machine Learning
Statistical Machine Learning UoC Stats 37700, Winter quarter Lecture 4: classical linear and quadratic discriminants. 1 / 25 Linear separation For two classes in R d : simple idea: separate the classes
INSURANCE RISK THEORY (Problems)
INSURANCE RISK THEORY (Problems) 1 Counting random variables 1. (Lack of memory property) Let X be a geometric distributed random variable with parameter p (, 1), (X Ge (p)). Show that for all n, m =,
MULTIPLE DEFAULTS AND MERTON'S MODEL L. CATHCART, L. EL-JAHEL
ISSN 1744-6783 MULTIPLE DEFAULTS AND MERTON'S MODEL L. CATHCART, L. EL-JAHEL Tanaka Business School Discussion Papers: TBS/DP04/12 London: Tanaka Business School, 2004 Multiple Defaults and Merton s Model
General Sampling Methods
General Sampling Methods Reference: Glasserman, 2.2 and 2.3 Claudio Pacati academic year 2016 17 1 Inverse Transform Method Assume U U(0, 1) and let F be the cumulative distribution function of a distribution
Machine Learning and Pattern Recognition Logistic Regression
Machine Learning and Pattern Recognition Logistic Regression Course Lecturer:Amos J Storkey Institute for Adaptive and Neural Computation School of Informatics University of Edinburgh Crichton Street,
Credit Derivatives: fundaments and ideas
Credit Derivatives: fundaments and ideas Roberto Baviera, Rates & Derivatives Trader & Structurer, Abaxbank I.C.T.P., 14-17 Dec 2007 1 About? 2 lacked knowledge Source: WSJ 5 Dec 07 3 Outline 1. overview
Generating Random Numbers Variance Reduction Quasi-Monte Carlo. Simulation Methods. Leonid Kogan. MIT, Sloan. 15.450, Fall 2010
Simulation Methods Leonid Kogan MIT, Sloan 15.450, Fall 2010 c Leonid Kogan ( MIT, Sloan ) Simulation Methods 15.450, Fall 2010 1 / 35 Outline 1 Generating Random Numbers 2 Variance Reduction 3 Quasi-Monte
Modelling the Dependence Structure of MUR/USD and MUR/INR Exchange Rates using Copula
International Journal of Economics and Financial Issues Vol. 2, No. 1, 2012, pp.27-32 ISSN: 2146-4138 www.econjournals.com Modelling the Dependence Structure of MUR/USD and MUR/INR Exchange Rates using
Copula Simulation in Portfolio Allocation Decisions
Copula Simulation in Portfolio Allocation Decisions Gyöngyi Bugár Gyöngyi Bugár and Máté Uzsoki University of Pécs Faculty of Business and Economics This presentation has been prepared for the Actuaries
Statistics and Probability Letters. Goodness-of-fit test for tail copulas modeled by elliptical copulas
Statistics and Probability Letters 79 (2009) 1097 1104 Contents lists available at ScienceDirect Statistics and Probability Letters journal homepage: www.elsevier.com/locate/stapro Goodness-of-fit test
Modeling Operational Risk: Estimation and Effects of Dependencies
Modeling Operational Risk: Estimation and Effects of Dependencies Stefan Mittnik Sandra Paterlini Tina Yener Financial Mathematics Seminar March 4, 2011 Outline Outline of the Talk 1 Motivation Operational
MULTIVARIATE PROBABILITY DISTRIBUTIONS
MULTIVARIATE PROBABILITY DISTRIBUTIONS. PRELIMINARIES.. Example. Consider an experiment that consists of tossing a die and a coin at the same time. We can consider a number of random variables defined
Statistical Machine Learning from Data
Samy Bengio Statistical Machine Learning from Data 1 Statistical Machine Learning from Data Gaussian Mixture Models Samy Bengio IDIAP Research Institute, Martigny, Switzerland, and Ecole Polytechnique
1. (First passage/hitting times/gambler s ruin problem:) Suppose that X has a discrete state space and let i be a fixed state. Let
Copyright c 2009 by Karl Sigman 1 Stopping Times 1.1 Stopping Times: Definition Given a stochastic process X = {X n : n 0}, a random time τ is a discrete random variable on the same probability space as
Measuring Economic Capital: Value at Risk, Expected Tail Loss and Copula Approach
Measuring Economic Capital: Value at Risk, Expected Tail Loss and Copula Approach by Jeungbo Shim, Seung-Hwan Lee, and Richard MacMinn August 19, 2009 Please address correspondence to: Jeungbo Shim Department
Contents. List of Figures. List of Tables. List of Examples. Preface to Volume IV
Contents List of Figures List of Tables List of Examples Foreword Preface to Volume IV xiii xvi xxi xxv xxix IV.1 Value at Risk and Other Risk Metrics 1 IV.1.1 Introduction 1 IV.1.2 An Overview of Market
CONDITIONAL, PARTIAL AND RANK CORRELATION FOR THE ELLIPTICAL COPULA; DEPENDENCE MODELLING IN UNCERTAINTY ANALYSIS
CONDITIONAL, PARTIAL AND RANK CORRELATION FOR THE ELLIPTICAL COPULA; DEPENDENCE MODELLING IN UNCERTAINTY ANALYSIS D. Kurowicka, R.M. Cooke Delft University of Technology, Mekelweg 4, 68CD Delft, Netherlands
Statistics for Retail Finance. Chapter 8: Regulation and Capital Requirements
Statistics for Retail Finance 1 Overview > We now consider regulatory requirements for managing risk on a portfolio of consumer loans. Regulators have two key duties: 1. Protect consumers in the financial
Jung-Soon Hyun and Young-Hee Kim
J. Korean Math. Soc. 43 (2006), No. 4, pp. 845 858 TWO APPROACHES FOR STOCHASTIC INTEREST RATE OPTION MODEL Jung-Soon Hyun and Young-Hee Kim Abstract. We present two approaches of the stochastic interest
Hedging Illiquid FX Options: An Empirical Analysis of Alternative Hedging Strategies
Hedging Illiquid FX Options: An Empirical Analysis of Alternative Hedging Strategies Drazen Pesjak Supervised by A.A. Tsvetkov 1, D. Posthuma 2 and S.A. Borovkova 3 MSc. Thesis Finance HONOURS TRACK Quantitative
For a partition B 1,..., B n, where B i B j = for i. A = (A B 1 ) (A B 2 ),..., (A B n ) and thus. P (A) = P (A B i ) = P (A B i )P (B i )
Probability Review 15.075 Cynthia Rudin A probability space, defined by Kolmogorov (1903-1987) consists of: A set of outcomes S, e.g., for the roll of a die, S = {1, 2, 3, 4, 5, 6}, 1 1 2 1 6 for the roll
Multivariate Analysis (Slides 13)
Multivariate Analysis (Slides 13) The final topic we consider is Factor Analysis. A Factor Analysis is a mathematical approach for attempting to explain the correlation between a large set of variables
Stock Price Dynamics, Dividends and Option Prices with Volatility Feedback
Stock Price Dynamics, Dividends and Option Prices with Volatility Feedback Juho Kanniainen Tampere University of Technology New Thinking in Finance 12 Feb. 2014, London Based on J. Kanniainen and R. Piche,
The Bivariate Normal Distribution
The Bivariate Normal Distribution This is Section 4.7 of the st edition (2002) of the book Introduction to Probability, by D. P. Bertsekas and J. N. Tsitsiklis. The material in this section was not included
Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model
Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written
Properties of Future Lifetime Distributions and Estimation
Properties of Future Lifetime Distributions and Estimation Harmanpreet Singh Kapoor and Kanchan Jain Abstract Distributional properties of continuous future lifetime of an individual aged x have been studied.
Two Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering
Two Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering Department of Industrial Engineering and Management Sciences Northwestern University September 15th, 2014
A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA
REVSTAT Statistical Journal Volume 4, Number 2, June 2006, 131 142 A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA Authors: Daiane Aparecida Zuanetti Departamento de Estatística, Universidade Federal de São
1 Portfolio mean and variance
Copyright c 2005 by Karl Sigman Portfolio mean and variance Here we study the performance of a one-period investment X 0 > 0 (dollars) shared among several different assets. Our criterion for measuring
Poisson Models for Count Data
Chapter 4 Poisson Models for Count Data In this chapter we study log-linear models for count data under the assumption of a Poisson error structure. These models have many applications, not only to the
Some probability and statistics
Appendix A Some probability and statistics A Probabilities, random variables and their distribution We summarize a few of the basic concepts of random variables, usually denoted by capital letters, X,Y,
Reject Inference in Credit Scoring. Jie-Men Mok
Reject Inference in Credit Scoring Jie-Men Mok BMI paper January 2009 ii Preface In the Master programme of Business Mathematics and Informatics (BMI), it is required to perform research on a business
Presenter: Sharon S. Yang National Central University, Taiwan
Pricing Non-Recourse Provisions and Mortgage Insurance for Joint-Life Reverse Mortgages Considering Mortality Dependence: a Copula Approach Presenter: Sharon S. Yang National Central University, Taiwan
Chapter 3 RANDOM VARIATE GENERATION
Chapter 3 RANDOM VARIATE GENERATION In order to do a Monte Carlo simulation either by hand or by computer, techniques must be developed for generating values of random variables having known distributions.
t := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d).
1. Line Search Methods Let f : R n R be given and suppose that x c is our current best estimate of a solution to P min x R nf(x). A standard method for improving the estimate x c is to choose a direction
4: SINGLE-PERIOD MARKET MODELS
4: SINGLE-PERIOD MARKET MODELS Ben Goldys and Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2015 B. Goldys and M. Rutkowski (USydney) Slides 4: Single-Period Market
Chapter 2: Binomial Methods and the Black-Scholes Formula
Chapter 2: Binomial Methods and the Black-Scholes Formula 2.1 Binomial Trees We consider a financial market consisting of a bond B t = B(t), a stock S t = S(t), and a call-option C t = C(t), where the
Gamma Distribution Fitting
Chapter 552 Gamma Distribution Fitting Introduction This module fits the gamma probability distributions to a complete or censored set of individual or grouped data values. It outputs various statistics
Package EstCRM. July 13, 2015
Version 1.4 Date 2015-7-11 Package EstCRM July 13, 2015 Title Calibrating Parameters for the Samejima's Continuous IRT Model Author Cengiz Zopluoglu Maintainer Cengiz Zopluoglu
Hedging Barriers. Liuren Wu. Zicklin School of Business, Baruch College (http://faculty.baruch.cuny.edu/lwu/)
Hedging Barriers Liuren Wu Zicklin School of Business, Baruch College (http://faculty.baruch.cuny.edu/lwu/) Based on joint work with Peter Carr (Bloomberg) Modeling and Hedging Using FX Options, March
Actuarial and Financial Mathematics Conference Interplay between finance and insurance
KONINKLIJKE VLAAMSE ACADEMIE VAN BELGIE VOOR WETENSCHAPPEN EN KUNSTEN Actuarial and Financial Mathematics Conference Interplay between finance and insurance CONTENTS Invited talk Optimal investment under
Practice problems for Homework 11 - Point Estimation
Practice problems for Homework 11 - Point Estimation 1. (10 marks) Suppose we want to select a random sample of size 5 from the current CS 3341 students. Which of the following strategies is the best:
Aggregate Loss Models
Aggregate Loss Models Chapter 9 Stat 477 - Loss Models Chapter 9 (Stat 477) Aggregate Loss Models Brian Hartman - BYU 1 / 22 Objectives Objectives Individual risk model Collective risk model Computing
1 Sufficient statistics
1 Sufficient statistics A statistic is a function T = rx 1, X 2,, X n of the random sample X 1, X 2,, X n. Examples are X n = 1 n s 2 = = X i, 1 n 1 the sample mean X i X n 2, the sample variance T 1 =
Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page
Errata for ASM Exam C/4 Study Manual (Sixteenth Edition) Sorted by Page 1 Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page Practice exam 1:9, 1:22, 1:29, 9:5, and 10:8
Financial Mathematics and Simulation MATH 6740 1 Spring 2011 Homework 2
Financial Mathematics and Simulation MATH 6740 1 Spring 2011 Homework 2 Due Date: Friday, March 11 at 5:00 PM This homework has 170 points plus 20 bonus points available but, as always, homeworks are graded
The Monte Carlo Framework, Examples from Finance and Generating Correlated Random Variables
Monte Carlo Simulation: IEOR E4703 Fall 2004 c 2004 by Martin Haugh The Monte Carlo Framework, Examples from Finance and Generating Correlated Random Variables 1 The Monte Carlo Framework Suppose we wish
