Generating Scenario Trees for Multistage Decision Problems

Kjetil Høyland and Stein W. Wallace
Department of Industrial Economics and Technology Management, Norwegian University of Science and Technology, N-7491 Trondheim, Norway

In models of decision making under uncertainty we are often faced with the problem of representing the uncertainties in a form suitable for quantitative models. If the uncertainties are expressed in terms of multivariate continuous distributions, or a discrete distribution with far too many outcomes, we normally face two possibilities: either creating a decision model with internal sampling, or trying to find a simple discrete approximation of the given distribution that serves as input to the model. This paper presents a method based on nonlinear programming that can be used to generate a limited number of discrete outcomes that satisfy specified statistical properties. Users are free to specify any statistical properties they find relevant, and the method can handle inconsistencies in the specifications. The basic idea is to minimize some measure of distance between the statistical properties of the generated outcomes and the specified properties. We illustrate the method with single- and multiple-period problems. The results are encouraging: a limited number of generated outcomes indeed have statistical properties that are close or equal to the specifications. We discuss how to verify that the relevant statistical properties are captured in the specifications, and argue that which properties are relevant will be problem dependent.

(Scenario Generation; Asset Allocation; Nonconvex Programming)

1. Introduction and Motivation

In models of decision making under uncertainty, it is essential to represent uncertainties in a form suitable for computation.
If random variables are represented by multidimensional continuous distributions, or by discrete distributions with large numbers of outcomes, computation is difficult because the models explicitly or implicitly require integration over such variables. To avoid this problem, we normally resort to either internal sampling or procedures that replace the distribution with a small set of discrete outcomes. In stochastic programming, which provides the motivation for this paper, internal sampling is used in stochastic decomposition (see, for example, Higle and Sen 1991) and importance sampling (see, for example, Infanger 1994 and the references therein). We can also resort to sampling when a continuous distribution is to be represented by a discrete approximation. To make sure that the distribution properties of the sample are close to those of the continuous distribution, the number of outcomes has to be large. However, this may easily recreate the situation where the decision model fails because of the implied integration. There is therefore a need to generate the outcomes in a more intelligent way than by sampling, whether that sampling is stratified or not. Spetzler and Holstein (1975) define probability encoding as the process of extracting and quantifying individual judgement about uncertain quantities. Merkhofer (1987) expands their original five-stage process with two additional steps.

Management Science, Vol. 47, No. 2, February 2001, (c) 2001 INFORMS

This paper addresses the final step in the expanded

process: discretizing the continuous probability distribution. The standard approach for approximating a continuous distribution by a discrete distribution is the following: (1) divide the outcome region into intervals, (2) select a representative point in each interval, and (3) assign a probability to each point. An example of such an approach is the bracket mean method. The intervals are found by dividing the outcome region into N equally probable intervals, the representative point is the mean of the corresponding interval, and the assigned probability is 1/N. Miller and Rice (1983) point out that bracket mean methods always underestimate the even moments and usually underestimate the odd moments of the original distributions. They illustrate a method that overcomes this flaw. The procedure, which is based on Gaussian integration rules, generates an N-point distribution that matches the first 2N - 1 moments of the continuous distribution. Smith (1993) reviews different methods of constructing discrete distributions, and proposes an efficient method for accurately computing the moments of an output distribution (or value lottery) given the moments of the input distribution. Keefer (1994) draws attention to the fact that even though the discrete distribution matches the first several moments of the continuous distribution, the approximation of the expected utility (EU) or certainty equivalent (CEV) (which are the bases for the decisions) can be poor. He proposes six different three-point approximations that lead to quite accurate estimates of the CEVs when the risk level and the continuous distributions are within reasonable bounds. Some of the literature described above discusses multiple-variable problems; see, for example, Smith (1993) and Keefer and Bodily (1983). These approaches construct (multivariable) scenario trees by discretizing each variable individually (conditionally).
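The bracket mean construction is easy to sketch. The snippet below is a minimal illustration for a standard normal variable (our choice; the method applies to any continuous distribution with a computable cdf): it forms N equally probable intervals and represents each by its conditional mean with probability 1/N.

```python
import numpy as np
from scipy.stats import norm

def bracket_mean(N):
    """N-point bracket mean discretization of a standard normal variable."""
    q = norm.ppf(np.linspace(0.0, 1.0, N + 1))   # interval endpoints (quantiles)
    # For Z ~ N(0, 1): E[Z | a < Z < b] = (pdf(a) - pdf(b)) / (cdf(b) - cdf(a)),
    # and each interval here has probability 1/N.
    points = (norm.pdf(q[:-1]) - norm.pdf(q[1:])) * N
    probs = np.full(N, 1.0 / N)
    return points, probs

points, probs = bracket_mean(4)
# The discrete second moment falls short of the true variance of 1, illustrating
# Miller and Rice's observation that even moments are underestimated.
second_moment = np.sum(probs * points ** 2)
```

Running this for small N shows the second moment staying strictly below one, which is exactly the bias the Gaussian-integration construction is designed to remove.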
The proposed method in this paper approximates multiple-variable outcomes simultaneously. In contrast to what we have found in the literature, we also address multiple-period problems. The idea behind the method is to minimize some measure of distance between the specifications and the statistical properties of the discrete approximation. A part of the problem therefore is to specify which properties are relevant in a given case. When empirical analysis is used to determine the distribution properties of the uncertain variables, possibly with expert judgments added, checking for inconsistencies in the specifications is especially important. Consider the estimation of distribution properties based only on empirical analysis. Often empirical data consists of time series of various lengths for different variables. For example, if we have two series of data, one longer than the other, we would use the long series whenever possible, but would have to use the intersection of the two series for covariance calculations. This will almost certainly create inconsistencies between the variance in the variance/covariance matrix and the variance stemming from the long series. In many situations it is rational to determine some statistical properties based on expert judgment and others on empirical analysis. For example, the decision-maker may wish to base the estimation of the variance/covariance matrix on empirical data, but for the estimation of the mean, his own subjective views are used. The variance/covariance matrix is then based on the empirical mean, which probably is different from the subjective mean. Hence, the specified distribution properties (mean and variance/covariance) might not be internally consistent. In such a case we may find that an underlying distribution with the specified properties does not exist. The method presented here can handle inconsistencies in the specifications. 
If the specifications are inconsistent, it will of course not be possible to obtain a full match. In that case, the decision-maker might reconsider the specifications, or he may accept a set of outcomes with statistical properties that only approximately match the specifications. The decision-maker can weight the different specifications to obtain the correct trade-off between them. The method is flexible with regard to user specifications. Users can specify the structure of the outcomes to be constructed and whatever distribution properties they find relevant. The model can, at least in principle, handle any moments of the distribution as well as interperiod dependencies. We address the crucial

question of which statistical properties are relevant in 4.2. The motivation for developing the method presented in this paper is the implementation of a stochastic multistage asset allocation model (Høyland 1998). In this context, the generation of discrete outcomes for the random variables is referred to as scenario generation. One scenario in such a model represents realizations of all random variables in all time periods. An adequate way of generating scenarios is essential for the validity of asset allocation models. Cariño et al. (1994) developed the first genuine commercial application of an asset allocation model, for a Japanese insurance company. The most recent publications describing the model are Cariño and Ziemba (1998) and Cariño et al. (1998). Three different methods are applied for scenario generation. The first assumes independence between returns in each time period. The decision-maker has to provide a pool of joint outcomes in each time period. The outcomes are obtained either from empirical data, from random sampling from an asset return model, or from a combination of forecasting models and expert judgments. The pool will usually be larger than the model can handle. A method for reducing the number of outcomes in each time period while preserving the mean and the variance of the marginal distributions is therefore applied. The idea of preserving certain statistical properties is also used in this paper, but our methodology is different, and we take a more general approach as to which properties may be important. The second scenario generation method takes the dependencies between time periods into account. Factor analysis and time series processes for the factors are applied to generate the scenarios for the uncertain variables. In the last approach the decision-maker has total flexibility in describing the scenarios, as the returns are constructed manually.
Zenios (1995) applies a discrete space binomial process for generating interest rate scenarios for asset liability management for fixed income securities. He focuses in particular on generating return scenarios for fixed income securities with contingent claims, and emphasizes that the future price depends not only on the future state, but also on the path of getting there. Because the number of states in the binomial lattice grows exponentially with the number of periods, Zenios and Shtilmann (1993) show how to generate subsets of the lattice that yield estimates at a prespecified level of accuracy relative to using the whole lattice. A problem closely related to the one discussed in this paper is that of deleting scenarios from an already existing collection, and possibly also using this to generate some new scenarios. For examples of this problem, we refer to Wang (1995), Dupacová (1996), Consigli and Dempster (1996), and Chen et al. (1997). The rest of the paper is organized as follows: Section 2 presents the scenario generation method. Section 3 illustrates the method with single- and multiple-period examples. In 4 we focus on the critical factors for the success of the method, and also apply the method to a real-world portfolio management model. Section 5 concludes the paper.

2. Model Description

The presented methodology can be applied to many types of decision problems under uncertainty. The focus in this paper is on generating the scenario tree, which is often important in decision analysis and stochastic programming. The presented methodology can be adjusted to other types of decision problems that require a different structure.

2.1. The Scenario Tree

A scenario tree is illustrated in Figure 1. The nodes in the tree represent states of the world at a particular point in time. In stochastic programming, decisions will be made at the nodes. The arcs represent realizations of the uncertain variables.
The scenario tree branches off for each possible realization of the random vector x = (x_1, x_2, ..., x_I) in each stage t = 1, ..., T. Note the difference from a decision tree, which branches on both decisions and events.

2.2. The Scenario Generation Method

A starting point for generating the scenario tree is a description of the statistical properties of the random variables. If the random variables are discrete

Figure 1. The Scenario Tree. Note. A path through the tree is called a scenario and consists of realizations of all random variables in all time periods.

with few (joint) outcomes, the generation of the tree is straightforward, as it can be done manually. However, in all other cases, constructing the tree manually is practically impossible. Hence, we need a procedure for generating a scenario tree with the proper statistical properties. In some cases the specified statistical properties partially describe a known underlying distribution, while in other cases the underlying distribution is unknown. In either case we refer to our specifications as the specified statistical properties or the specified distribution. To present the model we introduce the following notation. Let S be the set of all specified statistical properties, and S_VAL,i be the specified value of statistical property i in S. Furthermore, let I be the number of random variables, T be the number of stages, and N_t be the number of (conditional) outcomes in stage t. In this presentation we assume, for simplicity, a symmetrical tree, meaning that the number of branches is the same for all conditional distributions in the same period. The tree in Figure 1 is symmetrical with T = 3, N_1 = 5, N_2 = 3, and N_3 = 2. Define x to be the outcome vector of dimension I*N_1 + I*N_1*N_2 + ... + I*N_1*N_2*...*N_T, p to be the probability vector of dimension N_1 + N_1*N_2 + ... + N_1*N_2*...*N_T, and let f_i(x, p) be the mathematical expression for statistical property i in S. Let M be a matrix of zeroes and ones, whose number of rows equals the length of p and whose number of columns equals the number of nodes in the scenario tree; each column is the indicator of the conditional distribution at one node, so each column of M extracts a conditional distribution from the scenario tree. Finally, let w_i be the weight for statistical property i in S.
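These dimension formulas can be checked mechanically. The helper below counts the entries of x and p for a symmetric tree; applied to the Figure 1 tree, and assuming for illustration I = 4 random variables (the number used in the later examples), it gives a 50-dimensional p and a 200-dimensional x.

```python
def tree_dims(I, N):
    """Dimensions of the outcome vector x and the probability vector p for a
    symmetric scenario tree with I random variables and stage branching
    factors N = [N1, ..., NT]."""
    nodes_so_far, per_stage = 1, []
    for Nt in N:
        nodes_so_far *= Nt          # nodes per stage: N1, N1*N2, ..., N1*...*NT
        per_stage.append(nodes_so_far)
    dim_p = sum(per_stage)          # one probability per node
    dim_x = I * dim_p               # I outcome values per node
    return dim_x, dim_p

# Figure 1 tree: T = 3 with N1 = 5, N2 = 3, N3 = 2
dim_x, dim_p = tree_dims(4, [5, 3, 2])   # -> (200, 50)
```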
We want to construct x and p so that the statistical properties of the approximating distribution match (as well as possible) the specified statistical properties. We do this by minimizing a measure of distance between the statistical properties of the constructed distribution and the specifications, subject to constraints requiring the probabilities to be nonnegative and to sum to one. For the rest of this paper we use the square norm to measure distance; other choices are of course possible.

    min_{x,p}  sum_{i in S} w_i (f_i(x, p) - S_VAL,i)^2
    s.t.       p^T M = 1^T,                                 (1)
               p >= 0.

In this general description of the model, we let p be a variable in the optimization problem. We might also treat p as a parameter; see 4.1 for guidance with regard to this choice. Because of nonconvexities in the optimization problem, or inconsistent specifications, we might not obtain a perfect match. If this is the case, the weights w_i can incorporate the relative importance of satisfying the different specifications, as well as the quality of (trust in) the data. Of course, the weights are only relevant if there is a trade-off between some of the specifications, which means that not all specifications are perfectly satisfied. Since the optimization problem (1) is generally not convex, the solution might be (and probably is) a local optimum. But for our purposes it is satisfactory to have a solution with distribution properties equal or close to the specifications, even if other and even better solutions might exist. An objective value equal or close to zero indicates that the distribution of the scenarios matches the specifications perfectly or well. To solve the problem, we apply a heuristic where we rerun the model from different starting points until a satisfactory match is obtained; see 3.1 and 3.2 for details. For more sophisticated nonconvex optimization methods, see Horst and Tuy (1990) and the references therein.
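A small numerical sketch of problem (1) can be put together with off-the-shelf NLP software. The example below is our illustration, not the authors' implementation: it matches the means, standard deviations, and one correlation of two hypothetical variables with six single-period outcomes, treats both outcomes and probabilities as variables, and restarts from random points as described later in the paper.

```python
import numpy as np
from scipy.optimize import minimize

I, N = 2, 6                                   # random variables, outcomes
spec_mean = np.array([0.04, 0.08])            # hypothetical specifications
spec_std = np.array([0.01, 0.15])
spec_corr = -0.2
w_mean, w_std, w_corr = 1.0, 1.0, 1.0         # weights w_i

def unpack(z):
    return z[:I * N].reshape(I, N), z[I * N:]

def objective(z):
    """Weighted squared distance between realized and specified properties."""
    x, p = unpack(z)
    mean = x @ p
    dev = x - mean[:, None]
    std = np.sqrt(np.maximum((dev ** 2) @ p, 1e-12))
    corr = ((dev[0] * dev[1]) @ p) / (std[0] * std[1])
    return (w_mean * np.sum((mean - spec_mean) ** 2)
            + w_std * np.sum((std - spec_std) ** 2)
            + w_corr * (corr - spec_corr) ** 2)

rng = np.random.default_rng(0)
best = None
for _ in range(10):                           # simple restart heuristic
    x0 = spec_mean[:, None] + 3 * spec_std[:, None] * rng.uniform(-1, 1, (I, N))
    z0 = np.concatenate([x0.ravel(), np.full(N, 1.0 / N)])
    res = minimize(objective, z0,
                   constraints=[{"type": "eq",
                                 "fun": lambda z: unpack(z)[1].sum() - 1.0}],
                   bounds=[(None, None)] * (I * N) + [(0.0, 1.0)] * N)
    if best is None or res.fun < best.fun:
        best = res
    if best.fun < 1e-10:
        break
```

An objective value near zero signals a (near-)perfect match; with six outcomes and only five specifications this instance is underspecified, so a match is easy to find.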
Observe all the types of specifications that the general model description embraces. Any central moments and co-moments can be specified in any period. In later periods, specifications can be given over all outcomes or over outcomes with a common history. For the latter, the distribution properties can

Table 1. Percentiles of the Marginal Cumulative Distributions

                          0%       5%       25%      50%      75%      95%      100%
Cash                      3.0%     3.2%     --       4.0%     5.0%     6.2%     7.0%
Bonds                     4.5%     4.8%     5.2%     5.8%     6.5%     7.4%     8.2%
Domestic stocks         -30.0%   -25.0%     0.0%     8.5%    15.0%    25.0%    35.0%
International stocks    -35.0%   -30.0%     0.0%     9.0%    15.0%    30.0%    40.0%

either be conditional on, or independent of, the outcomes in earlier periods. In addition to specifying the higher moments of the distribution, worst-case outcomes can be included to ensure that extreme events are captured. Consider an asset allocation problem where the scenario tree describes the uncertain returns in different asset classes. Empirical studies of stock markets have documented an effect called volatility clumping, meaning that a period with high volatility is likely to be followed by a new period of high volatility; see, for example, Billio and Pelizzon (1997) for details. To model this phenomenon, we would like the volatility (standard deviation) of the nth-period outcomes that follow a common extreme (n-1)th-period outcome to be higher than average. By letting the nth-period standard deviation be parametric in the (n-1)th-period outcome, this relation can be expressed in the model. The expected value, skewness, and other distribution properties can be state dependent in the same way. The freedom to specify any desired property also leads to some possible pitfalls. A more detailed description of these pitfalls, and guidance on how to avoid them, is given in 4.1.

3. Generation of a Single- and a Multiple-Period Scenario Tree

This section illustrates the scenario generation method with single- and multiple-period examples. We leave the discussion of which statistical properties are relevant for 4, and for now simply assume that the relevant properties are specified. The example is taken from finance, and the problem is to find the optimal allocation of funds between main groups of asset classes.
We assume that the decision-maker is to split the funds between cash, bonds, domestic stocks, and international stocks.

3.1. Single-Period Scenario Tree

The specifications that follow apply to the single-period case and to the first period of the multiperiod case in 3.2. For each individual asset class, we let the decision-maker specify his or her market views in terms of (subjective) percentiles for the marginal distributions, as shown in Table 1. Note that for cash and bonds the market views are in terms of expectations for the interest rate, while for stocks the expectations are given for the total return. We fit an approximate cumulative distribution to the percentiles by using a NAG C library routine and derive the marginal distributions, as shown in Figures 2 and 3. The NAG C routine used for fitting the cumulative distribution does not guarantee that the second derivative changes sign only once, which might cause a somewhat peculiar form of the density function. Given the fitted cumulative distribution, we calculate the (central) moments. The example takes the first four moments into account. The decision-maker realizes that extreme negative events influence the solutions, and specifies a worst-case event to be included in the scenarios. This can be done in several ways. Here we include a worst-case event where all asset classes move in the wrong direction at the same time, and we let the size of the move be proportional to the standard deviation. Table 2 summarizes all specifications of marginal distribution properties. The worst-case event for asset class i is generated by the formula WC_i = E[x_i] - F * SD[x_i], where E[x_i] is the expected value of asset class i, SD[x_i] is the standard deviation of asset class i, and F is a constant. We let F = 2.5, and let the probability of the worst-case event be 0.5%. Note that for bonds and cash, the worst-case event is an increase in interest rates, which leads to a decrease in total return.
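The worst-case construction is a one-liner. Below we evaluate it with F = 2.5 on the first-period expected values and standard deviations that reappear in Table 4; the sign flip for the interest rate classes is our reading of the remark that their worst case is a rate increase.

```python
import numpy as np

# First-period expectations and standard deviations (in %), as in Table 4:
# cash, bonds, domestic stocks, international stocks.
E = np.array([4.33, 5.91, 7.61, 8.09])
SD = np.array([0.94, 0.82, 13.38, 15.70])
sign = np.array([+1.0, +1.0, -1.0, -1.0])   # rates move up, returns move down
F = 2.5

WC = E + sign * F * SD
# -> [6.68, 7.96, -25.84, -31.16], the worst-case rows of Table 4
```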
For co-moments, we assume that only the correlations are

Figure 2. The Fitted Cumulative Distribution Functions and the Derived Density Functions for the Bond Classes. Note. Specified percentiles are marked with triangles.

Figure 3. The Fitted Cumulative Distribution Functions and the Derived Density Functions for the Stock Classes.

Table 2. Statistical Properties Derived from the Marginal Distributions in Figures 2 and 3: expected value (%), standard deviation (%), skewness, kurtosis, and worst-case event (%) for cash, bonds, domestic stocks, and international stocks. Note. The normal distribution has a kurtosis of three; a kurtosis of less than three means that the distribution is less peaked around the mean than the normal distribution.

Table 3. Specification of Correlations between cash, bonds, domestic stocks, and international stocks.

Figure 4. Six Scenarios (with Probabilities in %) for Which the Distribution Properties Exactly Match the Specifications in Tables 2 and 3.

3.2. Multiperiod Scenario Tree

Expanding from one to several periods complicates the scenario generation in many ways, and in particular implies that intertemporal dependencies need to be considered. In 2 we gave an example from finance and argued that the volatility for stocks depends on previous returns. Some statistical properties are clearly state dependent, while others might be specified independently of the state. In this example we have chosen the expected value and the standard deviation to be state dependent, while the other statistical properties are independent of the state. To capture the volatility clumping effect, the volatility for asset class i in period t > 1 is modeled as[1]

    SD[x_{i,t}] = VC_i |x_{i,t-1} - E[x_{i,t-1}]| + (1 - VC_i) SD_AV[x_{i,t}]      (2)

where VC_i in (0, 1) is the volatility clumping parameter (a high VC_i leads to a large degree of volatility clumping), x_{i,t} is the outcome for asset class i in period t, E[x_{i,t}] is the expected value of the outcome for asset class i in period t, and SD_AV[x_{i,t}] is the average standard deviation for asset class i in period t. We model a mean reversion effect for the two bond classes.[2] The expected interest rate for bond class i at the end of period t > 1 is given by

    E[x_{i,t}] = MRF_i MRL_i + (1 - MRF_i) x_{i,t-1}      (3)

Note.
The worst-case event is given in the left-most scenario.

relevant. The correlations are estimated from empirical data; see Table 3. A perfect match with the specifications in Tables 2 and 3 is obtained when the number of scenarios is six or higher. Due to the nonconvexity of Problem (1), there are several possible distributions; one of these is given in Figure 4. See 4.1 for a more general discussion of the necessary number of scenarios versus the number of specifications.

where MRF_i in (0, 1) is the mean reversion factor (a high MRF_i leads to a large degree of mean reversion), MRL_i is the mean reversion level, and x_{i,t} is the interest rate for bond class i at the end of period t. For stocks we assume that there is a premium, in terms of higher expected return, for taking more risk,

[1] Empirical studies have shown that the volatility for stocks increases only after a large decrease in stock prices, not after a large increase; see Billio and Pelizzon (1997). Modeling this asymmetry is straightforward, but to simplify the presentation we assume symmetric and equal volatility dependencies for all asset classes.

[2] Mean reversion means that interest rates tend to revert to an average level. When interest rates are high, the economy slows down and interest rates tend to fall; when interest rates are low, the economy booms and interest rates tend to rise.

Table 4. Specification of Market Expectations

Asset class / distribution property       End of Period 1   End of Period 2   End of Period 3
Cash (duration three months)
  expected value of spot rate                  4.33%         State dep.        State dep.
  standard deviation                           0.94%         State dep.        State dep.
  skewness
  kurtosis
  worst-case event                             6.68%         State dep.        State dep.
Bonds (duration six years)
  expected value of spot rate                  5.91%         State dep.        State dep.
  standard deviation                           0.82%         State dep.        State dep.
  skewness
  kurtosis
  worst-case event                             7.96%         State dep.        State dep.
Domestic stocks
  expected value of total return               7.61%         State dep.        State dep.
  standard deviation                          13.38%         State dep.        State dep.
  skewness
  kurtosis
  worst-case event                           -25.84%         State dep.        State dep.
International stocks
  expected value of total return               8.09%         State dep.        State dep.
  standard deviation (return)                 15.70%         State dep.        State dep.
  skewness
  kurtosis
  worst-case event                           -31.16%         State dep.        State dep.

and let the expected total return for stock class i in period t be given by

    E[x_{i,t}] = r_{t-1} + RP_t SD[x_{i,t}]      (4)

where r_t is the risk-free interest rate (in this model approximated by the cash interest rate) at the end of period t, SD[x_{i,t}] is the specified standard deviation for stock class i in period t, and RP_t is a risk premium constant for period t. For Period 1 we assume the same set of specifications as for the single-period case. For Periods 2 and 3 the expected values and the standard deviations are state dependent as illustrated, while the rest of the specifications are state independent and assumed to be equal to the specifications in the first period. Table 4 summarizes the marginal distribution properties. Table 3 is used for the state-independent correlations in all three periods. The risk premium constant in the stock pricing model, RP_t, is set to 0.3 for t = 1, 2, and 3. We let the volatility clumping parameter VC_i = 0.3 for all assets, the mean reversion factor MRF_i = 0.2 for the interest rate classes, and the mean reversion level MRL_i = 4.0% and 5.8% for cash and bonds, respectively.
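The state-dependent rules (2)-(4) can be collected in a few helper functions. This is a sketch under our reading of the formulas; the parameter defaults are the values stated above (VC = 0.3, MRF = 0.2, RP = 0.3), while the example inputs are illustrative assumptions.

```python
def next_sd(prev_x, prev_mean, sd_avg, vc=0.3):
    """Eq. (2): volatility clumping - the period-t standard deviation grows
    with the absolute deviation of the previous outcome from its mean."""
    return vc * abs(prev_x - prev_mean) + (1.0 - vc) * sd_avg

def next_rate_mean(prev_rate, mrl, mrf=0.2):
    """Eq. (3): mean reversion - the expected rate moves toward the level mrl."""
    return mrf * mrl + (1.0 - mrf) * prev_rate

def stock_mean(risk_free_prev, sd_t, rp=0.3):
    """Eq. (4): risk premium - expected stock return is the risk-free rate at
    the end of the previous period plus a premium proportional to risk."""
    return risk_free_prev + rp * sd_t

# A bond rate of 6.5% with MRL = 5.8% is expected to drift down:
expected_rate = next_rate_mean(6.5, 5.8)        # 0.2*5.8 + 0.8*6.5 = 6.36
# After a crash (-20% against an expected 7.61%), volatility exceeds its average:
sd_after_crash = next_sd(-20.0, 7.61, 13.38)
```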
A three-period tree that has a perfect match with the specifications in Tables 3 and 4 is generated; see Figure 5 for the outcomes in the first two periods. There are alternative ways of constructing the multiperiod tree. In the given example we have chosen a sequential procedure: Specify statistical properties for the first period and generate first-period outcomes that are consistent with these specifications. For each generated first-period outcome, specify conditional distribution properties for the second period, and generate conditional second-period outcomes that are consistent with these specifications. Continue to specify conditional distribution properties and generate consistent outcomes through all periods. The sequential approach has numerical advantages due to the decomposition into single-period trees. Each single-period optimization problem is nonconvex, but by adjusting the number of outcomes and by running the optimization from different starting points, we can normally ensure that a perfect match is obtained in each of the generated single-period trees, provided one exists.
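The sequential procedure can be sketched as a period-by-period loop over the current frontier of nodes. Here solve_single_period stands in for solving problem (1) with the restart heuristic, and conditional_specs for the state-dependent specification rules; both signatures are assumptions for illustration.

```python
def build_tree(root_state, T, branching, conditional_specs, solve_single_period):
    """Sequential multiperiod construction: under every node, generate a
    single-period subtree consistent with that node's conditional specs.
    Nodes are identified by tuples of branch indices; () is the root."""
    tree = {(): root_state}
    frontier = [()]
    for t in range(1, T + 1):
        next_frontier = []
        for node in frontier:
            specs = conditional_specs(t, tree[node])          # state dependent
            outcomes = solve_single_period(specs, branching[t - 1])
            for k, outcome in enumerate(outcomes):
                child = node + (k,)
                tree[child] = outcome
                next_frontier.append(child)
        frontier = next_frontier
    return tree

# Toy run with a dummy single-period "solver": two periods, branching 2 then 3,
# gives 1 + 2 + 6 = 9 nodes.
toy = build_tree(0.0, 2, [2, 3],
                 conditional_specs=lambda t, state: state,
                 solve_single_period=lambda specs, n: [specs + k for k in range(n)])
```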

Figure 5. The First Two Periods of the Generated Three-Period Scenario Tree. Note. The scenario tree has statistical properties that match the specifications in Tables 3 and 4.

Since the sequential approach requires the distribution properties to be specified under each node in the tree, the approach lacks direct control of the statistical properties defined over all outcomes in the later periods t > 1. These properties are specified implicitly by the other specifications, and after constructing the tree, the decision-maker should analyze the generated distribution to check if his or her judgements can be trusted in this respect. The sequential approach also involves a more rigid optimization scheme. Several first-period trees might satisfy the first-period specifications. Some of them might lead to conditional second-period specifications that make it impossible to obtain a full match, while others might lead to a full match in the second period. As opposed to the sequential approach, a model that constructs the whole tree in one large optimization is better in this regard, as the first-period trees that create difficulties are not feasible. The main disadvantage of generating the whole tree in one large optimization is that the degree of nonconvexity increases, and a good or perfect match will be hard to construct. When the number of periods grows, this will trigger the need for more sophisticated solution procedures. In the example we produced a full match between the generated scenarios and the specifications. The tree was generated by solving the problem from new sets of starting values until a full match was obtained. The choice of starting values should in general reflect the specified distributions. In this implementation we generated starting values for each random variable by sampling from uniform distributions over an interval from -3 to +3 standard deviations, not taking the correlations into account.
This crude approach worked well for the given example, and has proved successful for other cases as well. We have tested problems of up to five periods and approximately 8,000 scenarios, and it appears that the sequential procedure with the simple restart heuristic leads to a full match if there are no inconsistencies in the specifications and the tree is large enough. With regard to solution times, the three-period tree of which two periods are shown in Figure 5 took 63 seconds to generate on a Sun Ultra Sparc 1. Each single-period tree takes less than a second to construct.

4. Critical Success Factors

This section discusses the critical issues for the success of the scenario generation method. In 4.1 we focus on how to specify the distribution properties, address possible pitfalls, and provide guidelines for how to avoid them. We also discuss how to find the minimal tree size that allows for a perfect match with different sets of specifications. In this context we also give some guidance with regard to the choice of defining the probabilities as variables or parameters.

Section 4.2 raises the crucial question of which distribution properties should be matched.

4.1. Pitfalls in the Specifications

In 3 we saw that some of the statistical properties in later periods are derived from the outcomes in earlier periods. We denote these derived specifications. The user should verify that the derived specifications are not implausible or contradictory. An example of the latter would be a conditional distribution in which one financial asset becomes first-order stochastically dominant over another, creating an opportunity for arbitrage in a later time period. Implicit specifications mean that some statistical properties are specified implicitly by other specifications. A simple example is the specification of means in one period being dependent on the outcome in the previous period, which implicitly specifies the correlation between periods. An implicit specification of a distribution property combined with a different explicit specification of the same property will most likely lead to inconsistent specifications. The method can handle such inconsistencies, but there will of course be a trade-off between them. However, the decision-maker might not be comfortable with such a contradiction. Verification of implicit specifications and an understanding of how all the specifications relate are essential to avoid this. Overspecification means that the specifications are too extensive relative to the size of the scenario tree. Obvious examples are to specify a skewness (different from zero) or a correlation (different from minus one, zero, or one) for a discrete two-point distribution with fixed probabilities. If the number of scenarios is large relative to the requirements of the specifications, the problem is underspecified. The resulting scenario tree might satisfy the specifications but still have undesirable characteristics.
Test examples have shown that in cases where the probabilities are defined as variables, underspecification leads to a solution where the extra degrees of freedom are used to produce zero-probability outcomes. If the probabilities are fixed in an underspecified model, we typically observe that the scenarios that are not needed obtain outcomes of the random variables very close to their means. This might cause problems if we wish to generate larger trees, and we shall see later in this section how to proceed in such cases. To avoid underspecification, the number of specifications must be balanced relative to the size of the tree. To understand when over- and underspecification occur, we analyze the relationship between the characteristics of the specifications and the number of outcomes necessary to obtain a perfect match. Consider a single-period problem where a continuous distribution of a single variable is to be approximated by a discrete distribution. It is known (see for example Miller and Rice 1983 for a discussion) that the first 2N − 1 moments, plus the requirement that the probabilities sum to one, can be matched with N points. In that case there are exactly as many variables (outcomes and probabilities) as there are constraints. The strength of this result lies in the fact that we get positive probabilities. As soon as we add extra requirements, this simple way of counting variables and constraints does not hold, but we may still use the idea of counting degrees of freedom to make a guess about the size of the tree. As an example, take a five-dimensional case and specify the first four moments for each variable plus all correlations. The number of specifications is 30 (5 times 4 central moments plus 10 correlations). The number of variables in our tree is (D + 1)y − 1, where D is the dimensionality of the problem (five in the example) and y is the number of outcomes; each outcome contributes D values and one probability, and the requirement that the probabilities sum to one removes one degree of freedom. The minimum y that leads to 30 variables or more is 6.
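The degree-of-freedom count above can be turned into a rule of thumb for the minimal tree size. A pure-Python sketch; `min_outcomes` is an illustrative name, not from the paper:

```python
from math import ceil, comb

def min_outcomes(D, n_specs):
    """Smallest y with (D + 1) * y - 1 >= n_specs free variables
    (D outcome values plus one probability per outcome, minus one for the
    probabilities-sum-to-one constraint)."""
    return ceil((n_specs + 1) / (D + 1))

# The five-dimensional example: four central moments per variable plus all
# pairwise correlations.
D = 5
n_specs = 4 * D + comb(D, 2)      # 20 moments + 10 correlations = 30
print(min_outcomes(D, n_specs))   # -> 6
```

For the single-variable moment-matching case, `min_outcomes(1, 3)` recovers the Miller-Rice count: 2N − 1 = 3 moments plus the probability-sum constraint are matched with N = 2 points.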
Although this rule of thumb may not always work, it seems to give a reasonable starting point for selecting the number of outcomes. In experiments working with power moments and correlations, the authors have only very rarely needed to go above this minimal tree size. Of course, if inconsistencies are present, a perfect match can never be achieved, and we must simply look for a reasonable tree, at least so large that we do not face overspecification on top of inconsistencies. We see that defining the probabilities as parameters increases the number of outcomes needed to obtain a perfect match. Increasing the number of discrete outcomes means that we capture more of the support of the random variables. Testing indicates

that whether the probabilities are defined as variables or parameters does not significantly influence the chance of obtaining a full match, despite the fact that the degree of nonconvexity increases when the probabilities are defined as variables as opposed to parameters.

4.2. Relevant Properties

In this section we show that the relevant statistical properties will depend on the characteristics of the problem at hand. For some problems, the relevant properties are easy to find, while for others they are harder. Our postulate is that if the relevant properties are captured, all scenario trees that possess these properties will lead to approximately the same objective function value when used in the decision model. With the help of this postulate, we provide guidance with respect to finding the relevant properties. As an example of a case where it is easy to determine the relevant properties, consider a single-period mean-variance model with no legal or policy constraints. The objective function can be written as a trade-off between the expected value and the variance:

    max λE(w) − (1 − λ)VAR(w),

where 0 ≤ λ ≤ 1 is a utility parameter determining the risk aversion and w is the uncertain wealth at the end of the planning period. For more details, see Markowitz (1959). For this model, all relevant properties are captured in the first two moments, and different scenario trees with the same mean and covariance matrix will lead to identical solutions. For the mean-variance model we knew in advance what the relevant properties were. For most decision problems, this is not the case. If we do not know the relevant properties, how do we find them? To illustrate, we apply a model that was developed for a Norwegian life insurance company and is currently in use for actual decision making. A detailed description of the model is given in Høyland and Wallace (1997). The objective is to maximize the risk-adjusted portfolio value at the end of the planning period.
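For the mean-variance objective above, the postulate can be checked directly: any two discrete scenario sets with the same mean and variance give the same objective value. A minimal pure-Python sketch with hypothetical numbers:

```python
# Two different discrete scenario sets with identical mean and variance give
# identical mean-variance objective values.  All numbers are hypothetical.

def objective(outcomes, probs, lam=0.5):
    """lam * E(w) - (1 - lam) * VAR(w) for a discrete distribution."""
    e = sum(p * w for p, w in zip(probs, outcomes))
    var = sum(p * (w - e) ** 2 for p, w in zip(probs, outcomes))
    return lam * e - (1 - lam) * var

tree_a = ([0.9, 1.1], [0.5, 0.5])    # symmetric: mean 1.0, variance 0.01
tree_b = ([1.2, 0.95], [0.2, 0.8])   # skewed:    mean 1.0, variance 0.01

for outcomes, probs in (tree_a, tree_b):
    e = sum(p * w for p, w in zip(probs, outcomes))
    v = sum(p * (w - e) ** 2 for p, w in zip(probs, outcomes))
    # each line: mean 1.0, variance 0.01, objective 0.495
    print(round(e, 10), round(v, 10), round(objective(outcomes, probs), 10))
```

The two trees have different supports and different third moments, yet the objective cannot distinguish them, which is exactly why the first two moments suffice here.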
For this analysis, a two-period (i.e., three-stage) version of that model is used, and, as in 3, four asset classes are introduced. To find the relevant properties for this problem, we generate many different scenario trees with the same statistical properties and check the stability in the objective function values. If the stability is not good enough, we add new properties, and check the stability again. If the result is still not good enough, we can either continue adding statistical properties or we can use a sampling approach. If we sample, several different small trees (with the same statistical properties) are constructed and aggregated into one large tree. In other words, we sample a number of small trees and combine them, while preserving the specified statistical properties, to create the large tree that is the input to the optimization. By doing this we reduce the noise in the statistical properties that are not specified. The following analyses show the effect of both adding statistical properties and sampling. We test three sets of specifications with different characteristics. In Set 1 the correlations and the first and second moments of the marginal distributions are specified. In addition to the specifications of the first set, Set 2 includes specifications of the third and fourth moments of the marginal distributions. Set 3 also includes worst-case outcomes. We generate two-period scenario trees as described in 3.2. The values of the statistical properties that are specified are the same in all three sets, and are given by Table 3 and by the specifications for the first two periods of Table 4. While Set 1 includes the first two moments of the specifications of Table 4, Set 3 includes all the specifications. We first generate scenario trees with 30 outcomes in the first period and 6 outcomes in the second, obtained by aggregating 5 small trees with 6 outcomes in each period.
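The aggregation step can be sketched as follows. This is illustrative pure Python, not the paper's implementation; `make_small_tree` here matches only the mean exactly, standing in for a generator that matches the full specification:

```python
import random

def make_small_tree(rng, mean=1.0, spread=0.1, n=6):
    """Hypothetical 6-outcome single-period tree whose weighted mean is
    matched exactly: draw raw outcomes, then shift so the mean is `mean`."""
    raw = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    shift = mean - spread * sum(raw) / n
    return [(1.0 / n, spread * r + shift) for r in raw]

def aggregate(trees):
    """Combine K small trees into one tree: concatenate their outcomes and
    scale all probabilities by 1/K.  Every probability-weighted property
    shared by all the small trees is preserved in the aggregate."""
    k = len(trees)
    return [(p / k, w) for tree in trees for p, w in tree]

rng = random.Random(0)
big = aggregate([make_small_tree(rng) for _ in range(5)])
print(len(big),
      round(sum(p for p, _ in big), 9),
      round(sum(p * w for p, w in big), 9))  # -> 30 1.0 1.0
```

The aggregate keeps the specified properties exactly while the unspecified properties, which differ from small tree to small tree, are averaged out: that averaging is the noise reduction referred to above.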
Table 5 shows the stability of the objective function value for the three sets of specifications, and we see that the stability improves as more statistical properties are added. The stability in Set 3 is improved by more than 50% relative to Set 1. We see that the specification of the third and fourth (marginal) moments has a large influence. To further show the effect of sampling, we increase the size of the scenario tree and check for stability again. This time we aggregate 25 small trees with 6 outcomes in each period to create a large tree with 150 outcomes in the first period and 6 outcomes

in the second period (i.e., 900 scenarios), using specification Set 2.

Table 5  Stability in the Objective Function Value for Three Different Sets of Specifications (Columns: E(x), STD(x), High(x), Low(x))

  Set 1: EV, STDEV, CORR
  Set 2: EV, STDEV, SKEW, KURT, CORR
  Set 3: EV, STDEV, SKEW, KURT, CORR, and worst-case outcomes

Note. For each specification set, 200 different scenario trees are generated, and 200 objective function values x_s of the decision model are obtained. The figures show the expected value and the standard deviation, in addition to the highest and lowest objective function value, for each specification set.

Solving the model for 60 different such scenario trees shows that the standard deviation in the objective value is further reduced relative to the Set 2 value reported in Table 5. For practical applications the sampling procedure has proved to be a good alternative to adding statistics to match. In particular, higher than second-order co-moments will be difficult for a decision-maker to quantify, and they will also complicate the construction of the scenario tree by making the problem much harder to solve. We have measured the quality of the scenario tree by the stability in the objective function value, not by the stability in the decision variables. Usually we will not require stability in the decisions. If the objective function value is flat with respect to changes in the decisions, i.e., many different decision structures are approximately equally good, we might not achieve stability in the solutions even though we have stability in the objective function value. However, we do not see this as a problem, but rather as a desirable characteristic of the decision problem. The model above, though, is stable also in the decisions with respect to different scenario trees; see Table 6.
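The explicitly specified properties — means, standard deviations, skewness, kurtosis, and correlations of the probability-weighted scenario set — are straightforward to compute, and it is these values that enter the distance measure against the specified targets. A minimal pure-Python sketch with hypothetical numbers:

```python
from math import sqrt

def central_moment(probs, xs, k, mu=None):
    """k-th probability-weighted central moment of a discrete distribution."""
    mu = sum(p * x for p, x in zip(probs, xs)) if mu is None else mu
    return sum(p * (x - mu) ** k for p, x in zip(probs, xs))

def properties(probs, xs):
    """Mean, standard deviation, skewness, and kurtosis (Sets 1-2)."""
    mu = sum(p * x for p, x in zip(probs, xs))
    std = sqrt(central_moment(probs, xs, 2, mu))
    skew = central_moment(probs, xs, 3, mu) / std ** 3
    kurt = central_moment(probs, xs, 4, mu) / std ** 4
    return mu, std, skew, kurt

def correlation(probs, xs, ys):
    mx = sum(p * x for p, x in zip(probs, xs))
    my = sum(p * y for p, y in zip(probs, ys))
    cov = sum(p * (x - mx) * (y - my) for p, x, y in zip(probs, xs, ys))
    return cov / (sqrt(central_moment(probs, xs, 2, mx))
                  * sqrt(central_moment(probs, ys, 2, my)))

# Hypothetical four-scenario returns for two asset classes.
probs = [0.25, 0.25, 0.25, 0.25]
stocks = [-0.10, 0.02, 0.05, 0.15]
bonds = [0.01, 0.03, 0.04, 0.06]
print(properties(probs, stocks))
print(correlation(probs, stocks, bonds))
```

The generation method minimizes some distance (e.g., a sum of squares) between these computed values and the specified targets.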
For the portfolio management model above, the relevant statistical properties seem to be captured by the first four central moments and the correlation matrix (all explicitly specified), together with the sampling procedure, which is applied to reduce the noise in the statistical properties that are not specified. It is hard to give a general characterization of relevant statistical properties and how to find them.

Table 6  Stability in the Decision Variables (Columns: Cash, Bonds, Domestic stocks, Foreign stocks)

  Average optimal portfolio
  Standard deviation in optimal allocation

Note. The model is solved for 60 different two-period scenario trees of 900 scenarios, generated by constructing and combining 25 small scenario trees with statistical properties as in Set 2.

As illustrated, the relevant properties depend on the objective function. With a quadratic utility function, the means and the variance/covariance matrix accurately determine the objective function. However, the relevant statistical properties will also depend on legal regulations, the business environment, and self-proclaimed policy restrictions, i.e., on the constraints in the model. For instance, a portfolio management problem with a quadratic utility function in the presence of capital adequacy constraints, solved for two different scenario trees with the same mean and variance/covariance but different third and fourth moments, might lead to different optimal solutions.

5. Conclusions

In models of decision making under uncertainty it is essential to represent the uncertainties in a form suitable for analysis. We have illustrated a method that generates a limited number of discrete scenarios that satisfy prespecified statistical properties.
A single-period and a three-period scenario tree were generated, and the results illustrate the strengths of the method: The decision-maker was allowed to specify whatever distribution properties were found relevant, and a limited number of scenarios, with distribution properties that were consistent with the specifications, were generated. The user should be aware of the possible pitfalls when specifying the statistical properties. We have drawn attention to derived, implicit, over-, and underspecifications, and discussed how to avoid these pitfalls. Further, we gave a simple formula as guidance for finding the smallest number of outcomes needed to obtain a perfect match. The crucial choice when applying the method is what distribution properties to match, and we

showed that this choice is problem dependent. We argued that these properties could be determined ex ante for some models, while for others they are harder or impossible to prescribe. The postulate we used to find the relevant properties in the difficult cases is that if different scenario trees, all possessing the same statistical properties, lead to the same objective function value in the decision model, then the relevant properties are captured in these trees. The paper also analyzed a multistage portfolio management model. When solving the model for different scenario trees and matching the first four central moments and correlations, the stability in the objective function value was reasonably good. However, we showed that the stability was further improved by solving for larger scenario trees generated by aggregating many different small scenario trees. Hence, a combination of scenario construction and tree aggregation leads to the best results.

Acknowledgments

This research is partly supported by Gjensidige Forsikring. The authors want to thank Erik Ranberg, Sjur Westgaard, and Tore Andre Lysebo from Gjensidige for valuable discussions and contributions. Further, we want to acknowledge Amund Skavhaug at the Norwegian University of Science and Technology for his help with the implementation and the debugging. Francesco A. Rossi at Università di Verona deserves our gratitude for reading and commenting on an early version of the paper. We are also indebted to the associate editor and two referees.

References

Billio, M., L. Pelizzon. Pricing options with switching volatility. Working paper 97.07, Department of Economics, University of Venice, Venice, Italy.
Cariño, D.R., T. Kent, D.H. Myers, S. Stacy, M. Sylvanus, A.L. Turner, K. Watanabe, W.T. Ziemba. The Russell-Yasuda Kasai model: An asset/liability model for a Japanese insurance company using multistage stochastic programming. Interfaces.
Cariño, D.R., D.H. Myers, W.T. Ziemba. Concepts, technical issues, and uses of the Russell-Yasuda Kasai financial planning model. Oper. Res.
Cariño, D.R., W.T. Ziemba. Formulation of the Russell-Yasuda Kasai financial planning model. Oper. Res.
Chen, Z., G. Consigli, M.A.H. Dempster, N. Hicks-Pedrón. Towards sequential sampling algorithms for dynamic portfolio management. Working paper 01/97, Finance Research Group, Judge Institute of Management Studies, University of Cambridge, UK.
Consigli, G., M.A.H. Dempster. Dynamic stochastic programming for asset-liability management. Working paper 04/96, Finance Research Group, Judge Institute of Management Studies, University of Cambridge, UK.
Dupačová, J. Scenario-based stochastic programs: Resistance with respect to sample. Ann. Oper. Res.
Higle, J.L., S. Sen. Stochastic decomposition: An algorithm for two-stage stochastic programs with recourse. Math. Oper. Res.
Horst, R., H. Tuy. Global Optimization. Springer-Verlag, Berlin, Germany.
Høyland, K. Asset liability management for a life insurance company: A stochastic programming approach. Ph.D. dissertation, Norwegian University of Science and Technology, Trondheim, Norway.
Høyland, K., S.W. Wallace. Analyzing the legal regulations in the Norwegian life insurance business using a multistage asset liability management model. Eur. J. Oper. Res. To appear.
Infanger, G. Planning Under Uncertainty: Solving Large-Scale Stochastic Linear Programs. Boyd and Fraser Publishing Company, MA.
Keefer, D.L. Certainty equivalents for three-point discrete-distribution approximations. Management Sci.
Keefer, D.L., S.E. Bodily. Three-point approximations for continuous random variables. Management Sci.
Markowitz, H.M. 1959. Portfolio Selection: Efficient Diversification of Investment. Yale University Press, New Haven, CT.
Merkhofer, M.W. Quantifying judgmental uncertainty: Methodology, experiences and insights. IEEE Trans. Systems, Man, and Cybernetics.
Miller, A.C., T.R. Rice. 1983. Discrete approximations of probability distributions. Management Sci.
Smith, J.E. Moment methods for decision analysis. Management Sci.
Spetzler, C.S., C.-A.S. Staël von Holstein. Probability encoding in decision analysis. Management Sci.
Wang, J. Statistical inference for stochastic programming. Reports of the Institute of Mathematics 95.5, Nanjing University, Nanjing, People's Republic of China.
Zenios, S. Asset/liability management under uncertainty for fixed-income securities. Ann. Oper. Res.
Zenios, S., M.S. Shtilman. Constructing optimal samples from a binomial lattice. J. Inform. Optim. Sci.

Accepted by Robert Nau; received May. This paper was with the authors 26 months for 4 revisions.


FIN 683 Financial Institutions Management Market Risk

FIN 683 Financial Institutions Management Market Risk Professor Robert B.H. Hauswald Kogod School of Business, AU Overview Market risk: essentially a price risk a FI might need to liquidate a position

Cost Models for Vehicle Routing Problems. 8850 Stanford Boulevard, Suite 260 R. H. Smith School of Business

0-7695-1435-9/02 \$17.00 (c) 2002 IEEE 1 Cost Models for Vehicle Routing Problems John Sniezek Lawerence Bodin RouteSmart Technologies Decision and Information Technologies 8850 Stanford Boulevard, Suite

Is the Forward Exchange Rate a Useful Indicator of the Future Exchange Rate?

Is the Forward Exchange Rate a Useful Indicator of the Future Exchange Rate? Emily Polito, Trinity College In the past two decades, there have been many empirical studies both in support of and opposing

Notes for STA 437/1005 Methods for Multivariate Data

Notes for STA 437/1005 Methods for Multivariate Data Radford M. Neal, 26 November 2010 Random Vectors Notation: Let X be a random vector with p elements, so that X = [X 1,..., X p ], where denotes transpose.

Marketing Mix Modelling and Big Data P. M Cain

1) Introduction Marketing Mix Modelling and Big Data P. M Cain Big data is generally defined in terms of the volume and variety of structured and unstructured information. Whereas structured data is stored

No-Arbitrage Condition of Option Implied Volatility and Bandwidth Selection

Kamla-Raj 2014 Anthropologist, 17(3): 751-755 (2014) No-Arbitrage Condition of Option Implied Volatility and Bandwidth Selection Milos Kopa 1 and Tomas Tichy 2 1 Institute of Information Theory and Automation

NOTES ON THE BANK OF ENGLAND OPTION-IMPLIED PROBABILITY DENSITY FUNCTIONS

1 NOTES ON THE BANK OF ENGLAND OPTION-IMPLIED PROBABILITY DENSITY FUNCTIONS Options are contracts used to insure against or speculate/take a view on uncertainty about the future prices of a wide range

A New Quantitative Behavioral Model for Financial Prediction

2011 3rd International Conference on Information and Financial Engineering IPEDR vol.12 (2011) (2011) IACSIT Press, Singapore A New Quantitative Behavioral Model for Financial Prediction Thimmaraya Ramesh

A MANAGER S ROADMAP GUIDE FOR LATERAL TRANS-SHIPMENT IN SUPPLY CHAIN INVENTORY MANAGEMENT

A MANAGER S ROADMAP GUIDE FOR LATERAL TRANS-SHIPMENT IN SUPPLY CHAIN INVENTORY MANAGEMENT By implementing the proposed five decision rules for lateral trans-shipment decision support, professional inventory

http://www.jstor.org This content downloaded on Tue, 19 Feb 2013 17:28:43 PM All use subject to JSTOR Terms and Conditions

A Significance Test for Time Series Analysis Author(s): W. Allen Wallis and Geoffrey H. Moore Reviewed work(s): Source: Journal of the American Statistical Association, Vol. 36, No. 215 (Sep., 1941), pp.

Current Accounts in Open Economies Obstfeld and Rogoff, Chapter 2

Current Accounts in Open Economies Obstfeld and Rogoff, Chapter 2 1 Consumption with many periods 1.1 Finite horizon of T Optimization problem maximize U t = u (c t ) + β (c t+1 ) + β 2 u (c t+2 ) +...

A Review of Cross Sectional Regression for Financial Data You should already know this material from previous study

A Review of Cross Sectional Regression for Financial Data You should already know this material from previous study But I will offer a review, with a focus on issues which arise in finance 1 TYPES OF FINANCIAL

BINOMIAL OPTIONS PRICING MODEL. Mark Ioffe. Abstract

BINOMIAL OPTIONS PRICING MODEL Mark Ioffe Abstract Binomial option pricing model is a widespread numerical method of calculating price of American options. In terms of applied mathematics this is simple

Stochastic Analysis of Long-Term Multiple-Decrement Contracts

Stochastic Analysis of Long-Term Multiple-Decrement Contracts Matthew Clark, FSA, MAAA, and Chad Runchey, FSA, MAAA Ernst & Young LLP Published in the July 2008 issue of the Actuarial Practice Forum Copyright

Algorithmic Trading Session 1 Introduction. Oliver Steinki, CFA, FRM

Algorithmic Trading Session 1 Introduction Oliver Steinki, CFA, FRM Outline An Introduction to Algorithmic Trading Definition, Research Areas, Relevance and Applications General Trading Overview Goals

A Brief Study of the Nurse Scheduling Problem (NSP)

A Brief Study of the Nurse Scheduling Problem (NSP) Lizzy Augustine, Morgan Faer, Andreas Kavountzis, Reema Patel Submitted Tuesday December 15, 2009 0. Introduction and Background Our interest in the

Example 1: Calculate and compare RiskMetrics TM and Historical Standard Deviation Compare the weights of the volatility parameter using,, and.

3.6 Compare and contrast different parametric and non-parametric approaches for estimating conditional volatility. 3.7 Calculate conditional volatility using parametric and non-parametric approaches. Parametric

Accurately and Efficiently Measuring Individual Account Credit Risk On Existing Portfolios

Accurately and Efficiently Measuring Individual Account Credit Risk On Existing Portfolios By: Michael Banasiak & By: Daniel Tantum, Ph.D. What Are Statistical Based Behavior Scoring Models And How Are

STRATEGIC CAPACITY PLANNING USING STOCK CONTROL MODEL

Session 6. Applications of Mathematical Methods to Logistics and Business Proceedings of the 9th International Conference Reliability and Statistics in Transportation and Communication (RelStat 09), 21

Chapter 4 DECISION ANALYSIS

ASW/QMB-Ch.04 3/8/01 10:35 AM Page 96 Chapter 4 DECISION ANALYSIS CONTENTS 4.1 PROBLEM FORMULATION Influence Diagrams Payoff Tables Decision Trees 4.2 DECISION MAKING WITHOUT PROBABILITIES Optimistic Approach

Hedging Illiquid FX Options: An Empirical Analysis of Alternative Hedging Strategies

Hedging Illiquid FX Options: An Empirical Analysis of Alternative Hedging Strategies Drazen Pesjak Supervised by A.A. Tsvetkov 1, D. Posthuma 2 and S.A. Borovkova 3 MSc. Thesis Finance HONOURS TRACK Quantitative

Combining decision analysis and portfolio management to improve project selection in the exploration and production firm

Journal of Petroleum Science and Engineering 44 (2004) 55 65 www.elsevier.com/locate/petrol Combining decision analysis and portfolio management to improve project selection in the exploration and production

Decision Theory. 36.1 Rational prospecting

36 Decision Theory Decision theory is trivial, apart from computational details (just like playing chess!). You have a choice of various actions, a. The world may be in one of many states x; which one

Reducing Active Return Variance by Increasing Betting Frequency

Reducing Active Return Variance by Increasing Betting Frequency Newfound Research LLC February 2014 For more information about Newfound Research call us at +1-617-531-9773, visit us at www.thinknewfound.com

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE

ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE YUAN TIAN This synopsis is designed merely for keep a record of the materials covered in lectures. Please refer to your own lecture notes for all proofs.

Review of Random Variables

Chapter 1 Review of Random Variables Updated: January 16, 2015 This chapter reviews basic probability concepts that are necessary for the modeling and statistical analysis of financial data. 1.1 Random

: Table of Contents... 1 Overview of Model... 1 Dispersion... 2 Parameterization... 3 Sigma-Restricted Model... 3 Overparameterized Model... 4 Reference Coding... 4 Model Summary (Summary Tab)... 5 Summary

Lecture 1: Asset Allocation

Lecture 1: Asset Allocation Investments FIN460-Papanikolaou Asset Allocation I 1/ 62 Overview 1. Introduction 2. Investor s Risk Tolerance 3. Allocating Capital Between a Risky and riskless asset 4. Allocating

Optimal Consumption with Stochastic Income: Deviations from Certainty Equivalence

Optimal Consumption with Stochastic Income: Deviations from Certainty Equivalence Zeldes, QJE 1989 Background (Not in Paper) Income Uncertainty dates back to even earlier years, with the seminal work of