Robust Mean-Covariance Solutions for Stochastic Optimization


OPERATIONS RESEARCH, Vol. 00, No. 0, Xxxxx 0000, pp. 000-000, © 0000 INFORMS

Robust Mean-Covariance Solutions for Stochastic Optimization

Ioana Popescu, INSEAD, Decision Sciences Area, Blvd. de Constance, Fontainebleau, France.

We provide a method for deriving robust solutions to certain stochastic optimization problems, based on mean-covariance information about the distributions underlying the uncertain vector of returns. We prove that for a general class of objective functions, the robust solutions amount to solving a certain deterministic parametric quadratic program. We first prove a general projection property for multivariate distributions with given means and covariances, which reduces our problem to optimizing a univariate mean-variance robust objective. This allows us to use known univariate results in the multidimensional setting, and to add new results in this direction. In particular, we characterize a general class of objective functions (so-called one- or two-point support functions) for which the robust objective reduces to a deterministic optimization problem in one variable. Finally, we adapt a result from Geoffrion (1967a) to reduce the main problem to a parametric quadratic program. In particular, our results hold for increasing concave utilities with convex or concave-convex derivatives. Closed-form solutions are obtained for special discontinuous criteria, motivated by bonus- and commission-based incentive schemes for portfolio management. We also investigate a multiproduct pricing application, which motivates extensions of our results to the case of nonnegative and decision-dependent returns.

Subject classifications: programming: stochastic, quadratic; decision analysis: risk; finance: portfolio.
Area of review: Decision Analysis.
History: Received January 2004; accepted December
1. Introduction

In this paper we provide a general approach for deriving robust solutions for stochastic optimization programs, based on mean-covariance information about the distributions underlying the return vector. Consider a decision maker interested in maximizing his expected utility u of a linear reward function, subject to a given set of convex constraints P on the vector of decision variables x. The decision maker solves the following general stochastic mathematical program without recourse:

(P)  maximize E[u(x^T R)]  subject to x ∈ P.

Throughout the paper, random variables and random vectors are denoted in bold, to distinguish them from deterministic quantities. We use max and min in the wide sense of sup and inf, to account for the cases when the optimum is not achieved. When the distribution of the uncertain return vector R is exactly known, Problem (P) is a general one-stage stochastic program. One obvious case when the solution of Problem (P) does not depend on the actual distribution of R, except through its mean and covariance, is that of quadratic utilities u(r) = ar^2 + br + c, in which case it amounts to:

max_{x ∈ P}  a x^T (Σ + µµ^T) x + b x^T µ + c.

However, this is only the case for quadratic (or linear) utility functions, which explains in part why these classes are particularly favored in economics and financial models. In many applications of stochastic programming, there is considerable uncertainty regarding the distribution of R. In many instances, only partial information about this distribution is available or reliable, such as means, variances and covariances. A common approach in the literature is to assume that the underlying distribution is multivariate normal, and solve the corresponding stochastic program. However, this assumption is hard to justify unless the randomness comes from a central-limit-theorem type of process.
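For quadratic utilities, the reduction above is easy to check numerically: the expected utility computed from (µ, Σ) alone matches a Monte Carlo estimate under any distribution with those moments. A small sketch with made-up data (the utility coefficients, µ, Σ and the portfolio x are arbitrary):

```python
import numpy as np

# Quadratic utility u(r) = a r^2 + b r + c: E[u(x^T R)] depends on the
# distribution of R only through (mu, Sigma), since
# E[(x^T R)^2] = x^T (Sigma + mu mu^T) x.
a, b, c = -0.5, 1.0, 0.2          # arbitrary utility coefficients
mu = np.array([0.05, 0.10])       # made-up mean returns
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])  # made-up covariance
x = np.array([0.6, 0.4])          # a fixed portfolio

exact = a * x @ (Sigma + np.outer(mu, mu)) @ x + b * x @ mu + c

# Monte Carlo check under one particular distribution with these moments
rng = np.random.default_rng(0)
R = rng.multivariate_normal(mu, Sigma, size=400_000)
mc = np.mean(a * (R @ x) ** 2 + b * (R @ x) + c)

assert abs(exact - mc) < 2e-3
```

The same `exact` value would be reproduced by sampling from any other distribution with mean µ and covariance Σ, which is precisely why quadratic utilities need no robust treatment.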
Alternatively, the robust approach consists in basing the decision strictly on the available information, and maximizing the worst-case expected utility over all possible distributions with the given properties. Even if the underlying distribution is actually known, the robust approach can be used to provide tractable approximations for the solution of Problem (P), which is typically a very difficult problem. The robust problem is the focus of our investigation, and can be formulated as follows:

(G)  max_{x ∈ P} min_{R ∼ (µ,Σ)} E[u(x^T R)],  (1)

where we write R ∼ (µ, Σ) to represent the fact that the distribution of R belongs to the class M^n_{(µ,Σ)} of n-variate distributions with mean µ and covariance Σ. By a simple change of variable
ũ = −u, we obtain the min-max version of Problem (G). All the results in this paper can be rephrased to hold for min-max problems, so for simplicity we restrict the analysis to the max-min case. As an example, consider a portfolio manager interested in choosing a portfolio x among n assets with returns R = (R_1, ..., R_n) in order to maximize the expected value or utility of his payoff (e.g. induced by his incentive system). The optimization is subject to a set of portfolio constraints, such as P = {x ∈ [0,1]^n : x^T 1_n = 1}. In practice, one seldom has full information about the joint distribution of asset returns, which are typically assumed multivariate normal and interpolated from very few data points (hence not robust). Very often the only reliable data consist of first- and second-order moments. For example, RiskMetrics provides very large data sets of asset covariances, but no additional distributional information. In this case, if the manager has access only to the mean and covariance matrix of the asset returns, and not to the entire distribution, he may be interested in the best- and worst-case return of a given portfolio, and in the portfolios that would maximize such a worst-case return. For instance, a risk-neutral manager incentivized by a bonus for performance above a given target return will maximize the worst-case probability of reaching that target level. There is a considerable amount of literature on robust portfolio optimization; see for example Goldfarb and Iyengar (2003), Rustem et al. (2000), El Ghaoui et al. (2003). We contribute to this stream by studying the portfolio example from the manager's perspective in Section 5. One can think of Problem (G) as a strategic game, where the first player chooses a decision x while nature picks the worst possible distribution R (see e.g. Dupačová 1966).
The robust approach adopted in this paper is very much in the spirit of the max-min expected utility framework axiomatized by Gilboa and Schmeidler (1989). The concept of robustness, or incomplete knowledge about distributions, is captured in various ways in the stochastic optimization literature, the most common ones being uncertainty sets (Ben-Tal and Nemirovski 2002) and moments of the underlying distribution. Our model falls in the second category, and is related in that sense to a large literature including Scarf (1958), Birge and Wets (1987), Birge and Dulá (1991), Kall (1988), Dupačová (2001), Shapiro and Kleywegt (2002). For a given vector x, we refer to the problem

(I)  U(x) = min_{R ∼ (µ,Σ)} E[u(x^T R)]  (2)

as the inner optimization problem and to the function U(x) as the robust objective (or criterion). There are three main steps in our solution to Problem (G).
1. First, we transform the inner optimization problem into an equivalent problem over univariate distributions with a given mean and variance. This reduces the original problem to a bicriteria optimization problem.
2. Second, we show that for some general classes of objective functions, this bicriteria objective reduces to a deterministic optimization problem in one variable.
3. Finally, based on these results, we provide conditions to reduce the original problem to solving a parametric quadratic program.

The first key result of our derivations is a new projection property for classes of multivariate distributions R with given mean vector µ and covariance matrix Σ. Based on this result, we show in Proposition 1 that for any vector x, Problem (I) reduces to an optimization problem over the class of projected univariate distributions r_x with given mean µ_x = x^T µ and variance σ_x^2 = x^T Σ x, that is:

U(x) = min_{R ∼ (µ,Σ)} E[u(x^T R)] = min_{r_x ∼ (µ_x, σ_x^2)} E[u(r_x)].

The above problem is further reduced (Proposition 2) to optimizing over a restricted class of distributions with at most three support points. The robust objective U(x) can thus be cast as a deterministic optimization problem in at most three variables. This procedure reduces the robust stochastic optimization problem (G) to a deterministic bicriteria problem that only depends on the mean and variance of the random variable x^T R. The results of Section 2 rely on the classical theory of moments (see e.g. Karlin and Studden 1966, Rogosinsky 1958) and require no assumptions on the objective function u. Section 3 provides conditions on the objective function u to express the robust objective U as an optimization problem in one variable. This holds for so-called one- or two-point support functions u (see Definitions 1 and 3). Such functions are supported from below by a quadratic and intersect it at exactly one or two points.
Special cases include concave utilities with convex, concave or concave-convex derivatives, in particular corresponding to risk-averse and prudent/imprudent preferences, as well as some target-based piecewise polynomials. The resulting second-order moment bounds are refinements of the Jensen and Edmundson-Madansky inequalities (see Dulá and Murthy 1992, Dokov and Morton 2002) and extend results of Krein and Nudelman (1977) and Birge and Dulá (1991) to distributions with unbounded support. These results are also useful for sensitivity analysis in many decision sciences applications (see e.g. Smith 1995), in particular in stochastic programming when the underlying distribution is actually known, but difficult to work with (Birge and Wets 1987).
In Section 4, we use results from Geoffrion (1967a) to show (Proposition 9) that for some general classes of objective functions, the robust Problem (G) over a convex polyhedral set P is equivalent to solving the following parametric quadratic program (PQP):

(P_λ)  max_{x ∈ P}  λ x^T µ − (1 − λ) x^T Σ x.  (3)

In particular, this is true for increasing and concave utilities u that satisfy the one- or two-point support property characterized in Section 3, which includes utilities with convex or concave-convex derivative. Section 5 illustrates our results with an application in the context of portfolio management incentives. We provide optimal robust portfolios corresponding to various (non-continuous, non-differentiable) reward systems for portfolio managers, involving bonuses, commissions and combinations thereof, as well as non-expectation criteria such as value at risk. Our results show that the resulting robust portfolios are mean-variance efficient and solvable as PQPs. We obtain new results, as well as new proofs and insights for known technical results. In particular, we find that under bonus systems, a manager who assumes joint normal return distributions chooses the same portfolio as the most pessimistic manager with the same given mean-covariance information. In Section 6 we propose two extensions of our results motivated by an application in multiproduct pricing. First, we show that our general model and solution naturally extend to the case when the distribution of R (its mean and covariance) depends on the decision variable x. Second, we attempt to extend our framework to the case of nonnegative random returns. While we prove that bicriteria solutions are generally not optimal in this setting, we provide a method to compute (weak) bicriteria bounds on the corresponding robust problems, and show how they can be solved using parametric quadratic programming. Our conclusions are summarized in the last section.
2. The Robust Objective

In this section we focus on Problem (I): for a nonzero vector x, we consider the problem U(x) = min_{R ∼ (µ,Σ)} E[u(x^T R)]. We show that this problem can be formulated as a bicriteria objective, which can be cast as a deterministic optimization problem in (at most) three variables. The results of this section hold without any assumptions on the objective u.

2.1. A General Projection Property

A lower bound on the robust objective U(x) can be obtained by optimizing over univariate distributions with mean µ_x and variance σ_x^2:

U(x) = min_{R ∼ (µ,Σ)} E[u(x^T R)] ≥ min_{r ∼ (µ_x, σ_x^2)} E[u(r)].  (4)
This is because for any feasible random vector R ∼ (µ, Σ) on the left-hand side, the corresponding projected random variable r_x = x^T R has mean x^T µ = µ_x and variance x^T Σ x = σ_x^2, hence r_x is feasible on the right-hand side. Our first key result is that, irrespective of the objective function u, relation (4) above holds with equality.

Proposition 1. For any objective function u and any given real vector x, we have that:

min_{R ∼ (µ,Σ)} E[u(x^T R)] = min_{r ∼ (µ_x, σ_x^2)} E[u(r)].  (5)

The central idea behind Proposition 1 is that for any nonzero vector x, the optimization over random vectors R ∼ (µ, Σ) is equivalent to solving the corresponding projected problem over the class of univariate distributions with mean µ_x and variance σ_x^2. We provide a constructive proof for the following stronger result: for any nonzero vector x and any distribution r ∼ (µ_x, σ_x^2), there exists a multivariate distribution R ∼ (µ, Σ) such that r = x^T R. Recall the notation M^n_{(µ,Σ)} for the set of probability measures on R^n with mean µ and covariance Σ (with the default superscript n = 1). Proposition 1 is a direct corollary of the following stronger result, proved in the Appendix:

Theorem 1 (General Projection Property). For any given n-vector µ, (n × n) matrix Σ ⪰ 0 and any nonzero n-vector x, the x-projection mapping r = x^T R from n-variate distributions R ∼ (µ, Σ) to univariate distributions r ∼ (µ_x, σ_x^2) is onto, meaning that every distribution r in M_{(µ_x, σ_x^2)} can be obtained from some distribution R in M^n_{(µ,Σ)} by r = x^T R.

2.2. Equivalent Deterministic Formulation

Based on Proposition 1, the general robust stochastic optimization problem (G) amounts to maximizing U(x) = min_{r ∼ (µ_x, σ_x^2)} E[u(r)] over the domain x ∈ P. This is a special case of a univariate moment problem, where only the first- and second-order moments (mean and variance) of the distribution are known.
In this section we show that such problems can be reduced to deterministic optimization problems in three variables. A classic result in the theory of moments, apparently due to Rogosinsky (1958, Theorem 1), is that moment problems with n moment constraints can be solved by optimizing only over discrete distributions with at most n + 1 support points (see Smith 1995 for a more accessible presentation). In our case, this result implies that the robust objective reduces to optimizing over (µ_x, σ_x^2)-distributions with at most three support points. We obtain the following deterministic formulation in three variables:
Proposition 2. For any objective u and x ≠ 0, the robust objective U can be calculated as:

U(x) = min  p_a u(a) + p_b u(b) + p_c u(c)
s.t.  a ≤ µ_x − σ_x^2/(c − µ_x) ≤ b ≤ µ_x + σ_x^2/(µ_x − a) ≤ c,  a ≤ µ_x ≤ c,  (6)

where

p_a = (σ_x^2 + (µ_x − b)(µ_x − c)) / ((a − b)(a − c))  if a < b,  and  p_a = 1 − (σ_x^2 + (µ_x − a)^2)/(c − a)^2  if a = b;
p_c = (σ_x^2 + (µ_x − a)(µ_x − b)) / ((c − a)(c − b))  if b < c,  and  p_c = 1 − (σ_x^2 + (µ_x − c)^2)/(c − a)^2  if b = c;
p_b = 1 − p_a − p_c.

Proof: It remains to show that if a distribution with at most three support points satisfies the (µ_x, σ_x^2)-moment constraints, then it is of the above form. Note that two-point distributions with mean µ_x and variance σ_x^2 assign mass p to the point µ_x + √((1 − p)/p) σ_x and mass 1 − p to the point µ_x − √(p/(1 − p)) σ_x, with p ∈ (0, 1). When a = b or b = c above, we obtain precisely two-point distributions of this form. Suppose now that a < b < c. The expressions for the probabilities are equivalent to the mean-variance conditions. In order for these to be nonnegative, it is necessary and sufficient that

a ≤ µ_x − σ_x^2/(c − µ_x) ≤ b ≤ µ_x + σ_x^2/(µ_x − a) ≤ c.

In particular, optimizing only over feasible two-point distributions in problem (6), we obtain the following upper bound on U(x):

U(x) ≤ min_{p ∈ (0,1)}  p u(µ_x + √((1 − p)/p) σ_x) + (1 − p) u(µ_x − √(p/(1 − p)) σ_x).  (7)

3. Simplified Formulation of the Robust Objective

This section investigates the case when the above upper bound is sharp. We provide conditions on the objective u for the optimum in Problem (I) to be achieved exactly or asymptotically by two-point distributions. This holds for a very general class of functions u (so-called one- or two-point support functions), including all monotone convex functions and all concave functions with convex, concave or concave-convex derivative, as well as some important piecewise linear classes. These results are important because they allow us to reduce the robust objective to an optimization problem in one variable.
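The probability formulas in Proposition 2 can be sanity-checked directly: for admissible support points a < b < c, the resulting three-point distribution reproduces the prescribed mean and variance. A small sketch (the numbers are arbitrary but chosen to satisfy the feasibility conditions of (6)):

```python
def three_point_probs(a, b, c, mu, var):
    """Probabilities of the (mu, var) three-point distribution on {a, b, c}
    from Proposition 2, for the case a < b < c."""
    pa = (var + (mu - b) * (mu - c)) / ((a - b) * (a - c))
    pc = (var + (mu - a) * (mu - b)) / ((c - a) * (c - b))
    pb = 1.0 - pa - pc
    return pa, pb, pc

# arbitrary admissible data: mean 0, variance 1, support {-2, 0.4, 3}
mu, var = 0.0, 1.0
a, b, c = -2.0, 0.4, 3.0
pa, pb, pc = three_point_probs(a, b, c, mu, var)

assert min(pa, pb, pc) >= 0 and abs(pa + pb + pc - 1) < 1e-12
mean = pa * a + pb * b + pc * c
second = pa * a**2 + pb * b**2 + pc * c**2
assert abs(mean - mu) < 1e-12                 # first moment recovered
assert abs(second - (var + mu**2)) < 1e-12    # second moment recovered
```

Evaluating p_a u(a) + p_b u(b) + p_c u(c) over a grid of admissible triples (a, b, c) then gives a numerical handle on U(x) for any objective u.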
Moreover, in certain cases, closed-form formulations of the robust objective can be obtained based on these results, as illustrated in Section 5. Finally, in the next section, we show that optimizing Problem (G) is relatively easy for one- and two-point support objectives. The key element of this section is the family

Q = {(A, B, C) : q(y) = Ay^2 + By + C ≤ u(y) for all y ∈ R}

corresponding to the coefficients of quadratic functions dominated by u. Clearly, E[u(r)] ≥ E[q(r)] = A(µ_x^2 + σ_x^2) + Bµ_x + C for any random variable r ∼ (µ_x, σ_x^2), implying:
Proposition 3. We have

U(x) = min_{r ∼ (µ_x, σ_x^2)} E[u(r)] ≥ max_{(A,B,C) ∈ Q} A(µ_x^2 + σ_x^2) + Bµ_x + C.

Strong duality for moment problems (see e.g. Karlin and Studden 1966) ensures that under certain regularity conditions (in our case amounting to σ_x > 0) equality holds in Proposition 3. We mention this result for its general interest; however, it will not be necessary for our work.

3.1. Two-Point Support Objectives

Consider (A_0, B_0, C_0) ∈ Q such that q_0(y) = A_0 y^2 + B_0 y + C_0 intersects u at two points a < b. Suppose that there exists a distribution r ∼ (µ_x, σ_x^2) with support {a, b}. It follows that E[u(r)] = E[q_0(r)] = A_0(µ_x^2 + σ_x^2) + B_0 µ_x + C_0. From Proposition 3, this must equal U(x). So in this case U(x) can be obtained by minimizing only over feasible two-point distributions, i.e. equality holds in (7). We formalize this result as follows:

Definition 1. A function u satisfies the two-point support property with respect to (µ, σ^2) if there exists a quadratic function that supports u from below and intersects it at two points a, b such that a feasible (µ, σ^2) distribution with support {a, b} exists. A function u satisfies the two-point support property if it satisfies it with respect to any (µ, σ^2).

Proposition 4. If u has two-point support, then the robust objective equals:

U(x) = min_{p ∈ (0,1)} U(p, x),  where  U(p, x) = p u(µ_x + √((1 − p)/p) σ_x) + (1 − p) u(µ_x − √(p/(1 − p)) σ_x).  (8)

A similar result for a special piecewise linear objective and nonnegative distributions has been derived by Scarf (1958) in the context of an inventory problem. Birge and Dulá (1991) use two-point support functions to obtain the above result over a bounded domain. Note that a function can have two-point support on an interval, but not on the whole real line, e.g. if a support point lies on the boundary. This can be seen from the following example:

Example 1.
Let u(y) = 1_{(µ−kσ, µ+kσ)}(y), where 1_S denotes the indicator function of a set S. We have that

min_{r ∼ (µ, σ^2)} E[u(r)] = min_{r ∼ (µ, σ^2)} P(|r − µ| < kσ) = max(0, 1 − 1/k^2).

This is the two-sided Chebyshev inequality, achieved by the three-point distribution with mass 1/(2k^2), 1 − 1/k^2, and 1/(2k^2) at the points µ − kσ, µ, and µ + kσ, respectively. Note that minimizing over two-point distributions yields a strictly larger bound, namely max(1/(1 + k^2), k^2/(1 + k^2)). In light of Proposition 4, u cannot have two-point support. On the other hand, the same objective function, defined say on [µ − kσ, ∞), has two-point support for k ≥ 1.
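The three-point Chebyshev distribution in Example 1 is easy to verify numerically: it has the prescribed mean and variance, and it places exactly 1 − 1/k^2 of its mass strictly inside (µ − kσ, µ + kσ). A quick check with arbitrary µ, σ, k:

```python
def chebyshev_three_point(mu, sigma, k):
    """Worst-case (mu, sigma^2) distribution for P(|r - mu| < k*sigma), k > 1:
    mass 1/(2k^2) at mu -/+ k*sigma and mass 1 - 1/k^2 at mu."""
    points = [mu - k * sigma, mu, mu + k * sigma]
    probs = [1 / (2 * k**2), 1 - 1 / k**2, 1 / (2 * k**2)]
    return points, probs

mu, sigma, k = 0.1, 0.3, 2.0      # arbitrary parameters, k > 1
pts, pr = chebyshev_three_point(mu, sigma, k)

mean = sum(p * y for p, y in zip(pr, pts))
var = sum(p * (y - mean) ** 2 for p, y in zip(pr, pts))
inside = sum(p for p, y in zip(pr, pts) if abs(y - mu) < k * sigma)

assert abs(mean - mu) < 1e-12
assert abs(var - sigma**2) < 1e-12
assert abs(inside - (1 - 1 / k**2)) < 1e-12   # attains the Chebyshev bound
```

The outer points sit at distance exactly kσ from µ, so they fall outside the open interval, which is why the bound is attained with a three-point (not a two-point) distribution.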
Denote the two support points in relation (8) by a = µ_x − √(p/(1 − p)) σ_x and b = µ_x + √((1 − p)/p) σ_x (so a < b unless x = 0). For differentiable functions u we obtain:

U_p(p, x) = u(b) − u(a) − (b − a) (u′(a) + u′(b))/2.  (9)

The first-order condition for U(p, x), x ≠ 0, can be stated as:

(u(b) − u(a))/(b − a) = (u′(a) + u′(b))/2.  (10)

The optimum in (8) is either attained at a point where the above equality holds, or as p → 0, or p → 1. In the first case, the two-point support is precisely {a, b}. In the latter cases it degenerates into the one-point support case discussed in the next section. The remainder of the section provides conditions for a function to have two-point support. The next result is a technical restatement of Definition 1:

Lemma 1. The function u has two-point support if and only if for any µ and any σ > 0 there exist a < b and q_a, q_b such that the following conditions hold:
(a) (b − µ)(µ − a) = σ^2;
(b) (u(b) − u(a))/(b − a) = (q_a + q_b)/2;
(c) q(y) = Ay^2 + By + C ≤ u(y) for all y ∈ R, where

A = (q_b − q_a)/(2(b − a)),  B = (b q_a − a q_b)/(b − a),  C = (b u(a) − a u(b))/(b − a) − ab (q_a − q_b)/(2(b − a)).

Proof: From (b) and (c), it follows that q(a) = u(a) and q(b) = u(b), and q supports u. Condition (a) states that the intersection points a, b are the support of a (µ, σ^2)-distribution. Hence the conditions of Definition 1 are met. Conversely, if u has two-point support, then the conditions of the lemma are met for q_a = q′(a) and q_b = q′(b), where a, b and the quadratic q come from Definition 1.

By Proposition 3, the above result shows that:

Remark 1. If u satisfies the conditions of Lemma 1, then an optimal distribution for U(x), if it exists, has support {a, b} as defined by the conditions of the Lemma with respect to (µ_x, σ_x).

If u is differentiable at a, b, the construction in Lemma 1 implies q_a = q′(a) = u′(a), q_b = q′(b) = u′(b), i.e. q intersects u tangentially at a, b. In this case, condition (b) in Lemma 1 becomes precisely Eq. (10).
This condition says that the average value of the derivative u′ (if it exists) on [a, b] equals the average of the values of u′ at the end points a, b. Geometrically, the area under the graph of u′ between a and b equals the area of the trapezoid with vertices (a, 0), (a, u′(a)), (b, u′(b)), (b, 0). The existence of two such points a, b is ensured, for example, if u′ is nonincreasing and changes convexity at least once (between a and b). Moreover, we show that conditions (a) and (c) are satisfied if u′ is concave until some point and then convex, and has finite limits at ±∞.
Definition 2. A function f defined on R is convex-concave if there exists x_0 ∈ R such that f is convex on (−∞, x_0) and concave on (x_0, ∞). A function f defined on R is concave-convex if −f is convex-concave. A function f is S-shaped if it is increasing and convex-concave. A function f is inverse S-shaped if −f is S-shaped.

The following corollary of Lemma 1 is proved in the Appendix. An analogous result has been obtained by Birge and Dulá (1991, Theorem 5.1) for functions with bounded support.

Proposition 5. If u is differentiable such that u′ is inverse S-shaped with finite limits at ±∞, then u satisfies the two-point support property.

Furthermore, if u has two-point support, then by Definition 1, u(x) + q(x) also has two-point support for any quadratic q. Examples of functions satisfying these conditions include combinations of exponentials and logarithms (e.g. the log-logistic u(x) = C + log(1/(1 + e^{−ax})) used in statistics and classification) and hyperbolic functions (e.g. the catenary u(x) = C − b cosh(ax) used in physics and engineering), where in all cases the constant C can be replaced by a concave quadratic. The conditions of Proposition 5 are not necessary, however, for a function to have two-point support. In general, two-point support functions need not be differentiable, or even continuous; such examples are investigated in Section 5. Furthermore, even for differentiable functions, neither concavity nor the concave-convex derivative condition is necessary for two-point support, as illustrated in the following counterexample.

Example 2. The function u(y) = y − ln(1 + y^2) is increasing, but not concave (a plot of its derivative is provided in Figure 1). However, u has the two-point support property. Note that u′(y) + u′(−y) = 2 for all real y. Hence conditions (a) and (b) of Lemma 1 hold for b = −a = √(µ^2 + σ^2) and q_a = u′(a), q_b = u′(b).
By construction, the quadratic q defined in part (c) of the Lemma intersects u tangentially at a, b. The convexity properties of u′ imply that q supports u. From Proposition 4, for a given vector x, we have that U(x) = min_{p ∈ (0,1)} U(p, x), where

U(p, x) = µ_x − p ln(1 + (µ_x + √((1 − p)/p) σ_x)^2) − (1 − p) ln(1 + (µ_x − √(p/(1 − p)) σ_x)^2).

Remark that lim_{p→0} U(p, x) = lim_{p→1} U(p, x) = u(µ_x) = µ_x − ln(1 + µ_x^2), and U(p, x) is decreasing in p around p = 0 and increasing around p = 1, so the minimum is achieved somewhere in between. Suppose, for example, that for a certain x = x_0 we have µ_x = 0 and σ_x = 1. Then U(p, x_0) = p ln p + (1 − p) ln(1 − p), which is minimized at p = 1/2, giving U(x_0) = −ln 2.
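Example 2's closing computation can be reproduced numerically: minimizing U(p, x_0) over p ∈ (0, 1) recovers p = 1/2 and U(x_0) = −ln 2, and the same values come out of the general two-point formula (8) applied to u(y) = y − ln(1 + y^2) with µ_x = 0, σ_x = 1:

```python
import math

def u(y):
    return y - math.log(1 + y * y)

def U_two_point(p, mu, sigma):
    # Two-point robust objective from Proposition 4 / Eq. (8)
    b = mu + math.sqrt((1 - p) / p) * sigma
    a = mu - math.sqrt(p / (1 - p)) * sigma
    return p * u(b) + (1 - p) * u(a)

mu, sigma = 0.0, 1.0
grid = [i / 1000 for i in range(1, 1000)]
values = [U_two_point(p, mu, sigma) for p in grid]
p_star = grid[values.index(min(values))]

assert abs(p_star - 0.5) < 1e-3
assert abs(min(values) - (-math.log(2))) < 1e-6
# closed form for this x_0: U(p, x_0) = p ln p + (1-p) ln(1-p)
assert abs(U_two_point(0.3, mu, sigma)
           - (0.3 * math.log(0.3) + 0.7 * math.log(0.7))) < 1e-12
```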
Figure 1: Plots of the functions u(y) = y − ln(1 + y^2) and u′(y) = 1 − 2y/(1 + y^2).

3.2. One-Point Support Objectives

The previous section provided a simplified expression for the robust objective U(x) when u has two-point support. In particular this is true when u is concave with concave-convex derivative. We now provide other conditions that allow a simplified formulation for U when u has one-point support. Relevant special cases include u concave with a convex or concave derivative, and u monotonic and convex. The one-point support property is a limiting case of two-point support, where the expected value of the supporting quadratic is approached by a sequence of feasible two-point distributions, with one point approaching the mean and the other arbitrarily large (or small). This is formally defined as follows:

Definition 3. A function u has the one-point support property with respect to (µ, σ^2) if there exists a quadratic q that supports u and intersects it at µ, and either lim_{p→0} E[u(r_p)] = E[q(r_p)] or lim_{p→1} E[u(r_p)] = E[q(r_p)], where r_p is a (µ, σ^2) random variable with support {µ + √((1 − p)/p) σ, µ − √(p/(1 − p)) σ}. A function u has the one-point support property if it satisfies it with respect to any (µ, σ^2).

The following result follows trivially from Proposition 3.

Proposition 6. If u satisfies the one-point support property then U(x) = min_{p ∈ (0,1)} U(p, x); specifically, either U(x) = lim_{p→0} U(p, x) or U(x) = lim_{p→1} U(p, x).

In the following we identify some general relevant classes of one-point support functions. Proposition 5 shows that concave functions with concave-convex derivative have two-point support. In the extreme case when the derivative is either convex or concave (x_0 = ±∞ in Definition 2), i.e.
the risk-averse agent is prudent or imprudent, we obtain one-point support functions. The proof of the next result is provided in the Appendix.

Proposition 7. If u is concave and twice differentiable with a monotonic second derivative, then u has one-point support. Specifically,
(a) if u′ is convex, then U(x) = u(µ_x) + lim_{y→−∞} u″(y) · σ_x^2/2 for all x;
(b) if u′ is concave, then U(x) = u(µ_x) + lim_{y→+∞} u″(y) · σ_x^2/2 for all x.

Example 3. Consider a portfolio manager facing an exponential utility u(x) = 1 − e^{−ax}, with a > 0. Since lim_{y→−∞} u″(y) = −∞, Proposition 7(a) shows that the robust solution should focus on maximizing the expected portfolio return µ_x while setting σ_x = 0, i.e. finding a perfect hedge. In particular, if Σ is nonsingular, the optimal robust solution is not to invest at all, since for any investment strategy there exists a corresponding distribution that makes the expected utility of the return arbitrarily small. (These are two-point distributions placing an arbitrarily large loss with probability 1 − p, as p → 1.) While this is an extreme scenario, one should expect the robust approach to yield strategies that are by definition conservative.

We now consider the case when u is monotonic and convex, so we need to optimize over concave quadratics that are tangent to u at exactly one point. The next result is proved in the Appendix.

Proposition 8. If u is monotone convex, then it has one-point support and U(x) = u(µ_x). More generally, the same result holds if u is convex with either u′(−∞) = 0 or u′(∞) = 0.

The class of one-point support functions includes the utility functions most commonly used for mathematical modeling: those whose (first three) derivatives alternate in sign, i.e. those consistent with (up to third-order) stochastic dominance (see Brockett and Golden 1987).
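Proposition 8 can be illustrated numerically: for a monotone convex u (here u(y) = e^y, an arbitrary choice not taken from the paper), the two-point objective U(p, x) approaches the Jensen value u(µ_x) as p → 1, with the vanishing mass 1 − p escaping to −∞. A small sketch:

```python
import math

def U_two_point(p, mu, sigma, u):
    # Eq. (8): expectation of u under the two-point (mu, sigma^2) distribution
    b = mu + math.sqrt((1 - p) / p) * sigma
    a = mu - math.sqrt(p / (1 - p)) * sigma
    return p * u(b) + (1 - p) * u(a)

u = math.exp            # monotone convex, so Proposition 8 gives U(x) = u(mu_x)
mu, sigma = 0.2, 1.0

vals = [U_two_point(p, mu, sigma, u) for p in (0.9, 0.99, 0.9999, 0.999999)]

# E[u] stays above u(mu) (Jensen) and decreases toward it as p -> 1
assert all(v > math.exp(mu) for v in vals)
assert vals[0] > vals[1] > vals[2] > vals[3]
assert abs(vals[-1] - math.exp(mu)) < 2e-3
```

The infimum u(µ_x) is approached but never attained, matching the convention of reading min as inf throughout the paper.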
Examples include the standard exponential, logarithmic, power and hyperbolic functions (see Pratt 1964), sumex and linear-plus-exponential utilities (Bell's one-switch functions, see Bell 1988), or more generally, any mixtures of exponentials. In summary, one- and two-point support functions are fairly general, including (but not limited to) convex monotone functions and concave functions with at most one third-derivative sign change. Some additional relevant nonsmooth classes include piecewise linear functions, discussed in the context of a financial application in Section 5.

4. Bicriteria Optimization via PQP

The previous sections focused on reducing the inner problem from an optimization problem over multivariate distributions to a deterministic optimization problem in one or three variables. The remaining step in the solution to Problem (G) is to optimize this robust objective.
This section exploits the fact that the robust objective depends on the decision variable x only through two criteria, the return's mean µ_x and standard deviation σ_x: U(x) = V(µ_x, σ_x). This allows a simple and efficient parametric quadratic programming (PQP) formulation and solution of Problem (G), based on Geoffrion (1967a). For a fairly general class of functions u, we show that optimizing the corresponding robust objective is equivalent to a mean-variance optimization problem for a certain tradeoff coefficient, i.e. the resulting optimal solution of Problem (G) is mean-variance efficient. Based on Propositions 4 and 6, we argue that these results hold in particular for increasing and concave utilities with one- or two-point support, as well as for other nonconcave objectives, as illustrated in Section 5. Recall that a real function f defined on a convex set S is said to be quasiconcave if and only if for any λ ∈ [0, 1] and x, y ∈ S we have f(λx + (1 − λ)y) ≥ min(f(x), f(y)). Equivalently, f is quasiconcave whenever for any l, the upper level sets L_l = {x ∈ S : f(x) ≥ l} are convex. The following result due to Geoffrion (1967a) (see also Geoffrion 1967b) provides general sufficient conditions for solving bicriteria problems efficiently as parametric quadratic programs.

Theorem 2 (Geoffrion 1967a). Consider the program

max_{x ∈ X} v(p_1(f_1(x)), p_2(f_2(x))),  (11)

where X ⊆ R^n is a convex set. Suppose that f_1, f_2, p_1(f_1), p_2(f_2) are concave on X, p_1 and p_2 are increasing on the image of X under f_1, f_2 respectively, v is nondecreasing in each coordinate and quasiconcave on the convex hull of the image of X under (p_1(f_1), p_2(f_2)), and all functions are continuous. Further assume that the optimal solution x*(λ) of the parametric program max_{x ∈ X} λ f_1(x) + (1 − λ) f_2(x) is continuous for λ ∈ [0, 1]. Then the function w(λ) = v(p_1(f_1(x*(λ))), p_2(f_2(x*(λ)))) is continuous and unimodal on [0, 1].
Furthermore, if λ* is its maximum, then x*(λ*) is optimal for (11).

Proposition 1 reduced Problem (G) to solving a bicriteria (mean-variance) optimization problem. Based on Theorem 2, the following result shows that under fairly unrestrictive conditions, Problem (G) can be optimized by solving a parametric quadratic program.

Proposition 9. Suppose that the robust objective function V(µ_x, σ_x) = min_{r ∼ (µ_x, σ_x^2)} E[u(r)] is continuous, nondecreasing in µ_x, nonincreasing in σ_x and quasiconcave. Then Problem (G) over a convex polyhedral set P is equivalent to solving the following parametric quadratic program:

(P_λ)  max_{x ∈ P}  λ x^T µ − (1 − λ) x^T Σ x.  (12)
Specifically, if x*(λ) denotes the optimal solution of Problem (P_λ), then the function w(λ) = V(µ_{x*(λ)}, σ_{x*(λ)}) is continuous and unimodal in λ ∈ [0, 1]. Moreover, if λ* is its maximum, then x*(λ*) is optimal for Problem (G).

Proof: Let f_1(x) = µ_x and f_2(x) = −x'Σx, p_1(f_1) = f_1 and p_2(f_2) = −(−f_2)^{1/2}; these are all concave, and the latter two are nondecreasing. In the notation of Theorem 2, we obtain v(p_1(f_1(x)), p_2(f_2(x))) = min_{r ~ (µ_x, σ_x²)} E[u(r)] = V(µ_x, σ_x). The assumptions on V ensure that v is nondecreasing and quasiconcave. Furthermore, the convex polyhedral structure of the feasible set P ensures that there exists a continuous optimal solution x*(λ) (see e.g. Wolfe 1959, Geoffrion 1967a). The result follows from Theorem 2.

It is reasonable to assume that the robust utility U(x) = V(µ_x, σ_x) is nondecreasing in the mean µ_x and nonincreasing in the standard deviation σ_x of the underlying return. This basically says that the decision maker prefers distributions with higher return and lower risk. Quasiconcavity can be interpreted as a diminishing marginal rate of substitution between mean and standard deviation, a property shared by most utility functions. When the actual utility function u is increasing and concave (as is typically assumed in decision analysis) and has one- or two-point support, these conditions are automatically satisfied. We obtain the following result:

Proposition 10. For increasing concave utilities with one- or two-point support, Problem (G) over a convex polyhedral set P is equivalent to solving the parametric quadratic program (P_λ) for a suitable value λ* ∈ [0, 1].

Proof: From Propositions 6, respectively 4, we have that V(µ_x, σ_x) = min_{p ∈ (0,1)} V(p, µ_x, σ_x), where

    V(p, µ_x, σ_x) = p u(µ_x + √((1−p)/p) σ_x) + (1 − p) u(µ_x − √(p/(1−p)) σ_x).

Monotonicity of u implies that V(p, µ_x, σ_x) is nondecreasing in µ_x.
Concavity of u implies monotonicity of V(p, µ_x, σ_x) in σ_x, and concavity in (µ_x, σ_x). Since monotonicity and concavity are preserved through minimization, the same properties hold for V(µ_x, σ_x). The final result follows from Proposition 9.

In particular, the above results hold for increasing concave utilities with convex or concave-convex derivative. For more general utilities, one can attempt to check monotonicity of V in µ_x and σ_x, as well as quasiconcavity, using the results of Propositions 2, 4 or 6, depending on the support properties of the function u. In particular, for one- or two-point support functions, monotonicity in µ_x can be ensured by assuming that u is nondecreasing. Some examples are provided in the next section.
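The inner minimization over p in the two-point representation above is easy to exercise numerically. The sketch below (not the paper's implementation) scans p on a grid for the increasing concave utility u(y) = ln(1/(1 + e^{−y})) and checks that the resulting V(µ, σ) indeed sits below u(µ), increases in µ, and decreases in σ.

```python
import math

def u(y):
    """Increasing concave utility u(y) = ln(1/(1+e^{-y})), computed stably."""
    # -log(1 + e^{-y}); for negative y rewrite as y - log(1 + e^{y}).
    if y >= 0:
        return -math.log1p(math.exp(-y))
    return y - math.log1p(math.exp(y))

def V_two_point(mu, sigma, grid=10000):
    """Robust objective V(mu, sigma) = min over p in (0,1) of
    p*u(mu + sqrt((1-p)/p)*sigma) + (1-p)*u(mu - sqrt(p/(1-p))*sigma)."""
    best = float("inf")
    for k in range(1, grid):
        p = k / grid
        hi = mu + math.sqrt((1 - p) / p) * sigma   # upper support point
        lo = mu - math.sqrt(p / (1 - p)) * sigma   # lower support point
        best = min(best, p * u(hi) + (1 - p) * u(lo))
    return best

# Sanity checks: below u(mu) by Jensen, monotone in mu, antitone in sigma.
assert V_two_point(0.0, 1.0) < u(0.0)
assert V_two_point(1.0, 1.0) > V_two_point(0.0, 1.0)
assert V_two_point(0.0, 2.0) < V_two_point(0.0, 1.0)
```

The two candidate support points are chosen so that each two-point distribution matches the prescribed mean µ and variance σ² exactly, for every p.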
Remark that if u is concave and has one- or two-point support, Propositions 4 and 6 imply that U(x) is concave in x. So, alternatively, traditional computational methods for convex programming may also be suitable for solving Problem (G), without exploiting its bicriteria structure.

Several algorithms are available for solving parametric quadratic programs (Wolfe 1959, Geoffrion 1967a,b, Best 1996). They all yield an optimal solution x*(λ) that is continuous in λ. Geoffrion (1967a) solves (P_λ) with λ = 1 and decreases λ until the unimodal function w(λ) = v(µ_{x*(λ)}, σ_{x*(λ)}) achieves its maximum on [0, 1]. In our applications, the intermediate solutions are also of interest, as they yield a portion of the mean-variance tradeoff curve. More recent algorithms for PQP are due to Best (1996) and Arseneau and Best (1999). For a general treatment of parametric quadratic programs see Bank et al. (1982). Sensitivity analysis for quadratic programs in the mean-variance framework of Markowitz (1959) is discussed in Dupačová et al. (2002, Ch. II.7).

Example 4. In Problem (G), suppose that the constraint set is given by P = {x : Σ_i x_i = 1, x ≥ 0}, and let u(y) = ln(1/(1 + e^{−y})). Then u is increasing and concave and its derivative u'(y) = e^{−y}/(1 + e^{−y}) is inverse S-shaped (see Figure 2). Therefore

    V(µ_x, σ_x) = min_{p ∈ (0,1)} −p ln(1 + e^{−(µ_x + √((1−p)/p) σ_x)}) − (1 − p) ln(1 + e^{−(µ_x − √(p/(1−p)) σ_x)})

is increasing in µ_x, decreasing in σ_x and concave in (µ_x, σ_x), i.e. satisfies the conditions of Proposition 9. To solve Problem (G), we first compute x*(λ), the optimal solution of:

    (P_λ)    max λ x'µ − (1 − λ) x'Σx    s.t.  Σ_i x_i = 1,  x ≥ 0,

and then maximize over λ ∈ (0, 1) the function w(λ) = V(µ_{x*(λ)}, σ_{x*(λ)}), which is unimodal. This can be seen from Figure 3, where we display w(λ) for the following three data sets: µ_1 = (1 3), Σ_1 = (…), µ_2 = (…), Σ_2 = (…), respectively µ_3 = µ_2, Σ_3 = Σ_2.
The resulting optimal values w(λ*) are the desired optimal values of (G), achieved at λ* = .60, .72, and 1, respectively. The corresponding optimal solutions of Problem (G) are x*(λ*) = (…), (…) and (…). All programs were run in MATLAB 5.3 under Windows XP.

5. Application: Portfolio Management Incentives

In this section we present an application of our results in the context of financial portfolio management incentives. This application prompts us to analyse a set of special non-continuous/non-differentiable two-point support objectives, induced by bonus and commission-based incentives, as
well as non-expectation criteria such as value at risk. We obtain new results, as well as new proofs and insight for some technically known results.

Figure 2: Plots of the functions u(y) = ln(1/(1 + e^{−y})) and u'(y) = e^{−y}/(1 + e^{−y}).
Figure 3: Plots of the unimodal functions w_1(λ), w_2(λ) and w_3(λ).

Consider the example described in the introduction: a portfolio manager is interested in a robust portfolio among a set of n assets with random returns R_1, ..., R_n. Suppose that the manager does not know the distribution of the random returns, but estimates them to have means µ and covariance structure Σ. The manager is interested in a robust portfolio that maximizes the worst case expected payoff u(r), where r is the portfolio return achieved at the end of the evaluation period. This problem can be formulated as follows:

    max_{x ∈ P} min_{R ~ (µ, Σ)} E[u(x'R)],

where the underlying feasible set P is convex polyhedral, and may have different forms, according to the type of portfolio problem of interest. For example, P = {x ∈ R^n : A'x ≤ b} or in particular P = {x ∈ [0, 1]^n : x'1_n = 1}. The objective function u can either represent the agent's utility function for a reward proportional to the portfolio return, or the valuation induced by a more complex
compensation mechanism. For example, if a (risk neutral) agent is rewarded a one-shot bonus for outperforming a certain index s, then u is precisely the cdf of s.

According to Proposition 10, if u is increasing and concave (i.e. the agent is risk averse) with one- or two-point support, this problem amounts to solving the PQP (12) over the portfolio choice set P for a suitable value λ*. In particular, the resulting optimal portfolio is mean-variance efficient (a portfolio is mean-variance efficient if it has the highest expected return for a given variance, or, equivalently, if it has the smallest variance for a given expected return; see e.g. Markowitz 1959). The robust approach associates with each utility function u a worst case risk factor κ* = (1 − λ*)/λ*, obtainable by the methods of Theorem 2 and Proposition 9, such that solving the robust optimization problem is equivalent to solving a deterministic problem with quadratic utility u_{κ*}(x) = µ'x − κ* x'Σx. This is a deterministic mean-variance portfolio problem that can be solved by standard methods.

Consider for comparison purposes the alternative approach, based on the assumption that the underlying distribution of returns is multivariate normal. In this case the problem can be formulated as follows:

    (N)    max_{x ∈ P} E[u(r_x)] = max_{x ∈ P} ∫ u(y) (1/√(2π x'Σx)) e^{−(y − µ'x)²/(2 x'Σx)} dy.

For increasing and concave utilities u, it is well known that solving the above problem yields a mean-variance efficient portfolio (see e.g. Kira and Ziemba 1977). Moreover, for exponential utilities, the solution can be easily computed. For more general utilities, however, Problem (N) involves numerical integration.

In the following we investigate Problem (G) for some relevant non-differentiable criteria that arise from typical incentive schemes for portfolio managers. The resulting objectives do not satisfy the conditions of Proposition 10, so stronger results are needed.
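For a concrete sense of the numerical integration involved in (N), here is a sketch evaluating E[u(r)] for a normal return r ~ N(µ_x, σ_x²) by midpoint quadrature, using the logistic-type utility u(y) = −ln(1 + e^{−y}) as an assumed example. For concave u, the value must lie below u(µ_x) by Jensen's inequality, and since u'' ≥ −1/4 everywhere for this u, it cannot fall more than σ_x²/8 below it.

```python
import math

def u(y):
    # Concave increasing utility u(y) = -log(1 + e^{-y}), stable for large |y|.
    return -math.log1p(math.exp(-y)) if y >= 0 else y - math.log1p(math.exp(y))

def normal_expected_utility(mu, sigma, half_width=8.0, steps=4000):
    """Approximate E[u(r)], r ~ N(mu, sigma^2), by midpoint quadrature."""
    a, b = mu - half_width * sigma, mu + half_width * sigma
    h = (b - a) / steps
    total = 0.0
    for k in range(steps):
        y = a + (k + 0.5) * h  # midpoint of the k-th cell
        density = (math.exp(-((y - mu) ** 2) / (2 * sigma ** 2))
                   / (sigma * math.sqrt(2 * math.pi)))
        total += u(y) * density * h
    return total

val = normal_expected_utility(0.0, 1.0)
assert val < u(0.0)                 # Jensen: E[u(r)] < u(E[r]) for strictly concave u
assert u(0.0) - 0.125 - 1e-6 < val  # second-order bound, since u'' >= -1/4
```

The ±8σ truncation leaves a tail mass of order 1e-15, negligible against the quadrature tolerance used here.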
We first argue that the objective has two-point support, then calculate the robust objective U(x) = V(µ_x, σ_x) in closed form. Finally, we show that V satisfies the conditions of Proposition 9, hence the optimal robust portfolio is mean-variance efficient and the problem can be solved as a PQP.

Bonus and Commission Incentives. Portfolio managers, and generally sales managers, are typically incentivised by a fixed salary plus a bonus β, contingent on the realization of a certain preset performance target, denoted α. It is often argued that fixed target bonus systems are unappealing because managers have no direct incentive to outperform the target. As a remedy, managers are offered an additional commission rate γ for overachieving the target. Such a reward system, mixing a bonus β and commission rate γ for output above the target α, can be modeled
as u(y) = γ(y − α)⁺ + β 1_{(y>α)}. If the distribution of returns is normal, then the problem amounts to optimizing (numerically) the objective:

    γ σ_x Ψ((µ_x − α)/σ_x) + β Φ((µ_x − α)/σ_x),  where Ψ(a) = E[(Z + a)⁺], Φ(a) = P(Z ≤ a), Z ~ N(0, 1).    (13)

Given only means and covariances of the asset returns, the manager maximizes over x ∈ P the corresponding robust criterion:

    U⁺_α(x) = min_{R ~ (µ, Σ)} γ E[(x'R − α)⁺] + β P(x'R > α) = min_{r ~ (µ_x, σ_x²)} γ E[(r − α)⁺] + β P(r > α),    (14)

where the last equality follows by Proposition 1. The following result is proved in the Appendix.

Proposition 11. The optimal robust portfolio for a manager incentivized by a bonus β plus a commission rate γ for performance above a target α maximizes over x ∈ P the robust objective:

    U⁺_α(x) = γ (µ_x − α)⁺ + β ((µ_x − α)⁺)² / (σ_x² + (µ_x − α)²).

Moreover, the optimal portfolio is mean-variance efficient and can be solved as a PQP.

In particular, with a commission-only reward (β = 0), the manager only cares about expected return. If the manager is only rewarded a bonus (γ = 0), the objective is to maximize the (worst case) probability of achieving the given target α. Proposition 11 reduces the optimal robust portfolio to maximizing (µ_x − α)⁺/σ_x. This follows also from Proposition 1 and Chebyshev's inequality. For comparison, from (13), the corresponding problem with normal returns is equivalent to optimizing the portfolio's Sharpe ratio (µ_x − α)/σ_x (see e.g. Charnes and Cooper 1963). The interesting analogy between the robust and normal criteria can be interpreted as follows: Consider two portfolio managers who have access to the same set of assets with known means µ and covariance Σ and are incentivised by the same achievable target α (i.e. there exists x ∈ P such that µ_x ≥ α). The first manager believes that stocks follow a multivariate normal distribution with these parameters, whereas the other manager ignores the distribution and adopts a worst case approach.
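As a numerical sanity check of Proposition 11 (a sketch with hypothetical parameters): since the normal distribution is one member of the class with moments (µ_x, σ_x²), the closed form must lower-bound the normal-returns value (13), and the γ = 0 case must reduce to the one-sided Chebyshev probability bound.

```python
import math

def robust_bonus_commission(mu, sigma, alpha, gamma, beta):
    """Proposition 11 closed form: worst-case gamma*E[(r-alpha)^+] + beta*P(r>alpha)."""
    m = mu - alpha
    mplus = max(m, 0.0)
    return gamma * mplus + beta * mplus ** 2 / (sigma ** 2 + m ** 2)

def normal_bonus_commission(mu, sigma, alpha, gamma, beta):
    """Objective (13) under r ~ N(mu, sigma^2): gamma*sigma*Psi(a) + beta*Phi(a)."""
    a = (mu - alpha) / sigma
    Phi = 0.5 * (1 + math.erf(a / math.sqrt(2)))
    phi = math.exp(-a * a / 2) / math.sqrt(2 * math.pi)
    Psi = phi + a * Phi  # E[(Z + a)^+] for standard normal Z
    return gamma * sigma * Psi + beta * Phi

# Hypothetical parameters: target alpha = 0, bonus beta = 1, commission gamma = 1.
robust = robust_bonus_commission(0.1, 0.2, 0.0, 1.0, 1.0)
normal = normal_bonus_commission(0.1, 0.2, 0.0, 1.0, 1.0)
assert robust <= normal + 1e-12  # the worst case cannot exceed the normal value

# Bonus-only case (gamma = 0): the one-sided Chebyshev probability bound.
p = robust_bonus_commission(0.1, 0.2, 0.0, 0.0, 1.0)
assert 0.0 <= p <= 1.0 and abs(p - 0.01 / 0.05) < 1e-12
```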
This result says that the two managers, despite their different beliefs, will choose the same portfolio.

Penalty and Value at Risk (VaR) Incentives. One problem with the above incentives is that managers who are not on the winning track (i.e. have low chances of making the bonus) may undertake extremely risky strategies, with potentially disastrous consequences for the firm. As a remedy, downside incentive schemes are designed to minimize risk exposure; we consider two
examples, based on penalties and value at risk. With both up- and downside incentives in place, successful managers will attempt to maximize gains, while underperformers minimize losses.

Ross (2004) suggests an incentive system whereby managers are penalized for performances below a benchmark level α; this is modeled by an option-type payoff u(y) = (α − y)⁺, with the unit penalty normalized w.l.o.g. to 1. The corresponding robust portfolio minimizes over x ∈ P the worst case penalty U⁻_α(x) = max_{R ~ (µ, Σ)} E[(α − x'R)⁺]. Although this is a min-max formulation, a simple change of sign converts it to max-min, and our standard results apply. The next result is proved in the Appendix. A direct proof of the univariate bound is due to Jagannathan (1977), who also provides solutions for nonnegative and symmetric distributions.

Proposition 12. The optimal robust portfolio for a manager incentivized by a linear penalty for performance below a target α minimizes over x ∈ P the robust criterion:

    U⁻_α(x) = (1/2) [(α − µ_x) + √(σ_x² + (α − µ_x)²)].

Moreover, the optimal portfolio is mean-variance efficient and can be solved as a PQP.

Another relevant measure of downside risk, which financial firms, as well as portfolio and insurance managers, are often required to quote and optimize, is value at risk (see e.g. Linsmeier and Pearson 1996). The value at risk of a return x (or loss −x) at a confidence level α (typically α = 0.95 or α = 0.99) is defined as the following fractile criterion:

    VaR(α; x) = min{ξ : P(−x < ξ) ≥ α} = −max{z : P(x > z) ≥ α}.

The definition with strict inequality is needed for our results. However, weak inequalities can replace strict ones throughout if return distributions are known to be continuous. For normal returns, Charnes and Cooper (1963) prove that VaR_N(α) = Φ^{−1}(α) σ_x − µ_x. In practice, return distributions are typically not normal, and exact computation of VaR may amount to difficult numerical integration.
Robust (or worst case) value at risk is defined as the largest VaR attainable, given only the mean and covariance of the returns. The robust portfolio with VaR criterion solves:

    −VaR(α) = max_{x ∈ P} max{z : min_{R ~ (µ, Σ)} P(x'R > z) ≥ α} = max_{x ∈ P} max{z : (µ_x − z)²/((µ_x − z)² + σ_x²) ≥ α, z ≤ µ_x}.    (15)

The second formulation follows by Proposition 11 for γ = 0; the optimum is achieved when equality holds in the first constraint. Solving for z, we obtain the following result; a proof based on the multivariate Chebyshev inequality (Karlin and Studden 1966) is due to El Ghaoui et al. (2003).
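Both downside criteria can be exercised numerically; the sketch below uses hypothetical portfolio moments. The penalty bound of Proposition 12 reduces to σ_x/2 at α = µ_x, where it is attained exactly by the symmetric two-point distribution at µ_x ± σ_x; and solving the binding constraint in (15) gives z = µ_x − √(α/(1−α)) σ_x, i.e. a robust VaR of √(α/(1−α)) σ_x − µ_x.

```python
import math

def robust_penalty(mu, sigma, alpha):
    """Proposition 12 / Jagannathan (1977): worst-case E[(alpha - r)^+]
    over all distributions of r with mean mu and variance sigma^2."""
    d = alpha - mu
    return 0.5 * (d + math.sqrt(sigma ** 2 + d ** 2))

def robust_var(alpha, mu, sigma):
    """Worst-case VaR at confidence alpha, given mean mu and std sigma."""
    return math.sqrt(alpha / (1 - alpha)) * sigma - mu

mu, sigma = 0.1, 0.2  # hypothetical portfolio mean and standard deviation

# 1) Penalty bound: at alpha = mu it equals sigma/2, attained by the
#    symmetric two-point distribution r = mu +/- sigma (each w.p. 1/2).
two_point = 0.5 * max(mu - (mu + sigma), 0.0) + 0.5 * max(mu - (mu - sigma), 0.0)
assert abs(two_point - robust_penalty(mu, sigma, mu)) < 1e-12

# 2) Robust VaR: z = mu - sqrt(alpha/(1-alpha))*sigma makes the worst-case
#    tail probability (mu - z)^2 / ((mu - z)^2 + sigma^2) exactly alpha.
alpha = 0.95
z = mu - math.sqrt(alpha / (1 - alpha)) * sigma
tail = (mu - z) ** 2 / ((mu - z) ** 2 + sigma ** 2)
assert abs(tail - alpha) < 1e-12 and z <= mu
assert abs(robust_var(alpha, mu, sigma) - (-z)) < 1e-12
```

At α = 0.95 the robust multiplier √(α/(1−α)) = √19 ≈ 4.36 is far larger than the normal fractile Φ^{−1}(0.95) ≈ 1.645, illustrating how conservative the worst-case VaR is.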
Proposition 13. The optimal robust portfolio for a manager minimizing an α-VaR incentive solves min_{x ∈ P} √(α/(1 − α)) σ_x − µ_x, so it is mean-variance efficient.

Notice that the VaR objective is not an expectation; this suggests possible extensions of our results beyond the main framework of this paper.

6. A Multi-Product Pricing Application and Extensions

In this section we consider the optimal pricing decision of a firm (or sales manager) setting prices for a line of products so as to maximize the expected utility of profits (respectively, of his expected compensation). In this case the random returns are the demands for each product, which are nonnegative and depend on the set prices. This setting indicates two possible extensions of our original problem, which we address in this section: (1) we incorporate dependence of the random vector R on the decision variable x, and (2) we analyse the case of nonnegative random vectors R. We extend Proposition 9 to address the first issue, and we provide weak bounds for nonnegative random variables to address the second.

Consider a firm setting a vector of prices x for a set of n products with random demands R(x) = (R_1(x), ..., R_n(x)) in order to maximize the expected utility of profits, given by u(r), where r is the total profit. Suppose that the demands for the n products are correlated due to substitution and/or complementarity effects, and depend on the prices of all products. Demand is estimated to have mean µ(x) and covariance structure Σ(x). The variable cost per product is given by the vector c. The problem of the firm looking for a robust pricing scheme for the n products can be formulated as follows:

    (G̃)    max_{x ∈ P} min_{R ~ (µ(x), Σ(x))} E[u((x − c)'R(x))].
The marketing constraints on the feasible set of prices can be assumed to be of the following form:

    P = {x ∈ R^n_+ : A'x ≤ b, x_i^min ≤ x_i ≤ x_i^max}.

Returns Dependent on the Decision Variable

The above problem (G̃) is not a special case of the robust problem (G) studied in this paper, since the random return (in this case the demand), together with its mean and variance, actually depends on the decision variable (in this case the price x). While the results of this section are presented in the context of the specific pricing application, one can naturally extend them to the general abstract setting.
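To fix ideas, here is a single-product sketch of the price-dependent setting, with hypothetical linear expected demand µ(x) = a − bx, constant demand standard deviation s0, and the illustrative worst-case mean-risk criterion µ − κσ on profit standing in for the robust expected utility. The risk adjustment pulls the optimal price below the risk-neutral monopoly price.

```python
# Hypothetical single-product pricing: expected demand a - b*x, demand std s0,
# unit cost C; the firm maximizes an assumed worst-case criterion mean - kappa*std.
A, B, S0, C, KAPPA = 10.0, 1.0, 2.0, 2.0, 0.5

def robust_profit(x):
    mean_profit = (x - C) * (A - B * x)  # (price - cost) * expected demand
    std_profit = (x - C) * S0            # profit std scales with the margin
    return mean_profit - KAPPA * std_profit

# Grid search over admissible prices between cost and the zero-demand price.
prices = [C + k * (A / B - C) / 10000 for k in range(10001)]
x_robust = max(prices, key=robust_profit)
x_neutral = 0.5 * (A / B + C)  # risk-neutral optimum of (x - C)(A - B*x)
assert x_robust <= x_neutral   # the risk adjustment lowers the price
# Closed form for this linear sketch: x_neutral - KAPPA*S0/(2*B).
assert abs(x_robust - (x_neutral - KAPPA * S0 / (2 * B))) < 1e-2
```

The closed form follows because the criterion factors as (x − C)(A − Bx − κ s0), another concave quadratic in the price, so the price-dependent moments still lead to a tractable deterministic problem here.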
Review of Fundamental Mathematics As explained in the Preface and in Chapter 1 of your textbook, managerial economics applies microeconomic theory to business decision making. The decisionmaking tools
More informationThe Relative Worst Order Ratio for OnLine Algorithms
The Relative Worst Order Ratio for OnLine Algorithms Joan Boyar 1 and Lene M. Favrholdt 2 1 Department of Mathematics and Computer Science, University of Southern Denmark, Odense, Denmark, joan@imada.sdu.dk
More informationEconomics 2020a / HBS 4010 / HKS API111 FALL 2010 Solutions to Practice Problems for Lectures 1 to 4
Economics 00a / HBS 4010 / HKS API111 FALL 010 Solutions to Practice Problems for Lectures 1 to 4 1.1. Quantity Discounts and the Budget Constraint (a) The only distinction between the budget line with
More information1. R In this and the next section we are going to study the properties of sequences of real numbers.
+a 1. R In this and the next section we are going to study the properties of sequences of real numbers. Definition 1.1. (Sequence) A sequence is a function with domain N. Example 1.2. A sequence of real
More informationNo: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics
No: 10 04 Bilkent University Monotonic Extension Farhad Husseinov Discussion Papers Department of Economics The Discussion Papers of the Department of Economics are intended to make the initial results
More informationOn the Efficiency of Competitive Stock Markets Where Traders Have Diverse Information
Finance 400 A. Penati  G. Pennacchi Notes on On the Efficiency of Competitive Stock Markets Where Traders Have Diverse Information by Sanford Grossman This model shows how the heterogeneous information
More informationSeparation Properties for Locally Convex Cones
Journal of Convex Analysis Volume 9 (2002), No. 1, 301 307 Separation Properties for Locally Convex Cones Walter Roth Department of Mathematics, Universiti Brunei Darussalam, Gadong BE1410, Brunei Darussalam
More informationIntroduction to Algebraic Geometry. Bézout s Theorem and Inflection Points
Introduction to Algebraic Geometry Bézout s Theorem and Inflection Points 1. The resultant. Let K be a field. Then the polynomial ring K[x] is a unique factorisation domain (UFD). Another example of a
More information24. The Branch and Bound Method
24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NPcomplete. Then one can conclude according to the present state of science that no
More informationTest of Hypotheses. Since the NeymanPearson approach involves two statistical hypotheses, one has to decide which one
Test of Hypotheses Hypothesis, Test Statistic, and Rejection Region Imagine that you play a repeated Bernoulli game: you win $1 if head and lose $1 if tail. After 10 plays, you lost $2 in net (4 heads
More informationPacific Journal of Mathematics
Pacific Journal of Mathematics GLOBAL EXISTENCE AND DECREASING PROPERTY OF BOUNDARY VALUES OF SOLUTIONS TO PARABOLIC EQUATIONS WITH NONLOCAL BOUNDARY CONDITIONS Sangwon Seo Volume 193 No. 1 March 2000
More information15.062 Data Mining: Algorithms and Applications Matrix Math Review
.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop
More informationFairness in Routing and Load Balancing
Fairness in Routing and Load Balancing Jon Kleinberg Yuval Rabani Éva Tardos Abstract We consider the issue of network routing subject to explicit fairness conditions. The optimization of fairness criteria
More informationIntroduction and message of the book
1 Introduction and message of the book 1.1 Why polynomial optimization? Consider the global optimization problem: P : for some feasible set f := inf x { f(x) : x K } (1.1) K := { x R n : g j (x) 0, j =
More informationAdvanced Lecture on Mathematical Science and Information Science I. Optimization in Finance
Advanced Lecture on Mathematical Science and Information Science I Optimization in Finance Reha H. Tütüncü Visiting Associate Professor Dept. of Mathematical and Computing Sciences Tokyo Institute of Technology
More informationLarge induced subgraphs with all degrees odd
Large induced subgraphs with all degrees odd A.D. Scott Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, England Abstract: We prove that every connected graph of order
More informationMultivariate Normal Distribution
Multivariate Normal Distribution Lecture 4 July 21, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2 Lecture #47/21/2011 Slide 1 of 41 Last Time Matrices and vectors Eigenvalues
More informationLecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization
Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization 2.1. Introduction Suppose that an economic relationship can be described by a realvalued
More informationAdvanced Microeconomics
Advanced Microeconomics Ordinal preference theory Harald Wiese University of Leipzig Harald Wiese (University of Leipzig) Advanced Microeconomics 1 / 68 Part A. Basic decision and preference theory 1 Decisions
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES
MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES Contents 1. Random variables and measurable functions 2. Cumulative distribution functions 3. Discrete
More informationMATH10212 Linear Algebra. Systems of Linear Equations. Definition. An ndimensional vector is a row or a column of n numbers (or letters): a 1.
MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0534405967. Systems of Linear Equations Definition. An ndimensional vector is a row or a column
More informationConvex Rationing Solutions (Incomplete Version, Do not distribute)
Convex Rationing Solutions (Incomplete Version, Do not distribute) Ruben Juarez rubenj@hawaii.edu January 2013 Abstract This paper introduces a notion of convexity on the rationing problem and characterizes
More informationPractical Guide to the Simplex Method of Linear Programming
Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear
More informationMOP 2007 Black Group Integer Polynomials Yufei Zhao. Integer Polynomials. June 29, 2007 Yufei Zhao yufeiz@mit.edu
Integer Polynomials June 9, 007 Yufei Zhao yufeiz@mit.edu We will use Z[x] to denote the ring of polynomials with integer coefficients. We begin by summarizing some of the common approaches used in dealing
More information1 Short Introduction to Time Series
ECONOMICS 7344, Spring 202 Bent E. Sørensen January 24, 202 Short Introduction to Time Series A time series is a collection of stochastic variables x,.., x t,.., x T indexed by an integer value t. The
More informationOverview of Math Standards
Algebra 2 Welcome to math curriculum design maps for Manhattan Ogden USD 383, striving to produce learners who are: Effective Communicators who clearly express ideas and effectively communicate with diverse
More informationMinimize subject to. x S R
Chapter 12 Lagrangian Relaxation This chapter is mostly inspired by Chapter 16 of [1]. In the previous chapters, we have succeeded to find efficient algorithms to solve several important problems such
More informationmax cx s.t. Ax c where the matrix A, cost vector c and right hand side b are given and x is a vector of variables. For this example we have x
Linear Programming Linear programming refers to problems stated as maximization or minimization of a linear function subject to constraints that are linear equalities and inequalities. Although the study
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationMathematics Notes for Class 12 chapter 12. Linear Programming
1 P a g e Mathematics Notes for Class 12 chapter 12. Linear Programming Linear Programming It is an important optimization (maximization or minimization) technique used in decision making is business and
More informationLimits and convergence.
Chapter 2 Limits and convergence. 2.1 Limit points of a set of real numbers 2.1.1 Limit points of a set. DEFINITION: A point x R is a limit point of a set E R if for all ε > 0 the set (x ε,x + ε) E is
More informationNotes on weak convergence (MAT Spring 2006)
Notes on weak convergence (MAT4380  Spring 2006) Kenneth H. Karlsen (CMA) February 2, 2006 1 Weak convergence In what follows, let denote an open, bounded, smooth subset of R N with N 2. We assume 1 p
More informationOn the Interaction and Competition among Internet Service Providers
On the Interaction and Competition among Internet Service Providers Sam C.M. Lee John C.S. Lui + Abstract The current Internet architecture comprises of different privately owned Internet service providers
More informationCritical points of once continuously differentiable functions are important because they are the only points that can be local maxima or minima.
Lecture 0: Convexity and Optimization We say that if f is a once continuously differentiable function on an interval I, and x is a point in the interior of I that x is a critical point of f if f (x) =
More informationSingle item inventory control under periodic review and a minimum order quantity
Single item inventory control under periodic review and a minimum order quantity G. P. Kiesmüller, A.G. de Kok, S. Dabia Faculty of Technology Management, Technische Universiteit Eindhoven, P.O. Box 513,
More informationMathematics Review for MS Finance Students
Mathematics Review for MS Finance Students Anthony M. Marino Department of Finance and Business Economics Marshall School of Business Lecture 1: Introductory Material Sets The Real Number System Functions,
More informationDiscussion on the paper Hypotheses testing by convex optimization by A. Goldenschluger, A. Juditsky and A. Nemirovski.
Discussion on the paper Hypotheses testing by convex optimization by A. Goldenschluger, A. Juditsky and A. Nemirovski. Fabienne Comte, Celine Duval, Valentine GenonCatalot To cite this version: Fabienne
More informationA Simple Model of Price Dispersion *
Federal Reserve Bank of Dallas Globalization and Monetary Policy Institute Working Paper No. 112 http://www.dallasfed.org/assets/documents/institute/wpapers/2012/0112.pdf A Simple Model of Price Dispersion
More information