Robust Mean-Covariance Solutions for Stochastic Optimization


OPERATIONS RESEARCH, Vol. 00, No. 0, Xxxxx 0000, © 0000 INFORMS

Robust Mean-Covariance Solutions for Stochastic Optimization

Ioana Popescu
INSEAD, Decision Sciences Area, Blvd. de Constance, Fontainebleau, France.

We provide a method for deriving robust solutions to certain stochastic optimization problems, based on mean-covariance information about the distributions underlying the uncertain vector of returns. We prove that for a general class of objective functions, the robust solutions amount to solving a certain deterministic parametric quadratic program. We first prove a general projection property for multivariate distributions with given means and covariances, which reduces our problem to optimizing a univariate mean-variance robust objective. This allows us to use known univariate results in the multidimensional setting, and to add new results in this direction. In particular, we characterize a general class of objective functions (so-called one- or two-point support functions), for which the robust objective is reduced to a deterministic optimization problem in one variable. Finally, we adapt a result from Geoffrion (1967a) to reduce the main problem to a parametric quadratic program. In particular, our results are true for increasing concave utilities with convex or concave-convex derivatives. Closed-form solutions are obtained for special discontinuous criteria, motivated by bonus- and commission-based incentive schemes for portfolio management. We also investigate a multiproduct pricing application, which motivates extensions of our results for the case of nonnegative and decision-dependent returns.

Subject classifications: programming: stochastic, quadratic; decision analysis: risk; finance: portfolio.
Area of review: Decision Analysis.
History: Received January 2004; accepted December.
1. Introduction

In this paper we provide a general approach for deriving robust solutions for stochastic optimization programs, based on mean-covariance information about the distributions underlying the return vector. Consider a decision maker interested in maximizing his expected utility u of a linear reward function, subject to a given set of convex constraints P on the vector of decision variables x. The decision maker solves the following general stochastic mathematical program without recourse:

(P) maximize E[u(x′R)] subject to x ∈ P.

Throughout the paper, random variables and random vectors are denoted in bold, to distinguish them from deterministic quantities. We use max and min in the wide sense of sup and inf, to account for the cases when the optimum is not achieved. When the distribution of the uncertain return vector R is exactly known, Problem (P) is a general one-stage stochastic program. One obvious case when the solution of Problem (P) does not depend on the actual distribution of R, except through its mean and covariance, is that of quadratic utilities u(r) = ar² + br + c, in which case it amounts to:

max_{x∈P} a x′(Σ + µµ′)x + b x′µ + c.

However, this is only the case for quadratic (or linear) utility functions, which explains in part why these classes are particularly favored in economics and financial models. In many applications of stochastic programming, there is considerable uncertainty regarding the distribution of R. In many instances, only partial information about this distribution is available, or reliable, such as means, variances and covariances. A common approach in the literature is to assume that the underlying distribution is multivariate normal, and to solve the corresponding stochastic program. However, this assumption is hard to justify unless the randomness comes from a central-limit-theorem type of process.
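The quadratic-utility case above is easy to check numerically, since E[(x′R)²] = x′(Σ + µµ′)x. The sketch below (the helper name and numbers are ours, purely illustrative) evaluates the expected utility from (µ, Σ) alone and compares it against a Monte Carlo estimate under one particular member of the class:

```python
import numpy as np

def expected_quadratic_utility(x, mu, Sigma, a, b, c):
    """E[a(x'R)^2 + b(x'R) + c] for any R with mean mu and covariance Sigma:
    E[(x'R)^2] = x'(Sigma + mu mu')x, so only the first two moments matter."""
    m = x @ mu                # mean of x'R
    s2 = x @ Sigma @ x        # variance of x'R
    return a * (s2 + m ** 2) + b * m + c

mu = np.array([0.10, 0.05])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
x = np.array([0.6, 0.4])
exact = expected_quadratic_utility(x, mu, Sigma, -2.0, 3.0, 1.0)

# Monte Carlo check under one member of the class: the multivariate normal
rng = np.random.default_rng(0)
R = rng.multivariate_normal(mu, Sigma, size=200_000)
mc = np.mean(-2.0 * (R @ x) ** 2 + 3.0 * (R @ x) + 1.0)
assert abs(mc - exact) < 1e-2
```

The same Monte Carlo average under any other distribution with the same (µ, Σ) would agree, which is precisely why the quadratic case is distribution-free.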
Alternatively, the robust approach consists of basing the decision strictly on the available information, and maximizing the worst-case expected utility over all possible distributions with the given properties. Even if the underlying distribution is actually known, the robust approach can be used to provide tractable approximations for the solution of Problem (P), which is typically a very difficult problem. The robust problem is the focus of our investigation, and can be formulated as follows:

(G) max_{x∈P} min_{R~(µ,Σ)} E[u(x′R)],  (1)

where we write R ~ (µ, Σ) to represent the fact that the distribution of R belongs to the class M^n_(µ,Σ) of n-variate distributions with mean µ and covariance Σ. By a simple change of variable
ũ = −u we obtain the min-max version of Problem (G). All the results in this paper can be rephrased to hold for min-max problems, so for simplicity we restrict the analysis to the max-min case. As an example, consider a portfolio manager interested in choosing a portfolio x among n assets with returns R = (R_1, ..., R_n) in order to maximize the expected value or utility of his payoff (e.g., as induced by his incentive system). The optimization is subject to a set of portfolio constraints, such as P = {x ∈ [0, 1]^n : x′1_n = 1}. In practice, one seldom has full information about the joint distribution of asset returns, which are typically assumed multivariate normal and interpolated from very few data points (hence not robust). Very often the only reliable data consist of first- and second-order moments. For example, RiskMetrics (www.riskmetrics.com) provides very large data sets of asset covariances, but no additional distributional information. In this case, if the manager has access only to the mean and covariance matrix of the asset returns, and not to the entire distribution, he may be interested in the best- and worst-case return of a given portfolio, and in the portfolios that would maximize such a worst-case return. For instance, a risk-neutral manager incentivized by a bonus for performance above a given target return will maximize the worst-case probability of reaching that target level. There is a considerable amount of literature on robust portfolio optimization; see for example Goldfarb and Iyengar (2003), Rustem et al. (2000), El Ghaoui et al. (2003). We contribute to this stream by studying the portfolio example from the manager's perspective in Section 5. One can think of Problem (G) as a strategic game, where the first player chooses a decision x while nature picks the worst possible distribution of R (see e.g. Dupačová 1966).
The robust approach adopted in this paper is very much in the spirit of the max-min expected utility framework axiomatized by Gilboa and Schmeidler (1989). The concept of robustness, or incomplete knowledge about distributions, is captured in various ways in the stochastic optimization literature, the most common being uncertainty sets (Ben-Tal and Nemirovski 2002) and moments of the underlying distribution. Our model falls in the second category, and is related in that sense to a large literature including Scarf (1958), Birge and Wets (1987), Birge and Dulá (1991), Kall (1988), Dupačová (2001), Shapiro and Kleywegt (2002). For a given vector x, we refer to the problem

(I) U(x) = min_{R~(µ,Σ)} E[u(x′R)]  (2)

as the inner optimization problem and to the function U(x) as the robust objective (or criterion). There are three main steps in our solution to Problem (G).
1. First, we transform the inner optimization problem into an equivalent problem over univariate distributions with a given mean and variance. This reduces the original problem to a bicriteria optimization problem.
2. Second, we show that for some general classes of objective functions, this bicriteria objective reduces to a deterministic optimization problem in one variable.
3. Finally, based on these results, we provide conditions to reduce the original problem to solving a parametric quadratic program.

The first key result of our derivations is a new projection property for classes of multivariate distributions R with given mean vector µ and covariance matrix Σ. Based on this result, we show in Proposition 1 that for any vector x, Problem (I) reduces to an optimization problem over the class of projected univariate distributions r_x with given mean µ_x = x′µ and variance σ_x² = x′Σx, that is:

U(x) = min_{R~(µ,Σ)} E[u(x′R)] = min_{r_x~(µ_x,σ_x²)} E[u(r_x)].

The above problem is further reduced (Proposition 2) to optimizing over a restricted class of distributions with at most three support points. The robust objective U(x) can thus be cast as a deterministic optimization problem in at most three variables. This procedure reduces the robust stochastic optimization problem (G) to a deterministic bicriteria problem that depends only on the mean and variance of the random variable x′R. The results of Section 2 rely on the classical theory of moments (see e.g. Karlin and Studden 1966, Rogosinsky 1958) and require no assumptions on the objective function u. Section 3 provides conditions on the objective function u under which the robust objective U can be expressed as an optimization problem in one variable. This holds for so-called one- or two-point support functions u (see Definitions 1 and 3). Such functions are supported from below by a quadratic and intersect it at exactly one or two points.
Special cases include concave utilities with convex, concave or concave-convex derivatives, in particular corresponding to risk-averse and prudent/imprudent preferences, as well as some target-based piecewise polynomials. The resulting second-order moment bounds are refinements of the Jensen and Edmundson–Madansky inequalities (see Dulá and Murthy 1992, Dokov and Morton 2002) and extend results of Krein and Nudelman (1977) and Birge and Dulá (1991) to distributions with unbounded support. These results are also useful for sensitivity analysis in many decision sciences applications (see e.g. Smith 1995), in particular in stochastic programming when the underlying distribution is actually known, but difficult to work with (Birge and Wets 1987).
In Section 4, we use results from Geoffrion (1967a) to show (Proposition 9) that for some general classes of objective functions, the robust Problem (G) over a convex polyhedral set P is equivalent to solving the following parametric quadratic program (PQP):

(P_λ) max_{x∈P} λ x′µ − (1 − λ) x′Σx.  (3)

In particular, this is true for increasing and concave utilities u that satisfy the one- or two-point support property characterized in Section 3, which includes utilities with convex or concave-convex derivative. Section 5 illustrates our results with an application in the context of portfolio management incentives. We provide optimal robust portfolios corresponding to various (noncontinuous, nondifferentiable) reward systems for portfolio managers, involving bonuses, commissions and combinations thereof, as well as nonexpectation criteria such as value at risk. Our results show that the resulting robust portfolios are mean-variance efficient and solvable as PQPs. We obtain new results, as well as new proofs and insights for known technical results. In particular, we find that under bonus systems, a manager who assumes jointly normal return distributions chooses the same portfolio as the most pessimistic manager with the same given mean-covariance information. In Section 6 we propose two extensions of our results motivated by an application in multiproduct pricing. First, we show that our general model and solution naturally extend to the case where the distribution of R (its mean and covariance) depends on the decision variable x. Second, we attempt to extend our framework to the case of nonnegative random returns. While we prove that bicriteria solutions are generally not optimal, we provide a method to compute (weak) bicriteria bounds on the corresponding robust problems, and show how they can be solved using parametric quadratic programming. Our conclusions are summarized in the last section.
2. The Robust Objective

In this section we focus on Problem (I): for a nonzero vector x, we consider the problem U(x) = min_{R~(µ,Σ)} E[u(x′R)]. We show that this problem can be formulated as a bicriteria objective, which can be cast as a deterministic optimization problem in (at most) three variables. The results of this section hold without any assumptions on the objective u.

2.1. A General Projection Property

A lower bound on the robust objective U(x) can be obtained by optimizing over univariate distributions with mean µ_x and variance σ_x²:

U(x) = min_{R~(µ,Σ)} E[u(x′R)] ≥ min_{r~(µ_x,σ_x²)} E[u(r)].  (4)
This is because for any feasible random vector R ~ (µ, Σ) on the left-hand side, the corresponding projected random variable r_x = x′R has mean x′µ = µ_x and variance x′Σx = σ_x², hence r_x is feasible on the right-hand side. Our first key result is that, irrespective of the objective function u, relation (4) above holds with equality.

Proposition 1. For any objective function u and any given real vector x, we have:

min_{R~(µ,Σ)} E[u(x′R)] = min_{r~(µ_x,σ_x²)} E[u(r)].  (5)

The central idea behind Proposition 1 is that for any nonzero vector x, the optimization over random vectors R ~ (µ, Σ) is equivalent to solving the corresponding projected problem over the class of univariate distributions with mean µ_x and variance σ_x². We provide a constructive proof of the following stronger result: for any nonzero vector x and any distribution r ~ (µ_x, σ_x²), there exists a multivariate distribution R ~ (µ, Σ) such that r = x′R. Recall the notation M^n_(µ,Σ) for the set of probability measures on R^n with mean µ and covariance Σ (with the default superscript n = 1). Proposition 1 is a direct corollary of the following stronger result, proved in the Appendix:

Theorem 1 (General Projection Property). For any given n-vector µ, (n × n) matrix Σ ⪰ 0 and any nonzero n-vector x, the x-projection mapping r = x′R from n-variate distributions R ~ (µ, Σ) to univariate distributions r ~ (µ_x, σ_x²) is onto, meaning that every distribution r in M_(µ_x,σ_x²) can be obtained from some distribution R in M^n_(µ,Σ) by r = x′R.

2.2. Equivalent Deterministic Formulation

Based on Proposition 1, the general robust stochastic optimization problem (G) amounts to maximizing U(x) = min_{r~(µ_x,σ_x²)} E[u(r)] over the domain x ∈ P. This is a special case of a univariate moment problem, where only the first and second order moments (mean and variance) of the distribution are known.
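The easy ("only if") direction behind Proposition 1 can be sanity-checked by simulation: for any member R of M^n_(µ,Σ) (here a multivariate normal, chosen purely for convenience; the covariance and weights are our own illustrative numbers), the projected variable x′R must have mean x′µ and variance x′Σx. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, 2.0, 0.5])
A = rng.normal(size=(3, 3))
Sigma = A @ A.T                      # a generic positive semidefinite covariance
x = np.array([0.2, 0.5, 0.3])

# draw from one particular member of M^3_(mu, Sigma)
R = rng.multivariate_normal(mu, Sigma, size=400_000)
r = R @ x                            # the projection r_x = x'R

mu_x = x @ mu                        # projected mean
var_x = x @ Sigma @ x                # projected variance
assert abs(r.mean() - mu_x) < 0.01
assert abs(r.var() - var_x) < 0.02 * var_x
```

The deep direction of Theorem 1 (every univariate (µ_x, σ_x²) law arises as such a projection) is of course not something simulation can establish; it is proved constructively in the Appendix.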
In this section we show that such problems can be reduced to deterministic optimization problems in three variables. A classic result in the theory of moments, apparently due to Rogosinsky (1958, Theorem 1), is that moment problems with n moment constraints can be solved by optimizing only over discrete distributions with at most n + 1 support points (see Smith 1995 for a more accessible presentation). In our case, this result implies that the robust objective reduces to optimizing over (µ_x, σ_x²) distributions with at most three support points. We obtain the following deterministic formulation in three variables:
Proposition 2. For any objective u and x ≠ 0, the robust objective U can be calculated as:

U(x) = min p_a u(a) + p_b u(b) + p_c u(c)
s.t. a ≤ µ_x − σ_x²/(c − µ_x) ≤ b ≤ µ_x + σ_x²/(µ_x − a) ≤ c,  a ≤ µ_x ≤ c,  (6)

where

p_a = [σ_x² + (µ_x − b)(µ_x − c)] / [(a − b)(a − c)] if a < b,  p_a = 1 − [σ_x² + (µ_x − a)²]/(c − a)² if a = b;
p_c = [σ_x² + (µ_x − a)(µ_x − b)] / [(c − a)(c − b)] if b < c,  p_c = 1 − [σ_x² + (µ_x − c)²]/(c − a)² if b = c;
p_b = 1 − p_a − p_c.

Proof: It remains to show that if a distribution with at most three support points satisfies the (µ_x, σ_x²) moment constraints, then it is of the above form. Note that two-point distributions with mean µ_x and variance σ_x² assign mass p to the point µ_x + σ_x√((1−p)/p) and mass 1 − p to the point µ_x − σ_x√(p/(1−p)), with p ∈ (0, 1). When a = b or b = c above, we obtain precisely two-point distributions of this form. Suppose now that a < b < c. The expressions for the probabilities are equivalent to the mean-variance conditions. In order for these to be nonnegative, it is necessary and sufficient that

µ_x − σ_x²/(c − µ_x) ≤ b ≤ µ_x + σ_x²/(µ_x − a).

In particular, optimizing only over feasible two-point distributions in problem (6), we obtain the following upper bound on U(x):

U(x) ≤ min_{p∈(0,1)} p u(µ_x + σ_x√((1−p)/p)) + (1 − p) u(µ_x − σ_x√(p/(1−p))).  (7)

3. Simplified Formulation of the Robust Objective

This section investigates the case when the above upper bound is sharp. We provide conditions on the objective u for the optimum in Problem (I) to be achieved exactly or asymptotically by two-point distributions. This holds for a very general class of functions u (so-called one- or two-point support functions), including all monotone convex functions and all concave functions with convex, concave or concave-convex derivative, as well as some important piecewise linear classes. These results are important because they allow us to reduce the robust objective to an optimization problem in one variable.
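The probability formulas of Proposition 2 (for the nondegenerate case a < b < c) are straightforward to verify numerically; the sketch below uses illustrative numbers of our own choosing and checks that the resulting three-point distribution matches the prescribed moments:

```python
import numpy as np

def three_point_probs(a, b, c, mu, var):
    """Probabilities of the (unique) distribution on {a < b < c}
    with mean mu and variance var, as in Proposition 2."""
    p_a = (var + (mu - b) * (mu - c)) / ((a - b) * (a - c))
    p_c = (var + (mu - a) * (mu - b)) / ((c - a) * (c - b))
    p_b = 1.0 - p_a - p_c
    return p_a, p_b, p_c

a, b, c, mu, var = -1.0, 0.5, 2.0, 0.4, 0.9   # satisfies the constraints in (6)
p = np.array(three_point_probs(a, b, c, mu, var))
pts = np.array([a, b, c])

assert np.all(p >= 0) and abs(p.sum() - 1) < 1e-12
assert abs(p @ pts - mu) < 1e-12                        # mean constraint
assert abs(p @ pts ** 2 - (var + mu ** 2)) < 1e-12      # second-moment constraint
```

Feasibility of (a, b, c) relative to (µ_x, σ_x²), i.e. the inequality chain in (6), is exactly what makes all three probabilities nonnegative.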
Moreover, in certain cases, closed-form formulations of the robust objective can be obtained based on these results, as illustrated in Section 5. Finally, in the next section, we show that optimizing Problem (G) is relatively easy for one- and two-point support objectives. The key element of this section is the family Q = {(A, B, C) : q(y) = Ay² + By + C ≤ u(y) for all y ∈ R}, corresponding to the coefficients of quadratic functions dominated by u. Clearly, E[u(r)] ≥ E[q(r)] = A(µ_x² + σ_x²) + Bµ_x + C for any random variable r ~ (µ_x, σ_x²), implying:
Proposition 3. We have U(x) = min_{r~(µ_x,σ_x²)} E[u(r)] ≥ max_{(A,B,C)∈Q} A(µ_x² + σ_x²) + Bµ_x + C.

Strong duality for moment problems (see e.g. Karlin and Studden 1966) ensures that under certain regularity conditions (in our case amounting to σ_x > 0) equality holds in Proposition 3. We mention this result for its general interest; however, it will not be necessary for our work.

3.1. Two-Point Support Objectives

Consider (A₀, B₀, C₀) ∈ Q such that q₀(y) = A₀y² + B₀y + C₀ intersects u at two points a < b. Suppose that there exists a distribution r ~ (µ_x, σ_x²) with support {a, b}. It follows that E[u(r)] = E[q₀(r)] = A₀(µ_x² + σ_x²) + B₀µ_x + C₀. From Proposition 3, this must equal U(x). So in this case U(x) can be obtained by minimizing only over feasible two-point distributions, i.e. equality holds in (7). We formalize this result as follows:

Definition 1. A function u satisfies the two-point support property with respect to (µ, σ²) if there exists a quadratic function that supports u from below and intersects it at two points a, b such that a feasible (µ, σ²) distribution with support {a, b} exists. A function u satisfies the two-point support property if it satisfies it with respect to any (µ, σ²).

Proposition 4. If u has two-point support, then the robust objective equals:

U(x) = min_{p∈(0,1)} U(p, x), where U(p, x) = p u(µ_x + σ_x√((1−p)/p)) + (1 − p) u(µ_x − σ_x√(p/(1−p))).  (8)

A similar result for a special piecewise linear objective and nonnegative distributions has been derived by Scarf (1958), in the context of an inventory problem. Birge and Dulá (1991) use two-point support functions to obtain the above result over a bounded domain. Note that a function can have two-point support on an interval, but not on the whole real line, e.g. if a support point lies on the boundary. This can be seen from the following example:

Example 1.
Let u(y) = 1_{(µ−kσ, µ+kσ)}(y), where 1_S denotes the indicator function of a set S. We have that

min_{r~(µ,σ²)} E[u(r)] = min_{r~(µ,σ²)} P(|r − µ| < kσ) = max(0, 1 − 1/k²).

This is the two-sided Chebyshev inequality, achieved by the three-point distribution with mass 1/(2k²), 1 − 1/k², and 1/(2k²) at the points µ − kσ, µ, and µ + kσ, respectively. Note that minimizing over two-point distributions yields a strictly larger bound, namely max(1/(1+k²), k²/(1+k²)). In light of Proposition 4, u cannot have two-point support. On the other hand, the same objective function, defined say on [µ − kσ, ∞), has two-point support for k ≤ 1.
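The extremal three-point distribution of Example 1 can be checked directly; the helper name below is ours, and the check simply confirms the moment constraints and the attained Chebyshev bound:

```python
import numpy as np

def chebyshev_extremal(mu, sigma, k):
    """Three-point distribution attaining the two-sided Chebyshev bound
    P(|r - mu| < k*sigma) = 1 - 1/k^2 (nontrivial for k >= 1)."""
    pts = np.array([mu - k * sigma, mu, mu + k * sigma])
    p = np.array([1 / (2 * k ** 2), 1 - 1 / k ** 2, 1 / (2 * k ** 2)])
    return pts, p

mu, sigma, k = 0.0, 1.0, 2.0
pts, p = chebyshev_extremal(mu, sigma, k)

assert abs(p.sum() - 1) < 1e-12
assert abs(p @ pts - mu) < 1e-12                          # mean is mu
assert abs(p @ (pts - mu) ** 2 - sigma ** 2) < 1e-12      # variance is sigma^2
# only the middle atom lies strictly inside (mu - k sigma, mu + k sigma)
assert abs(p[1] - (1 - 1 / k ** 2)) < 1e-12
```

Since the worst case needs three atoms, no two-point distribution can attain the bound, which is the point of the example.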
Denote the two support points in relation (8) by a = µ_x − σ_x√(p/(1−p)) and b = µ_x + σ_x√((1−p)/p) (so a < b unless x = 0). For differentiable functions u we obtain:

U_p(p, x) = u(b) − u(a) − (b − a)·[u′(a) + u′(b)]/2.  (9)

The first-order condition for U(p, x), x ≠ 0, can be stated as:

[u(b) − u(a)]/(b − a) = [u′(a) + u′(b)]/2.  (10)

The optimum in (8) is either attained at a point where the above equality holds, or as p → 0 or p → 1. In the first case, the two-point support is precisely {a, b}. In the latter cases it degenerates into the one-point support case discussed in the next section. The remainder of the section provides conditions for a function to have two-point support. The next result is a technical restatement of Definition 1:

Lemma 1. The function u has two-point support if and only if for any µ and any σ > 0 there exist a < b and q_a, q_b such that the following conditions hold:
(a) (b − µ)(µ − a) = σ²;
(b) [u(b) − u(a)]/(b − a) = (q_a + q_b)/2;
(c) q(y) = Ay² + By + C ≤ u(y) for all y ∈ R, where A = (q_b − q_a)/(2(b − a)), B = (b q_a − a q_b)/(b − a), C = [b u(a) − a u(b)]/(b − a) − ab(q_a − q_b)/(2(b − a)).

Proof: From (b) and (c), it follows that q(a) = u(a) and q(b) = u(b), and q supports u. Condition (a) states that the intersection points a, b are the support of a (µ, σ²) distribution. Hence the conditions of Definition 1 are met. Conversely, if u has two-point support, then the conditions of the lemma are met for q_a = q′(a) and q_b = q′(b), where a, b and the quadratic q come from Definition 1.

By Proposition 3, the above result shows that:

Remark 1. If u satisfies the conditions of Lemma 1, then an optimal distribution for U(x), if it exists, has support {a, b} as defined by the conditions of the Lemma with respect to (µ_x, σ_x).

If u is differentiable at a, b, the construction in Lemma 1 implies q_a = q′(a) = u′(a), q_b = q′(b) = u′(b), i.e. q intersects u tangentially at a, b. In this case, condition (b) in Lemma 1 becomes precisely Eq. (10).
This condition says that the average value of the derivative u′ (if it exists) on [a, b] equals the average of the values of u′ at the end points a, b. Geometrically, the area under the graph of u′ between a and b equals the area of the trapezoid with vertices (a, 0), (a, u′(a)), (b, u′(b)), (b, 0). The existence of two such points a, b is ensured, for example, if u′ is nonincreasing and changes convexity at least once (between a and b). Moreover, we show that conditions (a) and (c) are satisfied if u′ is concave until some point and then convex, and has finite limits at ±∞.
Definition 2. A function f defined on R is convex-concave if there exists x₀ ∈ R such that f is convex on (−∞, x₀) and concave on (x₀, ∞). A function f defined on R is concave-convex if −f is convex-concave. A function f is S-shaped if it is increasing and convex-concave. A function f is inverse S-shaped if −f is S-shaped.

The following corollary of Lemma 1 is proved in the Appendix. An analogous result has been obtained by Birge and Dulá (1991, Theorem 5.1) for functions with bounded support.

Proposition 5. If u is differentiable such that u′ is inverse S-shaped with finite limits at ±∞, then u satisfies the two-point support property.

Furthermore, if u has two-point support, then by Definition 1, u(x) + q(x) also has two-point support for any quadratic q. Examples of functions satisfying these conditions include combinations of exponentials and logarithms (e.g. the log-logistic u(x) = C + log[1/(1 + e^{ax})] used in statistics and classification) and hyperbolic functions (e.g. the catenary u(x) = C − b cosh(ax) used in physics and engineering), where in all cases the constant C can be replaced by a concave quadratic. The conditions of Proposition 5 are not necessary, however, for a function to have two-point support. In general, two-point support functions need not be differentiable, or even continuous; such examples are investigated in Section 5. Furthermore, even for differentiable functions, neither concavity nor the concave-convex derivative condition is necessary for two-point support, as illustrated in the following counterexample.

Example 2. The function u(y) = y − ln(1 + y²) is increasing, but not concave (a plot of its derivative is provided in Figure 1). However, u has the two-point support property. Note that u′(y) + u′(−y) = 2 for all real y. Hence conditions (a) and (b) of Lemma 1 hold for b = −a = √(µ² + σ²) and q_a = u′(a), q_b = u′(b).
By construction, the quadratic q defined in part (c) of the Lemma intersects u tangentially at a, b. The convexity properties of u imply that q supports u. From Proposition 4, for a given vector x, we have that U(x) = min_{p∈(0,1)} U(p, x), where

U(p, x) = µ_x − p ln(1 + (µ_x + σ_x√((1−p)/p))²) − (1 − p) ln(1 + (µ_x − σ_x√(p/(1−p)))²).

Remark that lim_{p→0} U(p, x) = lim_{p→1} U(p, x) = u(µ_x) = µ_x − ln(1 + µ_x²), and U(p, x) is decreasing in p around p = 0 and increasing around p = 1, so the minimum is achieved somewhere in between. Suppose, for example, that for a certain x = x₀ we have µ_x = 0 and σ_x = 1. Then U(p, x₀) = p ln p + (1 − p) ln(1 − p), which is minimized for p = 1/2, and U(x₀) = −ln 2.
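Example 2 can be reproduced numerically from the two-point formula (8); the grid search below is a sketch of ours, not the paper's method, and recovers U(x₀) = −ln 2 at p = 1/2:

```python
import numpy as np

def u(y):
    return y - np.log1p(y ** 2)   # the objective of Example 2

def U_two_point(p, mu_x, sigma_x):
    """U(p, x) from Proposition 4: expectation of u under the two-point
    (mu_x, sigma_x^2) distribution indexed by p."""
    b = mu_x + sigma_x * np.sqrt((1 - p) / p)
    a = mu_x - sigma_x * np.sqrt(p / (1 - p))
    return p * u(b) + (1 - p) * u(a)

# Example 2 with mu_x = 0, sigma_x = 1: U(p) = p ln p + (1 - p) ln(1 - p)
ps = np.linspace(1e-4, 1 - 1e-4, 100_001)
vals = U_two_point(ps, 0.0, 1.0)

assert abs(vals.min() - (-np.log(2))) < 1e-8    # U(x_0) = -ln 2
assert abs(ps[np.argmin(vals)] - 0.5) < 1e-3    # attained near p = 1/2
```

Plotting `vals` against `ps` also shows the behavior noted in the text: the curve decreases near p = 0, increases near p = 1, and dips to −ln 2 in between.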
Figure 1. Plots of the functions u(y) = y − ln(1 + y²) and u′(y) = 1 − 2y/(1 + y²).

3.2. One-Point Support Objectives

The previous section provided a simplified expression for the robust objective U(x) when u has two-point support. In particular this is true when u is concave with concave-convex derivative. We now provide other conditions that allow a simplified formulation of U when u has one-point support. Relevant special cases include u concave with a convex or concave derivative, and u monotonic and convex. The one-point support property is a limiting case of two-point support, where the expected value of the supporting quadratic is approached by a sequence of feasible two-point distributions, with one point approaching the mean and the other arbitrarily large (or small). This is formally defined as follows:

Definition 3. A function u has the one-point support property with respect to (µ, σ²) if there exists a quadratic q that supports u and intersects it at µ, and either lim_{p→0} E[u(r_p)] = E[q(r_p)] or lim_{p→1} E[u(r_p)] = E[q(r_p)], where r_p is a (µ, σ²) random variable with support {µ + σ√((1−p)/p), µ − σ√(p/(1−p))}. A function u has the one-point support property if it satisfies it with respect to any (µ, σ²).

The following result follows trivially from Proposition 3.

Proposition 6. If u satisfies the one-point support property, then U(x) = min_{p∈(0,1)} U(p, x); specifically, either U(x) = lim_{p→0} U(p, x) or U(x) = lim_{p→1} U(p, x).

In the following we identify some general relevant classes of one-point support functions. Proposition 5 shows that concave functions with concave-convex derivative have two-point support. In the extreme case when the derivative is either convex or concave (x₀ = ±∞ in Definition 2), i.e.
the risk-averse agent is prudent or imprudent, we obtain one-point support functions. The proof of the next result is provided in the Appendix.

Proposition 7. If u is concave and twice differentiable with a monotonic second derivative, then u has one-point support. Specifically,
(a) if u′ is convex, then U(x) = u(µ_x) + lim_{y→−∞} u′′(y) · σ_x²/2 for all x;
(b) if u′ is concave, then U(x) = u(µ_x) + lim_{y→∞} u′′(y) · σ_x²/2 for all x.

Example 3. Consider a portfolio manager facing an exponential utility u(x) = 1 − e^{−ax}, with a > 0. Since lim_{y→−∞} u′′(y) = −∞, Proposition 7(a) shows that the robust solution should focus on maximizing the expected portfolio return µ_x while setting σ_x = 0, i.e. finding a perfect hedge. In particular, if Σ is nonsingular, the optimal robust solution is not to invest at all, as for any investment strategy there exists a corresponding distribution that makes the expected utility of the return arbitrarily small. (These are two-point distributions with an arbitrarily large loss, occurring with vanishing probability 1 − p.) While this is an extreme scenario, one should expect the robust approach to yield strategies that are by definition conservative. We now consider the case when u is monotonic and convex, so we need to optimize over concave quadratics that are tangent to u at exactly one point. The next result is proved in the Appendix.

Proposition 8. If u is monotone convex, then it has one-point support and U(x) = u(µ_x). More generally, the same result holds if u is convex with either u′(−∞) = 0 or u′(∞) = 0.

The class of one-point support functions includes the utility functions most commonly used for mathematical modeling: those whose (first three) derivatives alternate in sign, i.e. those consistent with (up to third order) stochastic dominance (see Brockett and Golden 1987).
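Proposition 8 can be illustrated with the monotone convex function u(y) = e^y (our own choice of example, not the paper's): by Jensen's inequality every two-point expectation stays above u(µ_x), and it approaches u(µ_x) as p → 1, with one support point drifting to −∞ while its mass vanishes:

```python
import numpy as np

u = np.exp   # monotone convex, so Proposition 8 gives U(x) = u(mu_x)

def E_two_point(p, mu_x, sigma_x):
    """Expectation of u under the two-point (mu_x, sigma_x^2) law indexed by p."""
    b = mu_x + sigma_x * np.sqrt((1 - p) / p)
    a = mu_x - sigma_x * np.sqrt(p / (1 - p))
    return p * u(b) + (1 - p) * u(a)

mu_x, sigma_x = 0.3, 0.8
ps = 1 - np.logspace(-1, -12, 50)        # p -> 1 along the grid
vals = E_two_point(ps, mu_x, sigma_x)

assert np.all(vals >= u(mu_x) - 1e-9)    # Jensen: E[u] >= u(mu_x) for convex u
assert abs(vals[-1] - u(mu_x)) < 1e-4    # infimum u(mu_x) approached as p -> 1
```

The infimum is approached but never attained by a two-point law with σ_x > 0, which is exactly the "asymptotic" flavor of the one-point support property.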
Examples include the standard exponentials, logarithms, power functions and hyperbolic functions (see Pratt 1964), sumex and linear-plus-exponential functions (Bell's one-switch functions, see Bell 1988), or more generally, any mixtures of exponentials. In summary, one- or two-point support functions are fairly general, including (but not limited to) convex monotone functions and concave functions with at most one third-derivative sign change. Some additional relevant nonsmooth classes include piecewise linear functions, discussed in the context of a financial application in Section 5.

4. Bicriteria Optimization via PQP

The previous sections focused on reducing the inner problem from an optimization problem over multivariate distributions to a deterministic optimization problem in one or three variables. The remaining step in the solution to Problem (G) is to optimize this robust objective.
This section exploits the fact that the robust objective depends on the decision variable x only through two criteria, the return's mean µ_x and standard deviation σ_x: U(x) ≡ V(µ_x, σ_x). This allows a simple and efficient parametric quadratic programming (PQP) formulation and solution of Problem (G), based on Geoffrion (1967a). For a fairly general class of functions u, we show that optimizing the corresponding robust objective is equivalent to a mean-variance optimization problem for a certain tradeoff coefficient, i.e. the resulting optimal solution of Problem (G) is mean-variance efficient. Based on Propositions 4 and 6, we argue that these results hold in particular for increasing and concave utilities with one- or two-point support, as well as for other nonconcave objectives, as illustrated in Section 5. Recall that a real function f defined on a convex set S is said to be quasiconcave if and only if for any λ ∈ [0, 1] and x, y ∈ S we have f(λx + (1 − λ)y) ≥ min(f(x), f(y)). Equivalently, f is quasiconcave whenever for any l, the upper level sets L_l = {x ∈ S : f(x) ≥ l} are convex. The following result, due to Geoffrion (1967a) (see also Geoffrion 1967b), provides general sufficient conditions for solving bicriteria problems efficiently as parametric quadratic programs.

Theorem 2 (Geoffrion 1967a). Consider the program

max_{x∈X} v(p₁(f₁(x)), p₂(f₂(x))),  (11)

where X ⊆ R^n is a convex set. Suppose that f₁, f₂, p₁(f₁), p₂(f₂) are concave on X, p₁ and p₂ are increasing on the image of X under f₁, f₂ respectively, v is nondecreasing in each coordinate and quasiconcave on the convex hull of the image of X under (p₁(f₁), p₂(f₂)), and all functions are continuous. Further assume that the optimal solution x*(λ) of the parametric program max_{x∈X} λf₁(x) + (1 − λ)f₂(x) is continuous for λ ∈ [0, 1]. Then the function w(λ) = v(p₁(f₁(x*(λ))), p₂(f₂(x*(λ)))) is continuous and unimodal on [0, 1].
Furthermore, if λ* is its maximizer, then x*(λ*) is optimal for (11).

Proposition 1 reduced Problem (G) to solving a bicriteria (mean-variance) optimization problem. Based on Theorem 2, the following result shows that under fairly unrestrictive conditions, Problem (G) can be optimized by solving a parametric quadratic program.

Proposition 9. Suppose that the robust objective function V(µ_x, σ_x) = min_{r~(µ_x,σ_x²)} E[u(r)] is continuous, nondecreasing in µ_x, nonincreasing in σ_x and quasiconcave. Then Problem (G) over a convex polyhedral set P is equivalent to solving the following parametric quadratic program:

(P_λ) max_{x∈P} λ x′µ − (1 − λ) x′Σx.  (12)
Specifically, if x*(λ) denotes the optimal solution of Problem (P_λ), then the function w(λ) = V(µ_{x*(λ)}, σ_{x*(λ)}) is continuous and unimodal in λ ∈ [0, 1]. Moreover, if λ* is its maximum, then x*(λ*) is optimal for Problem (G).

Proof: Let f_1(x) = x'µ and f_2(x) = −x'Σx, with p_1(f_1) = f_1 and p_2(f_2) = −(−f_2)^{1/2}; these are all concave, and the latter two are increasing. In the notation of Theorem 2 (with v(a, b) = V(a, −b)), we obtain v(p_1(f_1(x)), p_2(f_2(x))) = min_{r ~ (µ_x, σ_x²)} E[u(r)] = V(µ_x, σ_x). The assumptions on V ensure that v is nondecreasing in each coordinate and quasiconcave. Furthermore, the convex polyhedral structure of the feasible set P ensures that there exists a continuous optimal solution x*(λ) (see e.g. Wolfe 1959, Geoffrion 1967a). The result follows from Theorem 2.

It is reasonable to assume that the robust utility U(x) = V(µ_x, σ_x) is nondecreasing in the mean µ_x and nonincreasing in the standard deviation σ_x of the underlying return. This simply says that the decision maker prefers distributions with higher return and lower risk. Quasiconcavity can be interpreted as a diminishing marginal rate of substitution between mean and standard deviation, a property shared by most utility functions. When the actual utility function u is increasing and concave (as is typically assumed in decision analysis) and has one- or two-point support, these conditions are automatically satisfied. We obtain the following result:

Proposition 10. For increasing concave utilities with one- or two-point support, Problem (G) over a convex polyhedral set P is equivalent to solving the parametric quadratic program (P_λ) for a suitable value λ ∈ [0, 1].

Proof: From Propositions 6 and 4, respectively, we have that V(µ_x, σ_x) = min_{p ∈ (0,1)} V(p, µ_x, σ_x), where

    V(p, µ_x, σ_x) = p u(µ_x + √((1 − p)/p) σ_x) + (1 − p) u(µ_x − √(p/(1 − p)) σ_x).

Monotonicity of u implies that V(p, µ_x, σ_x) is nondecreasing in µ_x.
Concavity of u implies monotonicity of V(p, µ_x, σ_x) in σ_x, and concavity in (µ_x, σ_x). Since monotonicity and concavity are preserved under minimization, the same properties hold for V(µ_x, σ_x). The final result follows from Proposition 9.

In particular, the above results hold for increasing concave utilities with convex or concave-convex derivative. For more general utilities, one can attempt to check monotonicity of V in µ_x and σ_x, as well as quasiconcavity, using the results of Propositions 2, 4 or 6, depending on the support properties of the function u. In particular, for one- or two-point support functions, monotonicity in µ_x can be ensured by assuming that u is nondecreasing. Some examples are provided in the next section.
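To make the two-point reduction concrete, the following Python sketch evaluates V(µ_x, σ_x) = min_{p ∈ (0,1)} V(p, µ_x, σ_x) for one increasing concave utility with an asymptotically linear lower tail (so the minimum is finite). The grid search over p is our own illustrative device, not an algorithm from the paper.

```python
import math

def u(y):
    # an increasing, concave utility with asymptotically linear lower tail,
    # so the worst-case expected utility below is finite
    return math.log(1.0 / (1.0 + math.exp(-y)))

def V_p(p, mu, sigma):
    # expected utility of the two-point distribution with mean mu, std sigma:
    # mass p at mu + sqrt((1-p)/p)*sigma, mass 1-p at mu - sqrt(p/(1-p))*sigma
    hi = mu + math.sqrt((1.0 - p) / p) * sigma
    lo = mu - math.sqrt(p / (1.0 - p)) * sigma
    return p * u(hi) + (1.0 - p) * u(lo)

def V(mu, sigma, grid=2000):
    # robust (worst-case) expected utility given only the mean and variance
    return min(V_p((k + 0.5) / grid, mu, sigma) for k in range(grid))

# V inherits the claimed monotonicity: nondecreasing in the mean,
# nonincreasing in the standard deviation, and V(mu, sigma) <= u(mu)
print(V(1.0, 0.5), "<=", u(1.0))
```

The printed comparison reflects concavity of u: the worst-case value can never exceed the utility of the mean return.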
Remark that if u is concave and has one- or two-point support, Propositions 4 and 6 imply that U(x) is concave in x. So, alternatively, traditional computational methods for convex programming may also be suitable for solving Problem (G), without exploiting its bicriteria structure.

Several algorithms are available for solving parametric quadratic programs (Wolfe 1959, Geoffrion 1967a,b, Best 1996). They all yield an optimal solution x*(λ) that is continuous in λ. Geoffrion (1967a) solves (P_λ) with λ = 1 and decreases λ until the unimodal function w(λ) = v(µ_{x*(λ)}, σ_{x*(λ)}) achieves its maximum on [0, 1]. In our applications, the intermediate solutions are also of interest, as they yield a portion of the mean-variance tradeoff curve. More recent algorithms for PQP are due to Best (1996) and Arseneau and Best (1999). For a general treatment of parametric quadratic programs see Bank et al. (1982). Sensitivity analysis for quadratic programs in the mean-variance framework of Markowitz (1959) is discussed in Dupačová et al. (2002, Ch. II.7).

Example 4. In Problem (G), suppose that the constraint set is given by P = {x : Σ_i x_i = 1, x ≥ 0}, and let u(y) = ln(1/(1 + e^{−y})). Then u is increasing and concave, and its derivative u'(y) = e^{−y}/(1 + e^{−y}) is inverse S-shaped (see Figure 2). Therefore

    V(µ_x, σ_x) = min_{p ∈ (0,1)} [ p ln(1/(1 + e^{−(µ_x + √((1−p)/p) σ_x)})) + (1 − p) ln(1/(1 + e^{−(µ_x − √(p/(1−p)) σ_x)})) ]

is increasing in µ_x, decreasing in σ_x and concave in (µ_x, σ_x), i.e. it satisfies the conditions of Proposition 9. To solve Problem (G), we first compute x*(λ), the optimal solution of:

    (P_λ)    max λ x'µ − (1 − λ) x'Σx    s.t. Σ_i x_i = 1, x ≥ 0,

and then maximize over λ ∈ (0, 1) the function w(λ) = V(µ_{x*(λ)}, σ_{x*(λ)}), which is unimodal. This can be seen from Figure 3, where we display w(λ) for the following three data sets: µ_1 = (1, 3), Σ_1 = ( ), µ_2 = ( ), Σ_2 = ( ), and µ_3 = µ_2, Σ_3 = Σ_2.
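The two-stage procedure of Example 4 can be sketched in a few lines of Python. Since the example's data matrices do not survive in the text above, the two-asset values for µ and Σ below are hypothetical, and both the inner program (P_λ) and the outer λ-search are solved by brute-force grids standing in for the PQP algorithms cited earlier.

```python
import math

def u(y):
    # Example 4's utility u(y) = ln(1/(1 + e^{-y})): increasing and concave
    return math.log(1.0 / (1.0 + math.exp(-y)))

def V(mu, sigma, grid=1000):
    # robust objective via the two-point reduction (Propositions 4 and 6)
    best = float("inf")
    for k in range(grid):
        p = (k + 0.5) / grid
        hi = mu + math.sqrt((1 - p) / p) * sigma
        lo = mu - math.sqrt(p / (1 - p)) * sigma
        best = min(best, p * u(hi) + (1 - p) * u(lo))
    return best

# hypothetical two-asset data (the example's own matrices are not reproduced)
mu_vec = [1.0, 3.0]
Sigma = [[0.5, 0.1], [0.1, 2.0]]

def moments(t):
    # mean and variance of the portfolio x = (t, 1 - t) on the simplex P
    m = t * mu_vec[0] + (1 - t) * mu_vec[1]
    v = (t * t * Sigma[0][0] + 2 * t * (1 - t) * Sigma[0][1]
         + (1 - t) ** 2 * Sigma[1][1])
    return m, v

def solve_P_lambda(lam, grid=1000):
    # (P_lambda): max over x in P of lam * x'mu - (1 - lam) * x'Sigma x,
    # brute-forced over the one-dimensional simplex parametrization
    return max((k / grid for k in range(grid + 1)),
               key=lambda t: lam * moments(t)[0] - (1 - lam) * moments(t)[1])

# outer search: w(lambda) is unimodal, so a coarse grid (or bisection) suffices
best_lam, best_t, best_w = 0.0, 0.0, -float("inf")
for k in range(51):
    lam = k / 50
    t = solve_P_lambda(lam)
    m, v = moments(t)
    w = V(m, math.sqrt(v))
    if w > best_w:
        best_lam, best_t, best_w = lam, t, w
print("lambda* ~", best_lam, "  x* ~", (round(best_t, 3), round(1 - best_t, 3)))
```

A production implementation would replace both grids with a true parametric QP solver and a unimodal line search, but the structure — inner QP, outer one-dimensional search over λ — is exactly that of the example.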
The resulting optimal values w(λ*) are the desired optimal values of (G), achieved at λ* = .60, .72, and 1, respectively. The corresponding optimal solutions of Problem (G) are x*(λ*) = ( ), ( ) and ( ). All programs were run in MATLAB 5.3 under Windows XP.

Figure 2: Plots of the functions u(y) = ln(1/(1 + e^{−y})) and u'(y) = e^{−y}/(1 + e^{−y}).

Figure 3: Plots of the unimodal functions w_1(λ), w_2(λ), w_3(λ).

5. Application: Portfolio Management Incentives

In this section we present an application of our results in the context of financial portfolio management incentives. This application prompts us to analyze a set of special non-continuous/non-differentiable two-point support objectives, induced by bonus- and commission-based incentives, as well as non-expectation criteria such as value at risk. We obtain new results, as well as new proofs of, and insight into, some technically known results.

Consider the example described in the introduction: a portfolio manager is interested in a robust portfolio among a set of n assets with random returns R_1, ..., R_n. Suppose that the manager does not know the distribution of the random returns, but estimates them to have means µ and covariance structure Σ. The manager is interested in a robust portfolio that maximizes the worst-case expected payoff u(r), where r is the portfolio return achieved at the end of the evaluation period. This problem can be formulated as follows:

    max_{x ∈ P} min_{R ~ (µ, Σ)} E[u(x'R)],

where the underlying feasible set P is convex polyhedral and may take different forms, according to the type of portfolio problem of interest. For example, P = {x ∈ R^n : A'x ≤ b}, or in particular P = {x ∈ [0, 1]^n : x'1_n = 1}. The objective function u can either represent the agent's utility function for a reward proportional to the portfolio return, or the valuation induced by a more complex
compensation mechanism. For example, if a (risk-neutral) agent is rewarded a one-shot bonus for outperforming a certain index s, then u is precisely the cdf of s. According to Proposition 10, if u is increasing and concave (i.e. the agent is risk averse) with one- or two-point support, this problem amounts to solving the PQP (12) over the portfolio choice set P for a suitable value λ*. In particular, the resulting optimal portfolio is mean-variance efficient (a portfolio is mean-variance efficient if it has the highest expected return for a given variance or, equivalently, the smallest variance for a given expected return; see e.g. Markowitz 1959). The robust approach associates with each utility function u a worst-case risk factor κ* = (1 − λ*)/λ*, obtainable by the methods of Theorem 2 and Proposition 9, such that solving the robust optimization problem is equivalent to solving a deterministic problem with quadratic utility u_κ(x) = x'µ − κ* x'Σx. This is a deterministic mean-variance portfolio problem that can be solved by standard methods.

Consider, for comparison purposes, the alternative approach based on the assumption that the underlying distribution of returns is multivariate normal. In this case the problem can be formulated as follows:

    (N)    max_{x ∈ P} E[u(x'R)] = max_{x ∈ P} ∫ u(y) (1/√(2π x'Σx)) e^{−(y − x'µ)²/(2 x'Σx)} dy.

For increasing and concave utilities u, it is well known that solving the above problem yields a mean-variance efficient portfolio (see e.g. Kira and Ziemba 1977). Moreover, for exponential utilities the solution can be easily computed. For more general utilities, however, Problem (N) involves numerical integration.

In the following we investigate Problem (G) for some relevant non-differentiable criteria that arise from typical incentive schemes for portfolio managers. The resulting objectives do not satisfy the conditions of Proposition 10, so stronger results are needed.
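For intuition on how the robust criterion relates to Problem (N), the sketch below compares the normal-model expected utility — computed by simple trapezoidal quadrature, a stand-in for the numerical integration mentioned above — with the worst-case value over the two-point family, for one fixed (µ_x, σ_x). The utility and the numbers are illustrative assumptions, not data from the paper.

```python
import math

def u(y):
    # illustrative increasing concave utility
    return math.log(1.0 / (1.0 + math.exp(-y)))

def normal_value(mu, sigma, n=4001):
    # E[u(r)] for r ~ N(mu, sigma^2), trapezoidal quadrature on mu +/- 8 sigma
    lo, hi = mu - 8 * sigma, mu + 8 * sigma
    h = (hi - lo) / (n - 1)
    total = 0.0
    for k in range(n):
        y = lo + k * h
        weight = 0.5 if k in (0, n - 1) else 1.0
        pdf = math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += weight * u(y) * pdf
    return total * h

def robust_value(mu, sigma, grid=2000):
    # worst case over the two-point distributions with the same mean and variance
    best = float("inf")
    for k in range(grid):
        p = (k + 0.5) / grid
        a = mu + math.sqrt((1 - p) / p) * sigma
        b = mu - math.sqrt(p / (1 - p)) * sigma
        best = min(best, p * u(a) + (1 - p) * u(b))
    return best

# the normal distribution is one member of the (mu, sigma) class, so its
# expected utility can never fall below the worst-case value
print(robust_value(1.0, 0.5), "<=", normal_value(1.0, 0.5))
```

The gap between the two numbers is the price of distributional ignorance for this particular utility and moment pair.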
We first argue that the objective has two-point support, then calculate the robust objective U(x) = V(µ_x, σ_x) in closed form. Finally, we show that V satisfies the conditions of Proposition 9, hence the optimal robust portfolio is mean-variance efficient and the problem can be solved as a PQP.

Bonus and Commission Incentives. Portfolio managers, and sales managers more generally, are typically incentivized by a fixed salary plus a bonus β, contingent on the realization of a certain preset performance target, denoted α. It is often argued that fixed-target bonus systems are unappealing because managers have no direct incentive to outperform the target. As a remedy, managers are offered an additional commission rate γ for overachieving the target. Such a reward system, mixing a bonus β and a commission rate γ for output above the target α, can be modeled
as u(y) = γ(y − α)^+ + β 1_{(y > α)}. If the distribution of returns is normal, then the problem amounts to optimizing (numerically) the objective:

    γ σ_x Ψ((µ_x − α)/σ_x) + β Φ((µ_x − α)/σ_x),  where Ψ(a) = E[(Z + a)^+], Φ(a) = P(Z ≤ a), Z ~ N(0, 1).    (13)

Given only means and covariances of the asset returns, the manager maximizes over x ∈ P the corresponding robust criterion:

    U_α^+(x) = min_{R ~ (µ, Σ)} { γ E[(x'R − α)^+] + β P(x'R > α) } = min_{r ~ (µ_x, σ_x²)} { γ E[(r − α)^+] + β P(r > α) },    (14)

where the last equality follows by Proposition 1. The following result is proved in the Appendix.

Proposition 11. The optimal robust portfolio for a manager incentivized by a bonus β plus a commission rate γ for performance above a target α maximizes over x ∈ P the robust objective:

    U_α^+(x) = γ (µ_x − α)^+ + β ((µ_x − α)^+)² / (σ_x² + ((µ_x − α)^+)²).

Moreover, the optimal portfolio is mean-variance efficient and can be solved as a PQP.

In particular, with a commission-only reward (β = 0), the manager only cares about expected return. If the manager is only rewarded a bonus (γ = 0), the objective is to maximize the (worst-case) probability of achieving the given target α; Proposition 11 then reduces the optimal robust portfolio to maximizing (µ_x − α)^+/σ_x. This follows also from Proposition 1 and Chebyshev's inequality. For comparison, from (13), the corresponding problem with normal returns is equivalent to optimizing the portfolio's Sharpe ratio (µ_x − α)/σ_x (see e.g. Charnes and Cooper 1963). The interesting analogy between the robust and normal criteria can be interpreted as follows: consider two portfolio managers who have access to the same set of assets, with known means µ and covariance Σ, and are incentivized by the same achievable target α (i.e. there exists x ∈ P such that µ_x ≥ α). The first manager believes that stocks follow a multivariate normal distribution with these parameters, whereas the other manager ignores the distribution and adopts a worst-case approach.
This result says that the two managers, despite their different beliefs, will choose the same portfolio.

Penalty and Value-at-Risk (VaR) Incentives. One problem with the above incentives is that managers who are not on the winning track (i.e. have low chances of making the bonus) may undertake extremely risky strategies, with potentially disastrous consequences for the firm. As a remedy, downside incentive schemes are designed to minimize risk exposure; we consider two
examples, based on penalties and value at risk. With both upside and downside incentives in place, successful managers will attempt to maximize gains, while underperformers minimize losses.

Ross (2004) suggests an incentive system whereby managers are penalized for performance below a benchmark level α; this is modeled by an option-type payoff u(y) = (α − y)^+, with the unit penalty normalized w.l.o.g. to 1. The corresponding robust portfolio minimizes over x ∈ P the worst-case penalty U_α^−(x) = max_{R ~ (µ, Σ)} E[(α − x'R)^+]. Although this is a min-max formulation, a simple change-of-sign transformation converts it to max-min, and our standard results apply. The next result is proved in the Appendix. A direct proof of the univariate bound is due to Jagannathan (1977), who also provides solutions for nonnegative and symmetric distributions.

Proposition 12. The optimal robust portfolio for a manager incentivized by a linear penalty for performance below a target α minimizes over x ∈ P the robust criterion:

    U_α^−(x) = ½ [ (α − µ_x) + √(σ_x² + (α − µ_x)²) ].

Moreover, the optimal portfolio is mean-variance efficient and can be solved as a PQP.

Another relevant measure of downside risk, which financial firms, as well as portfolio and insurance managers, are often required to quote and optimize, is value at risk (see e.g. Linsmeier and Pearson 1996). The value at risk of a return x (or loss −x) at a confidence level α (typically α = 0.95 or α = 0.99) is defined as the following fractile criterion:

    VaR(α; x) = min{ ξ : P(−x < ξ) ≥ α } = −max{ z : P(x > z) ≥ α }.

The definition with strict inequality is needed for our results. However, weak inequalities can replace strict ones throughout if the return distributions are known to be continuous. For normal returns, Charnes and Cooper (1963) prove that VaR_N(α) = Φ^{−1}(α) σ_x − µ_x. In practice, return distributions are typically not normal, and exact computation of VaR may amount to difficult numerical integration.
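Before turning to worst-case VaR, Proposition 12's closed form can be checked numerically against the two-point family used throughout the paper. The grid search over p and the particular numbers are our own illustrative devices:

```python
import math

def penalty_bound(mu, sigma, alpha):
    # Proposition 12 / Jagannathan (1977): the worst-case expected penalty
    # max E[(alpha - r)^+] over all r with mean mu and standard deviation sigma
    d = alpha - mu
    return 0.5 * (d + math.sqrt(sigma ** 2 + d ** 2))

def two_point_penalty(p, mu, sigma, alpha):
    # E[(alpha - r)^+] under the two-point distribution with mean mu, std sigma
    hi = mu + math.sqrt((1 - p) / p) * sigma
    lo = mu - math.sqrt(p / (1 - p)) * sigma
    return p * max(alpha - hi, 0.0) + (1 - p) * max(alpha - lo, 0.0)

mu, sigma, alpha = 1.0, 0.5, 1.2  # illustrative numbers (our assumption)
worst_two_point = max(two_point_penalty((k + 0.5) / 4000, mu, sigma, alpha)
                      for k in range(4000))
print(worst_two_point, "~", penalty_bound(mu, sigma, alpha))
```

The maximum over the two-point family matches the closed form to grid precision, consistent with the penalty objective having two-point support.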
Robust (or worst-case) value at risk is defined as the largest VaR attainable, given only the mean and covariance of the returns. The robust portfolio with the VaR criterion solves:

    VaR(α) = −max_{x ∈ P} max{ z : min_{R ~ (µ, Σ)} P(x'R > z) ≥ α }
           = −max_{x ∈ P} max{ z : (µ_x − z)² / ((µ_x − z)² + σ_x²) ≥ α, z ≤ µ_x }.    (15)

The second formulation follows by Proposition 11 with γ = 0; the optimum is achieved when equality holds in the first constraint. Solving for z, we obtain the following result; a proof based on the multivariate Chebyshev inequality (Karlin and Studden 1966) is due to El Ghaoui et al. (2003).
Proposition 13. The optimal robust portfolio for a manager minimizing an α-VaR incentive solves min_{x ∈ P} √(α/(1 − α)) σ_x − µ_x, so it is mean-variance efficient.

Notice that the VaR objective is not an expectation; this suggests possible extensions of our results beyond the main framework of this paper.

6. A Multi-Product Pricing Application and Extensions

In this section we consider the optimal pricing decision of a firm (or sales manager) setting prices for a line of products so as to maximize the expected utility of profits (respectively, of his expected compensation). In this case the random returns are the demands for each product, which are nonnegative and depend on the set prices. This setting indicates two possible extensions of our original problem, which we address in this section: (1) we incorporate dependence of the random vector R on the decision variable x, and (2) we analyze the case of nonnegative random vectors R. We extend Proposition 9 to address the first issue, and we provide weak bounds for nonnegative random variables to address the second.

Consider a firm setting a vector of prices x for a set of n products with random demands R(x) = (R_1(x), ..., R_n(x)) in order to maximize the expected utility of profits, given by u(r), where r is the total profit. Suppose that the demands for the n products are correlated due to substitution and/or complementarity effects, and depend on the prices of all products. Demand is estimated to have mean µ(x) and covariance structure Σ(x). The variable cost per product is given by the vector c. The problem of the firm looking for a robust pricing scheme for the n products can be formulated as follows:

    (G̃)    max_{x ∈ P} min_{R ~ (µ(x), Σ(x))} E[u((x − c)'R(x))].

The marketing constraints on the feasible set of prices can be assumed to be of the following form: P = {x ∈ R^n_+ : A'x ≤ b, x_i^min ≤ x_i ≤ x_i^max}.

Returns Dependent on the Decision Variable. The above problem (G̃) is not a special case of the robust problem (G) studied in this paper, since the random return (in this case the demand), together with its mean and variance, actually depends on the decision variable (in this case the price x). While the results of this section are presented in the context of the specific pricing application, one can naturally extend them to the general abstract setting.