Mathematical Economics - Part I - Optimization. Filomena Garcia. Fall 2009.


1 Mathematical Economics - Part I - Optimization. Filomena Garcia. Fall 2009

2 Outline: 1 Optimization Problems in R^n: Definitions and Examples 2 Existence of Solutions 3 Unconstrained Optima 4 Equality Constraints: The Theorem of Lagrange 5 Inequality Constraints: The Theorem of Kuhn and Tucker 6 Convexity and Optimization 7 Quasiconvexity and Optimization 8 Continuity: The Maximum Theorem 9 Supermodularity and Monotonicity

3 An optimization problem in R^n is one where the values of a given function f : R^n → R are to be maximized or minimized over a given set D ⊆ R^n. The function f is called the objective function and the set D is called the constraint set. We denote the optimization problem as: max {f(x) | x ∈ D}. A solution to the problem is a point x ∈ D such that f(x) ≥ f(y) for all y ∈ D. We call f(D) the set of attainable values of f in D.

4 It is worth noting the following: 1 A solution to an optimization problem may not exist. Example: Let D = R_+ and f(x) = x; then f(D) = R_+ and sup f(D) = +∞, so the problem max {f(x) | x ∈ D} has no solution. 2 There may be multiple solutions to the optimization problem. Example: Let D = [−1, 1] and f(x) = x^2; then the maximization problem max {f(x) | x ∈ D} has two solutions: x = −1 and x = 1.

5 We will be concerned with the set of solutions to the optimization problem, knowing that this set could be empty. argmax {f(x) | x ∈ D} = {x ∈ D : f(x) ≥ f(y), ∀y ∈ D}. Two important results to bear in mind: 1 x is a maximum of f on D if and only if it is a minimum of −f on D. 2 Let ϕ : R → R be a strictly increasing function. Then x is a maximum of f on D if and only if x is also a maximum of the composition ϕ ∘ f on D.
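
To make these two facts concrete, here is a minimal numerical sketch (assuming numpy and scipy are available; the function f(x) = −(x − 1)^2 and the interval D = [−2, 2] are illustrative choices, not taken from the slides): maximizing f is carried out by minimizing −f, and composing f with the strictly increasing map ϕ(t) = e^t leaves the maximizer unchanged.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Maximize f(x) = -(x - 1)**2 on [-2, 2] by minimizing -f.
max_f = minimize_scalar(lambda x: (x - 1)**2,
                        bounds=(-2, 2), method='bounded')

# Maximize (phi o f)(x) = exp(-(x - 1)**2) by minimizing its negative.
max_phi_f = minimize_scalar(lambda x: -np.exp(-(x - 1)**2),
                            bounds=(-2, 2), method='bounded')

print(max_f.x, max_phi_f.x)   # both maximizers are approximately 1.0
```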

6 Optimization Problems in Parametric Form. Optimization problems are often presented in parametric form, i.e. the objective function and/or the feasible set depend on a parameter θ drawn from a set of feasible parameter values Θ. In this case, we denote the optimization problem as: max {f(x, θ) | x ∈ D(θ)}. In general, this problem has a solution which also depends on θ, i.e. x(θ) = argmax {f(x, θ) | x ∈ D(θ)}.

7 Problems: Utility Maximization. 1 Utility Maximization. In consumer theory the agent maximizes his utility from consuming x_i units of commodity i. The utility is given by u(x_1, ..., x_n). The constraint set is the set of feasible bundles given that the agent has an income I and the commodities are priced at p = (p_1, ..., p_n), i.e. B(p, I) = {x ∈ R^n_+ : p · x ≤ I}. Notice that this is a parametrized optimization problem, the parameters being (p, I). Other parameters may appear inside the utility function itself, e.g. Cobb-Douglas weights.
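
A minimal computational sketch of this problem (assuming numpy and scipy are available; the Cobb-Douglas weights, prices and income below are illustrative values, not from the slides):

```python
import numpy as np
from scipy.optimize import minimize

p = np.array([2.0, 5.0])        # prices (illustrative)
I = 100.0                       # income (illustrative)
alpha = np.array([0.3, 0.7])    # Cobb-Douglas weights (illustrative)

u = lambda x: alpha @ np.log(x)                        # utility to maximize
budget = {'type': 'ineq', 'fun': lambda x: I - p @ x}  # p.x <= I

res = minimize(lambda x: -u(x), x0=np.array([1.0, 1.0]), method='SLSQP',
               bounds=[(1e-9, None)] * 2, constraints=[budget])

print(res.x)   # close to the analytical demands alpha_i * I / p_i = (15, 14)
```

Since the weights sum to one, the numerical solution can be checked against the closed-form Cobb-Douglas demands x_i = α_i I / p_i.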

8 Problems: Expenditure Minimization. 2 Expenditure Minimization. This is the dual problem to utility maximization. It consists of minimizing expenditure subject to attaining at least a given level of utility ū. The problem is given by: min {p · x : u(x) ≥ ū}.

9 Problems: Profit Maximization. 3 Profit Maximization. This is the problem of a firm which produces a single output using n inputs through the production relationship y = g(x_1, ..., x_n). Given that it produces y units of the good, the firm may charge the price p(y), where p(y) is the inverse demand function. w denotes the vector of input prices. The firm chooses the input mix which maximizes its profits, i.e., max {p(g(x)) g(x) − w · x : x ∈ R^n_+}.

10 Problems: Cost Minimization. 4 Cost Minimization. This is the dual problem to profit maximization. It consists of minimizing the cost of producing at least ȳ units of output. The problem is given by: min {w · x : g(x) ≥ ȳ}.

11 Problems: Consumption-Leisure Choice. 5 Consumption-Leisure Choice. This is a model of the labor-supply decision of households in which income also becomes an object of choice. Let H be the number of hours the agent has available. The wage rate for labor time is w. Working for L units of time, the agent earns wL. The agent obtains utility from consumption x and from leisure time l. The agent's problem is: max {u(x, l) : p · x ≤ wL, L + l ≤ H}.
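
The consumption-leisure problem can be sketched numerically in the same spirit (numpy and scipy assumed; the log utility and the parameter values p, w, H below are illustrative choices):

```python
import numpy as np
from scipy.optimize import minimize

p, w, H = 1.0, 10.0, 16.0        # illustrative price, wage and time endowment

def neg_u(z):                     # z = (x, l, L): consumption, leisure, labor
    x, l, L = z
    return -(np.log(x) + np.log(l))

cons = [
    {'type': 'ineq', 'fun': lambda z: w * z[2] - p * z[0]},  # p x <= w L
    {'type': 'ineq', 'fun': lambda z: H - z[2] - z[1]},      # L + l <= H
]

res = minimize(neg_u, x0=np.array([1.0, 1.0, 1.0]), method='SLSQP',
               bounds=[(1e-9, None)] * 3, constraints=cons)

print(res.x)   # approximately (x, l, L) = (w*H/(2p), H/2, H/2) = (80, 8, 8)
```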

12 The goals of optimization theory are: 1 To identify the set of conditions on f and D under which the existence of solutions to optimization problems is guaranteed. 2 To obtain a characterization of the set of optimal points, namely: the identification of necessary conditions for an optimum, i.e. conditions that every solution must verify; the identification of sufficient conditions for an optimum, i.e. conditions such that any point that meets them is a solution.

13 The goals of optimization theory (cont.): The identification of conditions ensuring uniqueness of the solution. The development of a general theory of parametric variation in a parametrized family of optimization problems. For example: the identification of conditions under which the solution set varies continuously with the parameter θ; in problems where the parameters and actions have a natural ordering, the identification of conditions under which parametric monotonicity is verified, i.e. increasing the value of the parameter increases the value of the optimal action.

14 How we will proceed. Section 2 studies the question of existence of solutions. The main result is the Weierstrass Theorem, which provides a general set of conditions under which optimization problems are guaranteed to have both maxima and minima. From Section 3 to Section 5 we examine the necessary conditions for optima using differentiability assumptions on the underlying problem. We focus on local optima, given that differentiability is only a local property. Section 3 focuses on the situation in which the optimum occurs in the interior of the feasible set D. The main results are the necessary conditions that the objective function must meet at the optimum; sufficient conditions that identify optima are also provided.

15 How we will proceed. In Sections 4 and 5 some or all of the constraints determine the optima. Section 4 focuses on equality constraints; the main result is the Theorem of Lagrange (both necessary and sufficient conditions for optima are presented). Section 5 focuses on inequality constraints; the main result is the Theorem of Kuhn and Tucker, which describes necessary conditions for optima. Sections 6 and 7 turn to the study of sufficient conditions, i.e., conditions that, when met, identify points as global optima. These are mainly related to the convexity and quasiconvexity of the feasible sets and objective functions.

16 How we will proceed. Sections 8 and 9 address respectively the issues of parametric continuity and parametric monotonicity of solutions. The main result of Section 8 is the Maximum Theorem, which states that the continuity assumptions on the primitives of the optimization problem are inherited, although not entirely, by the solutions. Section 9 relies on the concept of supermodularity of the primitives to draw conclusions about the parametric monotonicity of solutions. The main result is the Theorem of Tarski.

17 Main Result: The Weierstrass Theorem. Let D ⊆ R^n be a compact set and let f : D → R be a continuous function on D. Then f attains a maximum and a minimum on D, i.e., there exist z_1, z_2 ∈ D such that f(z_1) ≥ f(x) ≥ f(z_2) for all x ∈ D.
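
As an illustration of the theorem (not part of the original slides), a continuous function evaluated on a fine grid of a compact interval gives a numerical approximation of the maximum and minimum whose existence is guaranteed; numpy is assumed, and the particular function is an arbitrary choice.

```python
import numpy as np

D = np.linspace(0.0, 4.0, 400001)     # fine grid over the compact set [0, 4]
f = np.sin(3 * D) + 0.5 * D           # a continuous function on D

z1, z2 = D[np.argmax(f)], D[np.argmin(f)]
print('maximum near x =', z1, 'with value', f.max())
print('minimum near x =', z2, 'with value', f.min())
```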

18 Some Definitions: Compact set: a set that is closed and bounded. Bounded set: a set that is contained within some finite ball. Closed set: a set that contains its frontier (valid even if the frontier is empty, as for R). Example: a closed interval in R is compact. Continuous function: for any point a in the domain, we have lim_{x→a} f(x) = f(a).

19 Notice: The Weierstrass Theorem gives us sufficient conditions for maxima and minima to exist; however, it does not tell us what happens when some, or even all, of the conditions of the Theorem are violated.

20 Failure of the Weierstrass Theorem: 1 Let D = R and f(x) = x^3. f is continuous but D is not compact (it is not bounded). Since f(D) = R, f does not attain a maximum or a minimum on D. 2 Let D = (0, 1) and f(x) = x. f is continuous but D is not compact (it is not closed). The set f(D) = (0, 1), so f does not attain a maximum or a minimum on D. 3 Let D = [−1, 1] and let f be given by f(x) = 0 for x = −1 or x = 1, and f(x) = x for −1 < x < 1. D is compact, but f fails to be continuous. f(D) is the open interval (−1, 1), so again no maximum or minimum is attained.

21 Intuition of the Proof of the Weierstrass Theorem: 1 First we show that under the stated conditions f(D) is a compact set, hence closed and bounded. 2 Second, it is shown that if A is a compact set in R, then max A and min A are always well defined, it never being the case that A fails to contain its supremum and infimum. Hence max f(D) and min f(D) exist and are attained at points of D.

22 The Weierstrass Theorem in Applications. Consider the applications mentioned in Section 1. The Utility Maximization Problem: maximize u(x) subject to x ∈ B(p, I) = {x ∈ R^n_+ : p · x ≤ I}. We assume u(x) to be continuous. By the Weierstrass Theorem, if B(p, I) is compact, we must have a solution. B(p, I) is closed and bounded, hence compact, whenever p ≫ 0.

23 The Weierstrass Theorem in Applications. The Cost Minimization Problem: minimize w · x over x ∈ F(y) = {x ∈ R^n_+ : g(x) ≥ y}. The objective function is continuous, but the feasible action set is not compact because it is not bounded. The Weierstrass Theorem neither guarantees nor precludes a solution. What can we do to guarantee a solution? Bound the set: restrict attention to a bounded subset of F(y) that must contain any solution.

24 Unconstrained optima. The constraints do not affect the optimal point; in other words, the optimum is interior. Local Maximum: A point x is a local maximum of f on D if there is r > 0 such that f(x) ≥ f(y) for all y ∈ D ∩ B(x, r). Global Maximum: A point x is a global maximum of f on D if f(x) ≥ f(y) for all y ∈ D.

25 First Order Conditions. Necessary Conditions: If x* is a local maximum (or minimum) of the function f on D, x* lies in the interior of D, and f is differentiable at x*, then Df(x*) = 0. Notice that the reverse is not necessarily true. Example: x = 0 for f(x) = x^3. Any point that satisfies the first order condition (FOC) Df(x) = 0 is a critical point of f on D.

26 Second Order Conditions. Sufficient Conditions: Suppose f is a C^2 function on D, and x* is a point in the interior of D. Then: If Df(x*) = 0 and D^2 f(x*) is negative definite, then x* is a strict local maximum of f on D. If Df(x*) = 0 and D^2 f(x*) is positive definite, then x* is a strict local minimum of f on D. Notice: these conditions only identify local optima (not global).

27 Second Order Conditions. Example: Let f(x) = 2x^3 − 3x^2 and D = R. The first order condition f'(x) = 0 identifies the critical points x = 0 and x = 1. At these points, the second order conditions give f''(0) = −6 < 0 and f''(1) = 6 > 0. Hence 0 is a local maximum and 1 is a local minimum. However, these are not global: in fact, this function has no global maximum or minimum (why?). The existence of a global maximum (resp. minimum) at an interior critical point is guaranteed if the function is concave (resp. convex) on D.
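
The computation above can be reproduced symbolically; a minimal sketch assuming sympy is installed:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = 2 * x**3 - 3 * x**2

critical = sp.solve(sp.diff(f, x), x)        # critical points: [0, 1]
for c in critical:
    print(c, sp.diff(f, x, 2).subs(x, c))    # f''(0) = -6, f''(1) = 6
```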

28 Constrained Problems. 1 It is not often that optimization problems have unconstrained solutions. 2 Typically some, or all, of the constraints will matter. 3 In this chapter we will look into constraints of the form g(x) = 0, where g : R^n → R^k. The problem: maximize f(x) s.t. g(x) = 0.

29 Lagrange Theorem - 2 variables, one restriction. We can solve graphically. The solution is:
(∂f/∂x_1) / (∂f/∂x_2) = (∂g/∂x_1) / (∂g/∂x_2),  g(x_1, x_2) = 0.   (1)

30 Lagrange Theorem - 2 variables, one restriction. We can solve by substitution. Since g(x_1, x_2) = 0 at the solution, and ∂g/∂x_2 ≠ 0, we have an implicit function (what is this?) x_2(x_1). So the problem becomes: max_{x_1} f(x_1, x_2(x_1)).

31 Lagrange Theorem - 2 variables, one restriction. The FOC is: ∂f/∂x_1 + (∂f/∂x_2)(dx_2/dx_1) = 0. By the IMPLICIT FUNCTION THEOREM: dx_2/dx_1 = −(∂g/∂x_1)/(∂g/∂x_2). The solution is:
(∂f/∂x_1) / (∂f/∂x_2) = (∂g/∂x_1) / (∂g/∂x_2),  g(x_1, x_2) = 0.   (2)

32 Lagrange Theorem - 2 variables, one restriction. We can rewrite the FOC of the previous problem as: ∂f/∂x_1 − (∂f/∂x_2)(∂g/∂x_1)/(∂g/∂x_2) = 0, or: ∂f/∂x_1 + λ ∂g/∂x_1 = 0, where λ = −(∂f/∂x_2)/(∂g/∂x_2), which is the FOC of the Lagrangean.

33 Lagrange Theorem. Let f, g : R^2 → R be C^1 functions. Suppose that x* = (x_1*, x_2*) is a local maximizer or minimizer of f subject to g(x_1, x_2) = 0. Suppose also that Dg(x*) ≠ 0. Then there exists a scalar λ* ∈ R such that (x_1*, x_2*, λ*) is a critical point of the Lagrangean function: L(x_1, x_2, λ) = f(x_1, x_2) + λ g(x_1, x_2). In other words, at (x_1*, x_2*, λ*): ∂L/∂x_1 = 0, ∂L/∂x_2 = 0 and ∂L/∂λ = 0.

34 Lagrange Theorem. The constraint qualification: we must assume Dg(x*) ≠ 0. (With many variables and restrictions, this amounts to requiring that the Jacobian matrix of g have rank equal to the number of constraints, i.e. that some square submatrix of it be invertible.) This condition allows us to use the constraint to define an implicit function. The inverse of Dg is used in computing λ*.

35 Lagrange Theorem - Necessary conditions. Notice: the Lagrange Theorem provides necessary but not sufficient conditions for local optima. Finding x and λ such that g(x) = 0 and the FOC of the Lagrangean are equal to zero does not guarantee that (x, λ) is indeed an optimum, even if Dg(x) ≠ 0.

36 Lagrange Theorem - Necessary conditions. Then why can we use the Lagrange method so often? Proposition: Suppose that the following two conditions hold: 1 A global optimum x* exists in the given problem. 2 The constraint qualification is met at x*. Then there exists λ* such that (x*, λ*) is a critical point of L. So, under the conditions of this Proposition, the Lagrangean method does identify the optimum.

37 Lagrange Theorem - Sufficient Conditions. Suppose that there are x* and λ* which satisfy the constraint qualification and solve the FOC of the Lagrangean. Then: if the bordered Hessian has a positive determinant, we have a local maximum; if the bordered Hessian has a negative determinant, we have a local minimum.

38 An Example: max x^2 − y^2 s.t. x^2 + y^2 = 1. First we verify that the conditions of the Proposition hold: 1 The function f is continuous and the constraint set is compact. 2 The constraint qualification, namely Dg(x, y) = (−2x, −2y) ≠ (0, 0), is met at every point of the constraint set. Then we can rely on the Theorem of Lagrange to pinpoint the optima: L(x, y, λ) = f(x, y) + λ g(x, y) = x^2 − y^2 + λ(1 − x^2 − y^2).

39 An Example (cont.): The first order conditions are: ∂L/∂x = 0 ⟹ 2x − 2λx = 0; ∂L/∂y = 0 ⟹ −2y − 2λy = 0; ∂L/∂λ = 0 ⟹ x^2 + y^2 = 1. The critical points are (1, 0, 1), (−1, 0, 1), (0, 1, −1), (0, −1, −1). The first two are global maximizers, and the last two are global minimizers.
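
The same critical points can be recovered by solving the first order conditions of the Lagrangean symbolically; a minimal sketch assuming sympy is installed:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
L = x**2 - y**2 + lam * (1 - x**2 - y**2)    # the Lagrangean from the example

critical = sp.solve([sp.diff(L, v) for v in (x, y, lam)],
                    [x, y, lam], dict=True)
for s in critical:
    print((s[x], s[y], s[lam]), 'f =', (x**2 - y**2).subs(s))
# Recovers (±1, 0) with f = 1 (the maxima) and (0, ±1) with f = -1 (the minima).
```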

40 Inequality constraints and Kuhn-Tucker. Most economic problems include non-negativity restrictions and inequalities in the optimization. For instance, it is natural to impose that consumption be non-negative, or work hours, or capital. In this chapter we will study problems of the form: max_x f(x) s.t. g(x) = 0 and h(x) ≥ 0. Our tool will be the Kuhn-Tucker Theorem.

41 Kuhn-Tucker Theorem - Maximization. Let f : R^n → R and h_i : R^n → R be C^1 functions, i = 1, ..., l. Suppose x* is a local maximum of f on D = {x ∈ R^n : h_i(x) ≥ 0, i = 1, ..., l}. Let E denote the set of effective (binding) constraints at x*, and let h_E = (h_i)_{i ∈ E}. Suppose that the rank of Dh_E(x*) equals |E|. Then there exists a vector λ* = (λ_1*, λ_2*, ..., λ_l*) such that the following conditions are met: 1 λ_i* ≥ 0 and λ_i* h_i(x*) = 0, for i = 1, ..., l. 2 Df(x*) + Σ_{i=1}^{l} λ_i* Dh_i(x*) = 0.

42 Kuhn-Tucker Theorem - Minimization. Let f : R^n → R and h_i : R^n → R be C^1 functions, i = 1, ..., l. Suppose x* is a local minimum of f on D. Let E denote the set of effective (binding) constraints at x*, and let h_E = (h_i)_{i ∈ E}. Suppose that the rank of Dh_E(x*) equals |E|. Then there exists a vector λ* = (λ_1*, λ_2*, ..., λ_l*) such that the following conditions are met: 1 λ_i* ≥ 0 and λ_i* h_i(x*) = 0, for i = 1, ..., l. 2 Df(x*) − Σ_{i=1}^{l} λ_i* Dh_i(x*) = 0.
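
A minimal numerical sketch of what it means to verify these conditions at a candidate point (numpy assumed; the helper names and the finite-difference gradients are illustrative, not part of the theorem). It is applied here to the inequality-constrained example analyzed a few slides below.

```python
import numpy as np

def numerical_grad(fun, x, eps=1e-6):
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for j in range(x.size):
        e = np.zeros_like(x); e[j] = eps
        g[j] = (fun(x + e) - fun(x - e)) / (2 * eps)
    return g

def check_kt_max(f, hs, x_star, lam_star, tol=1e-5):
    """Check the Kuhn-Tucker conditions for max f(x) s.t. h_i(x) >= 0."""
    x_star, lam_star = np.asarray(x_star, float), np.asarray(lam_star, float)
    hvals = np.array([h(x_star) for h in hs])
    stationarity = numerical_grad(f, x_star) + sum(
        lam * numerical_grad(h, x_star) for lam, h in zip(lam_star, hs))
    return {
        'stationarity': np.allclose(stationarity, 0.0, atol=tol),
        'primal feasibility': bool(np.all(hvals >= -tol)),
        'dual feasibility': bool(np.all(lam_star >= -tol)),
        'complementary slackness': bool(np.all(np.abs(lam_star * hvals) <= tol)),
    }

# max x^2 + y  s.t.  1 - x^2 - y^2 >= 0, at (sqrt(3)/2, 1/2) with lambda = 1
f = lambda z: z[0]**2 + z[1]
h = lambda z: 1 - z[0]**2 - z[1]**2
print(check_kt_max(f, [h], [np.sqrt(3) / 2, 0.5], [1.0]))   # all True
```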

43 Kuhn-Tucker Theorem - Some Observations. 1 A pair (x*, λ*) meets the FOC for a maximum in an inequality-constrained maximization problem if it satisfies h(x*) ≥ 0 and conditions 1 and 2 of the theorem. 2 Condition 1 is called the complementary slackness condition. 3 The K-T theorem provides only necessary conditions, and only for local optima which meet the constraint qualification. 4 For a minimum, condition 2 becomes Df(x*) − Σ_{i=1}^{l} λ_i* Dh_i(x*) = 0.

44 Kuhn-Tucker Theorem - Interpreting the conditions. Complementary Slackness condition: the Lagrange multipliers give us the effect on the maximum of relaxing the restriction. Hence we can have one of three situations: 1 the restriction holds with strict inequality; in that case relaxing the restriction will not change the maximum value attained: λ = 0 and h > 0. 2 the restriction holds with equality; in that case relaxing the restriction will increase the value attained: λ > 0 and h = 0. 3 the restriction holds with equality, but relaxing the restriction will leave the value attained unchanged: λ = 0 and h = 0.

45 Kuhn-Tucker Theorem - Constraint Qualification. Constraint Qualification: rank(Dh_E(x*)) = |E|. An example where the constraint qualification fails: let f(x, y) = −(x^2 + y^2) and h(x, y) = (x − 1)^3 − y^2. Consider the problem of maximizing f subject to h(x, y) ≥ 0. By inspection we see that f attains its maximum when x^2 + y^2 attains its minimum. The constraint forces x ≥ 1. So the maximum is obtained at (x*, y*) = (1, 0), a point where the constraint is binding. But Dh(x*, y*) = (3(x* − 1)^2, −2y*) = (0, 0), so the constraint qualification fails, and the conclusion of the K-T theorem fails too. In fact, Df(x*, y*) + λ Dh(x*, y*) = (−2, 0) ≠ (0, 0) for every λ.

46 Why the procedure usually works. In general we can rely on the K-T theorem due to the following proposition: Proposition: Suppose the following conditions hold: 1 A global maximum x* exists for the given inequality-constrained problem. 2 The constraint qualification is met at x*. Then there exists λ* such that (x*, λ*) is a critical point of L.

47 An Example. Let g(x, y) = 1 − x^2 − y^2. Consider the problem of maximizing f(x, y) = x^2 + y over the set D = {(x, y) : g(x, y) ≥ 0}. 1 The conditions of the Proposition are met: existence follows from the Weierstrass Theorem, and if the constraint binds, x and y cannot simultaneously be zero, hence rank(Dg) = 1. 2 We can use the K-T theorem. L(x, y, λ) = x^2 + y + λ(1 − x^2 − y^2). The conditions are: ∂L/∂x = 0 ⟹ 2x − 2λx = 0 (1); ∂L/∂y = 0 ⟹ 1 − 2λy = 0 (2); λ ≥ 0, 1 − x^2 − y^2 ≥ 0, λ(1 − x^2 − y^2) = 0.

48 An Example (cont.). From (1), either x = 0 or λ = 1. If λ = 1, (2) implies that y = 1/2, and x^2 + y^2 = 1 gives x = ±√3/2. If x = 0, we may have λ = 0 or λ > 0. Take the case λ > 0: then from the complementary slackness condition we must have y = ±1; y = −1 is inconsistent with (2), so y = 1 and λ = 1/2. The case λ = 0 is inconsistent with (2). So the critical points are (x, y, λ) = (±√3/2, 1/2, 1) and (x, y, λ) = (0, 1, 1/2). The value of the objective function is respectively 5/4 and 1, hence the first two points are the maxima. (Why not the minimum?)
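
The case-by-case argument above can be automated; a minimal sketch (sympy assumed) that solves the stationarity and complementary slackness equations and then keeps the solutions satisfying primal and dual feasibility:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x**2 + y
g = 1 - x**2 - y**2                  # the constraint, written as g >= 0
L = f + lam * g

eqs = [sp.diff(L, x), sp.diff(L, y), lam * g]    # FOC + complementary slackness
candidates = sp.solve(eqs, [x, y, lam], dict=True)
kt_points = [s for s in candidates if s[lam] >= 0 and g.subs(s) >= 0]

for s in kt_points:
    print((s[x], s[y], s[lam]), 'f =', f.subs(s))
# The points with f = 5/4, i.e. x = ±sqrt(3)/2 and y = 1/2, are the maxima.
```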

49 Interpreting the Lagrange Multipliers. The Lagrange multipliers of the Lagrange Theorem and of the Kuhn-Tucker Theorem can be interpreted as the sensitivity of the optimal value of the objective function to a relaxation of the restriction. Case 1: equality restrictions. Suppose that we have the following optimization problem: max_x f(x) s.t. g(x) + c = 0. We solve the problem by writing the FOC of the Lagrangean w.r.t. x and λ, namely: ∂L/∂x = ∂f(x)/∂x + λ ∂g(x)/∂x = 0; g(x) + c = 0. The optimum is x*(c) and the value of the function at the optimum is f(x*(c)).

50 We can compute the effect of c on the optimal value of the function: ∂f(x*(c))/∂c = (∂f(x*(c))/∂x)(∂x*(c)/∂c) = −λ (∂g(x*(c))/∂x)(∂x*(c)/∂c). Also, (∂g(x*(c))/∂x)(∂x*(c)/∂c) = −1 (because the constraint g(x*(c)) + c = 0 must be observed at the optimum), hence: ∂f/∂c = λ.
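
A symbolic check of this envelope-type result on a toy equality-constrained problem of my own choosing (sympy assumed; the particular f, g and the parameter c below are illustrative, not from the slides):

```python
import sympy as sp

x, lam, c = sp.symbols('x lambda c', real=True)
f = -(x - 2)**2
g = -x                                # constraint: g(x) + c = 0, i.e. x = c
L = f + lam * (g + c)

sol = sp.solve([sp.diff(L, x), g + c], [x, lam], dict=True)[0]
value = f.subs(x, sol[x])             # optimal value f(x*(c)) as a function of c

print(sp.simplify(sp.diff(value, c) - sol[lam]))   # prints 0: df*/dc = lambda
```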

51 Interpreting the Lagrange Multipliers. Case 2: inequality restrictions. Suppose that we have the following optimization problem: max_x f(x) s.t. h(x) + c ≥ 0. We solve the problem by writing the Kuhn-Tucker conditions, namely: ∂L/∂x = ∂f(x)/∂x + λ ∂h(x)/∂x = 0; λ ≥ 0, h(x) + c ≥ 0, λ(h(x) + c) = 0. The optimum is x*(c) and the value of the function at the optimum is f(x*(c)).

52 We can compute the effect of c on the optimal value of the function: ∂f(x*(c))/∂c = (∂f(x*(c))/∂x)(∂x*(c)/∂c) = −λ (∂h(x*(c))/∂x)(∂x*(c)/∂c). If the restriction is not binding, λ = 0 and hence, from the FOC, ∂f(x*(c))/∂x = 0, so ∂f/∂c = 0. If the restriction is binding, h(x*(c)) + c = 0, so (∂h(x*(c))/∂x)(∂x*(c)/∂c) = −1. In both cases: ∂f/∂c = λ.

53 Convexity and Optimization. The notion of convexity occupies a central position in the study of optimization theory. It encompasses not only the idea of convex sets, but also of concave and convex functions. The attractiveness of convexity for optimization theory arises from the fact that when an optimization problem meets suitable convexity conditions, the same first-order conditions that we have shown in previous sections to be necessary for local optima also become sufficient for global optima. Moreover, if the convexity conditions are tightened to what are called strict convexity conditions, we get the additional bonus of uniqueness of the solution.

54 Convexity Defined. We say that an optimization problem is convex if the objective function is concave (for a maximization) or convex (for a minimization) and the feasible set is convex. Convex set: a set D is convex if given x_1, x_2 ∈ D and λ ∈ [0, 1] we have λx_1 + (1 − λ)x_2 ∈ D. Concave function: a function f defined on a convex set D is concave if, given x_1, x_2 ∈ D and λ ∈ [0, 1], f(λx_1 + (1 − λ)x_2) ≥ λf(x_1) + (1 − λ)f(x_2). Convex function: a function f defined on a convex set D is convex if, given x_1, x_2 ∈ D and λ ∈ [0, 1], f(λx_1 + (1 − λ)x_2) ≤ λf(x_1) + (1 − λ)f(x_2).

55 Convexity Defined (cont.). Strictly concave function: a function f defined on a convex set D is strictly concave if, given x_1 ≠ x_2 ∈ D and λ ∈ (0, 1), f(λx_1 + (1 − λ)x_2) > λf(x_1) + (1 − λ)f(x_2). Strictly convex function: a function f defined on a convex set D is strictly convex if, given x_1 ≠ x_2 ∈ D and λ ∈ (0, 1), f(λx_1 + (1 − λ)x_2) < λf(x_1) + (1 − λ)f(x_2).
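
These inequalities are easy to spot-check numerically; the sketch below (numpy assumed, with an arbitrary choice of test functions) samples random pairs of points and reports whether a violation of the concavity inequality was found. A random check of this kind can refute concavity but of course cannot prove it.

```python
import numpy as np

def is_concave_sample(f, dim, trials=10000, seed=0, tol=1e-9):
    """Return False if a random violation of the concavity inequality is found."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x1, x2 = rng.normal(size=dim), rng.normal(size=dim)
        lam = rng.uniform()
        lhs = f(lam * x1 + (1 - lam) * x2)
        rhs = lam * f(x1) + (1 - lam) * f(x2)
        if lhs < rhs - tol:
            return False              # concavity violated at (x1, x2, lam)
    return True                       # no counterexample found (not a proof)

print(is_concave_sample(lambda x: -np.dot(x, x), dim=2))   # True: -||x||^2 is concave
print(is_concave_sample(lambda x: np.dot(x, x), dim=2))    # False: ||x||^2 is convex
```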

56 Implications of convexity. Every concave or convex function must also be continuous on the interior of its domain. Every concave or convex function must possess minimal differentiability properties. The concavity or convexity of a twice differentiable function f can be completely characterized in terms of the behavior of its second derivative D^2 f.

57 Convexity and Optimization: Some General Observations. Theorem 1: Suppose D ⊆ R^n is convex and f : D → R is concave. Then: 1 Any local maximum of f is a global maximum of f. 2 The set argmax {f(x) | x ∈ D} of maximizers of f on D is either empty or convex.

58 Convexity and Optimization: Some General Observations. Theorem 2: Suppose D ⊆ R^n is convex and f : D → R is strictly concave. Then: 1 Any local maximum of f is a global maximum of f. 2 The set argmax {f(x) | x ∈ D} of maximizers of f on D is either empty or contains a single point (uniqueness).

59 Convexity and Optimization. Theorem 3: Suppose D ⊆ R^n is convex and f : D → R is a concave and differentiable function on D. Then x is an unconstrained maximum of f on D if and only if Df(x) = 0.

60 Convexity and the Theorem of Kuhn-Tucker. Theorem 4: Suppose D ⊆ R^n is open and convex and f : D → R is a concave and differentiable function on D. For i = 1, 2, ..., l, let h_i : D → R also be concave. Suppose that there is some x̄ ∈ D such that h_i(x̄) > 0, i = 1, ..., l. Then x* maximizes f over the constraint set {x ∈ D : h_i(x) ≥ 0, i = 1, ..., l} if and only if there is λ* ∈ R^l such that the Kuhn-Tucker first order conditions hold.

61 Convexity and Optimization: The Slater Condition. The condition that there is some x̄ ∈ D such that h_i(x̄) > 0 for all i is called the Slater Condition. 1 The SC is used only in the proof that the K-T conditions are necessary at an optimum; it is not related to sufficiency. 2 The rank condition established in the K-T theorem can be replaced by the SC together with concavity of the constraints.

62 Quasiconcavity in Optimization. Convexity carries important implications for optimization; however, it is a quite restrictive assumption. For instance, such a commonly used utility function as the Cobb-Douglas function u(x_1, ..., x_n) = x_1^{α_1} ··· x_n^{α_n} is not concave unless Σ_{i=1}^{n} α_i ≤ 1. In this chapter we examine optimization under a weakening of the conditions of concavity and convexity, called quasi-concavity and quasi-convexity.

63 Quasiconcavity and Quasiconvexity Defined. Quasi-concave function: a function f defined on a convex set D is quasi-concave if and only if, for all x_1, x_2 ∈ D and for all λ ∈ (0, 1), f(λx_1 + (1 − λ)x_2) ≥ min{f(x_1), f(x_2)}. Quasi-convex function: a function f defined on a convex set D is quasi-convex if, for all x_1, x_2 ∈ D and for all λ ∈ (0, 1), f(λx_1 + (1 − λ)x_2) ≤ max{f(x_1), f(x_2)}.

64 Quasiconcavity and Quasiconvexity Defined (cont.). Strictly quasi-concave function: a function f defined on a convex set D is strictly quasi-concave if and only if, for all x_1 ≠ x_2 ∈ D and for all λ ∈ (0, 1), f(λx_1 + (1 − λ)x_2) > min{f(x_1), f(x_2)}. Strictly quasi-convex function: a function f defined on a convex set D is strictly quasi-convex if, for all x_1 ≠ x_2 ∈ D and for all λ ∈ (0, 1), f(λx_1 + (1 − λ)x_2) < max{f(x_1), f(x_2)}.
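
The Cobb-Douglas example from the introduction of this chapter can be used to see the difference between the two notions numerically; the sketch below (numpy assumed, with illustrative weights α = (2, 2), so that Σ α_i > 1) spot-checks the concavity and quasi-concavity inequalities on random points of the positive orthant. Again, random sampling can only refute, not prove, these properties.

```python
import numpy as np

def spot_check(f, trials=20000, seed=1, tol=1e-9):
    rng = np.random.default_rng(seed)
    concave, quasiconcave = True, True
    for _ in range(trials):
        x1 = rng.uniform(0.1, 5.0, 2)
        x2 = rng.uniform(0.1, 5.0, 2)
        lam = rng.uniform()
        mid = f(lam * x1 + (1 - lam) * x2)
        if mid < lam * f(x1) + (1 - lam) * f(x2) - tol:
            concave = False           # concavity inequality violated
        if mid < min(f(x1), f(x2)) - tol:
            quasiconcave = False      # quasi-concavity inequality violated
    return concave, quasiconcave

u = lambda x: x[0]**2 * x[1]**2       # Cobb-Douglas with weights (2, 2)
print(spot_check(u))                  # expected: (False, True)
```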

65 Quasiconcavity as a generalization of concavity. If a function f is concave on a convex set, it is also quasi-concave on that set; likewise, a convex function is quasi-convex. The reverse does not hold. Let f : R → R. If f is a nondecreasing function on R, then f is both quasi-concave and quasi-convex.

66 Implications of quasi-concavity and quasi-convexity. Quasi-concave and quasi-convex functions are not necessarily continuous on the interior of their domains. Quasi-concave and quasi-convex functions may have local optima that are not global optima. First order conditions are not sufficient to identify even local optima under quasi-concavity.

67 Quasiconcavity and Optimization. Theorem: Suppose that f : D → R and the h_i are strictly quasiconcave and D is convex. Suppose that there are x* and λ* such that the K-T conditions are satisfied. Then x* maximizes f over D provided at least one of the following conditions holds: 1 Df(x*) ≠ 0; 2 f is concave.

68 Quasiconcavity and Optimization. Theorem: Suppose that f : D → R is strictly quasiconcave and D is convex. Then any local maximum of f on D is also a global maximum of f on D. Moreover, the set of maximizers of f is either empty or a singleton. The same holds for minima and strictly quasiconvex functions.
