
Chapter 4

Constrained Optimization

4.1 Equality Constraints (Lagrangians)

Suppose we have a problem:

Maximize $5 - (x_1 - 2)^2 - 2(x_2 - 1)^2$
subject to $x_1 + 4x_2 = 3$

If we ignore the constraint, we get the solution $x_1 = 2$, $x_2 = 1$, which is too large for the constraint. Let us penalize ourselves for making the constraint too big. We end up with a function

$$L(x_1, x_2, \lambda) = 5 - (x_1 - 2)^2 - 2(x_2 - 1)^2 + \lambda(3 - x_1 - 4x_2)$$

This function is called the Lagrangian of the problem. The main idea is to adjust $\lambda$ so that we use exactly the right amount of the resource. $\lambda = 0$ leads to $(2, 1)$. $\lambda = 1$ leads to $(3/2, 0)$, which uses too little of the resource. $\lambda = 2/3$ gives $(5/3, 1/3)$, and the constraint is satisfied exactly.

We now explore this idea more formally. Given a nonlinear program (P) with equality constraints:

Minimize (or maximize) $f(x)$
subject to
$g_1(x) = b_1$
$g_2(x) = b_2$
$\vdots$
$g_m(x) = b_m$

a solution can be found using the Lagrangian:

$$L(x, \lambda) = f(x) + \sum_{i=1}^m \lambda_i (b_i - g_i(x))$$

(Note: this can also be written $f(x) - \sum_{i=1}^m \lambda_i (g_i(x) - b_i)$.) Each multiplier $\lambda_i$ gives the price associated with constraint $i$.
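The adjust-$\lambda$ idea can be checked numerically. The following short Python sketch (an illustration added here, not part of the original text; plain Python, no libraries) maximizes $L$ for a fixed $\lambda$ using the hand-derived stationarity conditions, then confirms that $\lambda = 2/3$ consumes exactly the available resource:

```python
# Lagrangian for:  max 5 - (x1-2)^2 - 2*(x2-1)^2   s.t.  x1 + 4*x2 = 3
# L(x1, x2, lam) = 5 - (x1-2)^2 - 2*(x2-1)^2 + lam*(3 - x1 - 4*x2)

def stationary_point(lam):
    """Maximize L over (x1, x2) for a fixed multiplier lam.

    dL/dx1 = -2*(x1 - 2) - lam   = 0  =>  x1 = 2 - lam/2
    dL/dx2 = -4*(x2 - 1) - 4*lam = 0  =>  x2 = 1 - lam
    """
    return 2 - lam / 2, 1 - lam

def resource_used(x1, x2):
    return x1 + 4 * x2

for lam in (0, 1, 2 / 3):
    x1, x2 = stationary_point(lam)
    print(lam, (x1, x2), resource_used(x1, x2))
# lam = 0 uses too much (6), lam = 1 too little (1.5),
# lam = 2/3 gives (5/3, 1/3) and uses exactly 3.
```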

The reason $L$ is of interest is the following: Assume $x^* = (x_1^*, x_2^*, \ldots, x_n^*)$ maximizes or minimizes $f(x)$ subject to the constraints $g_i(x) = b_i$, for $i = 1, 2, \ldots, m$. Then either

(i) the vectors $\nabla g_1(x^*), \nabla g_2(x^*), \ldots, \nabla g_m(x^*)$ are linearly dependent, or

(ii) there exists a vector $\lambda^* = (\lambda_1^*, \lambda_2^*, \ldots, \lambda_m^*)$ such that $\nabla L(x^*, \lambda^*) = 0$. I.e.

$$\frac{\partial L}{\partial x_1}(x^*, \lambda^*) = \cdots = \frac{\partial L}{\partial x_n}(x^*, \lambda^*) = \frac{\partial L}{\partial \lambda_1}(x^*, \lambda^*) = \cdots = \frac{\partial L}{\partial \lambda_m}(x^*, \lambda^*) = 0$$

Of course, Case (i) above cannot occur when there is only one constraint. The following example shows how it might occur.

Example 4.1.1 Minimize $x_1 + x_2 + x_3^2$ subject to

$x_1 = 1$
$x_1^2 + x_2^2 = 1$

It is easy to check directly that the minimum is achieved at $(x_1, x_2, x_3) = (1, 0, 0)$. The associated Lagrangian is

$$L(x_1, x_2, x_3, \lambda_1, \lambda_2) = x_1 + x_2 + x_3^2 + \lambda_1(1 - x_1) + \lambda_2(1 - x_1^2 - x_2^2)$$

Observe that

$$\frac{\partial L}{\partial x_2}(1, 0, 0, \lambda_1, \lambda_2) = 1 \quad \text{for all } \lambda_1, \lambda_2,$$

and so $\nabla L$ does not vanish at the optimal solution. The reason for this is the following. Let $g_1(x_1, x_2, x_3) = x_1$ and $g_2(x_1, x_2, x_3) = x_1^2 + x_2^2$ denote the left hand sides of the constraints. Then $\nabla g_1(1, 0, 0) = (1, 0, 0)$ and $\nabla g_2(1, 0, 0) = (2, 0, 0)$ are linearly dependent vectors. So Case (i) occurs here!

Nevertheless, Case (i) will not concern us in this course. When solving optimization problems with equality constraints, we will only look for solutions $x^*$ that satisfy Case (ii).

Note that the equation

$$\frac{\partial L}{\partial \lambda_i}(x^*, \lambda^*) = 0$$

is nothing more than

$$b_i - g_i(x^*) = 0, \quad \text{or} \quad g_i(x^*) = b_i.$$

In other words, taking the partials with respect to $\lambda$ does nothing more than return the original constraints.
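A quick numeric illustration of this degenerate example (a Python sketch added here, not part of the original argument):

```python
# Degenerate example:  min x1 + x2 + x3^2   s.t.  x1 = 1,  x1^2 + x2^2 = 1.
# At the optimum (1, 0, 0) the constraint gradients are parallel, and
# dL/dx2 = 1 no matter which multipliers we try, so grad L never vanishes.

def grad_g1(x1, x2, x3):
    # gradient of g1(x) = x1
    return (1.0, 0.0, 0.0)

def grad_g2(x1, x2, x3):
    # gradient of g2(x) = x1^2 + x2^2
    return (2.0 * x1, 2.0 * x2, 0.0)

def dL_dx2(x2, lam1, lam2):
    # dL/dx2 = 1 - 2*lam2*x2   (lam1 does not even appear)
    return 1.0 - 2.0 * lam2 * x2

g1 = grad_g1(1.0, 0.0, 0.0)   # (1, 0, 0)
g2 = grad_g2(1.0, 0.0, 0.0)   # (2, 0, 0) = 2 * g1: linearly dependent
print(g1, g2)

for lam2 in (-5.0, 0.0, 3.0):
    print(dL_dx2(0.0, 0.0, lam2))   # always 1.0, regardless of multipliers
```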

Once we have found candidate solutions $x^*$, it is not always easy to figure out whether they correspond to a minimum, a maximum, or neither. The following situation is one where we can conclude:

If $f(x)$ is concave and all of the $g_i(x)$ are linear, then any feasible $x^*$ with a corresponding $\lambda^*$ making $\nabla L(x^*, \lambda^*) = 0$ maximizes $f(x)$ subject to the constraints. Similarly, if $f(x)$ is convex and each $g_i(x)$ is linear, then any $x^*$ with a $\lambda^*$ making $\nabla L(x^*, \lambda^*) = 0$ minimizes $f(x)$ subject to the constraints.

Example 4.1.2 Minimize $2x_1^2 + x_2^2$ subject to $x_1 + x_2 = 1$.

$$L(x_1, x_2, \lambda) = 2x_1^2 + x_2^2 + \lambda(1 - x_1 - x_2)$$

$$\frac{\partial L}{\partial x_1}(x_1^*, x_2^*, \lambda^*) = 4x_1^* - \lambda^* = 0$$
$$\frac{\partial L}{\partial x_2}(x_1^*, x_2^*, \lambda^*) = 2x_2^* - \lambda^* = 0$$
$$\frac{\partial L}{\partial \lambda}(x_1^*, x_2^*, \lambda^*) = 1 - x_1^* - x_2^* = 0$$

Now, the first two equations imply $2x_1^* = x_2^*$. Substituting into the final equation gives the solution $x_1^* = 1/3$, $x_2^* = 2/3$, and $\lambda^* = 4/3$, with function value $2/3$.

Since $f(x_1, x_2)$ is convex (its Hessian matrix $H(x_1, x_2) = \begin{pmatrix} 4 & 0 \\ 0 & 2 \end{pmatrix}$ is positive definite) and $g(x_1, x_2) = x_1 + x_2$ is a linear function, the above solution minimizes $f(x_1, x_2)$ subject to the constraint.

Geometric Interpretation

There is a geometric interpretation of the conditions an optimal solution must satisfy. If we graph Example 4.1.2, we get a picture like that in Figure 4.1. Now, examine the gradients of $f$ and $g$ at the optimum point. They must point in the same direction, though they may have different lengths. This implies

$$\nabla f(x^*) = \lambda^* \nabla g(x^*)$$

which, along with the feasibility of $x^*$, is exactly the condition $\nabla L(x^*, \lambda^*) = 0$ of Case (ii).

Economic Interpretation

The values $\lambda_i^*$ have an important economic interpretation: if the right hand side $b_i$ of constraint $i$ is increased by $\Delta$, then the optimum objective value increases by approximately $\lambda_i^* \Delta$. In particular, consider the problem

Maximize $p(x)$ subject to $g(x) = b$,
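Because the stationarity system above is linear, the candidate point can be computed in closed form; here is a short Python check of the numbers (a sketch added for illustration):

```python
# Example: min 2*x1^2 + x2^2  s.t.  x1 + x2 = 1.
# Stationarity: 4*x1 = lam,  2*x2 = lam,  x1 + x2 = 1.

def solve_candidate(b=1.0):
    # From 4*x1 = 2*x2 = lam we get x2 = 2*x1, so 3*x1 = b.
    x1 = b / 3
    x2 = 2 * b / 3
    lam = 4 * x1
    return x1, x2, lam

def objective(x1, x2):
    return 2 * x1**2 + x2**2

x1, x2, lam = solve_candidate()
print(x1, x2, lam)         # x1 = 1/3, x2 = 2/3, lam = 4/3
print(objective(x1, x2))   # function value 2/3
```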

[Figure 4.1: Geometric interpretation. The level curve $f(x) = 2/3$ is tangent to the constraint line $g(x) = 1$ at $x^* = (1/3, 2/3)$, where $\nabla f = \lambda \nabla g$.]

where $p(x)$ is a profit to maximize and $b$ is a limited amount of resource. Then the optimum Lagrange multiplier $\lambda^*$ is the marginal value of the resource. Equivalently, if $b$ were increased by $\Delta$, profit would increase by $\lambda^* \Delta$. This is an important result to remember. It will be used repeatedly in your Managerial Economics course.

Similarly, if

Minimize $c(x)$ subject to $d(x) = b$

represents the minimum cost $c(x)$ of meeting some demand $b$, the optimum Lagrange multiplier $\lambda^*$ is the marginal cost of meeting the demand.

In Example 4.1.2 (Minimize $2x_1^2 + x_2^2$ subject to $x_1 + x_2 = 1$), if we change the right hand side from 1 to 1.05 (i.e. $\Delta = 0.05$), then the optimum objective function value goes from $\frac{2}{3}$ to roughly

$$\frac{2}{3} + \frac{4}{3}(0.05) = 0.7333.$$
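In Example 4.1.2 the optimal value can be written in closed form as a function of the right hand side $b$ (the optimum is $x_1 = b/3$, $x_2 = 2b/3$, giving value $2b^2/3$ and multiplier $\lambda^* = 4b/3$), so the first-order estimate can be compared against the exact change. A Python sketch added for illustration:

```python
# Sensitivity in Example 4.1.2:  min 2*x1^2 + x2^2  s.t.  x1 + x2 = b.
# Optimal value v(b) = 2*b^2/3, multiplier lam(b) = 4*b/3 = v'(b).

def optimal_value(b):
    return 2 * b**2 / 3

def multiplier(b):
    return 4 * b / 3

b, delta = 1.0, 0.05
exact = optimal_value(b + delta)
approx = optimal_value(b) + multiplier(b) * delta   # first-order estimate
print(exact, approx)
# exact = 0.735 versus approx = 0.7333...: the multiplier predicts the
# change to first order; the gap is the second-order term 2*delta^2/3.
```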

If instead the right hand side became 0.98, our estimate of the optimum objective function value would be

$$\frac{2}{3} + \frac{4}{3}(-0.02) = 0.64.$$

Example 4.1.3 Suppose we have a refinery that must ship finished goods to some storage tanks. Suppose further that there are two pipelines, A and B, to do the shipping. The cost of shipping $x$ units on A is $ax^2$; the cost of shipping $y$ units on B is $by^2$, where $a > 0$ and $b > 0$ are given. How can we ship $Q$ units while minimizing cost? What happens to the cost if $Q$ increases by $r\%$?

Minimize $ax^2 + by^2$
Subject to $x + y = Q$

$$L(x, y, \lambda) = ax^2 + by^2 + \lambda(Q - x - y)$$

$$\frac{\partial L}{\partial x}(x^*, y^*, \lambda^*) = 2ax^* - \lambda^* = 0$$
$$\frac{\partial L}{\partial y}(x^*, y^*, \lambda^*) = 2by^* - \lambda^* = 0$$
$$\frac{\partial L}{\partial \lambda}(x^*, y^*, \lambda^*) = Q - x^* - y^* = 0$$

The first two equations give $x^* = \frac{b}{a} y^*$, which leads to

$$x^* = \frac{bQ}{a+b}, \quad y^* = \frac{aQ}{a+b}, \quad \lambda^* = \frac{2abQ}{a+b}$$

and cost $\frac{abQ^2}{a+b}$. The Hessian matrix $H(x, y) = \begin{pmatrix} 2a & 0 \\ 0 & 2b \end{pmatrix}$ is positive definite since $a > 0$ and $b > 0$. So this solution minimizes cost, given $a$, $b$, $Q$.

If $Q$ increases by $r\%$, then the right hand side of the constraint increases by $\Delta = \frac{rQ}{100}$ and the minimum cost increases by approximately $\lambda^* \Delta = \frac{2abrQ^2}{100(a+b)}$. That is, the minimum cost increases by $2r\%$.

Example 4.1.4 How should one divide his/her savings between three mutual funds with expected returns 10%, 10% and 15% respectively, so as to minimize risk while achieving an expected return of 12%? We measure risk as the variance of the return on the investment (you will learn more about measuring risk in a later course): when a fraction $x$ of the savings is invested in Fund 1, $y$ in Fund 2 and $z$ in Fund 3, where $x + y + z = 1$, the variance of the return has been calculated to be

$$400x^2 + 800y^2 + 200xy + 1600z^2 + 400yz.$$

So your problem is

min $400x^2 + 800y^2 + 200xy + 1600z^2 + 400yz$
s.t. $x + y + 1.5z = 1.2$
$x + y + z = 1$

Using the Lagrangian method, the following optimal solution was obtained:

$$x^* = 0.5, \quad y^* = 0.1, \quad z^* = 0.4, \quad \lambda_1^* = 1800, \quad \lambda_2^* = -1380,$$
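The stated solution and multipliers can be verified by substituting back into the constraints, the stationarity conditions, and the variance; the Python check below (added here for illustration) also computes the first-order estimate for a 12.5% target return:

```python
# Portfolio example:  min 400x^2 + 800y^2 + 200xy + 1600z^2 + 400yz
#   s.t.  x + y + 1.5z = 1.2   (expected return)
#         x + y + z = 1        (budget)

def variance(x, y, z):
    return 400*x**2 + 800*y**2 + 200*x*y + 1600*z**2 + 400*y*z

x, y, z = 0.5, 0.1, 0.4
lam1, lam2 = 1800.0, -1380.0

# Feasibility of the stated solution (both constraints hold, up to rounding)
print(x + y + 1.5*z, x + y + z)

# Stationarity: grad f = lam1 * grad g1 + lam2 * grad g2  (each line ~0)
print(800*x + 200*y - lam1 - lam2)
print(1600*y + 200*x + 400*z - lam1 - lam2)
print(3200*z + 400*y - 1.5*lam1 - lam2)

print(variance(x, y, z))                 # ~390, as stated

# First-order estimate if the target return rises to 12.5% (Delta = 0.05):
est = variance(x, y, z) + lam1 * 0.05    # ~480
print(est)
```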

where $\lambda_1^*$ is the Lagrange multiplier associated with the first constraint and $\lambda_2^*$ with the second constraint. The corresponding objective function value (i.e. the variance on the return) is 390.

If an expected return of 12.5% was desired (instead of 12%), what would be (approximately) the corresponding variance of the return? Since $\Delta = 0.05$, the variance would increase by $\lambda_1^* \Delta = 1800 \times 0.05 = 90$. So the answer is $390 + 90 = 480$.

Exercise 34 Record'm Records needs to produce 100 gold records at one or more of its three studios. The cost of producing $x$ records at studio 1 is $10x$; the cost of producing $y$ records at studio 2 is $2y^2$; the cost of producing $z$ records at studio 3 is $z^2 + 8z$.

(a) Formulate the nonlinear program of producing the 100 records at minimum cost.
(b) What is the Lagrangian associated with your formulation in (a)?
(c) Solve this Lagrangian. What is the optimal production plan?
(d) What is the marginal cost of producing one extra gold record?
(e) Union regulations require that exactly 60 hours of work be done at studios 2 and 3 combined. Each gold record requires 4 hours at studio 2 and 2 hours at studio 3. Formulate the problem of finding the optimal production plan, give the Lagrangian, and give the set of equations that must be solved to find the optimal production plan. It is not necessary to actually solve the equations.

Exercise 35 (a) Solve the problem

max $2x + y$ subject to $4x^2 + y^2 = 8$

(b) Estimate the change in the optimal objective function value when the right hand side increases by 5%, i.e. the right hand side increases from 8 to 8.4.

Exercise 36 (a) Solve the following constrained optimization problem using the method of Lagrange multipliers:

max $\ln x + 2\ln y + 3\ln z$ subject to $x + y + z = 60$

(b) Estimate the change in the optimal objective function value if the right hand side of the constraint increases from 60 to 65.

4.2 Equality and Inequality Constraints

How do we handle both equality and inequality constraints in (P)? Let (P) be:

Maximize $f(x)$
Subject to
$g_1(x) = b_1$
$\vdots$
$g_m(x) = b_m$
$h_1(x) \le d_1$
$\vdots$
$h_p(x) \le d_p$

If you have a program with $\ge$ constraints, convert it into this form by multiplying by $-1$. Also convert a minimization to a maximization. The Lagrangian is

$$L(x, \lambda, \mu) = f(x) + \sum_{i=1}^m \lambda_i (b_i - g_i(x)) + \sum_{j=1}^p \mu_j (d_j - h_j(x))$$

The fundamental result is the following:

Assume $x^* = (x_1^*, x_2^*, \ldots, x_n^*)$ maximizes $f(x)$ subject to the constraints $g_i(x) = b_i$, for $i = 1, 2, \ldots, m$, and $h_j(x) \le d_j$, for $j = 1, 2, \ldots, p$. Then either

(i) the vectors $\nabla g_1(x^*), \ldots, \nabla g_m(x^*), \nabla h_1(x^*), \ldots, \nabla h_p(x^*)$ are linearly dependent, or

(ii) there exist vectors $\lambda^* = (\lambda_1^*, \ldots, \lambda_m^*)$ and $\mu^* = (\mu_1^*, \ldots, \mu_p^*)$ such that

$$\nabla f(x^*) - \sum_{i=1}^m \lambda_i^* \nabla g_i(x^*) - \sum_{j=1}^p \mu_j^* \nabla h_j(x^*) = 0$$
$$\mu_j^* (h_j(x^*) - d_j) = 0 \quad \text{(Complementarity)}$$
$$\mu_j^* \ge 0$$

In this course, we will not concern ourselves with Case (i). We will only look for candidate solutions $x^*$ for which we can find $\lambda^*$ and $\mu^*$ satisfying the equations in Case (ii) above. In general, to solve these equations, you begin with complementarity and note that either $\mu_j^*$ must be zero or $h_j(x^*) - d_j = 0$. Based on the various possibilities, you come up with one or more candidate solutions. If there is an optimal solution, then one of your candidates will be it.

The above conditions are called the Kuhn-Tucker (or Karush-Kuhn-Tucker) conditions. Why do they make sense? For $x^*$ optimal, some of the inequalities will be tight and some not. Those not tight can be ignored (and will have corresponding price $\mu_j^* = 0$). Those that are tight can be treated as equalities, which leads to the previous Lagrangian stuff. So

the complementarity condition

$$\mu_j^* (h_j(x^*) - d_j) = 0 \quad \text{(Complementarity)}$$

forces either the price $\mu_j^*$ to be 0 or the constraint to be tight.

Example 4.2.1 Maximize $x^3 - 3x$ subject to $x \le 2$.

The Lagrangian is

$$L = x^3 - 3x + \mu(2 - x)$$

So we need

$$3x^2 - 3 - \mu = 0$$
$$x \le 2$$
$$\mu(2 - x) = 0$$
$$\mu \ge 0$$

Typically, at this point we must break the analysis into cases depending on the complementarity conditions.

If $\mu = 0$ then $3x^2 - 3 = 0$, so $x = 1$ or $x = -1$. Both are feasible. $f(1) = -2$, $f(-1) = 2$.

If $x = 2$ then $\mu = 9$, which again is feasible. Since $f(2) = 2$, we have two solutions: $x = -1$ and $x = 2$.

Example 4.2.2 Minimize $(x - 2)^2 + 2(y - 1)^2$ subject to

$x + 4y \le 3$
$x \ge y$

First we convert to standard form, to get

Maximize $-(x - 2)^2 - 2(y - 1)^2$
Subject to
$x + 4y \le 3$
$-x + y \le 0$

$$L(x, y, \mu_1, \mu_2) = -(x - 2)^2 - 2(y - 1)^2 + \mu_1(3 - x - 4y) + \mu_2(0 + x - y)$$

which gives optimality conditions

$$-2(x - 2) - \mu_1 + \mu_2 = 0$$
$$-4(y - 1) - 4\mu_1 - \mu_2 = 0$$
$$\mu_1(3 - x - 4y) = 0$$

$$\mu_2(x - y) = 0$$
$$x + 4y \le 3$$
$$-x + y \le 0$$
$$\mu_1, \mu_2 \ge 0$$

Since there are two complementarity conditions, there are four cases to check:

$\mu_1 = 0$, $\mu_2 = 0$: gives $x = 2$, $y = 1$, which is not feasible.
$\mu_1 = 0$, $x - y = 0$: gives $x = 4/3$, $y = 4/3$, $\mu_2 = -4/3$, which is not feasible.
$\mu_2 = 0$, $3 - x - 4y = 0$: gives $x = 5/3$, $y = 1/3$, $\mu_1 = 2/3$, which is O.K.
$3 - x - 4y = 0$, $x - y = 0$: gives $x = 3/5$, $y = 3/5$, $\mu_1 = 22/25$, $\mu_2 = -48/25$, which is not feasible.

Since it is clear that there is an optimal solution, $x = 5/3$, $y = 1/3$ is it!

Economic Interpretation

The economic interpretation is essentially the same as in the equality case. If the right hand side of a constraint is changed by a small amount $\Delta$, then the optimal objective changes by $\mu^* \Delta$, where $\mu^*$ is the optimal Lagrange multiplier corresponding to that constraint. Note that if the constraint is not tight, then the objective does not change (since then $\mu^* = 0$).

Handling Nonnegativity

A special type of constraint is nonnegativity. If you have a constraint $x_k \ge 0$, you can write this as $-x_k \le 0$ and use the above result. This constraint would get a Lagrange multiplier of its own, and would be treated just like every other constraint.

An alternative is to treat nonnegativity implicitly. If $x_k$ must be nonnegative:

1. Change the equality associated with its partial to a less-than-or-equal:

$$\frac{\partial f}{\partial x_k} - \sum_i \lambda_i \frac{\partial g_i}{\partial x_k} - \sum_j \mu_j \frac{\partial h_j}{\partial x_k} \le 0$$

2. Add a new complementarity constraint:

$$\left( \frac{\partial f}{\partial x_k} - \sum_i \lambda_i \frac{\partial g_i}{\partial x_k} - \sum_j \mu_j \frac{\partial h_j}{\partial x_k} \right) x_k = 0$$

3. Don't forget that $x_k \ge 0$ for $x$ to be feasible.
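The four-case analysis of Example 4.2.2 can be automated. The sketch below (added here for illustration, using exact rational arithmetic from Python's standard library) re-checks each candidate's stationarity and feasibility and confirms that only the third case survives:

```python
from fractions import Fraction as F

# Example: max -(x-2)^2 - 2*(y-1)^2  s.t.  x + 4y <= 3,  -x + y <= 0.
# Candidates from the four complementarity cases, as (x, y, mu1, mu2):
cases = {
    "mu1 = 0, mu2 = 0":     (F(2),    F(1),    F(0),      F(0)),
    "mu1 = 0, x = y":       (F(4, 3), F(4, 3), F(0),      F(-4, 3)),
    "mu2 = 0, x + 4y = 3":  (F(5, 3), F(1, 3), F(2, 3),   F(0)),
    "x + 4y = 3, x = y":    (F(3, 5), F(3, 5), F(22, 25), F(-48, 25)),
}

def feasible(x, y, mu1, mu2):
    # primal feasibility plus nonnegative multipliers
    return x + 4*y <= 3 and y <= x and mu1 >= 0 and mu2 >= 0

survivors = []
for name, (x, y, mu1, mu2) in cases.items():
    # Every candidate satisfies stationarity (that is how it was derived):
    assert -2*(x - 2) - mu1 + mu2 == 0
    assert -4*(y - 1) - 4*mu1 - mu2 == 0
    if feasible(x, y, mu1, mu2):
        survivors.append((x, y))

print(survivors)   # only x = 5/3, y = 1/3 survives
```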

Sufficiency of Conditions

The Karush-Kuhn-Tucker conditions give us candidate optimal solutions $x^*$. When are these conditions sufficient for optimality? That is, given $x^*$ with $\lambda^*$ and $\mu^*$ satisfying the KKT conditions, when can we be certain that it is an optimal solution? The most general condition available is:

1. $f(x)$ is concave, and
2. the feasible region forms a convex region.

While it is straightforward to determine whether the objective is concave by computing its Hessian matrix, it is not so easy to tell whether the feasible region is convex. A useful condition is as follows:

The feasible region is convex if all of the $g_i(x)$ are linear and all of the $h_j(x)$ are convex.

If this condition is satisfied, then any point that satisfies the KKT conditions gives a point that maximizes $f(x)$ subject to the constraints.

Example 4.2.3 Suppose we can buy a chemical for \$10 per ounce. There are only 17.25 oz available. We can transform this chemical into two products: A and B. Transforming to A costs \$3 per oz, while transforming to B costs \$5 per oz. If $x_1$ oz of A are produced, the price we command for A is \$$(30 - x_1)$; if $x_2$ oz of B are produced, the price we get for B is \$$(50 - 2x_2)$. How much chemical should we buy, and what should we transform it to?

There are many ways to model this. Let's let $x_3$ be the amount of chemical we purchase. Here is one model:

Maximize $x_1(30 - x_1) + x_2(50 - 2x_2) - 3x_1 - 5x_2 - 10x_3$
Subject to
$x_1 + x_2 - x_3 \le 0$
$x_3 \le 17.25$

The KKT conditions are the above feasibility constraints along with:

$$30 - 2x_1 - 3 - \mu_1 = 0$$
$$50 - 4x_2 - 5 - \mu_1 = 0$$
$$-10 + \mu_1 - \mu_2 = 0$$
$$\mu_1(-x_1 - x_2 + x_3) = 0$$
$$\mu_2(17.25 - x_3) = 0$$
$$\mu_1, \mu_2 \ge 0$$

There are four cases to check:

$\mu_1 = 0$, $\mu_2 = 0$: This gives $-10 = 0$ in the third condition, so is infeasible.
$\mu_1 = 0$, $x_3 = 17.25$: This gives $\mu_2 = -10$, so is infeasible.
$-x_1 - x_2 + x_3 = 0$, $\mu_2 = 0$: This gives $\mu_1 = 10$, $x_1 = 8.5$, $x_2 = 8.75$, $x_3 = 17.25$, which is feasible. Since the objective is concave and the constraints are linear, this must be an optimal solution.
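The candidate from the third case can be verified directly against all the KKT conditions (a Python check added for illustration):

```python
# Chemical example:  max x1*(30 - x1) + x2*(50 - 2*x2) - 3*x1 - 5*x2 - 10*x3
#   s.t.  x1 + x2 - x3 <= 0,   x3 <= 17.25

def profit(x1, x2, x3):
    return x1*(30 - x1) + x2*(50 - 2*x2) - 3*x1 - 5*x2 - 10*x3

x1, x2, x3 = 8.5, 8.75, 17.25
mu1, mu2 = 10.0, 0.0

# Stationarity (all three should be 0)
print(30 - 2*x1 - 3 - mu1)
print(50 - 4*x2 - 5 - mu1)
print(-10 + mu1 - mu2)

# Complementarity (both products should be 0)
print(mu1 * (-x1 - x2 + x3))
print(mu2 * (17.25 - x3))

print(profit(x1, x2, x3))   # 225.375

# A nearby feasible point does worse, as concavity guarantees:
print(profit(8.4, 8.75, 17.15) < profit(x1, x2, x3))   # True
```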
So there is no point in going through the last case ($-x_1 - x_2 + x_3 = 0$, $x_3 = 17.25$). We are done, with $x_1 = 8.5$, $x_2 = 8.75$, and $x_3 = 17.25$.

What is the value of being able to purchase 1 more unit of chemical?

This question is equivalent to increasing the right hand side of the constraint $x_3 \le 17.25$ by 1 unit. Since the corresponding Lagrange multiplier $\mu_2^*$ is 0, there is no value in increasing the right hand side.

Review of Optimality Conditions

The following reviews what we have learned so far:

Single Variable (Unconstrained)
Solve $f'(x) = 0$ to get candidate $x^*$.
If $f''(x^*) > 0$ then $x^*$ is a local min; if $f''(x^*) < 0$ then $x^*$ is a local max.
If $f(x)$ is convex then a local min is a global min; if $f(x)$ is concave then a local max is a global max.

Multiple Variable (Unconstrained)
Solve $\nabla f(x) = 0$ to get candidate $x^*$.
If $H(x^*)$ is positive definite then $x^*$ is a local min; if $H(x^*)$ is negative definite then $x^*$ is a local max.
If $f(x)$ is convex then a local min is a global min; if $f(x)$ is concave then a local max is a global max.

Multiple Variable (Equality constrained)
Form the Lagrangian $L(x, \lambda) = f(x) + \sum_i \lambda_i (b_i - g_i(x))$.
Solve $\nabla L(x, \lambda) = 0$ to get candidate $x^*$ (and $\lambda^*$).
The best $x^*$ is the optimum if an optimum exists.

Multiple Variable (Equality and Inequality constrained)
Put into standard form (maximize and $\le$ constraints).
Form the Lagrangian $L(x, \lambda, \mu) = f(x) + \sum_i \lambda_i (b_i - g_i(x)) + \sum_j \mu_j (d_j - h_j(x))$.
Solve
$$\nabla f(x) - \sum_i \lambda_i \nabla g_i(x) - \sum_j \mu_j \nabla h_j(x) = 0$$
$$g_i(x) = b_i$$
$$h_j(x) \le d_j$$
$$\mu_j (d_j - h_j(x)) = 0$$
$$\mu_j \ge 0$$
to get candidates $x^*$ (and $\lambda^*$, $\mu^*$).
The best $x^*$ is the optimum if an optimum exists. Any such $x^*$ is the optimum if $f(x)$ is concave, the $g_i(x)$ are linear, and the $h_j(x)$ are convex.

4.3 Exercises

Exercise 37 Solve the following constrained optimization problem using the method of Lagrange multipliers:

max $2\ln x_1 + 3\ln x_2 + 3\ln x_3$
s.t. $x_1 + 2x_2 + 2x_3 = 10$

Exercise 38 Find the two points on the ellipse given by $x_1^2 + 4x_2^2 = 4$ that are at minimum distance from the point $(1, 0)$. Formulate the problem as a minimization problem and solve it by solving the Lagrangian equations. [Hint: To minimize the distance $d$ between two points, one can also minimize $d^2$. The formula for the distance between points $(x_1, x_2)$ and $(y_1, y_2)$ is $d^2 = (x_1 - y_1)^2 + (x_2 - y_2)^2$.]

Exercise 39 Solve using Lagrange multipliers.
a) min $x_1^2 + x_2^2 + x_3^2$ subject to $x_1 + x_2 + x_3 = b$, where $b$ is given.
b) max $\sqrt{x_1} + \sqrt{x_2} + \sqrt{x_3}$ subject to $x_1 + x_2 + x_3 = b$, where $b$ is given.
c) min $c_1 x_1^2 + c_2 x_2^2 + c_3 x_3^2$ subject to $x_1 + x_2 + x_3 = b$, where $c_1 > 0$, $c_2 > 0$, $c_3 > 0$ and $b$ are given.
d) min $x_1^2 + x_2^2 + x_3^2$ subject to $x_1 + x_2 = b_1$ and $x_2 + x_3 = b_2$, where $b_1$ and $b_2$ are given.

Exercise 40 Let $a$, $b$ and $c$ be given positive scalars. What is the change in the optimum value of the following constrained optimization problem when the right hand side of the constraint is increased by 5%, i.e. $a$ is changed to $1.05a$? Give your answer in terms of $a$, $b$ and $c$.

max $by - x^4$
s.t. $x^2 + cy = a$

Exercise 41 You want to invest in two mutual funds so as to maximize your expected earnings while limiting the variance of your earnings to a given figure $s^2$. The expected yield rates of Mutual Funds 1 and 2 are $r_1$ and $r_2$ respectively, and the variance of earnings for the portfolio $(x_1, x_2)$ is $\sigma_1^2 x_1^2 + x_1 x_2 + \sigma_2^2 x_2^2$. Thus the problem is

max $r_1 x_1 + r_2 x_2$
s.t. $\sigma_1^2 x_1^2 + x_1 x_2 + \sigma_2^2 x_2^2 = s^2$

(a) Use the method of Lagrange multipliers to compute the optimal investments $x_1^*$ and $x_2^*$ in Mutual Funds 1 and 2 respectively. Your expressions for $x_1^*$ and $x_2^*$ should not contain the Lagrange multiplier.
(b) Suppose both mutual funds have the same yield $r$. How much should you invest in each?

Exercise 42 You want to minimize the surface area of a cone-shaped drinking cup having fixed volume $V_0$. Solve the problem as a constrained optimization problem.
To simplify the algebra, minimize the square of the area. The area is $\pi r \sqrt{r^2 + h^2}$. The problem is

min $\pi^2 r^2 (r^2 + h^2)$
s.t. $\frac{1}{3}\pi r^2 h = V_0$.

Solve the problem using Lagrange multipliers. Hint: you can assume that $r \ne 0$ in the optimal solution.

Exercise 43 A company manufactures two types of products: a standard product, say A, and a more sophisticated product B. If management charges a price of $p_A$ for one unit of product A and a price of $p_B$ for one unit of product B, the company can sell $q_A$ units of A and $q_B$ units of B, where

$$q_A = 400 - 2p_A + p_B, \quad q_B = 200 + p_A - p_B.$$

Manufacturing one unit of product A requires 2 hours of labor and one unit of raw material. For one unit of B, 3 hours of labor and 2 units of raw material are needed. At present, 1000 hours of labor and 200 units of raw material are available. Substituting the expressions for $q_A$ and $q_B$, the problem of maximizing the company's total revenue can be formulated as:

max $400p_A + 200p_B - 2p_A^2 - p_B^2 + 2p_A p_B$
s.t. $-p_A - p_B \le -400$
$-p_B \le -600$

(a) Use the Kuhn-Tucker conditions to find the company's optimal pricing policy.
(b) What is the maximum the company would be willing to pay for
- another hour of labor,
- another unit of raw material?



Topic 1: Matrices and Systems of Linear Equations.

Topic 1: Matrices and Systems of Linear Equations Let us start with a review of some linear algebra concepts we have already learned, such as matrices, determinants, etc Also, we shall review the method

Lecture Notes on Elasticity of Substitution

Lecture Notes on Elasticity of Substitution Ted Bergstrom, UCSB Economics 210A March 3, 2011 Today s featured guest is the elasticity of substitution. Elasticity of a function of a single variable Before

Insurance. Michael Peters. December 27, 2013

Insurance Michael Peters December 27, 2013 1 Introduction In this chapter, we study a very simple model of insurance using the ideas and concepts developed in the chapter on risk aversion. You may recall

The Point-Slope Form

7. The Point-Slope Form 7. OBJECTIVES 1. Given a point and a slope, find the graph of a line. Given a point and the slope, find the equation of a line. Given two points, find the equation of a line y Slope

Name. Final Exam, Economics 210A, December 2011 Here are some remarks to help you with answering the questions.

Name Final Exam, Economics 210A, December 2011 Here are some remarks to help you with answering the questions. Question 1. A firm has a production function F (x 1, x 2 ) = ( x 1 + x 2 ) 2. It is a price

Linear Programming. April 12, 2005

Linear Programming April 1, 005 Parts of this were adapted from Chapter 9 of i Introduction to Algorithms (Second Edition) /i by Cormen, Leiserson, Rivest and Stein. 1 What is linear programming? The first

To give it a definition, an implicit function of x and y is simply any relationship that takes the form:

2 Implicit function theorems and applications 21 Implicit functions The implicit function theorem is one of the most useful single tools you ll meet this year After a while, it will be second nature to

TOPIC 3: CONTINUITY OF FUNCTIONS

TOPIC 3: CONTINUITY OF FUNCTIONS. Absolute value We work in the field of real numbers, R. For the study of the properties of functions we need the concept of absolute value of a number. Definition.. Let

Lagrange Interpolation is a method of fitting an equation to a set of points that functions well when there are few points given.

Polynomials (Ch.1) Study Guide by BS, JL, AZ, CC, SH, HL Lagrange Interpolation is a method of fitting an equation to a set of points that functions well when there are few points given. Sasha s method

Linear Programming for Optimization. Mark A. Schulze, Ph.D. Perceptive Scientific Instruments, Inc.

1. Introduction Linear Programming for Optimization Mark A. Schulze, Ph.D. Perceptive Scientific Instruments, Inc. 1.1 Definition Linear programming is the name of a branch of applied mathematics that

a 1 x + a 0 =0. (3) ax 2 + bx + c =0. (4)

ROOTS OF POLYNOMIAL EQUATIONS In this unit we discuss polynomial equations. A polynomial in x of degree n, where n 0 is an integer, is an expression of the form P n (x) =a n x n + a n 1 x n 1 + + a 1 x

ECO 317 Economics of Uncertainty Fall Term 2009 Week 5 Precepts October 21 Insurance, Portfolio Choice - Questions

ECO 37 Economics of Uncertainty Fall Term 2009 Week 5 Precepts October 2 Insurance, Portfolio Choice - Questions Important Note: To get the best value out of this precept, come with your calculator or

Walrasian Demand. u(x) where B(p, w) = {x R n + : p x w}.

Walrasian Demand Econ 2100 Fall 2015 Lecture 5, September 16 Outline 1 Walrasian Demand 2 Properties of Walrasian Demand 3 An Optimization Recipe 4 First and Second Order Conditions Definition Walrasian

Section 2.4: Equations of Lines and Planes

Section.4: Equations of Lines and Planes An equation of three variable F (x, y, z) 0 is called an equation of a surface S if For instance, (x 1, y 1, z 1 ) S if and only if F (x 1, y 1, z 1 ) 0. x + y

DERIVATIVES AS MATRICES; CHAIN RULE

DERIVATIVES AS MATRICES; CHAIN RULE 1. Derivatives of Real-valued Functions Let s first consider functions f : R 2 R. Recall that if the partial derivatives of f exist at the point (x 0, y 0 ), then we

Chapter 13: Binary and Mixed-Integer Programming

Chapter 3: Binary and Mixed-Integer Programming The general branch and bound approach described in the previous chapter can be customized for special situations. This chapter addresses two special situations:

1 Introduction. Linear Programming. Questions. A general optimization problem is of the form: choose x to. max f(x) subject to x S. where.

Introduction Linear Programming Neil Laws TT 00 A general optimization problem is of the form: choose x to maximise f(x) subject to x S where x = (x,..., x n ) T, f : R n R is the objective function, S

Lecture 3. Linear Programming. 3B1B Optimization Michaelmas 2015 A. Zisserman. Extreme solutions. Simplex method. Interior point method

Lecture 3 3B1B Optimization Michaelmas 2015 A. Zisserman Linear Programming Extreme solutions Simplex method Interior point method Integer programming and relaxation The Optimization Tree Linear Programming

3.8 Finding Antiderivatives; Divergence and Curl of a Vector Field

3.8 Finding Antiderivatives; Divergence and Curl of a Vector Field 77 3.8 Finding Antiderivatives; Divergence and Curl of a Vector Field Overview: The antiderivative in one variable calculus is an important

CONSTRAINED NONLINEAR PROGRAMMING

149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

On closed-form solutions of a resource allocation problem in parallel funding of R&D projects

Operations Research Letters 27 (2000) 229 234 www.elsevier.com/locate/dsw On closed-form solutions of a resource allocation problem in parallel funding of R&D proects Ulku Gurler, Mustafa. C. Pnar, Mohamed

Several Views of Support Vector Machines

Several Views of Support Vector Machines Ryan M. Rifkin Honda Research Institute USA, Inc. Human Intention Understanding Group 2007 Tikhonov Regularization We are considering algorithms of the form min

Review for Calculus Rational Functions, Logarithms & Exponentials

Definition and Domain of Rational Functions A rational function is defined as the quotient of two polynomial functions. F(x) = P(x) / Q(x) The domain of F is the set of all real numbers except those for

Introduction to Algebraic Geometry. Bézout s Theorem and Inflection Points

Introduction to Algebraic Geometry Bézout s Theorem and Inflection Points 1. The resultant. Let K be a field. Then the polynomial ring K[x] is a unique factorisation domain (UFD). Another example of a

3.3. Solving Polynomial Equations. Introduction. Prerequisites. Learning Outcomes

Solving Polynomial Equations 3.3 Introduction Linear and quadratic equations, dealt within Sections 3.1 and 3.2, are members of a class of equations, called polynomial equations. These have the general

Solutions to old Exam 1 problems

Solutions to old Exam 1 problems Hi students! I am putting this old version of my review for the first midterm review, place and time to be announced. Check for updates on the web site as to which sections

A QUICK GUIDE TO THE FORMULAS OF MULTIVARIABLE CALCULUS

A QUIK GUIDE TO THE FOMULAS OF MULTIVAIABLE ALULUS ontents 1. Analytic Geometry 2 1.1. Definition of a Vector 2 1.2. Scalar Product 2 1.3. Properties of the Scalar Product 2 1.4. Length and Unit Vectors

What does the number m in y = mx + b measure? To find out, suppose (x 1, y 1 ) and (x 2, y 2 ) are two points on the graph of y = mx + b.

PRIMARY CONTENT MODULE Algebra - Linear Equations & Inequalities T-37/H-37 What does the number m in y = mx + b measure? To find out, suppose (x 1, y 1 ) and (x 2, y 2 ) are two points on the graph of

Interior-Point Algorithms for Quadratic Programming Thomas Reslow Krüth Kongens Lyngby 2008 IMM-M.Sc-2008-19 Technical University of Denmark Informatics and Mathematical Modelling Building 321, DK-2800

1. Determine graphically the solution set for each system of inequalities and indicate whether the solution set is bounded or unbounded:

Final Study Guide MATH 111 Sample Problems on Algebra, Functions, Exponents, & Logarithms Math 111 Part 1: No calculator or study sheet. Remember to get full credit, you must show your work. 1. Determine

The Method of Lagrange Multipliers

The Method of Lagrange Multipliers S. Sawyer October 25, 2002 1. Lagrange s Theorem. Suppose that we want to maximize (or imize a function of n variables f(x = f(x 1, x 2,..., x n for x = (x 1, x 2,...,

BX in ( u, v) basis in two ways. On the one hand, AN = u+

1. Let f(x) = 1 x +1. Find f (6) () (the value of the sixth derivative of the function f(x) at zero). Answer: 7. We expand the given function into a Taylor series at the point x = : f(x) = 1 x + x 4 x

More Quadratic Equations Math 99 N1 Chapter 8 1 Quadratic Equations We won t discuss quadratic inequalities. Quadratic equations are equations where the unknown appears raised to second power, and, possibly

Section 9.5: Equations of Lines and Planes

Lines in 3D Space Section 9.5: Equations of Lines and Planes Practice HW from Stewart Textbook (not to hand in) p. 673 # 3-5 odd, 2-37 odd, 4, 47 Consider the line L through the point P = ( x, y, ) that

Algebra Practice Problems for Precalculus and Calculus

Algebra Practice Problems for Precalculus and Calculus Solve the following equations for the unknown x: 1. 5 = 7x 16 2. 2x 3 = 5 x 3. 4. 1 2 (x 3) + x = 17 + 3(4 x) 5 x = 2 x 3 Multiply the indicated polynomials

Discrete Optimization

Discrete Optimization [Chen, Batson, Dang: Applied integer Programming] Chapter 3 and 4.1-4.3 by Johan Högdahl and Victoria Svedberg Seminar 2, 2015-03-31 Todays presentation Chapter 3 Transforms using

6.4 Logarithmic Equations and Inequalities

6.4 Logarithmic Equations and Inequalities 459 6.4 Logarithmic Equations and Inequalities In Section 6.3 we solved equations and inequalities involving exponential functions using one of two basic strategies.

Chapter 4 Online Appendix: The Mathematics of Utility Functions

Chapter 4 Online Appendix: The Mathematics of Utility Functions We saw in the text that utility functions and indifference curves are different ways to represent a consumer s preferences. Calculus can

Week 1: Functions and Equations

Week 1: Functions and Equations Goals: Review functions Introduce modeling using linear and quadratic functions Solving equations and systems Suggested Textbook Readings: Chapter 2: 2.1-2.2, and Chapter

6 Rational Inequalities, (In)equalities with Absolute value; Exponents and Logarithms

AAU - Business Mathematics I Lecture #6, March 16, 2009 6 Rational Inequalities, (In)equalities with Absolute value; Exponents and Logarithms 6.1 Rational Inequalities: x + 1 x 3 > 1, x + 1 x 2 3x + 5

2 Integrating Both Sides

2 Integrating Both Sides So far, the only general method we have for solving differential equations involves equations of the form y = f(x), where f(x) is any function of x. The solution to such an equation

3. INNER PRODUCT SPACES

. INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

Week 2 Quiz: Equations and Graphs, Functions, and Systems of Equations

Week Quiz: Equations and Graphs, Functions, and Systems of Equations SGPE Summer School 014 June 4, 014 Lines: Slopes and Intercepts Question 1: Find the slope, y-intercept, and x-intercept of the following

TOPIC 4: DERIVATIVES

TOPIC 4: DERIVATIVES 1. The derivative of a function. Differentiation rules 1.1. The slope of a curve. The slope of a curve at a point P is a measure of the steepness of the curve. If Q is a point on the