15 Kuhn-Tucker conditions

Consider a version of the consumer problem in which quasilinear utility $\sqrt{x_1} + x_2$ is maximised subject to a budget constraint of the form $x_1 + x_2 = m$. Mechanically applying the Lagrange multiplier/common slopes technique produces a tangency point at which $x_2$ is negative. Negative quantity! The tangency solution violates an unspoken economic requirement, $x_2 \ge 0$.

There is a systematic approach to inequality-constrained maximisation, a.k.a. concave programming or nonlinear programming. To be explicit, the full set of restrictions for the 2-good consumer problem is not $p_1 x_1 + p_2 x_2 = m$ but

$$p_1 x_1 + p_2 x_2 \le m, \qquad 0 \le x_1, \qquad 0 \le x_2.$$

The aspect I emphasise is that the constraints are inequalities.
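A minimal numerical illustration of this opening example (the income level $m$ and starting point are my own choices, not from the notes): solving with only the budget equality reproduces the offending tangency point, while adding the nonnegativity restrictions forces the corner solution.

```python
import numpy as np
from scipy.optimize import minimize

m = 0.1                                   # illustrative income
u = lambda x: np.sqrt(x[0]) + x[1]        # quasilinear utility
budget = {"type": "eq", "fun": lambda x: x[0] + x[1] - m}

# "Common slopes" solution: budget equality only, x2 left unrestricted.
tangency = minimize(lambda x: -u(x), x0=[0.05, 0.05], constraints=[budget],
                    bounds=[(0, None), (None, None)])
# Full consumer problem: impose x1 >= 0 and x2 >= 0 as well.
proper = minimize(lambda x: -u(x), x0=[0.05, 0.05], constraints=[budget],
                  bounds=[(0, None), (0, None)])

print("tangency point:   ", tangency.x)   # x2 is about m - 1/4 < 0 here
print("with x1, x2 >= 0: ", proper.x)     # x2 = 0: a corner solution
```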

We have seen how the Lagrange multiplier method for dealing with one equality constraint extends naturally to the case of several constraints. If we wish to maximise $f(x)$ subject to $g(x) = b$ and $h(x) = c$, we work with the Lagrangian

$$L(x, \lambda, \mu) = f(x) - \lambda\,[g(x) - b] - \mu\,[h(x) - c]$$

with a multiplier for each constraint. (See Dixit 24ff.)

The inequality-constrained optimisation problem (SH 682) is: Maximise the objective function $f(x)$, where $f : \mathbb{R}^n \to \mathbb{R}$, subject to the constraints $g_j(x) \le 0$, where $g_j : \mathbb{R}^n \to \mathbb{R}$ and $j = 1, \ldots, m$.

Terminology: the set of points satisfying the constraints is called the constraint set, admissible set or feasible set. If at the optimum $x^*$, $g_j(x^*) = 0$, then the $j$-th constraint is binding; if not, it is slack. If at least one constraint is binding then $x^*$ is on the boundary of the feasible set; if none are binding, $x^*$ is an interior point.

Example 15.1 In the consumer problems we have seen, the budget constraint is binding (all income is spent) because the consumer is never satiated. The constraint $x_2 \ge 0$ may be binding, as in the situation pictured at the beginning of this section.

The method of Kuhn-Tucker multipliers is a variation on the Lagrange multiplier method. If all the constraints are binding then the Lagrange method will produce the same results as Kuhn-Tucker. The Kuhn-Tucker approach involves forming the Lagrangean in more or less the usual way,

$$L = f(x) - \sum_{j=1}^{m} \lambda_j\, g_j(x),$$

with the same conditions on the derivatives with respect to the choice variables,

$$\frac{\partial L}{\partial x_i} = 0, \quad i = 1, \ldots, n, \qquad \text{or} \qquad \nabla L = 0.$$

However, the further conditions specify the interaction between the multipliers and the constraints. The complementary slackness conditions state

$$\lambda_j \ge 0 \text{ for all } j, \qquad \lambda_j = 0 \text{ whenever } g_j(x^*) < 0.$$

If a constraint is slack, the corresponding multiplier is zero.
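As a concrete restatement of these conditions, here is a small numerical sketch (the helper names and tolerances are my own) that checks stationarity, feasibility, nonnegativity of the multipliers and complementary slackness at a candidate point.

```python
import numpy as np

def numerical_grad(fun, x, eps=1e-6):
    """Central-difference gradient of a scalar function at x."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (fun(x + step) - fun(x - step)) / (2 * eps)
    return grad

def check_kuhn_tucker(f, gs, x_star, lams, tol=1e-4):
    """Kuhn-Tucker check for: maximise f(x) s.t. g_j(x) <= 0 for each g_j in gs."""
    grad_L = numerical_grad(f, x_star) - sum(
        lam * numerical_grad(g, x_star) for lam, g in zip(lams, gs))
    g_vals = np.array([g(x_star) for g in gs])
    lams = np.array(lams, dtype=float)
    return {
        "stationarity": bool(np.all(np.abs(grad_L) < tol)),
        "feasibility": bool(np.all(g_vals <= tol)),
        "nonnegative multipliers": bool(np.all(lams >= -tol)),
        "complementary slackness": bool(np.all(np.abs(lams * g_vals) < tol)),
    }
```

Candidate points and multipliers found in the examples below can be passed straight to check_kuhn_tucker.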

Solving this assortment of equalities and inequalities in the $n + m$ unknowns (choice variables and multipliers) is messier than the Lagrange method for equality-constrained problems. To see how it works, consider a transparent case.

Example 15.2 Maximise the (strictly concave) function $y = -x^2$ subject to $x \le c$. The optimal $x^*$ can either be interior ($< c$) or on the boundary ($= c$) of the feasible set, which will depend on the value of $c$. The pictures show $c = 2, 0, -1$: for $c = 2$, $x^* = 0$; for $c = 0$, $x^* = 0$; for $c = -1$, $x^* = -1$.

To do the Kuhn-Tucker analysis, form the Lagrangean $L = -x^2 - \lambda(x - c)$. The K-T conditions are

$$\frac{\partial L}{\partial x} = -2x - \lambda = 0 \quad\Rightarrow\quad \lambda = -2x,$$

$$\lambda \ge 0, \qquad \lambda = 0 \text{ if } (x - c) < 0.$$
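A tiny sketch of the case-by-case logic, anticipating the analysis below (the function name solve_example is mine):

```python
def solve_example(c):
    """Maximise -x**2 subject to x <= c using the Kuhn-Tucker cases."""
    # Slack case: lambda = 0 forces x = 0, admissible only if 0 <= c.
    if c >= 0:
        return {"x": 0.0, "lambda": 0.0}
    # Otherwise the constraint binds: x = c and lambda = -2c >= 0.
    return {"x": float(c), "lambda": -2.0 * c}

for c in (2, 0, -1):
    print(c, solve_example(c))   # matches the three pictures
```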

Remark 15.1 The conditions $\frac{\partial L}{\partial x} = \frac{\partial(-x^2)}{\partial x} - \lambda = 0$ and $\lambda \ge 0$ imply that the derivative of the objective at the maximum cannot be negative. It is obvious that the derivative cannot be negative at a maximum, because a reduction in $x$ (this is always feasible) would then raise the value of the objective.

Remark 15.2 Often the Kuhn-Tucker conditions are used not for finding a solution but for providing information about a solution. For example, in the general problem of maximising a strictly concave function subject to $x \le c$, the conditions imply that at a maximum the slope cannot be negative.

Now for the three examples, $c = -1$, $0$ and $2$.

$c = -1$: there are 2 possibilities, $x^* = -1$ or $x^* < -1$. The latter is impossible, for it would imply that $\lambda = 0$ and hence $x^* = 0$, a contradiction. So $x^* = -1$.

$c = 0$: there are 2 possibilities, $x^* = 0$ or $x^* < 0$. As before, the latter is impossible. All the conditions are satisfied when $x^* = 0$.

$c = 2$: there are 2 possibilities, $x^* = 2$ or $x^* < 2$. The former is impossible, for it makes $-2x$ and hence $\lambda$ negative. So the constraint is slack, $\lambda = 0$ and $x^* = 0$.

16 Kuhn-Tucker theorem

There are lots of propositions linking the Kuhn-Tucker conditions to the existence of a maximum. The conditions can be interpreted as necessary conditions for a maximum (compare the treatment of Lagrange multipliers in 8.2). Or, making strong assumptions about $f$ and the $g_j$, as sufficient conditions. That line is taken in the next theorem.

Theorem 16.1 (Kuhn-Tucker sufficiency) Consider the inequality-constrained optimisation problem with concave objective and convex constraints: i.e. to maximise $f(x)$ (where $f : \mathbb{R}^n \to \mathbb{R}$) subject to the constraints $g_j(x) \le 0$, where $g_j : \mathbb{R}^n \to \mathbb{R}$ and $j = 1, \ldots, m$. Define $L = f(x) - \sum_{j=1}^{m} \lambda_j g_j(x)$ and let $x^*$ be a feasible point. Suppose we can find numbers $\lambda_j$ such that $\nabla L(x^*) = 0$, $\lambda_j \ge 0$ for all $j$, and $\lambda_j = 0$ whenever $g_j(x^*) < 0$. Then $x^*$ solves the maximisation problem.

Proof. Since $f$ is concave, the supporting hyperplane theorem takes the form

$$f(x) \le f(x^*) + \nabla f(x^*)(x - x^*).$$

Using $\nabla L(x^*) = 0$, we can write this as

$$f(x) \le f(x^*) + \sum_j \lambda_j \nabla g_j(x^*)(x - x^*).$$

The aim is to show that the sum term on the right is not positive. The multipliers associated with slack constraints will be zero, so we need only attend to the binding constraints, $g_j(x^*) = 0$. In such cases, since $g_j$ is convex we have

$$0 \ge g_j(x) \ge 0 + \nabla g_j(x^*)(x - x^*).$$

Because the $\lambda_j$ are nonnegative, $\sum_j \lambda_j \nabla g_j(x^*)(x - x^*)$ is not positive, as required.

Remark 16.1 Like Lagrange multipliers, these Kuhn-Tucker multipliers can be interpreted as measures of the sensitivity of the maximum value to changes in the constraint (0.2), but we won't go into the details. See SH 696.

Remark 16.2 This theorem can be extended to apply to quasi-concave objective functions. Dixit 97ff discusses the extension.
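A quick numerical sketch of the theorem in action (the objective, constraint and candidate point below are my own illustration): with a concave objective and a convex constraint, exhibiting multipliers that satisfy the three conditions is enough to certify the maximiser.

```python
import numpy as np

# Maximise a concave f subject to one convex (here linear) constraint g(x) <= 0.
f = lambda x: -(x[0] - 1.0) ** 2 - (x[1] - 2.0) ** 2
grad_f = lambda x: np.array([-2.0 * (x[0] - 1.0), -2.0 * (x[1] - 2.0)])
g = lambda x: x[0] + x[1] - 2.0
grad_g = lambda x: np.array([1.0, 1.0])

x_star = np.array([0.5, 1.5])     # candidate maximiser (the constraint binds here)
lam = 1.0                          # candidate multiplier

print("grad L at x*:", grad_f(x_star) - lam * grad_g(x_star))  # ~ (0, 0)
print("lambda >= 0:", lam >= 0)
print("complementary slackness:", lam * g(x_star))             # = 0 since g binds
# By Theorem 16.1, x* = (0.5, 1.5) is a global maximiser on the feasible set.
```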

17 Quasi-linear utility again

Return to the quasi-linear utility case and now incorporate all the inequality constraints and include prices and income:

$$\text{Maximise } u(x) = \sqrt{x_1} + \alpha x_2, \quad \alpha > 0, \qquad \text{s.t. } p_1 x_1 + p_2 x_2 - m \le 0, \; x_1 \ge 0, \; x_2 \ge 0.$$

The Lagrangean $L$ is

$$L = \sqrt{x_1} + \alpha x_2 - \lambda_0 (p_1 x_1 + p_2 x_2 - m) - \lambda_1 (-x_1) - \lambda_2 (-x_2).$$

The Kuhn-Tucker conditions are

$$\frac{\partial L}{\partial x_1} = \frac{1}{2\sqrt{x_1}} - \lambda_0 p_1 + \lambda_1 = 0,$$

$$\frac{\partial L}{\partial x_2} = \alpha - \lambda_0 p_2 + \lambda_2 = 0,$$

$$\lambda_0, \lambda_1, \lambda_2 \ge 0,$$

$$\lambda_0 (p_1 x_1 + p_2 x_2 - m) = 0, \qquad \lambda_i (-x_i) = 0, \quad i = 1, 2.$$

1. Because the objective function is strictly increasing in $x_1$ and $x_2$, the budget constraint is binding, so $\lambda_0 > 0$.
2. The constraint $x_1 \ge 0$ cannot bind, for then $\frac{1}{2\sqrt{x_1}}$ would be infinitely large. So $\lambda_1 = 0$.
3. The other constraint may or may not bind.

Putting this information about the budget constraint and $\lambda_1 = 0$ into the Kuhn-Tucker conditions:

$$\frac{\partial L}{\partial x_1} = \frac{1}{2\sqrt{x_1}} - \lambda_0 p_1 = 0,$$

$$\frac{\partial L}{\partial x_2} = \alpha - \lambda_0 p_2 + \lambda_2 = 0,$$

$$p_1 x_1 + p_2 x_2 - m = 0, \qquad \lambda_2 (-x_2) = 0.$$

Consider the possibility $x_2 = 0$: from the budget constraint we get $x_1 = m/p_1$, and so

$$\frac{\partial L}{\partial x_1} = \frac{1}{2\sqrt{m/p_1}} - \lambda_0 p_1 = 0 \quad\Rightarrow\quad \lambda_0 = \frac{1}{2 p_1} \left(\frac{p_1}{m}\right)^{1/2} = \frac{1}{2\sqrt{p_1 m}}.$$

Putting this value into $\partial L/\partial x_2 = 0$,

$$\frac{\partial L}{\partial x_2} = \alpha - \frac{1}{2}\frac{p_2}{\sqrt{p_1 m}} + \lambda_2 = 0.$$

But as $\lambda_2 \ge 0$, it must be the case that when $x_2 = 0$, $\alpha$ satisfies

$$\alpha \le \frac{1}{2}\frac{p_2}{\sqrt{p_1 m}}.$$

So small values of $\alpha$ produce a corner solution. It is reasonable that the consumer always consumes some of the first good, because marginal utility with respect to it approaches infinity as $x_1 \to 0$, while marginal utility with respect to the other good is constant at $\alpha$. (The interior solution $x_1, x_2 > 0$ is associated with larger values of $\alpha$ and corresponds to the case $\lambda_1 = \lambda_2 = 0$.)
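A numerical check of this threshold (a sketch; the prices and income below are my own illustrative values): for $\alpha$ below $p_2/(2\sqrt{p_1 m})$ the computed demand sets $x_2 = 0$, and above it $x_2 > 0$.

```python
import numpy as np
from scipy.optimize import minimize

p1, p2, m = 1.0, 1.0, 4.0
alpha_hat = p2 / (2 * np.sqrt(p1 * m))      # corner-solution threshold, 0.25 here

def demand(alpha):
    u = lambda x: np.sqrt(x[0]) + alpha * x[1]
    budget = {"type": "ineq", "fun": lambda x: m - p1 * x[0] - p2 * x[1]}
    res = minimize(lambda x: -u(x), x0=[1.0, 1.0], constraints=[budget],
                   bounds=[(0, None), (0, None)])
    return np.round(res.x, 4)

print("threshold alpha_hat:", alpha_hat)
print("alpha = 0.2:", demand(0.2))   # corner: x2 = 0, all income spent on good 1
print("alpha = 0.5:", demand(0.5))   # interior: x1 = 1, x2 = 3 > 0
```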

18 Dynamic optimisation

In dynamic optimisation a time-path is chosen. Simple dynamic optimisation problems can be treated by the same methods as the static optimisation problem. However, dynamic problems have special features which often suggest a different treatment.

Example 18.1 Consider a simple $T$-period problem where a given stock $b_0$ is consumed over $T$ periods (formally a variation on the consumer problem with logarithmic utility):

$$\max_x \; U(x) = \sum_{t=1}^{T} \delta^t \ln x_t \qquad \text{s.t.} \quad \sum_{t=1}^{T} x_t = b_0 \text{ (fixed)}. \qquad (*)$$

Form the Lagrangean

$$L(x, \lambda) = \sum_{t=1}^{T} \delta^t \ln x_t - \lambda\Big(\sum_{t=1}^{T} x_t - b_0\Big)$$

and go through the usual steps to obtain a solution that involves

$$x_t = \delta x_{t-1} \quad \text{for } t = 2, \ldots, T.$$

This kind of dynamic equation is called a difference equation and is characteristic of the discrete-time formulation. If the problem were formulated in continuous time, a differential equation would appear at this point. In more complicated problems, diagonalising methods are used for investigating the properties of the solution.

One difference between static and dynamic optimisation is that dynamic equations appear naturally in the latter. A second difference is that multiple constraints (one for each time period) are routine.

Example 18.2 (continues Ex. 18.1) In complicated problems it is usually convenient to specify a budget constraint for each time period. Thus the constraint (*) would appear as:

$$b_t = b_{t-1} - x_t, \quad t = 1, \ldots, T; \quad b_0 \text{ fixed}. \qquad (**)$$

This law of motion describes how the available chocolate stock evolves: the bars of chocolate left over at the end of period $t$ equal the bars available at the end of $t-1$ less what has been consumed in period $t$. (*) collapses these dynamic equations into one constraint $\sum_{t=1}^{T} x_t = b_0$, eliminating $b_1, b_2, \ldots, b_T$.
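A short sketch of Example 18.1's solution path (the values of $\delta$, $T$ and $b_0$ are illustrative): the difference equation $x_t = \delta x_{t-1}$ plus the resource constraint pin down the whole path, and the law of motion (**) then tracks the remaining stock.

```python
import numpy as np

delta, T, b0 = 0.9, 5, 10.0

# Geometric consumption path: x_1 * (1 + delta + ... + delta**(T-1)) = b0.
x1 = b0 * (1 - delta) / (1 - delta ** T)
x = x1 * delta ** np.arange(T)               # x_t = delta * x_{t-1}

b = np.empty(T + 1)
b[0] = b0
for t in range(1, T + 1):                    # law of motion (**): b_t = b_{t-1} - x_t
    b[t] = b[t - 1] - x[t - 1]

print("consumption path:", np.round(x, 3))
print("stock path:      ", np.round(b, 3))   # ends at (approximately) zero
```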

The Lagrange method extends to multiple constraints by introducing a multiplier for each constraint. Thus here

$$L(x, \lambda) = \sum_{t=1}^{T} \delta^t \ln x_t - \sum_{t=1}^{T} \lambda_t (x_t + b_t - b_{t-1}).$$

There are $2T$ equations to solve: $T$ of the form $\partial L/\partial x_t = 0$ and $T$ making up (**). Just as there is a sequence $\{x_1, \ldots, x_T\}$, there is a sequence of multipliers $\{\lambda_1, \ldots, \lambda_T\}$. The usual algebra produces conditions like

$$\frac{x_t}{x_{t-1}} = \delta\, \frac{\lambda_{t-1}}{\lambda_t}.$$

We already know that $x_t = \delta x_{t-1}$, and it turns out that $\lambda_t$ is the same for all time periods.

A third difference between static and dynamic optimisation is the existence of specialised techniques for treating the latter, including (Pontryagin's) maximum principle and dynamic programming.

18.1 Maximum principle

The maximum principle is widely used in macroeconomics, usually in its continuous-time form. I will go through a discrete-time version to suggest where the continuous-time forms come from. A fairly general formulation, covering the chocolate stock example and extensions to include production and investment, involves the choice variables $c(1), \ldots, c(T)$; these symbols are easier on the eye than $c_1$ etc. The notation reflects the terminology of control theory. There is a state variable $s$ governed by an equation of motion or state equation. The problem is to choose a control variable sequence $c$ to maximise a value function; this may involve one or both of the state variable and the control variable.

$$\max_c \; V(s, c) = \sum_{t=1}^{T} v(s(t), c(t))$$

$$\text{s.t.} \quad s(t+1) - s(t) = f(s(t), c(t)) \qquad (***)$$

for $t = 1, \ldots, T$ and with $s(1)$ and $s(T+1)$ fixed at $s_1$ and $s_{T+1}$ respectively. (Other end conditions are possible.)

The Lagrangian is

$$L = \sum_{t=1}^{T} v(s(t), c(t)) - \sum_{t=1}^{T} \lambda(t)\,[s(t+1) - s(t) - f(s(t), c(t))].$$

In optimal control the $\lambda$'s are called co-state variables. Differentiating w.r.t. $c(t)$ and $s(t)$ (writing partial derivatives using subscripts), the first-order conditions are

$$v_{c(t)} + \lambda(t) f_{c(t)} = 0, \quad t = 1, \ldots, T,$$

$$v_{s(t)} - \lambda(t-1) + \lambda(t) + \lambda(t) f_{s(t)} = 0, \quad t = 2, \ldots, T.$$

These conditions can be obtained as first-order conditions involving a new function $H$ (the Hamiltonian), defined for all $t$ by

$$H(s(t), c(t), \lambda(t)) \equiv v(s(t), c(t)) + \lambda(t) f(s(t), c(t)).$$

Differentiating w.r.t. $c(t)$ and $s(t)$,

$$\frac{\partial H}{\partial c(t)} = 0, \quad t = 1, \ldots, T,$$

$$\lambda(t) - \lambda(t-1) = -\frac{\partial H}{\partial s(t)}, \quad t = 2, \ldots, T.$$
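A sketch of these discrete-time conditions applied to the chocolate example (parameter values are mine): with state $s$ = remaining stock, control $c$ = consumption, $v = \delta^t \ln c(t)$ and $f = -c(t)$, the Hamiltonian does not involve $s$, so the co-state is constant and $\partial H/\partial c(t) = 0$ reproduces $c(t) = \delta\, c(t-1)$.

```python
import numpy as np

delta, T, b0 = 0.9, 5, 10.0
t = np.arange(1, T + 1)

lam = (delta ** t).sum() / b0        # constant co-state chosen to exhaust the stock
c = delta ** t / lam                 # from H_c = delta**t / c(t) - lam = 0

H_c = delta ** t / c - lam                   # stationarity in the control: ~ 0
costate_step = np.diff(np.full(T, lam))      # lambda(t) - lambda(t-1) = -H_s = 0
print("max |H_c|:", np.abs(H_c).max())
print("max |co-state step|:", np.abs(costate_step).max())
print("stock exhausted:", np.isclose(c.sum(), b0))
```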

18.1.1 In continuous time

In the more usual continuous-time formulation the problem is to choose the time path of consumption $c(t)$ to maximise

$$V = \int_0^T v(s(t), c(t))\, dt \qquad \text{s.t.} \quad \frac{ds}{dt} = f(s(t), c(t)).$$

The first-order conditions for a maximum are conditions on the partial derivatives of $H$,

$$H(t, s(t), c(t), \lambda(t)) = v(t, s(t), c(t)) + \lambda(t)\, f(t, s(t), c(t)).$$

The first-order conditions are

$$\frac{\partial H}{\partial c} = 0, \qquad \frac{d\lambda}{dt} = -\frac{\partial H}{\partial s}.$$

Example 18.3 Logarithmic chocolate in continuous time. Choose a consumption path $x(t)$ to maximise

$$U(x(t)) = \int_0^T \ln x(t)\, e^{-\rho t}\, dt$$

subject to (writing $k$ for the stock)

$$\dot{k} = -x, \qquad k(0) \text{ given}, \qquad k(T) \text{ free}.$$

In this case the chocolate stock is the state variable (its derivative appears in the constraint) and consumption is the control variable. The choice of labels may not seem very natural: you control the chocolate stock by consuming chocolate. In this example the state variable does not appear in the objective function. The Hamiltonian is

$$H(t, k(t), x(t), \lambda(t)) = \ln x(t)\, e^{-\rho t} - \lambda(t)\, x(t).$$

The first-order conditions are

$$\dot{k} = \frac{\partial H}{\partial \lambda} = -x(t),$$

$$\dot{\lambda} = -\frac{\partial H}{\partial k} = 0,$$

$$\frac{\partial H}{\partial x} = \frac{e^{-\rho t}}{x(t)} - \lambda(t) = 0.$$

The second condition, $\dot{\lambda} = 0$, is so simple because $k$ does not appear in the Hamiltonian; it implies that $\lambda(t)$ is constant. So from the third condition, $x(t) \propto e^{-\rho t}$: the time path of consumption is exponentially declining, and so is the chocolate stock.
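A last sketch tracing out this continuous-time solution (the values of $\rho$, $T$ and $k(0)$ are illustrative, and pinning the constant co-state down by exhausting the stock over $[0, T]$ is my added closure, not spelled out above):

```python
import numpy as np

rho, T, k0 = 0.05, 10.0, 1.0
lam = (1 - np.exp(-rho * T)) / (rho * k0)       # constant co-state

t = np.linspace(0.0, T, 101)
x = np.exp(-rho * t) / lam                      # from H_x = 0: x(t) = exp(-rho*t)/lambda
k = k0 - (1 - np.exp(-rho * t)) / (rho * lam)   # k(t) = k(0) minus cumulative consumption

print("x(0), x(T):", round(x[0], 4), round(x[-1], 4))   # exponentially declining
print("k(T):", round(k[-1], 6))                          # stock run down to zero at T
```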

THE END
