ON "COMPUTING COURNOT-NASH EQUILIBRIA"


The quotation marks in the title above pertain to the primary source for this lecture, which is: Charles D. Kolstad and Lars Mathiesen, "Computing Cournot-Nash Equilibria," Operations Research 39 (1991) 739-748. This paper nicely employs many of the ideas presented in this course. Its main mission, however, is to discuss conditions that assure the convergence of a successive linear complementarity (SLCP) algorithm for solving the nonlinear complementarity problem that arises from a bare-bones version of the Cournot-Nash equilibrium problem. Much of the discussion in the paper is done in economic terms.

Other references. Most of the heavy lifting in this paper is done by applying results from the literature, especially:

S. Karamardian, "The Complementarity Problem," Mathematical Programming 2 (1972) 107-129.
J.J. Moré, "Classes of Functions and Feasibility Conditions in Nonlinear Complementarity Problems," Mathematical Programming 6 (1974) 327-338.
J.J. Moré and W.C. Rheinboldt, "On P- and S-functions and Related Classes of n-dimensional Nonlinear Mappings," Linear Algebra and its Applications 6 (1973) 45-68.
J.-S. Pang and D. Chan, "Iterative Methods for Variational and Complementarity Problems," Mathematical Programming 24 (1982) 284-313.

Notational problems. There is a clash of notation between what is traditionally used in economics and what is used in LCP theory. This is especially noticeable when it comes to the letter q. In economics, and, more importantly, in this paper, q usually denotes a vector of quantities and is a variable. In the LCP, q usually denotes a constant vector, as in (q, M). The Kolstad-Mathiesen (K-M) paper tends to be confusing on this issue by switching without warning from the variable q (economics) to the variable x (mathematics) or to the LCP constant q.

To bring a little more clarity to the discussion, we will not use the symbol q as the constant vector of an LCP. For that matter, we will use (b, A) as the notation for an LCP with affine mapping b + Ax. The K-M paper lacks a notation for the subvector of q obtained by disregarding one of its components. So, with a slight abuse of notation, we write q = (q_1, ..., q_N) = (q_i, q_{-i}), where q_{-i} denotes the outputs of all firms other than firm i. The equality holds up to a permutation of coordinates.

The problem. We assume there are N firms, each producing the same good. The variable q_i (to be determined) denotes the output of the ith firm. The cost of this production is C_i(q_i) and is assumed to be known. There is a known inverse demand function P(Q), where Q = e^T q is the sum of all outputs. The profit of firm i is given by

    π_i(q_i, q_{-i}) = P(Q) q_i − C_i(q_i)    (1)

Strictly speaking, π_i is a function of all N output variables, but only q_i is under the control of firm i.

Definition. A Cournot equilibrium for this set of firms is a vector q* such that

    π_i(q_i*, q_{-i}*) = max_{q_i ≥ 0} π_i(q_i, q_{-i}*)    for all i = 1, ..., N.

Note the absence of any constraints on the q_i other than nonnegativity.

Assumptions. The inverse demand function P(·) and the cost functions C_i(·) are assumed to be twice continuously differentiable. The profit functions are assumed to be concave with respect to own output. That is, for q_{-i} fixed, π_i is a concave function of q_i. Under these assumptions, the set of Cournot equilibria is the same as the set of simultaneous solutions of the first-order optimality conditions of the N individual firms' problems. These are

    f_i(q) ≡ −∂π_i/∂q_i = C_i'(q_i) − P(Q) − q_i P'(Q) ≥ 0,   q_i ∂π_i/∂q_i = 0,   q_i ≥ 0,   i = 1, ..., N.    (2)

We recognize this system as the nonlinear complementarity problem CP(R^N_+, f):

    f(q) ≥ 0,   q ≥ 0,   q^T f(q) = 0.    (3)
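As an illustration of the map f in (2)-(3), here is a minimal sketch for a hypothetical specialization: linear inverse demand P(Q) = a − bQ and quadratic costs C_i(q_i) = c_i q_i + d_i q_i^2. The names a, b, c, d are illustrative choices, not from the paper.

```python
import numpy as np

def marginal_profit_map(q, a, b, c, d):
    """f_i(q) = C_i'(q_i) - P(Q) - q_i P'(Q), the map in CP(R^N_+, f),
    for the hypothetical linear-demand, quadratic-cost specification."""
    Q = q.sum()                      # industry output Q = e^T q
    P = a - b * Q                    # inverse demand P(Q)
    dP = -b                          # P'(Q)
    marginal_cost = c + 2.0 * d * q  # C_i'(q_i)
    return marginal_cost - P - q * dP

q = np.array([1.0, 2.0, 3.0])
f = marginal_profit_map(q, a=10.0, b=1.0,
                        c=np.array([1.0, 1.0, 1.0]),
                        d=np.array([0.5, 0.5, 0.5]))
print(f)
```

A component f_i(q) ≤ 0 signals that firm i could still profit from raising q_i; at a Cournot equilibrium each firm with positive output has f_i(q) = 0.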

Lemma 1. If the profit functions π_i(q_i, q_{-i}) are all twice continuously differentiable and concave with respect to own output q_i, then q* is a Cournot equilibrium if and only if q* ∈ SOL CP(R^N_+, f), as given in (2) and (3). This is easily proved and can be related to the proposition giving necessary and sufficient conditions for a point to be a Nash equilibrium, which were expressed in terms of a variational inequality problem. Under the present circumstances, that VI problem is equivalent to the nonlinear complementarity problem CP(R^N_+, f).

A switch to complementarity theory. Having set up the above nonlinear complementarity problem, Kolstad and Mathiesen state a couple of results from (nonlinear) complementarity theory. With this digression, they also switch the notation of the variable from q to x. The following lemma is due to S. Karamardian (1972, Theorem 3.2).

Lemma 2. Let f : R^N_+ → R^N be continuous. Assume there exists a nonempty compact set K ⊂ R^N_+ such that for every x ∈ R^N_+ \ K there exists a y ∈ K with (x − y)^T f(x) > 0. Then there exists a solution to CP(R^N_+, f).

Note. There is a slight error in the K-M statement of Lemma 2. They take K to be a subset of R^N, whereas Karamardian requires it to be a subset of R^N_+ so that f will be well defined there.

Proof of Lemma 2. For every u ∈ R^N_+, let

    D_u = {x : x ∈ K, (u − x)^T f(x) ≥ 0}.

The sets D_u are clearly closed. The intersection of any finite number of the D_u's, say D_{u_1}, ..., D_{u_m}, is nonempty. To see this, let D be the convex hull of K ∪ {u_1, ..., u_m}. Then D is a nonempty, compact convex subset of R^N_+. It follows from the Hartman-Stampacchia Theorem on variational inequalities that there exists a point x̄ ∈ D such that (x − x̄)^T f(x̄) ≥ 0 for all x ∈ D. So, in particular, (u_i − x̄)^T f(x̄) ≥ 0 for i = 1, ..., m. It remains to prove x̄ ∈ K. If not, the assumption of the lemma implies there exists y ∈ K such that (x̄ − y)^T f(x̄) > 0, that is, (y − x̄)^T f(x̄) < 0, in contradiction to the preceding inequality of the proof (with x = y ∈ D). Hence x̄ ∈ D_{u_1} ∩ ... ∩ D_{u_m}. From the finite intersection property of compact sets, we have

    ∩_{u ∈ R^N_+} D_u ≠ ∅.

This implies that there exists x* ∈ K such that (x − x*)^T f(x*) ≥ 0 for all x ∈ R^N_+, meaning that x* solves CP(R^N_+, f).

Kolstad and Mathiesen attribute the following lemma to Jorge Moré (1974). Actually, the result is informally stated on p. 329 of Moré's paper but is attributed there to Moré and Rheinboldt (1973, Theorem 5.2).

Lemma 3. If f is continuously differentiable (on an open set containing R^N_+) and ∇f(x) is a P-matrix for all x ∈ R^N_+, then there exists at most one solution to CP(R^N_+, f).

Remark. Note that Lemma 3 asserts only the uniqueness, not the existence, of a solution to CP(R^N_+, f). Indeed, it is not enough for ∇f(x) to be a P-matrix for all x ∈ R^N_+, even if the problem is feasible. This was illustrated by Cottle (1965, p. 151). The latter paper establishes the existence of a solution through a heavy-handed assumption called positive boundedness of the Jacobian matrix, which in this case would say that there exists δ ∈ (0, 1) such that for all x, every principal minor of ∇f(x) lies between δ and δ^{-1}.

Towards existence. As noted above, establishing the existence of a solution requires something stronger than the positivity of the principal minors of ∇f(x). Here is what Kolstad and Mathiesen use.

Definition. Industry output is bounded by Q̄ > 0 if output in excess of Q̄ from any producer (firm) implies that the marginal profits of all producers are negative.

Industry boundedness is a reasonable condition and is assumed below.

A bit of matrix terminology. Let A = [a_ij] ∈ R^{N×N}. Then A is row strictly diagonally dominant if there exists a positive vector d such that

    d_i |a_ii| > Σ_{j≠i} d_j |a_ij|,   i = 1, ..., N.    (4)

Analogous terminology holds for column strict diagonal dominance. If d_i = 1 for all i, then A is called a Hadamard matrix. There is quite a bit of literature on diagonally dominant matrices. If A has a positive diagonal and is row strictly diagonally dominant, it is a P-matrix. If A has a positive diagonal and there is a positive vector d with respect to which A has both row and column strict diagonal dominance, then it is called positive strictly diagonally dominant. Such a matrix has to be positive definite in the sense used in this course. A good place to find information on diagonally dominant matrices and their applications in economics is Section 21 of the book Convex Structures and Economic Theory by H. Nikaido (New York: Academic Press, 1968).
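The scaled dominance test in (4) is easy to check numerically. A minimal sketch, with d defaulting to the unit scales of the "Hadamard" case; the example matrix is illustrative:

```python
import numpy as np

def is_row_strictly_diagonally_dominant(A, d=None):
    """Check (4): d_i |a_ii| > sum_{j != i} d_j |a_ij| for every row i,
    for a given positive scaling vector d (default: d = ones)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    d = np.ones(n) if d is None else np.asarray(d, dtype=float)
    B = np.abs(A) * d            # B[i, j] = d_j |a_ij|
    diag = B.diagonal()          # d_i |a_ii|
    off = B.sum(axis=1) - diag   # sum_{j != i} d_j |a_ij|
    return bool(np.all(diag > off))

A = np.array([[3.0, -1.0],
              [1.0,  2.0]])
print(is_row_strictly_diagonally_dominant(A))
```

Swapping `axis=1` for `axis=0` in the row sum would give the column version of the test.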

Sufficient conditions for existence of a Cournot equilibrium.

Theorem 1. If (i) each firm's profit function π_i(·) is twice continuously differentiable, (ii) each firm's profit function is concave in own output, (iii) industry output is bounded, and (iv) the Jacobian of the marginal profits (−∇f) has a negative strictly dominant diagonal for all q ≥ 0, then there exists a unique Cournot equilibrium.

Proof. The assumptions imply that ∇f(q) has a positive strictly dominant diagonal for all q ≥ 0, and hence is a P-matrix. By Lemma 3, CP(R^N_+, f) has at most one solution. By the industry boundedness assumption, Lemma 2 implies that a solution exists (the industry output bound furnishes the compact set K, since any x outside it violates the bound and makes all marginal profits negative). The Cournot equilibrium is unique by Lemma 1.
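As a numerical sanity check, the P-matrix property invoked in the proof (all principal minors positive) can be tested by brute force for small N. The example Jacobian below comes from a hypothetical linear-demand, constant-marginal-cost specification with slope 1, not from the paper.

```python
import itertools
import numpy as np

def is_P_matrix(A, tol=1e-12):
    """Brute-force P-matrix test: every principal minor must be positive.
    Exponential in N, so only suitable as a check for small matrices."""
    n = A.shape[0]
    for r in range(1, n + 1):
        for idx in itertools.combinations(range(n), r):
            if np.linalg.det(A[np.ix_(idx, idx)]) <= tol:
                return False
    return True

# ∇f for P(Q) = a - Q and constant marginal costs: constant in q,
# equal to ones(N, N) + eye(N) (illustrative specification).
N = 3
jac_f = np.ones((N, N)) + np.eye(N)
print(is_P_matrix(jac_f))
```

This Jacobian has a positive strictly dominant diagonal (2 > 1 + 1 fails for N = 3 with unit scales, but the minors 2, 3, 4 are all positive), illustrating that the P-matrix property is weaker than unit-scale dominance.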

Preliminaries of the SLCP Algorithm. The SLCP Algorithm for CP(R^N_+, f) is obvious and easily executed. Proving its local or global convergence is another matter. The linearization of f at y is

    Lf(x; y) ≡ f(y) + ∇f(y)(x − y).    (5)

Note that Lf(x; y) = (f(y) − ∇f(y)y) + ∇f(y)x, which is of the form b + Ax. Note also that there is an implicit assumption here that ∇f is known.
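The splitting of Lf into the LCP data (b, A) can be sketched as follows; the quadratic map used to exercise it is illustrative, not from the paper.

```python
import numpy as np

def linearize(f, jac, y):
    """Return (b, A) with Lf(x; y) = b + A x, as in (5):
    b = f(y) - ∇f(y) y and A = ∇f(y)."""
    A = jac(y)
    b = f(y) - A @ y
    return b, A

f = lambda x: x**2 + 1.0             # componentwise example map
jac = lambda x: np.diag(2.0 * x)     # its Jacobian
y = np.array([1.0, 2.0])
b, A = linearize(f, jac, y)
print(b, A @ y + b, f(y))            # Lf(y; y) = f(y) by construction
```

Note that when f is itself affine, the linearization is exact, so the SLCP iteration terminates after one LCP solve.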

The SLCP Algorithm.

Step 0. Initialization. Pick x^0 ∈ R^N_+ and set k = 0.
Step 1. Test for termination. If x^k satisfies the convergence criterion, stop.
Step 2. Linearization. Compute Lf(x; x^k).
Step 3. Solve the approximation. Let x^{k+1} solve the LCP (b, A) = (f(x^k) − ∇f(x^k)x^k, ∇f(x^k)). Return to Step 1 with k ← k + 1.

Note. In the application of this algorithm to the Cournot equilibrium problem, the vectors x^k are really vectors of production levels q^k of the N firms.

Note. Under the assumptions of Theorem 1, ∇f(x^k) is a P-matrix for all k ≥ 0.
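The steps above can be sketched as a runnable program. The inner LCP (b, A) is solved here by enumerating complementary index sets, which is adequate for tiny N since the P-matrix property guarantees a unique solution (a serious implementation would use Lemke's method). The duopoly data (a, slope, c) are illustrative, not from the paper.

```python
import itertools
import numpy as np

def solve_lcp(b, A, tol=1e-10):
    """Find z >= 0 with w = b + A z >= 0 and z^T w = 0, by brute-force
    enumeration of which components of z are allowed to be positive."""
    n = len(b)
    for pattern in itertools.product([0, 1], repeat=n):
        idx = [i for i in range(n) if pattern[i]]     # indices with z_i > 0
        z = np.zeros(n)
        if idx:
            try:
                z[idx] = np.linalg.solve(A[np.ix_(idx, idx)], -b[idx])
            except np.linalg.LinAlgError:
                continue
        w = b + A @ z
        if np.all(z >= -tol) and np.all(w >= -tol):
            return np.maximum(z, 0.0)
    raise RuntimeError("no complementary solution found")

def slcp(f, jac, x0, max_iter=50, tol=1e-10):
    """SLCP: x^{k+1} solves LCP(f(x^k) - ∇f(x^k) x^k, ∇f(x^k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        A = jac(x)
        b = f(x) - A @ x
        x_new = solve_lcp(b, A)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative duopoly: P(Q) = a - slope*Q, C_i(q) = c_i q, so that
# f_i(q) = c_i - (a - slope*Q) + slope*q_i, which is affine in q.
a, slope = 10.0, 1.0
c = np.array([1.0, 1.0])
f = lambda q: c - (a - slope * q.sum()) + slope * q
jac = lambda q: slope * (np.ones((2, 2)) + np.eye(2))
q_star = slcp(f, jac, np.zeros(2))
print(q_star)   # symmetric equilibrium q_i = (a - c_i) / (3 * slope)
```

Because f is affine in this example, the first LCP solve is already exact and the second iteration merely confirms convergence.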

Local convergence and global convergence. Different sorts of algorithms yield different sorts of convergence results. There is a variant of Newton's method that is sometimes used on complementarity problems. Under favorable conditions it converges quadratically, but only from a starting point sufficiently near the solution. For this, it is helpful to have the following

Definition. The function g is Lipschitz continuous if there exists a positive scalar β such that for all x and y in the domain of g, ‖g(x) − g(y)‖ ≤ β‖x − y‖.

A local convergence result. Kolstad and Mathiesen attribute the following lemma to Pang and Chan (1982). It appears to be a variant of Pang and Chan's Corollary 2.6, a consequence of their Theorem 2.1, which is about variational inequalities and Newton's method, not Lemke's Algorithm.

Lemma 4. Let K be a nonempty closed convex subset of R^N. Let f : R^N_+ → R^N be continuously differentiable. Suppose that x* solves CP(R^N_+, f) and that ∇f(x*) is positive definite. Then there exists a neighborhood of x* such that if the initial iterate x^0 is chosen there, the sequence {x^k} of solutions to LCP(f(x^{k−1}) − ∇f(x^{k−1})x^{k−1}, ∇f(x^{k−1})) is well defined and converges to x*. Moreover, if ∇f is Lipschitz continuous at x*, then {x^k} converges quadratically to x*.

Do you notice anything strange about this lemma?

Kolstad and Mathiesen prove the following local convergence theorem by applying Lemma 4.

Theorem 2. Let q* solve (2), the conditions for a Cournot equilibrium. Assume that (i) each firm's profit function π_i is twice continuously differentiable, and (ii) the Jacobian of the marginal profits (i.e., −∇f) has a negative strictly dominant diagonal. Then there exists a neighborhood of q* such that when the initial iterate q^0 is chosen there, the SLCP Algorithm computes a sequence {q^k} that converges to q* (quadratically if ∇²π is Lipschitz continuous).

Global convergence. Here Kolstad and Mathiesen are interested in getting a convergent sequence of iterates {q^k} without having to start from a point in a sufficiently small neighborhood of the solution being sought. Here, following Pang and Chan, they bring in K-matrices, that is, matrices belonging to P ∩ Z. Their version of Pang and Chan's Theorem 4.2 is

Lemma 5. Let f : R^N_+ → R^N be convex and differentiable. If there exists a K-matrix X such that ∇f(x)X ∈ Z for all feasible x (that is, all x ≥ 0 with f(x) ≥ 0), then for any feasible x^0 there exists a well defined sequence {x^k}, where for k ≥ 1, x^k solves the LCP(f(x^{k−1}) − ∇f(x^{k−1})x^{k−1}, ∇f(x^{k−1})), such that {x^k} converges to some solution of CP(R^N_+, f).

To make this lemma applicable to the Cournot equilibrium problem, Kolstad and Mathiesen impose extra conditions on the mapping f.

They assume that the components of f are convex functions and that there exists a K-matrix X such that ∇f(q)X ∈ Z for all feasible q. Some technical details follow on why these conditions are economically reasonable. This discussion culminates with

Theorem 3. Assume that (i) the inverse demand and costs are twice continuously differentiable, (ii) the industry output is bounded, (iii) the marginal profits (the −f_i in (2)) are concave, and (iv) the Jacobian of the marginal profits (−∇f) has a negative strictly dominant diagonal with unitary scales (i.e., is Hadamard). Then there exists a well defined sequence {q^k}, where each q^k solves the LCP(f(q^{k−1}) − ∇f(q^{k−1})q^{k−1}, ∇f(q^{k−1})), such that {q^k} converges to the unique solution of CP(R^N_+, f) for any feasible q^0 (at least one of which exists).