Cheap Talk: Multiple Senders and Multiple Receivers




Degree Program in Economics, track Models and Methods of Quantitative Economics. Final thesis: Cheap Talk: Multiple Senders and Multiple Receivers. Advisor: Prof. Piero Gottardi. Candidate: Arya Kumar Srustidhar Chand, Student ID 816487. Academic Year 2007/2008.

Cheap Talk: Multiple Senders and Multiple Receivers. Arya Kumar Srustidhar Chand. Advisor: Prof. Piero Gottardi. Ca' Foscari University, Venice, Italy. July 2008. Dissertation submitted to Ca' Foscari University in partial fulfillment of the requirements for the award of the Erasmus Mundus Master Degree in Models and Methods of Quantitative Economics.

Introduction

The notion of Cheap Talk was first introduced in the famous paper Strategic Information Transmission by Vincent Crawford and Joel Sobel (1982). Cheap Talk is the strategic transmission of information between two types of agents or players: (1) Senders, who have private information about the state of Nature, and (2) Receivers, who do not have the information but whose actions affect the welfare of both the Senders and the Receivers. The word "Cheap" comes from the fact that it costs the Senders nothing exogenously to offer the advice, nor the Receivers to obtain it. The Senders send possibly noisy signals or messages, based on their private information, to the Receivers. The Receivers then take actions based upon the information contained in the signal. In equilibrium, the decision rules that describe how agents choose their actions in the situations in which they find themselves are best responses to one another.

To motivate the subject, we first mention some practical applications of Cheap Talk and then present the main outline of our work. We can view the Senders as experts who have private information and the Receivers as decision makers or audiences who seek the advice of experts. Consider the case of corporate CEOs or political leaders, who in practice are not experts in all relevant fields and hence need outside experts. CEOs routinely seek the advice of marketing specialists, investment bankers and management consultants. Political leaders rely on a group of economic and political advisors. Investors seek tips from stockbrokers and financial advisors. But the experts are by no means disinterested in the actions taken by the decision makers. Investment banks stand to gain from new issues and corporate mergers, decisions about which they regularly offer advice. The political future of economic and military advisors may be affected by the decisions on which they give counsel. Stockbrokers are interested in the investment decisions of their clients. And the preferences of the experts may not match those of the decision makers. Sharing information makes better outcomes available, but it also has strategic effects that make one suspect that revealing everything to an opponent is not usually the most advantageous policy. Yet completely self-interested experts will still frequently find it advantageous to reveal some information. So the natural questions are: what will the advice of the experts be, in which cases will they reveal the truth, in which cases will they not, whether it is necessary for decision makers to consult

a group of experts or whether one expert is enough, and whether the experts like advising a group of competitors or prefer to advise only certain competitors. Here we try to obtain the answers to all these questions by considering some simple models.

In Crawford and Sobel[1], Cheap Talk is described between one Sender and one Receiver in a continuous one-dimensional bounded state space and one-dimensional action space setting. They have shown that if the preferences of the players do not match, then there is no truth revelation and a partition equilibrium occurs, where the number of actions taken by the Receiver is finite and, for each element of the partition of the state space, only one action is induced. Farrell and Gibbons[2] discuss a basic Cheap Talk model between one Sender and two Receivers with only two possible states of Nature and two possible actions for each player, under the assumption that each Receiver's utility is independent of the other Receiver's action. They have proved that there may be no separating equilibrium between the Sender and each Receiver alone, yet in the presence of two Receivers there may be a separating equilibrium in which the Sender reveals the true state. Krishna and Morgan[3] discuss Cheap Talk between two Senders and one Receiver in a continuous one-dimensional bounded state space and one-dimensional action space, where the Senders send signals sequentially and in public. They have shown that if the Receiver consults two Senders whose preferences lie on opposite sides of his own, then he can extract the truth on some interval of the state space, whereas if he consults two Senders whose preferences lie on the same side of his own, then it is better for him to consult only the Sender whose preference lies closer to his.

An important point that can be noticed from the above papers is that if the Senders tell the truth and the Receivers believe them, then everyone obtains more utility, i.e. truth telling always pays off. But the preferences of the agents are not always the same, so the Senders tend not to reveal the truth, and the Receivers in turn tend not to believe the signals sent by the Senders. This makes the calculation of the equilibrium complicated, which is rightly summarized in the words of Sir Walter Scott: "Oh, what a tangled web we weave, when first we practice to deceive!" Our work has drawn inspiration from all the above works and has tried

to generalize Cheap Talk between multiple Senders and multiple Receivers in a continuous one-dimensional bounded state space and one-dimensional action space setting, keeping the Senders' signals sequential and public and the assumption that each Receiver's action does not affect the other Receivers' utility. We find it necessary to generalize Cheap Talk to multiple Senders and multiple Receivers because in practical situations we typically have many Senders and many Receivers.

The work is structured in the following way. In Chapter 1, we briefly discuss the Cheap Talk model between one Sender and one Receiver presented in Crawford and Sobel, with a detailed discussion of an example. In Chapter 2, we discuss the work by Farrell and Gibbons between one Sender and two Receivers; we extend their model to the continuous one-dimensional bounded state space and one-dimensional action space of Crawford and Sobel in Section 2 of Chapter 2 and present an example in Section 3 of Chapter 2. In Chapter 3, we present the work by Krishna and Morgan between two Senders and one Receiver. The discussion of the above papers has been done in the spirit of extending Cheap Talk to the multiple Senders and multiple Receivers case. Finally, in Chapter 4, we merge our extension of the Farrell and Gibbons model to the continuous case with the work by Krishna and Morgan to obtain the equilibria of Cheap Talk between two Senders and two Receivers.

In our work, we have used Quadratic Loss utility functions for the agents. As Cheap Talk models have a plethora of equilibria, in most cases we have considered the most informative equilibrium, i.e. the equilibrium where the Senders send the most information regarding the true state. In addition to the abstract setting, we have provided sufficient examples with calculations to obtain the solutions and demonstrate the implications of the models. The examples offer the essential intuition for the general results, and all the main propositions can be understood by means of the examples. The welfare implications of the equilibria are discussed in the examples.

Now we say more about the results we obtained from our work. As mentioned before, we have extended the model of Farrell and Gibbons to the continuous case in Sections 2 and 3 of Chapter 2. There we found that unless the preferences of all players are the same or the preference of the Sender lies exactly in the middle of the preferences of the Receivers, we have

a partition equilibrium with properties like those in Crawford and Sobel. We have shown that the equilibrium actions of the Receivers are separated by the difference of their biases, and that if the Sender's preference lies between the preferences of the Receivers, then there are more partition elements in the equilibrium than with the Sender and each Receiver separately.

As mentioned before, in Chapter 4 we present our extension to two Senders and two Receivers. We have shown that the equilibrium actions of the Receivers are separated by the difference of their biases. Next we present the cases in which there is a finite number of actions in the equilibrium and those in which there is an infinite number of actions, i.e. truth revelation on some interval of the state space. We have shown that if the preferences of both Senders lie strictly to one side of the preferences of the Receivers, then we have a partition equilibrium resembling the results for like biases in Krishna and Morgan. The interesting cases we have obtained are these: when the preferences of both Receivers lie between the preferences of the Senders, an infinite number of actions is induced in the equilibrium. Similarly, if the preferences of both Senders lie between the preferences of the Receivers, and the preference of the Sender with the smaller bias lies closer to the preference of the Receiver with the smaller bias while the preference of the Sender with the higher bias lies closer to the preference of the Receiver with the higher bias, then an infinite number of actions is induced in the equilibrium. Also, if the preferences of the Senders and Receivers alternate, and the preference of one of the Receivers is the highest among all players, and the preference of the Sender with the higher bias lies closer to that Receiver's preference than to the other Receiver's, then there will be an infinite number of actions. Similarly, if the preference of one of the Receivers is the lowest among all players and the preference of the Sender with the lower bias lies closer to him than to the other Receiver, we again have an infinite number of actions. So with two Senders and two Receivers, we see that there are many situations in which we can extract the truth on some interval of the state space by adjusting the preferences suitably. Hence players with suitable preferences can form a cabinet so that the truth is revealed, which gives more utility to all the players.

Acknowledgements

I express my sincere gratitude to my advisor Prof. Piero Gottardi, who is the principal force behind this work, for his thorough guidance, insights into the problems, encouragement and help. I am thankful to Ca' Foscari University for providing me a wonderful working environment and a Ph.D. admission to continue my further studies. I am indebted to all my teachers and administrative staff at the University of Paris 1, Pantheon-Sorbonne, France, Bielefeld University, Germany, and Ca' Foscari University for the care and guidance they have provided me during the two years of the Erasmus Mundus Master Program. I am grateful to my family members for their love, affection, grace and care in all steps of my career. I thank all of my friends across the globe for their friendship, help and support. With a deep respect to the Indian Philosophy, Values and Way of Life, I humbly offer this work at His Holy Feet of Parama Premamaya Sri Sri Thakur Anukulchandra.

Contents

1 One Sender and One Receiver
  1 The Model
  2 Equilibrium
  3 Example
2 One Sender and Two Receivers
  1 The Model in Discrete Case
    1.1 Equilibrium
  2 Extension to Continuous Case
    2.1 Equilibrium
  3 Example
3 Two Senders and One Receiver
  1 The Model
  2 Equilibrium
  3 Example
4 Two Senders and Two Receivers
  1 The Model
  2 Equilibrium
  3 Example
  4 I Senders and J Receivers

Chapter 1

One Sender and One Receiver

1 The Model

Here we discuss the main results of Cheap Talk between one Sender and one Receiver as given in Crawford and Sobel[1], and then work out the example discussed in Crawford and Sobel in great detail to look at the results. The material we present here is aimed at the extension to multiple Senders and multiple Receivers.

There are two players, a Sender (S) and a Receiver (R); only S has private information. S observes the true state of nature $\theta$, whose differentiable probability distribution function $F(\theta)$, with density $f(\theta)$, is supported on $[0,1]$ ($f(\theta) > 0$ for all $\theta \in [0,1]$). So the prior belief of the players about the state of nature is $F(\theta)$. S has a twice continuously differentiable von Neumann-Morgenstern utility function $U^S(y, \theta, b_S)$, where $y$, a real number, is the action taken by R upon receiving S's signal. R's twice continuously differentiable von Neumann-Morgenstern utility function is denoted by $U^R(y, \theta, b_R)$. All aspects of the game except the state $\theta$ are common knowledge. Here $b_S$ is the bias of S and $b_R$ is the bias of R; the biases measure how nearly the agents' interests coincide. Throughout our work we shall assume that, for each $\theta$ and for $i = S, R$, denoting partial derivatives by subscripts in the usual way, $U^i_1(y, \theta, b_i) = 0$ for some $y$, and $U^i_{11}(\cdot) < 0$, so that $U^i$ has a maximum in $y$ for each given $(\theta, b_i)$ pair, and that $U^i_{12}(\cdot) > 0$. The latter condition ensures that the best value

of $y$ from a fully informed agent's standpoint is a strictly increasing function of the true value of $\theta$. The Quadratic Loss utility functions for the agents satisfy the above properties:

$$U^S(y, \theta, b_S) = -(y - (\theta + b_S))^2, \qquad U^R(y, \theta, b_R) = -(y - (\theta + b_R))^2 \tag{1.1}$$

The game proceeds as follows. S observes the true state $\theta$ and then sends a signal or message $n \in [0,1]$ to R. The signal may be random and can be viewed as a noisy estimate of $\theta$. R processes the information in S's signal and chooses an action or decision, which determines the players' payoffs. The solution concept employed here is Harsanyi's Bayesian Nash equilibrium, which is simply a Nash equilibrium in the decision rules that relate agents' strategies to their information and to the situations in which they find themselves. Each agent responds optimally to his opponent's strategy choice, taking into account its implications in the light of his probabilistic beliefs and maximizing expected utility over his possible strategy choices. Although S's signal necessarily precedes R's action in time, because R observes only the signal (not the signaling rule), S's choice of signaling rule and R's choice of action rule are strategically simultaneous.

Definition 1.1 A family of signaling rules for S, denoted by $q(n|\theta)$ (the probability of sending signal $n$ in state $\theta$), and an action rule for R, denoted $y(n)$ (the action taken upon receiving the signal $n$), constitute a Bayesian Nash equilibrium of the above game if

(I) for each $\theta \in [0,1]$, $\int_N q(n|\theta)\,dn = 1$, where $N$ is the set of feasible signals for given $\theta$, and if $n$ is in the support of $q(\cdot|\theta)$, then $n$ solves $\max_{n \in N} U^S(y(n), \theta, b_S)$; and

(II) for each $n \in [0,1]$, $y(n)$ solves $\max_y \int_0^1 U^R(y, \theta, b_R)\, p(\theta|n)\, d\theta$, where $p(\theta|n) \equiv q(n|\theta) f(\theta) \big/ \int_0^1 q(n|t) f(t)\, dt$.

Condition (I) says that S's signaling rule yields an expected-utility maximizing action for the true state $\theta$, taking R's action rule as given. Condition (II)

says that R responds optimally to each possible signal, using Bayes' Rule to update his prior belief, taking into account S's signaling strategy and the signal he receives. Since $U^R_{11}(\cdot) < 0$, the objective function in (II) is strictly concave in $y$; therefore, R will never use mixed strategies in equilibrium.

2 Equilibrium

In this section we give the characterization of the equilibrium as given in Crawford and Sobel. Define, for all $\theta \in [0,1]$,

$$y^S(\theta, b_S) \equiv \arg\max_y U^S(y, \theta, b_S), \qquad y^R(\theta, b_R) \equiv \arg\max_y U^R(y, \theta, b_R) \tag{1.2}$$

where $\arg\max_y U^S(y, \theta, b_S)$ denotes the value of $y$ that maximizes $U^S(y, \theta, b_S)$. Since $U^i_{11}(\cdot) < 0$ and $U^i_{12}(\cdot) > 0$, $i = R, S$, the functions $y^S(\theta, b_S)$ and $y^R(\theta, b_R)$ are well defined and continuous in $\theta$.

Lemma 2.1 (Crawford and Sobel): If $y^S(\theta, b_S) \neq y^R(\theta, b_R)$ for all $\theta$, then there exists an $\varepsilon > 0$ such that if $u$ and $v$ are two actions taken by R that are induced in equilibrium by S, then $|u - v| \geq \varepsilon$. Further, the set of actions induced in equilibrium is finite.

So Lemma 2.1 establishes the fact that any two actions induced in equilibrium are separated by some distance. Since the state space is bounded between 0 and 1 and the set of actions taken by the Receiver is bounded by $y^R(0, b_R)$ and $y^R(1, b_R)$, we have a finite number of actions, and so the state space $[0,1]$ is divided into a finite number of partition elements, where different actions are induced on different elements. We call this situation a partition equilibrium. There may exist more than one partition equilibrium, i.e. more than one way of partitioning the state space $[0,1]$. Define, for all $a, a' \in [0,1]$ with $a \leq a'$,

$$\bar y(a, a') \equiv \begin{cases} \arg\max_y \int_a^{a'} U^R(y, \theta, b_R) f(\theta)\, d\theta & \text{if } a < a' \\ y^R(a, b_R) & \text{if } a = a' \end{cases} \tag{1.3}$$

Theorem 2.2 (Crawford and Sobel): Suppose $b_S, b_R$ are such that $y^S(\theta, b_S) \neq y^R(\theta, b_R)$ for all $\theta \in [0,1]$. Then there exists a positive integer $N(b_S, b_R)$ such that for every $N$ with $1 \leq N \leq N(b_S, b_R)$, there exists at least one equilibrium $(y(n), q(n|\theta))$, where $q(n|\theta)$ is uniform and supported on $[a_i, a_{i+1}]$ if $\theta \in (a_i, a_{i+1})$, which satisfies the following properties:

(A) $U^S(\bar y(a_i, a_{i+1}), a_i, b_S) - U^S(\bar y(a_{i-1}, a_i), a_i, b_S) = 0$ for $i = 1, \ldots, N-1$;
(A1) $y(n) = \bar y(a_i, a_{i+1})$ for all $n \in (a_i, a_{i+1})$;
(A2) $a_0 = 0$;
(A3) $a_N = 1$.

Further, any equilibrium is essentially equivalent to one in this class, for some value of $N$ with $1 \leq N \leq N(b_S, b_R)$.

So Theorem 2.2 tells us that for a given partition of size $N$, if the true state is $\theta \in (a_i, a_{i+1})$, then the equilibrium strategy of S is to send a signal $n$ which has a uniform distribution over $[a_i, a_{i+1}]$. Since in equilibrium the action of R is consistent with his belief, which is derived from Bayes' rule, he will interpret the signal correctly and take the action given in (A1) of Theorem 2.2.

3 Example

In this section we work out the example discussed in Crawford and Sobel, with more calculations and more discussion, to illustrate the equilibrium described in Theorem 2.2. Here we take $F(\theta)$ to be uniform on $[0,1]$, i.e. $f(\theta) = 1$ for all $\theta \in [0,1]$. The utility functions of the agents are given in (1.1). Consider the equilibrium specifications given in Theorem 2.2. Let $N$ denote the number of equilibrium partition elements and $a(N) = (a_0(N), \ldots, a_N(N))$. For simplicity we shall write $a$ instead of $a(N)$ and $a_i$ instead of $a_i(N)$ for $i = 0, \ldots, N$.

From equation (1.3),

$$\bar y(a_i, a_{i+1}) = \arg\max_y \int_{a_i}^{a_{i+1}} U^R(y, \theta, b_R) f(\theta)\, d\theta = \arg\max_y \int_{a_i}^{a_{i+1}} -(y - (\theta + b_R))^2\, d\theta$$

$$= \arg\max_y \left[ \frac{(y - (\theta + b_R))^3}{3} \right]_{\theta = a_i}^{\theta = a_{i+1}} = \arg\max_y \frac{(y - (a_{i+1} + b_R))^3 - (y - (a_i + b_R))^3}{3}$$

Maximizing the above expression with respect to $y$, we obtain

$$\bar y(a_i, a_{i+1}) = \frac{a_i + a_{i+1}}{2} + b_R \tag{1.4}$$

From condition (A) we have, for $i = 1, \ldots, N-1$,

$$-\left( \frac{a_i + a_{i+1}}{2} + b_R - (a_i + b_S) \right)^2 = -\left( \frac{a_{i-1} + a_i}{2} + b_R - (a_i + b_S) \right)^2 \tag{1.5}$$

Given the monotonicity of $a(N)$, we have

$$a_{i+1} = 2a_i - a_{i-1} + 4(b_S - b_R) \qquad (i = 1, \ldots, N-1) \tag{1.6}$$

Using the fact that $a_0 = 0$, this second-order linear difference equation has a class of solutions parametrized by $a_1$:

$$a_i = a_1 i + 2i(i-1)(b_S - b_R) \qquad (i = 1, \ldots, N) \tag{1.7}$$

The intuition for the solution being parametrized by $a_1$ can be found in the proof of Theorem 2.2. Since $a_i \leq 1$,

$$2i(i-1)(b_S - b_R) < 1 \tag{1.8}$$

(because $a_1 > 0$ and we can choose $a_1$ as small as we wish to keep $a_i \leq 1$).
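As a quick check on (1.4), the first-order condition can be verified symbolically. The following sketch is ours (assuming the uniform density $f(\theta) = 1$ of this example, and Python with sympy); it recovers $\bar y(a_i, a_{i+1}) = (a_i + a_{i+1})/2 + b_R$.

```python
import sympy as sp

y, theta, b_R, a_i, a_j = sp.symbols('y theta b_R a_i a_j', real=True)

# Receiver's expected utility from action y when theta is known to lie in
# [a_i, a_j], with quadratic loss and uniform density f(theta) = 1
EU = sp.integrate(-(y - (theta + b_R))**2, (theta, a_i, a_j))

# The first-order condition in y recovers the induced action in (1.4)
y_star = sp.solve(sp.diff(EU, y), y)[0]
print(sp.simplify(y_star - ((a_i + a_j) / 2 + b_R)))  # prints 0
```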

The maximum value of $i$ satisfying the above inequality constraint gives $N(b_S, b_R)$, assuming $b_S > b_R$. From calculations, we find

$$N(b_S, b_R) = \left\lceil -\frac{1}{2} + \frac{1}{2}\left(1 + \frac{2}{b_S - b_R}\right)^{1/2} \right\rceil$$

where $\lceil z \rceil$ denotes the smallest integer greater than or equal to $z$. We see that if $b_S - b_R \geq 1/4$, then $N(b_S, b_R) = 1$. But if $b_S < b_R$, then (1.8) does not help us to calculate $N(b_S, b_R)$. From (1.7), we know $a_N = a_1 N + 2N(N-1)(b_S - b_R)$, but since $a_N = 1$, we have

$$a_1 = \frac{1 - 2N(N-1)(b_S - b_R)}{N} \tag{1.9}$$

As $a_1 \leq 1$, we should have

$$\frac{1 - 2N(N-1)(b_S - b_R)}{N} \leq 1 \tag{1.10}$$

and (1.10) helps us to calculate $N(b_S, b_R)$ in the case $b_S < b_R$. From (1.8) and (1.10), it is clear that as $b_S$ approaches $b_R$, $N(b_S, b_R)$ approaches infinity. As $|b_S - b_R|$ grows, $N(b_S, b_R)$ eventually falls to unity, which means there is no partition of the state space. When there is no partition of the state space in equilibrium, the Sender does not reveal any information and the Receiver takes his action according to his own prior belief. This situation is called a babbling equilibrium, because it is as if the Sender just babbles and the Receiver does not listen to the Sender, taking his action according to his own prior belief, which is given by the prior probability distribution $F(\theta)$ over the state space. Substituting the value of $a_1$ from (1.9) into (1.7) yields

$$a_i = \frac{i}{N} + 2(b_S - b_R)\, i\,(i - N) \qquad (i = 0, \ldots, N) \tag{1.11}$$

and so

$$a_i - a_{i-1} = \frac{1}{N} + 2(b_S - b_R)(2i - N - 1) \tag{1.12}$$

The ex-ante expected utility of the Receiver R for a given partition of size $N$ is

$$EU^R = \sum_{i=0}^{N-1} \int_{a_i}^{a_{i+1}} -\left( \bar y(a_i, a_{i+1}) - (\theta + b_R) \right)^2 d\theta = \sum_{i=0}^{N-1} \int_{a_i}^{a_{i+1}} -\left( \frac{a_i + a_{i+1}}{2} - \theta \right)^2 d\theta = -\frac{1}{12} \sum_{i=0}^{N-1} (a_{i+1} - a_i)^3 \tag{1.13}$$

The ex-ante expected utility of the Sender S for a given partition of size $N$ is

$$EU^S = \sum_{i=0}^{N-1} \int_{a_i}^{a_{i+1}} -\left( \bar y(a_i, a_{i+1}) - (\theta + b_S) \right)^2 d\theta = \sum_{i=0}^{N-1} \int_{a_i}^{a_{i+1}} -\left( \frac{a_i + a_{i+1}}{2} + b_R - b_S - \theta \right)^2 d\theta$$

$$= -\frac{1}{12} \sum_{i=0}^{N-1} (a_{i+1} - a_i)^3 - (b_S - b_R)^2 = EU^R - (b_S - b_R)^2 \tag{1.14}$$

So we always have $EU^R \geq EU^S$. Using (1.12) and (1.13), for a given $N$,

$$EU^R = -\frac{1}{12N^2} - \frac{(b_S - b_R)^2 (N^2 - 1)}{3} \tag{1.15}$$

So from the previous discussion we know that as $(b_S - b_R)$ approaches zero, $N(b_S, b_R)$ tends to infinity, and so from (1.14) and (1.15) we observe that

$EU^R$ tends to zero and so also $EU^S$ tends to zero (zero is the highest possible value of both $EU^R$ and $EU^S$). Also we can see from (1.15) that the expected utility associated with the partition of size $N(b_S, b_R)$ is the highest for both players. We call this partition the most informative equilibrium, as the Receiver can then discern among the states most effectively. When $b_S = b_R$, $N(b_S, b_R)$ is infinite, which is the finest partition; that means the Sender tells the true state and the Receiver believes him, i.e. there is truth revelation or full communication.

Consider the case $b_R = 0$ and $b_S = 1/20$, so that $b_S > b_R$. From (1.8), we have $N(b_S, b_R) = 3$. There are three partition equilibria: $N = 1$ with $a_0(1) = 0$, $a_1(1) = 1$; $N = 2$ with $a_0(2) = 0$, $a_1(2) = 2/5$, $a_2(2) = 1$; and $N = 3$ with $a_0(3) = 0$, $a_1(3) = 2/15$, $a_2(3) = 7/15$, $a_3(3) = 1$.

For $N = 1$: $EU^R = -0.0833$, $EU^S = -0.0858$;
for $N = 2$: $EU^R = -0.0233$, $EU^S = -0.0258$;
for $N = 3$: $EU^R = -0.0160$, $EU^S = -0.0185$.

As $N$ increases, both $EU^R$ and $EU^S$ increase. So for both players, the equilibrium with the maximum number of partition elements is considered better. That is why in our future discussions we shall always consider the equilibrium partition associated with $N(b_S, b_R)$.

Consider the case $b_R = 0$ and $b_S = 1/40$, so again $b_S > b_R$. From (1.8), we have $N(b_S, b_R) = 4$. The partition associated with $N = 4$ is $a_0(4) = 0$, $a_1(4) = 1/10$, $a_2(4) = 3/10$, $a_3(4) = 6/10$, $a_4(4) = 1$. For this partition, $EU^R = -0.0083$ and $EU^S = -0.0089$. We see that as $b_S$ moves closer to $b_R$, $N(b_S, b_R)$ increases, and the utility associated with the partition of size $N(b_S, b_R)$ increases. So we can conclude that as the preferences get closer, there is more information transmission (more partition elements) and hence more utility.
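The closed forms (1.8)-(1.15) make these numbers easy to reproduce. Below is a minimal numerical sketch (ours; the function names are illustrative) for the uniform-quadratic case with $b = b_S - b_R > 0$; it reprints the partitions and expected utilities computed above.

```python
import math

def max_partitions(b):
    """Largest N with 2*N*(N-1)*b < 1, i.e. N(b_S, b_R) for b = b_S - b_R > 0."""
    return math.ceil(-0.5 + 0.5 * math.sqrt(1 + 2.0 / b))

def partition(N, b):
    """Partition points a_i = i/N + 2*b*i*(i - N), equation (1.11)."""
    return [i / N + 2 * b * i * (i - N) for i in range(N + 1)]

def expected_utilities(a, b):
    """EU_R from (1.13) and EU_S = EU_R - b**2 from (1.14)."""
    eu_r = -sum((a[i + 1] - a[i]) ** 3 for i in range(len(a) - 1)) / 12
    return eu_r, eu_r - b ** 2

for b in (1 / 20, 1 / 40):                  # b_R = 0, so b = b_S
    N = max_partitions(b)
    a = partition(N, b)
    eu_r, eu_s = expected_utilities(a, b)
    # b = 1/20: N = 3, a = [0, 2/15, 7/15, 1], EU_R ~ -0.0160, EU_S ~ -0.0185
    # b = 1/40: N = 4, a = [0, 0.1, 0.3, 0.6, 1], EU_R ~ -0.0083, EU_S ~ -0.0089
    print(N, [round(x, 4) for x in a], round(eu_r, 4), round(eu_s, 4))
```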

Chapter 2

One Sender and Two Receivers

1 The Model in Discrete Case

Here we discuss the model by Farrell and Gibbons[2] for Cheap Talk between one Sender and two Receivers in a discrete state space and action space setting, with the discussion aimed at our extension to multiple Senders and Receivers.

There is a Sender S, who observes the state of the world $\theta \in \{\theta_1, \theta_2\}$; the common-knowledge prior probability of state $\theta_1$ is $\pi$. There are two audiences or Receivers, $R_1$ and $R_2$. Upon receiving the signal from S, $R_1$ chooses $r_1^1$ or $r_1^2$ and $R_2$ chooses $r_2^1$ or $r_2^2$. Each Receiver's payoff depends on his own action and on the state of the world $\theta$, and we also assume that his payoff does not depend on the other Receiver's action. If one Receiver had an action that was always optimal for him irrespective of his beliefs about S, then the Sender would not have to consider his reactions to his talk. Consequently, there would in effect be only one Receiver, and this problem we studied in the previous chapter. So, without loss of generality, we suppose that the Receivers' payoffs are such that if the state were known to be $\theta_i$ ($i = 1, 2$), then $R_j$ ($j = 1, 2$) would choose $r_j^i$. By normalizing (setting the lowest payoff to be zero), we can display these payoffs as in Tables 1A and 1B. Our assumption is that $x_1$, $x_2$, $y_1$ and $y_2$ are all positive.

TABLE 1A - $R_1$'s PAYOFF (columns: true state)

  $R_1$'s action    $\theta_1$    $\theta_2$
  $r_1^1$           $x_1 > 0$     $0$
  $r_1^2$           $0$           $x_2 > 0$

TABLE 1B - $R_2$'s PAYOFF (columns: true state)

  $R_2$'s action    $\theta_1$    $\theta_2$
  $r_2^1$           $y_1 > 0$     $0$
  $r_2^2$           $0$           $y_2 > 0$

Let $U^S_{R_1}(\theta, r_1^i)$ be the payoff to S in state $\theta$ when S talks to $R_1$ in private and $R_1$ takes the action $r_1^i$ ($i = 1, 2$). Similarly, let $U^S_{R_2}(\theta, r_2^j)$ be the payoff to S in state $\theta$ when S talks to $R_2$ in private and $R_2$ takes the action $r_2^j$ ($j = 1, 2$). Simplifying our problem further, we assume that S's payoff $U^S$ in state $\theta$, when he speaks to both Receivers in public, is the sum of two components: one, $U^S_{R_1}$, depending on $R_1$'s action $r_1^i$ ($i = 1, 2$) and the state $\theta$; the other, $U^S_{R_2}$, depending on $R_2$'s action $r_2^j$ ($j = 1, 2$) and the state $\theta$. So we have

$$U^S(\theta, r_1^i, r_2^j) = U^S_{R_1}(\theta, r_1^i) + U^S_{R_2}(\theta, r_2^j) \tag{2.1}$$

We write $U^S_{R_1}(\theta_i, r_1^i) = v_i$ and $U^S_{R_2}(\theta_i, r_2^i) = w_i$ for $i = 1, 2$, where $v_i, w_i \in \mathbb{R}$, and we normalize so that $U^S_{R_1}(\theta_i, r_1^j) = U^S_{R_2}(\theta_i, r_2^j) = 0$ for $j \neq i$ (Tables 2A and 2B give S's payoffs with respect to the Receivers' actions). Thus, for example, if $\theta = \theta_1$, $R_1$ chooses $r_1^2$ and $R_2$ chooses $r_2^1$, then S's payoff is $0 + w_1$, $R_1$'s is $0$ and $R_2$'s is $y_1$.

TABLE 2A - S's PAYOFF WITH $R_1$ (columns: true state)

  $R_1$'s action    $\theta_1$    $\theta_2$
  $r_1^1$           $v_1$         $0$
  $r_1^2$           $0$           $v_2$

TABLE 2B - S's PAYOFF WITH $R_2$ (columns: true state)

  $R_2$'s action    $\theta_1$    $\theta_2$
  $r_2^1$           $w_1$         $0$
  $r_2^2$           $0$           $w_2$

If a Receiver does not have any information beyond that represented by his prior belief $\pi$, Receiver $R_1$ will take his pooling action:

$$r_1^{pool} = \begin{cases} r_1^1 & \text{if } \pi x_1 \geq (1 - \pi) x_2 \\ r_1^2 & \text{otherwise} \end{cases} \tag{2.2}$$

Receiver $R_2$'s pooling action $r_2^{pool}$ is defined similarly. Since S's signal is not an argument of any player's payoff function, we do not need any notation for the signals themselves; only their information content matters. Since the meanings conveyed in a pure strategy equilibrium can only be "$\theta = \theta_1$", "$\theta = \theta_2$" and "no information", it is convenient to assume that no other messages are used in equilibrium.
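The pooling rule (2.2) is just a comparison of prior expected payoffs; a one-line sketch (ours, with illustrative numbers) makes this concrete.

```python
def pooling_action(pi, x1, x2):
    """R_1's action on prior information alone, per (2.2):
    r_1^1 is optimal iff pi * x1 >= (1 - pi) * x2."""
    return "r_1^1" if pi * x1 >= (1 - pi) * x2 else "r_1^2"

print(pooling_action(0.5, 2.0, 1.0))  # r_1^1: theta_1 is the better bet
print(pooling_action(0.2, 1.0, 3.0))  # r_1^2: theta_2 dominates under the prior
```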

In each of these games and for all values of the payoff parameters (v 1, v 2, w 1, w 2 ), a pooling equilibrium exists. In such an equilibrium, the Sender s talk is uninformative: for instance, whatever the true state, he says no information. Thus in a pooling equilbrium a Receiver rationally ignores what the Sender says: his posterior beliefs about θ are identical to his prior beliefs and he takes his pooling action. But there can be also other equilibria, in which talk does affect actions. Let U S R i (θ j, θ = θ k ) be the Sender s payoff when the true state is θ j and the Receiver R i believes the Sender s claim that the state is θ k (k may be equal or unequal to j) and hence takes the action r k i. So this means U S R i (θ j, θ = θ k ) U S R i (θ j, r k i ) A separating equilibrium is an equilibrium in which S s claim fully reveals the true state: in the most natural separating equilibrium, S says θ = θ 1 when the state is θ 1, and says θ = θ 2 when the state is θ 2. Thus in a separating equilibrium, the Receivers believe the message of S and each Receiver s rational posterior belief is either that θ = θ 1 for sure (so he takes his first action), or that θ = θ 2 for sure (so he takes his second action). Given that S can induce either of these beliefs, the equilibrium condition is that he has no incentive to lie. Thus a separating equilibrium exists in private with each Receiver R i if and only if both the following inequalities hold: U S R i (θ 1, θ = θ 1 ) U S R i (θ 1, θ = θ 2 ) U S R i (θ 2, θ = θ 2 ) U S R i (θ 2, θ = θ 1 ) (2.3) and a separating equilibrium exists in public i.e. in presence of both the Receivers if and only if both the following inequalities hold: U S (θ 1, θ = θ 1 ) U S (θ 1, θ = θ 2 ) U S (θ 2, θ = θ 2 ) U S (θ 2, θ = θ 1 ) (2.4) (Note: U S (θ 1, θ = θ 1 ) = U S R 1 (θ 1, θ = θ 1 ) + U S R 2 (θ 1, θ = θ 1 ) ) From (2.3), there is a separating equilibrium when S speaks in private with R 1 if and only if v 1, v 2 0. Likewise, there is a separating equilibrium wen he speaks in private with R 2 if and only if w 1, w 2 0. 12

From conditions (2.4), there is a separating equilibrium in public if and only if $v_1 + w_1 \geq 0$ and $v_2 + w_2 \geq 0$, because in a separating equilibrium S tells the true state and the Receivers believe the Sender and take the actions consistent with their beliefs. We can summarize the above discussion in the following proposition.

Proposition 1.1 (Farrell and Gibbons): If there exists a separating equilibrium between the Sender and each Receiver in private, then there exists a separating equilibrium between the Sender and both Receivers in public, but the reverse is not true.

Proof: A separating equilibrium in private between the Sender and each Receiver implies $v_1, v_2 \geq 0$ and $w_1, w_2 \geq 0$. Combining these inequalities, we have $v_1 + w_1 \geq 0$ and $v_2 + w_2 \geq 0$, which means there is a separating equilibrium in public. But if we have $v_1 + w_1 \geq 0$ and $v_2 + w_2 \geq 0$, this does not necessarily imply that $v_1, v_2 \geq 0$ and $w_1, w_2 \geq 0$.

More systematically, we can distinguish six cases as follows (a computational classification is sketched after this list):

1. No Communication: There is no separating equilibrium, in public or in private.
2. Full Communication: There is a separating equilibrium with each Receiver in private, and hence also (by Proposition 1.1) with both in public. There are no credibility problems.
3. One-Sided Discipline: There is a separating equilibrium in private with one Receiver but not with the other, and there is a separating equilibrium in public.
4. Mutual Discipline: There is no separating equilibrium in private, but there is one in public.
5. Subversion: There is a separating equilibrium with one Receiver in private but not with the other, and there is none in public.
6. Mutual Subversion: There is a separating equilibrium with each Receiver in private and none in public. But by Proposition 1.1, there cannot be mutual subversion.
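Since the cases are determined entirely by the signs of $v_1, v_2, w_1, w_2$ and of the sums $v_i + w_i$, they can be enumerated mechanically. The following classifier is our own illustrative sketch, not part of Farrell and Gibbons.

```python
def regime(v1, v2, w1, w2):
    """Classify the communication regime from the Sender's payoff parameters:
    separation in private with R_1 iff v1, v2 >= 0, with R_2 iff w1, w2 >= 0,
    and in public iff v1 + w1 >= 0 and v2 + w2 >= 0."""
    priv1 = v1 >= 0 and v2 >= 0
    priv2 = w1 >= 0 and w2 >= 0
    public = v1 + w1 >= 0 and v2 + w2 >= 0
    if priv1 and priv2:
        return "full communication"     # public follows by Proposition 1.1
    if not priv1 and not priv2:
        return "mutual discipline" if public else "no communication"
    # exactly one private separating equilibrium exists
    return "one-sided discipline" if public else "subversion"

print(regime(1, -2, 3, 4))   # one-sided discipline
print(regime(-1, 2, 2, -1))  # mutual discipline
```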

2 Extension to Continuous Case

Here we extend the model of Farrell and Gibbons from the discrete setting to a continuous setting. In their model, there are only two actions for each Receiver, two possible signals for the Sender and only two possible states of Nature. But in reality we have neither a limited number of strategies for each player nor a finite number of states of Nature. So in our extension, we allow the strategies of the players, as well as the states of Nature, to be drawn from a continuum.

As usual, the Sender S has private information about the true state of nature $\theta$, whose differentiable probability distribution function $F(\theta)$, with density $f(\theta)$, is supported on $[0,1]$ ($f(\theta) > 0$ for all $\theta \in [0,1]$). Following Crawford and Sobel, we have a lower bound of zero and an upper bound of one for the state of Nature. S sends a signal $n \in [0,1]$ to the Receivers $R_1$ and $R_2$ about the state of nature, and upon receiving the signal the Receivers update their beliefs using Bayes' rule and take actions. S has a twice continuously differentiable von Neumann-Morgenstern utility function $U^S(y_{R_1}, y_{R_2}, \theta, b_S)$, where $y_{R_1}, y_{R_2}$, real numbers, are the actions taken by $R_1$ and $R_2$ respectively upon receiving S's signal. The twice continuously differentiable von Neumann-Morgenstern utility functions of $R_1$ and $R_2$ are denoted by $U^{R_1}(y_{R_1}, \theta, b_{R_1})$ and $U^{R_2}(y_{R_2}, \theta, b_{R_2})$. Denoting partial derivatives by subscripts in the usual way, $U^S_1(y_{R_1}, y_{R_2}, \theta, b_S) = 0$ for some $y_{R_1}$ given $y_{R_2}, \theta, b_S$; $U^S_2(y_{R_1}, y_{R_2}, \theta, b_S) = 0$ for some $y_{R_2}$ given $y_{R_1}, \theta, b_S$; and $U^S_{11}(\cdot) < 0$, $U^S_{22}(\cdot) < 0$, $U^S_{13}(\cdot) > 0$, $U^S_{23}(\cdot) > 0$, $U^S_{14}(\cdot) > 0$, $U^S_{24}(\cdot) > 0$. Similarly, for $i = 1, 2$ we have $U^{R_i}_1(y_{R_i}, \theta, b_{R_i}) = 0$ for some $y_{R_i}$ given $\theta, b_{R_i}$, and $U^{R_i}_{12}(\cdot) > 0$, $U^{R_i}_{13}(\cdot) > 0$. All aspects of the game except the state $\theta$ are common knowledge. The following Quadratic Loss utility functions for the agents satisfy the above properties. The utility of $R_i$ with S and $R_j$ in public ($i \neq j$), or with S in private, is given by

$$U^{R_i}(y_{R_i}, \theta, b_{R_i}) = -(y_{R_i} - (\theta + b_{R_i}))^2 \tag{2.5}$$

The utility of S with both $R_1$ and $R_2$ in public is

$$U^S(y_{R_1}, y_{R_2}, \theta, b_S) = -\frac{1}{4}\left((y_{R_1} + y_{R_2}) - 2(\theta + b_S)\right)^2 \tag{2.6}$$

Clearly, (2.5) and (2.6) satisfy the properties of the utility functions mentioned above. These utility functions resemble the payoffs considered in the discrete case. Here S's utility depends on the actions of both Receivers, and each Receiver's action does not affect the other Receiver's utility. The factor $1/4$ in (2.6) is a normalizing factor to level the utilities of the players. For welfare considerations, we also mention the utility of S with $R_i$ in private ($i = 1, 2$), which is given by

$$U^S_{R_i}(y_{R_i}, \theta, b_S) = -(y_{R_i} - (\theta + b_S))^2 \tag{2.7}$$

2.1 Equilibrium

If the Sender communicates privately with a single Receiver, then the equilibrium is the same as that given for the One Sender and One Receiver case in the first chapter, where the Sender has a partition equilibrium with the Receiver. Now we discuss what the equilibrium looks like when the Sender speaks to both Receivers at once, in public.

Definition 2.1 A family of signaling rules for S, denoted by $q(n|\theta)$ (the probability of sending signal $n$ given the state $\theta$), and action rules for $R_1$ and $R_2$, denoted $y_{R_1}(n)$ and $y_{R_2}(n)$ respectively (the actions taken upon receiving the signal $n$), constitute a Bayesian Nash equilibrium of the above game if

(I) for each $\theta \in [0,1]$, $\int_N q(n|\theta)\, dn = 1$, where $N$ is the set of feasible signals given $\theta$, and if $n$ is in the support of $q(\cdot|\theta)$, then $n$ solves $\max_{n \in N} U^S(y_{R_1}(n), y_{R_2}(n), \theta, b_S)$;

(II) for each $n$, $y_{R_1}(n)$ solves $\max_y \int_0^1 U^{R_1}(y, \theta, b_{R_1})\, p(\theta|n)\, d\theta$ and $y_{R_2}(n)$ solves $\max_y \int_0^1 U^{R_2}(y, \theta, b_{R_2})\, p(\theta|n)\, d\theta$, where $p(\theta|n) \equiv q(n|\theta) f(\theta) \big/ \int_0^1 q(n|t) f(t)\, dt$.

Condition (I) says that S's signaling rule yields an expected-utility maximizing action for the true state $\theta$, taking the action rules of $R_1$ and $R_2$ as given.

Condition (II) says that $R_1$ and $R_2$ respond optimally to each possible signal, using Bayes' Rule to update their prior beliefs, taking into account S's signaling strategy and the signal they receive.

An important observation is that the posterior beliefs of both Receivers are the same (as can be seen from the above definition), because both Receivers are rational and they are in the same strategic position: they take actions after hearing S's signal, and each Receiver's action does not affect the other Receiver's action. So both Receivers maximize their utility with respect to the same posterior belief. It can be shown that the equilibrium actions of $R_1$ and $R_2$ are related. From (II) of Definition 2.1,

$$y_{R_1}(n) = \arg\max_y \int_0^1 U^{R_1}(y, \theta, b_{R_1})\, p(\theta|n)\, d\theta = \arg\max_y \int_0^1 -(y - (\theta + b_{R_1}))^2\, p(\theta|n)\, d\theta \tag{2.8}$$

and let the solution to equation (2.8) be $y = y^*$. Similarly,

$$y_{R_2}(n) = \arg\max_y \int_0^1 -(y - (\theta + b_{R_2}))^2\, p(\theta|n)\, d\theta = \arg\max_y \int_0^1 -(y + b_{R_1} - b_{R_2} - (\theta + b_{R_1}))^2\, p(\theta|n)\, d\theta = \arg\max_z \int_0^1 -(z - (\theta + b_{R_1}))^2\, p(\theta|n)\, d\theta \tag{2.9}$$

(where $z = y + b_{R_1} - b_{R_2}$)

Let the solution to equation (2.9) be $z^*$. Clearly $y^* = z^*$, so

$$y_{R_1}(n) = y_{R_2}(n) + b_{R_1} - b_{R_2} \tag{2.10}$$

Now we check whether there is a finite number of equilibrium actions and, if so, what the equilibrium partition looks like. Let $u = (u_{R_1}, u_{R_2})$ be an action pair of $R_1$ and $R_2$ induced by S with signal $n_u$, and let $v = (v_{R_1}, v_{R_2})$ be another action pair induced by S with signal $n_v$ in equilibrium. We know from (2.10) that

$$u_{R_1} = u_{R_2} + b_{R_1} - b_{R_2}, \qquad v_{R_1} = v_{R_2} + b_{R_1} - b_{R_2} \tag{2.11}$$

We assume that $u_{R_1} < v_{R_1}$, which implies $u_{R_2} < v_{R_2}$. The utility of S from the action pair $(u_{R_1}, u_{R_2})$ in state $\theta$ is

$$U^S(u_{R_1}, u_{R_2}, \theta, b_S) = -\frac{1}{4}(u_{R_1} + u_{R_2} - 2(\theta + b_S))^2 = -\frac{1}{4}(2u_{R_1} - b_{R_1} + b_{R_2} - 2(\theta + b_S))^2 \tag{2.12}$$

and from the action pair $(v_{R_1}, v_{R_2})$ in state $\theta$ is

$$U^S(v_{R_1}, v_{R_2}, \theta, b_S) = -\frac{1}{4}(2v_{R_1} - b_{R_1} + b_{R_2} - 2(\theta + b_S))^2 \tag{2.13}$$

We can argue that S prefers $u$ over $v$ in some states and $v$ over $u$ in other states. This is because if he had preferred one action pair over the other in all states, then the less preferred action pair could not have

existed. So by continuity, there is a state $\bar\theta \in [0,1]$ such that

$$U^S(u_{R_1}, u_{R_2}, \bar\theta, b_S) = U^S(v_{R_1}, v_{R_2}, \bar\theta, b_S)$$
$$-\frac{1}{4}(2u_{R_1} - b_{R_1} + b_{R_2} - 2(\bar\theta + b_S))^2 = -\frac{1}{4}(2v_{R_1} - b_{R_1} + b_{R_2} - 2(\bar\theta + b_S))^2 \tag{2.14}$$

Since $u_{R_1} < v_{R_1}$, it follows from the above that $2u_{R_1} - b_{R_1} + b_{R_2}$ and $2v_{R_1} - b_{R_1} + b_{R_2}$ are at the same distance from $2(\bar\theta + b_S)$, but on opposite sides. So we have

$$2u_{R_1} - b_{R_1} + b_{R_2} < 2(\bar\theta + b_S) < 2v_{R_1} - b_{R_1} + b_{R_2} \;\Rightarrow\; u_{R_1} < (\bar\theta + b_S) + \frac{b_{R_1} - b_{R_2}}{2} < v_{R_1} \tag{2.15}$$

The continuity of $U^S(\cdot)$ in its arguments, $u_{R_1} < v_{R_1}$ and (2.14) imply that

(a1) $u$ is not induced by S in states greater than $\bar\theta$;
(a2) $v$ is not induced by S in states less than $\bar\theta$.

Define for all $\theta \in [0,1]$,

$$y^{R_1}(\theta, b_{R_1}) \equiv \arg\max_y U^{R_1}(y, \theta, b_{R_1}) = \theta + b_{R_1}$$
$$y^{R_2}(\theta, b_{R_2}) \equiv \arg\max_y U^{R_2}(y, \theta, b_{R_2}) = \theta + b_{R_2}$$

Claim 2.2 $u_{R_1} \leq y^{R_1}(\bar\theta, b_{R_1}) \leq v_{R_1}$ and $u_{R_2} \leq y^{R_2}(\bar\theta, b_{R_2}) \leq v_{R_2}$.

Proof: Suppose to the contrary that $y^{R_1}(\bar\theta, b_{R_1}) < u_{R_1} < v_{R_1}$. Since $y^{R_1}(\bar\theta, b_{R_1})$ is the best action for $R_1$ in state $\bar\theta$ and $U^{R_1}_{12}(\cdot) > 0$, $y^{R_1}(\bar\theta, b_{R_1})$ still gives more utility to $R_1$ than $u_{R_1}$ in states less than $\bar\theta$. But $u_{R_1}$ is an action induced in states less than or equal to $\bar\theta$ (from (a1)). Then $u_{R_1}$ cannot be an equilibrium action, because $u_{R_1}$ does not maximize the expected utility of the Receiver over states less than or equal to $\bar\theta$. Similarly we can rule out the case $u_{R_1} < v_{R_1} < y^{R_1}(\bar\theta, b_{R_1})$. For $u_{R_2} \leq y^{R_2}(\bar\theta, b_{R_2}) \leq v_{R_2}$, similar arguments hold for $R_2$.

From the claim we have

$$u_{R_1} \leq y^{R_1}(\bar\theta, b_{R_1}) \leq v_{R_1} \;\Rightarrow\; u_{R_1} \leq \bar\theta + b_{R_1} \leq v_{R_1} \tag{2.16}$$
$$u_{R_2} \leq y^{R_2}(\bar\theta, b_{R_2}) \leq v_{R_2} \;\Rightarrow\; u_{R_2} \leq \bar\theta + b_{R_2} \leq v_{R_2} \tag{2.17}$$

Adding (2.16) and (2.17), we have

$$u_{R_1} + u_{R_2} \leq 2\bar\theta + b_{R_1} + b_{R_2} \leq v_{R_1} + v_{R_2} \;\Rightarrow\; 2u_{R_1} - b_{R_1} + b_{R_2} \leq 2\bar\theta + b_{R_1} + b_{R_2} \leq 2v_{R_1} - b_{R_1} + b_{R_2} \;\Rightarrow\; u_{R_1} \leq \bar\theta + b_{R_1} \leq v_{R_1} \tag{2.18}$$

From (2.15) and (2.18), there will be a finite number of actions if

$$(\bar\theta + b_{R_1}) - \left((\bar\theta + b_S) + \frac{b_{R_1} - b_{R_2}}{2}\right) \neq 0 \;\Leftrightarrow\; \frac{b_{R_1} + b_{R_2}}{2} - b_S \neq 0 \tag{2.19}$$

because then $u_{R_1}$ and $v_{R_1}$ are separated by some distance, say $d > 0$, and the actions that $R_1$ can take are bounded by $y^{R_1}(0, b_{R_1})$ and $y^{R_1}(1, b_{R_1})$. From (2.11), this implies that $u_{R_2}$ and $v_{R_2}$ are also separated by distance $d$. We can summarize the above result in the following lemma.

Lemma 2.3 If $\frac{b_{R_1} + b_{R_2}}{2} - b_S \neq 0$, then there is a finite number of actions induced in the equilibrium.

Also, it is reasonable to expect that when $\frac{b_{R_1} + b_{R_2}}{2} - b_S = 0$, there will be an equilibrium with an infinite number of actions, which means S will send a truthful signal in each state and $R_1$, $R_2$ will believe him, i.e. there will be truth revelation or full communication. This is because if $\frac{b_{R_1} + b_{R_2}}{2} - b_S = 0$, then either the preference of S lies exactly in the middle of the preferences of $R_1$ and $R_2$, or the preferences of all players are the same, so it is better for the Sender to tell the truth. This result is equivalent to the result in the one Sender and one Receiver case, where when $b_S$ is equal to $b_R$ there is truth revelation.

Define for all $a, a' \in [0,1]$ with $a \leq a'$,

$$\bar y_{R_1}(a, a') \equiv \begin{cases} \arg\max_y \int_a^{a'} U^{R_1}(y, \theta, b_{R_1}) f(\theta)\, d\theta & \text{if } a < a' \\ y^{R_1}(a, b_{R_1}) & \text{if } a = a' \end{cases} \tag{2.20}$$

$$\bar y_{R_2}(a, a') = \bar y_{R_1}(a, a') - b_{R_1} + b_{R_2} \tag{2.21}$$

Now we describe the structure of the equilibrium in the following theorem.

Theorem 2.4 (following Crawford and Sobel) Suppose $\frac{b_{R_1} + b_{R_2}}{2} - b_S \neq 0$. Then there exists a positive integer $N(b_S, b_{R_1}, b_{R_2})$ such that for every $N$ with $1 \leq N \leq N(b_S, b_{R_1}, b_{R_2})$, there exists at least one equilibrium $(y_{R_1}(n), y_{R_2}(n), q(n|\theta))$, where $q(n|\theta)$ is uniform and supported on $(a_i, a_{i+1})$ if $\theta \in (a_i, a_{i+1})$, satisfying:

(A) $U^S(\bar y_{R_1}(a_i, a_{i+1}), \bar y_{R_2}(a_i, a_{i+1}), a_i, b_S) - U^S(\bar y_{R_1}(a_{i-1}, a_i), \bar y_{R_2}(a_{i-1}, a_i), a_i, b_S) = 0$ for $i = 1, \ldots, N-1$;
(A1) $y_{R_1}(n) = \bar y_{R_1}(a_i, a_{i+1})$ for all $n \in (a_i, a_{i+1})$, and $y_{R_2}(n) = \bar y_{R_1}(a_i, a_{i+1}) - b_{R_1} + b_{R_2}$;
(A2) $a_0 = 0$;
(A3) $a_N = 1$.

Further, any equilibrium is essentially equivalent to one in this class, for some value of $N$ with $1 \leq N \leq N(b_S, b_{R_1}, b_{R_2})$.

Proof: The proof is similar to the proof for the one Sender and one Receiver case, which is presented in detail in Crawford and Sobel[1].

3 Example

Assume $F(\theta)$ is uniform on $[0,1]$, which means $f(\theta) = 1$. Let us take $b_S = 1/40$, $b_{R_1} = 0$ and $b_{R_2} = 1/60$. Consider the conditions that characterize a

partition equilibrium of size $N$. Let $a(N) = (a_0(N), \ldots, a_N(N))$ denote the equilibrium partition points. Whenever it can be done without loss of clarity, we shall write $a$ or $a_i$ instead of $a(N)$ or $a_i(N)$ respectively.

$$\bar y_{R_1}(a_i, a_{i+1}) = \arg\max_y \int_{a_i}^{a_{i+1}} U^{R_1}(y, \theta, b_{R_1}) f(\theta)\, d\theta = \frac{a_i + a_{i+1}}{2} + b_{R_1} \tag{2.22}$$

(the same calculation as in the one Sender and one Receiver case)

$$\bar y_{R_2}(a_i, a_{i+1}) = \bar y_{R_1}(a_i, a_{i+1}) - b_{R_1} + b_{R_2} = \frac{a_i + a_{i+1}}{2} + b_{R_2} \tag{2.23}$$

From (A) of Theorem 2.4, for $i = 1, \ldots, N-1$ we have

$$-\left(\bar y_{R_1}(a_{i-1}, a_i) + \bar y_{R_2}(a_{i-1}, a_i) - 2(a_i + b_S)\right)^2 = -\left(\bar y_{R_1}(a_i, a_{i+1}) + \bar y_{R_2}(a_i, a_{i+1}) - 2(a_i + b_S)\right)^2 \tag{2.24}$$

which, given the monotonicity of $a$, can only hold if

$$a_{i+1} = 2a_i - a_{i-1} - 2(b_{R_1} + b_{R_2} - 2b_S) \tag{2.25}$$

The solution of this equation is parametrized by $a_1$ (given that $a_0 = 0$), which is

$$a_i = a_1 i + i(i-1)(2b_S - b_{R_1} - b_{R_2}) \qquad (i = 0, \ldots, N) \tag{2.26}$$

Since $a_i \leq 1$,

$$i(i-1)(2b_S - b_{R_1} - b_{R_2}) < 1 \tag{2.27}$$

(because $a_1 > 0$ and we can choose $a_1$ as small as we wish to keep $a_i \leq 1$). The maximum value of $i$ satisfying the above inequality constraint gives $N(b_S, b_{R_1}, b_{R_2})$, assuming $2b_S > b_{R_1} + b_{R_2}$.

We can calculate $N(b_S, b_{R_1}, b_{R_2})$ to be

$$N(b_S, b_{R_1}, b_{R_2}) = \left\lceil -\frac{1}{2} + \frac{1}{2}\left(1 + \frac{4}{2b_S - b_{R_1} - b_{R_2}}\right)^{1/2} \right\rceil \tag{2.28}$$

where $\lceil z \rceil$ denotes the smallest integer greater than or equal to $z$. But if $2b_S < b_{R_1} + b_{R_2}$, then (2.27) does not help us to calculate the maximum value of $N(b_S, b_{R_1}, b_{R_2})$. From (2.26), we know $a_N = a_1 N + N(N-1)(2b_S - b_{R_1} - b_{R_2})$, but since $a_N = 1$, we have

$$a_1 = \frac{1 - N(N-1)(2b_S - b_{R_1} - b_{R_2})}{N} \tag{2.29}$$

As $a_1 \leq 1$, we should have

$$\frac{1 - N(N-1)(2b_S - b_{R_1} - b_{R_2})}{N} \leq 1 \tag{2.30}$$

and (2.30) helps us to calculate $N(b_S, b_{R_1}, b_{R_2})$ in the case $2b_S < b_{R_1} + b_{R_2}$. From (2.27) and (2.30), it is clear that the closer $2b_S$ approaches $b_{R_1} + b_{R_2}$ (i.e. $b_S \to \frac{b_{R_1} + b_{R_2}}{2}$), the larger the value of $N(b_S, b_{R_1}, b_{R_2})$. As $|2b_S - b_{R_1} - b_{R_2}|$ grows, $N(b_S, b_{R_1}, b_{R_2})$ eventually falls to unity, which means there is no partition of the state space. This is because the preferences of the agents diverge so much that they cannot agree upon an informative equilibrium. When there is no partition of the state space, for all $\theta \in [0,1]$ the Sender babbles and the Receivers take actions according to their own prior belief, which is given by the prior probability distribution $F(\theta)$ over the state space. This situation is called a babbling equilibrium.

Substituting for $a_1$ from (2.29) in equation (2.26), for an equilibrium with $N$ partition elements we have, for $i = 0, \ldots, N$,

$$a_i = \frac{1 - N(N-1)(2b_S - b_{R_1} - b_{R_2})}{N}\, i + i(i-1)(2b_S - b_{R_1} - b_{R_2}) \tag{2.31}$$

We know from (2.22) and (2.23) that

$$\bar y_{R_1}(a_i, a_{i+1}) = \frac{a_i + a_{i+1}}{2} + b_{R_1}, \qquad \bar y_{R_2}(a_i, a_{i+1}) = \bar y_{R_1}(a_i, a_{i+1}) - b_{R_1} + b_{R_2}$$

The ex-ante expected utility of $R_j$, $j = 1, 2$, is

$$EU^{R_j} = \sum_{i=0}^{N-1} \int_{a_i}^{a_{i+1}} -\left(\bar y_{R_j}(a_i, a_{i+1}) - (\theta + b_{R_j})\right)^2 d\theta = \sum_{i=0}^{N-1} \int_{a_i}^{a_{i+1}} -\left(\frac{a_i + a_{i+1}}{2} - \theta\right)^2 d\theta = -\frac{1}{12} \sum_{i=0}^{N-1} (a_{i+1} - a_i)^3 \tag{2.32}$$

So each Receiver has the same expected utility irrespective of his bias. The ex-ante expected utility of S is

$$EU^S = \sum_{i=0}^{N-1} \int_{a_i}^{a_{i+1}} -\frac{1}{4}\left(\bar y_{R_1}(a_i, a_{i+1}) + \bar y_{R_2}(a_i, a_{i+1}) - 2(\theta + b_S)\right)^2 d\theta$$

$$= -\frac{1}{12} \sum_{i=0}^{N-1} (a_{i+1} - a_i)^3 - \frac{1}{4}(b_{R_1} + b_{R_2} - 2b_S)^2 = EU^{R_j} - \frac{1}{4}(b_{R_1} + b_{R_2} - 2b_S)^2 \tag{2.33}$$

So for $b_S = 1/40$, $b_{R_1} = 0$ and $b_{R_2} = 1/60$, we can calculate the values of $a_i$ for all $i = 0, \ldots, N$, given an equilibrium with $N$ partition elements. We see $2b_S - b_{R_1} - b_{R_2} = 2 \cdot 1/40 - 0 - 1/60 = 2/60 > 0$. From (2.27), we have $N(b_S, b_{R_1}, b_{R_2}) = 5$. We call the equilibrium with maximum partition size $N(b_S, b_{R_1}, b_{R_2})$ the most informative equilibrium, because the Receivers can then distinguish between the states most effectively. So for $N = 5$ we have $a_0 = 0$, $a_1 = 4/60$, $a_2 = 12/60$, $a_3 = 24/60$, $a_4 = 40/60$ and $a_5 = 1$. Here we have $EU^R = -0.0056$ and $EU^S = EU^R - \frac{1}{4}(b_{R_1} + b_{R_2} - 2b_S)^2 = -0.0059$.

If S were to consult $R_1$ alone, $N(b_S, b_{R_1}) = 4$, $EU^R = -0.0083$ and $EU^S = -0.0089$ (from the previous chapter). If S were to consult $R_2$ alone, then the calculations of the previous chapter give $N(b_S, b_{R_2}) = 8$, and so for $N = 8$: $a_0(8) = 0$, $a_1(8) = 1/120$, $a_2(8) = 6/120$, $a_3(8) = 15/120$, $a_4(8) = 28/120$, $a_5(8) = 45/120$, $a_6(8) = 66/120$, $a_7(8) = 91/120$, $a_8(8) = 1$; $EU^R = -0.0028$ and $EU^S = -0.0028$ (more precisely $-0.00283$, from (1.14)).

So talking to both Receivers in public is not profitable for the Sender in the present case. But talking in public is still better for him than talking to $R_1$ alone. We have the freedom to adjust the biases so that, for a Sender, talking to both Receivers in public is more profitable than talking to each Receiver in private. So a Receiver can find another Receiver with a suitable bias to sit in on the conversation with the Sender, so that the Sender will reveal more information. A numerical sketch of this example follows.
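The sketch below is ours; it assumes uniform $F$ and uses the closed forms (1.11)-(1.15) and (2.28)-(2.33). It reproduces the public partition and the welfare comparison with each private conversation.

```python
import math

def cs_partition(b):
    """One Sender, one Receiver (Chapter 1): N, EU_R, EU_S for b = b_S - b_R > 0."""
    N = math.ceil(-0.5 + 0.5 * math.sqrt(1 + 2.0 / b))
    a = [i / N + 2 * b * i * (i - N) for i in range(N + 1)]
    eu_r = -sum((a[j + 1] - a[j]) ** 3 for j in range(N)) / 12
    return N, eu_r, eu_r - b ** 2

def public_partition(b_S, b_R1, b_R2):
    """Public case: N from (2.27)/(2.28), a_i from (2.31), EU from (2.32)/(2.33)."""
    d = 2 * b_S - b_R1 - b_R2                # assumed positive in this example
    N = math.ceil(-0.5 + 0.5 * math.sqrt(1 + 4.0 / d))
    a = [i / N + d * i * (i - N) for i in range(N + 1)]  # algebraically (2.31)
    eu_r = -sum((a[j + 1] - a[j]) ** 3 for j in range(N)) / 12
    return N, a, eu_r, eu_r - d ** 2 / 4

b_S, b_R1, b_R2 = 1 / 40, 0.0, 1 / 60
N, a, eu_r, eu_s = public_partition(b_S, b_R1, b_R2)
print(N, [round(60 * x) for x in a])   # 5 [0, 4, 12, 24, 40, 60]
print(round(eu_r, 4), round(eu_s, 4))  # -0.0056 -0.0059
print(cs_partition(b_S - b_R1))        # R_1 alone: N = 4, EU_S ~ -0.0089
print(cs_partition(b_S - b_R2))        # R_2 alone: N = 8, EU_S ~ -0.0028
```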

Chapter 3

Two Senders and One Receiver

1 The Model

Here we present the model in Krishna and Morgan[3] and discuss some of their results, aiming to use them for the extension to multiple Senders and multiple Receivers. There are two Senders and a Receiver. The Receiver takes an action $y_R$, the payoff from which depends on some underlying state of nature $\theta \in [0,1]$. The state of nature $\theta$ has a probability distribution function $F(\theta)$ with density function $f(\theta)$. The Receiver has no information about $\theta$, but there are two Senders who know the true state $\theta$. The two Senders communicate with the Receiver by sending signals $n_1 \in [0,1]$ and $n_2 \in [0,1]$ respectively. After the state is observed, signals are sent sequentially and publicly according to a protocol. The protocol specifies the sequence in which the Senders send their signals. Let us denote the Sender who sends the signal first as $S_1$, the other as $S_2$, and the Receiver as R. The utility functions of the agents are given by Quadratic Loss utility functions. The utility function of $S_i$ ($i = 1, 2$), whether with R alone (i.e. in private) or with the other Sender and R together in public, is

$$U^{S_i}(y, \theta, b_{S_i}) = -(y - (\theta + b_{S_i}))^2 \tag{3.1}$$

The utility function of R, whether with $S_1$ or $S_2$ separately in private or with both $S_1$ and $S_2$ in public, is

$$U^R(y, \theta, b_R) = -(y - (\theta + b_R))^2 \tag{3.2}$$

Here $b_{S_1}, b_{S_2}, b_R$ measure the biases of the players, i.e. how close their preferences lie to each other. If $b_{S_1} = b_{S_2}$, then effectively we have only one Sender, and this case has been studied in the first chapter. If $b_{S_i} = b_R$ for some $i \in \{1, 2\}$, then $S_i$ will reveal the truth to R and R will trust him, so the other Sender is irrelevant. For these reasons, in studying the multiple Senders problem, the analysis is divided into three cases.

Positive Like Biases: If both Senders are biased in the positive direction, that is, both $b_{S_1}, b_{S_2} > b_R$, then the Senders are said to have positive like biases.

Negative Like Biases: If both Senders are biased in the negative direction, that is, both $b_{S_1}, b_{S_2} < b_R$, then the Senders are said to have negative like biases. Since the discussion of negative like biases is symmetric to that of positive like biases, we do not consider it in our study, and whenever we refer to like biases, we mean positive like biases.

Opposing Biases: If the Senders are biased in opposite directions, that is, $b_{S_1} > b_R > b_{S_2}$ or $b_{S_2} > b_R > b_{S_1}$, then the Senders are said to have opposing biases.

Define

$$y^R(\theta, b_R) = \arg\max_y U^R(y, \theta, b_R) = \theta + b_R$$
$$y^{S_1}(\theta, b_{S_1}) = \arg\max_y U^{S_1}(y, \theta, b_{S_1}) = \theta + b_{S_1} \tag{3.3}$$
$$y^{S_2}(\theta, b_{S_2}) = \arg\max_y U^{S_2}(y, \theta, b_{S_2}) = \theta + b_{S_2}$$

If $b_{S_i} > b_R$, $i = 1, 2$, then $y^{S_i}(\theta, b_{S_i}) > y^R(\theta, b_R)$, so the Sender $S_i$ prefers a higher action than the Receiver, and the Sender is referred to as right biased. Similarly, if $b_{S_i} < b_R$, then $y^{S_i}(\theta, b_{S_i}) < y^R(\theta, b_R)$, so the Sender prefers a lower action than the Receiver and is referred to as left biased.
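As a small illustration (ours; the labels follow the definitions just given), the three bias configurations can be distinguished directly from the parameters:

```python
def bias_configuration(b_S1, b_S2, b_R):
    """Classify the two-Sender bias configuration used in the analysis."""
    if b_S1 == b_R or b_S2 == b_R:
        return "a Sender shares R's preference: he reveals the truth"
    if b_S1 > b_R and b_S2 > b_R:
        return "positive like biases"
    if b_S1 < b_R and b_S2 < b_R:
        return "negative like biases"
    return "opposing biases"

print(bias_configuration(0.10, 0.05, 0.0))   # positive like biases
print(bias_configuration(0.10, -0.05, 0.0))  # opposing biases
```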

2 Equilibrium

In the multiple Senders and one Receiver game, the equilibrium concept employed is Perfect Bayesian Equilibrium (PBE), unlike the Bayesian Nash equilibrium (BNE) of the one Sender and one Receiver case. PBE is a generalization of BNE which takes into account the sequentiality of the Senders' signals.

Definition 2.1 A Perfect Bayesian Equilibrium (PBE) consists of a family of signaling rules for $S_1$, denoted by $q_{S_1}(n_1|\theta)$ (the probability of sending signal $n_1$ in the true state $\theta$), and for $S_2$, denoted by $q_{S_2}(n_2|\theta, n_1)$ (the probability of sending signal $n_2$ given the true state $\theta$ and $S_1$'s message $n_1$), and an action rule for R, $y(n_1, n_2)$ (the action after receiving $n_1$ and $n_2$), such that:

(I) For each $\theta \in [0,1]$, $\int_{N_{S_1}} q_{S_1}(n_1|\theta)\, dn_1 = 1$ and $\int_{N_{S_2}} q_{S_2}(n_2|\theta, n_1)\, dn_2 = 1$, where $N_{S_1}$ is the set of feasible signals for $S_1$ given $\theta$ and $N_{S_2}$ is the set of feasible signals for $S_2$ given $\theta$ and $n_1$; and if $n_1$ is in the support of $q_{S_1}(\cdot|\theta)$ and $n_2$ is in the support of $q_{S_2}(\cdot|\theta, n_1)$, then $n_1$ solves $\max_{n_1 \in N_{S_1}} U^{S_1}(y(n_1, n_2), \theta, b_{S_1})$ for a given $n_2$, and $n_2$ solves $\max_{n_2 \in N_{S_2}} U^{S_2}(y(n_1, n_2), \theta, b_{S_2})$ for a given $n_1$.

(II) For a signal pair $(n_1, n_2)$, $y(n_1, n_2)$ solves $\max_y \int_0^1 U^R(y, \theta, b_R)\, p(\theta|n_1, n_2)\, d\theta$, where $p(\theta|n_1, n_2) \equiv q_{S_1}(n_1|\theta)\, q_{S_2}(n_2|\theta, n_1)\, f(\theta) \big/ \int_0^1 q_{S_1}(n_1|t)\, q_{S_2}(n_2|t, n_1)\, f(t)\, dt$.

Given a PBE, $Y$ denotes the outcome function that associates with every state the resulting equilibrium action of the Receiver. Formally, for each $\theta$, $Y(\theta) = y(n_1(\theta), n_2(\theta))$. Denote $Y^{-1}(y) = \{\theta : Y(\theta) = y\}$. Given an outcome function $Y$, the resulting equilibrium partition

$P = \{Y^{-1}(y) : y \text{ is an equilibrium action}\}$ of the state space can be determined. The partition $P$ is a measure of the informational content of the equilibrium: if an equilibrium partition $P'$ is finer than $P$, then the informational content of $P'$ is greater than that of $P$, since the finer partition allows the decision maker to discern among the states more effectively.

A PBE always exists. In particular, there are always equilibria in which all messages from both Senders are completely ignored by the Receiver; in other words, both Senders babble. To see that this is a PBE, notice that since the Senders' messages contain no information, the Receiver correctly disregards them in making his decision. Likewise, from the perspective of each Sender, since messages will always be ignored by the Receiver, there is no advice-giving strategy that improves payoffs relative to babbling. The information loss is most severe in a babbling equilibrium. But there are also other, more informative equilibria.

The equilibrium considered for one Sender and one Receiver has the property that the equilibrium action induced in a higher state is at least as large as that induced in a lower state. This property is known as monotonicity. Formally, we say that a PBE with outcome function $Y$ is monotonic if $Y(\cdot)$ is a non-decreasing function. Monotonicity does not always hold with two Senders and one Receiver, which makes the equilibrium analysis complicated, so the analysis here is limited to monotonic equilibria. The following lemma identifies some simple necessary conditions satisfied by monotonic equilibria.

Lemma 2.2 (Krishna and Morgan): Suppose $Y$ is monotonic. If $Y$ has a discontinuity at $\theta$ with

$$\lim_{\varepsilon \to 0^+} Y(\theta - \varepsilon) = y^- < y^+ = \lim_{\varepsilon \to 0^+} Y(\theta + \varepsilon)$$

and $\underline{b} = \min\{b_{S_1}, b_{S_2}\}$, $\bar{b} = \max\{b_{S_1}, b_{S_2}\}$, then

$$U^{S_{\underline{b}}}(y^-, \theta, \underline{b}) \ge U^{S_{\underline{b}}}(y^+, \theta, \underline{b}) \qquad (3.4)$$
$$U^{S_{\bar{b}}}(y^-, \theta, \bar{b}) \le U^{S_{\bar{b}}}(y^+, \theta, \bar{b}) \qquad (3.5)$$

where $U^{S_{\underline{b}}}$ is the utility of the Sender with the lower bias and $U^{S_{\bar{b}}}$ is the utility of the Sender with the higher bias. The inequalities are in the nature of incentive constraints: at any discontinuity, the expert with bias $\underline{b}$ weakly prefers the lower action $y^-$ whereas the expert with bias $\bar{b}$ weakly prefers the higher action $y^+$. This is to be contrasted with the one-Sender, one-Receiver case, where the corresponding constraints hold with equality.

A fully revealing PBE is a PBE in which the truth is revealed: the Senders report the true state, the Receiver believes them and takes his utility-maximizing action. That is, if the true state is $\theta$, the signal of $S_1$ is $n_1 = \theta$, the signal of $S_2$ is $n_2 = \theta$, and the action of $R$ is $y^R(\theta, b_R)$. From the beginning we have restricted attention to experts with like or opposing biases. The following proposition of Krishna and Morgan gives an important result.

Proposition 2.3 (Krishna and Morgan): When no two players' preferences are the same, there does not exist a fully revealing PBE.

Senders with Like Biases

Here we discuss the case of the Receiver consulting two Senders with like biases. The following lemma of Krishna and Morgan states that when the experts have like biases, at most a finite number of equilibrium actions can be played in any monotonic PBE. In particular, it rules out full revelation in the case of like biases, since a fully revealing equilibrium must be monotonic and involves an infinite number of equilibrium actions.

Lemma 2.4 (Krishna and Morgan): Let the Senders $S_1$ and $S_2$ have like biases, i.e. $b_{S_1}, b_{S_2} > b_R$, and let the outcome function $Y$ be monotonic on the state space. Then only a finite number of actions is induced in equilibrium by the Senders.

The intuition for Lemma 2.4 is that if two equilibrium actions are sufficiently close to one another, then there will be some state where the lower

action is called for, but both Senders prefer the higher action. As a consequence, the first Sender can deviate and send a message inducing the higher action, confident that Sender 2 will follow his lead. Put differently, it is impossible to satisfy the incentive constraints of Lemma 2.2 if equilibrium actions are too close together. Define

$$y(a, a') \equiv \begin{cases} \arg\max_y \int_a^{a'} U^R(y, \theta, b_R)\, f(\theta)\, d\theta & \text{if } a < a' \\ y^R(a, b_R) & \text{if } a = a' \end{cases} \qquad (3.6)$$

Considering the quadratic utility functions given above and using the above lemmas, we can describe the structure of the equilibrium. For like biases, an equilibrium partition $a = (a_0, \ldots, a_N)$ with $N$ intervals obeys the following properties:

(A1) The signal $n_1$ of $S_1$ is uniformly distributed on $[a_i, a_{i+1}]$ if $\theta \in (a_i, a_{i+1})$, and likewise the signal $n_2$ of $S_2$ is uniformly distributed on $[a_i, a_{i+1}]$ if $\theta \in (a_i, a_{i+1})$.

(A2) $y(n_1, n_2) = y(a_i, a_{i+1})$ for all $n_1, n_2 \in (a_i, a_{i+1})$.

(A3) $a_0 = 0$.

(A4) $a_N = 1$.

(A) $U^{S_{\underline{b}}}(y(a_{i-1}, a_i), a_i, \underline{b}) \ge U^{S_{\underline{b}}}(y(a_i, a_{i+1}), a_i, \underline{b})$

(A$'$) $U^{S_{\bar{b}}}(y(a_{i-1}, a_i), a_i, \bar{b}) \le U^{S_{\bar{b}}}(y(a_i, a_{i+1}), a_i, \bar{b})$

for $i = 1, \ldots, N-1$, where $\underline{b} = \min\{b_{S_1}, b_{S_2}\}$, $\bar{b} = \max\{b_{S_1}, b_{S_2}\}$, and $S_{\underline{b}}$, $S_{\bar{b}}$ are the Senders with the minimum and maximum bias respectively.

(A5) The off-equilibrium belief of the Receiver is that if the signal of $S_{\bar{b}}$ does not exceed the signal of $S_{\underline{b}}$ by $2(\bar{b} - b_R)$ or more, then $R$ believes $S_{\underline{b}}$; otherwise $R$ believes $S_{\bar{b}}$'s message.

Since the equilibrium is a partition of the state space, and the state space is bounded by 0 and 1, we have (A3) and (A4). To establish (A1), we follow the proof of Theorem 2.2 given in Crawford and Sobel. Let $y^*$ be an action induced for $\theta \in (a_i, a_{i+1})$. Let $N_{S_1} \times N_{S_2} \equiv \{(n_1, n_2) : y(n_1, n_2) = y^*\}$; if $R$ hears a signal $(n_1, n_2) \in N_{S_1} \times N_{S_2}$

in such an equilibrium, his conditional expected utility is

$$\int_0^1 U^R(y(n_1, n_2), \theta, b_R)\, p(\theta \mid n_1, n_2)\, d\theta = \int_0^1 U^R(y(n_1, n_2), \theta, b_R)\, \frac{q_{S_1}(n_1 \mid \theta)\, q_{S_2}(n_2 \mid \theta, n_1)\, f(\theta)}{\int_0^1 q_{S_1}(n_1 \mid t)\, q_{S_2}(n_2 \mid t, n_1)\, f(t)\, dt}\, d\theta$$

Since $n_1$ and $n_2$ are signals for $\theta \in (a_i, a_{i+1})$, we have $q_{S_1}(n_1 \mid \theta) = 0$ and $q_{S_2}(n_2 \mid \theta, n_1) = 0$ for $\theta \notin (a_i, a_{i+1})$. So the conditional expected utility is proportional to

$$\int_{a_i}^{a_{i+1}} U^R(y(n_1, n_2), \theta, b_R)\, q_{S_1}(n_1 \mid \theta)\, q_{S_2}(n_2 \mid \theta, n_1)\, f(\theta)\, d\theta$$

Since $y^*$ is a best response for $R$ to any signal $(n_1, n_2) \in N_{S_1} \times N_{S_2}$, it must also maximize

$$\int_{N_{S_1}} \int_{N_{S_2}} \int_{a_i}^{a_{i+1}} U^R(y(n_1, n_2), \theta, b_R)\, f(\theta)\, q_{S_1}(n_1 \mid \theta)\, q_{S_2}(n_2 \mid \theta, n_1)\, d\theta\, dn_2\, dn_1 \;\propto\; \int_{a_i}^{a_{i+1}} U^R(y(n_1, n_2), \theta, b_R)\, f(\theta)\, d\theta \qquad (3.7)$$

where the identity follows because $y(n_1, n_2)$ is constant over the range of integration $N_{S_1} \times N_{S_2}$ and the terms $q_{S_1}(n_1 \mid \theta)$ and $q_{S_2}(n_2 \mid \theta, n_1)$ integrate to unity by Definition 2.1. It follows that all equilibria are essentially equivalent to those with uniform signaling rules. If the true state $\theta \in (a_i, a_{i+1})$, then from (II) of Definition 2.1, using the uniform signaling rules explained above ($q_{S_1}(n_1 \mid \theta)$ uniform over $[a_i, a_{i+1}]$, so that $q_{S_1}(n_1 \mid \theta) = 0$ for $\theta \notin [a_i, a_{i+1}]$, and similarly for $q_{S_2}(n_2 \mid \theta, n_1)$), we have for $\theta \in (a_i, a_{i+1})$

$$p(\theta \mid n_1, n_2) \equiv \frac{q_{S_1}(n_1 \mid \theta)\, q_{S_2}(n_2 \mid \theta, n_1)\, f(\theta)}{\int_0^1 q_{S_1}(n_1 \mid t)\, q_{S_2}(n_2 \mid t, n_1)\, f(t)\, dt} = \frac{f(\theta)}{\int_{a_i}^{a_{i+1}} f(t)\, dt} \qquad (3.8)$$

and $p(\theta \mid n_1, n_2) = 0$ for $\theta \notin (a_i, a_{i+1})$.

From (II) of Definition 2.1 we have

$$y(n_1, n_2) = \arg\max_y \int_0^1 U^R(y, \theta, b_R)\, p(\theta \mid n_1, n_2)\, d\theta = \arg\max_y \int_{a_i}^{a_{i+1}} U^R(y, \theta, b_R)\, \frac{f(\theta)}{\int_{a_i}^{a_{i+1}} f(t)\, dt}\, d\theta \qquad (3.9)$$

So, using (3.6) and (3.9), we have (A2). In equilibrium, (A) and (A$'$) are satisfied by Lemma 2.2.

To establish the off-equilibrium beliefs given in (A5), we provide the necessary mathematical intuition without going into a rigorous proof. We know by monotonicity that $S_{\underline{b}}$ always weakly prefers the lower action to the higher action at a point of discontinuity of the equilibrium action space, and similarly $S_{\bar{b}}$ always weakly prefers the higher action to the lower one. Let $y^-$ be the induced action for $(a_{i-1}, a_i)$ and $y^+$ the induced action for $(a_i, a_{i+1})$. At $a_i$, $S_{\underline{b}}$ weakly prefers $y^-$ over $y^+$, and $S_{\bar{b}}$ weakly prefers $y^+$ over $y^-$. In equilibrium, let $S_{\underline{b}}$ and $S_{\bar{b}}$ send messages $n_1^-$ and $n_2^-$ respectively to induce the action $y^-$ when the true state $\theta \in (a_{i-1}, a_i)$; $n_1^-$ and $n_2^-$ are uniformly distributed over $(a_{i-1}, a_i)$. Similarly, let $S_{\underline{b}}$ and $S_{\bar{b}}$ send messages $n_1^+$ and $n_2^+$ respectively to induce the action $y^+$ when $\theta \in (a_i, a_{i+1})$; $n_1^+$ and $n_2^+$ are uniformly distributed over $(a_i, a_{i+1})$.

Let the true state $\theta$ lie in the interval $(a_{i-1}, a_i)$, close to $a_i$. If $S_{\bar{b}}$ tries to deviate to the higher action $y^+$, he has to send a message greater than or equal to $n_1^- + 2(\bar{b} - b_R)$. But then $R$ believes this message and takes the action $y^R(n_1^- + 2(\bar{b} - b_R), b_R)$, which is not profitable for $S_{\bar{b}}$. This can be seen by drawing the parabola representing $S_{\bar{b}}$'s utility function and noting that $n_1^-$ lies in $(a_{i-1}, a_i)$. Similarly, if the true state $\theta$ lies in $(a_i, a_{i+1})$ close to $a_i$ and Sender $S_{\underline{b}}$ tries to deviate to the lower action $y^-$ by sending the message $n_1^-$, then $S_{\bar{b}}$ can send a message greater than or equal to $n_1^- + 2(\bar{b} - b_R)$ and thereby obtain a profitable deviation.
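The fixed point in (3.9) is easy to check numerically for a uniform prior: the Receiver's best response to messages from $(a_i, a_{i+1})$ is the interval midpoint shifted by his own bias, $y(a_i, a_{i+1}) = (a_i + a_{i+1})/2 + b_R$. A minimal sketch, with illustrative endpoints and bias:

```python
import numpy as np

# Sketch: with a uniform posterior on (a_i, a_{i+1}) and quadratic loss,
# the Receiver's optimal action is the interval midpoint plus his bias b_R.
# The endpoints and bias below are illustrative values, not from the text.
a_i, a_next, b_R = 0.3, 0.6, 0.05

theta = np.linspace(a_i, a_next, 2001)        # grid over the interval
candidates = np.linspace(0.0, 1.2, 1201)      # candidate actions y

# Expected utility of each candidate action under the uniform posterior.
expected_u = [np.mean(-(y - (theta + b_R)) ** 2) for y in candidates]
y_star = candidates[int(np.argmax(expected_u))]

print(f"numerical argmax : {y_star:.4f}")
print(f"midpoint + bias  : {(a_i + a_next) / 2 + b_R:.4f}")   # 0.45 + 0.05
```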

Senders with Opposing Biases

In this section we consider two experts with opposing biases, that is, $b_{S_i} > b_R > b_{S_j}$, $i, j = 1, 2$, $i \neq j$. While expert $i$ prefers a higher action than is ideal for the decision maker, expert $j$ prefers a lower action: for all $\theta$, $y^{S_j}(\theta, b_{S_j}) < y^R(\theta, b_R) < y^{S_i}(\theta, b_{S_i})$. In effect, the experts want to tug the decision maker in opposite directions. From Proposition 2.3, we know that a fully revealing PBE does not exist. We argue below, however, that when experts have opposing biases it is possible to construct monotonic equilibria that are semi-revealing, in the sense that the decision maker learns the true state over a portion of the state space. We then show that semi-revealing equilibria are informationally superior to the most informative single-expert equilibrium. This construction, however, requires that the experts not be extremists.

Extremists: We say that a right-biased expert ($b_{S_i} > b_R$) is an extremist if for all $\theta \in [0, 1]$,

$$U^{S_i}(y^R(\theta, b_R), \theta, b_{S_i}) \le U^{S_i}(y^R(1, b_R), \theta, b_{S_i}) \qquad (3.10)$$

Similarly, a left-biased expert ($b_{S_j} < b_R$) is an extremist if for all $\theta \in [0, 1]$,

$$U^{S_j}(y^R(0, b_R), \theta, b_{S_j}) \ge U^{S_j}(y^R(\theta, b_R), \theta, b_{S_j}) \qquad (3.11)$$

A right-biased extremist $S_i$ is an expert whose bias is so high that, no matter what the state, he prefers the highest ideal action $y^R(1, b_R)$ over $y^R(\theta, b_R)$. Similarly, a left-biased extremist $S_j$ prefers the lowest ideal action $y^R(0, b_R)$. In the uniform-quadratic case, an expert $S$ is an extremist if $|b_S - b_R| \ge 1/2$. For a right-biased expert this follows from (3.10): for all $\theta \in [0, 1]$,

$$-(y^R(\theta, b_R) - (\theta + b_S))^2 \le -(y^R(1, b_R) - (\theta + b_S))^2$$
$$\Leftrightarrow\ (\theta + b_R - (\theta + b_S))^2 \ge (1 + b_R - (\theta + b_S))^2$$
$$\Leftrightarrow\ b_S - b_R \ge 1 + b_R - \theta - b_S \ \Leftrightarrow\ b_S - b_R \ge \frac{1 - \theta}{2} \qquad (3.12)$$

Since this must hold for all $\theta \in [0, 1]$, the worst case $\theta = 0$ gives $b_S - b_R \ge 1/2$. Similarly, from (3.11) a left-biased extremist satisfies $b_R - b_S \ge 1/2$. Combining, $|b_S - b_R| \ge 1/2$.
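The threshold $|b_S - b_R| \ge 1/2$ can be sanity-checked numerically; the sketch below, with illustrative bias values, tests condition (3.10) on a state grid in the uniform-quadratic case.

```python
import numpy as np

# Sketch: check the extremist condition (3.10) on a grid, uniform-quadratic
# case with b_R = 0. An expert is a (right-biased) extremist iff he prefers
# the top ideal action y^R(1) = 1 to y^R(theta) = theta in every state.
b_R = 0.0
theta = np.linspace(0.0, 1.0, 1001)

def is_right_extremist(b_S):
    u_track = -(theta + b_R - (theta + b_S)) ** 2   # utility from y^R(theta)
    u_top   = -(1.0 + b_R - (theta + b_S)) ** 2     # utility from y^R(1)
    return bool(np.all(u_track <= u_top))

for b_S in (0.3, 0.49, 0.5, 2/3):                   # illustrative biases
    print(f"b_S = {b_S:.3f} -> extremist: {is_right_extremist(b_S)}")
# Expected: False, False, True, True  (threshold at b_S - b_R = 1/2)
```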

If an extremist were consulted alone, he would reveal no information whatsoever: he always prefers the highest action and would always send the signal $\theta = 1$, which the Receiver never believes. In the single-expert game with an extremist, the unique equilibrium involves babbling.

Now we construct a semi-revealing PBE for opposing biases. Assume that $F(\theta)$ is uniform, i.e. $f(\theta) = 1$, and suppose $b_{S_1} < b_R < b_{S_2} < b_R + 1/2$. If both experts are extremists, then both send messages at the extreme ends and no informative equilibrium can be constructed; the only equilibrium in that case is babbling, and the proof is given in Krishna and Morgan. So the biases are taken such that not both experts are extremists. The semi-revealing PBE is constructed as follows:

(a1) Both Senders tell the truth, $S_1$ sending the message $n_1 = \theta$ and $S_2$ the message $n_2 = \theta$, if $\theta \in [0,\ 1 - 2(b_{S_2} - b_R)]$. The Receiver believes them and takes the action $y^R(\theta, b_R) = \theta + b_R$.

(a2) For $\theta \in (1 - 2(b_{S_2} - b_R),\ 1]$, both Senders send messages uniformly distributed over $[1 - 2(b_{S_2} - b_R),\ 1]$, and hence the Receiver takes the action $(1 + 1 - 2 b_{S_2} + 2 b_R)/2 + b_R = 1 - b_{S_2} + 2 b_R$.

(a3) If $S_1$'s message is $n_1$ and $S_2$'s message is $n_2$, then $R$ believes $S_1$ as long as $n_2 < n_1 + 2(b_{S_2} - b_R)$; for $n_2 \ge n_1 + 2(b_{S_2} - b_R)$, $R$ believes $S_2$.

Now we prove that the equilibrium constructed above is a PBE. Suppose the true state is $\theta \le 1 - 2(b_{S_2} - b_R)$ and $S_1$ were to send a message $n_1 < \theta$ (he will never send a message above $\theta$, because $b_{S_1} < b_R$). Then $S_2$ will disagree and send a message $n_2 \ge n_1 + 2(b_{S_2} - b_R)$; $R$ believes $S_2$ and takes the action $n_2 + b_R$. This is better for $S_2$, because $n_2 + b_R$ is closer than $\theta + b_R$ to his ideal action $\theta + b_{S_2}$, and so it increases his utility by the properties of the quadratic utility function. Thus $S_1$ has no profitable deviation. Also, if $S_1$ sends the true message $n_1 = \theta$ and $S_2$ were to deviate, he cannot gain by sending a message $n_2 \ge n_1 + 2(b_{S_2} - b_R)$, as this would decrease his utility. So $S_1$'s message remains valid and $R$ takes the action $\theta + b_R$.

For a true state $\theta > 1 - 2(b_{S_2} - b_R)$, however, the above argument fails: there is now no rationalizable action $z \le y^R(1, b_R)$ (with $y^R(1, b_R)$ the highest action possible) such that, if expert 1 were to deviate to a slightly lower state, expert 2 could not send a message $2(b_{S_2} - b_R)$ above the message of $S_1$ and obtain a profitable deviation. So here the equilibrium strategy is that $S_1$ sends a message uniformly distributed over $[1 - 2(b_{S_2} - b_R),\ 1]$, $S_2$ agrees with him, and the Receiver takes the action $1 - b_{S_2} + 2 b_R$.
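The resulting outcome function is simple to simulate; the sketch below, with illustrative biases satisfying $b_R < b_{S_2} < b_R + 1/2$, shows the truthful region below the cutoff and the pooled action above it.

```python
# Sketch of the semi-revealing outcome function: truthful revelation below
# the cutoff 1 - 2(b_S2 - b_R), pooling on a single action above it.
# The biases are illustrative, with b_R < b_S2 < b_R + 1/2.
b_R, b_S2 = 0.0, 0.1

cutoff = 1 - 2 * (b_S2 - b_R)             # = 0.8 here
pool_action = 1 - b_S2 + 2 * b_R          # Receiver's action on the pool

def outcome(theta):
    """Receiver's equilibrium action Y(theta) in the semi-revealing PBE."""
    return theta + b_R if theta <= cutoff else pool_action

for theta in (0.2, 0.5, 0.79, 0.85, 0.99):
    print(f"theta = {theta:.2f} -> action {outcome(theta):.3f}")
# Below 0.8 the state is fully revealed; above it the action pools at 0.900.
```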

3 Example

Consider the case of like biases. We know that only a finite number of equilibrium actions is possible, and we have already characterized the equilibrium above. Let $a(N) = (a_0(N) = 0, \ldots, a_N(N) = 1)$ be an equilibrium partition of the state space $[0, 1]$ with $N$ intervals. For shorthand we write $a = a(N)$ and $a_i = a_i(N)$, $i = 0, \ldots, N$. So $a_i$, $i = 1, \ldots, N-1$, are the points of discontinuity of the equilibrium action set of $R$. We also assume that the prior distribution $F(\theta)$ is uniform, so $f(\theta) = 1$.

Since the $a_i$, $i = 1, \ldots, N-1$, are points of discontinuity, from (3.4) of Lemma 2.2 we have

$$U^{S_{\underline{b}}}(y^-, a_i, \underline{b}) \ge U^{S_{\underline{b}}}(y^+, a_i, \underline{b}) \ \Leftrightarrow\ U^{S_{\underline{b}}}(y(a_{i-1}, a_i), a_i, \underline{b}) \ge U^{S_{\underline{b}}}(y(a_i, a_{i+1}), a_i, \underline{b}) \qquad (3.13)$$

From (3.5) of Lemma 2.2 we have

$$U^{S_{\bar{b}}}(y(a_{i-1}, a_i), a_i, \bar{b}) \le U^{S_{\bar{b}}}(y(a_i, a_{i+1}), a_i, \bar{b}) \qquad (3.14)$$

Considering the quadratic utility function specified above for $R$ and the structure of the equilibrium for like biases, the same calculation as in the first chapter gives

$$y(a_i, a_{i+1}) = \arg\max_y \int_{a_i}^{a_{i+1}} U^R(y, \theta, b_R)\, f(\theta)\, d\theta = \frac{a_i + a_{i+1}}{2} + b_R \qquad (3.15)$$

Using the quadratic utility functions specified above for the Senders, (3.13) becomes

$$-(y(a_{i-1}, a_i) - (a_i + \underline{b}))^2 \ge -(y(a_i, a_{i+1}) - (a_i + \underline{b}))^2 \qquad (3.16)$$

Given the monotonicity of $a$, this holds if and only if $a_i + \underline{b}$ is weakly closer to $y(a_{i-1}, a_i)$ than to $y(a_i, a_{i+1})$:

$$(a_i + \underline{b}) - y(a_{i-1}, a_i) \le y(a_i, a_{i+1}) - (a_i + \underline{b})$$
$$\Leftrightarrow\ 2(a_i + \underline{b}) \le a_i + \frac{a_{i-1} + a_{i+1}}{2} + 2 b_R \quad \text{(from (3.15))}$$
$$\Leftrightarrow\ a_i \le \frac{a_{i-1} + a_{i+1}}{2} + 2 b_R - 2 \underline{b} \qquad (3.17)$$

Similar calculations from (3.14) give

$$a_i \ge \frac{a_{i-1} + a_{i+1}}{2} + 2 b_R - 2 \bar{b} \qquad (3.18)$$

Combining, we have

$$\frac{a_{i-1} + a_{i+1}}{2} + 2 b_R - 2 \bar{b} \ \le\ a_i \ \le\ \frac{a_{i-1} + a_{i+1}}{2} + 2 b_R - 2 \underline{b} \qquad (3.19)$$

From (3.17) and $a_0 = 0$ we obtain, by induction,

$$a_i \le \frac{i}{i+1}\, a_{i+1} + i\,(2 b_R - 2 \underline{b}) \quad \text{for } i = 1, \ldots, N-1 \qquad (3.20)$$

From (3.18) and $a_0 = 0$ we obtain

$$a_i \ge \frac{i}{i+1}\, a_{i+1} + i\,(2 b_R - 2 \bar{b}) \quad \text{for } i = 1, \ldots, N-1 \qquad (3.21)$$

From (3.20) and $a_N = 1$,

$$a_{N-1} \le \frac{N-1}{N}\, a_N + (N-1)(2 b_R - 2 \underline{b}) = \frac{N-1}{N} + (N-1)(2 b_R - 2 \underline{b})$$
$$a_{N-2} \le \frac{N-2}{N-1}\, a_{N-1} + (N-2)(2 b_R - 2 \underline{b}) \le \frac{N-2}{N} + 2(N-2)(2 b_R - 2 \underline{b})$$
$$a_{N-3} \le \frac{N-3}{N-2}\, a_{N-2} + (N-3)(2 b_R - 2 \underline{b}) \le \frac{N-3}{N} + 3(N-3)(2 b_R - 2 \underline{b})$$
$$\vdots$$
$$a_1 \le \frac{N-(N-1)}{N} + (N-1)(N-(N-1))(2 b_R - 2 \underline{b}) = \frac{1}{N} + (N-1)(2 b_R - 2 \underline{b})$$

Since $a_1 > 0$, we have

$$\frac{1}{N} + (N-1)(2 b_R - 2 \underline{b}) > 0 \qquad (3.22)$$

(Using the above equations one can show that, for a given $N$, if $a_1 > 0$ then $a_i > 0$ for all $i = 1, \ldots, N$.) Let the maximum value of $N$ satisfying inequality (3.22) be $N_1$. Similarly, from (3.21) and $a_N = 1$,

$$a_{N-1} \ge \frac{N-1}{N}\, a_N + (N-1)(2 b_R - 2 \bar{b}) = \frac{N-1}{N} + (N-1)(2 b_R - 2 \bar{b})$$

Since $a_{N-1} < 1$, we have

$$\frac{N-1}{N} + (N-1)(2 b_R - 2 \bar{b}) < 1 \qquad (3.23)$$

(Again, for a given $N$, if $a_{N-1} < 1$ then $a_i < 1$ for all $i = 0, \ldots, N-1$.) Let the maximum value of $N$ satisfying inequality (3.23) be $N_2$. Any equilibrium with $N$ intervals must satisfy both (3.22) and (3.23). So the maximum number of intervals an equilibrium can have, which we denote $N(b_{S_1}, b_{S_2}, b_R)$, is

$$N(b_{S_1}, b_{S_2}, b_R) = \min(N_1, N_2)$$

We call the equilibrium with the maximum number of intervals the most informative equilibrium, as it lets the Receiver discern among the states most effectively. For like biases, (3.23) is always satisfied, so $N(b_{S_1}, b_{S_2}, b_R) = N_1$. On the other hand, for $\underline{b} < b_R < \bar{b}$, both (3.22) and (3.23) hold for every $N$, so $N(b_{S_1}, b_{S_2}, b_R) \to \infty$. This is exactly what we found in the PBE constructed above for opposing biases.
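The bounds (3.22) and (3.23) can be evaluated mechanically; the helper below is a minimal sketch (the helper name is our own), evaluated at the bias values used in the example that follows.

```python
# Sketch: maximum number of intervals N(b_S1, b_S2, b_R) from (3.22)-(3.23).
def max_partitions(b_S1, b_S2, b_R, n_cap=10_000):
    b_lo, b_hi = min(b_S1, b_S2), max(b_S1, b_S2)
    n1 = n2 = 0
    for n in range(1, n_cap + 1):
        if 1 / n + (n - 1) * (2 * b_R - 2 * b_lo) > 0:        # (3.22)
            n1 = n
        if (n - 1) / n + (n - 1) * (2 * b_R - 2 * b_hi) < 1:  # (3.23)
            n2 = n
    return min(n1, n2)   # n_cap stands in for "infinity"

print(max_partitions(b_S1=1/40, b_S2=1/9, b_R=0))    # like biases -> 4
print(max_partitions(b_S1=-1/40, b_S2=1/9, b_R=0))   # opposing -> hits the cap
```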

We shall follow similar arguments in the two-Senders-and-two-Receivers case to analyze the equilibria and to find the conditions under which the number of equilibrium actions is finite or infinite.

The same calculations as in Chapter 1 give the ex ante expected utilities of the Senders and the Receiver here, since the structure of the utility functions is the same:

$$EU^R = -\frac{1}{12} \sum_{i=0}^{N-1} (a_{i+1} - a_i)^3$$

and for $j = 1, 2$,

$$EU^{S_j} = EU^R - (b_{S_j} - b_R)^2$$

Let us construct the equilibrium for like biases. If we want to keep $S_{\underline{b}}$ indifferent at some $a_i$, $i = 1, \ldots, N-1$, between the lower and the higher action, we use (3.17) with equality; to keep $S_{\bar{b}}$ indifferent at some $a_i$, we use (3.18) with equality. Observe that if $R$ were to consult only $S_{\underline{b}}$, then for a given partition size $N$ we find the break points $a_i(N)$ from (3.17) with equality, and similarly if $R$ were to consult only $S_{\bar{b}}$, we find the break points $a_i'(N)$ from (3.18) with equality. But if the Receiver consults both $S_{\underline{b}}$ and $S_{\bar{b}}$, then the break points are determined jointly from (3.17) and (3.18), and they lie weakly below the break points $a_i(N)$ obtained from $S_{\underline{b}}$ alone. In the example below we show that the higher the break points $a_i$ lie in $[0, 1]$, the higher the utility; this means that consulting the more loyal expert $S_{\underline{b}}$ alone gives more utility than consulting both experts.

We now see these points with simple numbers. Let $b_{S_1} = 1/40$, $b_{S_2} = 1/9$ and $b_R = 0$.

If $R$ were to consult only $S_1$, calculations from the first chapter give $N(b_{S_1}, b_R) = 4$; for $N = 4$ we have $a_0 = 0$, $a_1 = 1/10$, $a_2 = 3/10$, $a_3 = 6/10$, $a_4 = 1$, and

$$EU^R = -0.0083, \quad EU^{S_1} = -0.0089.$$

If $R$ were to consult only $S_2$, then $N(b_{S_2}, b_R) = 2$; for $N = 2$, $a_0 = 0$, $a_1 = 5/18$, $a_2 = 1$, and

$$EU^R = -0.0332, \quad EU^{S_2} = -0.0455.$$
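These partitions can be verified mechanically by solving the indifference conditions as a linear system; a minimal sketch follows (the joint case anticipates the two-expert equilibrium reported just below).

```python
import numpy as np

# Sketch verifying the example's partitions (b_S1 = 1/40, b_S2 = 1/9, b_R = 0).
b1, b2 = 1/40, 1/9

def eu_receiver(a):
    return -np.sum(np.diff(a) ** 3) / 12     # EU^R = -(1/12) * sum d_i^3

# Indifference of a single bias-b expert at every interior break point:
# a_i = (a_{i-1} + a_{i+1})/2 - 2b, a tridiagonal linear system.
def cs_partition(b, N):
    A = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
    rhs = np.full(N - 1, -4 * b)
    rhs[-1] += 1.0                            # a_N = 1 enters the last row
    return np.concatenate(([0.0], np.linalg.solve(A, rhs), [1.0]))

print(np.round(cs_partition(b1, 4), 3), round(eu_receiver(cs_partition(b1, 4)), 4))
print(np.round(cs_partition(b2, 2), 3), round(eu_receiver(cs_partition(b2, 2)), 4))

# Two experts, N = 4: S_1 indifferent at a_1, a_2 and S_2 at a_3.
A = 2 * np.eye(3) - np.eye(3, k=1) - np.eye(3, k=-1)
rhs = np.array([-4 * b1, -4 * b1, 1 - 4 * b2])
a_joint = np.concatenate(([0.0], np.linalg.solve(A, rhs), [1.0]))
print(np.round(a_joint, 4), round(eu_receiver(a_joint), 4))   # EU^R = -0.0247
```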

If $R$ were to consult both $S_1$ and $S_2$, we find $N(b_{S_1}, b_{S_2}, b_R) = 4$. Keeping $S_1$ indifferent between the lower and higher actions at $a_1$ and $a_2$ (we employ (3.17) with equality) and $S_2$ indifferent at $a_3$ (we employ (3.18) with equality) gives, for $N = 4$: $a_0 = 0$, $a_1 = 1/72$, $a_2 = 23/180$, $a_3 = 41/120$, $a_4 = 1$, and

$$EU^R = -0.0247, \quad EU^{S_1} = -0.0253, \quad EU^{S_2} = -0.0370.$$

This implies that the Receiver's utility from consulting the two experts together is less than his utility from consulting expert $S_1$ alone. The only way $R$ can increase his utility with two experts is to keep $S_1$ indifferent at all break points. Employing (3.17) with equality throughout (following the calculations of the first chapter), we get $a_0 = 0$, $a_1 = 1/10$, $a_2 = 3/10$, $a_3 = 6/10$, $a_4 = 1$. These break points are the same as those generated by consulting $S_1$ alone, and hence the utilities are the same:

$$EU^R = -0.0083, \quad EU^{S_1} = -0.0089, \quad EU^{S_2} = -0.0206.$$

We can summarize the above discussion in the following proposition.

Proposition 3.1 (Krishna and Morgan): Suppose that expert $i$ has bias $b_{S_i} > b_R$. Then the addition of another expert with bias $b_{S_j} \ge b_{S_i}$ is never informationally superior.

Now we discuss the PBE for experts with opposing biases. We have already constructed the PBE in the discussion of opposing biases above. We now examine the welfare question: is it better for the decision maker to consult two experts with opposing biases, or is one expert enough?

Let $b_{S_2} = 1/40$, $b_{S_1} = -2/3$ and $b_R = 0$. Here $S_2$ is not an extremist whereas $S_1$ is an extremist. Previously we noted that with two extremists no informative PBE exists; the only equilibrium is babbling. The PBE constructed above for opposing biases is as follows: for true states $\theta \le 1 - 2(b_{S_2} - b_R) = 1 - 1/20 = 19/20$, $S_1$ and $S_2$ tell the truth and the action of $R$ is $\theta$. For $\theta > 19/20$, the signals of $S_1$ and $S_2$ are uniformly distributed over $[19/20, 1]$ and the action is $(1 + 19/20)/2 = 0.975$.

$$EU^R = -\int_0^{1-2(b_{S_2}-b_R)} (y^R(\theta, b_R) - (\theta + b_R))^2\, d\theta \;-\; \int_{1-2(b_{S_2}-b_R)}^1 (1 - b_{S_2} + 2 b_R - (\theta + b_R))^2\, d\theta = -\frac{(b_{S_2} - b_R)^3}{3} - \frac{(b_{S_2} - b_R)^3}{3} \qquad (3.24)$$

Here the first term equals zero, since $y^R(\theta, b_R) = \theta + b_R$; integrating the second term gives the value above. For $j = 1, 2$,

$$EU^{S_j} = -\int_0^{1-2(b_{S_2}-b_R)} (y^R(\theta, b_R) - (\theta + b_{S_j}))^2\, d\theta \;-\; \int_{1-2(b_{S_2}-b_R)}^1 (1 - b_{S_2} + 2 b_R - (\theta + b_{S_j}))^2\, d\theta$$
$$= -(b_R - b_{S_j})^2 (1 - 2(b_{S_2} - b_R)) + \frac{(-b_{S_2} + b_R - b_{S_j})^3}{3} - \frac{(b_{S_2} - b_R - b_{S_j})^3}{3} \qquad (3.25)$$

Here $EU^R = -0.00001$, $EU^{S_1} = -0.4226$ and $EU^{S_2} = -0.0010$. So the players obtain very high utility when the Receiver consults two experts with opposing biases. Hence we have the following proposition:

Proposition 3.2 (Krishna and Morgan): Suppose that expert $S_i$ has bias $b_{S_i} > b_R$ and is not an extremist. Then the addition of another expert $S_j$ with $b_{S_j} < b_R$ is informationally superior.
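Before moving on, the Receiver's value in (3.24) can be checked by direct numerical integration of the semi-revealing outcome of this example; a minimal sketch:

```python
import numpy as np

# Sketch: numerically integrate the Receiver's ex ante utility in the
# semi-revealing PBE of the example (b_S2 = 1/40, b_R = 0, uniform prior).
b_R, b_S2 = 0.0, 1/40
cutoff = 1 - 2 * (b_S2 - b_R)              # 19/20
pool_action = 1 - b_S2 + 2 * b_R           # 0.975

theta = np.linspace(0.0, 1.0, 200_001)
action = np.where(theta <= cutoff, theta + b_R, pool_action)
loss = -(action - (theta + b_R)) ** 2      # Receiver's realized utility

print(f"{np.trapz(loss, theta):.6f}")      # ~ -0.000010, matching (3.24)
# Closed form: -2*(b_S2 - b_R)**3 / 3 = -2*(1/40)**3/3, about -1.04e-05.
```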

Chapter 4

Two Senders and Two Receivers

1 The Model

Previously we discussed Cheap Talk between one Sender and one Receiver, one Sender and two Receivers, and two Senders and one Receiver. Now we present our extension of Cheap Talk to two Senders and two Receivers, taking into account the discussions of the previous chapters. We show that the structure of the equilibrium here is a combination of the structures of the equilibria discussed previously. This should be expected, because from the two-Sender, two-Receiver model we must be able to infer the results of the previous chapters: keeping the two Senders' biases equal, we should recover the results of Chapter 2; keeping the two Receivers' biases equal, we should recover the results of Chapter 3; and keeping both the Senders' and the Receivers' biases equal, we should recover the results of Chapter 1. To follow this chapter, familiarity with the previous chapters is assumed.

There are two Senders $S_1, S_2$ and two Receivers $R_1$ and $R_2$; only the Senders have private information about the true state of nature $\theta$, whose differentiable probability distribution function $F(\theta)$, with density $f(\theta)$, is supported on $[0, 1]$ ($f(\theta) \neq 0$ for $\theta \in [0, 1]$). The Receivers have no information about the state of nature $\theta$; each Receiver's action affects his own

payoff and the payoffs of the Senders, but does not affect the payoff of the other Receiver. Our discussion again uses quadratic utility functions for the agents. The utility function of $S_i$ ($i = 1, 2$) with $R_j$ ($j = 1, 2$) alone, or with both $S_k$ ($k \neq i$, $k = 1, 2$) and $R_j$ in public, is

$$U^{S_i R_j}(y_{R_j}, \theta, b_{S_i}) = -(y_{R_j} - (\theta + b_{S_i}))^2 \qquad (4.1)$$

The utility function of $S_i$ with $R_1$ and $R_2$ in public, or with $S_k$ ($k \neq i$, $k = 1, 2$), $R_1$ and $R_2$ in public, is

$$U^{S_i}(y_{R_1}, y_{R_2}, \theta, b_{S_i}) = -\frac{1}{4} (y_{R_1} + y_{R_2} - 2(\theta + b_{S_i}))^2 \qquad (4.2)$$

The utility function of $R_i$ with any number of other players is

$$U^{R_i}(y_{R_i}, \theta, b_{R_i}) = -(y_{R_i} - (\theta + b_{R_i}))^2 \qquad (4.3)$$

The biases $b_{S_i}$, $i = 1, 2$, of the Senders and $b_{R_j}$, $j = 1, 2$, of the Receivers represent how close the players' preferences lie. The factor $1/4$ in the Senders' utility in the presence of both Receivers is a normalizing factor to level the utilities of the players. The utility of a Sender depends on the actions taken by all Receivers who receive his signal, and each Receiver's action does not affect the other Receiver's action. (Utility functions of this kind have been used in all the previous chapters.)

The game proceeds as follows. After observing the state, the Senders send signals sequentially and publicly according to a protocol, which specifies the order in which the Senders move (later we show that the protocol has no effect on the structure of the equilibrium). Without loss of generality, denote the Sender who moves first as $S_1$, with signal $n_1 \in [0, 1]$, the other as $S_2$, with signal $n_2 \in [0, 1]$, and the Receivers as $R_1$ and $R_2$ with actions $y_{R_1}$ and $y_{R_2}$ respectively.

2 Equilibrium

The equilibrium concept we employ here is Perfect Bayesian Equilibrium (PBE). PBE is a natural generalization of Bayesian Nash Equilibrium that takes into account the incomplete information on the part of the Receivers and the sequentiality of the Senders' signals. Formally, we define the PBE of the two-Sender, two-Receiver game as follows.

Definition 2.1 A Perfect Bayesian Equilibrium (PBE) consists of a family of signaling rules for $S_1$, denoted $q_{S_1}(n_1 \mid \theta)$ (the probability of sending signal $n_1$ in true state $\theta$), and for $S_2$, denoted $q_{S_2}(n_2 \mid \theta, n_1)$ (the probability of sending signal $n_2$ given $\theta$ and $n_1$), and action rules $y_{R_1}(n_1, n_2)$ and $y_{R_2}(n_1, n_2)$ (the actions taken after receiving the signals $n_1$ and $n_2$) for $R_1$ and $R_2$ respectively, such that:

(I) For each $\theta \in [0, 1]$, $\int_{N_{S_1}} q_{S_1}(n_1 \mid \theta)\, dn_1 = 1$ and $\int_{N_{S_2}} q_{S_2}(n_2 \mid \theta, n_1)\, dn_2 = 1$, where $N_{S_1}$ is the set of feasible signals for $S_1$ given $\theta$ and $N_{S_2}$ is the set of feasible signals for $S_2$ given $\theta$ and $n_1$; and if $n_1$ is in the support of $q_{S_1}(\cdot \mid \theta)$, then $n_1$ solves $\max_{n_1 \in N_{S_1}} U^{S_1}(y_{R_1}(n_1, n_2), y_{R_2}(n_1, n_2), \theta, b_{S_1})$ given $n_2$; and if $n_2$ is in the support of $q_{S_2}(\cdot \mid \theta, n_1)$, then $n_2$ solves $\max_{n_2 \in N_{S_2}} U^{S_2}(y_{R_1}(n_1, n_2), y_{R_2}(n_1, n_2), \theta, b_{S_2})$.

(II) For each $(n_1, n_2)$, $y_{R_1}(n_1, n_2)$ solves $\max_y \int_0^1 U^{R_1}(y, \theta, b_{R_1})\, p(\theta \mid n_1, n_2)\, d\theta$, and $y_{R_2}(n_1, n_2)$ solves $\max_y \int_0^1 U^{R_2}(y, \theta, b_{R_2})\, p(\theta \mid n_1, n_2)\, d\theta$, where

$$p(\theta \mid n_1, n_2) \equiv q_{S_1}(n_1 \mid \theta)\, q_{S_2}(n_2 \mid \theta, n_1)\, f(\theta) \Big/ \int_0^1 q_{S_1}(n_1 \mid t)\, q_{S_2}(n_2 \mid t, n_1)\, f(t)\, dt$$

Condition (I) says that the signaling rules of $S_1$ and $S_2$ are expected-utility maximizing for the true state $\theta$, taking the action rules of $R_1$ and $R_2$ as given. Condition (II) says that $R_1$ and $R_2$ respond optimally to each possible signal pair $(n_1, n_2)$, using Bayes' rule to update their prior

beliefs, taking into account the signaling strategies of $S_1$ and $S_2$ and the signals they receive.

An important observation is that the posterior beliefs of the two Receivers are the same (this can be seen from the definition above), because both Receivers are rational and occupy the same strategic position: each takes an action after hearing the Senders' signals. So both Receivers maximize their utility with respect to the same posterior belief. It follows that the equilibrium actions of $R_1$ and $R_2$ are related. From (II) of Definition 2.1,

$$y_{R_1}(n_1, n_2) = \arg\max_y \int_0^1 U^{R_1}(y, \theta, b_{R_1})\, p(\theta \mid n_1, n_2)\, d\theta = \arg\max_y \int_0^1 -(y - (\theta + b_{R_1}))^2\, p(\theta \mid n_1, n_2)\, d\theta \qquad (4.4)$$

and let the solution be $y = y^*$. Similarly,

$$y_{R_2}(n_1, n_2) = \arg\max_y \int_0^1 U^{R_2}(y, \theta, b_{R_2})\, p(\theta \mid n_1, n_2)\, d\theta = \arg\max_y \int_0^1 -(y - (\theta + b_{R_2}))^2\, p(\theta \mid n_1, n_2)\, d\theta$$
$$= \arg\max_y \int_0^1 -(y + b_{R_1} - b_{R_2} - (\theta + b_{R_1}))^2\, p(\theta \mid n_1, n_2)\, d\theta = \arg\max_z \int_0^1 -(z - (\theta + b_{R_1}))^2\, p(\theta \mid n_1, n_2)\, d\theta \qquad (4.5)$$

where $z = y + b_{R_1} - b_{R_2}$. Let the solution of the last problem be $z^*$. Clearly $y^* = z^*$, so

$$y_{R_1}(n_1, n_2) = y_{R_2}(n_1, n_2) + b_{R_1} - b_{R_2} \qquad (4.6)$$

Given a PBE, we denote by $Y$ the outcome function that associates with every state the resulting equilibrium actions of the Receivers.

Formally, for each $\theta$, $Y(\theta) = (Y_{R_1}(\theta), Y_{R_2}(\theta)) = (y_{R_1}(n_1(\theta), n_2(\theta, n_1)),\ y_{R_2}(n_1(\theta), n_2(\theta, n_1)))$. Since in equilibrium $y_{R_1}(n_1, n_2) = y_{R_2}(n_1, n_2) + b_{R_1} - b_{R_2}$, we are free to consider $Y_{R_1}(\cdot)$ instead of $Y(\cdot)$; all the mathematical properties (continuity, monotonicity, etc.) possessed by $Y_{R_1}(\cdot)$ are also possessed by $Y_{R_2}(\cdot)$. Let $y = (y_{R_1}, y_{R_2})$ and denote $Y^{-1}(y) = \{\theta : Y(\theta) = y\}$. Given an outcome function $Y$, we can determine the resulting equilibrium partition $P = \{Y^{-1}(y) : y \text{ is an equilibrium action pair}\}$ of the state space. The partition $P$ is then a measure of the informational content of the equilibrium: if an equilibrium partition $P'$ is finer than $P$, the informational content of $P'$ is greater, since the finer partition allows the Receivers to discern among the states more effectively.

A PBE always exists. In particular, there are always equilibria in which all messages from both Senders are completely ignored by the Receivers; in other words, both Senders babble. To see that this is a PBE, notice that since the Senders' messages contain no information, the Receivers correctly disregard them in making their decisions. Likewise, from the perspective of each Sender, since messages will always be ignored by the Receivers, there is no advice-giving strategy that improves payoffs relative to babbling. The information loss is most severe in a babbling equilibrium. But there are also other, more informative equilibria.

The equilibria we consider for two Senders and two Receivers have the property that the equilibrium actions induced in a higher state are at least as large as those induced in a lower state. This property is known as monotonicity. Formally, we say that a PBE with outcome function $Y$ is monotonic if $Y_{R_1}(\cdot)$ is a non-decreasing function (monotonicity of $Y_{R_1}(\cdot)$ implies monotonicity of $Y_{R_2}(\cdot)$). Monotonicity does not always hold for two Senders and two Receivers, which makes the equilibrium analysis complicated, so our analysis is concerned with monotonic equilibria. The following lemma identifies necessary conditions satisfied by monotonic equilibria; it is almost the same as the incentive constraints presented for the two-Sender, one-Receiver case.

Lemma 2.2 Suppose $Y_{R_1}$ is monotonic (which implies that $Y_{R_2}$ is monotonic too). Suppose $Y_{R_1}$ has a discontinuity at $\theta$ (then so does $Y_{R_2}$), with

$$\lim_{\varepsilon \to 0^+} Y_{R_1}(\theta - \varepsilon) = y^-_{R_1} < y^+_{R_1} = \lim_{\varepsilon \to 0^+} Y_{R_1}(\theta + \varepsilon)$$
$$\lim_{\varepsilon \to 0^+} Y_{R_2}(\theta - \varepsilon) = y^-_{R_2} < y^+_{R_2} = \lim_{\varepsilon \to 0^+} Y_{R_2}(\theta + \varepsilon)$$

and let $\underline{b} = \min\{b_{S_1}, b_{S_2}\}$, $\bar{b} = \max\{b_{S_1}, b_{S_2}\}$. Then

$$U^{S_{\underline{b}}}(y^-_{R_1}, y^-_{R_2}, \theta, \underline{b}) \ge U^{S_{\underline{b}}}(y^+_{R_1}, y^+_{R_2}, \theta, \underline{b}) \qquad (4.7)$$
$$U^{S_{\bar{b}}}(y^-_{R_1}, y^-_{R_2}, \theta, \bar{b}) \le U^{S_{\bar{b}}}(y^+_{R_1}, y^+_{R_2}, \theta, \bar{b}) \qquad (4.8)$$

where $U^{S_{\underline{b}}}$ is the utility of the Sender with the lower bias and $U^{S_{\bar{b}}}$ is the utility of the Sender with the higher bias.

(Note: Viewed from a mechanism-design perspective, these inequalities are incentive constraints: at any discontinuity, the Sender with bias $\underline{b}$ weakly prefers the lower action pair $y^-$ whereas the Sender with bias $\bar{b}$ weakly prefers the higher action pair $y^+$. This is to be contrasted with the one-Sender case, with one or multiple Receivers, where the corresponding constraints hold with equality. The basic frame of the proof presented here follows the proof in Krishna and Morgan for two Senders and one Receiver.)

Proof: Case 1: $b_{S_1} < b_{S_2}$, so $\underline{b} = b_{S_1}$ and $\bar{b} = b_{S_2}$.

Let $n_1^+, n_2^+$ be the signals of $S_1$ and $S_2$ that induce $y^+_{R_1}$ and $y^+_{R_2}$, and let $n_1^-, n_2^-$ induce $y^-_{R_1}$ and $y^-_{R_2}$. So we can write $y_{R_1}(n_1^+, n_2^+) = y^+_{R_1}$, $y_{R_2}(n_1^+, n_2^+) = y^+_{R_2}$, $y_{R_1}(n_1^-, n_2^-) = y^-_{R_1}$ and $y_{R_2}(n_1^-, n_2^-) = y^-_{R_2}$. By (4.6) we have

$$y^-_{R_1} - b_{R_1} = y^-_{R_2} - b_{R_2}, \qquad y^+_{R_1} - b_{R_1} = y^+_{R_2} - b_{R_2} \qquad (4.9)$$

To establish (4.7), suppose to the contrary that

$$U^{S_{\underline{b}}}(y^-_{R_1}, y^-_{R_2}, \theta, \underline{b}) < U^{S_{\underline{b}}}(y^+_{R_1}, y^+_{R_2}, \theta, \underline{b}), \quad \text{i.e.}\quad U^{S_1}(y^-_{R_1}, y^-_{R_2}, \theta, b_{S_1}) < U^{S_1}(y^+_{R_1}, y^+_{R_2}, \theta, b_{S_1}) \qquad (4.10)$$

Then by continuity, for all sufficiently small $\varepsilon > 0$,

$$U^{S_1}(y^-_{R_1}, y^-_{R_2}, \theta - \varepsilon, b_{S_1}) < U^{S_1}(y^+_{R_1}, y^+_{R_2}, \theta - \varepsilon, b_{S_1}) \qquad (4.11)$$

Now suppose that in state $\theta - \varepsilon$, $S_1$ were to send the signal $n_1^+$ (the signal that $S_1$ sends in state $\theta + \varepsilon$). Let $n_2'$ be $S_2$'s best-response signal to this off-equilibrium signal in state $\theta - \varepsilon$, so that

$$U^{S_2}(y_{R_1}(n_1^+, n_2'), y_{R_2}(n_1^+, n_2'), \theta - \varepsilon, b_{S_2}) \ge U^{S_2}(y^+_{R_1}, y^+_{R_2}, \theta - \varepsilon, b_{S_2}) \qquad (4.12)$$

Keeping in mind the quadratic, parabola-shaped utility function of $S_2$, this means $y_{R_1}(n_1^+, n_2') + y_{R_2}(n_1^+, n_2')$ is at a weakly smaller distance from $2[(\theta - \varepsilon) + b_{S_2}]$ than $y^+_{R_1} + y^+_{R_2}$. This implies that

$$y_{R_1}(n_1^+, n_2') + y_{R_2}(n_1^+, n_2') \le y^+_{R_1} + y^+_{R_2} \qquad (4.13)$$

since otherwise we would have

$$U^{S_2}(y_{R_1}(n_1^+, n_2'), y_{R_2}(n_1^+, n_2'), \theta + \varepsilon, b_{S_2}) > U^{S_2}(y^+_{R_1}, y^+_{R_2}, \theta + \varepsilon, b_{S_2}),$$

contradicting the fact that $(y^+_{R_1}, y^+_{R_2})$ are the equilibrium actions in state $\theta + \varepsilon$. But now, since $y_{R_1}(n_1^+, n_2') + y_{R_2}(n_1^+, n_2') \le y^+_{R_1} + y^+_{R_2}$ and, by (4.12), $S_2$ weakly prefers the action pair $(y_{R_1}(n_1^+, n_2'), y_{R_2}(n_1^+, n_2'))$ in state $\theta - \varepsilon$ to the pair $(y^+_{R_1}, y^+_{R_2})$, the fact that $b_{S_1} < b_{S_2}$ implies that $S_1$ also weakly prefers the former action pair. Thus

$$U^{S_1}(y_{R_1}(n_1^+, n_2'), y_{R_2}(n_1^+, n_2'), \theta - \varepsilon, b_{S_1}) \ge U^{S_1}(y^+_{R_1}, y^+_{R_2}, \theta - \varepsilon, b_{S_1}),$$

and hence by (4.11),

$$U^{S_1}(y_{R_1}(n_1^+, n_2'), y_{R_2}(n_1^+, n_2'), \theta - \varepsilon, b_{S_1}) > U^{S_1}(y^-_{R_1}, y^-_{R_2}, \theta - \varepsilon, b_{S_1}).$$

Thus, by sending the signal $n_1^+$ in state $\theta - \varepsilon$, $S_1$ can induce an action pair that he prefers to the equilibrium pair $(y^-_{R_1}, y^-_{R_2})$. This is a contradiction, and thus (4.7) holds.

To establish (4.8), suppose to the contrary that

$$U^{S_{\bar{b}}}(y^-_{R_1}, y^-_{R_2}, \theta, \bar{b}) > U^{S_{\bar{b}}}(y^+_{R_1}, y^+_{R_2}, \theta, \bar{b}), \quad \text{i.e.}\quad U^{S_2}(y^-_{R_1}, y^-_{R_2}, \theta, b_{S_2}) > U^{S_2}(y^+_{R_1}, y^+_{R_2}, \theta, b_{S_2}) \qquad (4.14)$$

Then by continuity,

$$U^{S_2}(y^-_{R_1}, y^-_{R_2}, \theta + \varepsilon, b_{S_2}) > U^{S_2}(y^+_{R_1}, y^+_{R_2}, \theta + \varepsilon, b_{S_2}) \qquad (4.15)$$

Also, since $b_{S_1} < b_{S_2}$, from (4.14),

$$U^{S_1}(y^-_{R_1}, y^-_{R_2}, \theta, b_{S_1}) > U^{S_1}(y^+_{R_1}, y^+_{R_2}, \theta, b_{S_1}) \qquad (4.16)$$

and then by continuity,

$$U^{S_1}(y^-_{R_1}, y^-_{R_2}, \theta + \varepsilon, b_{S_1}) > U^{S_1}(y^+_{R_1}, y^+_{R_2}, \theta + \varepsilon, b_{S_1}) \qquad (4.17)$$

Hence in state $\theta + \varepsilon$ both $S_1$ and $S_2$ prefer $(y^-_{R_1}, y^-_{R_2})$ over $(y^+_{R_1}, y^+_{R_2})$, and so if $S_1$ were to send the message $n_1^-$, $S_2$ would agree and send the signal $n_2^-$, inducing the action pair $(y^-_{R_1}, y^-_{R_2})$. So $(y^+_{R_1}, y^+_{R_2})$ cannot be an equilibrium action pair in state $\theta + \varepsilon$, a contradiction. Thus (4.8) holds.

Case 2: $b_{S_2} < b_{S_1}$. The proof of this case is similar: if either (4.7) or (4.8) failed, $S_1$ would have a profitable deviation. $\square$

We now look for the constraints on $b_{S_1}, b_{S_2}, b_{R_1}$ and $b_{R_2}$ that determine whether the number of equilibrium action pairs is finite. Before that, we need the following useful lemmas. Let $a(N) = (a_0(N), a_1(N), \ldots, a_N(N))$ be an equilibrium partition with $N$ intervals; unless clearly stated otherwise, we write $a$ for $a(N)$ and $a_i$ for $a_i(N)$. We say that a Sender $S_i$, $i = 1, 2$, has a uniform signaling rule over an equilibrium partition if, whenever the true state $\theta \in (a_i, a_{i+1})$, his signal is uniformly distributed over $[a_i, a_{i+1}]$.

Lemma 2.3 We can always use uniform signaling rules for the Senders when the Receivers take a finite number of action pairs.

Proof: Let $a$ be an equilibrium partition and let $(y^*_{R_1}, y^*_{R_2})$ be the action pair induced by the Senders for $\theta \in (a_i, a_{i+1})$. This is well defined because of our assumption that $Y_{R_1}(\cdot)$ is monotonic, so to each interval $(a_i, a_{i+1})$ we can associate a single action pair.

We also know from (4.6) that

$$y^*_{R_1} - b_{R_1} = y^*_{R_2} - b_{R_2} \qquad (4.18)$$

Let $N_{S_1} \times N_{S_2} \equiv \{(n_1, n_2) : (y_{R_1}(n_1, n_2), y_{R_2}(n_1, n_2)) = (y^*_{R_1}, y^*_{R_2})\}$; if $R_i$ ($i = 1, 2$) hears a signal $(n_1, n_2) \in N_{S_1} \times N_{S_2}$ in such an equilibrium, his conditional expected utility from Definition 2.1 is

$$\int_0^1 U^{R_i}(y_{R_i}(n_1, n_2), \theta, b_{R_i})\, p(\theta \mid n_1, n_2)\, d\theta = \int_0^1 U^{R_i}(y_{R_i}(n_1, n_2), \theta, b_{R_i})\, \frac{q_{S_1}(n_1 \mid \theta)\, q_{S_2}(n_2 \mid \theta, n_1)\, f(\theta)}{\int_0^1 q_{S_1}(n_1 \mid t)\, q_{S_2}(n_2 \mid t, n_1)\, f(t)\, dt}\, d\theta$$

Since $n_1$ and $n_2$ are signals for $\theta \in (a_i, a_{i+1})$, $q_{S_1}(n_1 \mid \theta) = 0$ and $q_{S_2}(n_2 \mid \theta, n_1) = 0$ for $\theta \notin (a_i, a_{i+1})$. So the conditional expected utility is proportional to

$$\int_{a_i}^{a_{i+1}} U^{R_i}(y_{R_i}(n_1, n_2), \theta, b_{R_i})\, q_{S_1}(n_1 \mid \theta)\, q_{S_2}(n_2 \mid \theta, n_1)\, f(\theta)\, d\theta$$

Since $y^*_{R_i}$ is a best response of $R_i$ to any signal $(n_1, n_2) \in N_{S_1} \times N_{S_2}$, it must also maximize

$$\int_{N_{S_1}} \int_{N_{S_2}} \int_{a_i}^{a_{i+1}} U^{R_i}(y_{R_i}(n_1, n_2), \theta, b_{R_i})\, f(\theta)\, q_{S_1}(n_1 \mid \theta)\, q_{S_2}(n_2 \mid \theta, n_1)\, d\theta\, dn_2\, dn_1 \;\propto\; \int_{a_i}^{a_{i+1}} U^{R_i}(y_{R_i}(n_1, n_2), \theta, b_{R_i})\, f(\theta)\, d\theta \qquad (4.19)$$

where the identity follows because $(y_{R_1}(n_1, n_2), y_{R_2}(n_1, n_2))$ is constant over the range of integration $N_{S_1} \times N_{S_2}$ and the terms $q_{S_1}(n_1 \mid \theta)$ and $q_{S_2}(n_2 \mid \theta, n_1)$ integrate to unity by Definition 2.1. It follows that all equilibrium signaling rules are essentially equivalent to uniform signaling rules. $\square$

Define

$$y^*_{R_i}(\theta, b_{R_i}) \equiv \arg\max_y U^{R_i}(y, \theta, b_{R_i}), \quad i = 1, 2 \qquad (4.20)$$

Lemma 2.4 If there is an infinite number of actions in an equilibrium, then there exists an interval $(a, b) \subseteq [0, 1]$ on which the Senders send different signals for each $\theta \in (a, b)$; further, for each $\theta \in (a, b)$, the equilibrium action is $y^*_{R_i}(\theta, b_{R_i})$, $i = 1, 2$. Also, if there is an interval $(c, d) \subseteq [0, 1]$ on which the actions induced by the Senders for each Receiver are the same, then the Senders follow a uniform signaling rule on $(c, d)$.

Proof: Assuming that each signal induces only one action by each Receiver (because the Receivers are rational), the first part of the claim is obvious: otherwise there would not be an infinite number of actions. For the second part, because each $\theta \in (a, b)$ induces its own single action and $y^*_{R_i}(\theta, b_{R_i})$ maximizes the Receiver's utility in state $\theta$, it must be the equilibrium action by (II) of Definition 2.1. This means that in an equilibrium with an infinite number of actions the Senders tell the truth on some interval, and the Receivers believe them and take the actions consistent with their beliefs. For the third part, the proof is the same as that of the previous lemma. $\square$

Let $a(N) = (a_0(N) = 0, \ldots, a_N(N) = 1)$ be an equilibrium partition of the state space $[0, 1]$ with $N$ intervals. As usual, we write $a = a(N)$ and $a_i = a_i(N)$, $i = 0, \ldots, N$. Clearly $a_i$, $i = 1, \ldots, N-1$, are the points of discontinuity of the equilibrium action rule, as different intervals induce different actions; this is the general notion of an equilibrium partition. We also assume that the prior distribution $F(\theta)$ is uniform, so $f(\theta) = 1$. Denote by $y_{R_1}(a_i, a_{i+1})$, $i = 0, \ldots, N-1$, the equilibrium action of $R_1$ on the interval $(a_i, a_{i+1})$. By the previous lemmas we know that if $\theta \in (a_i, a_{i+1})$, the Senders follow uniform signaling rules, i.e. $q_{S_1}(n_1 \mid \theta)$ and $q_{S_2}(n_2 \mid \theta, n_1)$ are uniform over $(a_i, a_{i+1})$.

Since the $a_i$, $i = 1, \ldots, N-1$, are points of discontinuity of the Receivers' equilibrium action rules, from (4.7) we have

$$U^{S_{\underline{b}}}(y_{R_1}(a_{i-1}, a_i), y_{R_2}(a_{i-1}, a_i), a_i, \underline{b}) \ge U^{S_{\underline{b}}}(y_{R_1}(a_i, a_{i+1}), y_{R_2}(a_i, a_{i+1}), a_i, \underline{b}) \qquad (4.21)$$

and from (4.8),

$$U^{S_{\bar{b}}}(y_{R_1}(a_{i-1}, a_i), y_{R_2}(a_{i-1}, a_i), a_i, \bar{b}) \le U^{S_{\bar{b}}}(y_{R_1}(a_i, a_{i+1}), y_{R_2}(a_i, a_{i+1}), a_i, \bar{b}) \qquad (4.22)$$

If the true state $\theta \in (a_i, a_{i+1})$, then from (II) of Definition 2.1, using the uniform signaling rules of the Senders, we have for $\theta \in (a_i, a_{i+1})$

$$p(\theta \mid n_1, n_2) \equiv \frac{q_{S_1}(n_1 \mid \theta)\, q_{S_2}(n_2 \mid \theta, n_1)\, f(\theta)}{\int_0^1 q_{S_1}(n_1 \mid t)\, q_{S_2}(n_2 \mid t, n_1)\, f(t)\, dt} = \frac{f(\theta)}{\int_{a_i}^{a_{i+1}} f(t)\, dt} \qquad (4.23)$$

and $p(\theta \mid n_1, n_2) = 0$ for $\theta \notin (a_i, a_{i+1})$. Since $f(\theta) = 1$ under the uniform distribution of the state space, $\int_{a_i}^{a_{i+1}} f(t)\, dt = a_{i+1} - a_i$. So from (II) of Definition 2.1 we have

$$y_{R_1}(a_i, a_{i+1}) = \arg\max_y \int_0^1 U^{R_1}(y, \theta, b_{R_1})\, p(\theta \mid n_1, n_2)\, d\theta = \arg\max_y \int_{a_i}^{a_{i+1}} U^{R_1}(y, \theta, b_{R_1})\, f(\theta)\, d\theta \qquad (4.24)$$

Considering the quadratic utility functions specified in (4.3) for $R_1$ and $R_2$, the same calculations as in Chapter 1 give

$$y_{R_1}(a_i, a_{i+1}) = \frac{a_i + a_{i+1}}{2} + b_{R_1} \qquad (4.25)$$

$$y_{R_2}(a_i, a_{i+1}) = y_{R_1}(a_i, a_{i+1}) - b_{R_1} + b_{R_2} = \frac{a_i + a_{i+1}}{2} + b_{R_2} \qquad (4.26)$$

Using the quadratic utility functions specified for the Senders in (4.2), (4.21) becomes

$$-(y_{R_1}(a_{i-1}, a_i) + y_{R_2}(a_{i-1}, a_i) - 2(a_i + \underline{b}))^2 \ge -(y_{R_1}(a_i, a_{i+1}) + y_{R_2}(a_i, a_{i+1}) - 2(a_i + \underline{b}))^2 \qquad (4.27)$$

Given the monotonicity of $a$, the above inequality holds if and only if $2(a_i + \underline{b})$ is weakly closer to $y_{R_1}(a_{i-1}, a_i) + y_{R_2}(a_{i-1}, a_i)$ than to $y_{R_1}(a_i, a_{i+1}) + y_{R_2}(a_i, a_{i+1})$:

$$2(a_i + \underline{b}) - (y_{R_1}(a_{i-1}, a_i) + y_{R_2}(a_{i-1}, a_i)) \le y_{R_1}(a_i, a_{i+1}) + y_{R_2}(a_i, a_{i+1}) - 2(a_i + \underline{b})$$
$$\Leftrightarrow\ 4(a_i + \underline{b}) \le y_{R_1}(a_{i-1}, a_i) + y_{R_2}(a_{i-1}, a_i) + y_{R_1}(a_i, a_{i+1}) + y_{R_2}(a_i, a_{i+1})$$
$$\Leftrightarrow\ 2 a_i + 4 \underline{b} \le a_{i-1} + a_{i+1} + 2 b_{R_1} + 2 b_{R_2} \quad \text{(from (4.25) and (4.26))}$$
$$\Leftrightarrow\ a_i \le \frac{a_{i-1} + a_{i+1}}{2} + b_{R_1} + b_{R_2} - 2 \underline{b} \qquad (4.28)$$

Similar calculations from (4.22) give

$$a_i \ge \frac{a_{i-1} + a_{i+1}}{2} + b_{R_1} + b_{R_2} - 2 \bar{b} \qquad (4.29)$$

Combining, we have

$$\frac{a_{i-1} + a_{i+1}}{2} + b_{R_1} + b_{R_2} - 2 \bar{b} \ \le\ a_i \ \le\ \frac{a_{i-1} + a_{i+1}}{2} + b_{R_1} + b_{R_2} - 2 \underline{b} \qquad (4.30)$$

From (4.28) and $a_0 = 0$ we obtain, by induction,

$$a_i \le \frac{i}{i+1}\, a_{i+1} + i\,(b_{R_1} + b_{R_2} - 2 \underline{b}) \quad \text{for } i = 1, \ldots, N-1 \qquad (4.31)$$

From (4.29) and $a_0 = 0$ we obtain

$$a_i \ge \frac{i}{i+1}\, a_{i+1} + i\,(b_{R_1} + b_{R_2} - 2 \bar{b}) \quad \text{for } i = 1, \ldots, N-1 \qquad (4.32)$$

From (4.31) and $a_N = 1$,

$$a_{N-1} \le \frac{N-1}{N}\, a_N + (N-1)(b_{R_1} + b_{R_2} - 2 \underline{b}) = \frac{N-1}{N} + (N-1)(b_{R_1} + b_{R_2} - 2 \underline{b})$$
$$a_{N-2} \le \frac{N-2}{N-1}\, a_{N-1} + (N-2)(b_{R_1} + b_{R_2} - 2 \underline{b}) \le \frac{N-2}{N} + 2(N-2)(b_{R_1} + b_{R_2} - 2 \underline{b})$$
$$a_{N-3} \le \frac{N-3}{N-2}\, a_{N-2} + (N-3)(b_{R_1} + b_{R_2} - 2 \underline{b}) \le \frac{N-3}{N} + 3(N-3)(b_{R_1} + b_{R_2} - 2 \underline{b})$$

$$\vdots$$
$$a_1 \le \frac{N-(N-1)}{N} + (N-1)(N-(N-1))(b_{R_1} + b_{R_2} - 2 \underline{b}) = \frac{1}{N} + (N-1)(b_{R_1} + b_{R_2} - 2 \underline{b})$$

Since $a_1 > 0$, we have the constraint

$$\frac{1}{N} + (N-1)(b_{R_1} + b_{R_2} - 2 \underline{b}) > 0 \qquad (4.33)$$

(Using the above equations one can show that, for a given $N$, if $a_1 > 0$ then $a_i > 0$ for all $i = 1, \ldots, N$.) Let the maximum value of $N$ satisfying inequality (4.33) be $N_1$. Similarly, (4.32) and $a_N = 1$ give

$$a_{N-1} \ge \frac{N-1}{N}\, a_N + (N-1)(b_{R_1} + b_{R_2} - 2 \bar{b}) = \frac{N-1}{N} + (N-1)(b_{R_1} + b_{R_2} - 2 \bar{b})$$

Since $a_{N-1} < 1$, we have

$$\frac{N-1}{N} + (N-1)(b_{R_1} + b_{R_2} - 2 \bar{b}) < 1 \qquad (4.34)$$

(Again, for a given $N$, if $a_{N-1} < 1$ then $a_i < 1$ for all $i = 0, \ldots, N-1$.) Let the maximum value of $N$ satisfying inequality (4.34) be $N_2$. Any equilibrium with $N$ intervals must satisfy both (4.33) and (4.34); we denote the maximum value of $N$ satisfying both by $N(b_{S_1}, b_{S_2}, b_{R_1}, b_{R_2})$.
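Constraints (4.33) and (4.34) can be checked mechanically, mirroring the sketch given in Chapter 3; the helper below is our own illustration, evaluated at the biases of the worked example later in this chapter.

```python
# Sketch: maximum partition size N(b_S1, b_S2, b_R1, b_R2) from (4.33)-(4.34).
def max_partitions_2x2(b_S1, b_S2, b_R1, b_R2, n_cap=10_000):
    b_lo, b_hi = min(b_S1, b_S2), max(b_S1, b_S2)
    c_lo = b_R1 + b_R2 - 2 * b_lo     # appears in (4.33)
    c_hi = b_R1 + b_R2 - 2 * b_hi     # appears in (4.34)
    n1 = max(n for n in range(1, n_cap + 1) if 1 / n + (n - 1) * c_lo > 0)
    n2 = max(n for n in range(1, n_cap + 1) if (n - 1) / n + (n - 1) * c_hi < 1)
    return min(n1, n2)                # n_cap stands in for "infinity"

# Biases of the worked example later in this chapter:
print(max_partitions_2x2(b_S1=1/9, b_S2=1/40, b_R1=0, b_R2=1/60))  # -> 5
```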

We call the equilibrium with $N(b_{S_1}, b_{S_2}, b_{R_1}, b_{R_2})$ intervals, i.e. with the maximum number of intervals, the most informative equilibrium, as it lets the Receivers distinguish among the states most effectively. Clearly, the maximum number of intervals the most informative equilibrium can have is

$$N(b_{S_1}, b_{S_2}, b_{R_1}, b_{R_2}) = \min(N_1, N_2)$$

Now we examine the cases in which $N(b_{S_1}, b_{S_2}, b_{R_1}, b_{R_2}) \to \infty$; in those cases, in the most informative equilibrium, the Senders $S_1, S_2$ tell the truth on some interval $(a, b) \subseteq [0, 1]$, and the Receivers $R_1$ and $R_2$ believe them and take the actions $y^*_{R_1}(\theta, b_{R_1})$ and $y^*_{R_2}(\theta, b_{R_2})$ respectively (Lemma 2.4). From (4.33), $N_1 \to \infty$ if and only if

$$b_{R_1} + b_{R_2} - 2 \underline{b} \ge 0 \qquad (4.35)$$

and from (4.34), $N_2 \to \infty$ if and only if

$$b_{R_1} + b_{R_2} - 2 \bar{b} \le 0 \qquad (4.36)$$

So $N(b_{S_1}, b_{S_2}, b_{R_1}, b_{R_2}) \to \infty$ if and only if (4.35) and (4.36) are satisfied together. We consider the following cases to determine more precisely the structure of biases under which the equilibrium has an infinite number of actions.

Before that, consider the case $b_{S_1} = b_{S_2} = b_{R_1} = b_{R_2}$. Then (4.35) and (4.36) are satisfied, and so there is an equilibrium with an infinite number of action pairs by the Receivers. This is true and should be quite obvious: when all the agents have the same preferences, they all have the same utility function (from (4.1), (4.2), (4.3)), so there is full revelation, i.e. the Senders send truthful signals in every state $\theta$, and the Receivers believe them and take the actions $y^*_{R_i}(\theta, b_{R_i})$, $i = 1, 2$. We do not need to specify any off-equilibrium-path belief, as the Senders do not find it profitable to deviate given that the Receivers believe them. Having dealt with the situation $b_{S_1} = b_{S_2} = b_{R_1} = b_{R_2}$, we exclude it from the cases below. As usual, we write $\underline{b} = \min(b_{S_1}, b_{S_2})$ and $\bar{b} = \max(b_{S_1}, b_{S_2})$, and now also $\underline{\beta} = \min(b_{R_1}, b_{R_2})$ and $\bar{\beta} = \max(b_{R_1}, b_{R_2})$, to analyze the cases given below.

Case 1: $b_{S_1} \le b_{S_2} \le b_{R_1} \le b_{R_2}$ or $b_{S_2} \le b_{S_1} \le b_{R_1} \le b_{R_2}$ or $b_{S_1} \le b_{S_2} \le b_{R_2} \le b_{R_1}$ or $b_{S_2} \le b_{S_1} \le b_{R_2} \le b_{R_1}$. Here (4.35) is always satisfied, and (4.36) can be satisfied iff $\bar{b} = \underline{\beta} = \bar{\beta}$.

Case 2: $b_{R_1} \le b_{R_2} \le b_{S_1} \le b_{S_2}$ or $b_{R_2} \le b_{R_1} \le b_{S_1} \le b_{S_2}$ or $b_{R_1} \le b_{R_2} \le b_{S_2} \le b_{S_1}$ or $b_{R_2} \le b_{R_1} \le b_{S_2} \le b_{S_1}$. Here (4.35) is satisfied iff $\underline{b} = \underline{\beta} = \bar{\beta}$, and (4.36) always holds.

Case 3: $b_{S_1} \le b_{R_1} \le b_{R_2} \le b_{S_2}$ or $b_{S_2} \le b_{R_1} \le b_{R_2} \le b_{S_1}$ or $b_{S_1} \le b_{R_2} \le b_{R_1} \le b_{S_2}$ or $b_{S_2} \le b_{R_2} \le b_{R_1} \le b_{S_1}$. Here (4.35) and (4.36) are always true.

Case 4: $b_{R_1} \le b_{S_1} \le b_{S_2} \le b_{R_2}$ or $b_{R_2} \le b_{S_1} \le b_{S_2} \le b_{R_1}$ or $b_{R_1} \le b_{S_2} \le b_{S_1} \le b_{R_2}$ or $b_{R_2} \le b_{S_2} \le b_{S_1} \le b_{R_1}$. Here (4.35) is satisfied iff $\underline{b} \le (\underline{\beta} + \bar{\beta})/2$, and (4.36) is satisfied iff $\bar{b} \ge (\underline{\beta} + \bar{\beta})/2$.

Case 5: $b_{S_1} \le b_{R_1} \le b_{S_2} \le b_{R_2}$ or $b_{S_2} \le b_{R_1} \le b_{S_1} \le b_{R_2}$ or $b_{S_1} \le b_{R_2} \le b_{S_2} \le b_{R_1}$ or $b_{S_2} \le b_{R_2} \le b_{S_1} \le b_{R_1}$. Here (4.35) is always true, and (4.36) holds iff $\bar{b} \ge (\underline{\beta} + \bar{\beta})/2$.

Case 6: $b_{R_1} \le b_{S_1} \le b_{R_2} \le b_{S_2}$ or $b_{R_2} \le b_{S_1} \le b_{R_1} \le b_{S_2}$ or $b_{R_1} \le b_{S_2} \le b_{R_2} \le b_{S_1}$ or $b_{R_2} \le b_{S_2} \le b_{R_1} \le b_{S_1}$. Here (4.35) holds iff $\underline{b} \le (\underline{\beta} + \bar{\beta})/2$, and (4.36) is always satisfied.

These six cases give the conditions under which there can only be a finite number of equilibrium actions; a small computational check of conditions (4.35) and (4.36) follows below.
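The classification reduces to two inequalities, so it can be sketched in a few lines; the bias profiles below are hypothetical examples of Case 1 and Case 3.

```python
# Sketch: classify a bias profile by conditions (4.35)-(4.36). Returns True
# when an equilibrium with infinitely many action pairs (truthful revelation
# on part of the state space) can exist.
def infinite_actions_possible(b_S1, b_S2, b_R1, b_R2):
    b_lo, b_hi = min(b_S1, b_S2), max(b_S1, b_S2)
    cond_435 = b_R1 + b_R2 - 2 * b_lo >= 0   # (4.35): N_1 unbounded
    cond_436 = b_R1 + b_R2 - 2 * b_hi <= 0   # (4.36): N_2 unbounded
    return cond_435 and cond_436

# Illustrative profiles (Case 3 straddles the Receivers, so it passes):
print(infinite_actions_possible(0.1, 0.2, 0.3, 0.4))    # Case 1 -> False
print(infinite_actions_possible(-0.1, 0.2, 0.0, 0.05))  # Case 3 -> True
```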

Now we state the cases in which there is full revelation. Full revelation occurs if $b_{S_1} = b_{S_2} = b_{R_1} = b_{R_2}$, or $b_{S_1} = b_{R_1} = b_{R_2}$, or $b_{S_2} = b_{R_1} = b_{R_2}$. This is because when the preferences of both Receivers match those of one of the Senders, the utility function of that Sender and of both Receivers is the same, so that Sender finds it optimal to tell the truth to both Receivers, while the Receivers ignore the other Sender if his preference is different.

A more abstract proof for the above six conditions can be given following the proofs in the previous chapters. Case 1 and Case 2 resemble the case of like biases in the two-Sender, one-Receiver model. Case 3 resembles the case of opposing biases in the two-Sender, one-Receiver model. Case 4 resembles the one-Sender, two-Receiver model. Case 5 and Case 6 resemble a mixture of the one-Sender, two-Receiver and two-Sender, one-Receiver models. These correspondences show the importance of the six cases and demonstrate that we can recover all of our previous models (one Sender and one Receiver, one Sender and two Receivers, two Senders and one Receiver) from them. For the model of one Sender and one Receiver we consider Cases 1 and 2 with $b_{S_1} = b_{S_2}$ and $b_{R_1} = b_{R_2}$. For one Sender and two Receivers we consider Cases 1, 2 and 4 with $b_{S_1} = b_{S_2}$. For two Senders and one Receiver we consider Cases 1, 2 and 3 with $b_{R_1} = b_{R_2}$.

Consider equations (4.31) and (4.32). The following observations confirm that this model is general:

(1) If we take $b_{S_1} = b_{S_2}$, then both (4.31) and (4.32) hold with equality, and the equation becomes the one for one Sender and two Receivers.

(2) If we take $b_{R_1} = b_{R_2}$, then the two equations become the ones satisfied in the two-Sender, one-Receiver case.

(3) If we take $b_{S_1} = b_{S_2}$ and $b_{R_1} = b_{R_2}$, then both equations reduce to the equation for the one-Sender, one-Receiver case.

So we are able to analyze all our previous models from this model, which

proves the validity and generality of this model.

Finally, we can state the equilibrium strategies both for a PBE with a finite action set and for a PBE with an infinite action set. For a PBE with a finite number of intervals $N$, denoting the equilibrium partition $a = (a_0, \ldots, a_N)$, the equilibrium has the following structure:

(A1) $a_0 = 0$.

(A2) $a_N = 1$.

(A3) The signal $n_1$ of $S_1$ is uniformly distributed on $[a_i, a_{i+1}]$ if $\theta \in (a_i, a_{i+1})$, and likewise the signal $n_2$ of $S_2$ is uniformly distributed on $[a_i, a_{i+1}]$ if $\theta \in (a_i, a_{i+1})$.

(A4) $(y_{R_1}(n_1, n_2), y_{R_2}(n_1, n_2)) = (y_{R_1}(a_i, a_{i+1}), y_{R_2}(a_i, a_{i+1}))$ for all $n_1, n_2 \in (a_i, a_{i+1})$.

(A5) $U^{S_{\underline{b}}}(y_{R_1}(a_{i-1}, a_i), y_{R_2}(a_{i-1}, a_i), a_i, \underline{b}) \ge U^{S_{\underline{b}}}(y_{R_1}(a_i, a_{i+1}), y_{R_2}(a_i, a_{i+1}), a_i, \underline{b})$

(A5$'$) $U^{S_{\bar{b}}}(y_{R_1}(a_{i-1}, a_i), y_{R_2}(a_{i-1}, a_i), a_i, \bar{b}) \le U^{S_{\bar{b}}}(y_{R_1}(a_i, a_{i+1}), y_{R_2}(a_i, a_{i+1}), a_i, \bar{b})$

for $i = 1, \ldots, N-1$, where $\underline{b} = \min\{b_{S_1}, b_{S_2}\}$, $\bar{b} = \max\{b_{S_1}, b_{S_2}\}$, and $S_{\underline{b}}$, $S_{\bar{b}}$ are the Senders with the minimum and maximum bias respectively.

(A6) The off-equilibrium beliefs of $R_1$ and $R_2$ are as follows. If $2\bar{b} > b_{R_1} + b_{R_2}$ and the signal of $S_{\bar{b}}$ does not exceed the signal of $S_{\underline{b}}$ by $2(2\bar{b} - b_{R_1} - b_{R_2})$ or more, then $R_1$ and $R_2$ believe $S_{\underline{b}}$; otherwise they believe $S_{\bar{b}}$'s message. Similarly, if $2\underline{b} < b_{R_1} + b_{R_2}$ and the signal of $S_{\underline{b}}$ does not fall below the signal of $S_{\bar{b}}$ by $2(b_{R_1} + b_{R_2} - 2\underline{b})$ or more, then $R_1$ and $R_2$ believe $S_{\bar{b}}$; otherwise they believe $S_{\underline{b}}$'s message.

(A1) and (A2) come from the fact that our partition is a partition of the state space $[0, 1]$. The proof of (A3) is given in Lemma 2.3.

The proof of (A4) is given in equations (4.23) and (4.24), where we use (II) of Definition 2.1. Proofs of (A5) and (A5$'$) are given in Lemma 2.2. To establish the off-equilibrium beliefs in (A6), we provide the necessary mathematical intuition without going into a rigorous proof. First consider the case $2\bar{b} > b_{R_1} + b_{R_2}$. We know by monotonicity that $S_{\underline{b}}$ always weakly prefers the lower action pair to the higher one at a point of discontinuity of the equilibrium action space, and similarly $S_{\bar{b}}$ always weakly prefers the higher action pair to the lower one. Let $(y^-_{R_1}, y^-_{R_2})$ be the induced action pair for $(a_{i-1}, a_i)$ and $(y^+_{R_1}, y^+_{R_2})$ the induced pair for $(a_i, a_{i+1})$. At $a_i$, $S_{\underline{b}}$ weakly prefers $(y^-_{R_1}, y^-_{R_2})$ over $(y^+_{R_1}, y^+_{R_2})$, and $S_{\bar{b}}$ weakly prefers $(y^+_{R_1}, y^+_{R_2})$ over $(y^-_{R_1}, y^-_{R_2})$. In equilibrium, let $S_{\underline{b}}$ and $S_{\bar{b}}$ send messages $n_1^-$ and $n_2^-$ respectively to induce $(y^-_{R_1}, y^-_{R_2})$ when the true state $\theta \in (a_{i-1}, a_i)$; $n_1^-$ and $n_2^-$ are uniformly distributed over $(a_{i-1}, a_i)$. Similarly, let $S_{\underline{b}}$ and $S_{\bar{b}}$ send messages $n_1^+$ and $n_2^+$ respectively to induce $(y^+_{R_1}, y^+_{R_2})$ when $\theta \in (a_i, a_{i+1})$; $n_1^+$ and $n_2^+$ are uniformly distributed over $(a_i, a_{i+1})$.

Let the true state $\theta$ lie in $(a_{i-1}, a_i)$, close to $a_i$. If $S_{\bar{b}}$ tries to deviate to the higher action pair $(y^+_{R_1}, y^+_{R_2})$, he has to send a message greater than or equal to $n_1^- + 2(2\bar{b} - b_{R_1} - b_{R_2})$. But then $R_1$ and $R_2$ believe his message and take the actions $y^*_{R_1}(n_1^- + 2(2\bar{b} - b_{R_1} - b_{R_2}), b_{R_1})$ and $y^*_{R_2}(n_1^- + 2(2\bar{b} - b_{R_1} - b_{R_2}), b_{R_2})$ respectively, which is not profitable for $S_{\bar{b}}$. This can be seen by drawing the parabola representing $S_{\bar{b}}$'s utility function and noting that $n_1^-$ lies in $(a_{i-1}, a_i)$. Similarly, if the true state $\theta$ lies in $(a_i, a_{i+1})$ close to $a_i$ and $S_{\underline{b}}$ tries to deviate to the lower action pair $(y^-_{R_1}, y^-_{R_2})$ by sending the message $n_1^-$, then $S_{\bar{b}}$ can send a message greater than or equal to $n_1^- + 2(2\bar{b} - b_{R_1} - b_{R_2})$ and thereby obtain a profitable deviation. So the given off-equilibrium beliefs support the equilibrium strategies. The case $2\underline{b} < b_{R_1} + b_{R_2}$ can be analyzed in a similar way.

Now we give the characterization of the equilibria with infinite action sets. If there is a fully revealing equilibrium, then for each true state $\theta \in [0, 1]$ the message of $S_1$ is $n_1 = \theta$, the message of $S_2$ is $n_2 = \theta$, the action of $R_1$ is $y^*_{R_1}(\theta, b_{R_1})$ and the action of $R_2$ is $y^*_{R_2}(\theta, b_{R_2})$.

A semi-revealing equilibrium is an equilibrium in which not all states are fully revealed: only on some set of states do the Senders send truthful signals, with the Receivers believing them and taking their optimal actions at the reported state (Lemma 2.4). We now present the structure of the semi-revealing equilibrium; as usual, $\bar{b} = \max(b_{S_1}, b_{S_2})$ and $\underline{b} = \min(b_{S_1}, b_{S_2})$.

(a1) Both Senders tell the truth, $S_{\underline{b}}$ sending the message $n_1 = \theta$ and $S_{\bar{b}}$ the message $n_2 = \theta$, if $\theta \in [0,\ 1 - 2(2\bar{b} - b_{R_1} - b_{R_2})]$. $R_1$ and $R_2$ believe them and take the actions $y^*_{R_1}(\theta, b_{R_1})$ and $y^*_{R_2}(\theta, b_{R_2})$ respectively.

(a2) For $\theta \in (1 - 2(2\bar{b} - b_{R_1} - b_{R_2}),\ 1]$, both Senders send messages uniformly distributed over $[1 - 2(2\bar{b} - b_{R_1} - b_{R_2}),\ 1]$, and hence $R_1$ takes the action $(1 + 1 - 2(2\bar{b} - b_{R_1} - b_{R_2}))/2 + b_{R_1} = 1 - 2\bar{b} + 2 b_{R_1} + b_{R_2}$.

(a3) If the message of $S_{\underline{b}}$ is $n_1$ and the message of $S_{\bar{b}}$ is $n_2$, then $R_1$ and $R_2$ believe $S_{\underline{b}}$ as long as $n_2 < n_1 + 2(2\bar{b} - b_{R_1} - b_{R_2})$; for $n_2 \ge n_1 + 2(2\bar{b} - b_{R_1} - b_{R_2})$, $R_1$ and $R_2$ believe $S_{\bar{b}}$.

To prove this, observe that, given the beliefs in (a3), if the true state $\theta > 1 - 2(2\bar{b} - b_{R_1} - b_{R_2})$ there is no rationalizable action $z$ of Receiver $R_i$ satisfying $z \le y^*_{R_i}(1, b_{R_i})$ (the highest action possible) such that, if $S_{\underline{b}}$ were to deviate to a slightly lower state, $S_{\bar{b}}$ could not send a message $2(2\bar{b} - b_{R_1} - b_{R_2})$ above the message of $S_{\underline{b}}$ and obtain a profitable deviation. So the equilibrium strategy of $S_{\underline{b}}$ is to send a message uniformly distributed over $[1 - 2(2\bar{b} - b_{R_1} - b_{R_2}),\ 1]$; $S_{\bar{b}}$ agrees with him, and $R_1$ takes the action $1 - 2\bar{b} + 2 b_{R_1} + b_{R_2}$ while $R_2$ takes the action $1 - 2\bar{b} + b_{R_1} + 2 b_{R_2}$. This completes the proof of (a2). The proof of (a1) follows from Lemma 2.4 together with the proof of (A6) for like biases. In the proofs of (a1) and (a2) we see that the off-equilibrium beliefs of the Receivers given in (a3) sustain the equilibrium, and hence we have (a3).

3 Example

First we derive the ex ante expected utilities of the players, before looking for the best equilibrium.

We know from (4.25) and (4.26) that

$$y_{R_1}(a_i, a_{i+1}) = \frac{a_i + a_{i+1}}{2} + b_{R_1}, \qquad y_{R_2}(a_i, a_{i+1}) = y_{R_1}(a_i, a_{i+1}) - b_{R_1} + b_{R_2}$$

The ex ante expected utility of $R_j$, $j = 1, 2$, is

$$EU^{R_j} = -\sum_{i=0}^{N-1} \int_{a_i}^{a_{i+1}} (y_{R_j}(a_i, a_{i+1}) - (\theta + b_{R_j}))^2\, d\theta = -\sum_{i=0}^{N-1} \int_{a_i}^{a_{i+1}} \left( \frac{a_i + a_{i+1}}{2} + b_{R_j} - (\theta + b_{R_j}) \right)^2 d\theta = -\frac{1}{12} \sum_{i=0}^{N-1} (a_{i+1} - a_i)^3 \qquad (4.37)$$

So each Receiver gets the same ex ante utility, irrespective of his bias. The ex ante expected utility of $S_j$, $j = 1, 2$, is

$$EU^{S_j} = -\sum_{i=0}^{N-1} \int_{a_i}^{a_{i+1}} \frac{1}{4} (y_{R_1}(a_i, a_{i+1}) + y_{R_2}(a_i, a_{i+1}) - 2(\theta + b_{S_j}))^2\, d\theta = -\sum_{i=0}^{N-1} \int_{a_i}^{a_{i+1}} \frac{1}{4} (a_i + a_{i+1} + b_{R_1} + b_{R_2} - 2(\theta + b_{S_j}))^2\, d\theta$$
$$= -\frac{1}{12} \sum_{i=0}^{N-1} (a_{i+1} - a_i)^3 - \frac{1}{4} (b_{R_1} + b_{R_2} - 2 b_{S_j})^2 = EU^{R_j} - \frac{1}{4} (b_{R_1} + b_{R_2} - 2 b_{S_j})^2 \qquad (4.38)$$

Let $b_{S_2} = 1/40$, $b_{S_1} = 1/9$, $b_{R_1} = 0$, $b_{R_2} = 1/60$. Then $\underline{b} = 1/40$, $\bar{b} = 1/9$, $\underline{\beta} = 0$, $\bar{\beta} = 1/60$. We are in Case 2: (4.35) cannot be satisfied, but (4.36) always holds, so there is a finite number of equilibrium actions. If we want to make $S_{\underline{b}}$ indifferent between the lower and higher action at a break point $a_i$, $i = 1, \ldots, N-1$, for a given partition size $N$, we use

equation (4.31) with equality, while to make $S_{\bar{b}}$ indifferent at a break point $a_i$ we use equation (4.32) with equality. Here $b_{R_1} + b_{R_2} - 2\underline{b} = -1/30$ and $b_{R_1} + b_{R_2} - 2\bar{b} = -37/180$.

The maximum number of break points at which $S_2$ can be indifferent is calculated from (4.31) with equality: following the calculations of the one-Sender, two-Receiver case, we need $N(N-1)(2 b_{S_2} - b_{R_1} - b_{R_2}) < 1$, and the maximum $N$ satisfying this is 5. The maximum number of points at which $S_1$ can be indifferent is calculated from (4.32) with equality: we need $N(N-1)(2 b_{S_1} - b_{R_1} - b_{R_2}) < 1$, and the maximum $N$ satisfying this is 2. From (4.33), $\frac{1}{N} + (N-1)(b_{R_1} + b_{R_2} - 2\underline{b}) > 0$, and the maximum $N$ satisfying this is $N_1 = 5$. From (4.34), $\frac{N-1}{N} + (N-1)(b_{R_1} + b_{R_2} - 2\bar{b}) < 1$, which is satisfied for all $N \ge 1$, so $N_2 = \infty$. Hence $N(b_{S_1}, b_{S_2}, b_{R_1}, b_{R_2}) = \min(N_1, N_2) = 5$.

So for $N = 5$, if we make $S_1$'s constraint (4.32) bind at $a_4$ and $S_2$ indifferent at $a_1$, $a_2$ and $a_3$ (equation (4.31) with equality at these break points), we have $a_0 = 0$, $a_1 = 10/360$, $a_2 = 44/360$, $a_3 = 102/360$, $a_4 = 184/360$, $a_5 = 1$. But we can still improve the utility by considering the equilibrium in which $S_2$ is indifferent at all break points $a_i$, $i = 1, \ldots, N-1$. Using (4.31) with equality at all break points, we get $a_0 = 0$, $a_1 = 4/60$, $a_2 = 12/60$, $a_3 = 24/60$, $a_4 = 40/60$, $a_5 = 1$. All break points of the latter equilibrium are greater than those of the former, which means the expected utility of every player is higher in the latter equilibrium. We verify this by computation below.
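A minimal sketch of that computation, plugging the two partitions into (4.37) and (4.38) and checking $S_2$'s indifference conditions for the latter partition:

```python
import numpy as np

# Sketch verifying the two N = 5 equilibria of the example
# (b_S1 = 1/9, b_S2 = 1/40, b_R1 = 0, b_R2 = 1/60, uniform prior).
b_S1, b_S2, b_R1, b_R2 = 1/9, 1/40, 0.0, 1/60

def eu_receiver(a):
    return -np.sum(np.diff(a) ** 3) / 12                      # (4.37)

def eu_sender(a, b_S):
    return eu_receiver(a) - (b_R1 + b_R2 - 2 * b_S) ** 2 / 4  # (4.38)

former = np.array([0, 10, 44, 102, 184, 360]) / 360
latter = np.array([0, 4, 12, 24, 40, 60]) / 60

# Check S_2's local indifference a_i = (a_{i-1}+a_{i+1})/2 + b_R1+b_R2-2*b_S2
# at every interior break point of the latter partition:
c = b_R1 + b_R2 - 2 * b_S2
print(np.allclose(latter[1:-1], (latter[:-2] + latter[2:]) / 2 + c))  # True

for name, a in (("former", former), ("latter", latter)):
    print(name, round(eu_receiver(a), 4),
          round(eu_sender(a, b_S1), 4), round(eu_sender(a, b_S2), 4))
```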

For the former equilibrium, with $S_1$ indifferent at $a_4$ and $S_2$ indifferent at $a_1$, $a_2$ and $a_3$, we get $EU_{R_1} = EU_{R_2} = -0.0111$, $EU_{S_1} = -0.0217$, $EU_{S_2} = -0.0114$. For the latter equilibrium, with $S_2$ indifferent at all break points, we get $EU_{R_1} = EU_{R_2} = -0.0055$, $EU_{S_1} = -0.0161$, $EU_{S_2} = -0.0058$. Clearly the latter equilibrium is better than the former. Moreover, by equation (4.31) the break points cannot be shifted any further to the right, i.e., the equilibrium break points cannot be made any larger, so the latter equilibrium is the most informative equilibrium in this case.

Now we analyze the welfare questions: is it better for the Senders ($b_{S_1} = 1/9$, $b_{S_2} = 1/40$) to advise both Receivers ($b_{R_1} = 0$, $b_{R_2} = 1/60$) or not, and is it better for the Receivers to consult both experts or not? To answer this, we first list the expected utilities of the Senders and the Receivers in all cases; a numerical check of the Receivers' utilities appears after the case list.

1. If $S_1$ communicates to $R_1$ in private, then from the calculations of Chapter 1 we have $N(b_{S_1}, b_{R_1}) = 2$; $a_0(2) = 0$, $a_1(2) = 5/18$, $a_2(2) = 1$; $EU_{R_1} = -0.0332$ and $EU_{S_1} = -0.0455$.

2. If $S_1$ communicates to $R_2$ in private, then we have $N(b_{S_1}, b_{R_2}) = 2$; $a_0(2) = 0$, $a_1(2) = 14/45$, $a_2(2) = 1$; $EU_{R_2} = -0.0298$ and $EU_{S_1} = -0.0387$.

3. If $S_1$ communicates to both $R_1$ and $R_2$ in public, then the calculations of Chapter 2 give $N(b_{S_1}, b_{R_1}, b_{R_2}) = 2$; $a_0(2) = 0$, $a_1(2) = 53/180$, $a_2(2) = 1$; $EU_{R_1} = EU_{R_2} = -0.0314$ and $EU_{S_1} = -0.0420$.

4. If $S_2$ communicates to $R_1$ in private, then we have $N(b_{S_2}, b_{R_1}) = 4$; $a_0(4) = 0$, $a_1(4) = 1/10$, $a_2(4) = 3/10$, $a_3(4) = 6/10$, $a_4(4) = 1$;

$EU_{R_1} = -0.0083$ and $EU_{S_2} = -0.0089$.

5. If $S_2$ communicates to $R_2$ in private, then we have $N(b_{S_2}, b_{R_2}) = 8$; $a_0(8) = 0$, $a_1(8) = 1/120$, $a_2(8) = 6/120$, $a_3(8) = 15/120$, $a_4(8) = 28/120$, $a_5(8) = 45/120$, $a_6(8) = 66/120$, $a_7(8) = 91/120$, $a_8(8) = 1$; $EU_{R_2} = -0.0028$ and $EU_{S_2} = -0.0034$.

6. If $S_2$ communicates to both $R_1$ and $R_2$ in public, then we have $N(b_{S_2}, b_{R_1}, b_{R_2}) = 5$; $a_0(5) = 0$, $a_1(5) = 4/60$, $a_2(5) = 12/60$, $a_3(5) = 24/60$, $a_4(5) = 40/60$ and $a_5(5) = 1$; $EU_{R_1} = EU_{R_2} = -0.0056$ and $EU_{S_2} = -0.0059$.

7. If $S_1$ and $S_2$ communicate to $R_1$ in public, then the calculations of Chapter 3 give $N(b_{S_1}, b_{S_2}, b_{R_1}) = 4$; $a_0(4) = 0$, $a_1(4) = 1/10$, $a_2(4) = 3/10$, $a_3(4) = 6/10$, $a_4(4) = 1$; $EU_{R_1} = -0.0083$, $EU_{S_1} = -0.0206$ and $EU_{S_2} = -0.0089$.

8. If $S_1$ and $S_2$ communicate to $R_2$ in public, then we have $N(b_{S_1}, b_{S_2}, b_{R_2}) = 8$; $a_0(8) = 0$, $a_1(8) = 1/120$, $a_2(8) = 6/120$, $a_3(8) = 15/120$, $a_4(8) = 28/120$, $a_5(8) = 45/120$, $a_6(8) = 66/120$, $a_7(8) = 91/120$, $a_8(8) = 1$; $EU_{R_2} = -0.0028$, $EU_{S_1} = -0.0151$ and $EU_{S_2} = -0.0034$.

9. If $S_1$ and $S_2$ communicate to both $R_1$ and $R_2$ in public, then the calculations in this chapter give $N(b_{S_1}, b_{S_2}, b_{R_1}, b_{R_2}) = 5$; $a_0(5) = 0$, $a_1(5) = 4/60$, $a_2(5) = 12/60$, $a_3(5) = 24/60$, $a_4(5) = 40/60$ and $a_5(5) = 1$; $EU_{R_1} = EU_{R_2} = -0.0056$, $EU_{S_1} = -0.0162$ and $EU_{S_2} = -0.0059$.
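In every case the Receivers' ex ante expected utility depends only on the partition, through $-\frac{1}{12}\sum_i (a_{i+1} - a_i)^3$ as in (4.37). The following sketch (our illustration) evaluates this expression for each partition listed above:

# Illustration: Receiver ex ante utility -(1/12) * sum of cubed
# interval lengths, for the partitions of the nine cases above.
partitions = {
    "case 1":     [0, 5/18, 1],
    "case 2":     [0, 14/45, 1],
    "case 3":     [0, 53/180, 1],
    "cases 4, 7": [0, 1/10, 3/10, 6/10, 1],
    "cases 5, 8": [0, 1/120, 6/120, 15/120, 28/120, 45/120, 66/120, 91/120, 1],
    "cases 6, 9": [0, 4/60, 12/60, 24/60, 40/60, 1],
}

for name, a in partitions.items():
    eu_R = -sum((a[i+1] - a[i])**3 for i in range(len(a) - 1)) / 12
    print(name, round(eu_R, 4))
# case 1 -0.0332, case 2 -0.0298, case 3 -0.0314,
# cases 4, 7 -0.0083, cases 5, 8 -0.0028, cases 6, 9 -0.0056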

Here the best situation for $S_1$ and $S_2$ together is to talk to $R_2$ without $R_1$, and the best situation for $R_1$ and $R_2$ together is to consult $S_2$ without $S_1$. Also, $S_1$ and $S_2$ find it better to talk to both $R_1$ and $R_2$ together than to talk to $R_1$ alone. In all these situations we observe that the closer the preferences of the agents, the higher their utilities. We also have the freedom to adjust the biases in such a way that two Senders talking to two Receivers together is beneficial for all the agents.

4 I Senders and J Receivers

We now present some ideas for extending cheap talk to the more general case of more than two Senders and more than two Receivers. These ideas are not proofs and are given only for intuition; we stress that they have not been verified rigorously. Let there be $I$ Senders and $J$ Receivers, each with a quadratic loss utility function defined as follows (a short computational sketch of these payoff functions is given at the end of this section). For Sender $S_i$, $i \in \{1, \ldots, I\}$, the utility function in the presence of $k$ Receivers ($k \leq J$) and any number of Senders is
$$U_{S_i}(y_{R_1}, \ldots, y_{R_k}, \theta, b_{S_i}) = -\frac{1}{k^2}\big(y_{R_1} + \cdots + y_{R_k} - k(\theta + b_{S_i})\big)^2,$$
where $\frac{1}{k^2}$ is a normalizing factor that puts the Senders' utilities on the same scale as the Receivers'. For Receiver $R_j$, $j \in \{1, \ldots, J\}$, the utility function in the presence of any number of Receivers and Senders is
$$U_{R_j}(y_{R_j}, \theta, b_{R_j}) = -\big(y_{R_j} - (\theta + b_{R_j})\big)^2.$$
Using the proof in Section 2 of Chapter 2, we may say that if there is one Sender $S$ and $k$ Receivers, and $b_S \neq \frac{1}{k}(b_{R_1} + \cdots + b_{R_k})$, then we have a partition equilibrium as in Crawford and Sobel. If there are two Senders and $k$ Receivers, the proof of Chapter 4 may do the job. When there are more than two Senders the analysis becomes complicated, but we may still say that the incentive constraints for the

Sender with the lowest bias and for the Sender with the highest bias will be the same as those given in Krishna and Morgan. The incentive constraints for the other Senders may have to obey the same inequality sign as either the constraint of the Sender with the lowest bias or that of the Sender with the highest bias. Then, using a uniform signaling rule for a finite partition equilibrium, we may find the largest number of partition elements possible, identify the cases where this number tends to infinity, and draw conclusions about equilibria with finitely or infinitely many actions. We do not proceed any further and leave this for future work.
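To fix ideas, here is a minimal Python sketch of the two payoff functions just defined; the function names and the numerical inputs are our own illustration:

# Illustration: the generalized quadratic-loss payoffs for I Senders
# and J Receivers defined above.

def u_sender(actions, theta, b_S):
    # U_Si = -(1/k^2) * (y_R1 + ... + y_Rk - k*(theta + b_S))^2,
    # where k is the number of Receivers the Sender addresses.
    k = len(actions)
    return -(sum(actions) - k * (theta + b_S)) ** 2 / k ** 2

def u_receiver(y, theta, b_R):
    # U_Rj = -(y_Rj - (theta + b_Rj))^2, independent of the other players.
    return -(y - (theta + b_R)) ** 2

# With k = 1 the Sender payoff reduces to the usual Crawford-Sobel loss;
# the loss is zero when the action equals the ideal point theta + b.
print(u_sender([0.5], 0.4, 0.1))        # -0.0 (zero loss)
print(u_sender([0.3, 0.5], 0.3, 0.05))  # -0.0025
print(u_receiver(0.5, 0.5, 0.0))        # -0.0 (zero loss)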

Bibliography

[1] Crawford, Vincent P. and Sobel, Joel, "Strategic Information Transmission," Econometrica, Vol. 50, No. 6 (Nov. 1982), pp. 1431-1451.

[2] Farrell, Joseph and Gibbons, Robert, "Cheap Talk with Two Audiences," The American Economic Review, Vol. 79, No. 5 (Dec. 1989), pp. 1214-1223.

[3] Krishna, Vijay and Morgan, John, "A Model of Expertise," The Quarterly Journal of Economics, Vol. 116, No. 2 (May 2001), pp. 747-775.