Equilibrium computation: Part 1


1 Equilibrium computation: Part 1

Nicola Gatti (Politecnico di Milano, Italy)
Troels Bjerre Sørensen (Duke University, USA)
2 Outline

1. Models and solution concepts: mechanisms in strategic form; solution concepts
2. Non-equilibrium solution concept computation: finding dominated actions; finding never-best-response actions
3. Computing a Nash equilibrium with strategic-form games: matrix games; bimatrix games; polymatrix games
4. Computing correlation-based equilibria with strategic-form games: computing a correlated equilibrium; computing a leader-follower equilibrium
4 Game model

Definition
A game is formally defined by a pair:
- Mechanism M, defining the rules of the game
- Strategies σ, defining the behavior of each agent in the game

Mechanisms
There are three main classes of mechanisms:
- Strategic-form mechanisms: agents play without observing the actions undertaken by the opponents (simultaneous games)
- Extensive-form mechanisms: there is a sequential, tree-based structure according to which an agent can observe some of the opponents' actions
- Stochastic mechanisms: there is a sequential, graph-based structure according to which an agent can observe some of the opponents' actions
5 Games in strategic form (1)

Definition
A strategic-form mechanism is a tuple M = (N, {A_i}_{i ∈ N}, X, f, {U_i}_{i ∈ N}) where:
- N: set of agents
- A_i: set of actions available to agent i
- X: set of outcomes
- f : ×_{i ∈ N} A_i → X: outcome function
- U_i : X → R: utility function of agent i
7 Games in strategic form (2)

Example: Rock-Paper-Scissors
- N = {agent 1, agent 2}
- A_1 = A_2 = {R, P, S}
- X = {win1, win2, tie}
- f(R, S) = f(P, R) = f(S, P) = win1; f(S, R) = f(R, P) = f(P, S) = win2; tie otherwise
- U_i(win_i) = 1, U_i(win_{-i}) = -1, U_i(tie) = 0

Matrix-based representation (agent 1 chooses the row, agent 2 the column; entries are (U_1, U_2)):

            R        P        S
   R      0, 0    -1, 1    1, -1
   P     1, -1     0, 0    -1, 1
   S     -1, 1    1, -1     0, 0
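As a minimal sketch (plain Python, illustrative helper names), the Rock-Paper-Scissors mechanism above can be encoded as payoff matrices together with its outcome function:

```python
# Rock-Paper-Scissors in strategic form: payoff matrices for the two agents.
# Rows are agent 1's actions, columns agent 2's, in the order R, P, S.
ACTIONS = ["R", "P", "S"]

U1 = [[0, -1, 1],
      [1, 0, -1],
      [-1, 1, 0]]
# The game is zero-sum, so agent 2's payoff is the negation of agent 1's.
U2 = [[-u for u in row] for row in U1]

def outcome(a1: str, a2: str) -> tuple:
    """Return the utility pair (U_1, U_2) for a pure action profile."""
    i, j = ACTIONS.index(a1), ACTIONS.index(a2)
    return U1[i][j], U2[i][j]

print(outcome("R", "S"))  # agent 1 wins: (1, -1)
print(outcome("P", "S"))  # agent 2 wins: (-1, 1)
```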
8 Games in strategic form (3)

Example: three-player game with A_1 = {a, b}, A_2 = {L, R}, A_3 = {A, B, C}. Agent 1 chooses the row, agent 2 the column, agent 3 the matrix; entries are (U_1, U_2, U_3).

  Agent 3 plays A:        L          R
                  a    2, 2, 1    0, 3, 0
                  b    3, 0, 2    1, 1, 4

  Agent 3 plays B:        L          R
                  a    2, 3, 0    0, 4, 1
                  b    3, 1, 2    1, 2, 0

  Agent 3 plays C:        L          R
                  a    2, 1, 0    1, 0, 2
                  b    0, 3, 1    2, 3, 1
9 Matrix-based games

Classification
- Matrix game: the agents' utilities can be represented by a single matrix (this happens with two-agent constant-sum games: U_1 + U_2 = constant for every entry)
- Bimatrix game: two-agent general-sum game
- Polymatrix game: the utility U_i of each agent i can be expressed as a set of matrices U_{i,j}, each depending only on the actions of agent i and agent j
  - without the polymatrix structure, U_i has ∏_{j ∈ N} |A_j| entries
  - with the polymatrix structure, U_i has |A_i| · ∑_{j ∈ N, j ≠ i} |A_j| entries
11 Strategies

Definition
- A strategy σ_i of agent i is a probability distribution over the actions A_i
- Call x_{i,j} the probability with which agent i plays action j, and x_i the vector of the x_{i,j}; we need x_i ≥ 0 and 1^T x_i = 1
- A strategy profile σ is the collection of one strategy per agent, σ = (σ_1, ..., σ_n)

Example
With Rock-Paper-Scissors, strategies can be:
  x_1 = (x_{1,R} = 0.2, x_{1,P} = 0.8, x_{1,S} = 0.0)
  x_2 = (x_{2,R} = 0.6, x_{2,P} = 0.0, x_{2,S} = 0.4)
12 Expected utility (1)

Definition
The expected utility of agent i related to an action j is:
  ( U_i ⊗_{k ∈ N, k ≠ i} x_k )_j
where (A)_j is the j-th row of matrix A, so that U_i ⊗_{k ∈ N, k ≠ i} x_k is the vector of expected utilities of agent i, one entry per action.
The expected utility of agent i related to a strategy x_i is:
  x_i^T U_i ⊗_{k ∈ N, k ≠ i} x_k
(In the two-agent case these reduce to (U_i x_{-i})_j and x_i^T U_i x_{-i}.)
13 Expected utility (2)

Example
(The numerical matrices of this slide were lost in transcription.) In the two-agent case, given a payoff matrix U_1 and strategies x_1 and x_2, the expected utilities related to the actions of agent 1 are the entries of the vector U_1 x_2, and the expected utility related to the strategy of agent 1 is the scalar x_1^T U_1 x_2.
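The two formulas can be checked on the Rock-Paper-Scissors matrix and the example strategies from the Strategies slide; a plain-Python sketch:

```python
# Expected utilities in a two-agent game: the action utilities are U1 @ x2,
# the strategy utility is x1 . (U1 @ x2). Pure Python, no numpy needed.
U1 = [[0, -1, 1],   # agent 1's Rock-Paper-Scissors payoffs, order R, P, S
      [1, 0, -1],
      [-1, 1, 0]]
x1 = [0.2, 0.8, 0.0]  # agent 1's strategy from the example slide
x2 = [0.6, 0.0, 0.4]  # agent 2's strategy

def action_utilities(U, x_opp):
    """Vector of expected utilities, one entry per own action: U x_opp."""
    return [sum(u_jk * x_opp[k] for k, u_jk in enumerate(row)) for row in U]

def strategy_utility(U, x_own, x_opp):
    """Expected utility of a mixed strategy: x_own^T U x_opp."""
    return sum(p * v for p, v in zip(x_own, action_utilities(U, x_opp)))

print(action_utilities(U1, x2))       # approximately [0.4, 0.2, -0.6]
print(strategy_utility(U1, x1, x2))   # 0.2*0.4 + 0.8*0.2, approximately 0.24
```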
15 Game equivalence

Definition
Given two games with utility functions U_1, ..., U_n and U'_1, ..., U'_n respectively, if, for every i ∈ N, there is a positive affine transformation between U_i and U'_i such that
  U'_i = α_i U_i + β_i A_1,   with α_i > 0,
where A_1 is a matrix of ones, then the two games are equivalent.

Example
(The numerical matrices of this slide were lost in transcription.) For instance, doubling every entry of U_1 and then adding 1 to every entry yields an equivalent game.
17 Solutions and solution concepts

Definition
Given:
- the strategy x_i of each agent i
- the belief x̂_i^j that each agent i has over the strategy x_j of agent j
a solution is a pair (σ, μ), where μ is the set of agents' beliefs, such that:
- Rationality constraints: the strategies of each agent are optimal w.r.t. her beliefs
- Information constraints: the beliefs of each agent are in some specified sense consistent w.r.t. the opponents' strategies

Definition
A solution concept defines the set of rationality and information constraints.
20 Solution concept classification

Non-equilibrium solution concepts
- Dominance and iterated dominance
- Never best response and iterated never best response
- Maxmin strategy and minmax strategy

Equilibrium solution concepts without correlation
- Nash relaxations: conjectural equilibrium, self-confirming equilibrium
- Nash
- Nash refinements: perfect equilibrium, proper equilibrium

Equilibrium solution concepts with correlation
- One-agent-based correlation: leader-follower/Stackelberg/commitment equilibrium
- Device-based correlation: correlated equilibrium
21 Dominance (1)

Definition
Action j ∈ A_i is strictly dominated if there is a strategy x over A_i that, for every action of the opponents, provides an expected utility larger than action j:
  e_j^T U_i x_{-i} < x^T U_i x_{-i}   for every opponents' profile x_{-i}
where e_j is a vector of zeros except for position j, wherein there is a 1.

Example (agent 1 chooses the row, agent 2 the column): action C is dominated by action B.

            D        E        F
   A      4, 1     1, 2     1, 3
   B      1, 4     4, 0     4, 1
   C      0, 1     2, 5     2, 0
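A sketch of the strict-dominance test between two pure actions, run on the example game above (the function name is illustrative):

```python
# Check whether one pure action strictly dominates another: row `dom`
# must give strictly more than row `j` against every opponent action.
def strictly_dominates(U, dom, j):
    return all(U[dom][k] > U[j][k] for k in range(len(U[j])))

# Agent 1's payoffs in the example game (rows A, B, C; columns D, E, F).
U1 = [[4, 1, 1],
      [1, 4, 4],
      [0, 2, 2]]

print(strictly_dominates(U1, 1, 2))  # B dominates C: True
print(strictly_dominates(U1, 0, 2))  # A does not dominate C (1 < 2): False
```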
23 Dominance (2)

Weak dominance
Action j ∈ A_i is weakly dominated if there is a strategy x over A_i that, for every action of the opponents, provides an expected utility equal to or larger than action j (and strictly larger for at least one).

Dominance and rationality
- No rational agent will play an action that is strictly dominated
- Strictly dominated actions can be safely removed from the game, since they are never played
- The application of strict dominance leads to a reduced game that is equivalent to the original one
- Weakly dominated actions could be played by agents
26 Dominance and mixed strategies

Property
Dominance with mixed strategies is stronger than with pure strategies.

Example (agent 1 chooses the row, agent 2 the column):

            D        E        F
   A      4, 1     1, 2     1, 3
   B      1, 4     4, 0     4, 1
   C      2, 1     2, 5     2, 0

- Dominance in pure strategies: no action of agent 1 is dominated by another action
- Dominance in mixed strategies: action C is dominated by a mixture of A and B, e.g. x = (1/2, 1/2, 0), which yields 2.5 against every opponent action
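Detecting dominance by a mixture can be sketched for the two-candidate case: each opponent column yields a linear constraint on the mixing probability p, and the action is dominated iff the resulting open interval is non-empty. This is an illustrative pure-Python sketch (a linear program would handle the general case of mixtures over all actions):

```python
# Is action `j` of the row player strictly dominated by some mixture of two
# other actions r and s? We need p in [0, 1] with
#   p*U[r][k] + (1-p)*U[s][k] > U[j][k]   for every opponent action k.
# Each column k gives a linear constraint on p; intersect the intervals.
def dominated_by_mixture(U, j, r, s, eps=1e-9):
    lo, hi = 0.0, 1.0
    for k in range(len(U[j])):
        a = U[r][k] - U[s][k]      # slope of the mixture payoff in p
        b = U[s][k] - U[j][k]      # payoff gap at p = 0
        if abs(a) < eps:           # constant in p: need b > 0
            if b <= 0:
                return None
        elif a > 0:                # constraint p > -b/a
            lo = max(lo, -b / a)
        else:                      # constraint p < -b/a
            hi = min(hi, -b / a)
    if lo + eps < hi:              # strict inequalities: open interval
        return (lo + hi) / 2       # one witnessing mixture probability
    return None

U1 = [[4, 1, 1],   # A
      [1, 4, 4],   # B
      [2, 2, 2]]   # C

print(dominated_by_mixture(U1, 2, 0, 1))  # some p in (1/3, 2/3), near 0.5
```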
27 Dominance with more than two agents

Example (agent 1 chooses the row, agent 2 the column, agent 3 the matrix):

  Agent 3 plays A:        L          R
                  a    2, 2, 1    0, 3, 0
                  b    3, 0, 2    1, 1, 4

  Agent 3 plays B:        L          R
                  a    2, 3, 0    0, 4, 1
                  b    3, 1, 2    1, 2, 0

  Agent 3 plays C:        L          R
                  a    2, 1, 0    1, 0, 2
                  b    3, 3, 1    2, 3, 1

Action a is dominated by action b.
28 Dominance as a solution concept

Comments
- Dominance does not require any assumption on the information available to each agent, except for the knowledge of her own utility
- Dominance prescribes which actions are to be played and which are not, independently of the opponents' strategies
- Dominance does not prescribe any strategy over the non-dominated actions
- We have an equilibrium in dominant strategies if dominance removes all the actions except one for every agent

Example 1 (C is a dominant strategy for both agents, so (C, C) is an equilibrium in dominant strategies):

            S        C
   S      2, 2     0, 3
   C      3, 0     1, 1

Example 2 (no action is dominated):

            H        T
   H      2, 0     0, 2
   T      0, 2     2, 0
29 Iterated dominance

Definition
Under the assumption of complete information over the utilities and common knowledge of rationality and utilities, each agent can forecast the dominated actions of the opponents and iteratively remove her own actions.

Example (iterated elimination leaves only (A, D)):

            D        E        F
   A      3, 2     2, 1     2, 0
   B      0, 2     0, 5     3, 3
   C      0, 1     1, 2     1, 4
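The iterated-elimination procedure can be sketched for bimatrix games, restricted for simplicity to dominance by pure actions:

```python
# Iterated elimination of strictly dominated actions (by pure actions only,
# a simplification) for a bimatrix game. Returns the surviving action sets.
def iterated_dominance(U1, U2):
    rows = list(range(len(U1)))
    cols = list(range(len(U1[0])))
    changed = True
    while changed:
        changed = False
        # Remove rows strictly dominated for agent 1 (against surviving cols).
        for j in rows[:]:
            if any(all(U1[d][k] > U1[j][k] for k in cols)
                   for d in rows if d != j):
                rows.remove(j)
                changed = True
        # Remove columns strictly dominated for agent 2 (against surviving rows).
        for k in cols[:]:
            if any(all(U2[r][d] > U2[r][k] for r in rows)
                   for d in cols if d != k):
                cols.remove(k)
                changed = True
    return rows, cols

U1 = [[3, 2, 2], [0, 0, 3], [0, 1, 1]]   # agent 1, rows A, B, C
U2 = [[2, 1, 0], [2, 5, 3], [1, 2, 4]]   # agent 2, columns D, E, F

print(iterated_dominance(U1, U2))  # ([0], [0]): only (A, D) survives
```

On the example game the elimination order is: C (dominated by A), then F (dominated by E), then B (dominated by A), then E (dominated by D).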
31 Best response

Definition
The best response of agent i is an action that maximizes her expected utility given the strategies of the opponents as input:
  BR_i(σ_{-i}) = argmax_{j ∈ A_i} e_j^T U_i ⊗_{k ∈ N, k ≠ i} x_k,   where the x_k are given

Comments
- BR_i(σ_{-i}) can return multiple actions
- A rational agent will play only best-response actions
- Any mixed strategy over best-response actions is a best response
- Any action that is not a never best response is said to be rationalizable
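A sketch of computing BR_i in the two-agent case, reusing the Rock-Paper-Scissors payoffs:

```python
# Best responses of the row player against a given opponent mixed strategy:
# the set of actions maximizing the expected utility e_j^T U x_opp.
def best_responses(U, x_opp, eps=1e-9):
    vals = [sum(u * p for u, p in zip(row, x_opp)) for row in U]
    best = max(vals)
    return [j for j, v in enumerate(vals) if v > best - eps]

U1 = [[0, -1, 1],   # Rock-Paper-Scissors payoffs for agent 1 (rows R, P, S)
      [1, 0, -1],
      [-1, 1, 0]]

print(best_responses(U1, [0.6, 0.0, 0.4]))    # [0]: R is the unique BR
print(best_responses(U1, [1/3, 1/3, 1/3]))    # [0, 1, 2]: all actions tie
```

As the second call shows, BR_i can return multiple actions, and then any mixture over them is also a best response.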
33 Never best response

Definition
A never best response of agent i is an action j such that there is no opponents' strategy profile for which action j is a best response:
  j ∉ BR_i(σ_{-i})   for every σ_{-i}

Comments
- No rational agent will play never-best-response actions
- Never-best-response actions can be safely removed
- Rationalizability requires each agent to know only her own utilities; no assumption is required on the information about the opponents' utilities and rationality
35 Never best response (continued)

Comments
- When information on the utilities and rationality is complete and common, rationalizability can be iterated: each agent removes her never-best-response actions, forecasts the opponents' removals, and repeats
36 Rationalizability and dominance (1)

Comments
- Dominance and rationalizability are equivalent with two agents (the proof is by strong duality)
- With more than two agents, every dominated action is a never best response, but the reverse may not hold (rationalizability can remove more actions than dominance)
- The main difference:
  - Dominance is similar to rationalizability, but it implicitly assumes that the opponents correlate their strategies as if they were a unique agent
  - Rationalizability explicitly considers each opponent as a different, uncorrelated agent
- If an action is dominated when the opponents can correlate, it is also dominated when they cannot
- If an action is dominated when the opponents cannot correlate, it may not be when they can
37 Rationalizability and dominance (2)

Example (agent 3 chooses among the four matrices A, B, C, D; all three agents receive the same payoff in every cell):

  A:        L          R            B:        L          R
     a   0, 0, 0    0, 0, 0           a   0, 0, 0    8, 8, 8
     b   8, 8, 8    0, 0, 0           b   0, 0, 0    0, 0, 0

  C:        L          R            D:        L          R
     a   4, 4, 4    0, 0, 0           a   3, 3, 3    3, 3, 3
     b   0, 0, 0    4, 4, 4           b   3, 3, 3    3, 3, 3

Action D is not strictly dominated, but it is a never best response.
38 Maxmin

Assumptions
- An agent does not know anything about her opponents
- An agent aims at maximizing her utility in the worst case (safety level)

Definition
A maxmin strategy σ* of agent i is defined as:
  σ* = argmax_{σ_i} min_{σ_{-i}} E[U_i]
39 Minmax

Assumptions
- An agent knows the utility of the opponent
- An agent aims at minimizing the opponent's expected utility

Definition
A minmax strategy σ* of agent i is defined as:
  σ* = argmin_{σ_i} max_{σ_{-i}} E[U_{-i}]
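For an agent with only two actions, the maxmin strategy can be computed without an LP solver: the worst-case payoff is a piecewise-linear concave function of the mixing probability, so its maximum is attained at a boundary point or at an intersection of two payoff lines. A sketch, using the two-action game from the dominant-strategies slide:

```python
# Maxmin strategy for a row player with two actions: the worst-case payoff
#   min_j ( p*U[0][j] + (1-p)*U[1][j] )
# is piecewise-linear and concave in p, so its maximum lies at p = 0, p = 1,
# or at an intersection of two of the column payoff lines.
def maxmin_two_actions(U):
    m = len(U[0])
    def worst(p):
        return min(p * U[0][j] + (1 - p) * U[1][j] for j in range(m))
    cands = {0.0, 1.0}
    for j in range(m):
        for k in range(j + 1, m):
            a = U[0][j] - U[1][j]      # slope of line j
            c = U[0][k] - U[1][k]      # slope of line k
            if a != c:                 # lines intersect at a single point
                p = (U[1][k] - U[1][j]) / (a - c)
                if 0.0 <= p <= 1.0:
                    cands.add(p)
    p_star = max(cands, key=worst)
    return p_star, worst(p_star)

# The Matching-Pennies-like game from the slides (agent 1's payoffs 2/0).
U1 = [[2, 0],
      [0, 2]]
print(maxmin_two_actions(U1))  # (0.5, 1.0): mix uniformly, safety level 1
```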
41 Nash equilibrium (1)

Assumptions
- Agents do not communicate before playing
- Agents know the utilities of the opponents, and this information is common

Definition
A Nash equilibrium is a strategy profile (x*_1, ..., x*_n) such that:
  (x*_i)^T U_i ⊗_{j ∈ N, j ≠ i} x*_j ≥ x_i^T U_i ⊗_{j ∈ N, j ≠ i} x*_j   for every x_i, for every i ∈ N

Comments
- In a Nash equilibrium, no agent can gain more by changing her strategy, given that the opponents do not change (i.e., every x*_i is a randomization over best responses)
- Coalition deviations are not considered
42 Nash equilibrium (2)

Definition
A Nash equilibrium is a strategy profile (x*_1, ..., x*_n) such that:
  (x*_i)^T U_i ⊗_{j ∈ N, j ≠ i} x*_j ≥ e_k^T U_i ⊗_{j ∈ N, j ≠ i} x*_j   for every k ∈ A_i, for every i ∈ N

Comments
- We can substitute the condition over all x_i (infinitely many constraints) with one constraint per k ∈ A_i (|A_i| constraints), because x_i^T U_i ⊗_{j ≠ i} x_j is a convex combination of the values e_k^T U_i ⊗_{j ≠ i} x_j
- In particular, x_i^T U_i ⊗_{j ≠ i} x_j is smaller than or equal to max_k e_k^T U_i ⊗_{j ≠ i} x_j; since we cannot know in advance which k attains the maximum, we impose that the equilibrium utility is larger than or equal to all of the e_k^T U_i ⊗_{j ≠ i} x_j
- We obtain a finite number of constraints, linear in the size of the game
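The finite constraint set makes equilibrium verification straightforward; a sketch for bimatrix games, tested on the Prisoner's-Dilemma-style game that appears later in the slides:

```python
# Verify the finite set of Nash conditions for a bimatrix game: each agent's
# equilibrium utility must be >= the utility of every pure action against
# the opponent's equilibrium strategy.
def is_nash(U1, U2, x1, x2, eps=1e-9):
    n1, n2 = len(x1), len(x2)
    u1 = sum(x1[j] * U1[j][k] * x2[k] for j in range(n1) for k in range(n2))
    u2 = sum(x1[j] * U2[j][k] * x2[k] for j in range(n1) for k in range(n2))
    dev1 = max(sum(U1[j][k] * x2[k] for k in range(n2)) for j in range(n1))
    dev2 = max(sum(x1[j] * U2[j][k] for j in range(n1)) for k in range(n2))
    return u1 >= dev1 - eps and u2 >= dev2 - eps

# Prisoner's-Dilemma-style game from the slides (actions S, C).
U1 = [[2, 0], [3, 1]]
U2 = [[2, 3], [0, 1]]

print(is_nash(U1, U2, [0, 1], [0, 1]))  # (C, C) is a Nash equilibrium: True
print(is_nash(U1, U2, [1, 0], [1, 0]))  # (S, S) is not: agent 1 deviates to C
```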
43 Nash theorem

Theorem
Every finite game admits at least one Nash equilibrium in mixed strategies.

Comments
- The proof is by Brouwer's fixed-point theorem: a Nash equilibrium corresponds to a fixed point
- Pure-strategy Nash equilibria may not exist (e.g., Matching Pennies)
- Multiple equilibria can coexist
- With continuous games, things are more complicated: a continuous game may not admit any Nash equilibrium, not even in mixed strategies
44 Example (1)

A pure-strategy equilibrium, (C, F):

            D        E        F
   A      1, 3     2, 1     1, 0
   B      3, 2     0, 5     2, 3
   C      0, 1     1, 2     3, 3

A pure-strategy equilibrium, (B, E):

            D        E        F
   A      6, 2     2, 1     1, 6
   B      3, 2     3, 3     2, 3
   C      0, 6     1, 2     3, 3
45 Example (2)

Multiple pure-strategy equilibria:

            D        E        F
   A      6, 2     2, 1     1, 6
   B      3, 2     3, 3     2, 3
   C      0, 6     1, 2     9, 9

No pure-strategy equilibrium:

            D        E        F
   A      6, 2     2, 1     1, 6
   B      3, 2     0, 3     2, 3
   C      0, 6     1, 2     3, 3
46 Nash equilibrium and Pareto efficiency

Example:

            S        C
   S      2, 2     0, 3
   C      3, 0     1, 1

- There is a unique Nash equilibrium, (C, C)
- (C, C) is Pareto dominated by (S, S)
- (C, C) is the unique Pareto-dominated strategy profile
- There is no relationship between Pareto dominance and Nash equilibrium
50 Perturbed games (1)

Perturbation
Given a set of actions A_i, a perturbation over it is a function assigning a minimum probability f_{i,j} > 0 to each action j ∈ A_i, with ∑_{j ∈ A_i} f_{i,j} < 1.

Parametric perturbation
A perturbation f_{i,j} = f_{i,j}(ε) parameterized by ε ∈ [0, 1].

Perturbed game
Given a perturbation f_{i,j}, a perturbed game is a game in which strategies are constrained as:
  for every i ∈ N, for every j ∈ A_i:  x_{i,j} ≥ f_{i,j}

Perturbation and Nash equilibrium
The introduction of a perturbation (i.e., a perturbed game) affects the set of Nash equilibria.
51 Perturbed games (2)

Example:

            C         D
   A     10, 10     0, 0
   B      0, 0      1, 1

Perturbation f_{1,A} = f_{1,B} = f_{2,C} = f_{2,D} = 0.2:
- (A, C) and (B, D) are Nash equilibria without perturbation
- (0.8A + 0.2B, 0.8C + 0.2D) is a Nash equilibrium with perturbation: all the probability except the perturbation is put on (A, C)
- (0.2A + 0.8B, 0.2C + 0.8D) is not a Nash equilibrium with perturbation: all the probability except the perturbation cannot be put on (B, D), since against 0.2C + 0.8D action A yields 2 > 0.8, so agent 1 would shift probability to A

Perturbation f_{1,A} = f_{1,B} = f_{2,C} = f_{2,D} = 0.05:
- (B, D) is a Nash equilibrium without perturbation
- (0.05A + 0.95B, 0.05C + 0.95D) is a Nash equilibrium with perturbation: all the probability except the perturbation is put on (B, D)
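A sketch of checking these claims: in a perturbed game, an agent's constrained best response puts the floor probability on every action except the best one (assuming a unique best action; function names are illustrative).

```python
# In a perturbed game, an agent's constrained best response puts the floor
# probability f on every non-best action and the remaining mass on the best.
def constrained_best_response(U, x_opp, floor):
    vals = [sum(u * p for u, p in zip(row, x_opp)) for row in U]
    best = max(range(len(vals)), key=vals.__getitem__)
    x = [floor] * len(vals)
    x[best] = 1 - floor * (len(vals) - 1)
    return x

U1 = [[10, 0],   # coordination game from the slide (rows A, B; cols C, D)
      [0, 1]]

# With floor 0.2, against x2 = (0.8, 0.2) the best response is (0.8, 0.2),
# which is consistent with (0.8A+0.2B, 0.8C+0.2D) being an equilibrium.
print(constrained_best_response(U1, [0.8, 0.2], 0.2))
# Against x2 = (0.2, 0.8), action A still earns 2 > 0.8, so the constrained
# best response is again (0.8, 0.2): (0.2A+0.8B, 0.2C+0.8D) is not
# an equilibrium of the perturbed game.
print(constrained_best_response(U1, [0.2, 0.8], 0.2))
```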
52 Perfect equilibrium (1)

Definition
A strategy profile σ* is a perfect equilibrium if there are perturbations f_{i,j}(ε) and a sequence σ*(ε) of Nash equilibria of the associated perturbed games, for ε ∈ (0, ε_0], such that σ*(ε) → σ* as ε → 0.

Example:

            C        D
   A      1, 1     0, 0
   B      0, 0     0, 0

- For every f_{1,A}(ε) > 0, action D is not a best response
- For every f_{2,C}(ε) > 0, action B is not a best response
- (B, D) is a Nash equilibrium, but it is not perfect
- (A, C) is a perfect equilibrium
53 Perfect equilibrium (2)

Properties
- An equilibrium is perfect if it remains a Nash equilibrium when minimally perturbed
- Every finite game admits at least one perfect equilibrium
- Every perfect equilibrium is a Nash equilibrium in which no weakly dominated action is played
- The converse (i.e., every Nash equilibrium in which no weakly dominated action is played is a perfect equilibrium) is true only for two-player games
- There is no relationship between perfect equilibrium and Pareto efficiency
- We can safely consider only perturbations f_{i,j}(ε) that are polynomial in ε
54 Perfect equilibrium (3)

Example (agent 3 chooses the matrix, E or F):

  Agent 3 plays E:        C          D
                  A    1, 1, 1    1, 0, 1
                  B    1, 1, 1    0, 0, 1

  Agent 3 plays F:        C          D
                  A    1, 1, 1    0, 0, 0
                  B    0, 1, 0    1, 0, 0

- F is weakly dominated for agent 3
- D is weakly dominated for agent 2
- (A, C, E) and (B, C, E) are Nash equilibria without weakly dominated actions
- (A, C, E) is not perfect
55 Perfect equilibrium (4)

Example:

            C         D
   A      1, 1     10, 0
   B     0, 10    10, 10

- There are two pure-strategy Nash equilibria, (A, C) and (B, D)
- Actions B and D are weakly dominated
- The unique perfect equilibrium is (A, C)
- (B, D) Pareto dominates (A, C)
56 Perfect equilibrium (5)

Example (in the right-hand game, a strictly dominated row c and column C are added; the minus signs lost in transcription are restored):

  Left:        A        B         Right:       A         B          C
     a      1, 1    0, 0           a        1, 1      0, 0      -1, -2
     b      0, 0    0, 0           b        0, 0      0, 0       0, -2
                                   c      -2, -1     -2, 0      -2, -2

- Without c and C, the unique perfect equilibrium is (a, A)
- With c and C, (b, B) is a perfect equilibrium
- The introduction of strictly dominated actions may change the set of perfect equilibria
57 Proper equilibrium (1)

Perfection weakness
The perfect equilibrium is sensitive to dominated actions.

Aim
To design a refinement of the Nash equilibrium that is not sensitive to dominated actions.

Properness idea
A proper equilibrium is a perfect equilibrium with a specific perturbation: given two actions j and k of agent i, if j provides a utility strictly larger than k, then the perturbation is subject to f_{i,k} ≤ ε f_{i,j}.

In other words
The perturbation has the property that a better action must be played (due to the perturbation) with probability larger than the probability of a worse action.
58 Proper equilibrium (2)

Properties
- Every game admits at least one proper equilibrium
- With two-player games, the proper equilibrium removes weakly dominated strategies
- With more agents, the proper equilibrium may not remove weakly dominated strategies
59 Correlated equilibrium (1)

Assumptions
- Agents can correlate in some way
- Typically, a correlation device is considered that sends a different signal to each agent

Definition
A correlated equilibrium is a tuple (v, π, σ), where v is a tuple of random variables v = (v_1, ..., v_n) with respective domains D = (D_1, ..., D_n), π is a joint distribution over v, and σ = (σ_1, ..., σ_n) is a vector of mappings σ_i : D_i → A_i, such that for each agent i and every alternative mapping σ'_i : D_i → A_i:
  ∑_{d ∈ D} π(d_i, d_{-i}) U_i(σ_i(d_i), σ_{-i}(d_{-i}))  ≥  ∑_{d ∈ D} π(d_i, d_{-i}) U_i(σ'_i(d_i), σ_{-i}(d_{-i}))
It is possible to limit the strategies σ_i to be pure.
60 Correlated equilibrium (2)

Properties
- Every Nash equilibrium is a correlated equilibrium in which there is only one signal per agent
- A correlated equilibrium may not be a Nash equilibrium
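Verifying the incentive constraints of a correlated equilibrium is linear arithmetic. The sketch below takes the device to draw a pure action profile from a joint distribution and to recommend each agent her own component; the game of Chicken used here is a classic illustration (not from the slides) of a correlated equilibrium that is not a Nash equilibrium:

```python
# Verify the correlated-equilibrium incentive constraints for a bimatrix
# game: the device draws a pure action profile from the joint distribution
# `pi` and recommends each agent her own component; no agent should gain
# by deviating from any recommended action.
def is_correlated_eq(U1, U2, pi, eps=1e-9):
    n1, n2 = len(U1), len(U1[0])
    for a in range(n1):                  # recommendation to agent 1
        for dev in range(n1):            # possible deviation
            gain = sum(pi[a][b] * (U1[dev][b] - U1[a][b]) for b in range(n2))
            if gain > eps:
                return False
    for b in range(n2):                  # recommendation to agent 2
        for dev in range(n2):
            gain = sum(pi[a][b] * (U2[a][dev] - U2[a][b]) for a in range(n1))
            if gain > eps:
                return False
    return True

# Illustrative game of Chicken (not from the slides): actions D(are), C(hicken).
U1 = [[0, 7], [2, 6]]
U2 = [[0, 2], [7, 6]]
# The device recommends (D,C), (C,D), (C,C) with probability 1/3 each.
pi = [[0, 1/3], [1/3, 1/3]]

print(is_correlated_eq(U1, U2, pi))  # True: a CE that is not a Nash eq.
```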
61 Leader-follower equilibrium (1)

Assumptions
- An agent, called the leader, can announce (commit to) her strategy to the opponents
- The other agents, called followers, act knowing the commitment
- The announcement must be credible

Definition
A leader-follower equilibrium is a strategy profile in which the expected utility of the leader is maximized, given that the followers act knowing the strategy of the leader.
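A sketch of the leader-follower idea restricted to pure-strategy commitments (in general the optimal commitment is mixed); the payoffs are the first example bimatrix from the Nash-equilibrium slides, and the convention that the follower breaks ties in the leader's favor is an assumption:

```python
# Leader-follower sketch restricted to pure leader commitments: for each
# leader action, the follower best-responds (ties broken in the leader's
# favor, an assumption), and the leader picks the commitment maximizing
# her own utility.
def pure_commitment(U_leader, U_follower):
    best = None
    for i in range(len(U_leader)):             # leader commits to row i
        br_val = max(U_follower[i])            # follower's best payoff
        brs = [j for j in range(len(U_follower[i]))
               if U_follower[i][j] == br_val]  # follower best responses
        lead_val = max(U_leader[i][j] for j in brs)
        if best is None or lead_val > best[2]:
            best = (i, brs, lead_val)
    return best  # (leader action, follower best responses, leader utility)

# Bimatrix game from the earlier example slide (rows A, B, C; cols D, E, F).
U1 = [[1, 2, 1], [3, 0, 2], [0, 1, 3]]
U2 = [[3, 1, 0], [2, 5, 3], [1, 2, 3]]

print(pure_commitment(U1, U2))  # (2, [2], 3): commit to C, follower plays F
```

Here committing to C yields the leader 3, more than her payoff of 1 in the pure Nash equilibrium (C, F) computed without commitment advantage for rows A or B.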
More informationAn O(ND) Difference Algorithm and Its Variations
An O(ND) Difference Algorithm and Its Variations EUGENE W. MYERS Department of Computer Science, University of Arizona, Tucson, AZ 85721, U.S.A. ABSTRACT The problems of finding a longest common subsequence
More informationThe Backpropagation Algorithm
7 The Backpropagation Algorithm 7. Learning as gradient descent We saw in the last chapter that multilayered networks are capable of computing a wider range of Boolean functions than networks with a single
More informationTO QUEUE OR NOT TO QUEUE: EQUILIBRIUM BEHAVIOR IN QUEUEING SYSTEMS
TO QUEUE OR NOT TO QUEUE: EQUILIBRIUM BEHAVIOR IN QUEUEING SYSTEMS REFAEL HASSIN Department of Statistics and Operations Research Tel Aviv University Tel Aviv 69978, Israel hassin@post.tau.ac.il MOSHE
More informationRECENTLY, there has been a great deal of interest in
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 47, NO. 1, JANUARY 1999 187 An Affine Scaling Methodology for Best Basis Selection Bhaskar D. Rao, Senior Member, IEEE, Kenneth KreutzDelgado, Senior Member,
More informationOptimization with SparsityInducing Penalties. Contents
Foundations and Trends R in Machine Learning Vol. 4, No. 1 (2011) 1 106 c 2012 F. Bach, R. Jenatton, J. Mairal and G. Obozinski DOI: 10.1561/2200000015 Optimization with SparsityInducing Penalties By
More informationWHICH SCORING RULE MAXIMIZES CONDORCET EFFICIENCY? 1. Introduction
WHICH SCORING RULE MAXIMIZES CONDORCET EFFICIENCY? DAVIDE P. CERVONE, WILLIAM V. GEHRLEIN, AND WILLIAM S. ZWICKER Abstract. Consider an election in which each of the n voters casts a vote consisting of
More informationNo Free Lunch in Data Privacy
No Free Lunch in Data Privacy Daniel Kifer Penn State University dan+sigmod11@cse.psu.edu Ashwin Machanavajjhala Yahoo! Research mvnak@yahooinc.com ABSTRACT Differential privacy is a powerful tool for
More informationRegular Languages are Testable with a Constant Number of Queries
Regular Languages are Testable with a Constant Number of Queries Noga Alon Michael Krivelevich Ilan Newman Mario Szegedy Abstract We continue the study of combinatorial property testing, initiated by Goldreich,
More informationMatching with Contracts
Matching with Contracts By JOHN WILLIAM HATFIELD AND PAUL R. MILGROM* We develop a model of matching with contracts which incorporates, as special cases, the college admissions problem, the KelsoCrawford
More informationA Fast Learning Algorithm for Deep Belief Nets
LETTER Communicated by Yann Le Cun A Fast Learning Algorithm for Deep Belief Nets Geoffrey E. Hinton hinton@cs.toronto.edu Simon Osindero osindero@cs.toronto.edu Department of Computer Science, University
More informationAn efficient reconciliation algorithm for social networks
An efficient reconciliation algorithm for social networks Nitish Korula Google Inc. 76 Ninth Ave, 4th Floor New York, NY nitish@google.com Silvio Lattanzi Google Inc. 76 Ninth Ave, 4th Floor New York,
More informationSome Applications of Laplace Eigenvalues of Graphs
Some Applications of Laplace Eigenvalues of Graphs Bojan MOHAR Department of Mathematics University of Ljubljana Jadranska 19 1111 Ljubljana, Slovenia Notes taken by Martin Juvan Abstract In the last decade
More informationOrthogonal Bases and the QR Algorithm
Orthogonal Bases and the QR Algorithm Orthogonal Bases by Peter J Olver University of Minnesota Throughout, we work in the Euclidean vector space V = R n, the space of column vectors with n real entries
More informationDecoding by Linear Programming
Decoding by Linear Programming Emmanuel Candes and Terence Tao Applied and Computational Mathematics, Caltech, Pasadena, CA 91125 Department of Mathematics, University of California, Los Angeles, CA 90095
More informationGiotto: A TimeTriggered Language for Embedded Programming
Giotto: A TimeTriggered Language for Embedded Programming THOMAS A HENZINGER, MEMBER, IEEE, BENJAMIN HOROWITZ, MEMBER, IEEE, AND CHRISTOPH M KIRSCH Invited Paper Giotto provides an abstract programmer
More information