Local stability under evolutionary game dynamics
Theoretical Economics 5 (2010)

William H. Sandholm
Department of Economics, University of Wisconsin

We prove that any regular evolutionarily stable strategy (ESS) is asymptotically stable under any impartial pairwise comparison dynamic, including the Smith dynamic; under any separable excess payoff dynamic, including the BNN dynamic; and under the best response dynamic. Combined with existing results for imitative dynamics, our analysis validates the use of regular ESS as a blanket sufficient condition for local stability under evolutionary game dynamics.

Keywords. Evolutionary game dynamics, ESS. JEL classification. C.

1. Introduction

A basic task of evolutionary game theory is to find conditions under which equilibrium play is dynamically stable. Much of the literature focuses on global convergence to equilibrium, identifying global payoff structures that ensure eventual equilibrium play regardless of agents' initial behavior.^1 Inevitably, these requirements on payoff structure are quite demanding: although each condition is satisfied in some applications, a typical game with many strategies does not satisfy any known condition for global convergence. To obtain stability results relevant to a wider range of games, one can turn instead to local stability results, seeking conditions under which an equilibrium is robust to small changes in a population's behavior. Because conditions for local stability need only constrain payoffs near the equilibrium in question, they are far easier to satisfy, and so are more likely to be applicable.

The natural starting point for discussions of local stability is the notion of an evolutionarily stable strategy (ESS), introduced for single-population random matching models by Maynard Smith and Price (1973). These authors envision a monomorphic population whose members all play the same mixed strategy, and they call this mixed strategy an ESS if the incumbent population is capable of resisting invasions by any small mutant group whose members all play some alternative mixed strategy.^2

William H. Sandholm: [email protected] I thank Ross Cressman, Josef Hofbauer, Ed Green, and two anonymous referees for helpful comments, and Katsuhiko Aiba for excellent research assistance. Financial support from NSF Grants SES and SES is gratefully acknowledged.
^1 Classes of games admitting global convergence results include potential games, supermodular games, dominance-solvable games, and stable games. Sandholm (2009) offers a survey of these results.
^2 This interpretation is formalized by Cressman (1992, 2006) and Cressman et al. (2001); see Section 5.
Copyright 2010 William H. Sandholm. Licensed under the Creative Commons Attribution-NonCommercial License 3.0. Available at DOI: /TE505.
Most economic applications of evolutionary game theory do not proceed from this monomorphic, mixed-strategist model, but from a polymorphic model in which agents choose among the available pure strategies. This, for example, is the setting of the replicator dynamic of Taylor and Jonker (1978). However, Maynard Smith and Price's (1973) conditions retain their importance in this new context: Taylor and Jonker (1978), Hofbauer et al. (1979), Zeeman (1980), and Hofbauer and Sigmund (1988) show that ESS provides a sufficient condition for local stability under the replicator dynamic. Because in this setting Maynard Smith and Price's (1973) conditions are applied to population states, that is, to distributions of the agents over the pure strategies, we refer to a strategy distribution that satisfies these conditions as an evolutionarily stable state.^3

Subsequent studies of local stability under evolutionary dynamics have tended to focus on interior equilibria, that is, equilibria in which every pure strategy is played by a positive mass of players.^4 Analyses that do not impose this restriction are less common. The most noteworthy results in this direction are due to Cressman (1997), who studies a general class of dynamics that captures imitative behavior. Cressman (1997) shows that near rest points, the linearization of any imitative dynamic preserves the basic structure of the linearization of the replicator dynamic, which itself is the fundamental example of an imitative dynamic. Combining this insight with results from Taylor and Jonker (1978) and Hines (1980), Cressman (1997) proves that in nondegenerate cases, regular ESSs are locally stable under all imitative dynamics.^5

Imitation is undoubtedly a basic component of human decision making, and most early analyses of evolutionary game dynamics in the economics literature focus on models that can be interpreted as imitative.^6 However, the direct evaluation of strategic alternatives, rather than indirect evaluation through the experiences of others, is also a basic mode of choice; indeed, it is the approach implicitly followed in most economic and game-theoretic modeling. Direct evaluation has a number of important consequences for evolutionary game dynamics: strategies' growth rates no longer need to be tied to their current levels of utilization, and unused strategies may be chosen if agents deem them worthwhile.

The first evolutionary dynamic based on direct evaluation of alternative strategies to gain wide currency in economics is the best response dynamic of Gilboa and Matsui (1991). By having revising agents always switch to a strategy that is currently optimal, this dynamic offers a natural amalgam of inertial evolutionary modeling and traditional, rational game-theoretic analysis. At the same time, optimization requires exact knowledge of all strategies' payoffs. Because evolutionary analysis is employed in settings where agents intermittently and myopically update their strategies, the assumption that this myopic updating is done in an optimal way seems somewhat incongruous.

^3 Thomas (1984) is an early reference that emphasizes the distinction between evolutionarily stable strategies and evolutionarily stable states.
^4 See Hofbauer (1995b, 2001), Hopkins (1999), Hofbauer and Hopkins (2005), and Sandholm (2007).
^5 The formal definitions of ESS and regular ESS are presented in Section 3.
^6 The reason for this is in part historical.
Evolutionary game dynamics were introduced in biology to model natural selection, with game payoffs representing fitnesses, and dynamics describing relative rates of births and deaths in animal populations. Early work by economists on deterministic evolutionary game dynamics retained this biological approach. Only later, with the work of Björnerstedt and Weibull (1996), Weibull (1995), Schlag (1998), and Hofbauer (1995a), was it recognized that these explicitly biological models could be reinterpreted as models of imitation by economic agents.
To retain the assumption of direct evaluation of strategies, one can suppose instead that each revising agent considers only a single, randomly chosen alternative to his current strategy. Protocols of this sort form the basis for two families of evolutionary dynamics from the literature. When agents compare the candidate strategy's payoff to the population's average payoff, aggregate behavior is described by an excess payoff dynamic. If instead each agent compares the candidate strategy's payoff to the payoff of his current strategy, aggregate behavior is described by a pairwise comparison dynamic. The Brown-von Neumann-Nash (BNN) dynamic (Brown and von Neumann 1950) and the Smith dynamic (Smith 1984) are early examples from these two classes of dynamics; only more recently have excess payoff dynamics and pairwise comparison dynamics been studied systematically.^7

All of the direct evaluation dynamics described above differ from imitative dynamics in crucial ways: they allow agents to switch to unused strategies and they do not exhibit the percentage-growth-rate functional form that is characteristic of imitative dynamics. Given these basic differences, it is unclear whether conditions on payoffs that imply local stability under imitative dynamics do the same for dynamics based on other choice principles.^8

In this paper, we study the local stability of rest points under impartial pairwise comparison dynamics, which include the Smith dynamic as a special case, under separable excess payoff dynamics, which include the BNN dynamic, and under the best response dynamic. By modifying Lyapunov functions used by Hofbauer and Sandholm (2009) to study evolution in stable games, we prove that any regular ESS is locally asymptotically stable under all of the dynamics in the classes noted above. Combined with Cressman's (1997) results for imitative dynamics, our analysis validates the use of regular ESS as a blanket sufficient condition for local stability under evolutionary game dynamics.

The paper proceeds as follows. Section 2 introduces population games and evolutionary dynamics, and presents the classes of dynamics under study. Section 3 defines the notions of ESS and regular ESS. Section 4 presents our local stability theorem, describes the intuition behind its proof, and compares this analysis with local stability analyses for imitative dynamics. Section 5 presents extensions to multipopulation models. Section 6 offers concluding discussion. All proofs are presented in the Appendix.

2. The model

2.1 Population games

To keep the notation manageable, we focus first on games played by a single population of agents. Analogous results for multipopulation models are presented in Section 5.

^7 See Weibull (1996), Hofbauer (2001), and Sandholm (2005, 2010).
^8 Indeed, Friedman (1991, p. 656) suggests that ESS should not be viewed as a general sufficient condition for local stability. Furthermore, imitative dynamics are known to have special properties in related contexts. Samuelson and Zhang (1992) prove that under imitative dynamics, a strictly dominated strategy must vanish along any interior solution trajectory, but Hofbauer and Sandholm (2007) show that any continuous evolutionary dynamic that is not purely imitative allows dominated strategies to survive in some games.
We suppose that there is a unit mass of agents, each of whom chooses a pure strategy from the set S = {1, ..., n}. The aggregate behavior of these agents is described by a population state x ∈ X, where X = {x ∈ R^n_+ : Σ_{j∈S} x_j = 1} is the simplex and x_j represents the proportion of agents who choose pure strategy j. We identify a population game with a continuously differentiable payoff function F: X → R^n. The scalar F_i(x) represents the payoff to strategy i when the population state is x, and the matrix DF(x) ∈ R^{n×n} denotes the derivative of F at x. We let F̄(x) = Σ_{j∈S} x_j F_j(x) denote the average payoff obtained by the members of the population, and we let F̂_i(x) = F_i(x) − F̄(x) denote the excess payoff to strategy i over the population's average payoff.

The simplest population games are those with linear payoffs, for which there is a matrix A ∈ R^{n×n} such that F_i(x) = Σ_{j∈S} A_{ij} x_j.^9 In this case, the payoffs to all strategies can be expressed concisely as F(x) = Ax, implying that DF(x) = A. More generally, payoffs may depend nonlinearly on the population state. For example, if we use a population game to model network congestion, then the payoff from using a given link is constant at low levels of traffic, but falls steeply as the link nears its capacity.^10 None of the results to follow requires payoffs to be linear: the continuous differentiability of F is all that is needed for our analysis to hold.

2.2 Revision protocols and evolutionary dynamics

To derive evolutionary dynamics from a model of individual decision making, we introduce revision protocols, which describe how agents adjust their choices of strategies during recurrent play of the game at hand. Formally, a revision protocol is a Lipschitz continuous map ρ: R^n × X → R^{n×n}_+ that takes payoff vectors π and population states x as arguments, and returns nonnegative matrices as outputs. The scalar ρ_ij(π, x) is called the conditional switch rate from strategy i to strategy j. If we imagine that agents receive revision opportunities independently according to rate R Poisson processes, then ρ_ij(π, x)/R represents the probability that an i player who receives a revision opportunity switches to strategy j ≠ i.^11 This framework allows for the possibility that agents observe all strategies' current payoffs and utilization levels, but in many cases the information requirements are much weaker than this; see the examples below.

A revision protocol ρ and a population game F together define an ordinary differential equation on the state space X:

\dot{x}_i = \sum_{j \in S} x_j \rho_{ji}(F(x), x) - x_i \sum_{j \in S} \rho_{ij}(F(x), x).   (M)

Equation (M) is called the mean dynamic generated by ρ and F.

^9 One could interpret F_i(x) here as representing the expected payoff of a random match in the symmetric normal form game A. This is not our preferred interpretation, because in the subsequent model of strategy revision, F_i(x) is best viewed as a realized payoff. To retain a matching interpretation, one can view F_i(x) as a realized payoff if each agent is matched once with every opponent. See Section 6 for further discussion.
^10 For more on large-population congestion games, see Beckmann et al. (1956) and Sandholm (2001).
^11 Note that the rate R must be large enough that Σ_{j≠i} ρ_ij(π, x)/R never exceeds 1, and that the diagonal elements of ρ(π, x) play no formal role in the model.
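The mean dynamic (M) is straightforward to evaluate numerically once a revision protocol is specified. The sketch below (Python with NumPy; the function and variable names are illustrative, not taken from the paper) computes the right-hand side of (M) for an arbitrary protocol and checks it on a small linear game.

```python
import numpy as np

def mean_dynamic(x, F, rho):
    """Right-hand side of the mean dynamic (M).

    x   : population state (nonnegative, sums to one)
    F   : payoff function; F(x) returns the length-n payoff vector
    rho : revision protocol; rho(pi, x) returns a nonnegative n x n
          matrix of conditional switch rates
    """
    pi = F(x)
    R = rho(pi, x)
    inflow = x @ R                # inflow_i  = sum_j x_j * rho_ji
    outflow = x * R.sum(axis=1)   # outflow_i = x_i * sum_j rho_ij
    return inflow - outflow

# Example: a linear game F(x) = Ax together with a payoff-comparison
# protocol of the kind discussed in Example 2 below, rho_ij = [pi_j - pi_i]_+.
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])
x = np.array([0.5, 0.3, 0.2])
rho = lambda pi, state: np.maximum(pi[None, :] - pi[:, None], 0.0)
print(mean_dynamic(x, lambda s: A @ s, rho))   # a vector that sums to zero
```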
The first term in (M) captures the inflow of agents to strategy i from other strategies, whereas the second term captures the outflow of agents to other strategies from strategy i.

Deriving evolutionary dynamics from revision protocols not only allows us to study the aggregate implications of different rules for individual choice, but also lets us find microfoundations for existing dynamics. For instance, the replicator dynamic

\dot{x}_i = x_i \hat{F}_i(x)   (1)

is defined by the property that the percentage growth rate of each strategy in use is equal to its excess payoff. Although Taylor and Jonker (1978) introduced this dynamic to model natural selection in animal populations, for the purposes of economic analysis the replicator dynamic can be viewed as a model of payoff-dependent imitation. One revision protocol that generates this dynamic is pairwise proportional imitation,^12

\rho_{ij}(\pi, x) = x_j [\pi_j - \pi_i]_+ ,   (2)

where [d]_+ = max{d, 0} is the positive part of d. The x_j term in this protocol reflects the fact that a revising agent picks a candidate strategy by observing the current strategy of a randomly chosen opponent. The agent switches to the opponent's strategy if its payoff is higher than that of his own current strategy, doing so with probability proportional to the payoff difference. Substituting protocol (2) into equation (M) and simplifying the result yields the replicator dynamic (1).

2.3 Families of evolutionary dynamics

This approach to dynamics via revision protocols also allows us to define families of evolutionary dynamics, where members of the same family are derived from qualitatively similar protocols.

Example 1. An imitative dynamic is an evolutionary dynamic of the form

\dot{x}_i = x_i G_i(x),   (3)

where the C^1 map G: R^n → R^n satisfies monotone percentage growth rates:

G_i(x) \geq G_j(x) \ \text{if and only if} \ F_i(x) \geq F_j(x),   (4)

as well as the forward invariance condition x'G(x) = 0 (see equation (13) below). The simplest and best-known imitative dynamic, the replicator dynamic (1), is obtained when G(x) is the excess payoff function F̂(x) = F(x) − F̄(x)1, where 1 ∈ R^n is the vector of ones. Imitative dynamics can be derived from revision protocols of the imitative form

\rho_{ij}(\pi, x) = x_j r_{ij}(\pi, x),

where for all i, j, k ∈ S,

\pi_j \geq \pi_i \ \text{if and only if} \ r_{kj}(\pi, x) - r_{jk}(\pi, x) \geq r_{ki}(\pi, x) - r_{ik}(\pi, x).

^12 This protocol is due to Schlag (1998). Other imitative protocols that generate the replicator dynamic can be found in Björnerstedt and Weibull (1996), Weibull (1995), and Hofbauer (1995a).
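The claim that substituting protocol (2) into (M) yields the replicator dynamic (1) is easy to verify numerically. A short sketch follows (Python with NumPy, reusing the mean_dynamic helper from the previous sketch; all names are illustrative, not from the paper).

```python
import numpy as np

def mean_dynamic(x, F, rho):
    # Right-hand side of (M), as in the previous sketch
    pi = F(x)
    R = rho(pi, x)
    return x @ R - x * R.sum(axis=1)

def pairwise_proportional_imitation(pi, x):
    # Protocol (2): rho_ij(pi, x) = x_j * [pi_j - pi_i]_+
    return x[None, :] * np.maximum(pi[None, :] - pi[:, None], 0.0)

def replicator(x, pi):
    # Replicator dynamic (1): xdot_i = x_i * Fhat_i(x)
    return x * (pi - x @ pi)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))        # a random linear game, F(x) = Ax
x = rng.dirichlet(np.ones(4))      # a random interior population state
print(np.allclose(mean_dynamic(x, lambda s: A @ s, pairwise_proportional_imitation),
                  replicator(x, A @ x)))   # True: (2) plugged into (M) gives (1)
```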
The imitative aspect of these protocols is manifested by the x_j term, the presence of which leads the dynamic (3) to take a simple percentage-growth-rate form. For more on the foundations of imitative dynamics, see Björnerstedt and Weibull (1996), Weibull (1995), Hofbauer (1995a), and Sandholm et al. (2008).

Under imitative protocols, agents select candidate strategies by observing their opponents' behavior.^13 Under the protocols to follow, agents instead choose candidate strategies directly, allowing them to switch to unused strategies whose payoffs are sufficiently appealing.

Example 2. Suppose that the rate at which current strategy i players switch to the candidate strategy j depends only on the payoff difference between them, and that switches to better-performing strategies are the only ones that have positive probability. Such rules are described by revision protocols of the form

\rho_{ij}(\pi, x) = \phi_{ij}(\pi_j - \pi_i),

where each function φ_ij is sign-preserving, in the sense that sgn(φ_ij(d)) = sgn([d]_+).^14 Dynamics (M) generated by protocols of this form are called pairwise comparison dynamics (Sandholm 2010). We call such a dynamic impartial if φ_ij ≡ φ_j, so that the function of the payoff difference that describes the conditional switch rate from i to j does not depend on an agent's current strategy i. The simplest example of a pairwise comparison dynamic is obtained by setting φ_ij(c) = [c]_+, which yields the Smith dynamic (Smith 1984):

\dot{x}_i = \sum_{j \in S} x_j [F_i(x) - F_j(x)]_+ - x_i \sum_{j \in S} [F_j(x) - F_i(x)]_+ .

Even in this simple case, we obtain an equation of motion that is noticeably more complicated than the general imitative dynamic (3).

Example 3. Suppose that the rate at which agents switch from strategy i to strategy j is solely a function of strategy j's excess payoff, and that agents switch only to strategies whose payoffs exceed the population average. Such rules are represented by protocols of the form

\rho_{ij}(\pi, x) = \phi_j\Big(\pi_j - \sum_{k \in S} x_k \pi_k\Big),

where each function φ_j is sign-preserving: sgn(φ_j(d)) = sgn([d]_+). We call the dynamics (M) generated by protocols of this form separable excess payoff dynamics. Choosing φ_j(d) = [d]_+ yields the BNN dynamic (Brown and von Neumann 1950):

\dot{x}_i = [\hat{F}_i(x)]_+ - x_i \sum_{j \in S} [\hat{F}_j(x)]_+ .

^13 Of course, the decision about whether to actually switch to the candidate strategy will depend on the current strategy's payoff, the candidate strategy's payoff, or both.
^14 More explicitly, φ_ij(d) > 0 if d > 0 and φ_ij(d) = 0 if d ≤ 0.
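The closed forms of the Smith and BNN dynamics in Examples 2 and 3 can be coded directly. A minimal sketch (Python with NumPy; function names are illustrative) that also checks that each right-hand side sums to zero, so that the simplex is forward invariant:

```python
import numpy as np

def smith(x, pi):
    # Smith dynamic: xdot_i = sum_j x_j [F_i - F_j]_+  -  x_i sum_j [F_j - F_i]_+
    D = np.maximum(pi[:, None] - pi[None, :], 0.0)   # D[i, j] = [pi_i - pi_j]_+
    return D @ x - x * D.sum(axis=0)

def bnn(x, pi):
    # BNN dynamic: xdot_i = [Fhat_i]_+  -  x_i sum_j [Fhat_j]_+
    excess = np.maximum(pi - x @ pi, 0.0)
    return excess - x * excess.sum()

# Sanity check on a random linear game: both right-hand sides lie in TX.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
x = rng.dirichlet(np.ones(4))
for dynamic in (smith, bnn):
    print(dynamic.__name__, dynamic(x, A @ x).sum())   # both sums are numerically zero
```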
For more on this and other examples of excess payoff dynamics, see Skyrms (1990), Swinkels (1993), Weibull (1996), Hofbauer (2001), and Sandholm (2005).

Example 4. To obtain a dynamic that reflects myopic optimization, one can suppose that revising agents always switch to a best response: ρ_i(π, x) = arg max_{y∈X} y'π. Then aggregate behavior is described by the best response dynamic (Gilboa and Matsui 1991; also see Hofbauer 1995b):

\dot{x} \in \arg\max_{y \in X} y' F(x) - x .

The best response dynamic is not a (single-valued) differential equation, but a (multivalued) differential inclusion: there may be multiple feasible directions of motion from states at which more than one strategy is optimal. Therefore, our derivation here is somewhat informal, taking a multivalued revision protocol as its basis. It is possible, though, to provide a precise account of evolutionary processes based on multivalued protocols; see Benaïm et al. (2005) and Gorodeisky (2008, 2009).

3. Evolutionarily stable states

We now introduce our sufficient condition for local stability. To begin, we call x* ∈ X an evolutionarily stable state if

(y - x^*)' F(x^*) \leq 0 \ \text{for all} \ y \in X;   (5)

there is a neighborhood O ⊆ X of x* such that for all y ∈ O − {x*}, (y − x*)'F(x*) = 0 implies that (y − x*)'F(y) < 0.   (6)

Condition (5) says that x* is a Nash equilibrium. Condition (6) requires that if a state y near x* is an alternative best response to x*, then an infinitesimal group of invaders whose aggregate behavior is described by x* can invade an incumbent population playing y.^15 If F(x) = Ax is linear, then the force of condition (6) does not change if we require the implication to hold for all y ∈ X − {x*}, as originally specified by Maynard Smith and Price (1973) in this linear setting.^16

To prove their local stability results for the replicator dynamic, Taylor and Jonker (1978) introduce the slightly stronger notion of a regular ESS, which is defined by the following two conditions:

F_i(x^*) = \bar{F}(x^*) > F_j(x^*) \ \text{whenever} \ x^*_i > 0 \ \text{and} \ x^*_j = 0;   (7)

for all y ∈ X − {x*}, (y − x*)'F(x*) = 0 implies that (y − x*)'DF(x*)(y − x*) < 0.   (8)

Condition (7) says that x* is a quasistrict equilibrium: a Nash equilibrium at which each unused strategy performs strictly worse than each strategy in use.

^15 To see this, notice that the inequality in condition (6) can be rewritten as (x*)'F(y) > y'F(y). Other equivalent characterizations of ESS can be found in Bomze and Weibull (1995).
^16 This follows from the fact that every y ∈ X − {x*} satisfies y − x* = c(w − x*) for some w ∈ O − {x*} and c > 0.
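The best response dynamic of Example 4 is set-valued; for numerical work one typically follows a particular selection of the inclusion. A minimal sketch (Python with NumPy; the tie-breaking rule and the example game are illustrative choices, not from the paper):

```python
import numpy as np

def best_response_direction(x, pi, tol=1e-9):
    """One selection from the best response dynamic: xdot = b - x, where b is a
    mixed best response to the payoffs pi. Ties are broken by mixing uniformly
    over all optimal pure strategies."""
    optimal = pi >= pi.max() - tol
    b = optimal / optimal.sum()          # uniform mixture over the argmax set
    return b - x

# Example: one feasible direction of motion in a three-strategy linear game.
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [2.0, 1.0, 0.0]])
x = np.array([0.5, 0.25, 0.25])
print(best_response_direction(x, A @ x))
```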
Condition (8) says that as the state moves from x* toward an alternate best response y (that is, in direction y − x*), the payoff advantage of x* over y grows at a linear rate. Condition (8) is equivalent to condition (6) when F is linear (e.g., when F is generated by matching in a normal form game), but is slightly stronger than condition (6) when nonlinear payoffs are allowed.

Example 5. The conditions for regular ESS are only slightly more demanding than those for ESS, as can be seen by looking at instances of games with nonregular ESSs. First, consider the two-strategy game defined by F_1(x) = 1 and F_2(x) = x_1 − ε. As long as ε ≥ 0, state e_1 = (1, 0) is the unique Nash equilibrium of F and an ESS of F. However, for e_1 to be a regular ESS, it must be that ε > 0, so that e_1 is a quasistrict (and even a strict) equilibrium.

Next, consider the two-strategy game defined by F_1(x) = (x_2 − 1/2)^3 + ε(x_2 − 1/2) and F_2(x) = 0. For any ε ≥ 0, state x* = (1/2, 1/2) is the unique Nash equilibrium of F and an ESS of F. However, if ε = 0, then DF(x*) = 0, so condition (8) clearly fails. For instance, letting y = e_2, we have that (e_2 − x*)'DF(x*)(e_2 − x*) = 0. In words, this equality says that if the population begins at equilibrium x*, and if some agents switch from strategy 1 to strategy 2, then although the payoff to strategy 2 falls below the payoff to mixed strategy x*, strategy 2's payoff disadvantage initially does not grow at a linear rate. If instead ε > 0, then

DF(x^*) = \begin{pmatrix} 0 & \varepsilon \\ 0 & 0 \end{pmatrix},

so the payoff disadvantage of strategy 2 relative to x* does grow at a linear rate: (e_2 − x*)'DF(x*)(e_2 − x*) = −(1/4)(∂F_1/∂x_2)(x*) = −(1/4)ε < 0. Thus, when ε > 0, condition (8) is satisfied and x* is a regular ESS. The presence of the nonlinear payoff function in this example is no accident: as we noted above, nonlinear payoffs are necessary for discrepancies between conditions (6) and (8) to occur.

It will be useful to have a more concise formulation of regular ESS. To this end, let TX = {z ∈ R^n : Σ_{j∈S} z_j = 0} denote the tangent space of X, let S(x) = {j ∈ S : x_j > 0} denote the support of state x, and let R^n_{S(x)} = {y ∈ R^n : y_j = 0 whenever j ∉ S(x)} denote the set of vectors in R^n whose nonzero components correspond to strategies in the support of x.

Observation 1. State x* is a regular ESS if and only if it is a quasistrict equilibrium (7) that satisfies

z' DF(x^*) z < 0 \ \text{for all nonzero} \ z \in TX \cap R^n_{S(x^*)}.   (9)

In words, state x* is a regular ESS if it is a quasistrict equilibrium, and if the payoff derivative matrix DF(x*) is negative definite with respect to TX ∩ R^n_{S(x*)}, the set of vectors tangent to the face of X containing x*.
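Observation 1 reduces checking a regular ESS to two finite-dimensional computations: quasistrictness of x* and negative definiteness of DF(x*) on TX ∩ R^n_{S(x*)}. A minimal numerical sketch (Python with NumPy; the helper name and tolerances are illustrative), applied to the second game of Example 5 as reconstructed above:

```python
import numpy as np

def is_regular_ess(x_star, F_star, DF_star, tol=1e-9):
    """Check the characterization in Observation 1.

    F_star  : the payoff vector F(x*)
    DF_star : the derivative matrix DF(x*)
    Returns True if x* is quasistrict (7) and DF(x*) is negative definite
    on TX intersected with R^n_{S(x*)} (condition (9))."""
    support = x_star > tol
    avg = float(F_star @ x_star)   # Fbar(x*), since x* sums to one
    # Quasistrictness (7): used strategies earn Fbar(x*), unused earn strictly less.
    if not np.allclose(F_star[support], avg, atol=1e-7):
        return False
    if np.any(F_star[~support] >= avg - tol):
        return False
    # Condition (9): negative definiteness on the tangent space of the face.
    idx = np.flatnonzero(support)
    if len(idx) == 1:
        return True   # a pure quasistrict state is a strict equilibrium
    B = np.zeros((len(x_star), len(idx) - 1))
    for c, (i, j) in enumerate(zip(idx[:-1], idx[1:])):
        B[i, c], B[j, c] = 1.0, -1.0   # e_i - e_j spans TX ∩ R^n_{S(x*)}
    Q, _ = np.linalg.qr(B)
    H = Q.T @ (DF_star + DF_star.T) @ Q / 2.0   # symmetric part on the subspace
    return bool(np.all(np.linalg.eigvalsh(H) < -tol))

eps = 0.1
print(is_regular_ess(np.array([0.5, 0.5]),          # x* = (1/2, 1/2)
                     np.array([0.0, 0.0]),          # F(x*)
                     np.array([[0.0, eps],          # DF(x*) for eps > 0
                               [0.0, 0.0]])))       # -> True
```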
4. Local stability

4.1 The main result

To prepare for our main result, we review the relevant notions of stability for dynamical systems. Let x* be a rest point of the dynamic (M). We say that x* is Lyapunov stable if for every neighborhood O of x*, there exists a neighborhood O' of x* such that every solution {x_t}_{t≥0} of (M) that starts in O' is contained in O: that is, x_0 ∈ O' implies that x_t ∈ O for all t ≥ 0. The rest point x* is attracting if there is a neighborhood Ô of x* such that every solution that starts in Ô converges to x*. Finally, x* is asymptotically stable if it is Lyapunov stable and attracting.

Taylor and Jonker (1978) and Hines (1980) prove that any regular ESS is asymptotically stable under the replicator dynamic (1) by showing that the eigenvalues of the relevant linearized system have negative real parts. Cressman (1997) extends this linearization analysis to general imitative dynamics. We describe these analyses in detail in Section 4.3. Members of the other classes of dynamics introduced in Section 2.3 often fail to be differentiable, so we cannot use linearization to obtain local stability results. Instead, by constructing suitable Lyapunov functions, we prove that any regular ESS is locally stable under the classes of dynamics from Examples 2, 3, and 4.

Theorem 1. Let x* be a regular ESS of F. Then x* is asymptotically stable under
(i) any impartial pairwise comparison dynamic for F;
(ii) any separable excess payoff dynamic for F;
(iii) the best response dynamic for F.

4.2 Intuition for the main result

We now provide the intuition behind the proof of Theorem 1. The complete proof is presented in the Appendix.

The proof of the theorem builds on analyses from Hofbauer and Sandholm (2009) of evolutionary dynamics in stable games. A population game F is a strictly stable game if

z' DF(x) z < 0 \ \text{for all nonzero} \ z \in TX \ \text{and all} \ x \in X.

That is, F is a strictly stable game if at all states x ∈ X, the payoff derivative matrix DF(x) is negative definite with respect to directions tangent to X. By constructing suitable global Lyapunov functions, Hofbauer and Sandholm (2009) show that the Nash equilibrium of any strictly stable game is unique and that it is globally asymptotically stable under the dynamics considered in Theorem 1. The Lyapunov functions for the three classes of dynamics considered in the theorem are

\Psi(x) = \sum_{i \in S} \sum_{j \in S} x_i \int_0^{F_j(x) - F_i(x)} \phi_j(s)\, ds,   (10)
\Gamma(x) = \sum_{j \in S} \int_0^{\hat{F}_j(x)} \phi_j(s)\, ds,   (11)

and

M(x) = \max_{y \in X} (y - x)' F(x),

respectively. In the first two cases, the Lyapunov functions depend not only on the game F, but also on the exact specification of the dynamic: the latter dependence is through the appearance of the conditional switch rate functions φ_j in the formulas for Ψ and Γ.

To see why this analysis is relevant here, recall from Observation 1 that a regular ESS x* is a quasistrict equilibrium whose payoff derivative matrix DF(x*) is negative definite with respect to directions in TX ∩ R^n_{S(x*)}. If all strategies are in use at x*, this set of directions is just TX, so that F resembles a stable game near x*. Thus, if x* ∈ int(X), the global Lyapunov functions for stable games given above also serve as local Lyapunov functions for the ESS x*.

Suppose instead that x* ∉ int(X), so that at least one pure strategy is unused at x*. Because x* is a quasistrict equilibrium, all unused strategies earn strictly lower payoffs than strategies in the support of x*; because payoffs are continuous, this is true not only at x* itself, but also at states near x*. Under any dynamic that respects these payoff differences, solutions from initial conditions near x* should approach X_{x*} = {x ∈ X : S(x) = S(x*)}, the face of X that contains x*. If x* is a pure state (i.e., a vertex of X), and thus a strict equilibrium, then X_{x*} is simply the singleton {x*}, and the foregoing argument suffices to prove local stability.^17

Now, assume that x* is neither in the interior nor at a vertex of X, and suppose that we confine our attention to the behavior of the dynamic on face X_{x*} in the vicinity of x*. One way to do so is to consider a restricted game in which only the strategies in the support of x* are available. Condition (9) can be interpreted as saying that this restricted game resembles a stable game near x*. By this logic, the Lyapunov functions from Hofbauer and Sandholm (2009) can be used to establish convergence to x* from nearby initial conditions on face X_{x*}.

To prove Theorem 1, we need to construct local Lyapunov functions whose value decreases not only along solutions on face X_{x*}, but also along solutions in the interior of X starting near x*. To do so, we augment each of the Lyapunov functions listed above by an additional term. For impartial pairwise comparison dynamics, we replace the original Lyapunov function with the function

\Psi_{x^*}(x) = \Psi(x) + C\, \Upsilon_{x^*}(x),

where Υ_{x*}(x) = Σ_{j ∉ S(x*)} x_j is the mass placed on strategies outside the support of x*, and C > 0 is a constant. For the other classes of dynamics, the term C Υ_{x*}(x) is added to the functions Γ and M above.

When the current state x is in the interior of X, the value of Ψ need not decrease. However, if x is close to x*, then the value of Υ_{x*} does decrease, because agents playing strategies outside of S(x*) switch to strategies in S(x*), though the rate of decrease of Υ_{x*} approaches zero as the state approaches the boundary of X.

^17 In particular, the function Υ_{x*} introduced below is a local Lyapunov function at x* in this case: see Lemmas 1(ii), 2(ii), and 3(ii) in the Appendix.
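For the Smith and BNN specifications (φ_j(d) = [d]_+), the functions Ψ and Γ have simple closed forms, and M requires no protocol at all. A short sketch (Python with NumPy; names illustrative) that evaluates them, together with the mass Υ_{x*}, at a given state:

```python
import numpy as np

def lyapunov_values(x, pi, support_star):
    """Candidate Lyapunov function values at state x for phi_j(d) = [d]_+.

    pi           : payoff vector F(x)
    support_star : boolean mask of S(x*), the support of the regular ESS
    Returns (Psi, Gamma, M, Upsilon), with Upsilon the mass outside S(x*)."""
    gains = np.maximum(pi[None, :] - pi[:, None], 0.0)   # [F_j(x) - F_i(x)]_+
    Psi = float(x @ (0.5 * gains ** 2).sum(axis=1))      # (10) with psi_j(d) = d^2/2 on d >= 0
    excess = np.maximum(pi - x @ pi, 0.0)                # [Fhat_j(x)]_+
    Gamma = float(0.5 * (excess ** 2).sum())             # (11) with phi_j = [.]_+
    M = float(pi.max() - x @ pi)                         # max_y (y - x)'F(x)
    Upsilon = float(x[~support_star].sum())              # mass outside S(x*)
    return Psi, Gamma, M, Upsilon
```

The augmented functions used in the proof are then Psi + C * Upsilon, Gamma + C * Upsilon, and M + C * Upsilon for a suitably large constant C.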
To prove the theorem, we must show that if the constant C is large enough, then the value of C Υ_{x*} always falls fast enough to compensate for any growth in the value of Ψ, so that all told, the value of Ψ_{x*} falls. The arguments required to accomplish this are somewhat different for each of the three classes of dynamics considered in the theorem: see Lemmas 1, 2, and 3 in the Appendix.

The argument just described requires precise estimates of the behavior of Ψ, Γ, M, and Υ_{x*} in a neighborhood of x*, and these estimates take full advantage of the regular ESS conditions (7) and (8). Still, it seems possible that the conclusions of Theorem 1 could be extended to cases where x* only satisfies the ESS conditions, (5) and (6). One can prove by elementary arguments that when F has just two strategies, any ESS is locally stable under the dynamics considered in Theorem 1. Moreover, a close examination of the proof of Theorem 1(iii) reveals that in the case of the best response dynamic, the quasistrictness condition (7) from the definition of regular ESS can be replaced with the weaker requirement

there is a neighborhood O ⊆ X of x* such that for all y ∈ O − {x*}, F_i(y) ≥ F_j(y) whenever x*_i > 0 and x*_j = 0, with F_i(y) > F_j(y) if y_j > 0,

which allows for unused optimal strategies both at the equilibrium x* and elsewhere on faces of X that contain x*. Whether the conclusions of Theorem 1 can be extended to all ESSs of arbitrary population games is an open question.

4.3 Comparison to stability analysis for imitative dynamics

The proofs of asymptotic stability of regular ESS under pairwise comparison dynamics, excess payoff dynamics, and the best response dynamic, though differing in their details, all follow the same basic path, using augmented versions of Lyapunov functions for stable games. In contrast, Cressman's (1997) proof of asymptotic stability of regular ESS under imitative dynamics employs a different approach based on linearization. Here we present a version of Cressman's (1997) analysis, and contrast it with the analysis of direct evaluation dynamics proposed here.

Given a dynamic ẋ = V(x) defined on a full-dimensional set in R^n, the asymptotic stability of the rest point x* can be established by showing that all eigenvalues of the derivative matrix DV(x*) have negative real parts. Because the state space for evolutionary game dynamics is the simplex X, the local stability of the rest point x* can be established by showing that the relevant eigenvalues of DV(x*) have negative real parts, where by relevant eigenvalues we mean those that correspond to eigenvectors in (the complexification of) the (n − 1)-dimensional tangent space TX = {z ∈ R^n : z'1 = 0}. If we let Π = I − (1/n)11' ∈ R^{n×n} denote the orthogonal projection of R^n onto TX, then these relevant eigenvalues are also the eigenvalues of ΠDV(x*) that correspond to eigenvectors in TX.
Let us write the general imitative dynamic (3) as

\dot{x} = V(x) = \operatorname{diag}(x)\, G(x),   (12)

where the C^1 function G satisfies monotone percentage growth rates (4). To ensure that the simplex is forward invariant under (12), we must have V(x) ∈ TX or, equivalently,

x' G(x) = 0.   (13)

Notice that if we set G = F̂, then (12) becomes the replicator dynamic (1).

We can compute the derivative matrix of V at state x as

DV(x) = Q(x)\, DG(x) - x\, G(x)' + \operatorname{diag}(G(x)),   (14)

where Q(x) = diag(x) − xx' ∈ R^{n×n}. If x* ∈ int(X) is an interior Nash equilibrium, then the monotonicity condition (4) and the identity (13) imply that G(x*) = 0, and hence that

DV(x^*) = Q(x^*)\, DG(x^*).   (15)

Because Q(x) is symmetric, positive definite with respect to TX × TX, and maps 1 to 0, a lemma of Hines (1980) implies that the relevant eigenvalues of DV(x*) will have negative real parts if DG(x*) is negative definite with respect to TX × TX.

Now suppose that x* ∈ int(X) is a regular ESS. Then by Observation 1, DF(x*) is negative definite with respect to TX × TX. To take advantage of this fact, Cressman (1997) proves that

DG(x^*) = c\, DF(x^*) \ \text{for some} \ c \geq 0.   (16)

That is, at any interior Nash equilibrium, the linearization of any monotone percentage-growth-rate function G is a multiple of the linearization of the payoff function F. Together, (15), (16), and the lemma of Hines (1980) imply that in nondegenerate cases (c ≠ 0), an interior regular ESS is asymptotically stable.

Suppose next that x* is a boundary Nash equilibrium with support {1, ..., n*}, where n* < n. In this case, the upper left n* × n* block of DV(x*) is an n* × n* analogue of equation (15). Thus, if x* is a regular ESS, a version of the argument above shows that in nondegenerate cases, this block generates n* − 1 relevant eigenvalues of DV(x*) with a negative real part. The lower right block of DV(x*), whose source is the third summand in (14), is a diagonal matrix with diagonal entries {G_j(x*)}_{j=n*+1}^{n}; each of these n − n* entries can be shown to be a relevant eigenvalue of DV(x*). However, if x* is a regular ESS, and thus a quasistrict equilibrium (7), then monotonicity condition (4) implies that G_j(x*) < 0 for all j > n*. Therefore, in nondegenerate cases, all (n* − 1) + (n − n*) = n − 1 relevant eigenvalues of DV(x*) have negative real parts, implying that the regular ESS x* is asymptotically stable.
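The linearization argument can be checked numerically for the replicator dynamic (G = F̂) at an interior regular ESS. The sketch below (Python with NumPy; the "good Rock-Paper-Scissors" game and the finite-difference Jacobian are illustrative choices, not taken from the paper) computes the relevant eigenvalues, i.e., the eigenvalues of the restriction of DV(x*) to TX.

```python
import numpy as np

# "Good RPS": wins pay 2, losses cost 1, so the interior equilibrium
# x* = (1/3, 1/3, 1/3) is a regular ESS (the symmetric part of DF = A
# is negative definite on TX).
A = np.array([[0.0, -1.0, 2.0],
              [2.0, 0.0, -1.0],
              [-1.0, 2.0, 0.0]])
x_star = np.full(3, 1.0 / 3.0)

def V(x):
    # Replicator dynamic: V_i(x) = x_i * (F_i(x) - Fbar(x)) with F(x) = Ax
    pi = A @ x
    return x * (pi - x @ pi)

def jacobian(f, x, h=1e-6):
    # Central finite-difference Jacobian (illustrative, not high precision)
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

# Orthonormal basis W of TX = {z : sum z = 0}; the relevant eigenvalues of
# DV(x*) are the eigenvalues of W' DV(x*) W.
W, _ = np.linalg.qr(np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]]))
DV = jacobian(V, x_star)
print(np.linalg.eigvals(W.T @ DV @ W))   # a complex pair with negative real parts
```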
Cressman's (1997) analysis of imitative dynamics and our analysis of direct evaluation dynamics have some broad features in common. In both cases, the negative definiteness of DF(x*) is used to control the growth of strategies in the support of x*, and the quasistrictness of x* is used to show that small incursions by unused strategies are eliminated. Thus, in both cases, the full strength of regular ESS is called upon to establish local stability.

In most other respects, though, the analyses are quite different. Cressman's (1997) analysis takes advantage of the simple functional form (12) of imitative dynamics, and relies on the surprising fact that all of these dynamics behave similarly in the neighborhood of an equilibrium. Moreover, this analysis requires the assumption that the dynamic is nondegenerately differentiable: when condition (16) holds with c ≠ 0, linearization allows us to separate the analyses of strategies inside and outside the support of x*.

Our analysis of direct evaluation dynamics differs on each of these points. The dependence of the Lyapunov functions (10) and (11) on the revision protocol φ suggests that different dynamics exhibit different behavior in the vicinity of x*. Moreover, because all of the basic examples of direct evaluation dynamics (Smith, BNN, best response) are not differentiable, our stability analysis must be based on Lyapunov functions. Because we do not use linearization, we cannot cleanly separate the analyses of strategies inside and outside of the support of x*. Indeed, the role of Lemmas 1, 2, and 3 is to show that the forces that equilibrate the use of strategies in the support of x* are not too badly compromised as the strategies outside the support of x* are being eliminated.

An alternate possibility for unifying the analyses of imitative and direct evaluation dynamics is to construct Lyapunov functions for the former dynamics. In the case of the replicator dynamic, it is well known from the work of Hofbauer et al. (1979), Zeeman (1980), and Hofbauer and Sigmund (1988) that the function

H_{x^*}(x) = \sum_{i \in S(x^*)} x^*_i \log \frac{x^*_i}{x_i}

serves as a local Lyapunov function for the ESS x*. However, this function seems unrelated to the Lyapunov functions considered in this paper, and no generalizations of this function for other imitative dynamics are known.

5. Multipopulation games

In this section, we explain how our local stability results extend to multipopulation settings. Before proceeding, we should briefly discuss a well-known result of Selten (1980) (see also Weibull 1995) that shows that in multipopulation games defined by random matching, various formulations of ESS are equivalent to strict equilibrium. More broadly, this equivalence between ESS and strict equilibrium holds in any multipopulation game in which each agent's payoffs are independent of the choices of other agents in his own population. Although this form of payoff independence is always present in random matching contexts, it seems rather exceptional in "playing the field" models, in
which agents' payoffs depend directly on opponents' aggregate behavior.^18 In such contexts, which are common in applications of population games, the notions of ESS introduced below are not especially more restrictive than those for the single-population setting.

To define a multipopulation game, we suppose that there are p̄ > 1 populations of agents, with population p ∈ P = {1, ..., p̄} having mass m^p > 0. Agents in population p choose pure strategies from the set S^p = {1, ..., n^p}, and the total number of pure strategies available in all populations is n = Σ_{p∈P} n^p. Aggregate behavior in population p is represented by a population state in X^p = {x^p ∈ R^{n^p}_+ : Σ_{i∈S^p} x^p_i = m^p}, where x^p_i ∈ R_+ represents the mass of players in population p choosing strategy i ∈ S^p. Elements of X = ∏_{p∈P} X^p = {x = (x^1, ..., x^{p̄}) ∈ R^n_+ : x^p ∈ X^p}, the set of social states, describe behavior in all p̄ populations at once. The tangent spaces of X^p and X are TX^p = {z^p ∈ R^{n^p} : Σ_{i∈S^p} z^p_i = 0} and TX = ∏_{p∈P} TX^p, respectively.

We identify a multipopulation game with its C^1 payoff function F: X → R^n. The component F^p_i: X → R denotes the payoff function for strategy i ∈ S^p, whereas F^p: X → R^{n^p} denotes the payoff functions for all strategies in S^p. To interpret the definitions below, note that inner products of elements of R^n are defined via sums that range over all strategies in all populations: for instance, the aggregate payoff over all p̄ populations at social state x ∈ X is given by x'F(x) = Σ_{p∈P} Σ_{i∈S^p} x^p_i F^p_i(x).

In this context, a strictly stable game is a game whose derivative matrices satisfy the (multipopulation) negative definiteness condition

z' DF(x) z < 0 \ \text{for all nonzero} \ z \in TX \ \text{and all} \ x \in X.   (17)

The Lyapunov functions introduced in Hofbauer and Sandholm (2009) continue to serve in multipopulation games in which this condition holds. To use these functions as the starting point for a local stability analysis, we follow Taylor (1979) and use a definition of ESS that, like condition (17), makes use of payoff comparisons that aggregate over all populations. In particular, we call x* ∈ X a Taylor ESS if it is a Nash equilibrium that satisfies

there is a neighborhood O ⊆ X of x* such that for all y ∈ O − {x*}, (y − x*)'F(x*) = 0 implies that (y − x*)'F(y) < 0.   (18)

Strengthening this definition slightly, we call x* a regular Taylor ESS if it is a quasistrict equilibrium that satisfies

for all y ∈ X − {x*}, (y − x*)'F(x*) = 0 implies that (y − x*)'DF(x*)(y − x*) < 0.   (19)

By extending arguments from Taylor and Jonker (1978), Taylor (1979) shows that any regular Taylor ESS is asymptotically stable under the (standard) multipopulation replicator dynamic.

^18 For instance, in multipopulation models of traffic congestion, each population corresponds to an origin/destination pair. Clearly, the delays a driver experiences should depend not only on the behavior of drivers with other origin/destination pairs, but also on the behavior of other drivers with the same origin/destination pair.
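Conditions (17) and (19) are ordinary negative definiteness requirements once a basis of the multipopulation tangent space TX is chosen. A small sketch (Python with NumPy; the two-population game below is a made-up "playing the field" illustration, not from the paper):

```python
import numpy as np

# Two unit-mass populations with two strategies each. Payoffs combine
# congestion-style own-population effects with zero-sum cross effects:
#   F^1(x) = -x^1 + A x^2,   F^2(x) = -x^2 - A' x^1,
# so DF(x) is the constant block matrix [[-I, A], [-A', -I]].
A = np.array([[1.0, -2.0],
              [0.5, 3.0]])
I2 = np.eye(2)
DF = np.block([[-I2, A],
               [-A.T, -I2]])

# Orthonormal basis of TX = TX^1 x TX^2 (each factor is spanned by (1, -1)).
u = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2)
v = np.array([0.0, 0.0, 1.0, -1.0]) / np.sqrt(2)
W = np.column_stack([u, v])

# Conditions (17)/(19): the symmetric part of DF must be negative definite on TX.
H = W.T @ (DF + DF.T) @ W / 2.0
print(np.linalg.eigvalsh(H))   # both eigenvalues equal -1, so the condition holds
```

Note that the strictly negative own-population terms are what make the aggregate condition hold strictly: with purely cross-population (random matching) effects, z'DF(x)z vanishes for any z that perturbs only one population, in line with the discussion of Selten's (1980) result above.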
Similarly, a simple extension of our analysis shows that a regular Taylor ESS is asymptotically stable under multipopulation versions of all of the dynamics considered in Theorem 1.^19

The inequalities in conditions (18) and (19) require that the aggregate payoffs of the p̄ incumbent populations exceed the aggregate payoffs of the p̄ invading populations. A less demanding solution concept can be obtained by requiring only that there be at least one population p ∈ P in which incumbents outperform invaders:

there is a neighborhood O ⊆ X of x* such that for all y ∈ O − {x*}, there is a p ∈ P such that (y^p − x^{*p})'F^p(x*) = 0 implies that (y^p − x^{*p})'F^p(y) < 0.   (20)

We call a Nash equilibrium that satisfies condition (20) a Cressman ESS.^20

As we noted at the beginning of the paper, Maynard Smith and Price's (1973) definition of ESS serves as a stability condition in two distinct single-population contexts: the monomorphic, mixed-strategist framework these authors envisioned, and the polymorphic, pure-strategist framework first studied by Taylor and Jonker (1978). In multipopulation settings, both of these roles cannot be played by a single definition. Cressman (1992, 2006) and Cressman et al. (2001) argue convincingly that to capture stability in the multipopulation analogue of Maynard Smith and Price's (1973) mixed-strategist framework, the appropriate solution concept is Cressman ESS.^21 However, following Taylor (1979), we have argued here that in the context of multipopulation pure-strategist dynamics, it is the more demanding notion of Taylor ESS that provides a general sufficient condition for local stability.

6. Discussion

We study the evolution of behavior in population games, assuming that agents employ revision protocols based on the direct evaluation of alternative strategies. We show that under three classes of dynamics generated by such protocols (impartial pairwise comparison dynamics, separable excess payoff dynamics, and the best response dynamic) any regular ESS is locally stable. To conclude the paper, we now discuss some of the assumptions maintained throughout the analysis and suggest directions for future research.

^19 In defining these multipopulation dynamics, we can allow different populations to employ different revision protocols from the same class (e.g., from the class of protocols that defines impartial pairwise comparison dynamics). Among other things, this flexibility can be used to offset differences in payoff scales across populations.
^20 This concept is called monomorphic ESS in Cressman (1992), and N-species ESS in Cressman et al. (2001) and Cressman (2006). A related notion for two-population games is that of a Nash-Pareto pair; see Hofbauer and Sigmund (1988).
^21 Cressman (1992, 2006) and Cressman et al. (2001) consider a collection of p̄-dimensional replicator systems, with one system for each strategy profile y other than the candidate for stability, x*. The pth component of the state variable in the p̄-dimensional system describes the fraction of the pth population using the invading mixed strategy y^p; the remainder of the population uses the incumbent mixed strategy x^{*p}. It is shown that the origin (i.e., the state at which all members of each population p choose the incumbent mixed strategy x^{*p}) is asymptotically stable in each such system if and only if x* is a Cressman ESS.
All of our analysis is conducted in a population game framework in which the payoff F_i(x), which is used as an input of the agents' revision protocols, represents a realized payoff. This framework includes congestion games and similar models of multilateral externalities, the "playing the field" models of Maynard Smith (1982), and contests generated by the deterministic matching of all pairs of agents in normal form games. If F were generated instead by a single round of random matching in the normal form game A, then F_i(x) = (Ax)_i would represent the expected payoff to strategy i. In this case, agents would need to observe the population state and compute expected payoffs in order to employ the revision protocols studied here. Because these requirements seem rather stringent for evolutionary modeling, it would be more natural in this context to assume that each agent employs a revision protocol that conditions directly on the outcomes of his random matches. Although Schlag (1998) derives the replicator dynamic in such an environment, little work has been done to formulate direct evaluation dynamics in this setting, leaving a promising avenue for future research.

The present paper studies local stability in recurrent play of simultaneous-move games. Understanding local stability in recurring sequential-move games is a topic of clear importance, but it introduces a number of complications to the analysis. Deriving evolutionary dynamics for extensive form games from a specification of individual behavior necessitates the use of revision protocols that incorporate agents' assessments of the consequences of diverting play to unreached information sets. Because dynamics developed for simultaneous move games do not account for such possibilities, it is not surprising that these dynamics can exhibit unexpected properties in extensive form games, or that few general stability results exist.^22 Finally, because unreached information sets generate components of observationally equivalent equilibria, set-valued analogues of the ESS concept can be expected to play a basic role in studies of local stability for extensive form games.^23 Whether the analyses developed in Cressman (1997) and in the present paper can be adapted to dynamics for extensive form games is thus a difficult open question.

Appendix

Proof of Theorem 1(i). Let ẋ = V^F(x) be an impartial pairwise comparison dynamic for F defined by revision protocol ρ_ij(π, x) = φ_j(π_j − π_i), and let ψ_k(d) = ∫_0^d φ_k(s) ds be the definite integral of φ_k. Define the C^1 function Ψ_{x*}: X → R by

\Psi_{x^*}(x) = \Psi(x) + C\, \Upsilon_{x^*}(x) = \sum_{i \in S} \sum_{j \in S} x_i\, \psi_j(F_j(x) - F_i(x)) + C \sum_{j \notin S(x^*)} x_j,

^22 Cressman (2003) provides a thorough account of these issues and takes the first steps toward defining deterministic dynamics that respect extensive form structure.
^23 For set-valued versions of ESS, see Thomas (1985), Swinkels (1992), and Balkenborg and Schlag (2001, 2007).
where the constant C > 0 is determined later. Hofbauer and Sandholm (2009) observe that the function Ψ is nonnegative, with Ψ(x) = 0 if and only if x ∈ NE(F). It follows that Ψ_{x*} too is nonnegative, with Ψ_{x*}(x) = 0 if and only if x is a Nash equilibrium of F with support(x) ⊆ support(x*). Because x* is an ESS, it is isolated in the set of Nash equilibria (see Bomze and Weibull 1995), so there is a neighborhood O of x* on which x* is the unique zero of Ψ_{x*}. If we can show that there is also a neighborhood O' of x* such that Ψ̇_{x*}(x) < 0 for all x ∈ O' − {x*}, then Ψ_{x*} is a strict local Lyapunov function for x*, implying that x* is asymptotically stable (see Weibull 1995).

To reduce the amount of notation in the analysis to come, let 1_0 ∈ R^n be the vector whose jth component equals 0 if j ∈ support(x*) and equals 1 otherwise, so that 1_0'x is the mass of agents who use strategies outside the support of x* at state x. Then we can write Ψ_{x*}(x) = Ψ(x) + C 1_0'x, and so can express the time derivative of Ψ_{x*} as

\dot{\Psi}_{x^*}(x) = \dot{\Psi}(x) + C\, 1_0'\dot{x}.

In the course of establishing a global stability result for stable games, Hofbauer and Sandholm (2009) show that the time derivative of Ψ satisfies

\dot{\Psi}(x) \leq \dot{x}' DF(x)\, \dot{x},

with equality holding precisely at the Nash equilibria of F. To finish the proof of Theorem 1(i), it is enough to show that ẋ'DF(x)ẋ + C 1_0'ẋ ≤ 0 for all x ∈ O' − {x*}. This follows directly from the following lemma and choosing C ≥ M/N.

Lemma 1. Let ẋ = V^F(x) be a pairwise comparison dynamic for F and let x* be a regular ESS of F. Then there is a neighborhood O of x* and constants M, N > 0 such that for all x ∈ O,
(i) ẋ'DF(x)ẋ ≤ M 1_0'x;
(ii) 1_0'ẋ ≤ −N 1_0'x.

Proof. Suppose without loss of generality that S(x*) = support(x*) is given by {1, ..., n*}. Then to complement 1_0 ∈ R^n, let 1_* ∈ R^n be the vector whose first n* components equal 1 and whose remaining components equal 0, so that 1_* = 1 − 1_0. Next, decompose the identity matrix I as I_* + I_0, where I_* = diag(1_*) and I_0 = diag(1_0), and, finally, decompose I_* as Π_* + Λ_*, where Λ_* = (1/n*) 1_*(1_*)' and Π_* = I_* − Λ_*. Notice that Π_* is the orthogonal projection of R^n onto TX ∩ R^n_{S(x*)} = {z ∈ R^n : Σ_{j∈S} z_j = 0 and z_j = 0 whenever j ∉ S(x*)}, and that I = Π_* + Λ_* + I_0. Note also that if x* ∈ int(X), then this decomposition becomes I = Π + Λ + 0, where Π = I − (1/n)11' and Λ = (1/n)11' are the orthogonal projections of R^n onto TX = {z ∈ R^n : Σ_{j∈S} z_j = 0} and span({1}), respectively.
Using this decomposition of the identity matrix, we can write

\dot{x}' DF(x)\dot{x} = \big((\Pi_* + \Lambda_* + I_0)\dot{x}\big)' DF(x)\big((\Pi_* + \Lambda_* + I_0)\dot{x}\big)
  = (\Pi_*\dot{x})' DF(x)(\Pi_*\dot{x}) + \big((\Lambda_* + I_0)\dot{x}\big)' DF(x)\dot{x} + (\Pi_*\dot{x})' DF(x)\big((\Lambda_* + I_0)\dot{x}\big).   (21)

Because x* is a regular ESS, we know that z'DF(x*)z < 0 for all nonzero z ∈ TX ∩ R^n_{S(x*)}. Thus, because DF(x) is continuous in x, there is a neighborhood Ô of x* on which the first term of (21) is nonpositive. Turning to the second term, note that because 1'ẋ = 0 and (1_0)' = 1'I_0, we have that

(\Lambda_* + I_0)\dot{x} = \Big(\tfrac{1}{n^*}1_*(1_*)' + I_0\Big)\dot{x} = \Big(-\tfrac{1}{n^*}1_*(1_0)' + I_0\Big)\dot{x} = \Big(\big(I - \tfrac{1}{n^*}1_*1'\big)I_0\Big)\dot{x}.

Let ||A|| denote the spectral norm of the matrix A (see Horn and Johnson 1985, Example 5.6.6). Then applying the spectral norm inequalities ||Ax|| ≤ ||A|| ||x|| and ||AB|| ≤ ||A|| ||B||, and the Cauchy-Schwarz inequality x'y ≤ ||x|| ||y||, we find that

\big((\Lambda_* + I_0)\dot{x}\big)' DF(x)\dot{x} = \Big(\big(I - \tfrac{1}{n^*}1_*1'\big)I_0\dot{x}\Big)' DF(x)\dot{x} \leq \|I_0\dot{x}\| \, \Big\|I - \tfrac{1}{n^*}1_*1'\Big\| \, \|DF(x)\| \, \|\dot{x}\|.

Because DF(x), V^F(x), and ρ_ij(F(x), x) are continuous in x, and hence bounded, on the compact set X, we can find constants K and R such that

\Big\|I - \tfrac{1}{n^*}1_*1'\Big\| \, \|DF(x)\| \, \|\dot{x}\| \leq K \quad \text{and} \quad \max_{i,j \in S} \rho_{ij}(F(x), x) \leq R \quad \text{for all } x \in X.   (22)

Now because x* is a quasistrict equilibrium, we have that F_i(x*) = F̄(x*) > F_j(x*) for all i ∈ support(x*) = {1, ..., n*} and all j ∉ support(x*). Thus, because the protocol ρ_ij(π, x) = φ_j(π_j − π_i) is sign-preserving, we have ρ_ij(F(x*), x*) = 0 for such i and j, and because F is continuous, there is a neighborhood O ⊆ Ô of x* on which for such i and j we have F_i(x) > F_j(x), and hence ρ_ij(F(x), x) = 0. From this argument and the bound on ρ_ij in (22), it follows that for x ∈ O, we have

\|I_0\dot{x}\| = \sqrt{\sum_{j>n^*} \dot{x}_j^2} \leq \sum_{j>n^*} |\dot{x}_j|
  = \sum_{j>n^*} \Big|\sum_{k\in S} x_k \rho_{kj}(F(x),x) - x_j \sum_{k\in S} \rho_{jk}(F(x),x)\Big|
  \leq \sum_{j>n^*} \Big(\sum_{k\in S} x_k \rho_{kj}(F(x),x) + x_j \sum_{k\in S} \rho_{jk}(F(x),x)\Big)
  = \sum_{j>n^*} \sum_{k>n^*} x_k \rho_{kj}(F(x),x) + \sum_{j>n^*} x_j \sum_{k\in S} \rho_{jk}(F(x),x)
  \leq 2Rn \sum_{j>n^*} x_j = 2Rn\, 1_0'x.   (23)
We therefore conclude that at all x ∈ O,

\big((\Lambda_* + I_0)\dot{x}\big)' DF(x)\dot{x} \leq 2KRn\, 1_0'x.

Essentially the same argument provides a similar bound on the third term of (21), completing the proof of part (i) of the lemma.

We proceed with the proof of part (ii) of the lemma. Following the line of argument after equation (22) above, we note that because x* is quasistrict and because the pairwise comparison dynamic satisfies sign preservation, we have ρ_ji(F(x*), x*) > 0 and ρ_ij(F(x*), x*) = 0 whenever i ∈ support(x*) = {1, ..., n*} and j ∉ support(x*). So, because F and ρ are continuous, sign preservation implies that there is a neighborhood O of x* and an r > 0 such that ρ_ji(F(x), x) > r and ρ_ij(F(x), x) = 0 for all i ≤ n*, j > n*, and x ∈ O. Applying this observation and canceling like terms when both j and k are greater than n* in the sums below, we find that for all x ∈ O,

1_0'\dot{x} = \sum_{j>n^*} \dot{x}_j = \sum_{j>n^*}\Big(\sum_{k\in S} x_k \rho_{kj}(F(x),x) - x_j \sum_{k\in S} \rho_{jk}(F(x),x)\Big)
  = \sum_{j>n^*}\Big(\sum_{k>n^*} x_k \rho_{kj}(F(x),x) - x_j \sum_{k\in S} \rho_{jk}(F(x),x)\Big)
  = -\sum_{j>n^*} x_j \sum_{i\leq n^*} \rho_{ji}(F(x),x) \leq -r n^*\, 1_0'x.   (24)

This completes the proof of the lemma.

The lemma completes the proof of Theorem 1(i).

Proof of Theorem 1(ii). Let ẋ = V^F(x) be a separable excess payoff dynamic for F defined by revision protocol ρ_ij(π, x) = φ_j(π_j − x'π). Define the C^1 function Γ_{x*}: X → R by

\Gamma_{x^*}(x) = \Gamma(x) + C\, \Upsilon_{x^*}(x) = \sum_{i \in S} \int_0^{\hat{F}_i(x)} \phi_i(s)\, ds + C \sum_{j \notin S(x^*)} x_j,
where the constant C > 0 is determined later. Hofbauer and Sandholm (2009) show that the function Γ is nonnegative, with Γ(x) = 0 if and only if x ∈ NE(F), and that

\dot{\Gamma}(x) \leq \dot{x}' DF(x)\dot{x},

with equality holding precisely at the Nash equilibria of F. Therefore, the result follows if we can show that ẋ'DF(x)ẋ + C 1_0'ẋ ≤ 0 for all x ∈ O − {x*}. This inequality follows directly from the following lemma, choosing C ≥ K.

Lemma 2. Let ẋ = V^F(x) be the separable excess payoff dynamic for F defined by revision protocol ρ_ij(π, x) = φ_j(π_j − x'π), and let x* be a regular ESS of F. Then there is a neighborhood O of x* and a K > 0 such that for all x ∈ O, we have
(i) ẋ'DF(x)ẋ ≤ K T(x) 1_0'x;
(ii) 1_0'ẋ = −T(x) 1_0'x,
where T(x) = Σ_{j∈S} φ_j(F̂_j(x)).

Proof. The proof of this lemma follows the same lines as that of Lemma 1, but with inequalities (23) and (24) replaced by (26) and (25) below.

Suppose again that support(x*) = {1, ..., n*}. Because x* is quasistrict, we have that F_i(x*) = F̄(x*) > F_j(x*) for all i ∈ support(x*) = {1, ..., n*} and all j ∉ support(x*). Thus, because the protocol ρ_ij(π, x) = φ_j(π_j − x'π) is sign-preserving, we have φ_j(F̂_j(x*)) = 0 whenever j ∉ support(x*), and because F is continuous, there is a neighborhood O ⊆ Ô of x* on which for such j we have that F̄(x) > F_j(x), and hence that φ_j(F̂_j(x)) = 0. This implies in turn that for x ∈ O, we have

1_0'\dot{x} = \sum_{j>n^*} \dot{x}_j = \sum_{j>n^*}\Big(\sum_{k\in S} x_k \rho_{kj}(F(x),x) - x_j \sum_{k\in S} \rho_{jk}(F(x),x)\Big)
  = \sum_{j>n^*}\Big(\phi_j(\hat{F}_j(x)) - x_j \sum_{k\in S} \phi_k(\hat{F}_k(x))\Big)
  = -\sum_{j>n^*} x_j \sum_{k\in S} \phi_k(\hat{F}_k(x)) = -T(x)\, 1_0'x,   (25)

and so

\|I_0\dot{x}\| = \sqrt{\sum_{j>n^*} \dot{x}_j^2} \leq \sum_{j>n^*} |\dot{x}_j| = -\sum_{j>n^*} \dot{x}_j = T(x)\, 1_0'x.   (26)
The proof otherwise follows that of Lemma 1.

The proof of Theorem 1(ii) is now complete.

Proof of Theorem 1(iii). Let ẋ ∈ V^F(x) = arg max_{y∈X} y'F(x) − x be the best response dynamic for F. Define the Lipschitz continuous function

M_{x^*}(x) = M(x) + C\, \Upsilon_{x^*}(x) = \max_{y \in X} (y - x)'F(x) + C \sum_{j \notin S(x^*)} x_j.

Hofbauer (2001) and Hofbauer and Sandholm (2009) show that M is nonnegative, with M(x) = 0 if and only if x ∈ NE(F), and that along each solution trajectory {x_t}_{t≥0}, we have

\dot{M}(x_t) = \dot{x}_t' DF(x_t)\dot{x}_t - M(x_t)

for almost all t ≥ 0. Therefore, by standard results on Lyapunov functions for differential inclusions (see Theorems A.2 and A.3 of Hofbauer and Sandholm 2009), the result will follow if we show that ẋ'DF(x)ẋ + C 1_0'ẋ ≤ 0 for all x ∈ O − {x*} and all ẋ ∈ V^F(x). This inequality follows from the following lemma, setting C ≥ K.

Lemma 3. Let ẋ ∈ V^F(x) = arg max_{y∈X} y'F(x) − x be the best response dynamic for F, and let x* be a regular ESS of F. Then there is a neighborhood O of x* and a K > 0 such that for all x ∈ O and ẋ ∈ V^F(x), we have
(i) ẋ'DF(x)ẋ ≤ K 1_0'x;
(ii) 1_0'ẋ = −1_0'x.

Proof. The proof of this lemma follows the same lines as that of Lemma 1, but with inequalities (23) and (24) replaced by (28) and (27) below.

Suppose once again that support(x*) = {1, ..., n*}. If we choose the neighborhood O as we did in the proof of Lemma 2, then no j > n* is optimal at any x ∈ O. Thus, for such x and any ẋ ∈ V^F(x), we have

1_0'\dot{x} = \sum_{j>n^*} \dot{x}_j = -\sum_{j>n^*} x_j = -1_0'x,   (27)

and hence

\|I_0\dot{x}\| = \sqrt{\sum_{j>n^*} \dot{x}_j^2} \leq \sum_{j>n^*} |\dot{x}_j| = \sum_{j>n^*} x_j = 1_0'x.   (28)

The proof otherwise follows that of Lemma 1.

The proof of Theorem 1(iii) is now complete.
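The logic of these proofs can be checked numerically on a small example: take a linear game with a boundary regular ESS, simulate the Smith dynamic from a nearby interior state, and track the augmented Lyapunov function Ψ_{x*}(x) = Ψ(x) + C Υ_{x*}(x). The sketch below (Python with NumPy; the game, the step size, and the constant C are illustrative choices, not taken from the paper):

```python
import numpy as np

# A linear game with a boundary regular ESS x* = (2/3, 1/3, 0): on the face
# spanned by strategies 1 and 2 the game is negative definite, and strategy 3
# earns 0 < 2/3 at x*, so x* is quasistrict.
A = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
x_star = np.array([2.0 / 3.0, 1.0 / 3.0, 0.0])
outside = x_star == 0.0                       # strategies outside S(x*)

def smith(x):
    pi = A @ x
    D = np.maximum(pi[:, None] - pi[None, :], 0.0)   # D[i, j] = [F_i - F_j]_+
    return D @ x - x * D.sum(axis=0)

def psi_aug(x, C=10.0):
    # Augmented Lyapunov function Psi_{x*} = Psi + C * Upsilon_{x*},
    # with phi_j(d) = [d]_+, so psi_j(d) = ([d]_+)^2 / 2.
    pi = A @ x
    gains = np.maximum(pi[None, :] - pi[:, None], 0.0)  # [F_j - F_i]_+
    return float(x @ (0.5 * gains ** 2).sum(axis=1) + C * x[outside].sum())

x, dt = np.array([0.6, 0.3, 0.1]), 0.01
for step in range(501):
    if step % 100 == 0:
        print(f"t = {step * dt:3.0f}  Psi_aug = {psi_aug(x):.6f}")
    x = x + dt * smith(x)     # Euler step; illustrative, not a careful ODE solver
# The printed values should fall toward zero, as Theorem 1(i) predicts.
```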
References

Balkenborg, Dieter and Karl H. Schlag (2001), Evolutionarily stable sets. International Journal of Game Theory, 29. [42]

Balkenborg, Dieter and Karl H. Schlag (2007), On the evolutionary selection of sets of Nash equilibria. Journal of Economic Theory, 133. [42]

Beckmann, Martin, C. B. McGuire, and Christopher B. Winsten (1956), Studies in the Economics of Transportation. Yale University Press, New Haven. [30]

Benaïm, Michel, Josef Hofbauer, and Sylvain Sorin (2005), Stochastic approximation and differential inclusions. SIAM Journal on Control and Optimization, 44. [33]

Björnerstedt, Jonas and Jörgen W. Weibull (1996), Nash equilibrium and evolution by imitation. In The Rational Foundations of Economic Behavior (Kenneth J. Arrow et al., eds.), St. Martin's Press, New York. [28, 31, 32]

Bomze, Immanuel M. and Jörgen W. Weibull (1995), Does neutral stability imply Lyapunov stability? Games and Economic Behavior, 11. [33, 43]

Brown, George W. and John von Neumann (1950), Solutions of games by differential equations. In Contributions to the Theory of Games I, volume 24 of Annals of Mathematics Studies (Harold W. Kuhn and Albert W. Tucker, eds.), 73–79, Princeton University Press, Princeton. [29, 32]

Cressman, Ross (1992), The Stability Concept of Evolutionary Game Theory: A Dynamic Approach, volume 94 of Lecture Notes in Biomathematics. Springer, Berlin. [27, 41]

Cressman, Ross (1997), Local stability of smooth selection dynamics for normal form games. Mathematical Social Sciences, 34. [28, 29, 35, 37, 38, 39, 42]

Cressman, Ross (2003), Evolutionary Dynamics and Extensive Form Games. MIT Press, Cambridge. [42]

Cressman, Ross (2006), Uninvadability in N-species frequency models for resident-mutant systems with discrete or continuous time. Theoretical Population Biology, 69. [27, 41]

Cressman, Ross, József Garay, and Josef Hofbauer (2001), Evolutionary stability concepts for N-species frequency-dependent interactions. Journal of Theoretical Biology, 211. [27, 41]

Friedman, Daniel (1991), Evolutionary games in economics. Econometrica, 59. [29]

Gilboa, Itzhak and Akihiko Matsui (1991), Social stability and equilibrium. Econometrica, 59. [28, 33]

Gorodeisky, Ziv (2008), Stochastic approximation of discontinuous dynamics. Unpublished paper, Hebrew University. [33]

Gorodeisky, Ziv (2009), Deterministic approximation of best-response dynamics for the Matching Pennies game. Games and Economic Behavior, 66. [33]

Hines, W. G. S. (1980), Three characterizations of population strategy stability. Journal of Applied Probability, 17. [28, 35, 38]

Hofbauer, Josef (1995a), Imitation dynamics for games. Unpublished paper, University of Vienna. [28, 31, 32]

Hofbauer, Josef (1995b), Stability for the best response dynamics. Unpublished paper, University of Vienna. [28, 33]

Hofbauer, Josef (2001), From Nash and Brown to Maynard Smith: Equilibria, dynamics, and ESS. Selection, 1. [28, 29, 33, 47]

Hofbauer, Josef and Ed Hopkins (2005), Learning in perturbed asymmetric games. Games and Economic Behavior, 52. [28]

Hofbauer, Josef and William H. Sandholm (2007), Survival of dominated strategies under evolutionary dynamics. Unpublished paper, University of Vienna and University of Wisconsin. [29]

Hofbauer, Josef and William H. Sandholm (2009), Stable games and their dynamics. Journal of Economic Theory, 144. [29, 35, 36, 40, 43, 46, 47]

Hofbauer, Josef, Peter Schuster, and Karl Sigmund (1979), A note on evolutionary stable strategies and game dynamics. Journal of Theoretical Biology, 81. [28, 39]

Hofbauer, Josef and Karl Sigmund (1988), The Theory of Evolution and Dynamical Systems. Cambridge University Press, Cambridge. [28, 39, 41]

Hopkins, Ed (1999), A note on best response dynamics. Games and Economic Behavior, 29. [28]

Horn, Roger A. and Charles R. Johnson (1985), Matrix Analysis. Cambridge University Press, Cambridge. [44]

Maynard Smith, John (1982), Evolution and the Theory of Games. Cambridge University Press, Cambridge. [42]

Maynard Smith, John and George R. Price (1973), The logic of animal conflict. Nature, 246. [27, 28, 33, 41]

Samuelson, Larry and Jianbo Zhang (1992), Evolutionary stability in asymmetric games. Journal of Economic Theory, 57. [29]

Sandholm, William H. (2001), Potential games with continuous player sets. Journal of Economic Theory, 97. [30]

Sandholm, William H. (2005), Excess payoff dynamics and other well-behaved evolutionary dynamics. Journal of Economic Theory, 124. [29, 33]

Sandholm, William H. (2007), Evolution in Bayesian games II: Stability of purified equilibria. Journal of Economic Theory, 136. [28]

Sandholm, William H. (2009), Evolutionary game theory. In Encyclopedia of Complexity and Systems Science (Robert A. Meyers, ed.), Springer, Heidelberg. [27]

Sandholm, William H. (2010), Pairwise comparison dynamics and evolutionary foundations for Nash equilibrium. Games, 1, 3–17. [29, 32]

Sandholm, William H., Emin Dokumacı, and Ratul Lahkar (2008), The projection dynamic and the replicator dynamic. Games and Economic Behavior, 64. [32]

Schlag, Karl H. (1998), Why imitate, and if so, how? A boundedly rational approach to multi-armed bandits. Journal of Economic Theory, 78. [28, 31, 42]

Selten, Reinhard (1980), A note on evolutionarily stable strategies in asymmetric animal conflicts. Journal of Theoretical Biology, 84. [39]

Skyrms, Brian (1990), The Dynamics of Rational Deliberation. Harvard University Press, Cambridge. [33]

Smith, Michael J. (1984), The stability of a dynamic model of traffic assignment – an application of a method of Lyapunov. Transportation Science, 18. [29, 32]

Swinkels, Jeroen M. (1992), Evolutionary stability with equilibrium entrants. Journal of Economic Theory, 57. [42]

Swinkels, Jeroen M. (1993), Adjustment dynamics and rational play in games. Games and Economic Behavior, 5. [33]

Taylor, Peter D. (1979), Evolutionarily stable strategies with two types of players. Journal of Applied Probability, 16. [40, 41]

Taylor, Peter D. and Leo B. Jonker (1978), Evolutionarily stable strategies and game dynamics. Mathematical Biosciences, 40. [28, 31, 33, 35, 40, 41]

Thomas, Bernhard (1984), Evolutionary stability: States and strategies. Theoretical Population Biology, 26. [28]

Thomas, Bernhard (1985), On evolutionarily stable sets. Journal of Mathematical Biology, 22. [42]

Weibull, Jörgen W. (1995), Evolutionary Game Theory. MIT Press, Cambridge. [28, 31, 32, 39, 43]

Weibull, Jörgen W. (1996), The mass action interpretation. Excerpt from "The work of John Nash in game theory: Nobel seminar, December 8." Journal of Economic Theory, 69. [29, 33]

Zeeman, E. Christopher (1980), Population dynamics from game theory. In Global Theory of Dynamical Systems (Evanston, 1979), volume 819 of Lecture Notes in Mathematics (Zbigniew Nitecki and Clark Robinson, eds.), Springer, Berlin. [28, 39]