# Infinitely Repeated Games with Discounting



Contents:

- Introduction
- Discounting the future
- Interpreting the discount factor
- The average discounted payoff
- Restricting strategies to subgames
- Appendix: Discounting Payoffs
    - In a hurry?
    - The infinite summation of the discount factors
    - An infinite summation starting late
    - The finite summation
    - Endless possibilities

## Introduction

We'll now discuss repeated games which are infinitely repeated. This need not mean that the game never ends, however. We will see that this framework is appropriate for modeling situations in which the game eventually ends (with probability one) but the players are uncertain about exactly when the last period is (and they always believe there's some chance the game will continue to the next period).

We'll call the stage game $G$ and interpret it to be a simultaneous-move matrix game which remains exactly the same through time. As usual we let the player set be $I=\{1,\dots,n\}$. Each player has a pure-action space $A_i$.¹ The space of action profiles is $A=\times_{i\in I}A_i$. Each player has a von Neumann-Morgenstern utility function defined over the outcomes of $G$, $g_i\colon A\to\mathbb{R}$.

The stage game repeats each period, starting at $t=0$. Although each stage game is a simultaneous-move game, so that each player acts in ignorance of what her opponent is doing that period, we make the "observable action" or "standard signaling" assumption that the play which occurs in each repetition of the stage game is revealed to all the players before the next repetition. Combined with perfect recall, this allows a player to condition her current action on all earlier actions of her opponents.

We can think of the players as receiving their stage-game payoffs period-by-period. Their repeated-game payoffs will be an additively separable function of these stage-game payoffs.
Right away we see a potential problem: there is an infinite number of periods and, hence, an infinite number of stage-game payoffs to be added up. In order that the players' repeated-game payoffs be well defined, we must ensure that this infinite sum does not blow up to infinity. We ensure the finiteness of the repeated-game payoffs by introducing discounting of future payoffs relative to earlier payoffs. Such discounting can be an expression of time preference and/or uncertainty about the length of the game. We introduce the average discounted payoff as a convenience which normalizes the repeated-game payoffs to be on the same scale as the stage-game payoffs.

© 1996 by Jim Ratliff, <http://virtualperfection.com/gametheory>.

¹ I want to reserve $s$ and $S$ to represent typical strategies and strategy spaces, respectively, in the repeated game. So I'm using $a$ and $A$ for stage-game actions and action spaces.

Infinite repetition can be key to obtaining behavior in the stage games which could not be equilibrium behavior if the game were played once or a known finite number of times. For example, finking every period by both players is the unique equilibrium in any finite repetition of the prisoners' dilemma.² When repeated an infinite number of times, however, cooperation in every period is an equilibrium if the players are sufficiently patient.

We first show that cooperation is a Nash equilibrium of the infinitely repeated game for sufficiently patient players. Then we discuss subgames of infinitely repeated games and show that, in our uniformly discounted payoff scenario, a subgame is the same infinitely repeated game as the original game. We derive how to restrict a repeated-game strategy to a subgame. This allows us to complete the analysis of the example by showing that cooperation in every period by both players is a subgame-perfect equilibrium of the infinitely repeated prisoners' dilemma for sufficiently patient players.

## Discounting the future

When we study infinitely repeated games we are concerned about a player who receives a payoff in each of infinitely many periods. In order to represent her preferences over various infinite payoff streams we want to meaningfully summarize the desirability of such a sequence of payoffs by a single number.
A common assumption is that the player wants to maximize a weighted sum of her per-period payoffs, where she weights later periods less than earlier periods. For simplicity this assumption often takes the particular form that the sequence of weights forms a geometric progression: for some fixed $\delta\in(0,1)$, each weighting factor is $\delta$ times the previous weight. ($\delta$ is called her discount factor.) If in each period $t$ player $i$ receives the payoff $u_i^t$, we could summarize the desirability of the payoff stream $u_i^0,u_i^1,\dots$ by the number³⁴

$$\sum_{t=0}^{\infty}\delta^t u_i^t.\qquad(1)$$

² See Theorem 4 of "Repeated Games."

³ It's imperative that you not confuse superscripts with exponents! You should be able to tell from context. For example, in the summation (1), the $t$ in $\delta^t$ is an exponent; in $x_i^t$, it is a superscript identifying the time period.

⁴ Note that this assigns a weight of unity to period zero. It's the relative weights which are important (why?), so we are free to normalize the weights any way we want.

Such an intertemporal preference structure has the desirable property that the infinite sum of the

weighted payoffs will be finite (since the stage-game payoffs are bounded). You would be indifferent between a payoff of $x^t$ at time $t$ and a payoff of $x^{t+\tau}$ received $\tau$ periods later if

$$x^t=\delta^\tau x^{t+\tau}.\qquad(2)$$

A useful formula for computing the finite and infinite discounted sums we will encounter is

$$\sum_{t=T_1}^{T_2}\delta^t=\frac{\delta^{T_1}-\delta^{T_2+1}}{1-\delta},\qquad(3)$$

which, in particular, is valid for $T_2=\infty$.⁵

## Interpreting the discount factor

One way we can interpret the discount factor is as an expression of traditional time preference. For example, if you received a dollar today you could deposit it in the bank and it would be worth $1+r$ dollars tomorrow, where $r$ is the per-period rate of interest. You would be indifferent between a payment of $x^t$ today and $x^{t+\tau}$ received $\tau$ periods later only if the future values were equal: $(1+r)^\tau x^t=x^{t+\tau}$. Comparing this indifference condition with (2), we see that the two representations of intertemporal preference are equivalent when

$$\delta=\frac{1}{1+r}.\qquad(4)$$

We can relate a player's discount factor to her patience. How much more does she value a dollar today, at time $t$, than a dollar received $\tau>0$ periods later? The relative value of the later dollar to the earlier is $\delta^{t+\tau}/\delta^t=\delta^\tau$. As $\delta\to1$, $\delta^\tau\to1$, so as her discount factor increases she values the later amount more and more nearly as much as the earlier payment. A person is more patient the less she minds waiting for something valuable rather than receiving it immediately. So we interpret higher discount factors as higher levels of patience.

There's another reason you might discount the future. You may be unsure about how long the game will continue. Even in the absence of time preference per se, you would prefer a dollar today rather than a promise of one tomorrow, because you're not sure tomorrow will really come. (A payoff at a future time is really a conditional payoff: conditional on the game lasting that long.)

For example, assume that your beliefs about the length of the game can be represented by a constant conditional continuation probability $p$; i.e. conditional on time $t$ being reached, the probability that the game will continue at least until the next period is $p$.⁶ Without loss of generality we assign a probability

⁵ See the Appendix: Discounting Payoffs for a derivation of this formula.

⁶ Note that there is no upper bound which limits the length of the game; for any value of $T\in\mathbb{Z}_{++}$, there is a positive probability of the game lasting more than $T$ periods. If, on the other hand, players' beliefs about the duration are such that there is a common-knowledge upper bound $T$ on the length of the game, the game is similar to a finitely repeated game with a common-knowledge and definite duration in the following sense: if the stage game has a unique Nash equilibrium payoff vector, every subgame-perfect equilibrium of the repeated game involves Nash-equilibrium stage-game behavior in every period.

of one to the event that we reach period zero.⁷ Denote by $p(t)$ the probability that period $t$ is reached and by $p(t'\mid t)$ the probability that period $t'$ is reached conditional on period $t$ being reached; so $p(t+1\mid t)=p$ for all $t$. The probability of reaching period 1 is $p(1)=p(1\mid0)\,p(0)=p\cdot1=p$. The probability of reaching period 2 is $p(2)=p(2\mid1)\,p(1)=p^2$. In general, the probability of reaching period $t$ is $p(t)=p^t$. The probability that the game ends is $1-p(\infty)=1$, i.e. one minus the probability that it continues forever; therefore the game is finitely repeated with probability one.

Consider the prospect of receiving the conditional payoff of one dollar at time $t$, given your time preference $\delta$ and the conditional continuation probability $p$. The probability that the payoff will be received is the probability that period $t$ is reached, viz. $p^t$. Its discounted value if received is $\delta^t$, and zero if not received. Therefore its expected value is $p^t\delta^t=\bar\delta^t$, where $\bar\delta\equiv p\delta$ is the discount factor which is equivalent to the combination of the time preference with continuation uncertainty. So, even though the game is finitely repeated, due to the uncertainty about its duration we can model it as an infinitely repeated game.

## The average discounted payoff

If we adopted the summation (1) as our players' repeated-game utility function, and if a player received the same stage-game payoff $v_i$ in every period, her discounted repeated-game payoff, using (3), would be $v_i/(1-\delta)$. This would be tolerable, but we'll find it more convenient to transform the repeated-game payoffs to be on the same scale as the stage-game payoffs, by multiplying the discounted payoff sum from (1) by $(1-\delta)$. So we define the average discounted value of the payoff stream $u_i^0,u_i^1,\dots$ by

$$(1-\delta)\sum_{t=0}^{\infty}\delta^t u_i^t.\qquad(5)$$

Now if a player receives the stage-game payoff $v_i$ in every period, her repeated-game payoff is $v_i$ as well. What could be simpler?⁸

Let's make some sense of why (5) is interpreted as an average payoff. Consider some time-varying sequence of payoffs $u^t$, for $t=0,1,2,\dots$, and let $V$ be the resulting value of the payoff function (5). Now replace every per-period payoff $u^t$ with the value $V$. The value of the payoff function (5) associated with this new constant-payoff stream $V,V,V,\dots$ is

$$(1-\delta)\sum_{t=0}^{\infty}\delta^t V=V(1-\delta)\sum_{t=0}^{\infty}\delta^t=V.$$

⁷ If we don't get to period zero, we never implement our strategies. Therefore we can assume that we have reached period zero if we do indeed implement our strategies.

⁸ We will see at a later date that an additional advantage to this normalization is that the convex hull of the feasible repeated-game payoffs using the average discounted value formulation is invariant to the discount factor.
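As a quick numerical sketch of the two ideas above (function names, the horizon cutoff, and the sample numbers are illustrative, not part of the handout): the normalization in (5) sends a constant stream of $v$ back to $v$, and weighting by $p^t\delta^t$ is the same as discounting by $\bar\delta=p\delta$.

```python
# Sketch: check the average discounted payoff (5) and the effective
# discount factor delta_bar = p * delta.  The infinite sum is truncated
# at a horizon long enough that the tail is negligible.

def avg_discounted(payoffs, delta):
    """Average discounted value (1 - delta) * sum_t delta^t u_t
    of a finite (truncated) payoff stream."""
    return (1 - delta) * sum(delta**t * u for t, u in enumerate(payoffs))

delta = 0.9
horizon = 2000  # delta**horizon is vanishingly small here

# A constant stream of v has average discounted value (approximately) v.
v = 3.0
constant_stream = [v] * horizon
print(avg_discounted(constant_stream, delta))  # close to 3.0

# Time preference delta combined with continuation probability p gives
# period-t weight p^t * delta^t, i.e. discounting by delta_bar = p * delta.
p = 0.95
delta_bar = p * delta
expected_weights = [(p**t) * (delta**t) for t in range(horizon)]
combined_weights = [delta_bar**t for t in range(horizon)]
print(all(abs(a - b) < 1e-12 for a, b in zip(expected_weights, combined_weights)))  # True
```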

In other words, the average discounted payoff of a payoff stream $u^0,u^1,u^2,\dots$ is that value $V$ which, if you received it every period, would give you the same average discounted payoff as the original stream.

We'll often find it convenient to compute the average discounted value of an infinite payoff stream in terms of a leading finite sum and the sum of a trailing infinite substream. For example, say that the payoffs $v_i^\tau$ a player receives are some constant payoff $\hat v_i$ for the first $t$ periods, viz. $0,1,2,\dots,t-1$, and thereafter she receives a different constant payoff $\tilde v_i$ in each period $t,t+1,t+2,\dots$. The average discounted value of this payoff stream is

$$(1-\delta)\sum_{\tau=0}^{\infty}\delta^\tau v_i^\tau=(1-\delta)\left[\hat v_i\sum_{\tau=0}^{t-1}\delta^\tau+\tilde v_i\sum_{\tau=t}^{\infty}\delta^\tau\right]=(1-\delta)\left[\hat v_i\,\frac{1-\delta^t}{1-\delta}+\tilde v_i\,\frac{\delta^t}{1-\delta}\right]=(1-\delta^t)\,\hat v_i+\delta^t\,\tilde v_i.\qquad(6)$$

We see that the average discounted value of this stream of bivalued stage-game payoffs is a convex combination of the two stage-game payoffs. We can iterate this procedure in order to evaluate the average discounted value of more complicated payoff streams. For example (and this one will be particularly useful), say you receive $\hat v_i$ for the $t$ periods $0,1,\dots,t-1$ as before, and then you receive $\tilde v_i$ only in period $t$ and receive $\acute v_i$ every period thereafter. The average discounted value of the stream beginning in period $t$ (discounted to period $t$) is $(1-\delta)\tilde v_i+\delta\acute v_i$. Substituting this for $\tilde v_i$ in (6), we find that the average discounted value of this three-valued payoff stream is

$$(1-\delta^t)\,\hat v_i+\delta^t\left[(1-\delta)\tilde v_i+\delta\acute v_i\right].\qquad(7)$$

### Example: Cooperation in the Prisoners' Dilemma (Nash Equilibrium)

In the one-shot prisoners' dilemma the players cannot avoid choosing their dominant strategy, Fink. (See Figure 2.) Even when this game is finitely repeated, because the stage game has a unique Nash equilibrium, the unique subgame-perfect equilibrium has both players Finking every period.⁹ However, when the players are sufficiently patient we can sustain cooperation (i.e. keeping Mum) in every period as a subgame-perfect equilibrium of the infinitely repeated game. First we will see that such cooperation is a Nash equilibrium of the repeated game. Later we will show that this cooperation is a subgame-perfect equilibrium.

Figure 2: A Prisoners' Dilemma

When an infinitely repeated game is played, each player $i$ has a repeated-game strategy $s_i$, which is a sequence of history-dependent stage-game strategies $s_i^t$; i.e. $s_i=(s_i^0,s_i^1,\dots)$, where each $s_i^t\colon A^t\to A_i$. The

⁹ See Theorem 4 in the "Repeated Games" handout.

n-tuple of individual repeated-game strategies is the repeated-game strategy profile $s$; i.e. $s=(s_1,\dots,s_n)$.

The repeated-game strategies I exhibit, which are sufficient to sustain cooperation, have the following form: Cooperate (i.e. play Mum) in the first period. In later periods, cooperate if both players have always cooperated. However, if either player has ever Finked, Fink for the remainder of the game. More precisely and formally, we write player $i$'s repeated-game strategy $s_i=(s_i^0,s_i^1,\dots)$ as the sequence of history-dependent stage-game strategies such that in period $t$ and after history $h^t$,

$$s_i^t(h^t)=\begin{cases}M,&t=0\text{ or }h^t=((M,M)^t),\\F,&\text{otherwise}.\end{cases}\qquad(8)$$

Recall that the history $h^t$ is the sequence of stage-game action profiles which were played in the $t$ periods $0,1,2,\dots,t-1$. $((M,M)^t)$ is just a way of writing $((M,M),(M,M),\dots,(M,M))$, where $(M,M)$ is repeated $t$ times. We can simplify (8) further in a way that will be useful in our later analysis: because $h^0$ is the null history, we adopt the convention that, for any action profile $a\in A$, the statement $h^0=(a)^0$ is true.¹⁰ With this understanding, $t=0$ implies that $h^t=((M,M)^t)$. Therefore

$$s_i^t(h^t)=\begin{cases}M,&h^t=((M,M)^t),\\F,&\text{otherwise}.\end{cases}\qquad(9)$$

First, I'll show that for sufficiently patient players the strategy profile $s=(s_1,s_2)$ is a Nash equilibrium of the repeated game. Later I'll show that for the same required level of patience these strategies are also a subgame-perfect equilibrium. If both players conform to the alleged equilibrium prescription, they both play Mum at $t=0$. Therefore at $t=1$ the history is $h^1=((M,M))$, so they both play Mum again. Therefore at $t=2$ the history is $h^2=((M,M),(M,M))$, so they both play Mum again. And so on. So the path of $s$ is the infinite sequence of cooperative action profiles $((M,M),(M,M),\dots)$. The repeated-game payoff to each player corresponding to this path is trivial to calculate: they each receive a payoff of 1 in each period, therefore the average discounted value of each player's payoff stream is 1.
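A minimal simulation of the conforming path (the function names and the horizon are illustrative; the Mum/Mum stage payoff of 1 follows the text's Figure 2): both players follow the trigger strategy (9), the realized path is $(M,M)$ forever, and each player's average discounted payoff is 1.

```python
# Sketch: simulate the grim-trigger profile (9) on the equilibrium path.

def grim(history):
    """Stage-game action from (9): Mum iff every past profile was (M, M)."""
    return "M" if all(profile == ("M", "M") for profile in history) else "F"

def simulate(periods):
    history = []
    for _ in range(periods):
        action_profile = (grim(history), grim(history))  # both players use (9)
        history.append(action_profile)
    return history

path = simulate(50)
print(all(profile == ("M", "M") for profile in path))  # True

# Average discounted value of the resulting constant stream of 1s is 1
# for any delta in (0, 1), up to truncation error.
delta = 0.9
avg = (1 - delta) * sum(delta**t * 1 for t in range(2000))
print(round(avg, 6))  # 1.0
```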
Can player $i$ gain from deviating from the repeated-game strategy $s_i$, given that player $j$ is faithfully following $s_j$? Let $t$ be the period in which player $i$ first deviates. She receives a payoff of 1 in the first $t$ periods $0,1,\dots,t-1$. In period $t$ she plays Fink while her conforming opponent plays Mum, yielding player $i$ a payoff of 2 in that period. This defection by player $i$ now triggers an open-loop Fink-always response from player $j$. Player $i$'s best response to this open-loop strategy is to Fink in every period herself. Thus she receives zero in every period $t+1,t+2,\dots$.

To calculate the average discounted value of this payoff stream to player $i$ we can refer to (7), and substitute $\hat v_i=1$, $\tilde v_i=2$, and $\acute v_i=0$. This yields player $i$'s repeated-game payoff when she defects in period $t$ in the most advantageous way:

$$(1-\delta^t)\cdot1+\delta^t\left[(1-\delta)\cdot2+\delta\cdot0\right]=1-\delta^t(2\delta-1).$$

This is weakly less than the equilibrium payoff of 1, for any choice of defection period $t$, as long as $\delta\ge\tfrac12$. Thus we have defined what I meant by "sufficiently patient": cooperation in this prisoners' dilemma is a Nash equilibrium of the repeated game as long as $\delta\ge\tfrac12$.

¹⁰ Here is a case of potential ambiguity between time superscripts and exponents. $(a)^0$ means $a$ raised to the zero power; i.e. the profile $a$ is played zero times.

## Restricting strategies to subgames

Consider the subgame which begins in period $\tau$ after a history $h^\tau=(a^0,a^1,\dots,a^{\tau-1})$. This subgame is itself an infinitely repeated game; in fact, it is the same infinitely repeated game as the original game in the sense that (1) the same stage game $G$ is played relentlessly and (2) a unit payoff tomorrow is worth the same as a payoff of $\delta$ received today. Another way to appreciate (2) is to consider an infinite stream of action profiles $a^0,a^1,\dots$. If a player began in period $\tau$ to receive the payoffs from this stream of action profiles, the contribution to her total repeated-game payoff would be $\delta^\tau$ times its contribution if it were received beginning at period zero.

If player $i$ were picking a sequence of history-dependent stage-game strategies to play beginning when some subgame $h^\tau$ were reached, against fixed choices by her opponents, she would face exactly the same maximization problem as she would if she thought she was just starting the game.¹¹

What behavior in the subgame is implied by the repeated-game strategy profile $s$? The restriction of the repeated-game strategy profile $s$ to the subgame $h^\tau$ tells us how the subgame will be played. Let's denote this restriction by $\bar s$.
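The $\delta^\tau$-scaling fact noted in (2) above can be checked directly. This is a sketch with illustrative names and a truncated horizon, not part of the handout's formal development:

```python
# Sketch: the discounted value of a stream begun in period tau equals
# delta^tau times its value begun in period zero (truncated horizon).

def discounted_value(payoffs, delta, start=0):
    """sum over t of delta^(start + t) * u_t: the stream begins in period `start`."""
    return sum(delta**(start + t) * u for t, u in enumerate(payoffs))

delta, tau = 0.9, 7
stream = [2.0, -1.0, 3.0] * 300  # an arbitrary repeating stream, truncated

from_zero = discounted_value(stream, delta)
from_tau = discounted_value(stream, delta, start=tau)
print(abs(from_tau - delta**tau * from_zero) < 1e-9)  # True
```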
If we played the infinitely repeated game from the beginning by having each player play her part of $\bar s$, we would see the same infinite sequence of action profiles as we would see if instead $s$ were being played and we began our observation at period $\tau$. For example, in the initial period of the subgame (viz. period $\tau$ on the original game's timeline and period zero on the subgame's timeline), player $i$ chooses the action dictated by her subgame strategy $\bar s_i$, viz. $\bar s_i^0$. This action must be what her original repeated-game strategy $s_i$ would tell her to do after the history $h^\tau$ by which this subgame was reached; i.e.

$$\bar s_i^0=s_i^\tau(h^\tau).\qquad(10)$$

After all the players choose their actions, a stage-game action profile results. We can denote this action profile in two ways: as $\bar a^0$, which refers to the initial period of the subgame, or as $a^\tau$, which refers to the timeline of the original game. (See Figure 3.) This action profile $\bar a^0=a^\tau$ then becomes part of the history which conditions the next period's choices. Period 1 of the subgame is period $\tau+1$ of the original game.

¹¹ This paragraph may be unclear, but it's not a trivial statement! Its validity relies crucially on the structure of the weighting coefficients in the infinite summation. Let $\alpha^t$ be the weight for period $t$'s per-period payoff. In the discounted-payoff case we consider here, we have $\alpha^t=\delta^t$. The fact that the continuation game is the same game as the whole game is a consequence of the fact that, for all $t$ and $\tau$, the ratio $\alpha^{t+\tau}/\alpha^t$ depends only upon $\tau$ but not upon $t$. In our case, this ratio is just $\delta^\tau$. On the other hand, for example, if the weighting coefficients were $\alpha^t=\delta^t/t$, we could not solve each continuation game just as another instance of the original game.
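Condition (10), and the concatenation construction behind it, can be sketched concretely. Assume (as an illustration, not the handout's notation) that a repeated-game strategy is a function from histories to stage-game actions; then restriction to a subgame is just prepending the reaching history:

```python
# Sketch: restrict a repeated-game strategy to a subgame.  The restriction
# plays, after subgame history h-bar, what the original strategy plays
# after the concatenation (h_tau; h-bar).

def grim(history):
    """Grim-trigger stage-game strategy: Mum iff no one has ever Finked."""
    return "M" if all(profile == ("M", "M") for profile in history) else "F"

def restrict(strategy, h_tau):
    """Restriction of `strategy` to the subgame reached by history h_tau."""
    return lambda subgame_history: strategy(h_tau + subgame_history)

# Subgame reached by two periods of mutual cooperation:
h_tau = [("M", "M"), ("M", "M")]
restricted = restrict(grim, h_tau)

# (10): in the subgame's initial period (null subgame history), the
# restriction plays what the original strategy plays after h_tau.
print(restricted([]) == grim(h_tau))  # True
print(restricted([]))  # M

# Subgame reached after a defection: the restriction Finks forever.
restricted_bad = restrict(grim, [("M", "F")])
print(restricted_bad([]), restricted_bad([("F", "F")]))  # F F
```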

The history in the subgame is $\bar h^1=(\bar a^0)$; the history in the original game is

$$h^{\tau+1}=(a^0,a^1,\dots,a^{\tau-1},a^\tau)=(h^\tau;a^\tau)=(h^\tau;\bar a^0)=(h^\tau;\bar h^1),\qquad(11)$$

where $(h^\tau;a^\tau)$ is the concatenation of $h^\tau$ with $a^\tau$.¹² The action chosen next by player $i$ can be denoted equivalently as $\bar s_i^1(\bar h^1)$ or $s_i^{\tau+1}((h^\tau;\bar h^1))$. Therefore we must have

$$\bar s_i^1(\bar h^1)=s_i^{\tau+1}((h^\tau;\bar h^1)).\qquad(12)$$

Continuing the analysis period-by-period, we see that we can define the restriction of $s_i$ to the subgame determined by $h^\tau$ by

$$\bar s_i^t(\bar h^t)=s_i^{\tau+t}((h^\tau;\bar h^t)),\qquad(13)$$

for every $t\in\{0,1,\dots\}$.

Figure 3: Histories and history-dependent stage-game strategies in two time frames: the original game's and a subgame's.

### Example: Cooperation in the Prisoners' Dilemma (Subgame-perfect Equilibrium)

To verify that $s$ is a subgame-perfect equilibrium of the repeated prisoners' dilemma we need to check that this strategy profile's restriction to each subgame is a Nash equilibrium of that subgame. Consider a subgame beginning in period $\tau$ with some history $h^\tau$. What is the restriction of $s_i$ to this subgame? Denoting the restriction by $\bar s_i$ and using definitions (13) and (9), we find

$$\bar s_i^t(\bar h^t)=s_i^{\tau+t}((h^\tau;\bar h^t))=\begin{cases}M,&(h^\tau;\bar h^t)=((M,M)^{\tau+t}),\\F,&\text{otherwise}.\end{cases}\qquad(14)$$

We can partition the subgames of this game, each identified by a beginning period $\tau$ and a history $h^\tau$, into two classes: (A) those in which both players chose Mum in all previous periods, i.e. $h^\tau=((M,M)^\tau)$, and (B) those in which a defection by either player has previously occurred.¹³ For those subgames in class A, the sequence of restrictions $\bar s_i^t(\bar h^t)$ from (14) reduces to the sequence of original stage-game strategies $s_i^t(h^t)$ from (9); i.e. for all $\tau$ and $h^\tau=((M,M)^\tau)$,¹⁴ ¹⁵

$$\bar s_i^t(\bar h^t)=\begin{cases}M,&\bar h^t=((M,M)^t),\\F,&\text{otherwise}.\end{cases}\qquad(15)$$

Because $s$ is a Nash-equilibrium strategy profile of the repeated game, for each subgame $h^\tau$ in class A the restriction $\bar s$ is a Nash-equilibrium strategy profile of the subgame when $\delta\ge\tfrac12$.

For any subgame $h^\tau$ in class B, $h^\tau\ne((M,M)^\tau)$. Therefore the restriction $\bar s_i$ of $s_i$ specifies $\bar s_i^t=F$ for all $t\in\{0,1,\dots\}$. In other words, in any subgame reached by some player having Finked in the past, each player chooses the open-loop strategy "Fink always." Therefore the repeated-game strategy profile $\bar s$ played in such a subgame is an open-loop sequence of stage-game Nash equilibria. From Theorem 1 of the "Repeated Games" handout we know that this is a Nash equilibrium of the repeated game and hence of this subgame.

So we have shown that for every subgame the restriction of $s$ to that subgame is a Nash equilibrium of that subgame for $\delta\ge\tfrac12$. Therefore $s$ is a subgame-perfect equilibrium of the infinitely repeated prisoners' dilemma when $\delta\ge\tfrac12$.

¹² To concatenate two sequences $a$ with $b$ is simply to append $b$ to the end of $a$.

¹³ Note that class A includes the subgame of the whole, which begins in period zero. (Because there are no previous periods, it is trivially true that $(M,M)$ was played in all previous periods.)

¹⁴ When $h^\tau=((M,M)^\tau)$, [$h^\tau=((M,M)^\tau)$ and $\bar h^t=((M,M)^t)$] $\iff$ $(h^\tau;\bar h^t)=((M,M)^{\tau+t})$.

¹⁵ There is no distinction between the roles played by $h^t$ and $\bar h^t$ in these two expressions. They are both dummy variables.
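Pulling the example together, the deviation payoff $1-\delta^t(2\delta-1)$ from the Nash analysis can be compared against the equilibrium payoff of 1 across discount factors; deviation is unprofitable exactly when $\delta\ge\tfrac12$. The sketch below uses the text's stage payoffs (1 for mutual Mum, 2 for a unilateral Fink, 0 thereafter); the helper name and truncation horizon are illustrative.

```python
# Sketch: verify that defecting at period t yields 1 - delta^t * (2*delta - 1),
# and that this never beats the equilibrium payoff of 1 precisely when
# delta >= 1/2.

def deviation_payoff(delta, t, horizon=2000):
    """Average discounted value of the stream: 1 in periods 0..t-1,
    2 in period t (the defection), 0 ever after (truncated)."""
    stream = [1.0] * t + [2.0] + [0.0] * (horizon - t - 1)
    return (1 - delta) * sum(delta**k * u for k, u in enumerate(stream))

for delta in (0.3, 0.5, 0.8):
    closed_form = [1 - delta**t * (2 * delta - 1) for t in range(5)]
    brute_force = [deviation_payoff(delta, t) for t in range(5)]
    assert all(abs(a - b) < 1e-9 for a, b in zip(closed_form, brute_force))
    profitable = any(v > 1 + 1e-12 for v in closed_form)
    print(delta, "deviation profitable:", profitable)
# delta = 0.3: profitable (since 2*delta - 1 < 0); delta >= 0.5: not profitable
```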

## Appendix: Discounting Payoffs

Often in economics (for example when we study repeated games) we are concerned about a player who receives a payoff in each of many (perhaps infinitely many) periods, and we want to meaningfully summarize her entire sequence of payoffs by a single number. A common assumption is that the player wants to maximize a weighted sum of her per-period payoffs, where she weights later periods less than earlier periods. For simplicity this assumption often takes the particular form that the sequence of weights forms a geometric progression: for some fixed $\delta\in(0,1)$, each weighting factor is $\delta$ times the previous weight. ($\delta$ is called her discount factor.) In this handout we're going to derive some useful results relevant to this kind of payoff structure. We'll see that you really don't need to memorize a bunch of different formulas. You can easily work them all out from scratch from a simple starting point.

### In a hurry?

Here's a formula we'll eventually derive and from which you can derive a host of other useful discounting expressions. In particular, it's valid when $T_2=\infty$:

$$\sum_{t=T_1}^{T_2}\delta^t=\frac{\delta^{T_1}-\delta^{T_2+1}}{1-\delta}.$$

Let's set this problem up more concretely. It will be convenient to call the first period $t=0$. Let the player's payoff in period $t$ be denoted $u^t$. By convention we attach a weight of 1 to the period-0 payoff.¹⁶ The weight for the next period's payoff, $u^1$, then, is $\delta\cdot1=\delta$; the weight for the next period's payoff, $u^2$, is $\delta\cdot\delta=\delta^2$; the weight for the next period's payoff, $u^3$, is $\delta\cdot\delta^2=\delta^3$. You see the pattern: in general the weight associated with payoff $u^t$ is $\delta^t$. The weighted sum of all the payoffs from $t=0$ to $t=T$ (where $T$ may be $\infty$) is

$$u^0+\delta u^1+\delta^2u^2+\cdots+\delta^Tu^T=\sum_{t=0}^{T}\delta^tu^t.\qquad(1)$$

### The infinite summation of the discount factors

We sometimes will be concerned with the case in which the player's period payoffs are constant over some stretch of time periods. In this case the common $u^t$ value can be brought outside and in front of the summation over those periods, because those $u^t$ values don't depend on the summation index $t$. Then we will be essentially concerned with summations of the form

¹⁶ It's the relative weights which are important (why?), so we are free to normalize the sequence of weights any way we want.

$$\sum_{t=T_1}^{T_2}\delta^t.\qquad(2)$$

To begin our analysis of these summations we'll first study a special case:

$$\sum_{t=0}^{\infty}\delta^t,\qquad(3)$$

and then see how all of the other results we'd like to have can be easily derived from this special case. The only trick is to see that the infinite sum (3) is mathemagically equal to the reciprocal of $(1-\delta)$, i.e.

$$\frac{1}{1-\delta}=1+\delta+\delta^2+\delta^3+\cdots=\sum_{t=0}^{\infty}\delta^t.\qquad(4)$$

To convince yourself that the result of this simple division really is this infinite sum you can use either of two heuristic methods. First, there's synthetic division:¹⁷ carrying out the long division of 1 by $1-\delta$ generates the successive quotient terms

$$1,\ \delta,\ \delta^2,\ \delta^3,\ \dots\qquad(5)$$

If you don't remember synthetic division from high-school algebra, you can alternatively verify (4) by multiplication:¹⁸

$$(1-\delta)(1+\delta+\delta^2+\cdots)=(1+\delta+\delta^2+\cdots)-\delta(1+\delta+\delta^2+\cdots)=(1+\delta+\delta^2+\cdots)-(\delta+\delta^2+\cdots)=1.\qquad(6)$$

### An infinite summation starting late

We'll encounter cases where a player's per-period payoffs vary in the early periods, say up through

¹⁷ I learned this in tenth grade, and I haven't seen it since; so I can't give a rigorous justification for it. In particular: its validity in this case must depend on $\delta<1$, but I don't know a precise statement addressing this.

¹⁸ Here we rely on $\delta<1$ to guarantee that the two infinite sums on the right-hand side converge and therefore that we can meaningfully subtract the second from the first.

period $T$, but after that she receives a constant payoff per period. To handle this latter constant-payoff stage we'll want to know the value of the summation

$$\sum_{t=T+1}^{\infty}\delta^t.\qquad(7)$$

Now that we know the value from (4) for the infinite sum that starts at $t=0$, we can use that result and the technique of change of variable in order to find the value of (7). If only the lower limit on the summation (7) were $t=0$ instead of $t=T+1$, we'd know the answer immediately. So we define a new index $\tau$, which depends on $t$, and transform the summation (7) into another summation whose index $\tau$ runs from 0 and whose summand is $\delta^\tau$. So, let's define

$$\tau=t-(T+1).\qquad(8)$$

We chose this definition because when $t=T+1$ (like it does for the first term of (7)) then $\tau=0$, which is exactly what we want $\tau$ to be for the first term of the new summation.¹⁹ For purposes of later substitution we write (8) in the form

$$t=\tau+(T+1).\qquad(9)$$

Substituting for $t$ using (9), summation (7) can now be rewritten as

$$\sum_{t=T+1}^{\infty}\delta^t=\sum_{\tau=0}^{\infty}\delta^{\tau+(T+1)}=\delta^{T+1}\sum_{\tau=0}^{\infty}\delta^{\tau}=\frac{\delta^{T+1}}{1-\delta},\qquad(10)$$

where we used (4) to transform the final infinite sum into its simple reciprocal equivalent.²⁰

### The finite summation

Our game might only last a finite number of periods. In this case we want to know the value of

$$\sum_{t=0}^{T}\delta^t.\qquad(11)$$

We can immediately get this answer by combining the two results (4) and (10) which we've already found. We need only note that the infinite sum in (3) is the sum of two pieces: (1) the piece we want in (11) and (2) the infinite piece in (7). So we immediately obtain

$$\sum_{t=0}^{T}\delta^t=\sum_{t=0}^{\infty}\delta^t-\sum_{t=T+1}^{\infty}\delta^t=\frac{1}{1-\delta}-\frac{\delta^{T+1}}{1-\delta}=\frac{1-\delta^{T+1}}{1-\delta}.\qquad(12)$$

¹⁹ Of course, we could have achieved this goal by defining $\tau=(T+1)-t$. But we want $\tau$ to increase with $t$, so that when $t\to\infty$, we'll also have $\tau\to\infty$.

²⁰ The $t$ in (4) is just a dummy variable; (4) is still true if you replace every occurrence of $t$ with $\tau$.
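The change-of-variable results (10) and (12) are easy to confirm numerically. A minimal sketch (the variable names and the truncation horizon standing in for infinity are illustrative):

```python
# Sketch: verify the late-starting sum (10) and the finite sum (12)
# against brute-force summation (the infinite tail is truncated).

delta, T = 0.9, 6
horizon = 2000  # stands in for infinity; delta**horizon is negligible

# (10): sum_{t=T+1}^infinity delta^t = delta^(T+1) / (1 - delta)
late_brute = sum(delta**t for t in range(T + 1, horizon))
late_closed = delta**(T + 1) / (1 - delta)
print(abs(late_brute - late_closed) < 1e-9)  # True

# (12): sum_{t=0}^T delta^t = (1 - delta^(T+1)) / (1 - delta)
finite_brute = sum(delta**t for t in range(T + 1))
finite_closed = (1 - delta**(T + 1)) / (1 - delta)
print(abs(finite_brute - finite_closed) < 1e-9)  # True
```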

### Endless possibilities

The techniques we have used here (employing a change of variable and combining previous results) are very general and can be used to obtain the value of summations of the general form in (2). The particular results we derived, viz. (10) and (12), are merely illustrative cases. As an exercise, you should be able to easily combine results we have already found here to show that the value of the general summation in (2) is

$$\sum_{t=T_1}^{T_2}\delta^t=\frac{\delta^{T_1}-\delta^{T_2+1}}{1-\delta}.\qquad(13)$$

Note that (13) is valid even when $T_2=\infty$, because

$$\sum_{t=T_1}^{\infty}\delta^t=\lim_{T_2\to\infty}\sum_{t=T_1}^{T_2}\delta^t=\lim_{T_2\to\infty}\frac{\delta^{T_1}-\delta^{T_2+1}}{1-\delta}=\frac{\delta^{T_1}}{1-\delta},\qquad(14)$$

where we used the fact that

$$\lim_{t\to\infty}\delta^t=0,\qquad(15)$$

for $\delta\in[0,1)$. You should verify that, by assigning the appropriate values to $T_1$ and $T_2$ in (13), we can recover all of our previous results, viz. (4), (10), and (12).
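As a final sketch (helper name and horizon illustrative), the general formula (13) can be exercised for arbitrary limits, including the infinite case via (14):

```python
# Sketch: check the general geometric-sum formula (13),
#   sum_{t=T1}^{T2} delta^t = (delta^T1 - delta^(T2+1)) / (1 - delta),
# for finite limits and for T2 = infinity (truncated numerically).

def geometric_sum(delta, t1, t2):
    """Closed form (13); pass t2 = None for the infinite sum (14)."""
    top = 0.0 if t2 is None else delta**(t2 + 1)
    return (delta**t1 - top) / (1 - delta)

delta = 0.85
for t1, t2 in [(0, 0), (0, 10), (3, 9), (5, 5)]:
    brute = sum(delta**t for t in range(t1, t2 + 1))
    assert abs(brute - geometric_sum(delta, t1, t2)) < 1e-12

# Infinite case: delta^(T2+1) -> 0 for delta in [0, 1), per (15).
tail = sum(delta**t for t in range(4, 2000))
print(abs(tail - geometric_sum(delta, 4, None)) < 1e-9)  # True
```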


Economics 43 Homework 5 Due Wed, Part I. The tipping game (this example due to Stephen Salant) Thecustomercomestoarestaurantfordinnereverydayandisservedbythe same waiter. The waiter can give either good

### Two-State Options. John Norstad. j-norstad@northwestern.edu http://www.norstad.org. January 12, 1999 Updated: November 3, 2011.

Two-State Options John Norstad j-norstad@northwestern.edu http://www.norstad.org January 12, 1999 Updated: November 3, 2011 Abstract How options are priced when the underlying asset has only two possible

### MATH 105: PRACTICE PROBLEMS FOR SERIES: SPRING Problems

MATH 05: PRACTICE PROBLEMS FOR SERIES: SPRING 20 INSTRUCTOR: STEVEN MILLER (SJM@WILLIAMS.EDU).. Sequences and Series.. Problems Question : Let a n =. Does the series +n+n 2 n= a n converge or diverge?

### Geometric Series and Annuities

Geometric Series and Annuities Our goal here is to calculate annuities. For example, how much money do you need to have saved for retirement so that you can withdraw a fixed amount of money each year for

### Game Theory: Supermodular Games 1

Game Theory: Supermodular Games 1 Christoph Schottmüller 1 License: CC Attribution ShareAlike 4.0 1 / 22 Outline 1 Introduction 2 Model 3 Revision questions and exercises 2 / 22 Motivation I several solution

### LOGNORMAL MODEL FOR STOCK PRICES

LOGNORMAL MODEL FOR STOCK PRICES MICHAEL J. SHARPE MATHEMATICS DEPARTMENT, UCSD 1. INTRODUCTION What follows is a simple but important model that will be the basis for a later study of stock prices as

### Calculus for Middle School Teachers. Problems and Notes for MTHT 466

Calculus for Middle School Teachers Problems and Notes for MTHT 466 Bonnie Saunders Fall 2010 1 I Infinity Week 1 How big is Infinity? Problem of the Week: The Chess Board Problem There once was a humble

### 16.1. Sequences and Series. Introduction. Prerequisites. Learning Outcomes. Learning Style

Sequences and Series 16.1 Introduction In this block we develop the ground work for later blocks on infinite series and on power series. We begin with simple sequences of numbers and with finite series

### Taylor Polynomials and Taylor Series Math 126

Taylor Polynomials and Taylor Series Math 26 In many problems in science and engineering we have a function f(x) which is too complicated to answer the questions we d like to ask. In this chapter, we will

### Current Accounts in Open Economies Obstfeld and Rogoff, Chapter 2

Current Accounts in Open Economies Obstfeld and Rogoff, Chapter 2 1 Consumption with many periods 1.1 Finite horizon of T Optimization problem maximize U t = u (c t ) + β (c t+1 ) + β 2 u (c t+2 ) +...

### Inflation. Chapter 8. 8.1 Money Supply and Demand

Chapter 8 Inflation This chapter examines the causes and consequences of inflation. Sections 8.1 and 8.2 relate inflation to money supply and demand. Although the presentation differs somewhat from that

### ORDERS OF ELEMENTS IN A GROUP

ORDERS OF ELEMENTS IN A GROUP KEITH CONRAD 1. Introduction Let G be a group and g G. We say g has finite order if g n = e for some positive integer n. For example, 1 and i have finite order in C, since

### Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

### PERPETUITIES NARRATIVE SCRIPT 2004 SOUTH-WESTERN, A THOMSON BUSINESS

NARRATIVE SCRIPT 2004 SOUTH-WESTERN, A THOMSON BUSINESS NARRATIVE SCRIPT: SLIDE 2 A good understanding of the time value of money is crucial for anybody who wants to deal in financial markets. It does

### Introduction to Game Theory Lecture Note 3: Mixed Strategies

Introduction to Game Theory Lecture Note 3: Mixed Strategies Haifeng Huang University of California, Merced Mixed strategies and von Neumann-Morgenstern preferences So far what we have considered are pure

### ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2015

ECON 459 Game Theory Lecture Notes Auctions Luca Anderlini Spring 2015 These notes have been used before. If you can still spot any errors or have any suggestions for improvement, please let me know. 1

### Taylor Series and Asymptotic Expansions

Taylor Series and Asymptotic Epansions The importance of power series as a convenient representation, as an approimation tool, as a tool for solving differential equations and so on, is pretty obvious.

### 1 Short Introduction to Time Series

ECONOMICS 7344, Spring 202 Bent E. Sørensen January 24, 202 Short Introduction to Time Series A time series is a collection of stochastic variables x,.., x t,.., x T indexed by an integer value t. The

### Induction. Margaret M. Fleck. 10 October These notes cover mathematical induction and recursive definition

Induction Margaret M. Fleck 10 October 011 These notes cover mathematical induction and recursive definition 1 Introduction to induction At the start of the term, we saw the following formula for computing

### 10.2 Series and Convergence

10.2 Series and Convergence Write sums using sigma notation Find the partial sums of series and determine convergence or divergence of infinite series Find the N th partial sums of geometric series and

### Objectives. Materials

Activity 5 Exploring Infinite Series Objectives Identify a geometric series Determine convergence and sum of geometric series Identify a series that satisfies the alternating series test Use a graphing

### Students in their first advanced mathematics classes are often surprised

CHAPTER 8 Proofs Involving Sets Students in their first advanced mathematics classes are often surprised by the extensive role that sets play and by the fact that most of the proofs they encounter are

### Mathematical Induction

Chapter 2 Mathematical Induction 2.1 First Examples Suppose we want to find a simple formula for the sum of the first n odd numbers: 1 + 3 + 5 +... + (2n 1) = n (2k 1). How might we proceed? The most natural

### 1 Limiting distribution for a Markov chain

Copyright c 2009 by Karl Sigman Limiting distribution for a Markov chain In these Lecture Notes, we shall study the limiting behavior of Markov chains as time n In particular, under suitable easy-to-check

### Chapter 7. Sealed-bid Auctions

Chapter 7 Sealed-bid Auctions An auction is a procedure used for selling and buying items by offering them up for bid. Auctions are often used to sell objects that have a variable price (for example oil)

### Evaluating Annuities (IV) Evaluating Annuities (III) Annuity: First Example. Evaluating Annuities (V) We want to evaluate

Annuities Annuities and the IRR Philip A. Viton October 24, 2014 When the temporal impact Z t of a project does not vary during some time period, it is said to constitute an annuity (for that period).

### Introduction (I) Present Value Concepts. Introduction (II) Introduction (III)

Introduction (I) Present Value Concepts Philip A. Viton February 19, 2014 Many projects lead to impacts that occur at different times. We will refer to those impacts as constituting an (inter)temporal

### FINAL EXAM, Econ 171, March, 2015, with answers

FINAL EXAM, Econ 171, March, 2015, with answers There are 9 questions. Answer any 8 of them. Good luck! Problem 1. (True or False) If a player has a dominant strategy in a simultaneous-move game, then

### An Example of a Repeated Partnership Game with Discounting and with Uniformly Inefficient Equilibria

An Example of a Repeated Partnership Game with Discounting and with Uniformly Inefficient Equilibria Roy Radner; Roger Myerson; Eric Maskin The Review of Economic Studies, Vol. 53, No. 1. (Jan., 1986),

### Compact Representations and Approximations for Compuation in Games

Compact Representations and Approximations for Compuation in Games Kevin Swersky April 23, 2008 Abstract Compact representations have recently been developed as a way of both encoding the strategic interactions

### Using simulation to calculate the NPV of a project

Using simulation to calculate the NPV of a project Marius Holtan Onward Inc. 5/31/2002 Monte Carlo simulation is fast becoming the technology of choice for evaluating and analyzing assets, be it pure financial

### Wald s Identity. by Jeffery Hein. Dartmouth College, Math 100

Wald s Identity by Jeffery Hein Dartmouth College, Math 100 1. Introduction Given random variables X 1, X 2, X 3,... with common finite mean and a stopping rule τ which may depend upon the given sequence,

### Linear Programming. March 14, 2014

Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

### UGBA 103 (Parlour, Spring 2015), Section 1. Raymond C. W. Leung

UGBA 103 (Parlour, Spring 2015), Section 1 Present Value, Compounding and Discounting Raymond C. W. Leung University of California, Berkeley Haas School of Business, Department of Finance Email: r_leung@haas.berkeley.edu

### CHAPTER 13 GAME THEORY AND COMPETITIVE STRATEGY

CHAPTER 13 GAME THEORY AND COMPETITIVE STRATEGY EXERCISES 3. Two computer firms, A and B, are planning to market network systems for office information management. Each firm can develop either a fast,

### Decision Theory. 36.1 Rational prospecting

36 Decision Theory Decision theory is trivial, apart from computational details (just like playing chess!). You have a choice of various actions, a. The world may be in one of many states x; which one

### Persuasion by Cheap Talk - Online Appendix

Persuasion by Cheap Talk - Online Appendix By ARCHISHMAN CHAKRABORTY AND RICK HARBAUGH Online appendix to Persuasion by Cheap Talk, American Economic Review Our results in the main text concern the case

### By W.E. Diewert. July, Linear programming problems are important for a number of reasons:

APPLIED ECONOMICS By W.E. Diewert. July, 3. Chapter : Linear Programming. Introduction The theory of linear programming provides a good introduction to the study of constrained maximization (and minimization)

### Microeconomic Theory Jamison / Kohlberg / Avery Problem Set 4 Solutions Spring 2012. (a) LEFT CENTER RIGHT TOP 8, 5 0, 0 6, 3 BOTTOM 0, 0 7, 6 6, 3

Microeconomic Theory Jamison / Kohlberg / Avery Problem Set 4 Solutions Spring 2012 1. Subgame Perfect Equilibrium and Dominance (a) LEFT CENTER RIGHT TOP 8, 5 0, 0 6, 3 BOTTOM 0, 0 7, 6 6, 3 Highlighting

### 3.2 The Factor Theorem and The Remainder Theorem

3. The Factor Theorem and The Remainder Theorem 57 3. The Factor Theorem and The Remainder Theorem Suppose we wish to find the zeros of f(x) = x 3 + 4x 5x 4. Setting f(x) = 0 results in the polynomial

### The Behavior of Bonds and Interest Rates. An Impossible Bond Pricing Model. 780 w Interest Rate Models

780 w Interest Rate Models The Behavior of Bonds and Interest Rates Before discussing how a bond market-maker would delta-hedge, we first need to specify how bonds behave. Suppose we try to model a zero-coupon

### Choice under Uncertainty

Choice under Uncertainty Part 1: Expected Utility Function, Attitudes towards Risk, Demand for Insurance Slide 1 Choice under Uncertainty We ll analyze the underlying assumptions of expected utility theory

### Lectures 5-6: Taylor Series

Math 1d Instructor: Padraic Bartlett Lectures 5-: Taylor Series Weeks 5- Caltech 213 1 Taylor Polynomials and Series As we saw in week 4, power series are remarkably nice objects to work with. In particular,

### 1. Periodic Fourier series. The Fourier expansion of a 2π-periodic function f is:

CONVERGENCE OF FOURIER SERIES 1. Periodic Fourier series. The Fourier expansion of a 2π-periodic function f is: with coefficients given by: a n = 1 π f(x) a 0 2 + a n cos(nx) + b n sin(nx), n 1 f(x) cos(nx)dx

### CHAPTER 17. Linear Programming: Simplex Method

CHAPTER 17 Linear Programming: Simplex Method CONTENTS 17.1 AN ALGEBRAIC OVERVIEW OF THE SIMPLEX METHOD Algebraic Properties of the Simplex Method Determining a Basic Solution Basic Feasible Solution 17.2

### 1 Representation of Games. Kerschbamer: Commitment and Information in Games

1 epresentation of Games Kerschbamer: Commitment and Information in Games Game-Theoretic Description of Interactive Decision Situations This lecture deals with the process of translating an informal description

### Perfect Bayesian Equilibrium

Perfect Bayesian Equilibrium When players move sequentially and have private information, some of the Bayesian Nash equilibria may involve strategies that are not sequentially rational. The problem is

### 1 Portfolio mean and variance

Copyright c 2005 by Karl Sigman Portfolio mean and variance Here we study the performance of a one-period investment X 0 > 0 (dollars) shared among several different assets. Our criterion for measuring

### Universidad Carlos III de Madrid Game Theory Problem set on Repeated Games and Bayesian Games

Session 1: 1, 2, 3, 4 Session 2: 5, 6, 8, 9 Universidad Carlos III de Madrid Game Theory Problem set on Repeated Games and Bayesian Games 1. Consider the following game in the normal form: Player 1 Player

### 6.042/18.062J Mathematics for Computer Science. Expected Value I

6.42/8.62J Mathematics for Computer Science Srini Devadas and Eric Lehman May 3, 25 Lecture otes Expected Value I The expectation or expected value of a random variable is a single number that tells you

### Game Theory and Algorithms Lecture 10: Extensive Games: Critiques and Extensions

Game Theory and Algorithms Lecture 0: Extensive Games: Critiques and Extensions March 3, 0 Summary: We discuss a game called the centipede game, a simple extensive game where the prediction made by backwards

### Lesson 4 Annuities: The Mathematics of Regular Payments

Lesson 4 Annuities: The Mathematics of Regular Payments Introduction An annuity is a sequence of equal, periodic payments where each payment receives compound interest. One example of an annuity is a Christmas

### Lecture 24: Series finale

Lecture 24: Series finale Nathan Pflueger 2 November 20 Introduction This lecture will discuss one final convergence test and error bound for series, which concerns alternating series. A general summary

### TIME VALUE OF MONEY. In following we will introduce one of the most important and powerful concepts you will learn in your study of finance;

In following we will introduce one of the most important and powerful concepts you will learn in your study of finance; the time value of money. It is generally acknowledged that money has a time value.

### max cx s.t. Ax c where the matrix A, cost vector c and right hand side b are given and x is a vector of variables. For this example we have x

Linear Programming Linear programming refers to problems stated as maximization or minimization of a linear function subject to constraints that are linear equalities and inequalities. Although the study

### Cryptography and Network Security Department of Computer Science and Engineering Indian Institute of Technology Kharagpur

Cryptography and Network Security Department of Computer Science and Engineering Indian Institute of Technology Kharagpur Module No. # 01 Lecture No. # 05 Classic Cryptosystems (Refer Slide Time: 00:42)

### 8.7 Mathematical Induction

8.7. MATHEMATICAL INDUCTION 8-135 8.7 Mathematical Induction Objective Prove a statement by mathematical induction Many mathematical facts are established by first observing a pattern, then making a conjecture

### MATHEMATICAL INDUCTION AND DIFFERENCE EQUATIONS

1 CHAPTER 6. MATHEMATICAL INDUCTION AND DIFFERENCE EQUATIONS 1 INSTITIÚID TEICNEOLAÍOCHTA CHEATHARLACH INSTITUTE OF TECHNOLOGY CARLOW MATHEMATICAL INDUCTION AND DIFFERENCE EQUATIONS 1 Introduction Recurrence

### CHAPTER 3. Sequences. 1. Basic Properties

CHAPTER 3 Sequences We begin our study of analysis with sequences. There are several reasons for starting here. First, sequences are the simplest way to introduce limits, the central idea of calculus.

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

### Stanford University CS261: Optimization Handout 6 Luca Trevisan January 20, In which we introduce the theory of duality in linear programming.

Stanford University CS261: Optimization Handout 6 Luca Trevisan January 20, 2011 Lecture 6 In which we introduce the theory of duality in linear programming 1 The Dual of Linear Program Suppose that we

### INTRODUCTION TO THE CONVERGENCE OF SEQUENCES

INTRODUCTION TO THE CONVERGENCE OF SEQUENCES BECKY LYTLE Abstract. In this paper, we discuss the basic ideas involved in sequences and convergence. We start by defining sequences and follow by explaining

### Week 7 - Game Theory and Industrial Organisation

Week 7 - Game Theory and Industrial Organisation The Cournot and Bertrand models are the two basic templates for models of oligopoly; industry structures with a small number of firms. There are a number

### When is Reputation Bad? The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters.

When is Reputation Bad? he Harvard community has made this article openly available. Please share how this access benefits you. Your story matters. Citation Published Version Accessed Citable Link erms

### Computational Finance Options

1 Options 1 1 Options Computational Finance Options An option gives the holder of the option the right, but not the obligation to do something. Conversely, if you sell an option, you may be obliged to

### i/io as compared to 4/10 for players 2 and 3. A "correct" way to play Certainly many interesting games are not solvable by this definition.

496 MATHEMATICS: D. GALE PROC. N. A. S. A THEORY OF N-PERSON GAMES WITH PERFECT INFORMA TION* By DAVID GALE BROWN UNIVBRSITY Communicated by J. von Neumann, April 17, 1953 1. Introduction.-The theory of

### 2.1 The Present Value of an Annuity

2.1 The Present Value of an Annuity One example of a fixed annuity is an agreement to pay someone a fixed amount x for N periods (commonly months or years), e.g. a fixed pension It is assumed that the

### U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra

U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009 Notes on Algebra These notes contain as little theory as possible, and most results are stated without proof. Any introductory

### Perpetuities and Annuities EC 1745. Borja Larrain

Perpetuities and Annuities EC 1745 Borja Larrain Today: 1. Perpetuities. 2. Annuities. 3. More examples. Readings: Chapter 3 Welch (DidyoureadChapters1and2?Don twait.) Assignment 1 due next week (09/29).

### Practice Problems Solutions

Practice Problems Solutions The problems below have been carefully selected to illustrate common situations and the techniques and tricks to deal with these. Try to master them all; it is well worth it!

12.3 Geometric Sequences Copyright Cengage Learning. All rights reserved. 1 Objectives Geometric Sequences Partial Sums of Geometric Sequences What Is an Infinite Series? Infinite Geometric Series 2 Geometric

### CHAPTER 4: FINANCIAL ANALYSIS OVERVIEW

In the financial analysis examples in this book, you are generally given the all of the data you need to analyze the problem. In a real-life situation, you would need to frame the question, determine the

### Chapter 6. Learning Objectives Principles Used in This Chapter 1. Annuities 2. Perpetuities 3. Complex Cash Flow Streams

Chapter 6 Learning Objectives Principles Used in This Chapter 1. Annuities 2. Perpetuities 3. Complex Cash Flow Streams 1. Distinguish between an ordinary annuity and an annuity due, and calculate present

### Power Series Lecture Notes

Power Series Lecture Notes A power series is a polynomial with infinitely many terms. Here is an example: \$ 0ab œ â Like a polynomial, a power series is a function of. That is, we can substitute in different

### Game Theory in Wireless Networks: A Tutorial

1 Game heory in Wireless Networks: A utorial Mark Felegyhazi, Jean-Pierre Hubaux EPFL Switzerland email: {mark.felegyhazi, jean-pierre.hubaux}@epfl.ch EPFL echnical report: LCA-REPOR-2006-002, submitted

### 1 Uncertainty and Preferences

In this chapter, we present the theory of consumer preferences on risky outcomes. The theory is then applied to study the demand for insurance. Consider the following story. John wants to mail a package

### It is time to prove some theorems. There are various strategies for doing

CHAPTER 4 Direct Proof It is time to prove some theorems. There are various strategies for doing this; we now examine the most straightforward approach, a technique called direct proof. As we begin, it

### Oligopoly: Cournot/Bertrand/Stackelberg

Outline Alternative Market Models Wirtschaftswissenschaften Humboldt Universität zu Berlin March 5, 2006 Outline 1 Introduction Introduction Alternative Market Models 2 Game, Reaction Functions, Solution