851-0585-04L Modeling and Simulating Social Systems with MATLAB
Lecture 5: Agent-based Modeling / Game Theory
Karsten Donnay and Stefano Balietti
Chair of Sociology, in particular of Modeling and Simulation
ETH Zürich, 2012-10-22
Schedule of the course (weekly sessions, 24.09. through 17.12.):
- Introduction to MATLAB
- Introduction to social-science modeling and simulations
- Working on projects (seminar thesis); Research Plan is due TODAY
- Handing in seminar thesis and giving a presentation
Goals of Lecture 5: students will
1. Consolidate their knowledge of cellular automata through a brief repetition of the main concepts introduced in the previous session.
2. Understand the important concept of Agent-based Modeling (ABM) as a powerful tool to capture complex system dynamics.
3. Get a brief introduction to Game Theory as a means to formalize complex interaction dynamics such as human cooperation and coordination.
4. Build the basic structure of a simple agent-based simulator.
Repetition
Cellular automata (CA) are abstract representations of interaction dynamics in discrete space and time.
They reduce the complexity of interaction to simple microscopic interaction rules; the canonical way to formalize interaction range are neighborhoods (Moore, von Neumann).
Approaches using cellular automata have a few natural limitations: only discrete space and time; results depend on the updating scheme (simultaneous vs. sequential); system dynamics cannot always easily be reduced to microscopic interaction rules.
CA can serve as an abstract test bed for complex systems: same results from CA and dynamical systems approach (Ex. 2).
Motivation for microscopic modeling
A number of real-world processes may be reduced to simple local interaction rules (e.g. Swarm Intelligence).
The key to correctly capturing complex system dynamics with local individual interactions is that the dynamic components of the system are atomizable in the context under consideration (e.g. pedestrians behave like particles).
This represents a reductionist (or mechanistic) point of view in contrast to a systemic (or holistic) approach; it is in line with the tradition that has fueled the success of natural science research in the past 200 years!
Game Theory provides a great framework to formalize and reduce complex (strategic) interactions.
Swarm Intelligence (1989)
Ant colonies, bird flocking, animal herding, bacterial growth, fish schooling
Swarm Intelligence
Key concepts:
- decentralized control
- interaction and learning
- self-organization
Boids (Reynolds, 1986)
A boid is a simulated individual particle or object moving about in a virtual space.
Separation: steer to avoid crowding local flockmates
Alignment: steer towards the average heading of local flockmates
Cohesion: steer to move toward the average position of local flockmates
More information: http://www.red3d.com/cwr/boids/
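The three rules can be sketched in MATLAB as a single, sequential update step. All parameter values below (neighborhood radius, steering weights) are illustrative assumptions, not Reynolds' original settings:

```matlab
% One update step of a minimal boids model.
% All parameters (radius r, steering weights) are illustrative.
N   = 50;                    % number of boids
pos = rand(N,2) * 100;       % positions in a 100x100 plane
vel = randn(N,2);            % initial velocities
r   = 10;                    % neighborhood radius
dt  = 1;                     % time step

for i = 1:N
    % distances from boid i to all boids
    d  = sqrt(sum(bsxfun(@minus, pos, pos(i,:)).^2, 2));
    nb = (d > 0) & (d < r);  % local flockmates
    if any(nb)
        % Separation: steer away from neighbors
        sep = sum(bsxfun(@minus, pos(i,:), pos(nb,:)), 1);
        % Alignment: steer towards the average heading of neighbors
        ali = mean(vel(nb,:), 1) - vel(i,:);
        % Cohesion: steer towards the average position of neighbors
        coh = mean(pos(nb,:), 1) - pos(i,:);
        vel(i,:) = vel(i,:) + 0.05*sep + 0.05*ali + 0.01*coh;
    end
end
pos = pos + vel * dt;        % move all boids
```

Note that the update is sequential: later boids already see the updated velocities of earlier ones. A simultaneous scheme would compute all new velocities from a copy of vel first (compare the updating-scheme discussion for cellular automata).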
Game Theory
Mathematical framework for strategic interactions of individuals.
Formalizes the notion of finding a best strategy (keyword: Nash equilibrium) when facing a well-defined decision situation; conceptualization through games.
The underlying assumption is that individuals optimize their payoffs (or more precisely: their utility) when faced with strategic decisions.
NOTE: there is a huge difference between game-theoretic results for one-shot games vs. repeated interactions!
Game Theory: assumptions
Game: (definition in class, or in any of the references)
Rationality: alignment between preferences, beliefs, and actions
- Players' preference is to maximize their payoff function.
- Players' beliefs can be summarized by the saying: "I am rational, you are rational. I know that you are rational, and you know that I know that you are rational. Everybody knows that I know that you are rational, and you know ..."
- Players act accordingly: best response.
Nash Equilibrium
A Nash equilibrium is a strategy profile that players can always play without regret: every player plays a best response.
No player has an incentive to unilaterally deviate from a Nash equilibrium.
It need not be unique.
Some questions:
- Is Nash an optimal strategy?
- What is the difference between a Pareto-efficient equilibrium and a Nash equilibrium?
- Why do players play Nash? Do they?
Game Theory - Human Cooperation
Prisoner's dilemma (payoffs: row player, column player):

             Defect    Cooperate
Defect       15,15     20,0
Cooperate    0,20      19,19
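For a 2x2 game like this one, pure-strategy Nash equilibria can be found in MATLAB by checking best responses cell by cell. The labeling of the first strategy as "defect" is an assumption consistent with the dilemma structure of the payoff numbers above:

```matlab
% Payoff matrices for the prisoner's dilemma shown above.
% Rows/columns: strategy 1 = defect, strategy 2 = cooperate (assumed labeling).
A = [15 20; 0 19];   % row player's payoffs
B = A';              % symmetric game: column player's payoffs

% Find all pure-strategy Nash equilibria by checking best responses:
% (i,j) is a Nash equilibrium if i is a best reply to j and vice versa.
for i = 1:2
    for j = 1:2
        if A(i,j) == max(A(:,j)) && B(i,j) == max(B(i,:))
            fprintf('Nash equilibrium: row plays %d, column plays %d\n', i, j);
        end
    end
end
```

For these payoffs the only line printed is mutual defection (1,1), even though mutual cooperation (19,19) is Pareto-superior; this is precisely the dilemma.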
The Evolution of Cooperation (Axelrod, 1984)
Tit for tat:
- being nice (cooperate first)
- pay back immediately
- easy to be recognized
Shadow of the future (discount parameter): if the probability of meeting again is large enough, it is better to be nice.
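A minimal sketch of tit for tat facing an unconditional defector in the repeated game, using the payoff numbers from the previous slide (the number of rounds and the action encoding are illustrative):

```matlab
% Iterated prisoner's dilemma: tit for tat vs. always defect.
% Actions: 1 = cooperate, 2 = defect.
% P(a1,a2) is the payoff of playing a1 against an opponent playing a2,
% with the payoff numbers from the matrix on the previous slide.
P = [19 0; 20 15];
T = 50;                      % number of rounds (illustrative)
a1 = 1;                      % tit for tat is nice: it cooperates first
a2 = 2;                      % always defect
payoff = [0 0];

for t = 1:T
    payoff = payoff + [P(a1,a2), P(a2,a1)];
    a1 = a2;                 % tit for tat: copy the opponent's last move
    a2 = 2;                  % always defect keeps defecting
end
disp(payoff)
```

After the first exploited round, tit for tat pays back immediately and both players collect the mutual-defection payoff for the rest of the game; against a second tit-for-tat player it would instead collect the cooperation payoff every round.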
Game Theory - Coordination Games
Payoffs (row player, column player):

        A      B
A       1,1    0,0
B       0,0    1,1

Game examples: Stag hunt, Assurance game, Chicken game, Hawk-dove game
Evolutionary Game Theory
Classical Game Theory suffers from a number of conceptual weaknesses (hyper-rational players, strategy selection problem, static theory).
Evolutionary Game Theory introduces evolutionary dynamics for strategy selection, i.e. the evolutionarily most stable strategy dominates the system dynamics.
Players are rational about their own decisions but do not need to know the strategies (and reasoning) of all other players!
Very well suited for agent-based modeling approaches as it allows for learning and adaptation of agents!
Evolutionary Learning
The main assumption underlying evolutionary thinking is that the entities which are more successful at a particular time will have the best chance of being present in the future.
- Selection
- Replication
- Mutation
Evolutionary Learning
Selection is a discriminating force that favors some specific entities over others.
Replication ensures that the entities (or their properties) are preserved, replicated, or inherited from one generation to another.
Selection and replication work closely together and in general tend to reduce diversity; the generation of new diversity is the job of the mutation mechanism.
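The three mechanisms can be sketched in MATLAB on a population of two strategies. The fitness rule (strategy 1 earns slightly more) and all parameter values are illustrative assumptions; in a real model, fitness would come from game payoffs:

```matlab
% Selection, replication, mutation on a population of two strategies.
% Fitness rule and all parameters are illustrative.
N  = 100;
mu = 0.01;                              % mutation probability
strategy = randi(2, 1, N);              % random initial strategies (1 or 2)

for gen = 1:50
    % assumed fitness: strategy 1 earns slightly more
    fitness = 1 + 0.1 * (strategy == 1);

    % Selection + replication: each offspring copies a parent chosen
    % with probability proportional to fitness (roulette-wheel sampling)
    cum = cumsum(fitness) / sum(fitness);
    newstrategy = zeros(1, N);
    for k = 1:N
        parent = find(rand() <= cum, 1, 'first');
        newstrategy(k) = strategy(parent);
    end

    % Mutation: a small fraction of offspring flip strategy at random
    flip = rand(1, N) < mu;
    newstrategy(flip) = 3 - newstrategy(flip);   % swaps 1 <-> 2
    strategy = newstrategy;
end
mean(strategy == 1)   % fraction of the population playing strategy 1
```

Selection and replication drive the fitter strategy towards fixation, while mutation keeps reintroducing a small amount of diversity.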
How agents can learn
Imitation: people copy the behavior of others, especially behavior that is popular or appears to yield high payoffs.
Reinforcement: people tend to adopt actions that yielded a high payoff in the past, and to avoid actions that yielded a low payoff.
Best reply: people adopt actions that optimize their expected payoff given what they expect others to do. Subjects choose best replies to the empirical frequency distribution of their opponents' previous actions, i.e. Fictitious Play. Agents may also update their beliefs about others' behavior.
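The first rule, imitation, is particularly easy to sketch: each agent compares itself to a randomly chosen other agent and copies that agent's action if it earned more. The well-mixed population and the placeholder payoffs are illustrative assumptions:

```matlab
% One round of imitation learning in a well-mixed population.
% Payoffs here are random placeholders; in a model they come from a game.
N = 100;
action = randi(2, 1, N);    % current actions of all agents
payoff = rand(1, N);        % payoffs earned in the last round

for i = 1:N
    j = randi(N);           % pick a random other agent to observe
    if j ~= i && payoff(j) > payoff(i)
        action(i) = action(j);   % copy the more successful behavior
    end
end
```

In a spatial model, agent i would instead sample j from its neighborhood (compare the cellular-automata neighborhoods from the previous lecture).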
From Factors to Actors (Macy, 2002)
"Simple and predictable local interactions can generate familiar but enigmatic global patterns, such as the diffusion of information, emergence of norms, coordination of conventions, or participation in collective action."
- Agents are autonomous
- Agents are interdependent
- Agents follow simple rules (reduction/atomization problem!)
- Agents are adaptive and backward-looking (can be a very problematic assumption!!)
Programming a simulator
Agent-based simulations: the models simulate the simultaneous operations of multiple agents, in an attempt to re-create and predict the emergence of complex phenomena.
Agents can be pedestrians, animals, customers, internet users, etc.
Programming a simulator
Agent-based simulations: at time t, each agent is characterized by a state (or states), e.g. location, dead/alive, color, level of excitation. The state is updated according to a set of behavioral rules, possibly including the agent's neighborhood, yielding a new state at time t + dt.
Programming a simulator
Programming an agent-based simulator often goes in five steps:
1. Initialization: initial state; parameters; environment
2. Time loop: processing each time step
3. Agent loop: processing each agent
4. Update: updating agent i at time t
5. Save data: for further analysis
Programming a simulator
Efficient structure:
- define and initialize all variables in a script file that serves as the "main" of your program
- execute the actual simulation as a function with the pre-defined variables as inputs
- automated analysis, visualization, etc. of the simulation output may then be implemented in the main file using the outputs of the function that runs the actual simulation
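A minimal sketch of this structure, assuming the simulation function takes the pre-defined variables as inputs (the exact parameter list and function name are illustrative):

```matlab
% main.m -- defines parameters, runs the simulation, analyzes the output.
% Function name and parameter list are illustrative.
t0 = 0; dt = 1; T = 100;    % time parameters
proba = 0.6;                % probability to switch state

% actual simulation, executed as a function of the pre-defined variables
p = simulation(t0, dt, T, proba);

% automated analysis / visualization of the simulation output
fprintf('Fraction of agents in state 1: %.2f\n', p);
```

Keeping parameters in the main script means you can sweep them (e.g. in a loop over proba) without touching the simulation code itself.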
Programming a simulator
Step 1: Defining the initial state & parameters

% Initial state
t0 = 0;    % beginning of the time line
dt = 1;    % time step
T  = 100;  % number of time steps

% Initial state of the agents
State1 = [0 0 0 0];
State2 = [1 1 1 1];
etc.
Programming a simulator
Step 1: Defining the initial state & parameters

% Initial state
t0 = 0;    % beginning of the time line
dt = 1;    % time step
T  = 100;  % number of time steps

% Initial state of the agents
State1 = zeros(1,50);
State2 = rand(1,50);
Programming a simulator
Step 2: Covering each time step

% time loop
for t = t0:dt:T
    % What happens at each time step?
    % - Update environment
    % - Update agents
end
Programming a simulator
Step 3: Covering each agent

% agent loop
for i = 1:length(states)
    % What happens for each agent?
end
Programming a simulator
Step 4: Updating agent i at time t

% update
% Each agent has a 60% chance to switch to state 1
randomvalue = rand();
if (randomvalue < 0.6)
    states(i) = 1;
else
    states(i) = 0;
end

Use sub-functions for better organization!!
Programming a simulator
Step 4: Updating agent i at time t

% update
% Each agent has probability proba to switch to state 1
randomvalue = rand();
if (randomvalue < proba)
    states(i) = 1;
else
    states(i) = 0;
end

Define parameters in the first part!!
Programming a simulator
Step 4: Updating agent i at time t

% Initial state
t0 = 0;      % beginning of the time line
dt = 1;      % time step
T  = 100;    % number of time steps
proba = 0.6; % probability to switch state

Define parameters in the first part!!
Programming a simulator
Step 5: Final processing

% Outputs and final processing
propalive = sum(states)/length(states);

And return propalive as the output of the simulation.
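Putting steps 1 to 5 together, a minimal version of the simulation function could look like this; it is a sketch assembled from the code fragments above, so the concrete values (50 agents, proba = 0.6) are the example's, not a prescription:

```matlab
function propalive = simulation()
% SIMULATION  Minimal agent-based simulator (sketch of the slide example).

% Step 1: initial state and parameters
t0 = 0; dt = 1; T = 100;
proba  = 0.6;                 % probability to switch to state 1
states = zeros(1, 50);        % all agents start in state 0

% Step 2: time loop
for t = t0:dt:T
    % Step 3: agent loop
    for i = 1:length(states)
        % Step 4: update agent i
        if rand() < proba
            states(i) = 1;
        else
            states(i) = 0;
        end
    end
end

% Step 5: final processing
propalive = sum(states) / length(states);
```

Each call returns the fraction of agents ending in state 1; because the updates are random, repeated calls give slightly different values around proba.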
Programming a simulator
Running the simulation

>> p = simulation()
p = 0.54
>> p = simulation()
p = 0.72
Programming a simulator
Alternatively: a framework for the simulator

% Running N simulations
N = 1000;  % number of simulations
for n = 1:N
    p(n) = simulation();
end

Use a framework to run N successive simulations and store the result each time.
Programming a simulator
Alternatively: a framework for the simulator

>> hist(p)
Exercise 1
There is some additional material on the iterated prisoner's dilemma available on the course website.
If you are not familiar with the game, the ready-to-use implementation of the game will help you get a better understanding of its dynamics.
Exercise 2
Start thinking about how to adapt the general simulation engine shown today for your project. Problems to consider:
- Workspace organization (files, functions, data)
- Parameter initialization
- Number of agents
- Organization of loops
- Which data to save, how frequently, in which format
- What kind of interactions are relevant
References
- Fudenberg & Tirole, Game Theory, MIT Press (2000)
- Myerson, Game Theory, Harvard University Press (1997)
- Herbert Gintis, Game Theory Evolving, Princeton University Press, 2nd Edition (2009)
- Uhrmacher & Weyns (Editors), Multi-Agent Systems: Simulation and Applications, CRC Press (2009)
- A step-by-step guide to computing the Nash equilibrium: http://uwacadweb.uwyo.edu/shogren/jaysho/nashnotes.pdf
- https://github.com/sandermatt/ethaxelrodnoise