Mean Field Stochastic Control




Mean Field Stochastic Control
Peter E. Caines
IEEE Control Systems Society Bode Lecture
48th Conference on Decision and Control and 28th Chinese Control Conference, Shanghai, 2009
McGill University
Caines, 2009 p.1

Co-Authors: Minyi Huang, Roland Malhamé

Collaborators & Students: Arman Kizilkale, Arthur Lazarte, Zhongjing Ma, Mojtaba Nourian

Overview
Overall Objective: Present a theory of decentralized decision-making in stochastic dynamical systems with many competing or cooperating agents.
Outline:
- A motivating control problem from Code Division Multiple Access (CDMA) uplink power control
- Motivational notions from Statistical Mechanics
- The basic notions of Mean Field (MF) control and game theory
- The Nash Certainty Equivalence (NCE) methodology
- Main NCE results for Linear-Quadratic-Gaussian (LQG) systems
- NCE systems with interaction locality: geometry enters the models
- Adaptive NCE system theory

Part I CDMA Power Control: Base Station & Individual Agents

Part I CDMA Power Control
Lognormal channel attenuation, $i$th channel: $dx_i = -a(x_i + b)\,dt + \sigma\,dw_i$, $1 \le i \le N$.
Received power = channel attenuation $\times$ transmitted power $= e^{x_i(t)}\,p_i(t)$ (Charalambous, Menemenlis; 1999).
Signal to interference ratio (Agent $i$) at the base station:
$\mathrm{SIR}_i = \dfrac{e^{x_i} p_i}{(\beta/N)\sum_{j \ne i}^N e^{x_j} p_j + \eta}$
How to optimize all the individual SIRs? It is self-defeating for everyone to increase their power.
Humans display the Cocktail Party Effect: tune hearing to the frequency of a friend's voice.
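The SIR expression above is straightforward to evaluate numerically. A minimal sketch (Python with NumPy); the channel states, powers, and constants below are made-up assumptions for illustration, not values from the lecture:

```python
import numpy as np

def sir(x, p, beta, eta, i):
    """SIR_i = e^{x_i} p_i / ((beta/N) * sum_{j != i} e^{x_j} p_j + eta)."""
    N = len(x)
    received = np.exp(x) * p                      # attenuated powers at the base station
    interference = (beta / N) * (received.sum() - received[i]) + eta
    return received[i] / interference

x = np.array([0.0, -0.5, 0.3])   # log channel states (assumed values)
p = np.array([1.0, 2.0, 1.5])    # transmitted powers (assumed values)
print(sir(x, p, beta=1.0, eta=0.1, i=0))
```

Raising every $p_j$ by the same factor raises each agent's interference term along with its signal, which is the self-defeating effect noted above.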

Part I CDMA Power Control
Can maximize $\sum_{i=1}^N \mathrm{SIR}_i$ with centralized control (HCM, 2004).
Since centralized control is not feasible for complex systems, how can such systems be optimized using decentralized control?
Idea: Use large population properties of the system together with basic notions of game theory.

Part II Statistical Mechanics: The Statistical Mechanics Connection

Part II Statistical Mechanics
A foundation for thermodynamics was provided by the Statistical Mechanics of Boltzmann, Maxwell and Gibbs.
The basic Ideal Gas Model describes the interaction of a huge number of essentially identical particles.
SM explains very complex individual behaviours by PDEs for a continuum limit of the mass of particles.
[Animation of Particles]

Part II Statistical Mechanics
Start from the equations for the perfectly elastic (i.e. hard) sphere mechanical collisions of each pair of particles:
Velocities before collision: $v, V$
Velocities after collision: $v' = v'(v, V, t)$, $V' = V'(v, V, t)$
These collisions satisfy the conservation laws, and hence:
Conservation of Momentum: $m(v' + V') = m(v + V)$
Conservation of Energy: $\tfrac{1}{2} m \left( v'^2 + V'^2 \right) = \tfrac{1}{2} m \left( v^2 + V^2 \right)$

Part II Boltzmann's Equation
The assumption of statistical independence of particles (the Propagation of Chaos Hypothesis) gives Boltzmann's PDE for the behaviour of an infinite population of particles:
$\dfrac{\partial p_t(v,x)}{\partial t} + \nabla_x p_t(v,x) \cdot v = \displaystyle\int Q(\theta,\psi)\,d\theta\,d\psi \int |v - V| \left( p_t(v',x)\,p_t(V',x) - p_t(v,x)\,p_t(V,x) \right) d^3V$
where $v' = v'(v,V,t)$, $V' = V'(v,V,t)$.

Part II Statistical Mechanics
Entropy $H$: $H(t) \triangleq -\int p_t(v,x) \log p_t(v,x)\, d^3x\, d^3v$
The H-Theorem: $\dfrac{dH(t)}{dt} \ge 0$
$H^* \triangleq \sup_{t \ge 0} H(t)$ occurs at $p^* = N(\bar v, \Sigma^*)$ (the Maxwell Distribution)

Part II Statistical Mechanics: Key Intuition
Extremely complex large population particle systems may have simple continuum limits with distinctive bulk properties.

Part II Statistical Mechanics
[Animation of Particles]

Part II Statistical Mechanics: Control of Natural Entropy Increase
Feedback Control Law (non-physical interactions): at each collision, the total energy of each pair of particles is shared equally while physical trajectories are retained. Energy is conserved.
[Animation of Particles]

Part II Key Intuition
A sufficiently large mass of individuals may be treated as a continuum.
Local control of particle (or agent) behaviour can result in (partial) control of the continuum.

Part III Game Theoretic Control Systems
Game theoretic control systems of interest will have many competing agents: a large ensemble of players seeking their individual interest.
Fundamental issue: the relation between the actions of each individual agent and the resulting mass behaviour.

Part III Basic LQG Game Problem
Individual agent's dynamics:
$dx_i = (a_i x_i + b_i u_i)\,dt + \sigma_i\,dw_i$, $1 \le i \le N$
(scalar case only for simplicity of notation)
- $x_i$: state of the $i$th agent
- $u_i$: control
- $w_i$: disturbance (standard Wiener process)
- $N$: population size
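The agent dynamics above are easy to simulate by Euler-Maruyama discretization. A minimal sketch (Python with NumPy); the parameter ranges, horizon, and zero control input are assumptions made for illustration:

```python
import numpy as np

# Euler-Maruyama simulation of dx_i = (a_i x_i + b_i u_i) dt + sigma_i dw_i,
# with u_i = 0 for illustration. All parameter values are assumed.
rng = np.random.default_rng(0)
N, T, dt = 100, 5.0, 0.01
a = rng.uniform(-1.0, -0.2, N)        # stable open-loop dynamics (assumption)
b = np.ones(N)
sigma = 0.3 * np.ones(N)
steps = int(T / dt)

x = np.zeros((steps + 1, N))
for k in range(steps):
    u = np.zeros(N)                   # uncontrolled, for illustration
    dw = rng.normal(0.0, np.sqrt(dt), N)
    x[k + 1] = x[k] + (a * x[k] + b * u) * dt + sigma * dw

print(x[-1].mean())                   # empirical population mean at time T
```

The empirical mean across agents is the quantity that the cost coupling on the next slide builds on.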

Part III Basic LQG Game Problem
Individual agent's cost:
$J_i(u_i, \nu) \triangleq E \int_0^\infty e^{-\rho t} \left[ (x_i - \nu)^2 + r u_i^2 \right] dt$
Basic case: $\nu \triangleq \gamma \left( \frac{1}{N} \sum_{k \ne i}^N x_k + \eta \right)$
More general case: $\nu \triangleq \Phi \left( \frac{1}{N} \sum_{k \ne i}^N x_k + \eta \right)$, $\Phi$ Lipschitz
Main feature: agents are coupled via their costs.
The tracked process $\nu$ is (i) stochastic, (ii) depends on other agents' control laws, and (iii) makes it infeasible for $x_i$ to track all $x_k$ trajectories for large $N$.

Part III Background & Current Related Work
Background: 40+ years of work on stochastic dynamic games and team problems: Witsenhausen, Varaiya, Ho, Basar, et al.
Current related work:
- Industry dynamics with many firms: Markov models and Oblivious Equilibria (Weintraub, Benkard, Van Roy, Adlakha, Johari & Goldsmith, 2005, 2008-)
- Mean Field Games: stochastic control of many agent systems with applications to finance (Lasry and Lions, 2006-)

Part III Large Population Models with Game Theory Features
- Economic models (e.g., oligopoly production output planning): each agent receives the average effect of others via the market; Cournot-Nash equilibria (Lambson)
- Advertising competition game models (Erickson)
- Wireless network resource allocation (Alpcan, Basar, Srikant, Altman; HCM)
- Admission control in communication networks (point processes) (Ma, RM, PEC)
- Feedback control of large scale mechanics-like systems: synchronization of coupled oscillators (Yin, Mehta, Meyn, Shanbhag)

Part III Large Population Models with Game Theory Features
- Public health: voluntary vaccination games (Bauch & Earn)
- Biology: pattern formation in plankton (S. Levine et al.); stochastic swarming (Morale et al.); PDE swarming models (Bertozzi et al.)
- Biology: "selfish herd" behaviour: fish reducing individual predation risk by joining the group (Reluga & Viscido)
- Sociology: urban economics and urban behaviour (Brock and Durlauf et al.)

Part III An Example: Collective Motion of Fish (Schooling)

Part III An Example: School of Sardines
[Video: School of Sardines]

Part III Names: Why Mean Field Theory?
- Standard term in physics for the replacement of a complex Hamiltonian $H$ by an average $\bar H$
- A notion used in many fields for approximating mass effects in complex systems
- We shall use Mean Field as the general term and Nash Certainty Equivalence (NCE) in specific control theory contexts

Part III Names: Why Nash Certainty Equivalence Control (NCE Control)?
Because the equilibria established are Nash equilibria, and the feedback control laws depend upon an "equivalence" assumed between unobserved system properties and agent-computed approximations, e.g. $\theta_0$ and $\hat\theta^N$ (which are exact in a limit).

Part III Preliminary Optimal LQG Tracking
Take $x^*$ (bounded, continuous) for the scalar model:
$dx_i = a_i x_i\,dt + b u_i\,dt + \sigma_i\,dw_i$
$J_i(u_i, x^*) = E \int_0^\infty e^{-\rho t} \left[ (x_i - x^*)^2 + r u_i^2 \right] dt$
Riccati equation: $\rho \Pi_i = 2 a_i \Pi_i - \frac{b^2}{r} \Pi_i^2 + 1$, $\Pi_i > 0$
Set $\beta_1 = -a_i + \frac{b^2}{r}\Pi_i$, $\beta_2 = -a_i + \frac{b^2}{r}\Pi_i + \rho$, and assume $\beta_1 > 0$.
Mass offset control: $\rho s_i = \frac{d s_i}{dt} + a_i s_i - \frac{b^2}{r}\Pi_i s_i - x^*$
Optimal tracking control: $u_i = -\frac{b}{r}(\Pi_i x_i + s_i)$
The boundedness condition on $x^*$ implies the existence of a unique solution $s_i$.
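The scalar algebraic Riccati equation above is a quadratic in $\Pi_i$ and its positive root is available in closed form. A minimal sketch (Python); the parameter values passed in are assumptions for illustration:

```python
import math

# Positive root of rho*Pi = 2a*Pi - (b^2/r)*Pi^2 + 1, i.e.
# (b^2/r)*Pi^2 + (rho - 2a)*Pi - 1 = 0, and the resulting rates beta1, beta2.
def riccati(a, b, r, rho):
    c = b * b / r
    Pi = ((2 * a - rho) + math.sqrt((rho - 2 * a) ** 2 + 4 * c)) / (2 * c)
    beta1 = -a + c * Pi
    beta2 = beta1 + rho
    return Pi, beta1, beta2

Pi, beta1, beta2 = riccati(a=-0.5, b=1.0, r=1.0, rho=0.1)
print(Pi, beta1, beta2)
```

With $\beta_1 > 0$ the closed-loop drift $a_i - \frac{b^2}{r}\Pi_i = -\beta_1$ is stable, which is the standing assumption on the slide.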

Part III Key Intuition
When the tracked signal is replaced by the deterministic mean state of the mass of agents:
Agent's feedback = feedback of agent's local stochastic state + feedback of deterministic mass offset
Think Globally, Act Locally (Geddes, Alinsky, Rudie-Wonham)

Part III Key Features of NCE Dynamics
Under certain conditions, the mass effect concentrates into a deterministic, hence predictable, quantity $m(t)$.
A given agent only reacts to its own state and the mass behaviour $m(t)$; any other individual agent becomes invisible.
The individual behaviours collectively reproduce that mass behaviour.

Part III Population Parameter Distribution
Information on other agents is available to each agent only via $F(\cdot)$.
Define the empirical distribution associated with $N$ agents:
$F_N(x) = \frac{1}{N} \sum_{i=1}^N I(a_i \le x)$, $x \in \mathbb{R}$
H1: There exists a distribution function $F$ on $\mathbb{R}$ such that $F_N \to F$ weakly as $N \to \infty$.

Part III LQG-NCE Equation Scheme
The Fundamental NCE Equation System. Continuum of systems: $a \in A$; common $b$ for simplicity.
$\rho \Pi_a = 2 a \Pi_a - \frac{b^2}{r}\Pi_a^2 + 1$, $\Pi_a > 0$ (Riccati equation)
$\rho s_a = \frac{d s_a}{dt} + a s_a - \frac{b^2}{r}\Pi_a s_a - x^*$
$\frac{d \bar x_a}{dt} = \left( a - \frac{b^2}{r}\Pi_a \right) \bar x_a - \frac{b^2}{r} s_a$
$\bar x(t) = \int_A \bar x_a(t)\,dF(a)$
$x^*(t) = \gamma \left( \bar x(t) + \eta \right)$, $t \ge 0$
The individual control action $u_a = -\frac{b}{r}(\Pi_a x_a + s_a)$ is optimal with respect to the tracked $x^*$.
Does there exist a solution $(\bar x_a, s_a, x^*;\ a \in A)$? Yes: by a fixed point theorem.
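The fixed-point character of the scheme can be seen in steady state: setting the time derivatives to zero gives $s_a = -x^*/\beta_2(a)$ and $\bar x_a = \frac{b^2}{r} x^*/(\beta_1(a)\beta_2(a))$, so $x^* = \gamma(\bar x + \eta)$ becomes a one-dimensional fixed-point equation whose contraction condition is exactly of the H2 form. A sketch of the iteration (Python); the uniform distribution for $a$ and the constants are assumptions:

```python
import math

# Steady-state NCE fixed point: iterate x* <- gamma * (m * x* + eta), where
# m = int (b^2/r)/(beta1(a) beta2(a)) dF(a). All parameter values assumed.
def beta12(a, b, r, rho):
    c = b * b / r
    Pi = ((2 * a - rho) + math.sqrt((rho - 2 * a) ** 2 + 4 * c)) / (2 * c)
    beta1 = -a + c * Pi
    return beta1, beta1 + rho

b, r, rho, gamma, eta = 1.0, 1.0, 0.1, 0.6, 0.25
A = [-1.5 + 1.0 * k / 99 for k in range(100)]       # grid over F's support (uniform)
m = sum((b * b / r) / (b1 * b2)
        for b1, b2 in (beta12(a, b, r, rho) for a in A)) / len(A)
assert gamma * m < 1                                 # contraction condition (cf. H2)

xstar = 0.0
for _ in range(200):                                 # fixed-point iteration
    xstar = gamma * (m * xstar + eta)

print(xstar)                                         # agrees with gamma*eta/(1 - gamma*m)
```

The unique limit $x^* = \gamma\eta/(1 - \gamma m)$ exists precisely because $\gamma m < 1$, mirroring why H2 guarantees a unique solution of the NCE equations.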

Part III NCE Feedback Control
Proposed MF solution to the large population LQG game problem.
The finite system of $N$ agents with dynamics:
$dx_i = a_i x_i\,dt + b u_i\,dt + \sigma_i\,dw_i$, $1 \le i \le N$, $t \ge 0$
Let $u_{-i} \triangleq (u_1, \ldots, u_{i-1}, u_{i+1}, \ldots, u_N)$; then the individual cost is
$J_i(u_i, u_{-i}) = E \int_0^\infty e^{-\rho t} \left\{ \left[ x_i - \gamma \left( \frac{1}{N} \sum_{k \ne i}^N x_k + \eta \right) \right]^2 + r u_i^2 \right\} dt$
Algorithm: for the $i$th agent with parameter $(a_i, b)$, compute:
- $x^*$ using the NCE Equation System
- $\rho \Pi_i = 2 a_i \Pi_i - \frac{b^2}{r}\Pi_i^2 + 1$
- $\rho s_i = \frac{d s_i}{dt} + a_i s_i - \frac{b^2}{r}\Pi_i s_i - x^*$
- $u_i = -\frac{b}{r}(\Pi_i x_i + s_i)$

Part III Saddle Point Nash Equilibrium
Agent $y$ is a maximizer; agent $x$ is a minimizer.
[Surface plot of the saddle]

Part III Nash Equilibrium
Agent $A_i$'s set of control inputs $U_i$ consists of all feedback controls $u_i(t)$ adapted to $\mathcal{F}_N \triangleq \sigma(x_j(\tau);\ \tau \le t,\ 1 \le j \le N)$.
Definition: The set of controls $U^0 = \{u^0_i;\ 1 \le i \le N\}$ generates a Nash Equilibrium w.r.t. the costs $\{J_i;\ 1 \le i \le N\}$ if, for each $i$,
$J_i(u^0_i, u^0_{-i}) = \inf_{u_i \in U_i} J_i(u_i, u^0_{-i})$

Part III ε-nash Equilibrium A i s set of control inputs U i consists of all feedback controls u i (t) adapted to F N σ(x j (τ);τ t, 1 j N). Definition Given ε > 0, the set of controls U 0 = {u 0 i;1 i N} generates an ε-nash Equilibrium w.r.t. the costs {J i ; 1 i N} if, for each i, J i (u 0 i,u0 i ) ε inf u i U i J i (u i,u 0 i ) J i(u 0 i,u0 i ) Caines, 2009 p.35

Part III Assumptions
H2: $\int_A \dfrac{b^2 \gamma}{r\,\beta_1(a)\,\beta_2(a)}\,dF(a) < 1$.
H3: All agents have independent zero mean initial conditions. In addition, $\{w_i,\ 1 \le i \le N\}$ are mutually independent and independent of the initial conditions.
H4: $A$ is a compact set with $\inf_{a \in A} \beta_1(a) \ge \epsilon$ for some $\epsilon > 0$, and $\sup_{i \ge 1} \left[ \sigma_i^2 + E x_i^2(0) \right] < \infty$.

Part III NCE Control: First Main Result
Theorem (MH, PEC, RPM, 2003). The NCE Control Algorithm generates a set of controls $U^N_{nce} = \{u^0_i;\ 1 \le i \le N\}$, $1 \le N < \infty$, with $u^0_i = -\frac{b}{r}(\Pi_i x_i + s_i)$, such that:
(i) The NCE equations have a unique solution.
(ii) All agent systems $S(A_i)$, $1 \le i \le N$, are second order stable.
(iii) $\{U^N_{nce};\ 1 \le N < \infty\}$ yields an $\varepsilon$-Nash equilibrium for all $\varepsilon$, i.e. $\forall \varepsilon > 0$ there exists $N(\varepsilon)$ such that for all $N \ge N(\varepsilon)$
$J_i(u^0_i, u^0_{-i}) - \varepsilon \le \inf_{u_i \in U_i} J_i(u_i, u^0_{-i}) \le J_i(u^0_i, u^0_{-i})$,
where $u_i \in U_i$ is adapted to $\mathcal{F}_N$.

Part III NCE Control: Key Observations
The information set for NCE Control is minimal and completely local, since Agent $A_i$'s control depends on:
(i) Agent $A_i$'s own state $x_i(t)$;
(ii) statistical information $F(\theta)$ on the dynamical parameters of the mass of agents.
Hence NCE Control is truly decentralized.
All trajectories are statistically independent for all finite population sizes $N$.

Part III Key Intuition
A fixed point theorem shows that all agents' competitive control actions (based on local states and assumed mass behaviour) reproduce that mass behaviour.
Hence NCE Stochastic Control results in a stochastic dynamic Nash Equilibrium.

Part III Centralized versus Decentralized NCE Costs
- Per agent infinite population NCE cost: $v_{indiv}$ (the dashes)
- Centralized cost: $J = \sum_{i=1}^N J_i$; optimal centralized control cost per agent: $v_N / N$
- Per agent infinite population centralized cost: $v_\infty = \lim_{N \to \infty} v_N / N$ (the solid curve)
- Cost gap as a function of the linking parameter $\gamma$: $v_{indiv} - v_\infty$ (the circles)
[Plot: cost gap versus the range of the linking parameter $\gamma$ in the cost]

Part IV Localized NCE Theory
Motivation: control of physically distributed controllers & sensors; geographically distributed markets.
Individual dynamics (note vector states and inputs!):
$dx_{i,p_i}(t) = A_i x_{i,p_i}(t)\,dt + B_i u_{i,p_i}(t)\,dt + C\,dw_{i,p_i}(t)$, $1 \le i \le N$, $t \ge 0$
The state of agent $A_i$ at location $p_i$ is denoted $x_{i,p_i}$, and independence hypotheses are assumed. Dynamical parameters are independent of location, hence $A_{i,p_i} = A_i$.
Interaction locality:
$J_{i,p_i} = E \int_0^\infty e^{-\rho t} \left\{ [x_{i,p_i} - \Phi_{i,p_i}]^T Q [x_{i,p_i} - \Phi_{i,p_i}] + u_{i,p_i}^T R\, u_{i,p_i} \right\} dt$,
where $\Phi_{i,p_i} = \gamma \left( \sum_{j \ne i}^N \omega^{(N)}_{p_i p_j} x_{j,p_j} + \eta \right)$

Part IV Example of Weight Allocation
Consider the 2-D interaction: partition $[-1, 1] \times [-1, 1]$ into a 2-D lattice. Weight decays with distance by the rule
$\omega^{(N)}_{p_i p_j} = c\,|p_i - p_j|^{-\lambda}$
where $c$ is the normalizing factor and $\lambda \in (0, 2)$.
[Surface plot of the weights over the lattice]
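The normalized weight matrix for this lattice example is easy to build explicitly. A minimal sketch (Python with NumPy); the lattice resolution and $\lambda = 1$ are assumed values within the stated range:

```python
import numpy as np

# Weights omega_ij = c * |p_i - p_j|^(-lambda) on a lattice in [-1,1]^2,
# with c chosen per row so that sum_{j != i} omega_ij = 1 (condition C1(b)).
lam = 1.0                                   # assumed lambda in (0, 2)
grid = np.linspace(-1.0, 1.0, 11)
pts = np.array([(gx, gy) for gx in grid for gy in grid])

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)                 # omega_ii plays no role
w = d ** (-lam)
w /= w.sum(axis=1, keepdims=True)           # row-normalize

print(w.shape, w.sum(axis=1)[:3])           # each row sums to 1
```

Refining the lattice shrinks the largest single weight, so the sum of squared row weights tends to zero, which is the non-concentration condition C1(c) on the next slide.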

Part IV Assumptions on Weight Allocations
C1: The weight allocation satisfies:
(a) $\omega^{(N)}_{p_i p_j} \ge 0$, $\forall i, j$;
(b) $\sum_{j \ne i}^N \omega^{(N)}_{p_i p_j} = 1$, $\forall i$;
(c) $\epsilon^N_\omega \triangleq \sup_{1 \le i \le N} \sum_{j \ne i}^N \left( \omega^{(N)}_{p_i p_j} \right)^2 \to 0$ as $N \to \infty$.
Condition (c) implies the weights cannot become highly concentrated on a small number of neighbours.

Part IV Localized NCE: Parametric Assumptions
For the continuum of systems $\theta = [A_\theta, B_\theta, C]$, assume $[Q^{1/2}, A_\theta]$ observable and $[A_\theta, B_\theta]$ controllable.
As in the standard case, let $\Pi_\theta > 0$ satisfy the Riccati equation
$\rho \Pi_\theta = A_\theta^T \Pi_\theta + \Pi_\theta A_\theta - \Pi_\theta B_\theta R^{-1} B_\theta^T \Pi_\theta + Q$
Denote $A_{1,\theta} = A_\theta - B_\theta R^{-1} B_\theta^T \Pi_\theta$ and $A_{2,\theta} = A_\theta - B_\theta R^{-1} B_\theta^T \Pi_\theta - \rho I$.
C2: The $A_1$ eigenvalues satisfy $\mathrm{Re}\,\lambda(A_1) < 0$ (necessarily $\mathrm{Re}\,\lambda(A_2) < 0$), and the analogue of $\int_\Theta [\gamma b_\theta^2]/[r \beta_{1,\theta} \beta_{2,\theta}]\,dF(\theta) < 1$ holds.
Take $[\underline{p}, \bar{p}]$ as the locality index set (in $\mathbb{R}^1$).
C3: There exist location weight distributions $F_p(\cdot)$, $p \in [\underline{p}, \bar{p}]$, satisfying certain technical conditions.

Part IV Localized NCE: Parametric Assumptions
Convergence of empirical dynamical parameter and weight distributions:
1. Parameter distribution: $F_N(\theta) = \frac{1}{N} \sum_{i=1}^N I(\theta_i \le \theta) \to F(\theta)$ weakly as $N \to \infty$; e.g. $F(\theta) \sim N(\bar\theta, \Sigma)$.
2. Location weight distribution: $\lim_{N \to \infty} \sum_{p_j \le q} \omega^{(N)}_{p p_j} = F_p(q)$, and jointly
$\operatorname*{w-lim}_{N \to \infty} \sum_{j=1}^N \omega^{(N)}_{p_i p_j}\, I(p_j \le S,\ \theta_j \le H) \Big|_{p = p_i} = \int_{q \le S} \int_{\theta \le H} dF(\theta)\,dF_p(q)$;
e.g. $\omega^{(N)}_{p_i p_j} = c_i(N)\,|p_j - p_i|^{-1/2}$, $1 \le i \ne j \le N$.

Part IV NCE Equations with Interaction Locality
Each agent $A_{\theta,p}$ computes:
$\rho s_{\theta,p} = \dfrac{d s_{\theta,p}}{dt} + A_\theta^T s_{\theta,p} - \Pi_\theta B_\theta R^{-1} B_\theta^T s_{\theta,p} - x^*_p$
$\dfrac{d \bar x_{\theta,p}}{dt} = \left( A_\theta - B_\theta R^{-1} B_\theta^T \Pi_\theta \right) \bar x_{\theta,p} - B_\theta R^{-1} B_\theta^T s_{\theta,p}$, $\bar x(0) = 0$
$\bar x_p(t) = \int_\Theta \int_P \bar x_{\theta',p'}(t)\,dF(\theta')\,dF_p(p')$
$x^*_p(t) = \gamma \left( \bar x_p(t) + \eta \right)$, $t \ge 0$
The mean field effect now depends on the locations of the agents.

Part IV Localized NCE: Main Result
Theorem (HCM, 2008). Under C1-C4, let the Localized NCE Control Algorithm generate a set of controls $U^N_{nce} = \{u^0_i;\ 1 \le i \le N\}$, $1 \le N < \infty$, where for agent $A_{i,p_i}$,
$u^0_i = -R^{-1} B_i^T (\Pi_\theta x_i + s_{i,p_i})$
with $s_{i,p_i}$ given by the Localized NCE Equation System via $\theta := \theta_i$, $p := p_i$. Then:
(i) There exists a unique bounded solution $(s_p(\cdot), \bar x_p(\cdot), x^*_p(\cdot),\ p \in A)$ to the NCE equation system.
(ii) All agent systems $S(A_i)$, $1 \le i \le N$, are second order stable.
(iii) $\{U^N_{nce};\ 1 \le N < \infty\}$ yields an $\varepsilon$-Nash equilibrium for all $\varepsilon$, i.e. $\forall \varepsilon > 0$ there exists $N(\varepsilon)$ such that for all $N \ge N(\varepsilon)$
$J_i(u^0_i, u^0_{-i}) - \varepsilon \le \inf_{u_i \in U_i} J_i(u_i, u^0_{-i}) \le J_i(u^0_i, u^0_{-i})$,
where $u^0_i \in U^N_{nce}$, and $u_i \in U_i$ is adapted to $\mathcal{F}_N$.

Part IV Separated and Linked Populations
1-D & 2-D cases.
Locally: $\omega^{(N)}_{p_i p_j} \propto |p_i - p_j|^{-0.5}$; connection to the other population: $\omega^{(N)}_{p_i p_j} = 0$.
For Population 1: $A = 1$, $B = 1$. For Population 2: $A = 0$, $B = 1$. $\eta = 0.25$.
At the 20th second the two populations merge; globally: $\omega^{(N)}_{p_i p_j} \propto |p_i - p_j|^{-0.5}$.

Part IV Separated and Linked Populations
[Animations: 1-D line system and 2-D system]

Part V Nonlinear MF Systems
In the infinite population limit, a representative agent satisfies a controlled McKean-Vlasov equation:
$dx_t = f[x_t, u_t, \mu_t]\,dt + \sigma\,dw_t$, $0 \le t \le T$
where $f[x, u, \mu_t] = \int_{\mathbb{R}} f(x, u, y)\,\mu_t(dy)$, with $x_0, \mu_0$ given, and $\mu_t(\cdot)$ = distribution of population states at $t \in [0, T]$.
In the infinite population limit, the individual agent's cost is
$J(u, \mu) = E \int_0^T L[x_t, u_t, \mu_t]\,dt$, where $L[x, u, \mu_t] = \int_{\mathbb{R}} L(x, u, y)\,\mu_t(dy)$.
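A standard way to approximate McKean-Vlasov dynamics numerically is to replace $\mu_t$ by the empirical measure of a finite particle system. A minimal sketch (Python with NumPy), with an assumed linear kernel $f(x, u, y) = -(x - y) + u$ and $u = 0$, so that $f[x, 0, \mu_t] = -(x - \bar x_t)$; none of these choices come from the lecture:

```python
import numpy as np

# Particle approximation of dx_t = f[x_t, u_t, mu_t] dt + sigma dw_t:
# each particle drifts toward the empirical mean (assumed f), plus noise.
rng = np.random.default_rng(2)
M, T, dt, sigma = 500, 2.0, 0.01, 0.2
x = rng.normal(1.0, 1.0, M)            # mu_0: assumed initial distribution

for _ in range(int(T / dt)):
    mean = x.mean()                     # empirical stand-in for integrating against mu_t
    drift = -(x - mean)                 # f[x, 0, mu_t] for the assumed kernel
    x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=M)

print(x.mean(), x.std())                # mean roughly preserved; spread contracts
```

As $M \to \infty$ the empirical measure of the particles converges to the law $\mu_t$ of the representative agent, which is the propagation-of-chaos idea invoked in Part II.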

Part V Mean Field and McK-V-HJB Theory
Mean Field HJB equation (HMC, 2006):
$-\dfrac{\partial V}{\partial t} = \inf_{u \in U} \left\{ f[x, u, \mu_t] \dfrac{\partial V}{\partial x} + L[x, u, \mu_t] \right\} + \dfrac{\sigma^2}{2} \dfrac{\partial^2 V}{\partial x^2}$, $V(T, x) = 0$, $(t, x) \in [0, T) \times \mathbb{R}$
Optimal control: $u_t = \varphi(t, x \mid \mu_t)$, $(t, x) \in [0, T] \times \mathbb{R}$
Closed-loop McK-V equation:
$dx_t = f[x_t, \varphi(t, x \mid \mu_\cdot), \mu_t]\,dt + \sigma\,dw_t$, $0 \le t \le T$

Part V Mean Field and McK-V-HJB Theory
The Nash Certainty Equivalence Principle is thus expressed in terms of the McKean-Vlasov HJB equation, hence achieving the highest Great Name Frequency possible for a Systems and Control Theory result.

Part VI Adaptive NCE Theory: Certainty Equivalence
Stochastic Adaptive Control (SAC) replaces unknown parameters by their recursively generated estimates.
Key problem: to show this results in asymptotically optimal system behaviour.

Part VI Adaptive NCE: Self & Population Identification
Self-identification case (individual parameters to be estimated):
- Agent $A_i$ estimates its own dynamical parameters $\theta_i^T = (A_i, B_i)$.
- The parameter distribution of the mass of agents is given: $F_\zeta = F_\zeta(\theta)$, $\theta \in A \subset \mathbb{R}^{n(n+m)}$, $\zeta \in P \subset \mathbb{R}^p$.
Self + population identification case (individual & population distribution parameters to be estimated):
- $A_i$ observes a random subset $\mathrm{Obs}_i(N)$ of all agents such that $|\mathrm{Obs}_i(N)|/N \to 0$ as $N \to \infty$.
- Agent $A_i$ estimates its own parameter $\theta_i$ and the population distribution parameter $\zeta$: $F_\zeta(\theta)$, $\theta \in A$, $\zeta \in P$.

Part VI SAC NCE Control Algorithm
WLS equations:
$\hat\theta_i^T(t) = [\hat A_i, \hat B_i](t)$, $\psi_t^T = [x_t^T, u_t^T]$
$d\hat\theta_i(t) = a(t)\,\Psi_t\,\psi_t \left[ dx_i^T(t) - \psi_t^T \hat\theta_i(t)\,dt \right]$, $1 \le i \le N$
$d\Psi_t = -a(t)\,\Psi_t\,\psi_t\,\psi_t^T\,\Psi_t\,dt$
Long Range Average (LRA) cost function:
$J_i(\hat u_i, \hat u_{-i}) = \limsup_{T \to \infty} \dfrac{1}{T} \int_0^T \left\{ [x_i(t) - \Phi_i(t)]^T Q [x_i(t) - \Phi_i(t)] + \hat u_i^T(t) R\, \hat u_i(t) \right\} dt$, $1 \le i \le N$, a.s.
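The continuous-time WLS equations above have a familiar discrete-time analogue, recursive least squares, which conveys the same mechanism: the estimate is corrected by the prediction error, and the matrix $\Psi_t$ (the inverse information matrix $P$ below) shrinks as data accumulates. A sketch under assumed scalar dynamics $x_{k+1} = a x_k + b u_k + \text{noise}$; it is an illustration, not the lecture's continuous-time estimator:

```python
import numpy as np

# Discrete-time RLS for theta = (a, b) from regressor psi_k = [x_k, u_k].
rng = np.random.default_rng(1)
theta = np.array([0.8, 0.5])          # true (a, b), assumed values
P = 100.0 * np.eye(2)                 # analogue of Psi_t: inverse information
est = np.zeros(2)

x = 0.0
for k in range(2000):
    u = rng.normal()                   # persistently exciting input (dither's role)
    psi = np.array([x, u])
    x_next = theta @ psi + 0.1 * rng.normal()
    # RLS: est <- est + P psi (x_next - psi^T est) / (1 + psi^T P psi)
    gain = P @ psi / (1.0 + psi @ P @ psi)
    est = est + gain * (x_next - psi @ est)
    P = P - np.outer(gain, psi @ P)
    x = x_next

print(est)                             # approaches the true (a, b)
```

The dither term in the control law below plays the same role as the random input here: it keeps the regressor exciting so the estimates remain strongly consistent.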

Part VI SAC NCE Control Algorithm
Solve the NCE equations generating $x^*(t, F_\zeta)$. Note: $F_\zeta$ known.
Agent $A_i$'s NCE control:
$\hat\Pi_{i,t}$, Riccati equation: $\hat A_{i,t}^T \hat\Pi_{i,t} + \hat\Pi_{i,t} \hat A_{i,t} - \hat\Pi_{i,t} \hat B_{i,t} R^{-1} \hat B_{i,t}^T \hat\Pi_{i,t} + Q = 0$
$s_i(t, \hat\theta_{i,t})$, mass control offset: $\dfrac{d s_i(t, \hat\theta_{i,t})}{dt} = \left( -\hat A_{i,t}^T + \hat\Pi_i \hat B_{i,t} R^{-1} \hat B_{i,t}^T \right) s_i(t, \hat\theta_{i,t}) + x^*(t, F_\zeta)$
The control law from Certainty Equivalence Adaptive Control:
$\hat u_i(t) = -R^{-1} \hat B_{i,t}^T \left( \hat\Pi_{i,t} x_i(t) + s_i(t, \hat\theta_{i,t}) \right) + \xi_k \left[ \epsilon_i(t) - \epsilon_i(k) \right]$
Dither weighting: $\xi_k^2 = \frac{\log k}{k}$, $k \ge 1$; $\epsilon_i(t)$ = Wiener process.
Zero LRA cost dither, after (Duncan, Guo, Pasik-Duncan; 1999).

Part VI SAC NCE Theorem (Self-Identification Case)
Theorem (Kizilkale & PEC; after (LRA-NCE) Li & Zhang, 2008). The NCE Adaptive Control Algorithm generates a set of controls $\hat U^N_{nce} = \{\hat u_i;\ 1 \le i \le N\}$, $1 \le N < \infty$, such that:
(i) $\hat\theta_{i,t}(x_i^t) \to \theta_i$ w.p.1 as $t \to \infty$, $1 \le i \le N$ (strong consistency);
(ii) all agent systems $S(A_i)$, $1 \le i \le N$, are LRA $L^2$ stable w.p.1;
(iii) $\{\hat U^N_{nce};\ 1 \le N < \infty\}$ yields a (strong) $\varepsilon$-Nash equilibrium for all $\varepsilon$;
(iv) moreover, $J_i(\hat u_i, \hat u_{-i}) = J_i(u^0_i, u^0_{-i})$ w.p.1, $1 \le i \le N$.

Part VI SAC NCE (Self + Population Identification Case)
Theorem (AK & PEC, 2009). Assume each agent $A_i$:
(i) observes a random subset $\mathrm{Obs}_i(N)$ of the total population $N$ such that $|\mathrm{Obs}_i(N)|/N \to 0$ as $N \to \infty$;
(ii) estimates its own parameter $\theta_i$ via WLS;
(iii) estimates the population distribution parameter $\hat\zeta_i$ via RMLE;
(iv) computes $u^0_i(\hat\theta, \hat\zeta)$ via the extended NCE equations + dither, where $\bar x(t) = \int_A \bar x_\theta(t)\,dF_{\hat\zeta_i}(\theta)$.

Part VI SAC NCE (Self + Population Identification Case)
Theorem (AK & PEC, 2009), ctn. Under reasonable conditions on the population dynamical parameter distribution $F_\zeta(\cdot)$, as $t \to \infty$ and $N \to \infty$:
(i) $\hat\zeta_i(t) \to \zeta \in P$ a.s., and hence $F_{\hat\zeta_i(t)}(\cdot) \to F_\zeta(\cdot)$ a.s. (weak convergence on $P$), $1 \le i \le N$;
(ii) $\hat\theta_i(t, N) \to \theta_i$ a.s., $1 \le i \le N$;
and the set of controls $\{\hat U^N_{nce};\ 1 \le N < \infty\}$ is such that:
(iii) each $S(A_i)$, $1 \le i \le N$, is an LRA $L^2$ stable system;
(iv) $\{\hat U^N_{nce};\ 1 \le N < \infty\}$ yields a (strong) $\varepsilon$-Nash equilibrium for all $\varepsilon$;
(v) moreover, $J_i(\hat u_i, \hat u_{-i}) = J_i(u^0_i, u^0_{-i})$ w.p.1, $1 \le i \le N$.

Part VI SAC NCE Animation
400 agents; system matrices $\{A_k\}, \{B_k\}$, $1 \le k \le 400$:
$A = \begin{bmatrix} 0.2 + a_{11} & 2 + a_{12} \\ 1 + a_{21} & 0 + a_{22} \end{bmatrix}$, $B = \begin{bmatrix} 1 + b_1 \\ 0 + b_2 \end{bmatrix}$
Population dynamical parameter distribution: the $a_{ij}$'s and $b_i$'s are independent, with $a_{ij} \sim N(0, 0.5)$ and $b_i \sim N(0, 0.5)$.
Population distribution parameters: $\bar a_{11} = 0.2$, $\sigma^2_{a_{11}} = 0.5$, $\bar b_{11} = 1$, $\sigma^2_{b_{11}} = 0.5$, etc.

Part VI SAC NCE Animation
All agents perform individual parameter and population distribution parameter estimation; each of the 400 agents observes the outputs and control inputs of its own 20 randomly chosen agents.
[Plots: 3-D trajectory; histogram of real and estimated values of $B$; norm trajectory of the $B$ estimation error; Animation of Trajectories]

Summary
NCE Theory solves a class of decentralized decision-making problems with many competing agents; asymptotic Nash equilibria are generated by the NCE equations.
Key intuition: a single agent's control = feedback of the stochastic local (rough) state + feedback of the deterministic global (smooth) system behaviour.
NCE Theory extends to (i) localized problems, (ii) stochastic adaptive control, (iii) leader-follower systems.

Future Directions
- Further development of Minyi Huang's large and small players extension of NCE Theory
- Egoists and altruists version of NCE Theory
- Mean Field stochastic control of nonlinear (McKean-Vlasov) systems
- Extension of SAC MF Theory in richer game theory contexts
- Development of MF Theory towards economic and biological applications
- Development of large scale cybernetics: systems and control theory for competitive and cooperative systems

Thank You!