
A Study of Synchronization and Group Cooperation Using Partial Contraction Theory

Jean-Jacques E. Slotine^{1,2} and Wei Wang^{1,3}

1 Nonlinear Systems Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, 02139, USA
2 jjs@mit.edu
3 wangwei@mit.edu

1 Introduction

Synchronization, collective behavior, and group cooperation have been the object of extensive recent research. A fundamental understanding of aggregate motions in the natural world, such as bird flocks, fish schools, animal herds, or bee swarms, would greatly help in achieving desired collective behaviors of artificial multi-agent systems, such as vehicles with distributed cooperative control rules. In [38], Reynolds published his well-known computer model of "boids", successfully forming an animated flock using three local rules: collision avoidance, velocity matching, and flock centering. Motivated by the growth of colonies of bacteria, Vicsek et al. [55] proposed a similar discrete-time model which realizes heading matching using information only from neighbors. Vicsek's model was later analyzed analytically [16, 52, 53]. Models in continuous time [1, 22, 32, 33, 62] and combinations of Reynolds' three rules [21, 34, 35, 49, 50] were also studied. Related questions can also be found e.g. in [3, 18, 20, 42], in oscillator synchronization [48], as well as in physics in the study of lasers [39] or of Bose-Einstein condensation [17]. This article provides a theoretical analysis tool, partial contraction theory [62], for the study of group cooperation, and especially of group agreement and synchronization. Partial contraction (or meta-contraction) theory is a straightforward but very general application of contraction theory, a recent nonlinear system analysis tool based on studying convergence between two arbitrary system trajectories [26, 27, 45, 46].
Actually, partial contraction extends contraction theory in that, while the latter is concerned with convergence to a unique trajectory, the former can describe convergence to particular properties or manifolds [46, 62]. In particular, partial contraction theory can be used to easily derive sufficient conditions for coupled nonlinear networks to reach group agreement or synchronize. The article is organized as follows. Section 2 briefly reviews contraction theory and two important combination properties of contracting systems, and

introduces partial contraction theory. The collective behavior of coupled networks of identical dynamic elements is studied in Section 3. Synchronization conditions for general diffusion-coupled networks are derived, and then extended to networks with switching topologies or including group leaders. Adaptive versions are also derived. Section 4 studies a simplified model of flocking in continuous time. Concluding remarks are offered in Section 5.

2 Contraction Theory and Partial Contraction

2.1 Contraction Theory

Consider a nonlinear system

  \dot{x} = f(x, t)   (1)

where f is an m \times 1 vector function and x is an m \times 1 state vector. Assuming f(x,t) is continuously differentiable, we have

  \frac{d}{dt}(\delta x^T \delta x) = 2\,\delta x^T \delta\dot{x} = 2\,\delta x^T \frac{\partial f}{\partial x}\,\delta x \le 2\,\lambda_{\max}\,\delta x^T \delta x

where \delta x is a virtual displacement between two neighboring solution trajectories, and \lambda_{\max}(x,t) is the largest eigenvalue of the symmetric part of the Jacobian J = \frac{\partial f}{\partial x}. Hence, if \lambda_{\max}(x,t) is uniformly strictly negative, any infinitesimal length \|\delta x\| converges exponentially to zero. By path integration at fixed time, this implies in turn that all the solutions of the system (1) converge exponentially to a single trajectory, independently of the initial conditions.

More generally, consider a coordinate transformation \delta z = \Theta\,\delta x, where \Theta(x,t) is a uniformly invertible square matrix. One has

  \frac{d}{dt}(\delta z^T \delta z) = 2\,\delta z^T \delta\dot{z} = 2\,\delta z^T (\dot{\Theta} + \Theta \tfrac{\partial f}{\partial x})\,\Theta^{-1}\,\delta z

so that exponential convergence of \delta z to zero is guaranteed if the generalized Jacobian matrix

  F = (\dot{\Theta} + \Theta \tfrac{\partial f}{\partial x})\,\Theta^{-1}

is uniformly negative definite. Again, this implies in turn that all the solutions of the original system (1) converge exponentially to a single trajectory, independently of the initial conditions.
By convention, the system (1) is then called contracting, f(x,t) is called a contracting function, and the absolute value of the largest eigenvalue of the symmetric part of F is called the system's contraction rate with respect to the uniformly positive definite metric \Theta^T \Theta.
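As a quick numerical illustration (added here, not part of the original text), the contraction condition can be checked directly: for a linear time-varying system \dot{x} = Ax + u(t) whose matrix A has symmetric part -I, the distance between any two trajectories shrinks like e^{-t}, independently of the shared input u(t). A minimal Python/NumPy sketch:

```python
import numpy as np

# A has symmetric part -I (so the contraction rate is 1); the skew-symmetric
# part rotates trajectories but does not affect contraction.
A = np.array([[-1.0, 2.0],
              [-2.0, -1.0]])
rate = np.linalg.eigvalsh((A + A.T) / 2).max()   # lambda_max of the symmetric part

def f(x, t):
    # dx/dt = A x + u(t); u(t) is shared by all trajectories, so it cancels
    # in the virtual displacement dynamics.
    return A @ x + np.array([np.sin(t), np.cos(t)])

# Integrate two trajectories from different initial conditions (Euler).
dt, T = 1e-3, 5.0
x = np.array([3.0, -1.0])
y = np.array([-2.0, 4.0])
d0 = np.linalg.norm(x - y)
for k in range(int(T / dt)):
    t = k * dt
    x = x + dt * f(x, t)
    y = y + dt * f(y, t)

shrink = np.linalg.norm(x - y) / d0
print(rate, shrink, np.exp(rate * T))   # shrink is close to exp(rate * T)
```

Here \lambda_{\max} = -1 uniformly, so the distance ratio after T = 5 is close to e^{-5}, regardless of the initial conditions.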

2.2 Combinations of Contracting Systems

One of the main features of contraction is that it is automatically preserved through a variety of system combinations. Below we list two such combinations which are closely related to the study of group cooperation.

Hierarchical Combination

Consider a smooth virtual dynamics of the form

  \frac{d}{dt}\begin{bmatrix}\delta z_1\\ \delta z_2\end{bmatrix} = \begin{bmatrix}F_{11} & 0\\ F_{21} & F_{22}\end{bmatrix}\begin{bmatrix}\delta z_1\\ \delta z_2\end{bmatrix}

where F_{11} and F_{22} are uniformly negative definite. The overall system is contracting as long as F_{21} is bounded. By recursion, the result can be extended to combinations of arbitrary size [26].

Example 2.1: Consider a network of coupled systems with a chain structure (or more generally, a tree structure)

  \dot{x}_0 = f(x_0, t)
  \dot{x}_1 = f(x_1, t) + u(x_0) - u(x_1)
  \vdots
  \dot{x}_n = f(x_n, t) + u(x_{n-1}) - u(x_n)

where x_i \in \mathbb{R}^m is the state vector, f(x_i, t) the dynamics of the uncoupled system, and u(x_{i-1}) - u(x_i) the coupling force. If the function f - u is contracting, the whole network will exponentially reach the agreement

  x_0 = x_1 = \cdots = x_n

regardless of the initial conditions.

Feedback Combination

Consider two contracting systems and an arbitrary feedback connection between them. The overall virtual dynamics can be written as

  \frac{d}{dt}\begin{bmatrix}\delta z_1\\ \delta z_2\end{bmatrix} = F \begin{bmatrix}\delta z_1\\ \delta z_2\end{bmatrix}

with the symmetric part of the generalized Jacobian in the form

  F_s = \frac{1}{2}(F + F^T) = \begin{bmatrix}F_{1s} & G\\ G^T & F_{2s}\end{bmatrix}

(the subscript s denotes the symmetric part of a matrix). By hypothesis the matrices F_{1s} and F_{2s} are uniformly negative definite. Then F is uniformly negative definite if and only if ([14], page 472)

  F_{2s} < G^T F_{1s}^{-1} G

Thus, a sufficient condition for the overall system to be contracting is that

  \lambda(F_{1s})\,\lambda(F_{2s}) > \sigma^2(G)   uniformly, \forall t \ge 0   (2)

where \lambda(F_{is}) is the contraction rate of F_{is} and \sigma(G) is the largest singular value of G. Indeed, condition (2) is equivalent to

  \lambda_{\max}(F_{2s}) < \lambda_{\min}(F_{1s}^{-1})\,\sigma^2(G)

and, for an arbitrary nonzero vector v,

  v^T F_{2s} v \le \lambda_{\max}(F_{2s})\, v^T v < \lambda_{\min}(F_{1s}^{-1})\,\sigma^2(G)\, v^T v \le \lambda_{\min}(F_{1s}^{-1})\, v^T G^T G v \le v^T G^T F_{1s}^{-1} G v

Again the result can be applied recursively to larger combinations.

2.3 Partial Contraction Theory

Theorem 1. Consider a nonlinear system of the form

  \dot{x} = f(x, x, t)

and assume that the auxiliary system

  \dot{y} = f(y, x, t)

is contracting with respect to y. If a particular solution of the auxiliary y-system verifies a smooth specific property, then all trajectories of the original x-system verify this property exponentially. The original system is said to be partially contracting.

Proof: The virtual, observer-like y-system has two particular solutions, namely y(t) = x(t) for all t \ge 0 and the solution with the specific property. Since the y-system is contracting, all its solutions converge exponentially to one another, and hence x(t) verifies the specific property exponentially.

Note that contraction may be trivially regarded as a particular case of partial contraction. Also, consider for instance an original system of the form

  \dot{x} = c(x, t) + d(x, t)

where the function c is contracting in a constant metric. The auxiliary contracting system may then be constructed as

  \dot{y} = c(y, t) + d(x, t)   (3)

and the specific property of interest may consist, e.g., of a relationship between state variables.
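Example 2.1 can be illustrated numerically. The sketch below (an illustration added here, not from the original) assumes the scalar dynamics f(x,t) = sin x + cos t and the coupling u(x) = kx with k = 2, so that f - u has Jacobian cos x - k \le -1 and is contracting; the chain then reaches agreement with its head x_0:

```python
import numpy as np

k = 2.0                                      # coupling gain; f - u has Jacobian cos(x) - k <= -1
f = lambda x, t: np.sin(x) + np.cos(t)
u = lambda x: k * x

dt, T = 1e-3, 30.0
x = np.array([2.0, -1.0, 0.5, 3.0, -2.5])    # x[0] is the head of the chain
for step in range(int(T / dt)):
    t = step * dt
    dx = f(x, t)
    dx[1:] += u(x[:-1]) - u(x[1:])           # chain coupling u(x_{i-1}) - u(x_i)
    x = x + dt * dx

spread = np.ptp(x)                           # all elements agree with the head
print(x, spread)
```

The hierarchical structure is visible in the transient: each element converges only after the one above it in the chain has, but the final spread is negligible.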

3 Synchronization in Coupled Networks

3.1 General Structure

Consider a coupled network containing n elements

  \dot{x}_i = f(x_i, t) + \sum_{j \in N_i} u_{ji}(x_j, x_i, x, t),   i = 1, \ldots, n   (4)

where x = [x_1, \ldots, x_n]^T, N_i denotes the set of indices of the active links of element i, and u_{ji} is the coupling force from element j to element i. Assume more specifically that the couplings are bidirectional, symmetric, and of the form

  u_{ji} = u_{ji}(x_j - x_i, x, t)

where u_{ji}(0, x, t) = 0 for all i, j, x, t, and

  K_{ji} = \frac{\partial u_{ji}(x_j - x_i, x, t)}{\partial (x_j - x_i)} > 0   uniformly   (5)

with K_{ji} = K_{ij}. For instance, one may have u_{ji} = (C_{ji}(t) + B_{ji}(t)\,\|x_j - x_i\|)(x_j - x_i) with C_{ji} = C_{ij} > 0 uniformly and B_{ji} = B_{ij} \ge 0. The dynamics (4) is then equivalent to

  \dot{x}_i = f(x_i, t) + \sum_{j \in N_i} u_{ji}(x_j - x_i, x, t) - \frac{K_0}{n}\sum_{j=1}^n x_j + \frac{K_0}{n}\sum_{j=1}^n x_j

which leads to the auxiliary system

  \dot{y}_i = f(y_i, t) + \sum_{j \in N_i} u_{ji}(y_j - y_i, x, t) - \frac{K_0}{n}\sum_{j=1}^n y_j + \frac{K_0}{n}\sum_{j=1}^n x_j(t)   (6)

We learn from Theorem 1 that, if the auxiliary system (6) is contracting, all system trajectories will verify the property x_1 = \cdots = x_n exponentially, since this property is verified by a particular solution of (6). Thus we compute the symmetric part of its Jacobian matrix,

  J_s = I_n^{J_{is}} - \sum_{(i,j)\in N} T_n^{K_{ijs}} - U_n^{K_0/n}

where I_n^{J_{is}} = \mathrm{diag}(J_{1s}, \ldots, J_{ns}) is block-diagonal, built from the symmetric parts of the uncoupled Jacobians J_i = \frac{\partial f(x_i, t)}{\partial x_i},

the n \times n block matrix U_n^{K_0/n} has all of its blocks equal to K_0/n,

  U_n^{K_0/n} = \begin{bmatrix} \frac{K_0}{n} & \cdots & \frac{K_0}{n} \\ \vdots & & \vdots \\ \frac{K_0}{n} & \cdots & \frac{K_0}{n} \end{bmatrix}

the matrix T_n^{K_{ijs}} has

  (T_n^{K_{ijs}})_{ii} = (T_n^{K_{ijs}})_{jj} = K_{ijs}, \qquad (T_n^{K_{ijs}})_{ij} = (T_n^{K_{ijs}})_{ji} = -K_{ijs}

and the set N includes all the active links in the network. Note that all the blocks of T_n^{K_{ijs}}, except those at the four intersection points of the ith and jth rows with the ith and jth columns, are zero. If we view the network as a graph, \sum_{(i,j)\in N} T_n^{K_{ijs}} is the generalized (or weighted) Laplacian matrix.

Lemma 1. Define

  J_r = -\sum_{(i,j)\in N} T_n^{K_{ijs}} - U_n^{K_0/n}

If K_0 > 0, K_{ij} > 0 for all (i,j) \in N, and the network is connected, then J_r < 0.

Proof: Given an arbitrary nonzero vector v = [v_1, \ldots, v_n]^T, we have

  v^T J_r v = -\sum_{(i,j)\in N} (v_i - v_j)^T K_{ijs} (v_i - v_j) - \Big(\sum_{i=1}^n v_i\Big)^T \frac{K_0}{n} \Big(\sum_{i=1}^n v_i\Big) < 0

for a connected network. Furthermore, the largest eigenvalue of J_r can be computed from the Courant-Fischer Theorem [14],

  \lambda_{\max}(J_r) = \max_{\|v\|=1} v^T J_r v = -\min_{\|v\|=1,\ \sum_i v_i = 0} v^T \Big(\sum_{(i,j)\in N} T_n^{K_{ijs}}\Big) v = -\lambda_{m+1}\Big(\sum_{(i,j)\in N} T_n^{K_{ijs}}\Big)

if we choose K_0 large enough [62]. For m = 1, the eigenvalue \lambda_2(\sum_{(i,j)\in N} T_n^1) is a fundamental quantity in graph theory named the algebraic connectivity [5, 10, 29], which is zero if and only if the graph is not connected. The above results imply

Theorem 2. All the elements within a generally coupled network (4), (5) will reach group agreement exponentially if the network is connected, \lambda_{\max}(J_{is}) is upper bounded, and the coupling strengths are strong enough. More specifically, if

  \lambda_{m+1}\Big(\sum_{(i,j)\in N} T_n^{K_{ijs}}\Big) > \max_i \lambda_{\max}(J_{is})   uniformly   (7)

then the auxiliary system (6) is contracting.

Remarks:

The analysis carries over straightforwardly to other kinds of couplings. For instance, consider an all-to-all coupled network with n elements

  \dot{x}_i = f(x_i, t) + \sum_{j=1}^n \big(u(x_j, t) - u(x_i, t)\big),   i = 1, 2, \ldots, n

The contraction of f - nu guarantees convergence of the whole network, which can be proved using the auxiliary system (3) with

  c = [c_1, \ldots, c_n]^T   where   c_i = f(x_i, t) - n\,u(x_i, t)
  d = [d_1, \ldots, d_n]^T   where   d_i = \sum_{j=1}^n u(x_j, t)

The bidirectional coupling assumption on each link is not always necessary. Consider a network with a one-way ring structure and linear diffusion coupling

  \dot{x}_i = f(x_i, t) + K(x_{i-1} - x_i),   i = 1, \ldots, n

where by convention i - 1 = n for i = 1. Assume that the coupling gain K = K^T > 0 is identical for all links. Then

  J_r = -\frac{1}{2} T_n^K - U_n^{K_0/n}

is negative definite, where \frac{1}{2}T_n^K is the symmetric part of the ring's weighted Laplacian. Since

  \lambda_{m+1}\big(\tfrac{1}{2} T_n^K\big) = \tfrac{1}{2}\lambda_{\min}(K)\,\lambda_2(T_n^1) = \lambda_{\min}(K)\big(1 - \cos\tfrac{2\pi}{n}\big)

the threshold to reach synchrony exponentially is

  \lambda_{\min}(K)\big(1 - \cos\tfrac{2\pi}{n}\big) > \max_i \lambda_{\max}(J_{is})   uniformly

Thus, Theorem 2 can be extended to networks whose links are either bidirectional with K_{ji} = K_{ij}, or unidirectional but formed into rings with K^T = K (where K is identical within the same ring but may differ between different rings).

If the coupling gain K_{ij} in (5) is only positive semi-definite, we have to add extra restrictions on the uncoupled system dynamics to guarantee globally stable convergence. Assume

  K_{ijs} = \begin{bmatrix} \bar{K}_{ijs} & 0 \\ 0 & 0 \end{bmatrix}

where \bar{K}_{ijs} is positive definite and has a common dimension for all links. We can divide the uncoupled dynamics J_{is} into the form

  J_{is} = \begin{bmatrix} J_{11s} & J_{12} \\ J_{12}^T & J_{22s} \end{bmatrix}

with each block having the same dimension as the corresponding block of K_{ijs}. A sufficient condition to guarantee globally stable convergence beyond a coupling strength threshold is that, for all i, J_{22s} is contracting and \lambda_{\max}(J_{11s}) and \sigma_{\max}(J_{12}) are both bounded [62].

Example 3.1: The FitzHugh-Nagumo model [8, 30, 31] is a classical mathematical model for spiking neurons, based on a simplification of the original Hodgkin-Huxley model [12]. It is given by

  \dot{v} = c\,(v + w - \tfrac{1}{3}v^3 + I)        1 - \tfrac{2}{3}b < a < 1
  \dot{w} = -\tfrac{1}{c}\,(v - a + bw)             0 < b < 1,\ b < c^2   (8)

where v is directly related to the membrane potential, w is responsible for accommodation and refractoriness, and I corresponds to the stimulating current. Consider a diffusion-coupled network with n identical FitzHugh-Nagumo neurons

  \dot{v}_i = c\,(v_i + w_i - \tfrac{1}{3}v_i^3 + I) + \sum_{j \in N_i(t)} k_{ji}(v_j - v_i)
  \dot{w}_i = -\tfrac{1}{c}\,(v_i - a + b\,w_i),   i = 1, \ldots, n   (9)

Defining a transformation matrix \Theta = \begin{bmatrix} 1 & 0 \\ 0 & c \end{bmatrix}, which leaves the coupling gains unchanged, yields the generalized Jacobian of the uncoupled dynamics

  J_i = \begin{bmatrix} c\,(1 - v_i^2) & 1 \\ -1 & -\frac{b}{c} \end{bmatrix}

Thus the whole network will synchronize exponentially if the coupling strengths are strong enough,

  \lambda_2\Big(\sum_{(i,j)\in N} T_n^{k_{ij}}\Big) > c

The definition of the neighbor sets N_i is quite flexible. While it may be based simply on position proximity (neighbors within a certain distance of each node), it can be chosen to reflect many other factors. Gestalt psychology [41], for instance, suggests that in human visual perception, grouping occurs not only by proximity, but also by similarity, closure, continuity, common region, and connectedness. The coupling strengths can also be specified flexibly. For instance, using Schoenberg/Micchelli's theorems on positive definite functions [24], they can be chosen as smooth functions based on sums of gaussians.
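Example 3.1 can be explored by direct simulation. Below is a minimal Python/NumPy sketch (an illustration added here, with a hypothetical network of n = 3 all-to-all coupled neurons and gain k = 5, not the article's own simulation): for the all-to-all graph with identical gain k, \lambda_2 of the coupling Laplacian is nk = 15 > c = 8, so the neurons synchronize while continuing to fire:

```python
import numpy as np

a, b, c, I = 0.7, 0.8, 8.0, -1.4            # parameters as in the article's examples
n, k = 3, 5.0                                # all-to-all coupling; lambda_2 = n*k = 15 > c

def step(v, w, dt):
    # FitzHugh-Nagumo dynamics (8)-(9), diffusion coupling in v only
    dv = c * (v + w - v**3 / 3.0 + I) + k * (v.sum() - n * v)
    dw = -(v - a + b * w) / c
    return v + dt * dv, w + dt * dw

rng = np.random.default_rng(0)
v = rng.uniform(-2, 2, n)
w = rng.uniform(-1, 1, n)
dt = 1e-3
for _ in range(int(100.0 / dt)):             # let the network synchronize
    v, w = step(v, w, dt)

# Record one neuron over a further window to check the network still fires
vs = []
for _ in range(int(30.0 / dt)):
    v, w = step(v, w, dt)
    vs.append(v[0])

print(np.ptp(v), np.ptp(w))                  # small spreads: synchronized
print(np.ptp(vs))                            # large swing: sustained oscillation
```

Note that the coupling acts on v only (a positive semi-definite gain), matching the discussion above: J_{22s} = -b/c is contracting and the off-diagonal terms are bounded, so strong v-coupling suffices.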
3.2 Coupled Network with Switching Topology

The results above can be extended to analyze the collective behavior of cooperating moving units with local couplings, where the network structure changes abruptly and asynchronously. Consider such a network

  \dot{x}_i = f(x_i, t) + \sum_{j \in N_i(t)} u_{ji}(x_j, x_i, x, t),   i = 1, \ldots, n

where N_i(t) denotes the set of the active links associated with element i at time t. Apply partial contraction analysis to each time interval during which the link set N(t) is fixed. If

  \lambda_{m+1}\Big(\sum_{(i,j)\in N(t)} T_n^{K_{ijs}}\Big) > \max_i \lambda_{\max}(J_{is})   uniformly, \forall N(t)   (10)

the auxiliary system (6) is always contracting, since \delta z^T \delta z with z = [y_1, \ldots, y_n]^T is continuous in time and upper bounded by a vanishing exponential (though its time derivative can be discontinuous at discrete instants). Since in each time interval the particular solutions of the auxiliary system verify y_1 = \cdots = y_n, a region of state-space which does not depend on the current topology, the n elements reach the agreement x_1 = \cdots = x_n exponentially.

3.3 Leader-Followers Network

The previous results can also be extended to analyze a coupled network with an additional leader. Consider such a system

  \dot{x}_0 = f(x_0, t)
  \dot{x}_i = f(x_i, t) + \sum_{j \in N_i} K_{ji}(x_j - x_i) + \gamma_i K_{0i}(x_0 - x_i),   i = 1, \ldots, n

where x_0 is the state of the group leader, \gamma_i = 0 or 1, and N_i does not include the links with x_0. Since the dynamics of x_0 is independent, we can treat it as an external input to the rest of the system, whose Jacobian matrix has the symmetric part

  J_s = I_n^{J_{is}} - \sum_{(i,j)\in N} T_n^{K_{ijs}} - I_n^{\gamma_i K_{0is}}

where I_n^{\gamma_i K_{0is}} = \mathrm{diag}(\gamma_1 K_{01s}, \ldots, \gamma_n K_{0ns}). The matrix

  J_r = -\sum_{(i,j)\in N} T_n^{K_{ijs}} - I_n^{\gamma_i K_{0is}}

is negative definite if the augmented network with n + 1 elements is connected. In fact, \forall v \ne 0,

  v^T J_r v = -\sum_{(i,j)\in N} (v_i - v_j)^T K_{ijs} (v_i - v_j) - \sum_{i=1}^n \gamma_i\, v_i^T K_{0is}\, v_i < 0

Thus the system [x_1, \ldots, x_n]^T is contracting if the coupling strengths are strong enough,

  \lambda_{\min}\Big(\sum_{(i,j)\in N} T_n^{K_{ijs}} + I_n^{\gamma_i K_{0is}}\Big) > \max_i \lambda_{\max}(J_{is})   uniformly   (11)

Under this condition, the followers converge to the particular solution x_1 = \cdots = x_n = x_0 exponentially, regardless of the initial conditions. This result can be viewed as a generalization of Example 2.1. Note that the condition for leader-following is that the whole group of n + 1 elements is connected. Thus the n followers x_1, \ldots, x_n could be connected together, or there could be isolated subgroups all connected to the leader. Also note that the network structure of a leader-followers group does not have to be fixed over time, and a result similar to that of Section 3.2 can be derived.

Example 3.2: As a very simple example, let us construct a leader-followers network similar to that in [2], but which is able to capture a "many-are-equal" moment much faster. In this version, both the leader and the followers are FitzHugh-Nagumo neurons as in (8). The leader is coupled by linear diffusion to each of the followers, with no direct coupling between the followers themselves. With a strong coupling gain, the followers synchronize only if the external inputs I are identical. We can define the system output accordingly to capture the moment when this condition becomes true, as shown in Figure 1.

Fig. 1. Simulation result of Example 3.2. The leader and seventeen followers are all FitzHugh-Nagumo neurons with parameters a = 0.7, b = 0.8, c = 8. The coupling gain between the leader and the followers is \kappa = 3, and I = -1.4 for the leader. The first plot shows \sum_i \max(0, v_i) as a function of time, and the second shows the input currents I to the followers, which vary from -0.8 to -1.3.

In fact, a neural network with a simple leader-followers structure can be used to emulate many other brain-like computations, such as k-winners-take-all [63]. In all these examples the leading node can also be replaced by a coordinated group of local leaders.
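The leader-followers structure can be sketched with scalar dynamics (an added toy example, not the article's neuron simulation): a leader running \dot{x}_0 = f(x_0, t) = cos t, and followers coupled only to the leader (\gamma_i = 1, N_i empty), which converge to the leader's trajectory:

```python
import numpy as np

k = 4.0                                     # leader-to-follower gain K_0i
f = lambda x, t: np.cos(t)                  # common uncoupled dynamics (state-independent)

dt, T = 1e-3, 10.0
x0 = 0.5                                    # leader state
x = np.array([3.0, -2.0, 1.5])              # followers, coupled only to the leader
for step in range(int(T / dt)):
    t = step * dt
    x = x + dt * (f(x, t) + k * (x0 - x))   # follower dynamics with leader coupling
    x0 = x0 + dt * f(x0, t)                 # independent leader dynamics

err = np.abs(x - x0).max()
print(x0, x, err)                           # followers have converged to the leader
```

Here the error dynamics is exactly \dot{e}_i = -k\,e_i, the simplest instance of condition (11): \lambda_{\min}(kI) = k > 0 = \lambda_{\max}(J_{is}).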

Comparing conditions (7) and (11) shows that, predictably, the existence of an additional leader does not always help the followers' network to reach agreement. But it does so if

  \lambda_{\min}\Big(\sum_{(i,j)\in N} T_n^{K_{ijs}} + I_n^{\gamma_i K_{0is}}\Big) > \lambda_{m+1}\Big(\sum_{(i,j)\in N} T_n^{K_{ijs}}\Big)

Consider for instance the case when the leader has identical connections to all the other elements: \forall i, K_{0i} = kI with k > 0. Then

  \lambda_{\min}\Big(\sum_{(i,j)\in N} T_n^{K_{ijs}} + I_n^{\gamma_i K_{0is}}\Big) = \min_{\|v\|=1} v^T \Big(\sum_{(i,j)\in N} T_n^{K_{ijs}} + kI\Big) v = k

This means the connections between the leader and the followers do promote convergence within the followers' network if

  \lambda_{m+1}\Big(\sum_{(i,j)\in N} T_n^{K_{ijs}}\Big) < k

which is more likely to happen in a network with less connectivity. This is consistent with the observation in the simulations of [64], where two groups of cells are coupled together and, once one group synchronizes, inputs from these cells facilitate synchronization in the other group; the synchronized group as a whole plays the role of the leader in our analysis.

Example 3.3: To let a group synchronize faster, [64] suggests increasing its interior connection weights. In fact, a similar phenomenon can be predicted in a network with non-uniform connectivity, where synchronization will propagate from high-density areas to low-density areas. Consider for illustration two groups of FitzHugh-Nagumo neurons, the first composed of eight neurons coupled all-to-all, and surrounded by a second group of sixteen neurons coupled as a one-way ring, with every other neuron in the second group connected bidirectionally to a distinct neuron of the first group. The system dynamics follows equation (9) with a coupling gain k > 0 identical throughout the whole network. Figure 2 shows simulation results, in which we can observe an initially significant phase lag between the two groups. Note that the second group alone would not synchronize without the couplings from the first group.
However, the connectivity of the followers' network always helps the following process, as can be seen by applying Weyl's Theorem [14]:

  \lambda_i\Big(\sum_{(i,j)\in N} T_n^{K_{ijs}} + I_n^{\gamma_i K_{0is}}\Big) \ge \lambda_i\big(I_n^{\gamma_i K_{0is}}\big),   i = 1, \ldots, n

This result can be used, e.g., to modify Example 3.2 by adding couplings between local neighbors according to similarity (see Section 5), so as to react even faster.
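The two spectral facts above, namely \lambda_{\min}(L + kI) = k for a leader connected to every follower, and the Weyl bound \lambda_i(L + D) \ge \lambda_i(D), are easy to confirm numerically (an added check, using a hypothetical four-node chain of followers):

```python
import numpy as np

def laplacian(n, edges, k=1.0):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += k; L[j, j] += k
        L[i, j] -= k; L[j, i] -= k
    return L

L = laplacian(4, [(0, 1), (1, 2), (2, 3)])         # followers' network: open chain
lam2 = np.sort(np.linalg.eigvalsh(L))[1]           # algebraic connectivity, ~0.586

k = 0.3                                            # leader gain, identical for all followers
lam_min = np.linalg.eigvalsh(L + k * np.eye(4)).min()
print(lam_min)                                     # equals k: lambda_min(L + kI) = k

# Weyl: adding the positive semi-definite L can only raise each eigenvalue of D
D = np.diag([0.5, 0.0, 0.0, 0.2])                  # gamma_i K_0is: leader linked to nodes 0 and 3
weyl_ok = np.all(np.linalg.eigvalsh(L + D) >= np.linalg.eigvalsh(D) - 1e-12)
print(weyl_ok)

print(lam2 < k)                                    # here k < lambda_2: the leader link alone
                                                   # does not beat the followers' own connectivity
```

Since k = 0.3 < \lambda_2 \approx 0.586 in this instance, the all-to-all leader does not improve on the followers' own connectivity, illustrating the comparison of (7) and (11) above.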

Fig. 2. Plots of \sum_i \max(0, v_i) for two coupled groups of FitzHugh-Nagumo neurons. The parameters are a = 0.7, b = 0.8, c = 8, I = -1.4, \kappa = 0.5. Initial conditions are chosen randomly for each neuron.

3.4 Adaptation

When the system dynamics is unknown to a given node, synchronization can be preserved through parameter adaptation. Consider again the coupled system (4), and for simplicity diffusive couplings. Assume that a constant parameter vector a is unknown to some particular node \sigma, but estimated there as \hat{a}(t), with

  f(x_\sigma, \hat{a}, t) = f(x_\sigma, a, t) + W(x_\sigma, t)\,\tilde{a}

where \tilde{a} = \hat{a} - a. Using the adaptation law

  \dot{\hat{a}} = P\, W^T(x_\sigma, t) \sum_{j \in N_\sigma} K_{j\sigma}(x_j - x_\sigma)

with constant P = P^T > 0, the system synchronizes asymptotically. Indeed, consider the Lyapunov-like function

  V = \frac{1}{2}\,\big(x^T L_K\, x + \tilde{a}^T P^{-1} \tilde{a}\big)

where L_K is the weighted Laplacian matrix,

  L_K = D\,K\,D^T = \sum_{(i,j)\in N} T_n^{K_{ij}}

With \tau the total number of links in the network, K \in \mathbb{R}^{\tau \times \tau} (in block form) is a block-diagonal matrix whose kth diagonal block [K]_k = K_{ij} corresponds to the coupling strength of the kth link, which is assumed to be symmetric positive definite, and D \in \mathbb{R}^{n \times \tau} (in block form) is the incidence matrix [10] of the corresponding undirected graph. Simple calculations show

that the terms involving W cancel by construction of the adaptation law, so that

  \dot{V} = x^T L_K\, \dot{x} + \tilde{a}^T P^{-1} \dot{\hat{a}}
      = x^T L_K \left( \begin{bmatrix} f(x_1, a, t) \\ \vdots \\ f(x_n, a, t) \end{bmatrix} - L_K\, x \right)
      = x^T D\,\big(K\Lambda - K D^T D K\big)\,D^T x
      = x^T \big(L_{K\Lambda} - L_K^2\big)\, x

where L_{K\Lambda} = D\,K\Lambda\,D^T and \Lambda \in \mathbb{R}^{\tau \times \tau} (in block form) is a block-diagonal matrix whose kth diagonal block corresponds to the kth oriented link (i, j),

  [\Lambda]_k = \int_0^1 \frac{\partial f}{\partial x}\big(x_j + \lambda (x_i - x_j)\big)\, d\lambda

Under conditions similar to those in Theorem 2, it can be shown that the matrix L_{K\Lambda} - L_K^2 is negative semi-definite, and furthermore that the order of multiplicity of its zero eigenvalue is exactly m. Using Barbalat's lemma [44] and the boundedness of \ddot{V}, the first result implies that \dot{V} asymptotically converges to zero, and in turn the second result implies that the x_i synchronize. Furthermore [44], if

  \exists \alpha > 0,\ \exists T > 0,\ \forall t \ge t_o \ge 0, \quad \int_t^{t+T} W^T(x_\sigma, r)\, W(x_\sigma, r)\, dr \ge \alpha I

then \hat{a} actually converges to a. Such is the case in a coupled oscillator network as long as oscillation is preserved. A detailed proof and discussion of adaptive networks will be presented separately.

3.5 Inhibition

The dynamics of a large network of synchronized elements, as in Theorem 2, can be completely transformed by the addition of a single inhibitory coupling link. Start for instance with the synchronized network

  \dot{x}_i = f(x_i, t) + \sum_{j \in N_i} K_{ji}(x_j - x_i),   i = 1, \ldots, n

and add a single inhibitory link between two arbitrary elements a and b:

  \dot{x}_a = f(x_a, t) + \sum_{j \in N_a} K_{ja}(x_j - x_a) + K(-x_b - x_a)
  \dot{x}_b = f(x_b, t) + \sum_{j \in N_b} K_{jb}(x_j - x_b) + K(-x_a - x_b)

The symmetric part of the Jacobian matrix is now

  J_s = I_n^{J_{is}} - \sum_{(i,j)\in N} T_n^{K_{ijs}} - \bar{T}_n^{K}

where \bar{T}_n^{K} is composed of zeros except for four identical blocks,

  (\bar{T}_n^{K})_{aa} = (\bar{T}_n^{K})_{bb} = (\bar{T}_n^{K})_{ab} = (\bar{T}_n^{K})_{ba} = K

The matrix J_r = -\sum_{(i,j)\in N} T_n^{K_{ijs}} - \bar{T}_n^{K} is negative definite, since \forall v \ne 0

  v^T J_r v = -\sum_{(i,j)\in N} (v_i - v_j)^T K_{ijs} (v_i - v_j) - (v_a + v_b)^T K_s (v_a + v_b) < 0

Thus, the network is contracting for strong enough coupling strengths. Hence, the n oscillators will be inhibited and, if the function f is autonomous, they will tend to an equilibrium. Adding more inhibitory couplings preserves the result.

Example 3.4: Consider a ring network of ten FitzHugh-Nagumo neurons with diffusion couplings. The whole network turns off immediately if we add one extra inhibitory link between any two neurons, and resumes firing if we remove the extra link, as illustrated in Figure 3.

Fig. 3. Time plots of the v_i in Example 3.4. The parameters of the FitzHugh-Nagumo neurons are a = 0.7, b = 0.8, c = 8, I = -0.8. The coupling gains are identical, \kappa = 100, and the initial conditions are set randomly. The inhibitory link is added between the first and fifth neurons at t = 100 and removed at t = 200.

Such inhibition properties may be useful in pattern recognition to achieve rapid desynchronization between different objects. They may also be used as

simplified models of minimal mechanisms for turning off unwanted synchronization, as e.g. in epileptic seizures or oscillations in internet traffic. Cascades of inhibition are common in the brain, in a way perhaps reminiscent of NAND-based logic.

3.6 Algebraic Connectivity

Condition (7), which guarantees convergence, represents requirements on both the individual dynamics and the network's geometric structure. To see this in more detail, let us assume that all the links within the network are bidirectional (the corresponding graph is then called an undirected graph) with identical coupling gain K = K^T > 0. Thus, according to [15],

  \lambda_{m+1}\Big(\sum_{(i,j)\in N} T_n^K\Big) = \lambda_2\, \lambda_{\min}(K)

where \lambda_2 is the algebraic connectivity (the smallest non-zero eigenvalue) of the standard Laplacian matrix. Denote

  \bar{\lambda} = \frac{\max_i \lambda_{\max}(J_{is})}{\lambda_{\min}(K)}

Condition (7) can then be written \lambda_2 > \bar{\lambda} uniformly. Given a graph G of order n, there exist lower bounds on its diameter^4 diam(G) and its mean distance^5 \rho(G) [29],

  diam(G) \ge \frac{4}{n \lambda_2}, \qquad (n-1)\,\rho(G) \ge \frac{2}{\lambda_2} + \frac{n-2}{2}

(these bounds are most informative when \lambda_2 is small), which in turn give lower bounds on the algebraic connectivity,

  \lambda_2 \ge \frac{4}{n\, diam(G)}, \qquad \lambda_2 \ge \frac{2}{(n-1)\,\rho(G) - \frac{n-2}{2}}

A sufficient condition to guarantee exponential convergence within a coupled network is thus

  diam(G) < \frac{4}{n \bar{\lambda}}   or   \rho(G) < \frac{2}{\bar{\lambda}(n-1)} + \frac{n-2}{2(n-1)}

Note that for a coupled network, increasing the coupling gain of an existing link or adding an extra link will both improve the convergence process. Indeed, denoting by J_s' the new Jacobian, Weyl's Theorem [14] again yields

  \lambda_k(J_s') \le \lambda_k(J_s)

^4 Maximum number of links between two distinct vertices [10]
^5 Average number of links between distinct vertices [29]
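These bounds can be checked numerically. The sketch below (an added illustration) computes \lambda_2, the diameter, and the mean distance of a hypothetical open chain of n = 8 nodes, and verifies both lower bounds on the algebraic connectivity together with the closed-form value \lambda_2 = 2(1 - \cos(\pi/n)) for the open chain:

```python
import numpy as np

n = 8
# Open chain (path graph): Laplacian is tridiagonal, distances are |i - j|
L = 2.0 * np.eye(n)
L[0, 0] = L[-1, -1] = 1.0
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0

lam2 = np.sort(np.linalg.eigvalsh(L))[1]

diam = n - 1                                        # diameter of the path graph
dists = [abs(i - j) for i in range(n) for j in range(i + 1, n)]
rho = np.mean(dists)                                # mean distance over distinct pairs

b1 = lam2 >= 4.0 / (n * diam)                       # diameter-based lower bound
b2 = lam2 >= 2.0 / ((n - 1) * rho - (n - 2) / 2)    # mean-distance-based lower bound

print(lam2, 2 * (1 - np.cos(np.pi / n)))            # both ~0.152
print(diam, rho, b1, b2)
```

For this chain \lambda_2 \approx 0.152, while the two lower bounds give 4/56 \approx 0.071 and 2/18 \approx 0.111; as noted above, such bounds are most informative when \lambda_2 is small.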

These two facts quantify intuitive results. Connecting each element to more neighbors favors group agreement, especially for large networks. Also, different coupling links make different contributions, with couplings between far-separated nodes contributing more than those between close neighbors, an instance of small-world phenomena.

Example 3.5: In [19], Kopell and Ermentrout show that closed chains of oscillators will reliably synchronize with nearest-neighbor coupling, while open chains require nearest and next-nearest neighbor coupling. This result can be explained by expressing the partial contraction condition as

  \lambda_2 > \frac{\max_i \lambda_{\max}(J_{is})}{\lambda_{\min}(K)}   uniformly

Assuming n extremely large, for a graph with bidirectional open-chain structure

  \lambda_2 = 2\big(1 - \cos\tfrac{\pi}{n}\big) \approx \big(\tfrac{\pi}{n}\big)^2

while for a graph with bidirectional ring structure

  \lambda_2 = 2\big(1 - \cos\tfrac{2\pi}{n}\big) \approx 4\big(\tfrac{\pi}{n}\big)^2

The effort needed to synchronize an open chain is thus several times that needed for a closed one.

Example 3.6: Assuming n very large, we can compare the partial contraction thresholds for two extreme cases,

  \lambda_{\min}(K) \sim O(n^2) \to +\infty   for a one-way ring structure
  \lambda_{\min}(K) \sim O(\tfrac{1}{n}) \to 0   for an all-to-all structure

Note that \lambda_2 is equal to 1 for a star graph [5]. Much less effort is needed to synchronize such a network than a ring, even for a larger number of links, because the center node in a star network plays a global role, while the communication in a ring network is completely distributed.

4 A Simple Coupled Model

In this section, we study a simplified model of schooling or flocking in continuous time, with f = 0. Consider such a group

  \dot{x}_i = \sum_{j \in N_i(t)} K_{ji}(x_j - x_i),   i = 1, \ldots, n   (12)

where x_i \in \mathbb{R}^m denotes the states on which agreement is needed, such as heading, attitude, velocity, etc. N_i(t) is defined, for instance, as the set of the nearest neighbors within a certain distance around subsystem i at the current

time t, which can change abruptly and asynchronously. Since J_is = 0 here, condition (10) is satisfied as long as the network is connected. Therefore, each x_i converges exponentially to a particular solution, which in this case is the constant value x̄ = (1/n) Σ_{i=1}^n x_i(0). Note that in the case of heading agreement based on spatial proximity, the issue of chattering is immaterial since switching cannot occur infinitely fast, while in the general case it can simply be avoided by using smooth transitions in time or space. Moreover, the network (12) need not be connected for all t ≥ 0. A generalized condition can be derived, the same as that obtained in [16] for a discrete-time model.

Lemma 2. For network (12), Σ_{i=1}^n |x_i|² converges exponentially to its lower limit n |x̄|².

Proof: Letting x_i = [x_{i1}, …, x_{im}]^T, we have Σ_{i=1}^n ẋ_i = 0, which leads to Σ_{i=1}^n x_i = Σ_{i=1}^n x_i(0) = n x̄, with Σ_{i=1}^n x_{ij} = n x̄_j for j = 1, …, m. Thus,

Σ_{i=1}^n |x_i − x̄|² + n |x̄|²
  = Σ_{i=1}^n Σ_{j=1}^m (x_{ij} − x̄_j)² + n |x̄|²
  = Σ_{i=1}^n Σ_{j=1}^m (x_{ij}² − 2 x_{ij} x̄_j + x̄_j²) + n |x̄|²
  = Σ_{i=1}^n |x_i|² − 2 Σ_{i=1}^n Σ_{j=1}^m x_{ij} x̄_j + 2n |x̄|²
  = Σ_{i=1}^n |x_i|²    (13)

using

Σ_{i=1}^n Σ_{j=1}^m x_{ij} x̄_j = Σ_{j=1}^m x̄_j ( Σ_{i=1}^n x_{ij} ) = Σ_{j=1}^m n x̄_j² = n |x̄|²

From partial contraction analysis, we know that any solution of system (12) converges exponentially to a particular one, x_1 = ⋯ = x_n = x̄ = (1/n) Σ_{i=1}^n x_i(0), which implies that Σ_{i=1}^n |x_i − x̄|² tends to zero exponentially. Using (13) completes the proof.

We can now largely generalize the condition to reach group agreement.

Theorem 3. Consider n coupled elements with linear protocol (12), whose neighbor relationships can change abruptly and asynchronously. Separate time into an infinite sequence of bounded intervals starting at t = 0. If the network is connected across each such interval⁶, the agreement x_1 = ⋯ = x_n will be reached asymptotically.

⁶ As in [16], being connected across a time interval means that the union of the different graphs occurring during the interval is connected.
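Protocol (12) and Lemma 2 can be checked numerically. The sketch below (our own minimal construction: scalar states m = 1, identical scalar gains, and a fixed 5-node chain topology) integrates (12) with Euler steps and verifies that the average is conserved, that agreement is reached, and that Σ|x_i|² tends to its lower limit n|x̄|².

```python
def step(x, neighbors, k, dt):
    """One Euler step of protocol (12) with scalar states (m = 1)
    and identical scalar gains K_ji = k."""
    return [xi + dt * k * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# A connected 5-node open chain, scalar states.
n = 5
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
x = [3.0, -1.0, 4.0, 0.5, -2.5]
xbar = sum(x) / n                      # the invariant average

for _ in range(20000):                 # integrate to t = 200
    x = step(x, neighbors, k=1.0, dt=0.01)

assert abs(sum(x) / n - xbar) < 1e-9             # average is conserved
assert all(abs(xi - xbar) < 1e-6 for xi in x)    # agreement is reached
# Lemma 2: sum |x_i|^2 tends to its lower limit n * |xbar|^2
assert abs(sum(xi * xi for xi in x) - n * xbar * xbar) < 1e-6
```

The convergence rate observed numerically is governed by λ₂ of the chain, consistent with Section 3.6.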

Proof: Assume that at some time t the network is not connected, but instead is composed of k isolated subnetworks, each of which is connected and contains n_j elements, j = 1, …, k. Defining z_i = x_i − x̄, we get

ż_i = Σ_{j ∈ N_i(t)} K_{ji} (z_j − z_i),    i = 1, …, n

and from Lemma 2, with z̄_j = (1/n_j) Σ_{i ∈ G_j} z_i(t) the average over the j-th subnetwork G_j,

Σ_{i=1}^n |z_i|² = Σ_{j=1}^k Σ_{i ∈ G_j} |z_i|² = Σ_{j=1}^k ( Σ_{i ∈ G_j} |z_i − z̄_j|² + n_j |z̄_j|² )

Note that z̄_j is a local agreement, as compared to the global one x̄ (corresponding to z̄ = 0). For each j, z̄_j is constant as long as the current network structure remains unchanged, and Σ_{i ∈ G_j} |z_i − z̄_j|² tends to zero exponentially during this period. Thus Σ_{i=1}^n |z_i|² is non-increasing. Furthermore, the condition in Theorem 3 guarantees that Σ_{i=1}^n |z_i|² strictly decreases across each time interval, so that it tends to its lower limit zero asymptotically.

The most important fact behind Theorem 3 and its proof is that the closer the group is to a local agreement, the closer it is to the global one. This is illustrated in Figure 4 with a simple subnetwork containing two elements.

Fig. 4. A connected subnetwork containing two elements x_1 and x_2. Starting from point A, the trajectory moves along the line x_1 + x_2 = x_1(0) + x_2(0) through B towards C on the line x_1 = x_2. Point O represents the global agreement and point C the local one, with OC < OB < OA: the closer the group is to the local agreement, the closer it is to the global.

5 Future Research

Partial contraction analysis places no restriction on the uncoupled dynamics f(x, t) other than requiring λ_max(∂f/∂x_i) to be bounded. Thus, different qualitative choices exist for f, which can be an oscillator [62], a contracting system, zero, or even a chaotic system [37, 46, 48]. For a group of contracting systems with Θ = I, the contraction property of the overall group is enhanced by the diffusion couplings, and all the coupled systems are expected to converge to a common equilibrium point exponentially. If Θ ≠ I, however, the situation is more complicated. A transformation must be performed in order to guarantee exponential convergence of the virtual dynamics. The coupling gains may lose positivity through the transformation, and the stability of the equilibrium point may be destroyed by strong enough coupling strengths. This kind of bifurcation is especially interesting when the otherwise silent systems behave as oscillators after coupling [25, 47].

Partial contraction theory is derived from contraction theory, so many results from [26, 45] apply directly. Consider for instance a coupled network with constraints

ẋ_i = f(x_i, t) + n_i + Σ_{j ∈ N_i} u_{ji}(x_j, x_i, x̄, t),    i = 1, …, n

where n_i represents a superimposed flow normal to the constraint manifold and has the same form for each system. Construct the corresponding auxiliary system

ẏ_i = f(y_i, t) + n_i + Σ_{j ∈ N_i} u_{ji}(y_j, y_i, x̄, t) − K₀ y_i + (K₀/n) Σ_{j=1}^n x_j    (14)

Using [26], contraction of the unconstrained flow (6) implies local contraction of the constrained flow (14), which means that group agreement can be achieved for a constrained network within a finite region which can be computed explicitly. In some cases, the introduction of the constraint, combined with the specific properties of the particular solution, implies that the constrained original system is actually contracting.
Similarly, because the auxiliary system is contracting, the robustness results in [26] apply directly.

Besides group cooperation, partial contraction theory can be applied to other related research. In the brain, for instance, extensive synchronization phenomena likely occur at different scales in low-level neural networks, in binding [43, 45, 46, 54, 59, 61], and in so-called mirror neuron responses [40, 6, 7]. Temporal correlation theory [9, 28, 43, 58] suggests that different objects are perceived in the brain by alternately firing groups of cells, with each synchronized group activated by the same object. Synchronization-based neural models have been proposed to implement image segmentation [51], speech segregation [13, 60], contour detection [64], and odor recognition [2]. The results stated in this paper provide another potential solution to

similar tasks. Fast synchronization behavior can be achieved in neurocomputation if the system dynamics are set as, e.g., those illustrated in Example 3.1. The coupling gains K_{ji} between two neighboring neurons may be determined by the Gestalt laws [41] of perceptual grouping.

This work was supported in part by grants from the National Institutes of Health and the National Science Foundation (KDI initiative).

References

1. Belta, C., and Kumar, V. (2003) Abstraction and Control for Groups of Fully-Actuated Planar Robots, IEEE International Conference on Robotics and Automation, Taipei, Taiwan
2. Brody, C.D., and Hopfield, J.J. (2003) Simple Networks for Spike-Timing-Based Computation, with Application to Olfactory Processing, Neuron, 37:843-852
3. Bruckstein, A.M., Mallows, C.L., and Wagner, I.A. (1997) Cooperative Cleaners: a Study in Ant-Robotics, American Mathematical Monthly, 104(4):323-343
4. Collins, J.J., and Stewart, I.N. (1993) Coupled Nonlinear Oscillators and the Symmetries of Animal Gaits, Nonlinear Science, 3:349-392
5. Fiedler, M. (1973) Algebraic Connectivity of Graphs, Czechoslovak Mathematical Journal, 23(98):298-305
6. Buccino, G. (2001) European Journal of Neuroscience, 13:400
7. Decety, J. (2002) in Simulation and Knowledge of Action, Benjamin, Philadelphia
8. FitzHugh, R.A. (1961) Impulses and Physiological States in Theoretical Models of Nerve Membrane, Biophys. J., 1:445-466
9. Gray, C.M. (1999) The Temporal Correlation Hypothesis of Visual Feature Integration: Still Alive and Well, Neuron, 24:31-47
10. Godsil, C., and Royle, G. (2001) Algebraic Graph Theory, Springer
11. Golubitsky, M., and Stewart, I. (2002) Patterns of Oscillation in Coupled Cell Systems, in Geometry, Dynamics, and Mechanics: 60th Birthday Volume for J.E. Marsden, Springer-Verlag, 243-286
12. Hodgkin, A.L., and Huxley, A.F. (1952) A Quantitative Description of Membrane Current and its Application to Conduction and Excitation in Nerve, J. Physiol., 117:500
13. Hopfield, J.J., and Brody, C.D. (2001) What Is A Moment? Transient Synchrony as a Collective Mechanism for Spatiotemporal Integration, Proc. Natl. Acad. Sci. USA, 98:1282-1287
14. Horn, R.A., and Johnson, C.R. (1985) Matrix Analysis, Cambridge University Press
15. Horn, R.A., and Johnson, C.R. (1989) Topics in Matrix Analysis, Cambridge University Press
16. Jadbabaie, A., Lin, J., and Morse, A.S. (2003) Coordination of Groups of Mobile Autonomous Agents Using Nearest Neighbor Rules, IEEE Transactions on Automatic Control, 48:988-1001
17. Ketterle, W. (2002) When Atoms Behave as Waves: Bose-Einstein Condensation and The Atom Laser, Rev. Mod. Phys., 74

18. Klavins, E., and Koditschek, D.E. (2002) Phase Regulation of Decentralized Cyclic Robotic Systems, International Journal of Robotics and Automation, 21(3):257-275
19. Kopell, N., and Ermentrout, G.B. (1986) Symmetry and Phase-locking in Chains of Weakly Coupled Oscillators, Communications on Pure and Applied Mathematics, 39:623-660
20. Langbort, C., and D'Andrea, R. (2003) Distributed Control of Spatially Reversible Interconnected Systems with Boundary Conditions, submitted to SIAM Journal on Control and Optimization
21. Leonard, N.E., and Fiorelli, E. (2001) Virtual Leaders, Artificial Potentials and Coordinated Control of Groups, 40th IEEE Conference on Decision and Control
22. Lin, Z., Broucke, M., and Francis, B. (2003) Local Control Strategies for Groups of Mobile Autonomous Agents, submitted to IEEE Trans. Automatic Control
23. Milo, R. (2002) Network Motifs, Science, 298
24. Micchelli, C.A. (1986) Interpolation of Scattered Data, Constructive Approximation, 2:11-22
25. Loewenstein, Y., and Sompolinsky, H. (2002) Oscillations by Symmetry Breaking in Homogeneous Networks with Electrical Coupling, Phys. Rev. E, 65:1-11
26. Lohmiller, W., and Slotine, J.J.E. (1998) On Contraction Analysis for Nonlinear Systems, Automatica, 34(6)
27. Lohmiller, W. (1999) Contraction Analysis of Nonlinear Systems, Ph.D. Thesis, Department of Mechanical Engineering, MIT
28. Milner, P.M. (1974) A Model for Visual Shape Recognition, Psychological Review, 81(6):521-535
29. Mohar, B. (1991) Eigenvalues, Diameter, and Mean Distance in Graphs, Graphs and Combinatorics, 7:53-64
30. Murray, J.D. (1993) Mathematical Biology, Berlin; New York: Springer-Verlag
31. Nagumo, J., Arimoto, S., and Yoshizawa, S. (1962) An Active Pulse Transmission Line Simulating Nerve Axon, Proc. Inst. Radio Engineers, 50:2061-2070
32. Olfati-Saber, R., and Murray, R.M. (2003) Consensus Protocols for Networks of Dynamic Agents, American Control Conference, Denver, Colorado
33. Olfati-Saber, R., and Murray, R.M. (2003) Agreement Problems in Networks with Directed Graphs and Switching Topology, IEEE Conference on Decision and Control
34. Olfati-Saber, R., and Murray, R.M. (2003) Flocking with Obstacle Avoidance: Cooperation with Limited Communication in Mobile Networks, IEEE Conference on Decision and Control
35. Ögren, P., Fiorelli, E., and Leonard, N.E. (2002) Formations with a Mission: Stable Coordination of Vehicle Group Maneuvers, SIAM Symposium on Mathematical Theory of Networks and Systems
36. Parlett, B.N. (1980) The Symmetric Eigenvalue Problem, Prentice-Hall
37. Pecora, L.M., and Carroll, T.L. (1990) Synchronization in Chaotic Systems, Phys. Rev. Lett., 64:821-824
38. Reynolds, C. (1987) Flocks, Birds, and Schools: a Distributed Behavioral Model, Computer Graphics, 21:25-34
39. Pikovsky, A., Rosenblum, M., and Kurths, J. (2003) Synchronization: A Universal Concept in Nonlinear Sciences, Cambridge University Press
40. Rizzolatti, G. (1996) Cognitive Brain Research, 3:131
41. Rock, I., and Palmer, S. (1990) The Legacy of Gestalt Psychology, Scientific American, 263:84-90

42. Seiler, P., Pant, A., and Hedrick, J.K. (2003) A Systems Interpretation for Observations of Bird V-formations, Journal of Theoretical Biology, 221(2):279-287
43. Singer, W., and Gray, C.M. (1995) Visual Feature Integration and The Temporal Correlation Hypothesis, Annu. Rev. Neurosci., 18:555-586
44. Slotine, J.J.E., and Li, W. (1991) Applied Nonlinear Control, Prentice-Hall
45. Slotine, J.J.E., and Lohmiller, W. (2001) Modularity, Evolution, and the Binding Problem: A View from Stability Theory, Neural Networks, 14(2)
46. Slotine, J.J.E. (2003) Modular Stability Tools for Distributed Computation and Control, Int. J. Adaptive Control and Signal Processing, 17(6)
47. Smale, S. (1976) A Mathematical Model of Two Cells via Turing's Equation, in The Hopf Bifurcation and Its Applications, Springer-Verlag, 354-367
48. Strogatz, S. (2003) Sync: The Emerging Science of Spontaneous Order, New York: Hyperion
49. Tanner, H., Jadbabaie, A., and Pappas, G.J. (2003) Stable Flocking of Mobile Agents, Part I: Fixed Topology; Part II: Dynamic Topology, IEEE Conference on Decision and Control, Maui, HI
50. Tanner, H., Jadbabaie, A., and Pappas, G.J. (2003) Coordination of Multiple Autonomous Vehicles, IEEE Mediterranean Conference on Control and Automation, Rhodes, Greece
51. Terman, D., and Wang, D.L. (1995) Global Competition and Local Cooperation in a Network of Neural Oscillators, Physica D, 81:148-176
52. Toner, J., and Tu, Y. (1995) Long Range Order in a Two Dimensional xy Model: How Birds Fly Together, Phys. Rev. Lett., 75:4326-4329
53. Toner, J., and Tu, Y. (1998) Flocks, Herds, and Schools: A Quantitative Theory of Flocking, Phys. Rev. E, 58:4828-4858
54. Varela, F., Lachaux, J.P., Rodriguez, E., and Martinerie, J. (2001) The Brainweb: Phase Synchronization and Large-Scale Integration, Nature Reviews Neuroscience, 2:229-239
55. Vicsek, T., Czirók, A., Ben-Jacob, E., and Cohen, I. (1995) Novel Type of Phase Transition in a System of Self-Driven Particles, Phys. Rev. Lett., 75:1226-1229
56. Vicsek, T. (2001) A Question of Scale, Nature, 411:421
57. Vicsek, T. (2002) The Bigger Picture, Nature, 418:131
58. von der Malsburg, C. (1981) The Correlation Theory of Brain Function, Max-Planck-Institut Biophys. Chem., Internal Rep. 81-2, Göttingen, FRG
59. von der Malsburg, C. (1995) Binding in Models of Perception and Brain Function, Current Opinion in Neurobiology, 5:520-526
60. Wang, D.L., and Brown, G.J. (1999) Separation of Speech From Interfering Sounds Based on Oscillatory Correlation, IEEE Transactions on Neural Networks, 10:684-697
61. Wang, D.L. (2002) The Time Dimension for Neural Computation, Technical Report, Dept. of Computer and Information Science, Ohio State University
62. Wang, W., and Slotine, J.J.E. (2002) Partial Contraction Analysis for Coupled Nonlinear Oscillators, MIT Nonlinear Systems Laboratory Report, submitted
63. Wang, W., and Slotine, J.J.E. (2003) Fast Computation with Neural Oscillators, MIT Nonlinear Systems Laboratory Report, submitted
64. Yen, S.C., Menschik, E.D., and Finkel, L.H. (1999) Perceptual Grouping in Striate Cortical Networks Mediated by Synchronization and Desynchronization, Neurocomputing, 26-27(1-3):609-616