A Study of Synchronization and Group Cooperation Using Partial Contraction Theory


A Study of Synchronization and Group Cooperation Using Partial Contraction Theory

Jean-Jacques E. Slotine and Wei Wang
Nonlinear Systems Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA

1 Introduction

Synchronization, collective behavior, and group cooperation have been the object of extensive recent research. A fundamental understanding of aggregate motions in the natural world, such as bird flocks, fish schools, animal herds, or bee swarms, would greatly help in achieving desired collective behaviors of artificial multi-agent systems, such as vehicles with distributed cooperative control rules. In [38], Reynolds published his well-known computer model of "boids," successfully animating a flock using three local rules: collision avoidance, velocity matching, and flock centering. Motivated by the growth of bacterial colonies, Vicsek et al. [55] proposed a similar discrete-time model which achieves heading matching using information only from neighbors. Vicsek's model was later analyzed analytically [16, 52, 53]. Models in continuous time [1, 22, 32, 33, 62] and combinations of Reynolds's three rules [21, 34, 35, 49, 50] were also studied. Related questions arise e.g. in [3, 18, 20, 42], in oscillator synchronization [48], as well as in physics in the study of lasers [39] or of Bose-Einstein condensation [17]. This article provides a theoretical analysis tool, partial contraction theory [62], for the study of group cooperation, and especially of group agreement and synchronization. Partial contraction (or meta-contraction) theory is a straightforward but very general application of contraction theory, a recent nonlinear system analysis tool based on studying convergence between two arbitrary system trajectories [26, 27, 45, 46].
Partial contraction actually extends contraction theory: while the latter is concerned with convergence to a unique trajectory, the former can describe convergence to particular properties or manifolds [46, 62]. In particular, partial contraction theory can be used to derive, in a simple way, sufficient conditions for coupled nonlinear networks to reach group agreement or to synchronize. The article is organized as follows. Section 2 briefly reviews contraction theory and two important combination properties of contracting systems, and
introduces partial contraction theory. The collective behavior of coupled networks of identical dynamic elements is studied in Section 3. Synchronization conditions for general diffusion-coupled networks are derived, and then extended to networks with switching topologies or with group leaders. Adaptive versions are also derived. Section 4 studies a simplified continuous-time model of flocking. Concluding remarks are offered in Section 5.

2 Contraction Theory and Partial Contraction

2.1 Contraction Theory

Consider a nonlinear system

\dot{x} = f(x, t)   (1)

where f is an m x 1 vector function and x is an m x 1 state vector. Assuming f(x,t) is continuously differentiable, we have

\frac{d}{dt}(\delta x^T \delta x) = 2 \, \delta x^T \delta\dot{x} = 2 \, \delta x^T \frac{\partial f}{\partial x} \delta x \le 2 \lambda_{max} \, \delta x^T \delta x

where \delta x is a virtual displacement between two neighboring solution trajectories, and \lambda_{max}(x,t) is the largest eigenvalue of the symmetric part of the Jacobian J = \frac{\partial f}{\partial x}. Hence, if \lambda_{max}(x,t) is uniformly strictly negative, any infinitesimal length \|\delta x\| converges exponentially to zero. By path integration at fixed time, this implies in turn that all the solutions of system (1) converge exponentially to a single trajectory, independently of the initial conditions.

More generally, consider a coordinate transformation

\delta z = \Theta \, \delta x

where \Theta(x,t) is a uniformly invertible square matrix. One has

\frac{d}{dt}(\delta z^T \delta z) = 2 \, \delta z^T \delta\dot{z} = 2 \, \delta z^T \left( \dot{\Theta} + \Theta \frac{\partial f}{\partial x} \right) \Theta^{-1} \delta z

so that exponential convergence of \|\delta z\| to zero is guaranteed if the generalized Jacobian matrix

F = \left( \dot{\Theta} + \Theta \frac{\partial f}{\partial x} \right) \Theta^{-1}

is uniformly negative definite. Again, this implies in turn that all the solutions of the original system (1) converge exponentially to a single trajectory, independently of the initial conditions.
By convention, system (1) is then called contracting, f(x,t) is called a contracting function, and the absolute value of the largest eigenvalue of the symmetric part of F is called the system's contraction rate with respect to the uniformly positive definite metric M = \Theta^T \Theta.
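As a small numerical sketch (ours, not from the paper), contraction can be checked by computing the largest eigenvalue of the symmetric part of the (generalized) Jacobian. For the damped oscillator \ddot{x} + 2\dot{x} + x = 0, the state-space Jacobian is not negative definite in the identity metric, but becomes so under a constant transformation \Theta chosen here for illustration:

```python
import numpy as np

def max_sym_eig(F):
    # Largest eigenvalue of the symmetric part of F; the system is
    # contracting (in the chosen metric) if this is uniformly negative.
    return np.linalg.eigvalsh(0.5 * (F + F.T)).max()

# State-space Jacobian of xddot + 2*xdot + x = 0 (linear, hence constant)
J = np.array([[0.0, 1.0],
              [-1.0, -2.0]])

# Constant metric transformation Theta; for constant Theta the
# generalized Jacobian reduces to F = Theta J Theta^{-1}
Theta = np.array([[1.0, 0.0],
                  [1.0, 1.0]])
F = Theta @ J @ np.linalg.inv(Theta)

print(max_sym_eig(J))   # 0.0: inconclusive in the identity metric
print(max_sym_eig(F))   # -0.5: contracting in the metric Theta^T Theta
```

The same eigenvalue test applies pointwise along trajectories for nonlinear systems, where J and \Theta may depend on the state.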
2.2 Combinations of Contracting Systems

One of the main features of contraction is that it is automatically preserved through a variety of system combinations. Below we list two combination properties which are closely related to the study of group cooperation.

Hierarchical Combination

Consider a smooth virtual dynamics of the form

\frac{d}{dt} \begin{bmatrix} \delta z_1 \\ \delta z_2 \end{bmatrix} = \begin{bmatrix} F_{11} & 0 \\ F_{21} & F_{22} \end{bmatrix} \begin{bmatrix} \delta z_1 \\ \delta z_2 \end{bmatrix}

with F_{11} and F_{22} uniformly negative definite. The overall system is contracting as long as F_{21} is bounded. By recursion, the result can be extended to combinations of arbitrary size [26].

Example 2.1: Consider a network containing n + 1 coupled systems with a chain structure (or, more generally, a tree structure)

\dot{x}_0 = f(x_0, t)
\dot{x}_1 = f(x_1, t) + u(x_0) - u(x_1)
\vdots
\dot{x}_n = f(x_n, t) + u(x_{n-1}) - u(x_n)

where x_i \in R^m is the state vector, f(x_i, t) the dynamics of the uncoupled system, and u(x_{i-1}) - u(x_i) the coupling force. If the function f - u is contracting, the whole network will exponentially reach the agreement

x_0 = x_1 = \cdots = x_n

regardless of initial conditions.

Feedback Combination

Consider two contracting systems and an arbitrary feedback connection between them. The overall virtual dynamics can be written as

\frac{d}{dt} \begin{bmatrix} \delta z_1 \\ \delta z_2 \end{bmatrix} = F \begin{bmatrix} \delta z_1 \\ \delta z_2 \end{bmatrix}

with the symmetric part of the generalized Jacobian in the form

F_s = \frac{1}{2}(F + F^T) = \begin{bmatrix} F_{1s} & G \\ G^T & F_{2s} \end{bmatrix}

(the subscript s represents the symmetric part of the matrix). By hypothesis the matrices F_{1s} and F_{2s} are uniformly negative definite. Then F_s is uniformly negative definite if and only if ([14], page 472)
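A quick simulation sketch of Example 2.1 (the scalar choices f(x) = -x and u(x) = 2x are ours, picked so that f - u = -3x is contracting):

```python
import numpy as np

f = lambda x: -x          # uncoupled dynamics (illustrative choice)
u = lambda x: 2.0 * x     # coupling function; f - u is contracting

n, dt, steps = 5, 0.01, 2000
x = np.linspace(-2.0, 2.0, n + 1)    # x[0] is the head of the chain
for _ in range(steps):
    xdot = f(x)
    xdot[1:] += u(x[:-1]) - u(x[1:])   # coupling force u(x_{i-1}) - u(x_i)
    x = x + dt * xdot

# The whole chain reaches agreement x_0 = x_1 = ... = x_n (here, at 0,
# the trajectory of the uncoupled head element)
spread = x.max() - x.min()
```

The hierarchy is visible in the code: x[0] evolves independently, and each element only receives information from its predecessor.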
F_{2s} < G^T F_{1s}^{-1} G

Thus, a sufficient condition for the overall system to be contracting is that

\lambda(F_{1s}) \, \lambda(F_{2s}) > \sigma^2(G)   uniformly \forall t \ge 0   (2)

where \lambda(F_{is}) is the contraction rate of F_{is} and \sigma(G) is the largest singular value of G. Indeed, condition (2) is equivalent to

\lambda_{max}(F_{2s}) < -\lambda_{min}(-F_{1s})^{-1} \, \sigma^2(G)

and, for an arbitrary nonzero vector v,

v^T F_{2s} v \le \lambda_{max}(F_{2s}) \, v^T v < -\lambda_{min}(-F_{1s})^{-1} \sigma^2(G) \, v^T v \le -\lambda_{min}(-F_{1s})^{-1} \, v^T G^T G v \le v^T G^T F_{1s}^{-1} G v

Again, the result can be applied recursively to larger combinations.

2.3 Partial Contraction Theory

Theorem 1. Consider a nonlinear system of the form

\dot{x} = f(x, x, t)

and assume that the auxiliary system

\dot{y} = f(y, x, t)

is contracting with respect to y. If a particular solution of the auxiliary y-system verifies a smooth specific property, then all trajectories of the original x-system verify this property exponentially. The original system is said to be partially contracting.

Proof: The virtual, observer-like y-system has two particular solutions, namely y(t) = x(t) for all t \ge 0 and the solution with the specific property. Since the y-system is contracting, these two solutions converge to each other exponentially, which implies that x(t) verifies the specific property exponentially.

Note that contraction may be trivially regarded as a particular case of partial contraction. Also, consider for instance an original system of the form

\dot{x} = c(x, t) + d(x, t)

where the function c is contracting in a constant metric. The auxiliary contracting system may then be constructed as

\dot{y} = c(y, t) + d(x, t)   (3)

and the specific property of interest may consist, e.g., of a relationship between state variables.
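Condition (2) is easy to check numerically. The sketch below (small deterministic matrices chosen by us) verifies that when the product of the two contraction rates exceeds \sigma^2(G), the combined symmetric Jacobian F_s is indeed negative definite:

```python
import numpy as np

# Toy feedback combination: F1s, F2s negative definite, G arbitrary
F1s = np.diag([-3.0, -4.0])            # contraction rate 3
F2s = np.diag([-2.0, -5.0])            # contraction rate 2
G = np.array([[1.0, 0.5],
              [0.0, 1.5]])

rate1 = -np.linalg.eigvalsh(F1s).max()                 # lambda(F1s)
rate2 = -np.linalg.eigvalsh(F2s).max()                 # lambda(F2s)
sigma = np.linalg.svd(G, compute_uv=False).max()       # sigma(G)

# Condition (2): product of contraction rates exceeds sigma^2(G)
condition_holds = rate1 * rate2 > sigma ** 2

# Overall symmetric Jacobian of the feedback combination
Fs = np.block([[F1s, G], [G.T, F2s]])
overall_contracting = np.linalg.eigvalsh(Fs).max() < 0
```

Here rate1 * rate2 = 6 while sigma^2(G) is about 2.65, so the condition holds and the combined system contracts.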
3 Synchronization in Coupled Networks

3.1 General Structure

Consider a coupled network containing n elements

\dot{x}_i = f(x_i, t) + \sum_{j \in N_i} u_{ji}(x_j, x_i, x, t)   i = 1, \ldots, n   (4)

where x = [x_1, \ldots, x_n]^T, N_i denotes the set of indices of the active links of element i, and u_{ji} the coupling force from element j to element i. Assume more specifically that the couplings are bidirectional, symmetric, and of the form

u_{ji} = u_{ji}(x_j - x_i, x, t)

where u_{ji}(0, x, t) = 0 for all i, j, x, t, and

K_{ji} = \frac{\partial u_{ji}(x_j - x_i, x, t)}{\partial (x_j - x_i)} > 0   uniformly   (5)

with K_{ji} = K_{ij}. For instance, one may have u_{ji} = ( C_{ji}(t) + B_{ji}(t) \|x_j - x_i\| )(x_j - x_i) with C_{ji} = C_{ij} > 0 uniformly and B_{ji} = B_{ij} \ge 0. The dynamics is then equivalent to

\dot{x}_i = f(x_i, t) + \sum_{j \in N_i} u_{ji}(x_j - x_i, x, t) - K_0 \sum_{j=1}^n x_j + K_0 \sum_{j=1}^n x_j

which leads to the auxiliary system

\dot{y}_i = f(y_i, t) + \sum_{j \in N_i} u_{ji}(y_j - y_i, x, t) - K_0 \sum_{j=1}^n y_j + K_0 \sum_{j=1}^n x_j(t)   (6)

We learn from Theorem 1 that, if the auxiliary system (6) is contracting, all system trajectories will verify the particular property x_1 = \cdots = x_n exponentially. Thus we compute the symmetric part of its Jacobian matrix

J_s = \begin{bmatrix} J_{1s} & & \\ & \ddots & \\ & & J_{ns} \end{bmatrix} - \sum_{(i,j) \in N} T_n(K_{ijs}) - U_n(K_0)   with   J_i = \frac{\partial f(x_i, t)}{\partial x_i}
where the n x n block matrices U_n(K_0) and T_n(K_{ijs}) are

U_n(K_0) = \begin{bmatrix} K_0 & \cdots & K_0 \\ \vdots & \ddots & \vdots \\ K_0 & \cdots & K_0 \end{bmatrix}, \qquad T_n(K_{ijs}) = \begin{bmatrix} \ddots & & & & \\ & K_{ijs} & \cdots & -K_{ijs} & \\ & \vdots & & \vdots & \\ & -K_{ijs} & \cdots & K_{ijs} & \\ & & & & \ddots \end{bmatrix}

and the set N includes all the active links in the network. All the blocks of T_n(K_{ijs}) are zero except the four located at the intersections of the i-th and j-th block rows with the i-th and j-th block columns. If we view the network as a graph, \sum_{(i,j) \in N} T_n(K_{ijs}) is the generalized (or weighted) Laplacian matrix.

Lemma 1. Define

J_r = -\sum_{(i,j) \in N} T_n(K_{ijs}) - U_n(K_0)

If K_0 > 0, K_{ij} > 0 for all (i,j) \in N, and the network is connected, then J_r < 0.

Proof: Given an arbitrary nonzero vector v = [v_1, \ldots, v_n]^T, we have

v^T J_r v = -\sum_{(i,j) \in N} (v_i - v_j)^T K_{ijs} (v_i - v_j) - \left( \sum_{i=1}^n v_i \right)^T K_0 \left( \sum_{i=1}^n v_i \right) < 0

for a connected network. Furthermore, the largest eigenvalue of J_r can be computed from the Courant-Fischer Theorem [14]:

\lambda_{max}(J_r) = \max_{\|v\|=1} v^T J_r v = -\min_{\|v\|=1, \, \sum_i v_i = 0} v^T \left( \sum_{(i,j) \in N} T_n(K_{ijs}) \right) v = -\lambda_{m+1}\left( \sum_{(i,j) \in N} T_n(K_{ijs}) \right)

if we choose K_0 large enough [62]. For m = 1, the eigenvalue \lambda_2\left( \sum_{(i,j) \in N} T_n(1) \right) is a fundamental quantity in graph theory named the algebraic connectivity [5, 10, 29], which is zero if and only if the graph is not connected.

The above results imply

Theorem 2. All the elements within a generally coupled network (5) will reach group agreement exponentially if the network is connected, \lambda_{max}(J_{is}) is upper bounded, and the coupling strengths are strong enough. More specifically, if

\lambda_{m+1}\left( \sum_{(i,j) \in N} T_n(K_{ijs}) \right) > \max_i \lambda_{max}(J_{is})   uniformly   (7)

then the auxiliary system (6) is contracting.
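For scalar states (m = 1) and an identical scalar gain k on every link, condition (7) reduces to k \lambda_2(L) > \max_i \lambda_{max}(J_{is}), with L the unweighted graph Laplacian. A small sketch (the graph and the Jacobian bound are our own choices):

```python
import numpy as np

def laplacian(edges, n):
    # Standard (unweighted) graph Laplacian of an undirected graph
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

n = 6
ring = [(i, (i + 1) % n) for i in range(n)]      # 6-element ring
lam2 = np.sort(np.linalg.eigvalsh(laplacian(ring, n)))[1]  # algebraic connectivity

jac_bound = 0.5              # assumed uniform bound on lambda_max(J_is)
k_min = jac_bound / lam2     # minimal gain required by condition (7)
```

For the 6-ring, lam2 = 2(1 - cos(2*pi/6)) = 1, so the required gain equals the Jacobian bound itself.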
Remarks:

The analysis carries over straightforwardly to other kinds of couplings. For instance, consider an all-to-all coupled network with n elements

\dot{x}_i = f(x_i, t) + \sum_{j \ne i} \left( u(x_j, t) - u(x_i, t) \right)   i = 1, 2, \ldots, n

The contraction of f - nu guarantees convergence of the whole network, which can be proved using the auxiliary system (3) with

c = [c_1, \ldots, c_n]^T   where c_i = f(x_i, t) - n u(x_i, t)
d = [d_1, \ldots, d_n]^T   where d_i = \sum_{j=1}^n u(x_j, t)

The bidirectional coupling assumption on each link is not always necessary. Consider a network with a one-way ring structure and linear diffusion coupling

\dot{x}_i = f(x_i, t) + K(x_{i-1} - x_i)   i = 1, \ldots, n

where by convention i - 1 = n for i = 1. Assume that the coupling gain K = K^T > 0 is identical for all links. Then

J_r = -\frac{1}{2} \sum_{(i,j) \in N} T_n(K) - U_n(K_0)

is negative definite. Since

\lambda_{m+1}\left( \frac{1}{2} \sum_{(i,j) \in N} T_n(K) \right) = \frac{1}{2} \lambda_{min}(K) \, \lambda_2\left( \sum_{(i,j) \in N} T_n(1) \right) = \lambda_{min}(K) \left( 1 - \cos\frac{2\pi}{n} \right)

the threshold to reach synchrony exponentially is

\lambda_{min}(K) \left( 1 - \cos\frac{2\pi}{n} \right) > \max_i \lambda_{max}(J_{is})   uniformly

Thus, Theorem 2 can be extended to networks whose links are either bidirectional with K_{ji} = K_{ij}, or unidirectional but formed as rings with K^T = K (where K is identical within the same ring but may differ between different rings).

If the coupling gain K_{ij} in (5) is only positive semi-definite, we have to add extra restrictions to the uncoupled system dynamics to guarantee globally stable convergence. Assume

K_{ijs} = \begin{bmatrix} \bar{K}_{ijs} & 0 \\ 0 & 0 \end{bmatrix}

where \bar{K}_{ijs} is positive definite and has a common dimension for all links. We can divide the uncoupled dynamics J_{is} into the form

J_{is} = \begin{bmatrix} J_{11s} & J_{12} \\ J_{12}^T & J_{22s} \end{bmatrix}_i
with each block having the same dimension as the corresponding block of K_{ijs}. A sufficient condition to guarantee globally stable convergence beyond a coupling strength threshold is that, for all i, J_{22s} is contracting and \lambda_{max}(J_{11s}) and \sigma_{max}(J_{12}) are both bounded [62].

Example 3.1: The FitzHugh-Nagumo model [8, 30, 31] is a classical mathematical model for spiking neurons, based on a simplification of the original Hodgkin-Huxley model [12]. It is given by

\dot{v} = c \left( v + w - \frac{1}{3} v^3 + I \right)
\dot{w} = -\frac{1}{c} \left( v - a + b w \right)   (8)

with 1 - \frac{2}{3} b < a < 1, \; 0 < b < 1, \; b < c^2, where v is directly related to the membrane potential, w is responsible for accommodation and refractoriness, and I corresponds to the stimulating current. Consider a diffusion-coupled network with n identical FitzHugh-Nagumo neurons

\dot{v}_i = c \left( v_i + w_i - \frac{1}{3} v_i^3 + I \right) + \sum_{j \in N_i(t)} k_{ji} (v_j - v_i)
\dot{w}_i = -\frac{1}{c} \left( v_i - a + b w_i \right)   i = 1, \ldots, n   (9)

Defining a transformation matrix \Theta = \begin{bmatrix} 1 & 0 \\ 0 & c \end{bmatrix}, which leaves the coupling gains unchanged, yields the generalized Jacobian of the uncoupled dynamics

J_i = \begin{bmatrix} c(1 - v_i^2) & 1 \\ -1 & -\frac{b}{c} \end{bmatrix}

Thus the whole network will synchronize exponentially if the coupling strengths are strong enough:

\lambda_2\left( \sum_{(i,j) \in N} T_n(k_{ij}) \right) > c

The definition of the neighbor sets N_i is quite flexible. While it may be based simply on position proximity (neighbors within a certain distance of each node), it can be chosen to reflect many other factors. Gestalt psychology [41], for instance, suggests that in human visual perception, grouping occurs not only by proximity, but also by similarity, closure, continuity, common region, and connectedness. The coupling strengths can also be specified flexibly. For instance, using Schoenberg and Micchelli's theorems on positive definite functions [24], they can be chosen as smooth functions based on sums of Gaussians.
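A simulation sketch of network (9) with all-to-all diffusive coupling of the v variables, using simple Euler integration. The neuron parameters follow the examples in the text; the gain, network size, and step size are our own choices:

```python
import numpy as np

a, b, c, I, k = 0.7, 0.8, 8.0, 1.4, 5.0   # neuron parameters as in the text
n, dt, steps = 6, 0.001, 60000
rng = np.random.default_rng(1)
v = rng.uniform(-1.0, 1.0, n)             # random initial conditions
w = rng.uniform(-1.0, 1.0, n)

for _ in range(steps):
    coupling = k * (v.sum() - n * v)      # sum_j k (v_j - v_i), all-to-all
    vdot = c * (v + w - v**3 / 3.0 + I) + coupling
    wdot = -(v - a + b * w) / c
    v, w = v + dt * vdot, w + dt * wdot

spread = v.max() - v.min()                # shrinks once the threshold is met
```

Here \lambda_2 of the all-to-all graph with gain k is nk = 30 > c = 8, so the condition of Example 3.1 is satisfied and the spread of the membrane potentials collapses while the common oscillation continues.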
3.2 Coupled Networks with Switching Topology

The above results can be extended to analyze the collective behavior of cooperating moving units with local couplings, where the network structure changes abruptly and asynchronously. Consider such a network
\dot{x}_i = f(x_i, t) + \sum_{j \in N_i(t)} u_{ji}(x_j, x_i, x, t)   i = 1, \ldots, n

where N_i(t) denotes the set of the active links associated with element i at time t. Apply partial contraction analysis to each time interval during which the topology N(t) is fixed. If

\lambda_{m+1}\left( \sum_{(i,j) \in N(t)} T_n(K_{ijs}) \right) > \max_i \lambda_{max}(J_{is})   uniformly \forall N(t)   (10)

the auxiliary system (6) is always contracting, since \delta z^T \delta z with z = [y_1, \ldots, y_n]^T is continuous in time and upper bounded by a vanishing exponential (though its time derivative can be discontinuous at the switching instants). Since the particular solution of the auxiliary system in each time interval is y_1 = \cdots = y_n, these n elements will reach agreement exponentially, as they tend to the set y_1 = \cdots = y_n, which is a constant region in state space.

3.3 Leader-Followers Network

The previous results can also be extended to analyze a coupled network with an additional leader. Consider the system

\dot{x}_0 = f(x_0, t)
\dot{x}_i = f(x_i, t) + \sum_{j \in N_i} K_{ji} (x_j - x_i) + \gamma_i K_{0i} (x_0 - x_i)   i = 1, \ldots, n

where x_0 is the state of the group leader, \gamma_i = 0 or 1, and N_i does not include the links with x_0. Since the dynamics of x_0 is independent, we can treat it as an external input to the rest of the system, whose Jacobian matrix has the symmetric part

J_s = \begin{bmatrix} J_{1s} & & \\ & \ddots & \\ & & J_{ns} \end{bmatrix} - \sum_{(i,j) \in N} T_n(K_{ijs}) - \mathrm{diag}(\gamma_1 K_{01s}, \ldots, \gamma_n K_{0ns})

The matrix

J_r = -\sum_{(i,j) \in N} T_n(K_{ijs}) - \mathrm{diag}(\gamma_1 K_{01s}, \ldots, \gamma_n K_{0ns})

is negative definite if the augmented network with n + 1 elements is connected. In fact, for all v \ne 0,

v^T J_r v = -\sum_{(i,j) \in N} (v_i - v_j)^T K_{ijs} (v_i - v_j) - \sum_{i=1}^n \gamma_i \left( v_i^T K_{0is} v_i \right) < 0

Thus the system [x_1, \ldots, x_n]^T is contracting if the coupling strengths are strong enough:

\lambda_{min}\left( \sum_{(i,j) \in N} T_n(K_{ijs}) + \mathrm{diag}(\gamma_i K_{0is}) \right) > \max_i \lambda_{max}(J_{is})   uniformly   (11)
Under this condition, it converges to the particular solution x_1 = \cdots = x_n = x_0 exponentially, regardless of the initial conditions. This result can be viewed as a generalization of Example 2.1.

Note that the condition for leader-following is that the whole group of n + 1 elements is connected. Thus the n followers x_1, \ldots, x_n could be either connected together, or there could be isolated subgroups, each connected to the leader. Also note that the network structure of a leader-followers group does not have to be fixed over time, and a result similar to that of Section 3.2 can be derived.

Example 3.2: As a very simple example, let us construct a leader-followers network similar to that in [2], but which is able to capture a "many-are-equal" moment much faster. In this version, both the leader and the followers are FitzHugh-Nagumo neurons as in (8). The leader is coupled by linear diffusion to each of the followers, with no direct coupling between the followers themselves. With a strong coupling gain, the followers synchronize only if the external inputs I are identical. We can define the system output accordingly to capture the moment when this condition becomes true, as shown in Figure 1.

Fig. 1. Simulation result of Example 3.2. The leader and seventeen followers are all FitzHugh-Nagumo neurons with parameters a = 0.7, b = 0.8, c = 8. The coupling gain between the leader and the followers is \kappa = 3, and I = 1.4 for the leader. The first plot shows \sum_i \max(0, v_i) as a function of time, and the second shows the input currents I to the followers, which vary from 0.8 to 1.3.

In fact, a neural network with a simple leader-followers structure can be used to emulate many other brain-like computations, such as k-winner-take-all [63]. In all these examples, the leading node can also be replaced by a coordinated group of local leaders.
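The leader-followers mechanism can be sketched in its simplest form: scalar states with f = 0, a constant leader, and each follower coupled only to the leader (\gamma_i = 1, no follower-follower links). The augmented graph is a star, hence connected, so the followers converge exponentially to x_0. The numbers below are arbitrary:

```python
import numpy as np

n, k, dt, steps = 5, 1.0, 0.01, 3000
x0 = 0.7                            # leader state; xdot_0 = f = 0
x = np.linspace(-1.0, 1.0, n)       # follower initial states
for _ in range(steps):
    x = x + dt * k * (x0 - x)       # gamma_i K_0i (x_0 - x_i)

err = np.max(np.abs(x - x0))        # distance to the leader's trajectory
```

After t = 30 with unit gain, the tracking error has decayed by a factor of roughly e^{-30}, far below numerical tolerance.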
Comparing conditions (7) and (11) shows that, predictably, the existence of an additional leader does not always help the followers' network to reach agreement. But it does so if

\lambda_{min}\left( \sum_{(i,j) \in N} T_n(K_{ijs}) + \mathrm{diag}(\gamma_i K_{0is}) \right) > \lambda_{m+1}\left( \sum_{(i,j) \in N} T_n(K_{ijs}) \right)

Consider for instance the case where the leader has identical connections to all the other elements, so that for all i, K_{0i} = kI with k > 0. Then

\lambda_{min}\left( \sum_{(i,j) \in N} T_n(K_{ijs}) + kI \right) = \min_{\|v\|=1} v^T \left( \sum_{(i,j) \in N} T_n(K_{ijs}) + kI \right) v = k

This means the connections between the leader and the followers do promote convergence within the followers' network if

\lambda_{m+1}\left( \sum_{(i,j) \in N} T_n(K_{ijs}) \right) < k

which is more likely to happen in a network with less connectivity. This is consistent with the simulation observations of [64], where two groups of cells are coupled together: once one group synchronizes, inputs from its cells facilitate synchronization in the other group, and the synchronized group as a whole plays the role of the leader in our analysis.

Example 3.3: To let a group synchronize faster, [64] suggests increasing its interior connection weights. In fact, similar phenomena can be predicted in a network with non-uniform connectivity, where synchronization propagates from high-density areas to low-density areas. Consider for illustration two groups of FitzHugh-Nagumo neurons, the first composed of eight neurons coupled all-to-all, surrounded by a second group of sixteen neurons coupled as a one-way ring, with every other neuron in the second group connected bilaterally to a distinct neuron of the first group. The system dynamics follows equation (9), with a coupling gain k > 0 identical throughout the whole network. Figure 2 shows simulation results, from which we can observe an initially significant phase lag between the two groups. Note that the second group alone would not synchronize without the couplings from the first group.
However, the connectivity of the followers' network always helps the following process, as can be seen by applying Weyl's Theorem [14]:

\lambda_i\left( \sum_{(i,j) \in N} T_n(K_{ijs}) + \mathrm{diag}(\gamma_i K_{0is}) \right) \ge \lambda_i\left( \mathrm{diag}(\gamma_i K_{0is}) \right)   i = 1, \ldots, n

This result can be used, e.g., to modify Example 3.2 by adding couplings between local neighbors according to similarity (see Section 5), so as to react even faster.
Fig. 2. Plots of \sum_i \max(0, v_i) for the two coupled groups of FitzHugh-Nagumo neurons. The parameters are a = 0.7, b = 0.8, c = 8, I = 1.4, \kappa = 0.5. Initial conditions are chosen randomly for each neuron.

3.4 Adaptation

When the system dynamics is unknown to a given node, synchronization can be preserved through parameter adaptation. Consider again the coupled system (4), and for simplicity diffusive couplings. Assume that a constant parameter vector a is unknown to some particular node \sigma, but estimated as \hat{a}(t), with

f(x_\sigma, \hat{a}, t) = f(x_\sigma, a, t) + W(x_\sigma, t) \, \tilde{a}

where \tilde{a} = \hat{a} - a. Using the adaptation law

\dot{\hat{a}} = P \, W^T(x_\sigma, t) \sum_{j \in N_\sigma} K_{j\sigma} (x_j - x_\sigma)

with constant P = P^T > 0, the system synchronizes asymptotically. Indeed, consider the Lyapunov-like function

V = \frac{1}{2} \left( x^T L_K x + \tilde{a}^T P^{-1} \tilde{a} \right)

where L_K is the weighted Laplacian matrix,

L_K = D K D^T = \sum_{(i,j) \in N} T_n(K_{ij})

With \tau the total number of links in the network, K \in R^{\tau \times \tau} is a block diagonal matrix whose k-th diagonal block [K]_k = K_{ij} corresponds to the coupling strength of the k-th link, which is assumed to be symmetric positive definite, and D \in R^{n \times \tau} is the incidence matrix [10] of the corresponding undirected graph. Simple calculations show
\dot{V} = x^T L_K \dot{x} + \tilde{a}^T P^{-1} \dot{\hat{a}} = x^T L_K \left( \begin{bmatrix} f(x_1, a, t) \\ \vdots \\ f(x_n, a, t) \end{bmatrix} - L_K x \right) = x^T D \left( K \Lambda - K D^T D K \right) D^T x = x^T \left( L_{K\Lambda} - L_K^2 \right) x

where

L_{K\Lambda} = D K \Lambda D^T

and \Lambda \in R^{\tau \times \tau} is a block diagonal matrix whose k-th diagonal block corresponds to the k-th oriented link (i, j):

[\Lambda]_k = \int_0^1 \frac{\partial f}{\partial x}\left( x_j + \lambda (x_i - x_j) \right) d\lambda

Under conditions similar to those of Theorem 2, it can be shown that the matrix L_{K\Lambda} - L_K^2 is negative semi-definite, and furthermore that the multiplicity of its zero eigenvalue is exactly m. Using Barbalat's lemma [44] and the boundedness of V, the first result implies that \dot{V} asymptotically converges to zero, and in turn the second result implies that the x_i synchronize. Furthermore [44], if

\exists \alpha > 0, \; T > 0, \; \forall t \ge t_0 \ge 0: \quad \int_t^{t+T} W^T(x_\sigma, r) \, W(x_\sigma, r) \, dr \ge \alpha I

then \hat{a} actually converges to a. Such is the case in a coupled oscillator network as long as oscillation is preserved. A detailed proof and discussion of adaptive networks will be presented separately.
3.5 Inhibition

The dynamics of a large network of synchronized elements, as in Theorem 2, can be completely transformed by the addition of a single inhibitory coupling link. Start for instance with the synchronized network

\dot{x}_i = f(x_i, t) + \sum_{j \in N_i} K_{ji} (x_j - x_i)   i = 1, \ldots, n

and add a single inhibitory link between two arbitrary elements a and b:

\dot{x}_a = f(x_a, t) + \sum_{j \in N_a} K_{ja} (x_j - x_a) + \bar{K} (-x_b - x_a)
\dot{x}_b = f(x_b, t) + \sum_{j \in N_b} K_{jb} (x_j - x_b) + \bar{K} (-x_a - x_b)

The symmetric part of the Jacobian matrix is

J_s = \begin{bmatrix} J_{1s} & & \\ & \ddots & \\ & & J_{ns} \end{bmatrix} - \sum_{(i,j) \in N} T_n(K_{ijs}) - \bar{T}_n(\bar{K}_s)

where \bar{T}_n(\bar{K}_s) is composed of zeros except for four identical blocks at the intersections of the a-th and b-th block rows and columns:

\bar{T}_n(\bar{K}_s) = \begin{bmatrix} \ddots & & & & \\ & \bar{K}_s & \cdots & \bar{K}_s & \\ & \vdots & & \vdots & \\ & \bar{K}_s & \cdots & \bar{K}_s & \\ & & & & \ddots \end{bmatrix}

The matrix J_r = -\sum_{(i,j) \in N} T_n(K_{ijs}) - \bar{T}_n(\bar{K}_s) is negative definite, since for all v \ne 0

v^T J_r v = -\sum_{(i,j) \in N} (v_i - v_j)^T K_{ijs} (v_i - v_j) - (v_a + v_b)^T \bar{K}_s (v_a + v_b) < 0

Thus, the network is contracting for strong enough coupling strengths. Hence, the n oscillators will be inhibited and, if the function f is autonomous, they will tend to an equilibrium. Adding more inhibitory couplings preserves the result.

Example 3.4: Consider a ring network of ten FitzHugh-Nagumo neurons with diffusion couplings. The whole network turns off immediately if we add one extra inhibitory link between any two neurons, and resumes firing if we remove the extra link, as illustrated in Figure 3.

Fig. 3. Time plots of the v_i in Example 3.4. The parameters of the FitzHugh-Nagumo neurons are a = 0.7, b = 0.8, c = 8, I = 0.8. The coupling gains are identical, \kappa = 100, and the initial conditions are set randomly. The inhibitory link is added between the first and fifth neurons at t = 100 and removed at t = 200.

Such inhibition properties may be useful in pattern recognition to achieve rapid desynchronization between different objects. They may also be used as
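For scalar states and unit gains, the effect of the inhibitory link can be seen directly on J_r: on a diffusively coupled ring, -L alone is only negative semi-definite (the consensus direction is not damped), while adding the inhibitory block \bar{T}_n makes J_r strictly negative definite. A small numerical sketch (ring size and link choice ours):

```python
import numpy as np

n, a, b = 10, 0, 4                 # ring size; inhibitory link between a and b
L = np.zeros((n, n))
for i in range(n):                 # diffusive ring Laplacian
    j = (i + 1) % n
    L[i, i] += 1.0; L[j, j] += 1.0
    L[i, j] -= 1.0; L[j, i] -= 1.0

T = np.zeros((n, n))               # the four identical entries of T_bar
T[a, a] = T[b, b] = T[a, b] = T[b, a] = 1.0

# Without the inhibitory link: largest eigenvalue is 0 (consensus direction)
max_eig_without = np.linalg.eigvalsh(-L).max()
# With it: v^T J_r v also picks up -(v_a + v_b)^2, so J_r < 0 strictly
max_eig_with = np.linalg.eigvalsh(-L - T).max()
```

The quadratic form vanishes only if all v_i are equal and v_a + v_b = 0 simultaneously, which forces v = 0 on a connected graph; hence the strict negativity.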
simplified models of minimal mechanisms for turning off unwanted synchronization, as e.g. in epileptic seizures or oscillations in internet traffic. Cascades of inhibition are common in the brain, in a way perhaps reminiscent of NAND-based logic.

3.6 Algebraic Connectivity

Condition (7), which guarantees convergence, places requirements on both the individual dynamics and the network's geometric structure. To see this in more detail, let us assume that all the links within the network are bidirectional (the corresponding graph is then called an undirected graph) with identical coupling gains K = K^T > 0. Then, according to [15],

\lambda_{m+1}\left( \sum_{(i,j) \in N} T_n(K) \right) = \lambda_2 \, \lambda_{min}(K)

where \lambda_2 is the algebraic connectivity (the smallest nonzero eigenvalue) of the standard Laplacian matrix. Denote

\bar{\lambda} = \frac{\max_i \lambda_{max}(J_{is})}{\lambda_{min}(K)}

Condition (7) can then be written \lambda_2 > \bar{\lambda} uniformly. Given a graph G of order n, there exist lower bounds on its diameter diam(G) (the maximum number of links between two distinct vertices [10]) and its mean distance \rho(G) (the average number of links between distinct vertices [29]):

diam(G) \ge \frac{4}{n \lambda_2}, \qquad \rho(G) \ge \frac{2}{(n-1) \lambda_2} + \frac{n-2}{2(n-1)}

(these bounds are most informative when \lambda_2 is small), which in turn gives lower bounds on the algebraic connectivity:

\lambda_2 \ge \frac{4}{n \, diam(G)}, \qquad \lambda_2 \ge \frac{2}{(n-1)\rho(G) - \frac{n-2}{2}}

A sufficient condition to guarantee exponential convergence within a coupled network is thus

diam(G) < \frac{4}{n \bar{\lambda}} \quad or \quad \rho(G) < \frac{2}{\bar{\lambda}(n-1)} + \frac{n-2}{2(n-1)}

Note that for a coupled network, increasing the coupling gain of an existing link or adding an extra link will both improve the convergence process. Indeed, using again Weyl's Theorem [14], the modified symmetric Jacobian J_s' satisfies

\lambda_k(J_s') \le \lambda_k(J_s)
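The graph-structure side of this condition is easy to explore numerically. The sketch below (graphs our choice) compares \lambda_2 for the structures discussed in the examples that follow: open chain, ring, star, and all-to-all graphs.

```python
import numpy as np

def lap(edges, n):
    # Unweighted Laplacian of an undirected graph
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

def lam2(edges, n):
    # Algebraic connectivity: second-smallest Laplacian eigenvalue
    return np.sort(np.linalg.eigvalsh(lap(edges, n)))[1]

n = 20
chain = [(i, i + 1) for i in range(n - 1)]
ring = chain + [(n - 1, 0)]
star = [(0, i) for i in range(1, n)]
full = [(i, j) for i in range(n) for j in range(i + 1, n)]

l_chain = lam2(chain, n)   # 2(1 - cos(pi/n)), smallest of the four
l_ring = lam2(ring, n)     # 2(1 - cos(2*pi/n)), about 4x the chain value
l_star = lam2(star, n)     # exactly 1, independent of n
l_full = lam2(full, n)     # exactly n
```

The ordering l_chain < l_ring < l_star < l_full quantifies how much easier densely or centrally connected networks are to synchronize.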
These two facts quantify intuitive results. Connecting each element to more neighbors favors group agreement, especially for large networks. Also, different coupling links make different contributions, with couplings between far-separated nodes contributing more than those between close neighbors, an instance of small-world phenomena.

Example 3.5: In [19], Kopell and Ermentrout show that closed chains of oscillators reliably synchronize with nearest-neighbor coupling, while open chains require nearest- and next-nearest-neighbor coupling. This result can be explained by expressing the partial contraction condition as

\lambda_2 > \frac{\max_i \lambda_{max}(J_{is})}{\lambda_{min}(K)}   uniformly

Assuming n extremely large, for a graph with a bidirectional open-chain structure

\lambda_2 = 2 \left( 1 - \cos\frac{\pi}{n} \right) \approx \left( \frac{\pi}{n} \right)^2

while for a graph with a bidirectional ring structure

\lambda_2 = 2 \left( 1 - \cos\frac{2\pi}{n} \right) \approx 4 \left( \frac{\pi}{n} \right)^2

The effort required to synchronize an open chain is thus several times that required for a closed one.

Example 3.6: Assuming n very large, we can compare the partial contraction thresholds for two extreme cases:

\lambda_{min}(K) \sim O(n^2) \to +\infty   for the one-way ring structure
\lambda_{min}(K) \sim O(1/n) \to 0   for the all-to-all structure

Note that \lambda_2 is equal to 1 for a star graph [5]. Much less effort is needed to synchronize such a network than a ring, even though it contains no more links, because the center node in a star network plays a global role, while communication in a ring network is completely distributed.

4 A Simple Coupled Model

In this section, we study a simplified continuous-time model of schooling or flocking, with f = 0. Consider the group

\dot{x}_i = \sum_{j \in N_i(t)} K_{ji} (x_j - x_i)   i = 1, \ldots, n   (12)

where x_i \in R^m denotes the states on which agreement is to be reached, such as heading, attitude, or velocity, and N_i(t) is defined, for instance, as the set of the nearest neighbors within a certain distance of subsystem i at the current
time t, which can change abruptly and asynchronously. Since J_{is} = 0 here, condition (10) is satisfied as soon as the network is connected. Therefore, each x_i converges exponentially to a particular solution, which in this case is the constant value \bar{x} = \frac{1}{n} \sum_{i=1}^n x_i(0). Note that in the case of heading agreement based on spatial proximity, the issue of chattering is immaterial, since switching cannot occur infinitely fast, while in the general case it can simply be avoided by using smooth transitions in time or space.

Moreover, the network (12) need not be connected at every t \ge 0. A generalized condition can be derived, which is the same as that obtained in [16] for a discrete-time model.

Lemma 2. For the network (12), \sum_{i=1}^n \|x_i\|^2 converges exponentially to its lower limit n \|\bar{x}\|^2.

Proof: Letting x_i = [x_{i1}, \ldots, x_{im}]^T, we have \sum_{i=1}^n \dot{x}_i = 0, which leads to

\sum_{i=1}^n x_i = \sum_{i=1}^n x_i(0) = n \bar{x}, \quad with \quad \sum_{i=1}^n x_{ij} = n \bar{x}_j, \; j = 1, \ldots, m

Thus, using

\sum_{i=1}^n \sum_{j=1}^m x_{ij} \bar{x}_j = \sum_{j=1}^m \bar{x}_j \sum_{i=1}^n x_{ij} = \sum_{j=1}^m n \bar{x}_j^2 = n \|\bar{x}\|^2

we obtain

\sum_{i=1}^n \|x_i - \bar{x}\|^2 + n \|\bar{x}\|^2 = \sum_{i=1}^n \|x_i\|^2 - 2 \sum_{i=1}^n \sum_{j=1}^m x_{ij} \bar{x}_j + 2 n \|\bar{x}\|^2 = \sum_{i=1}^n \|x_i\|^2   (13)

From partial contraction analysis, we know that any solution of the system (12) converges exponentially to a particular one, x_1 = \cdots = x_n = \bar{x} = \frac{1}{n} \sum_{i=1}^n x_i(0), which implies that \sum_{i=1}^n \|x_i - \bar{x}\|^2 tends to zero exponentially. Using (13) completes the proof.

We can now largely generalize the condition to reach group agreement.

Theorem 3. Consider n coupled elements with the linear protocol (12), whose neighbor relationships can change abruptly and asynchronously. Separate time into an infinite sequence of bounded intervals starting at t = 0. If the network is connected across each such interval (as in [16], this means that the union of the different graphs encountered along the interval is connected), the agreement x_1 = \cdots = x_n will be reached asymptotically.
Proof: Assume that at some time t the network is not connected, but instead is composed of k isolated subnetworks, each of which is connected and contains n_j elements, j = 1, \ldots, k. Defining z_i = x_i - \bar{x}, we get

\dot{z}_i = \sum_{j \in N_i(t)} K_{ji} (z_j - z_i)   i = 1, \ldots, n

and from Lemma 2, with \bar{z}_j = \frac{1}{n_j} \sum_{i=1}^{n_j} z_i(t) the local average over the j-th subnetwork,

\sum_{i=1}^n \|z_i\|^2 = \sum_{j=1}^k \sum_{i=1}^{n_j} \|z_i\|^2 = \sum_{j=1}^k \left( \sum_{i=1}^{n_j} \|z_i - \bar{z}_j\|^2 + n_j \|\bar{z}_j\|^2 \right)

Note that \bar{z}_j is a local agreement, as opposed to the global one \bar{x} (corresponding to z = 0). For each j, \bar{z}_j is constant as long as the current network structure remains unchanged, and \sum_{i=1}^{n_j} \|z_i - \bar{z}_j\|^2 tends to zero exponentially during this period. Thus \sum_{i=1}^n \|z_i\|^2 is non-increasing. Furthermore, the condition in Theorem 3 guarantees that \sum_{i=1}^n \|z_i\|^2 decreases across each time interval, so that it tends to its lower limit zero asymptotically.

Note that the most important fact behind Theorem 3 and its proof is that the closer the group is to a local agreement, the closer it is to the global one. This is illustrated in Figure 4 with a simple subnetwork containing two elements.

Fig. 4. A connected subnetwork containing two elements x_1 and x_2, with x_1 + x_2 = x_1(0) + x_2(0). Point O represents the global agreement and point C the local one. The initial position is point A, and the system trajectory follows the line ABC, with OC < OB < OA. The closer the group is to the local agreement, the closer it is to the global one.
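Theorem 3 can be checked in simulation. The sketch below (our own toy setup) alternates between two graphs that are each disconnected, but whose union is connected; the states still agree asymptotically, and since the couplings are antisymmetric in pairs, the agreement value is the initial average:

```python
import numpy as np

n, k, dt = 4, 1.0, 0.01
G1 = [(0, 1), (2, 3)]            # disconnected on its own
G2 = [(1, 2), (3, 0)]            # disconnected on its own; union is connected

def lap(edges):
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0; L[j, j] += 1.0
        L[i, j] -= 1.0; L[j, i] -= 1.0
    return L

x = np.array([1.0, -2.0, 3.0, 0.0])
avg = x.mean()                   # conserved quantity: sum(xdot) = 0
for step in range(4000):
    L = lap(G1 if (step // 100) % 2 == 0 else G2)   # switch every 100 steps
    x = x - dt * k * (L @ x)     # protocol (12) as xdot = -L(t) x

spread = x.max() - x.min()       # agreement reached despite the switching
```

Each interval only produces the local (per-component) agreements, but alternating the two topologies drives the group to the global agreement, exactly the mechanism in the proof above.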
Various Symmetries in Matrix Theory with Application to Modeling Dynamic Systems A Aghili Ashtiani a, P Raja b, S K Y Nikravesh a a Department of Electrical Engineering, Amirkabir University of Technology,
More informationBOOLEAN CONSENSUS FOR SOCIETIES OF ROBOTS
Workshop on New frontiers of Robotics  Interdep. Research Center E. Piaggio June 222, 22  Pisa (Italy) BOOLEAN CONSENSUS FOR SOCIETIES OF ROBOTS Adriano Fagiolini DIEETCAM, College of Engineering, University
More informationON THE KWINNERSTAKEALL NETWORK
634 ON THE KWINNERSTAKEALL NETWORK E. Majani Jet Propulsion Laboratory California Institute of Technology R. Erlanson, Y. AbuMostafa Department of Electrical Engineering California Institute of Technology
More informationUNIT 2 MATRICES  I 2.0 INTRODUCTION. Structure
UNIT 2 MATRICES  I Matrices  I Structure 2.0 Introduction 2.1 Objectives 2.2 Matrices 2.3 Operation on Matrices 2.4 Invertible Matrices 2.5 Systems of Linear Equations 2.6 Answers to Check Your Progress
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
More informationTHREE DIMENSIONAL GEOMETRY
Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,
More informationNonlinear Systems of Ordinary Differential Equations
Differential Equations Massoud Malek Nonlinear Systems of Ordinary Differential Equations Dynamical System. A dynamical system has a state determined by a collection of real numbers, or more generally
More informationDecentralized Method for Traffic Monitoring
Decentralized Method for Traffic Monitoring Guillaume Sartoretti 1,2, JeanLuc Falcone 1, Bastien Chopard 1, and Martin Gander 2 1 Computer Science Department 2 Department of Mathematics, University of
More informationCONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation
Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in
More informationSolving Simultaneous Equations and Matrices
Solving Simultaneous Equations and Matrices The following represents a systematic investigation for the steps used to solve two simultaneous linear equations in two unknowns. The motivation for considering
More informationState of Stress at Point
State of Stress at Point Einstein Notation The basic idea of Einstein notation is that a covector and a vector can form a scalar: This is typically written as an explicit sum: According to this convention,
More informationStationary random graphs on Z with prescribed iid degrees and finite mean connections
Stationary random graphs on Z with prescribed iid degrees and finite mean connections Maria Deijfen Johan Jonasson February 2006 Abstract Let F be a probability distribution with support on the nonnegative
More informationMathematics Course 111: Algebra I Part IV: Vector Spaces
Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 19967 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are
More informationIntroduction to Machine Learning and Data Mining. Prof. Dr. Igor Trajkovski trajkovski@nyus.edu.mk
Introduction to Machine Learning and Data Mining Prof. Dr. Igor Trakovski trakovski@nyus.edu.mk Neural Networks 2 Neural Networks Analogy to biological neural systems, the most robust learning systems
More informationMATH 304 Linear Algebra Lecture 8: Inverse matrix (continued). Elementary matrices. Transpose of a matrix.
MATH 304 Linear Algebra Lecture 8: Inverse matrix (continued). Elementary matrices. Transpose of a matrix. Inverse matrix Definition. Let A be an n n matrix. The inverse of A is an n n matrix, denoted
More informationGenerating Valid 4 4 Correlation Matrices
Applied Mathematics ENotes, 7(2007), 5359 c ISSN 16072510 Available free at mirror sites of http://www.math.nthu.edu.tw/ amen/ Generating Valid 4 4 Correlation Matrices Mark Budden, Paul Hadavas, Lorrie
More informationDecember 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS
December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in twodimensional space (1) 2x y = 3 describes a line in twodimensional space The coefficients of x and y in the equation
More informationComponent Ordering in Independent Component Analysis Based on Data Power
Component Ordering in Independent Component Analysis Based on Data Power Anne Hendrikse Raymond Veldhuis University of Twente University of Twente Fac. EEMCS, Signals and Systems Group Fac. EEMCS, Signals
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number
More informationMATH10212 Linear Algebra. Systems of Linear Equations. Definition. An ndimensional vector is a row or a column of n numbers (or letters): a 1.
MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0534405967. Systems of Linear Equations Definition. An ndimensional vector is a row or a column
More informationLabeling outerplanar graphs with maximum degree three
Labeling outerplanar graphs with maximum degree three Xiangwen Li 1 and Sanming Zhou 2 1 Department of Mathematics Huazhong Normal University, Wuhan 430079, China 2 Department of Mathematics and Statistics
More informationOn the DStability of Linear and Nonlinear Positive Switched Systems
On the DStability of Linear and Nonlinear Positive Switched Systems V. S. Bokharaie, O. Mason and F. Wirth Abstract We present a number of results on Dstability of positive switched systems. Different
More informationUSING SPECTRAL RADIUS RATIO FOR NODE DEGREE TO ANALYZE THE EVOLUTION OF SCALE FREE NETWORKS AND SMALLWORLD NETWORKS
USING SPECTRAL RADIUS RATIO FOR NODE DEGREE TO ANALYZE THE EVOLUTION OF SCALE FREE NETWORKS AND SMALLWORLD NETWORKS Natarajan Meghanathan Jackson State University, 1400 Lynch St, Jackson, MS, USA natarajan.meghanathan@jsums.edu
More informationTwo classes of ternary codes and their weight distributions
Two classes of ternary codes and their weight distributions Cunsheng Ding, Torleiv Kløve, and Francesco Sica Abstract In this paper we describe two classes of ternary codes, determine their minimum weight
More information(Quasi)Newton methods
(Quasi)Newton methods 1 Introduction 1.1 Newton method Newton method is a method to find the zeros of a differentiable nonlinear function g, x such that g(x) = 0, where g : R n R n. Given a starting
More informationOPTIMAL DESIGN OF DISTRIBUTED SENSOR NETWORKS FOR FIELD RECONSTRUCTION
OPTIMAL DESIGN OF DISTRIBUTED SENSOR NETWORKS FOR FIELD RECONSTRUCTION Sérgio Pequito, Stephen Kruzick, Soummya Kar, José M. F. Moura, A. Pedro Aguiar Department of Electrical and Computer Engineering
More informationAN ALGORITHM FOR DETERMINING WHETHER A GIVEN BINARY MATROID IS GRAPHIC
AN ALGORITHM FOR DETERMINING WHETHER A GIVEN BINARY MATROID IS GRAPHIC W. T. TUTTE. Introduction. In a recent series of papers [l4] on graphs and matroids I used definitions equivalent to the following.
More informationLecture 13 Linear quadratic Lyapunov theory
EE363 Winter 289 Lecture 13 Linear quadratic Lyapunov theory the Lyapunov equation Lyapunov stability conditions the Lyapunov operator and integral evaluating quadratic integrals analysis of ARE discretetime
More informationTHE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS
THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS KEITH CONRAD 1. Introduction The Fundamental Theorem of Algebra says every nonconstant polynomial with complex coefficients can be factored into linear
More informationThe QOOL Algorithm for fast Online Optimization of Multiple Degree of Freedom Robot Locomotion
The QOOL Algorithm for fast Online Optimization of Multiple Degree of Freedom Robot Locomotion Daniel Marbach January 31th, 2005 Swiss Federal Institute of Technology at Lausanne Daniel.Marbach@epfl.ch
More informationLecture 7: Finding Lyapunov Functions 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 7: Finding Lyapunov Functions 1
More informationCORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREERREADY FOUNDATIONS IN ALGEBRA
We Can Early Learning Curriculum PreK Grades 8 12 INSIDE ALGEBRA, GRADES 8 12 CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREERREADY FOUNDATIONS IN ALGEBRA April 2016 www.voyagersopris.com Mathematical
More informationSIMPLIFIED PERFORMANCE MODEL FOR HYBRID WIND DIESEL SYSTEMS. J. F. MANWELL, J. G. McGOWAN and U. ABDULWAHID
SIMPLIFIED PERFORMANCE MODEL FOR HYBRID WIND DIESEL SYSTEMS J. F. MANWELL, J. G. McGOWAN and U. ABDULWAHID Renewable Energy Laboratory Department of Mechanical and Industrial Engineering University of
More informationA New Natureinspired Algorithm for Load Balancing
A New Natureinspired Algorithm for Load Balancing Xiang Feng East China University of Science and Technology Shanghai, China 200237 Email: xfeng{@ecusteducn, @cshkuhk} Francis CM Lau The University of
More informationThe Dirichlet Unit Theorem
Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if
More informationChapter 1  Matrices & Determinants
Chapter 1  Matrices & Determinants Arthur Cayley (August 16, 1821  January 26, 1895) was a British Mathematician and Founder of the Modern British School of Pure Mathematics. As a child, Cayley enjoyed
More informationThe Geometry of Graphs
The Geometry of Graphs Paul Horn Department of Mathematics University of Denver May 21, 2016 Graphs Ultimately, I want to understand graphs: Collections of vertices and edges. Graphs Ultimately, I want
More informationN 1. (q k+1 q k ) 2 + α 3. k=0
Teoretisk Fysik Handin problem B, SI1142, Spring 2010 In 1955 Fermi, Pasta and Ulam 1 numerically studied a simple model for a one dimensional chain of nonlinear oscillators to see how the energy distribution
More informationINTRODUCTION TO NEURAL NETWORKS
INTRODUCTION TO NEURAL NETWORKS Pictures are taken from http://www.cs.cmu.edu/~tom/mlbookchapterslides.html http://research.microsoft.com/~cmbishop/prml/index.htm By Nobel Khandaker Neural Networks An
More informationComputer Graphics. Geometric Modeling. Page 1. Copyright Gotsman, Elber, Barequet, Karni, Sheffer Computer Science  Technion. An Example.
An Example 2 3 4 Outline Objective: Develop methods and algorithms to mathematically model shape of real world objects Categories: WireFrame Representation Object is represented as as a set of points
More informationLab 6: Bifurcation diagram: stopping spiking neurons with a single pulse
Lab 6: Bifurcation diagram: stopping spiking neurons with a single pulse The qualitative behaviors of a dynamical system can change when parameters are changed. For example, a stable fixedpoint can become
More informationt := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d).
1. Line Search Methods Let f : R n R be given and suppose that x c is our current best estimate of a solution to P min x R nf(x). A standard method for improving the estimate x c is to choose a direction
More informationFigure 2.1: Center of mass of four points.
Chapter 2 Bézier curves are named after their inventor, Dr. Pierre Bézier. Bézier was an engineer with the Renault car company and set out in the early 196 s to develop a curve formulation which would
More informationMath 315: Linear Algebra Solutions to Midterm Exam I
Math 35: Linear Algebra s to Midterm Exam I # Consider the following two systems of linear equations (I) ax + by = k cx + dy = l (II) ax + by = 0 cx + dy = 0 (a) Prove: If x = x, y = y and x = x 2, y =
More informationLecture 2 Linear functions and examples
EE263 Autumn 200708 Stephen Boyd Lecture 2 Linear functions and examples linear equations and functions engineering examples interpretations 2 1 Linear equations consider system of linear equations y
More informationApplications to Data Smoothing and Image Processing I
Applications to Data Smoothing and Image Processing I MA 348 Kurt Bryan Signals and Images Let t denote time and consider a signal a(t) on some time interval, say t. We ll assume that the signal a(t) is
More informationPhysics 9e/Cutnell. correlated to the. College Board AP Physics 1 Course Objectives
Physics 9e/Cutnell correlated to the College Board AP Physics 1 Course Objectives Big Idea 1: Objects and systems have properties such as mass and charge. Systems may have internal structure. Enduring
More informationStochastic Inventory Control
Chapter 3 Stochastic Inventory Control 1 In this chapter, we consider in much greater details certain dynamic inventory control problems of the type already encountered in section 1.3. In addition to the
More informationNetwork (Tree) Topology Inference Based on Prüfer Sequence
Network (Tree) Topology Inference Based on Prüfer Sequence C. Vanniarajan and Kamala Krithivasan Department of Computer Science and Engineering Indian Institute of Technology Madras Chennai 600036 vanniarajanc@hcl.in,
More informationNotes on Symmetric Matrices
CPSC 536N: Randomized Algorithms 201112 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.
More information6.852: Distributed Algorithms Fall, 2009. Class 2
.8: Distributed Algorithms Fall, 009 Class Today s plan Leader election in a synchronous ring: Lower bound for comparisonbased algorithms. Basic computation in general synchronous networks: Leader election
More informationManipulator Kinematics. Prof. Matthew Spenko MMAE 540: Introduction to Robotics Illinois Institute of Technology
Manipulator Kinematics Prof. Matthew Spenko MMAE 540: Introduction to Robotics Illinois Institute of Technology Manipulator Kinematics Forward and Inverse Kinematics 2D Manipulator Forward Kinematics Forward
More informationThe Solution of Linear Simultaneous Equations
Appendix A The Solution of Linear Simultaneous Equations Circuit analysis frequently involves the solution of linear simultaneous equations. Our purpose here is to review the use of determinants to solve
More information3. Reaction Diffusion Equations Consider the following ODE model for population growth
3. Reaction Diffusion Equations Consider the following ODE model for population growth u t a u t u t, u 0 u 0 where u t denotes the population size at time t, and a u plays the role of the population dependent
More informationMetric Spaces. Chapter 7. 7.1. Metrics
Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some
More informationWeakly Secure Network Coding
Weakly Secure Network Coding Kapil Bhattad, Student Member, IEEE and Krishna R. Narayanan, Member, IEEE Department of Electrical Engineering, Texas A&M University, College Station, USA Abstract In this
More information3. Interpolation. Closing the Gaps of Discretization... Beyond Polynomials
3. Interpolation Closing the Gaps of Discretization... Beyond Polynomials Closing the Gaps of Discretization... Beyond Polynomials, December 19, 2012 1 3.3. Polynomial Splines Idea of Polynomial Splines
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 10 Boundary Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at UrbanaChampaign
More informationImpact of Remote Control Failure on Power System Restoration Time
Impact of Remote Control Failure on Power System Restoration Time Fredrik Edström School of Electrical Engineering Royal Institute of Technology Stockholm, Sweden Email: fredrik.edstrom@ee.kth.se Lennart
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationRow Ideals and Fibers of Morphisms
Michigan Math. J. 57 (2008) Row Ideals and Fibers of Morphisms David Eisenbud & Bernd Ulrich Affectionately dedicated to Mel Hochster, who has been an inspiration to us for many years, on the occasion
More informationSolving Nonlinear Equations Using Recurrent Neural Networks
Solving Nonlinear Equations Using Recurrent Neural Networks Karl Mathia and Richard Saeks, Ph.D. Accurate Automation Corporation 71 Shallowford Road Chattanooga, Tennessee 37421 Abstract A class of recurrent
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationBy choosing to view this document, you agree to all provisions of the copyright laws protecting it.
This material is posted here with permission of the IEEE Such permission of the IEEE does not in any way imply IEEE endorsement of any of Helsinki University of Technology's products or services Internal
More informationLinearQuadratic Optimal Controller 10.3 Optimal Linear Control Systems
LinearQuadratic Optimal Controller 10.3 Optimal Linear Control Systems In Chapters 8 and 9 of this book we have designed dynamic controllers such that the closedloop systems display the desired transient
More informationA TUTORIAL. BY: Negin Yousefpour PhD Student Civil Engineering Department TEXAS A&M UNIVERSITY
ARTIFICIAL NEURAL NETWORKS: A TUTORIAL BY: Negin Yousefpour PhD Student Civil Engineering Department TEXAS A&M UNIVERSITY Contents Introduction Origin Of Neural Network Biological Neural Networks ANN Overview
More informationEXIT TIME PROBLEMS AND ESCAPE FROM A POTENTIAL WELL
EXIT TIME PROBLEMS AND ESCAPE FROM A POTENTIAL WELL Exit Time problems and Escape from a Potential Well Escape From a Potential Well There are many systems in physics, chemistry and biology that exist
More informationNEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS
NEW YORK STATE TEACHER CERTIFICATION EXAMINATIONS TEST DESIGN AND FRAMEWORK September 2014 Authorized for Distribution by the New York State Education Department This test design and framework document
More informationSimilarity and Diagonalization. Similar Matrices
MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that
More informationFIELDSMITACS Conference. on the Mathematics of Medical Imaging. Photoacoustic and Thermoacoustic Tomography with a variable sound speed
FIELDSMITACS Conference on the Mathematics of Medical Imaging Photoacoustic and Thermoacoustic Tomography with a variable sound speed Gunther Uhlmann UC Irvine & University of Washington Toronto, Canada,
More informationSection 1.1. Introduction to R n
The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to
More informationEigenvalues, Eigenvectors, Matrix Factoring, and Principal Components
Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components The eigenvalues and eigenvectors of a square matrix play a key role in some important operations in statistics. In particular, they
More informationDecisionmaking with the AHP: Why is the principal eigenvector necessary
European Journal of Operational Research 145 (2003) 85 91 Decision Aiding Decisionmaking with the AHP: Why is the principal eigenvector necessary Thomas L. Saaty * University of Pittsburgh, Pittsburgh,
More informationUsing chaotic artificial neural networks to model memory in the brain. 1. Introduction
Using chaotic artificial neural networks to model memory in the brain Zainab Aram a, Sajad Jafari a,*, Jun Ma b, J. C. Sprott c a Biomedical Engineering Department, Amirkabir University of Technology,
More information1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most
More informationMasters research projects. 1. Adapting Granger causality for use on EEG data.
Masters research projects 1. Adapting Granger causality for use on EEG data. Background. Granger causality is a concept introduced in the field of economy to determine which variables influence, or cause,
More informationMathematics of Cryptography
CHAPTER 2 Mathematics of Cryptography Part I: Modular Arithmetic, Congruence, and Matrices Objectives This chapter is intended to prepare the reader for the next few chapters in cryptography. The chapter
More informationCompact Representations and Approximations for Compuation in Games
Compact Representations and Approximations for Compuation in Games Kevin Swersky April 23, 2008 Abstract Compact representations have recently been developed as a way of both encoding the strategic interactions
More informationOPTIMAl PREMIUM CONTROl IN A NONliFE INSURANCE BUSINESS
ONDERZOEKSRAPPORT NR 8904 OPTIMAl PREMIUM CONTROl IN A NONliFE INSURANCE BUSINESS BY M. VANDEBROEK & J. DHAENE D/1989/2376/5 1 IN A OPTIMAl PREMIUM CONTROl NONliFE INSURANCE BUSINESS By Martina Vandebroek
More informationVisualization of General Defined Space Data
International Journal of Computer Graphics & Animation (IJCGA) Vol.3, No.4, October 013 Visualization of General Defined Space Data John R Rankin La Trobe University, Australia Abstract A new algorithm
More information3. INNER PRODUCT SPACES
. INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.
More informationArtificial Neural Networks and Support Vector Machines. CS 486/686: Introduction to Artificial Intelligence
Artificial Neural Networks and Support Vector Machines CS 486/686: Introduction to Artificial Intelligence 1 Outline What is a Neural Network?  Perceptron learners  Multilayer networks What is a Support
More informationSolving polynomial least squares problems via semidefinite programming relaxations
Solving polynomial least squares problems via semidefinite programming relaxations Sunyoung Kim and Masakazu Kojima August 2007, revised in November, 2007 Abstract. A polynomial optimization problem whose
More informationCompetitive Analysis of On line Randomized Call Control in Cellular Networks
Competitive Analysis of On line Randomized Call Control in Cellular Networks Ioannis Caragiannis Christos Kaklamanis Evi Papaioannou Abstract In this paper we address an important communication issue arising
More information10. Graph Matrices Incidence Matrix
10 Graph Matrices Since a graph is completely determined by specifying either its adjacency structure or its incidence structure, these specifications provide far more efficient ways of representing a
More informationAnalysis of an Artificial Hormone System (Extended abstract)
c 2013. This is the author s version of the work. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purpose or for creating
More informationAdaptive Online Gradient Descent
Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650
More informationPUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include 2 + 5.
PUTNAM TRAINING POLYNOMIALS (Last updated: November 17, 2015) Remark. This is a list of exercises on polynomials. Miguel A. Lerma Exercises 1. Find a polynomial with integral coefficients whose zeros include
More information