Cooperative Object Tracking for Many-on-Many Engagement

Pradeep Bhatta and Michael A. Paluszek
Princeton Satellite Systems, 33 Witherspoon Street, Princeton, NJ, USA

Further author information: send correspondence to P. Bhatta, e-mail pradeep@psatellite.com, telephone 1 609 279 9606.

Sensors and Systems for Space Applications III, edited by Joseph L. Cox and Pejmun Motaghedi, Proc. of SPIE Vol. 7330, 73300J (2009). doi: 10.1117/12.819532

ABSTRACT

This paper presents simulation results of nonlinear filtering algorithms applied to the cooperative object tracking problem. Cooperative tracking refers to observing an object from multiple mobile sensor platforms that communicate with each other, either directly or through a central node. Inter-agent communication also enables cooperative guidance, which can be used to achieve agent formation configurations advantageous to object tracking.

Keywords: Cooperative object tracking, IR seeker, nonlinear filters, formation control

1. INTRODUCTION

In this paper we consider the application of tracking a dynamic object using cooperating mobile sensor platforms. Object tracking is a challenging problem in many applications because of sensor range and accuracy limitations. One approach to this problem is to fuse information from multiple sensors observing the object. Many aerospace applications involve mobile sensors, which provide additional degrees of freedom for positioning the sensors for optimal information collection. The work discussed in this paper is motivated by the problem of determining how sensor characteristics influence desirable sensor formation configurations, and how such formations can be implemented.

Our analysis treats the object dynamics as unknown in general. The positions of the sensor platforms are typically known to very good accuracy, and for the purpose of this study are taken as given without any uncertainty; sensor measurement uncertainty, however, is modeled. Object position estimates are computed using an unscented Kalman filter, a nonlinear estimation algorithm. Tracking performance is compared for various relative configurations of the sensor platforms with respect to the object, and formation regulation methods for guiding a generic sensor platform group to desired configurations are outlined.

We present the system dynamics under consideration in Section 2. In Section 3 we present the measurement equations used in the simulations. We briefly summarize nonlinear estimation using unscented Kalman filters in Section 4. We compare simulation results for various sensor group-object relative configurations in Section 5. In Section 6 we outline formation regulation methods for implementing cooperative estimation. Finally, we present concluding remarks in Section 7.

2. SYSTEM DYNAMICS

Figure 1 shows an earth-centered inertial reference frame represented by axes (X, Y, Z). Vector $\mathbf{R}$ represents the position of an agent (either an object or a mobile sensor platform). Vectors $\dot{\mathbf{R}}$ and $\ddot{\mathbf{R}}$ are the corresponding velocity and acceleration:

\[
\mathbf{R} = r\,\hat{e}_r, \tag{1}
\]
\[
\dot{\mathbf{R}} = v_r\,\hat{e}_r + v_\theta\,\hat{e}_\theta + v_\phi\,\hat{e}_\phi
= \dot{r}\,\hat{e}_r + r\dot{\theta}\cos\phi\,\hat{e}_\theta + r\dot{\phi}\,\hat{e}_\phi, \tag{2}
\]
\[
\ddot{\mathbf{R}} = a_r\,\hat{e}_r + a_\theta\,\hat{e}_\theta + a_\phi\,\hat{e}_\phi, \tag{3}
\]

Figure 1. Earth-centered inertial reference frame (X, Y, Z) with spherical unit vectors $\hat{e}_r$, $\hat{e}_\theta$, $\hat{e}_\phi$.

where

\[
a_r = \ddot{r} - r\dot{\phi}^2 - r\dot{\theta}^2\cos^2\phi, \tag{4}
\]
\[
a_\theta = r\ddot{\theta}\cos\phi + 2\dot{r}\dot{\theta}\cos\phi - 2r\dot{\phi}\dot{\theta}\sin\phi, \tag{5}
\]
\[
a_\phi = r\ddot{\phi} + 2\dot{r}\dot{\phi} + r\dot{\theta}^2\cos\phi\sin\phi. \tag{6}
\]

Equations (4)-(6) represent the dynamic equations, which can be expressed in the following state-space form:

\[
\dot{x} = f(x) + G\,u(x) + d, \tag{7}
\]

where $x = (r, \theta, \phi, \dot{r}, \dot{\theta}, \dot{\phi})$ is the state vector, and $f$ and $G$ are given by

\[
f = \begin{bmatrix}
\dot{r} \\ \dot{\theta} \\ \dot{\phi} \\
r\dot{\theta}^2\cos^2\phi + r\dot{\phi}^2 \\
\dfrac{1}{r\cos\phi}\left(-2\dot{r}\dot{\theta}\cos\phi + 2r\dot{\phi}\dot{\theta}\sin\phi\right) \\
\dfrac{1}{r}\left(-2\dot{r}\dot{\phi} - r\dot{\theta}^2\cos\phi\sin\phi\right)
\end{bmatrix},
\qquad
G = \begin{bmatrix} 0_{3\times 3} \\ I_{3\times 3} \end{bmatrix}. \tag{8}
\]

$u = (u_1, u_2, u_3)$ is the acceleration vector:

\[
u = \begin{bmatrix} a_r \\ a_\theta \\ a_\phi \end{bmatrix} = a_G + a_D + a_C, \tag{9}
\]

where $a_G = -(\mu/r^2)\,\hat{e}_r$ is the gravity acceleration, $a_D = -\beta_0\,\rho\,\dot{\mathbf{R}}$ is a generic (atmospheric) drag term [1], $a_C$ is either an unknown maneuver (for objects) or a guidance (for sensors) acceleration, $\mu$ is the gravitational parameter, $\beta_0$ is the ballistic coefficient, $\rho = \exp(-(r - r_0)/H_0)$ is the air density, $r_0$ is the radius of the Earth, and $H_0$ is a coefficient. Vector $d$ contains unknown disturbances.
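As a concrete illustration of how this model might be coded, the sketch below evaluates the state derivative of Eq. (7) by solving Eqs. (4)-(6) for the second derivatives, given a total commanded acceleration assembled as in Eq. (9). The function name, default constants and the exponential-atmosphere form are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2 (standard value, not from the paper)

def dynamics_rhs(x, a_c=None, beta0=0.0, r0=6.371e6, H0=7.5e3):
    """State derivative for x = (r, theta, phi, rdot, thetadot, phidot), Eq. (7).

    a_c   : maneuver (object) or guidance (sensor) acceleration in (e_r, e_theta, e_phi)
    beta0 : ballistic coefficient of the generic drag term
    """
    a_c = np.zeros(3) if a_c is None else np.asarray(a_c)
    r, th, ph, rd, thd, phd = x

    # Total acceleration of Eq. (9): gravity + drag + maneuver/guidance
    a_g = np.array([-MU / r**2, 0.0, 0.0])                 # gravity along -e_r
    v_sph = np.array([rd, r * thd * np.cos(ph), r * phd])  # velocity components, Eq. (2)
    rho = np.exp(-(r - r0) / H0)                           # exponential atmosphere (assumed form)
    u = a_g - beta0 * rho * v_sph + a_c

    # Solve Eqs. (4)-(6) for the second derivatives
    rdd = u[0] + r * phd**2 + r * thd**2 * np.cos(ph)**2
    thdd = (u[1] - 2.0 * rd * thd * np.cos(ph) + 2.0 * r * phd * thd * np.sin(ph)) / (r * np.cos(ph))
    phdd = (u[2] - 2.0 * rd * phd - r * thd**2 * np.cos(ph) * np.sin(ph)) / r

    return np.array([rd, thd, phd, rdd, thdd, phdd])
```

Integrating this right-hand side with a fixed-step scheme (for example fourth-order Runge-Kutta) over one filter time-step provides one way to realize the discrete-time propagation map used by the unscented Kalman filter of Section 4.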

3. MEASUREMENT MODELS

We consider mobile sensor platforms equipped with electro-optical sensors measuring line-of-sight angles to objects. Each sensor platform measures the relative azimuth and elevation to each object in its field of view. In terms of the Cartesian position coordinates $(x, y, z) = (r\cos\phi\cos\theta,\; r\cos\phi\sin\theta,\; r\sin\phi)$, the measured elevation and azimuth angles are given by

\[
\phi = \arcsin\!\left(\frac{z_t - z_s}{\sqrt{(x_t - x_s)^2 + (y_t - y_s)^2 + (z_t - z_s)^2}}\right) + e_\phi, \tag{10}
\]
\[
\theta = \arctan\!\left(\frac{y_t - y_s}{x_t - x_s}\right) + e_\theta, \tag{11}
\]

where the subscripts $t$ and $s$ refer to the object and the sensor platform, respectively. The terms $e_\phi$ and $e_\theta$ are Gaussian white noise signals modeling measurement uncertainty.

Imager optical models may be easily incorporated in the estimation process when nonlinear filtering methods (such as the unscented Kalman filter discussed in the next section) are used. The lowest-order approximation to the imager optical model is a pinhole camera model. The imager model returns pixel coordinates $(\epsilon_x, \epsilon_y)$ as a function of the effective focal length $f$ and the relative position coordinates:

\[
\begin{bmatrix} \epsilon_x \\ \epsilon_y \end{bmatrix}
= \frac{f}{\sqrt{(x_t - x_s)^2 + (y_t - y_s)^2 + (z_t - z_s)^2}}
\begin{bmatrix} x_t - x_s \\ y_t - y_s \end{bmatrix}
+ \begin{bmatrix} e_{\epsilon_x} \\ e_{\epsilon_y} \end{bmatrix}, \tag{12}
\]

where $e_{\epsilon_x}$ and $e_{\epsilon_y}$ are Gaussian white noise terms abstracting all noise sources of the imaging system.
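The line-of-sight measurements of Eqs. (10)-(11) translate directly into a measurement function for the filter. The sketch below is a minimal version under stated assumptions: the helper names are hypothetical, arctan2 is used in place of the plain arctangent of Eq. (11) to avoid the quadrant ambiguity, and the stacking of all platforms into one measurement vector reflects the cooperative-fusion setup used in the simulations of Section 5.

```python
import numpy as np

def spherical_to_cartesian(r, theta, phi):
    """(x, y, z) = (r cos(phi) cos(theta), r cos(phi) sin(theta), r sin(phi))."""
    return np.array([r * np.cos(phi) * np.cos(theta),
                     r * np.cos(phi) * np.sin(theta),
                     r * np.sin(phi)])

def los_measurement(x_object, p_sensor):
    """Noise-free azimuth/elevation of the object seen from one sensor, Eqs. (10)-(11)."""
    d = spherical_to_cartesian(*x_object[:3]) - p_sensor
    elevation = np.arcsin(d[2] / np.linalg.norm(d))  # Eq. (10)
    azimuth = np.arctan2(d[1], d[0])                 # Eq. (11), with a quadrant-safe arctangent
    return np.array([azimuth, elevation])

def stacked_measurement_model(sensor_positions, sigma=0.1):
    """Stack the angle pairs from every platform into one measurement vector,
    so that all sensors are fused in a single filter update."""
    def H(x):
        return np.concatenate([los_measurement(x, p) for p in sensor_positions])
    # Independent Gaussian angle noise (e_theta, e_phi) of standard deviation sigma on every channel
    Rn = (sigma**2) * np.eye(2 * len(sensor_positions))
    return H, Rn
```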

4. UNSCENTED KALMAN FILTERS

The unscented Kalman filter (UKF) [2-4] provides a versatile method for state and parameter estimation of nonlinear systems. The UKF removes some of the shortcomings of the extended Kalman filter (EKF), which has been the most commonly used estimation method for nonlinear systems, by applying the unscented transformation (UT). Unlike the EKF, the UKF does not require computation of derivatives of either the state equations or the measurement equations. Instead of propagating just the state, the filter propagates a set of sample points, called sigma points, determined from the a priori mean and covariance of the state. The sigma points undergo the unscented transformation, and the updated mean and covariance of the state are then determined from the transformed sigma points.

The implementation of the UKF for a nonlinear system follows a systematic procedure, as described in [4]. The filter is initialized with the following state estimate and covariance:

\[
\hat{x}_0 = E[x_0], \qquad P_0 = E[(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T]. \tag{13}
\]

At any time-step $k$, the $2L+1$ a priori sigma points are

\[
\Xi_{k-1} = \left[\; \hat{x}_{k-1} \quad \hat{x}_{k-1} + \gamma\sqrt{P_{k-1}} \quad \hat{x}_{k-1} - \gamma\sqrt{P_{k-1}} \;\right]. \tag{14}
\]

The next step in implementing the UKF involves propagating all the sigma points (including the state vector):

\[
\Xi_{k|k-1} = F[\Xi_{k-1}]. \tag{15}
\]

In the above equation $F$ represents the discrete-time mapping corresponding to propagation of the state vector and sigma points through one time-step (from $k-1$ to $k$). $\Xi_{k|k-1}$ contains the transformed sigma points, which are used to compute the a priori mean and covariance as follows:

\[
\hat{x}_k^- = \sum_{i=0}^{2L} W_i^{(m)}\, \Xi_{i,k|k-1}, \tag{16}
\]
\[
P_k^- = \sum_{i=0}^{2L} W_i^{(c)}\, [\Xi_{i,k|k-1} - \hat{x}_k^-][\Xi_{i,k|k-1} - \hat{x}_k^-]^T + R_v, \tag{17}
\]

where $R_v$ is the process noise covariance and $W_i^{(m)}$ and $W_i^{(c)}$ are weights given by

\[
W_0^{(m)} = \frac{\lambda}{L+\lambda}, \tag{18}
\]
\[
W_0^{(c)} = \frac{\lambda}{L+\lambda} + 1 - \alpha^2 + \beta, \tag{19}
\]
\[
W_i^{(m)} = W_i^{(c)} = \frac{1}{2(L+\lambda)}, \qquad i = 1, \ldots, 2L, \tag{20}
\]

and

\[
\gamma = \sqrt{L+\lambda}, \qquad \lambda = \alpha^2(L+\kappa) - L. \tag{21}
\]

In the above equations $\alpha$, $\beta$ and $\kappa$ are adjustable parameters of the filter. The parameter $\alpha$ determines the spread of the sigma points around the state vector, and is usually set in the range $10^{-4} \le \alpha \le 1$. The parameter $\kappa$ also influences scaling, but is normally set to 0 for state estimation. The parameter $\beta$ incorporates prior knowledge of the distribution of $x$; for Gaussian distributions, $\beta$ is set to 2.

The sigma points corresponding to the a priori state and covariance estimates are computed next:

\[
\Xi_{k|k-1} = \left[\; \hat{x}_k^- \quad \hat{x}_k^- + \gamma\sqrt{P_k^-} \quad \hat{x}_k^- - \gamma\sqrt{P_k^-} \;\right]. \tag{22}
\]

The estimated measurement matrix is computed by transforming the sigma points using the nonlinear measurement model:

\[
\Upsilon_{k|k-1} = H[\Xi_{k|k-1}]. \tag{23}
\]

The mean measurement $\hat{y}_k^-$, the measurement covariance $P_{y_k y_k}$, and the cross-correlation covariance $P_{x_k y_k}$ are calculated from the statistics of the transformed sigma points:

\[
\hat{y}_k^- = \sum_{i=0}^{2L} W_i^{(m)}\, \Upsilon_{i,k|k-1}, \tag{24}
\]
\[
P_{y_k y_k} = \sum_{i=0}^{2L} W_i^{(c)}\, [\Upsilon_{i,k|k-1} - \hat{y}_k^-][\Upsilon_{i,k|k-1} - \hat{y}_k^-]^T + R_n, \tag{25}
\]
\[
P_{x_k y_k} = \sum_{i=0}^{2L} W_i^{(c)}\, [\Xi_{i,k|k-1} - \hat{x}_k^-][\Upsilon_{i,k|k-1} - \hat{y}_k^-]^T, \tag{26}
\]

where $R_n$ is the measurement covariance matrix. The Kalman gain matrix is

\[
K_{x_k} = P_{x_k y_k} P_{y_k y_k}^{-1}. \tag{27}
\]

Finally, the measurement update equations are used to determine the mean $\hat{x}_k$ and the covariance $P_{x_k}$ of the filtered state:

\[
\hat{x}_k = \hat{x}_k^- + K_{x_k}(y_k - \hat{y}_k^-), \tag{28}
\]
\[
P_{x_k} = P_k^- - K_{x_k} P_{y_k y_k} K_{x_k}^T. \tag{29}
\]
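A compact, plain (non-square-root) sketch of the recursion above is given below, assuming a generic one-step propagation function F (for example, an integration of the dynamics of Section 2) and a measurement function H (for example, the stacked line-of-sight sketch of Section 3). The helper names, parameter defaults and the Cholesky factor used as the matrix square root are implementation choices, not prescriptions from the paper.

```python
import numpy as np

def sigma_points(xhat, P, gamma):
    """2L+1 sigma points of Eqs. (14)/(22); a Cholesky factor serves as sqrt(P)."""
    L = len(xhat)
    S = np.linalg.cholesky(P)          # P = S S^T, columns of S used as square-root columns
    pts = [xhat]
    for i in range(L):
        pts.append(xhat + gamma * S[:, i])
    for i in range(L):
        pts.append(xhat - gamma * S[:, i])
    return np.array(pts)               # shape (2L+1, L)

def ukf_weights(L, alpha=1e-3, beta=2.0, kappa=0.0):
    """Weights and scaling of Eqs. (18)-(21)."""
    lam = alpha**2 * (L + kappa) - L
    gamma = np.sqrt(L + lam)
    Wm = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + 1.0 - alpha**2 + beta
    return Wm, Wc, gamma

def ukf_step(xhat, P, y, F, H, Rv, Rn, alpha=1e-3, beta=2.0, kappa=0.0):
    """One time/measurement update following Eqs. (14)-(29)."""
    L = len(xhat)
    Wm, Wc, gamma = ukf_weights(L, alpha, beta, kappa)

    # Time update: propagate sigma points and form the a priori statistics, Eqs. (14)-(17)
    Xi = np.array([F(p) for p in sigma_points(xhat, P, gamma)])
    x_pred = Wm @ Xi
    P_pred = Rv + sum(Wc[i] * np.outer(Xi[i] - x_pred, Xi[i] - x_pred) for i in range(2 * L + 1))

    # Redraw sigma points about the a priori estimate and map them through H, Eqs. (22)-(23)
    Xi = sigma_points(x_pred, P_pred, gamma)
    Y = np.array([H(p) for p in Xi])

    # Measurement statistics, Eqs. (24)-(26)
    y_pred = Wm @ Y
    Pyy = Rn + sum(Wc[i] * np.outer(Y[i] - y_pred, Y[i] - y_pred) for i in range(2 * L + 1))
    Pxy = sum(Wc[i] * np.outer(Xi[i] - x_pred, Y[i] - y_pred) for i in range(2 * L + 1))

    # Kalman gain and measurement update, Eqs. (27)-(29)
    K = Pxy @ np.linalg.inv(Pyy)
    xhat_new = x_pred + K @ (y - y_pred)
    P_new = P_pred - K @ Pyy @ K.T
    return xhat_new, P_new
```

The process noise covariance R_v is typically where the unknown maneuver and disturbance terms of Section 2 would be absorbed; the values used in the paper's simulations are not restated here.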

5. SIMULATION RESULTS

We present simulation results for four different configurations consisting of up to three mobile sensor platforms:

1. Configuration 1 consists of a single sensor platform.
2. Configuration 2 consists of three sensor platforms aligned along a straight line.
3. Configuration 3 consists of three sensor platforms at the vertices of an equilateral triangle with a side length of 283 km.
4. Configuration 4 consists of three sensor platforms at the vertices of an equilateral triangle with a side length of 495 km.

Figure 2 shows the four sensor configurations. Sensor configurations 2 through 4 may be stabilized using cooperative guidance algorithms such as those reviewed in Section 6.

Figure 2. Relative sensors-object configurations.

In all simulations we consider the measurement model of equations (10)-(11), with measurement noise components having a standard deviation of 0.1 radians. The initial position estimation error vector is (20, -20, -) km. We consider two cases of object motion: (a) non-maneuvering, and (b) maneuvering with random accelerations.

5.1 Non-maneuvering Object

Figure 3 shows the evolution of the position tracking error for the four configurations under consideration. As expected, the tracking performance improves when the number of sensors is increased from one to three in Configuration 2. Performance improves markedly when the three sensors are splayed about the object as in Configuration 3; this is a consequence of using observations from different vantage points, or relative orientations, with respect to the object. The sensor formation size also plays a significant role: there is an optimal formation size for given measurement statistics, and Configuration 4 is close to that optimal size. Tracking performance deteriorates for larger formation sizes.
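For reference, the position tracking error reported in Figures 3 through 5 can be obtained by comparing the estimated and true object positions in Cartesian coordinates. The helper below is a minimal sketch reusing the hypothetical spherical_to_cartesian function from Section 3; treating the error as a Euclidean norm (and the kilometer units) is an assumption about how the plotted quantity is defined.

```python
import numpy as np

def position_error_km(x_est, x_true):
    """Euclidean distance between estimated and true object positions.

    Both states are (r, theta, phi, ...) vectors; r is assumed to be expressed in km here.
    Uses the hypothetical spherical_to_cartesian() helper sketched in Section 3.
    """
    p_est = spherical_to_cartesian(*x_est[:3])
    p_true = spherical_to_cartesian(*x_true[:3])
    return float(np.linalg.norm(p_est - p_true))
```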

Figure 3. Position estimation error evolution for a non-maneuvering object. Final position errors: Configuration 1: 62 km, Configuration 2: 47 km, Configuration 3: 34 km, Configuration 4: 28 km.

Figure 4 shows the individual estimation error components for Configuration 4.

Figure 4. Position estimation error components (x, y, z, in km) versus time for Configuration 4.

5.2 Maneuvering Object

Figure 5 shows the evolution of the position tracking error for the four configurations under consideration when the object is maneuvering. Random object accelerations on the order of 2 percent of the total acceleration were included in these simulations. Tracking performance follows the same trend as in the non-maneuvering case, but the performance improvements of Configurations 2 through 4 are more marked in this case.

Figure 5. Position estimation error evolution for a maneuvering object. Final position errors: Configuration 1: 265 km, Configuration 2: 140 km, Configuration 3: 44 km, Configuration 4: 26 km.

6. FORMATION REGULATION

In this section we discuss two methods that can be used for regulating formations of mobile sensor networks for cooperative estimation. Several formation control (or collective motion) paradigms have been introduced in the literature over the last decade. While many of these paradigms can be adapted to implement formation regulation, the two methods reviewed here are particularly suited to object tracking problems.

6.1 Virtual Bodies and Artificial Potentials (VBAP) Framework [5-7]

The VBAP framework was developed at Princeton University during the Autonomous Ocean Sampling Network (AOSN) project [8] for coordinating groups of autonomous underwater vehicles. The framework provides a means for encoding coordination rules of motion for each vehicle, so that the group can maintain a desired formation while collectively responding to measurements in order to locate interesting features in the ocean. The strength of the framework is the systematic way in which it can be implemented. Furthermore, the approach decouples formation regulation from mission guidance, and can be implemented in a decentralized manner.

6.1.1 Formation Stabilization

Consider a group of N mobile sensors. The position of the ith sensor with respect to an inertial frame is given by a vector $x_i \in \mathbb{R}^3$, and the control force on that sensor is $u_i \in \mathbb{R}^3$. The dynamics of each sensor are

\[
\ddot{x}_i = u_i. \tag{30}
\]

A web of M reference points, called virtual leaders, may be introduced [5]. Let the position of the lth virtual leader with respect to the inertial frame be $b_l \in \mathbb{R}^3$. The virtual leader motion may be specified to provide guidance to the cooperating group of sensors. Let $x_{ij} = x_i - x_j \in \mathbb{R}^3$ represent the relative position of the ith sensor with respect to the jth sensor, and let $h_{il} = x_i - b_l \in \mathbb{R}^3$ represent the relative position of the ith sensor with respect to the lth virtual leader. Between every pair of sensors i and j an artificial potential $V_I(x_{ij})$ can be defined; similarly, an artificial potential $V_h(h_{il})$ can be defined between the ith sensor and the lth virtual leader. The cooperative guidance law for the ith sensor is essentially the negative of the gradient of the sum of these potentials, plus a linear damping term:

\[
u_i = -\sum_{j \neq i}^{N} \nabla_{x_i} V_I(x_{ij}) - \sum_{l=1}^{M} \nabla_{x_i} V_h(h_{il}) - K\dot{x}_i
    = -\sum_{j \neq i}^{N} f_I(x_{ij})\,\frac{x_{ij}}{\|x_{ij}\|} - \sum_{l=1}^{M} f_h(h_{il})\,\frac{h_{il}}{\|h_{il}\|} - K\dot{x}_i, \tag{31}
\]

where $f_I$ and $f_h$ are the magnitudes of the forces derived from the artificial potentials $V_I$ and $V_h$, respectively. The artificial potentials are chosen such that the sensors maintain a nominal separation from each other and from the virtual bodies.

6.1.2 Formation Reconfiguration and Guidance

In [7] the motion of the formation is introduced by prescribing the motion of the virtual body. The motion includes translation, rotation, expansion and contraction of the formation, as well as sensor-driven tasks and mission trajectories. Formation reconfiguration and guidance are decoupled from the problem of formation stabilization by parameterizing the virtual body motion by a scalar variable s. An augmented state space for the system is given by $(x, s, r, R, k)$, where $(R, r) \in SO(3) \times \mathbb{R}^3$ represents the orientation and position of the virtual body, and $k \in \mathbb{R}$ is a scale factor that can be regulated for expansion or contraction of the formation. The total vector fields of the virtual body motion are expressed as

\[
\frac{dr}{dt} = \frac{dr}{ds}\,\dot{s}, \tag{32}
\]
\[
\frac{dR}{dt} = \frac{dR}{ds}\,\dot{s}, \tag{33}
\]
\[
\frac{dk}{dt} = \frac{dk}{ds}\,\dot{s}. \tag{34}
\]

In [7] the magnitude of the virtual body vector fields, $\dot{s}$, which controls the speed of the virtual body, is chosen to guarantee formation stabilization and convergence properties. The direction vectors can be chosen independently, depending on the formation mission.
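A sketch of how the guidance law of Eq. (31) might be evaluated for a sensor group is shown below. The linear "spring" force used for both f_I and f_h, the nominal separations and the gain values are only illustrative choices; the paper leaves the exact potential shapes open apart from requiring that they enforce nominal separations.

```python
import numpy as np

def spring_force(d, d0, k=1.0):
    """Illustrative interaction force magnitude f(d): attractive beyond the nominal
    separation d0, repulsive inside it (corresponds to V(d) = 0.5*k*(d - d0)**2)."""
    return k * (d - d0)

def vbap_control(x, xdot, leaders, d_ss=50.0, d_sl=30.0, K=1.0):
    """Cooperative guidance of Eq. (31) for a group of N sensors.

    x, xdot : (N, 3) arrays of sensor positions and velocities
    leaders : (M, 3) array of virtual-leader positions b_l
    """
    N = len(x)
    u = np.zeros_like(x, dtype=float)
    for i in range(N):
        # Sensor-sensor terms: -sum_j f_I(|x_ij|) * x_ij / |x_ij|
        for j in range(N):
            if j == i:
                continue
            xij = x[i] - x[j]
            d = np.linalg.norm(xij)
            u[i] -= spring_force(d, d_ss) * xij / d
        # Sensor-leader terms: -sum_l f_h(|h_il|) * h_il / |h_il|
        for b_l in leaders:
            hil = x[i] - b_l
            d = np.linalg.norm(hil)
            u[i] -= spring_force(d, d_sl) * hil / d
        # Linear damping term -K * xdot_i
        u[i] -= K * xdot[i]
    return u
```

Integrating Eq. (30) under this control drives the group toward the nominal spacings while the virtual leaders (Section 6.1.2) carry the formation along the mission trajectory.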

6.2 Klein-Morgansen (KM) Approach [9, 10]

Researchers at the University of Washington have applied oscillator models to synthesize cooperative guidance algorithms for object tracking. Consider a group of N unit-speed agents whose motion is described using the natural Frenet-Serret framework. In a Cartesian reference frame with inertial coordinates, the position of each agent is represented by a vector r. The direction of the velocity of each agent is described by an orthonormal frame formed by a unit tangent vector t, a unit normal vector n, and a unit binormal vector b. Vector t represents the instantaneous velocity of the agent. The agent can be steered gyroscopically using two inputs, u and v, toward the normal and binormal vectors, respectively. The dynamical system representing the motion of the agent can be described using the following model:

\[
\frac{d}{dt}\begin{bmatrix} r \\ t \\ n \\ b \end{bmatrix}
= \begin{bmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & u & v \\
0 & -u & 0 & 0 \\
0 & -v & 0 & 0
\end{bmatrix}
\begin{bmatrix} r \\ t \\ n \\ b \end{bmatrix}. \tag{35}
\]

There have been several (earlier and later) approaches using the same model to develop collective motion control laws, such as [11]. Reference [10] directly addresses the problem of object tracking by making the following assumptions:

1. The path of the object is at least twice continuously differentiable.
2. Each sensor agent has full information about the object, including its position, velocity and acceleration.
3. Each sensor has undelayed access to all states of every other pursuer.
4. Without loss of generality, the speed of each pursuer vehicle is set to one, and the speed of the object vehicle is restricted to $\|\dot{r}_t\| \in [0, 1)$.

6.2.1 Cooperative Guidance Design Strategy

The cooperative guidance design strategy is broken into three steps:

S1: Velocity matching: Steer each pursuer vehicle such that the velocity of the group centroid matches a known dynamic reference velocity $\dot{r}_{\mathrm{ref}}$. The acceleration $\ddot{r}_{\mathrm{ref}}$ is assumed to be known.

S2: Centroid guidance: Define an outer-loop controller that generates an appropriate reference velocity to stabilize the position and velocity of the group centroid to the position and velocity of the object vehicle.

S3: Spacing control: Apply a spacing controller to keep each vehicle near the collective motion centroid without interfering with the velocity matching controls.
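The model of Eq. (35) is straightforward to propagate numerically. The sketch below advances one agent by a single explicit-Euler step and re-orthonormalizes the frame; both the integrator and the renormalization are implementation choices rather than part of the KM formulation. The steering inputs u and v would be supplied by the velocity-matching, centroid-guidance and spacing controllers of steps S1-S3.

```python
import numpy as np

def frenet_step(r, t, n, b, u, v, dt):
    """One explicit-Euler step of the natural Frenet-Serret model, Eq. (35):
    rdot = t, tdot = u*n + v*b, ndot = -u*t, bdot = -v*t."""
    r_new = r + dt * t
    t_new = t + dt * (u * n + v * b)
    n_new = n + dt * (-u * t)
    b_new = b + dt * (-v * t)
    # Re-orthonormalize the frame so numerical drift does not destroy unit speed
    t_new /= np.linalg.norm(t_new)
    n_new -= (n_new @ t_new) * t_new
    n_new /= np.linalg.norm(n_new)
    b_new = np.cross(t_new, n_new)
    return r_new, t_new, n_new, b_new
```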

Figure 6 shows a simulation result in which the centroid of three mobile sensor platforms follows a commanded straight line, while each individual sensor moves in a helical trajectory about the commanded trajectory of the centroid.

Figure 6. Simulation of the KM cooperative guidance law.

7. CONCLUDING REMARKS

In this paper we have presented simulation demonstrations illustrating tracking error performance improvements achieved through cooperative estimation. Our results indicate that certain formation configurations are better than others for achieving good tracking performance, and we have indicated methods for regulating mobile sensor platforms to such desirable formation configurations. Further work in this area will consider sensor reliability models and the effects of limited sensor-to-sensor communications, including communication latencies.

REFERENCES

[1] Zang, W., Shi, Z. G., Du, S. C., and Chen, K. S., "Novel roughening method for reentry vehicle tracking using particle filter," Journal of Electromagnetic Waves and Applications 21(14), 1969-1981 (2007).
[2] Julier, S. J. and Uhlmann, J. K., "A new extension of the Kalman filter to nonlinear systems," in Proc. 11th International Symposium on Aerospace/Defence Sensing, Simulation and Controls (1997).
[3] Wan, E. A. and van der Merwe, R., "The unscented Kalman filter for nonlinear estimation," in Proc. IEEE Symposium 2000 (2000).
[4] van der Merwe, R. and Wan, E. A., "The square-root unscented Kalman filter for state and parameter estimation," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, 3461-3464 (2001).
[5] Fiorelli, E. and Leonard, N. E., "Formations with a mission: Stable coordination of vehicle group maneuvers," in Proc. IEEE Conference on Decision and Control, 2968-2973 (2001).
[6] Ögren, P., Fiorelli, E., and Leonard, N. E., "Formations with a mission: Stable coordination of vehicle group maneuvers," in Proc. Symposium on Mathematical Theory of Networks and Systems (2002).
[7] Ögren, P., Fiorelli, E., and Leonard, N. E., "Cooperative control of mobile sensor networks: Adaptive gradient climbing in a distributed environment," IEEE Transactions on Automatic Control 49(8), 1292-1302 (2004).
[8] Autonomous Ocean Sampling Network II (AOSN-II), collaborative project. http://www.mbari.org/aosn/
[9] Klein, D. J. and Morgansen, K. A., "Controlled collective motion for trajectory tracking," in Proc. American Control Conference (2006).
[10] Klein, D. J., Matlack, C., and Morgansen, K. A., "Cooperative target tracking using oscillator models in three dimensions," in Proc. American Control Conference, 2569-2575 (2007).
[11] Sepulchre, R., Paley, D. A., and Leonard, N. E., "Stabilization of planar collective motion with limited communication," IEEE Transactions on Automatic Control 53(3), 706-719 (2008).