EE 570: Location and Navigation


On-Line Bayesian Tracking

Aly El-Osery (Electrical Engineering Department, New Mexico Tech, Socorro, New Mexico, USA)
Stephen Bruder (Electrical and Computer Engineering Department, Embry-Riddle Aeronautical University, Prescott, Arizona, USA)

March 27, 2014

Objective

Sequentially estimate on-line the states of a system as it changes over time, using observations that are corrupted with noise.

- Filtering: the time of the estimate coincides with the last measurement.
- Smoothing: the time of the estimate falls within the span of the measurements.
- Prediction: the time of the estimate occurs after the last available measurement.

Given State-Space Equations

$$x_k = f_k(x_{k-1}, w_{k-1}) \quad (1)$$
$$z_k = h_k(x_k, v_k) \quad (2)$$

where

- $x_k$ is the $(n \times 1)$ state vector at time $k$, and $z_k$ is the $(m \times 1)$ measurement vector at time $k$;
- $f_k : \mathbb{R}^n \times \mathbb{R}^{n_w} \to \mathbb{R}^n$ and $h_k : \mathbb{R}^n \times \mathbb{R}^{n_v} \to \mathbb{R}^m$ are possibly non-linear functions;
- $w_{k-1}$ is i.i.d. state noise and $v_k$ is i.i.d. measurement noise.

The state process is a Markov chain, i.e., $p(x_k \mid x_1, \ldots, x_{k-1}) = p(x_k \mid x_{k-1})$, and the distribution of $z_k$ conditional on the state $x_k$ is independent of previous state and measurement values, i.e., $p(z_k \mid x_{1:k}, z_{1:k-1}) = p(z_k \mid x_k)$.

Objective

Probabilistically estimate $x_k$ using the measurements $z_{1:k}$; in other words, construct the pdf $p(x_k \mid z_{1:k})$.

Optimal MMSE Estimate

$$E\{\|x_k - \hat{x}_k\|^2 \mid z_{1:k}\} = \int \|x_k - \hat{x}_k\|^2 \, p(x_k \mid z_{1:k}) \, dx_k \quad (3)$$

In other words, find the conditional mean

$$\hat{x}_k = E\{x_k \mid z_{1:k}\} = \int x_k \, p(x_k \mid z_{1:k}) \, dx_k \quad (4)$$

Assumptions

$w_k$ and $v_k$ are drawn from Gaussian distributions, are uncorrelated, have zero mean, and are statistically independent:

$$E\{w_k w_i^T\} = \begin{cases} Q_k & i = k \\ 0 & i \neq k \end{cases} \quad (5)$$

$$E\{v_k v_i^T\} = \begin{cases} R_k & i = k \\ 0 & i \neq k \end{cases} \quad (6)$$

$$E\{w_k v_i^T\} = 0 \quad \forall\, i, k \quad (7)$$

Assumptions

$f_k$ and $h_k$ are both linear, e.g., the state-space system equations may be written as

$$x_k = \Phi_{k-1} x_{k-1} + w_{k-1} \quad (8)$$
$$z_k = H_k x_k + v_k \quad (9)$$

where $\Phi_{k-1}$ is the $(n \times n)$ transition matrix relating $x_{k-1}$ to $x_k$, and $H_k$ is the $(m \times n)$ matrix providing the noiseless connection between the measurement and state vectors.

State-Space Equations

$$\hat{x}_{k|k-1} = \Phi_{k-1} \hat{x}_{k-1|k-1} \quad (10)$$
$$P_{k|k-1} = Q_{k-1} + \Phi_{k-1} P_{k-1|k-1} \Phi_{k-1}^T \quad (11)$$
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (z_k - H_k \hat{x}_{k|k-1}) \quad (12)$$
$$P_{k|k} = (I - K_k H_k) P_{k|k-1} \quad (13)$$

where $K_k$ is the $(n \times m)$ Kalman gain and $(z_k - H_k \hat{x}_{k|k-1})$ is the measurement innovation.

Kalman Gain

$$K_k = P_{k|k-1} H_k^T \left( H_k P_{k|k-1} H_k^T + R_k \right)^{-1} \quad (14)$$

where $H_k P_{k|k-1} H_k^T + R_k$ is the covariance of the innovation term.

Kalman filter data flow

1. Start with an initial estimate ($\hat{x}_0$ and $P_0$).
2. Compute the Kalman gain: $K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R_k)^{-1}$.
3. Update the estimate with measurement $z_k$: $\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (z_k - H_k \hat{x}_{k|k-1})$.
4. Update the error covariance: $P_{k|k} = P_{k|k-1} - K_k H_k P_{k|k-1}$.
5. Project ahead: $\hat{x}_{k|k-1} = \Phi_{k-1} \hat{x}_{k-1|k-1}$ and $P_{k|k-1} = Q_{k-1} + \Phi_{k-1} P_{k-1|k-1} \Phi_{k-1}^T$; set $k = k + 1$ and return to step 2.
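The loop above translates directly into code. Below is a minimal NumPy sketch of the data flow, assuming a generic linear time-invariant model; the names `kalman_filter`, `Phi`, `H`, `Q`, `R`, and `zs` are illustrative placeholders, not part of the original slides:

```python
import numpy as np

def kalman_filter(x0, P0, Phi, H, Q, R, zs):
    """Linear Kalman filter following the slide's data flow.

    x0, P0 : initial (predicted) state estimate and covariance
    Phi    : (n x n) transition matrix
    H      : (m x n) measurement matrix
    Q, R   : process and measurement noise covariances
    zs     : sequence of (m,) measurement vectors
    """
    n = x0.shape[0]
    x, P = x0.copy(), P0.copy()   # treated as x_{k|k-1}, P_{k|k-1}
    estimates = []
    for z in zs:
        # Compute Kalman gain (eq. 14)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        # Update estimate with measurement z_k (eq. 12)
        x = x + K @ (z - H @ x)
        # Update error covariance (eq. 13)
        P = (np.eye(n) - K @ H) @ P
        estimates.append(x.copy())
        # Project ahead (eqs. 10 and 11), then k = k + 1
        x = Phi @ x
        P = Q + Phi @ P @ Phi.T
    return np.array(estimates)
```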

Sequential Processing

If $R$ is block diagonal, i.e., $R = \mathrm{diag}(R^1, R^2, \ldots, R^r)$, where $R^i$ has dimensions $p_i \times p_i$, then we can sequentially process the measurements as follows. For $i = 1, 2, \ldots, r$:

$$K^i = P^{i-1} (H^i)^T \left( H^i P^{i-1} (H^i)^T + R^i \right)^{-1} \quad (15)$$
$$\hat{x}^i_{k|k} = \hat{x}^{i-1}_{k|k} + K^i \left( z^i_k - H^i \hat{x}^{i-1}_{k|k} \right) \quad (16)$$
$$P^i = (I - K^i H^i) P^{i-1} \quad (17)$$

where $\hat{x}^0_{k|k} = \hat{x}_{k|k-1}$, $P^0 = P_{k|k-1}$, and $H^i$ is the $(p_i \times n)$ block of rows of $H$ corresponding to the measurement being processed.
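As a concrete illustration, here is a sketch of the sequential update of equations (15)-(17), assuming the block-diagonal $R$ is supplied as a list of blocks; the function name and argument layout are illustrative:

```python
import numpy as np

def sequential_update(x_pred, P_pred, H, z, R_blocks):
    """Process the measurement vector one block at a time,
    assuming R = diag(R^1, ..., R^r).

    R_blocks : list of (p_i x p_i) measurement noise blocks
    """
    x, P = x_pred.copy(), P_pred.copy()   # x^0_{k|k}, P^0
    n = x.shape[0]
    row = 0
    for Ri in R_blocks:
        p = Ri.shape[0]
        Hi = H[row:row + p, :]   # the p_i x n rows of H for this block
        zi = z[row:row + p]
        Ki = P @ Hi.T @ np.linalg.inv(Hi @ P @ Hi.T + Ri)   # eq. 15
        x = x + Ki @ (zi - Hi @ x)                          # eq. 16
        P = (np.eye(n) - Ki @ Hi) @ P                       # eq. 17
        row += p
    return x, P
```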

Observability

The system is observable if the observability matrix

$$O(k) = \begin{bmatrix} H(k-n+1) \\ H(k-n+2)\,\Phi(k-n+1) \\ \vdots \\ H(k)\,\Phi(k-1)\cdots\Phi(k-n+1) \end{bmatrix} \quad (18)$$

where $n$ is the number of states, has rank $n$. The rank of $O$ is a binary indicator: it does not provide a measure of how close the system is to being unobservable, and hence is prone to numerical ill-conditioning.

A Better Observability Measure

In addition to computing the rank of $O(k)$, compute the Singular Value Decomposition (SVD) of $O(k)$ as

$$O(k) = U \Sigma V^T \quad (19)$$

and observe the diagonal values of the matrix $\Sigma$. Using this approach it is possible to monitor variations in the system observability due to changes in the system dynamics.
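A small sketch of this check, assuming time-varying $H(k)$ and $\Phi(k)$ are supplied as lists; small singular values of $O(k)$ flag weakly observable directions:

```python
import numpy as np

def observability_svd(H_seq, Phi_seq):
    """Build O(k) from equation (18) and return its rank and singular values.

    H_seq   : [H(k-n+1), ..., H(k)]
    Phi_seq : [Phi(k-n+1), ..., Phi(k-1)]
    """
    n = H_seq[0].shape[1]
    blocks = [H_seq[0]]
    Phi_prod = np.eye(n)
    for H, Phi in zip(H_seq[1:], Phi_seq):
        Phi_prod = Phi @ Phi_prod   # accumulate Phi(k-1)...Phi(k-n+1)
        blocks.append(H @ Phi_prod)
    O = np.vstack(blocks)
    sigma = np.linalg.svd(O, compute_uv=False)
    rank = int(np.sum(sigma > sigma.max() * max(O.shape) * np.finfo(float).eps))
    return rank, sigma   # inspect sigma to see how close to unobservable
```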

Remarks

- The Kalman filter is optimal under the aforementioned assumptions; it is also an unbiased, minimum-variance estimator.
- If the Gaussian assumption does not hold, the Kalman filter is biased and not minimum variance.
- Observability is dynamics dependent.
- The error covariance update may be implemented using the Joseph form, which provides a more stable solution due to its guaranteed symmetry (see the sketch below):

$$P_{k|k} = (I - K_k H_k) P_{k|k-1} (I - K_k H_k)^T + K_k R_k K_k^T \quad (20)$$
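A one-function NumPy sketch of the Joseph-form update in equation (20), under the same conventions as the earlier Kalman filter sketch:

```python
import numpy as np

def joseph_update(P_pred, K, H, R):
    """Covariance measurement update in Joseph form (eq. 20).
    Symmetric by construction, hence more numerically stable than eq. 13."""
    n = P_pred.shape[0]
    A = np.eye(n) - K @ H
    return A @ P_pred @ A.T + K @ R @ K.T
```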

System Model

To obtain the state vector estimate $\hat{x}(t)$, start from the continuous-time model

$$\dot{x}(t) = F(t)\,x(t) + G(t)\,w(t) \quad (21)$$

Taking expectations gives

$$\frac{\partial}{\partial t}\hat{x}(t) = F(t)\,\hat{x}(t) \quad (22)$$

Solving the above equation over the interval $[t - \tau_s, t]$:

$$\hat{x}(t) = e^{\int_{t-\tau_s}^{t} F(t')\,dt'} \, \hat{x}(t - \tau_s) \quad (23)$$

System Model Discretization

As shown in the Kalman filter equations, the state vector estimate is given by $\hat{x}_{k|k-1} = \Phi_{k-1} \hat{x}_{k-1|k-1}$. Therefore,

$$\Phi_{k-1} = e^{F_{k-1}\tau_s} \approx I + F_{k-1}\tau_s \quad (24)$$

where $F_{k-1}$ is the average of $F$ at times $t$ and $t - \tau_s$, and a first-order approximation is used.
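A sketch of this discretization, using SciPy's matrix exponential for the exact form and the $I + F\tau_s$ shortcut for the first-order approximation; averaging $F$ over the interval follows the slide:

```python
import numpy as np
from scipy.linalg import expm

def discretize_F(F_prev, F_now, tau_s, first_order=False):
    """Form Phi_{k-1} from the system matrix F (eq. 24).

    F_prev, F_now : F evaluated at times t - tau_s and t
    """
    F_avg = 0.5 * (F_prev + F_now)   # average of F over the interval
    if first_order:
        return np.eye(F_avg.shape[0]) + F_avg * tau_s   # I + F tau_s
    return expm(F_avg * tau_s)                          # exact e^{F tau_s}
```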

Discrete Covariance Matrix Q_k

Assuming white noise, a small time step, $G$ constant over the integration period, and trapezoidal integration,

$$Q_{k-1} \approx \frac{1}{2}\left[ \Phi_{k-1} G_{k-1} Q(t_{k-1}) G_{k-1}^T \Phi_{k-1}^T + G_{k-1} Q(t_{k-1}) G_{k-1}^T \right] \tau_s \quad (25)$$

where

$$E\{w(\eta)\, w^T(\zeta)\} = Q(\eta)\,\delta(\eta - \zeta) \quad (26)$$
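The trapezoidal approximation of equation (25) is a few lines of NumPy; `Qc` stands for the continuous-time noise density $Q(t_{k-1})$ and is an illustrative name:

```python
import numpy as np

def discrete_Q(Phi, G, Qc, tau_s):
    """Trapezoidal approximation of the discrete process noise
    covariance (eq. 25)."""
    GQGt = G @ Qc @ G.T
    return 0.5 * (Phi @ GQGt @ Phi.T + GQGt) * tau_s
```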

Linearized System

$$F_k = \left.\frac{\partial f(x)}{\partial x}\right|_{x = \hat{x}_{k|k-1}}, \qquad H_k = \left.\frac{\partial h(x)}{\partial x}\right|_{x = \hat{x}_{k|k-1}} \quad (27)$$

where

$$\frac{\partial f(x)}{\partial x} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}, \qquad \frac{\partial h(x)}{\partial x} = \begin{bmatrix} \frac{\partial h_1}{\partial x_1} & \frac{\partial h_1}{\partial x_2} & \cdots \\ \frac{\partial h_2}{\partial x_1} & \frac{\partial h_2}{\partial x_2} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \quad (28)$$
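When analytic Jacobians are inconvenient, the partial derivatives in equation (28) can be approximated numerically. A central-difference sketch (the step size `eps` is a tunable assumption, not from the slides):

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-7):
    """Central-difference approximation of the Jacobian in eq. (28),
    evaluated at x (e.g., x = x_hat_{k|k-1}).  func: R^n -> R^m."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(func(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(func(x + dx)) - np.asarray(func(x - dx))) / (2 * eps)
    return J
```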

First Order Markov Noise

State Equation:

$$\dot{b}(t) = -\frac{1}{T_c}\, b(t) + w(t) \quad (29)$$

Autocorrelation Function:

$$E\{b(t)\,b(t+\tau)\} = \sigma_{BI}^2 \, e^{-|\tau|/T_c} \quad (30)$$

where

$$E\{w(t)\,w(t+\tau)\} = Q(t)\,\delta(t - \tau) \quad (31)$$

$T_c$ is the correlation time, and

$$Q(t) = \frac{2\sigma_{BI}^2}{T_c} \quad (32)$$

Discrete First Order Markov Noise

State Equation:

$$b_k = e^{-\tau_s/T_c} \, b_{k-1} + w_{k-1} \quad (33)$$

System Covariance Matrix:

$$Q = \sigma_{BI}^2 \left[ 1 - e^{-2\tau_s/T_c} \right] \quad (34)$$
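Equations (33) and (34) give a direct recipe for simulating this noise; a minimal sketch (the function name and generator handling are illustrative):

```python
import numpy as np

def simulate_first_order_markov(sigma_BI, Tc, tau_s, n_steps, rng=None):
    """Simulate the discrete first-order Markov (Gauss-Markov) process of
    eq. (33), with driving-noise variance from eq. (34)."""
    rng = np.random.default_rng() if rng is None else rng
    phi = np.exp(-tau_s / Tc)                            # e^{-tau_s / T_c}
    q = sigma_BI**2 * (1.0 - np.exp(-2.0 * tau_s / Tc))  # eq. 34
    b = np.zeros(n_steps)
    for k in range(1, n_steps):
        b[k] = phi * b[k - 1] + np.sqrt(q) * rng.standard_normal()
    return b
```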

Autocorrelation of 1st order Markov

$$R_b(\tau) = \sigma_{BI}^2 \, e^{-|\tau|/T_c}$$

[Figure: autocorrelation function $R_b(\tau)$, peaking at $\sigma_{BI}^2$ and decaying exponentially with $|\tau|$.]

Small Correlation Time $T_c = 0.01$

[Figure: autocorrelation $R_b(\tau)$ vs. $\tau$, and measured, actual, and estimated $b(t)$ over 0 to 1 sec.]

Larger Correlation Time $T_c = 0.1$

[Figure: autocorrelation $R_b(\tau)$ vs. $\tau$, and measured, actual, and estimated $b(t)$ over 0 to 1 sec.]

Unscented Kalman Filter (UKF)

Propagates carefully chosen sample points (using the unscented transformation) through the true non-linear system, and therefore captures the posterior mean and covariance accurately to the second order.
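The heart of the UKF is the unscented transform itself. A minimal sketch using the classic symmetric sigma-point set with scaling parameter $\kappa$ (a common but not unique choice, and an assumption rather than the slides' specification):

```python
import numpy as np

def unscented_transform(f, x_mean, P, kappa=1.0):
    """Propagate sigma points through a non-linear function f and recover
    the transformed mean and covariance (the core step of the UKF)."""
    n = x_mean.shape[0]
    L = np.linalg.cholesky((n + kappa) * P)   # matrix square root
    # 2n + 1 sigma points: mean plus symmetric spreads along columns of L
    sigmas = [x_mean] + [x_mean + L[:, i] for i in range(n)] \
                      + [x_mean - L[:, i] for i in range(n)]
    weights = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    weights[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigmas])
    y_mean = weights @ ys
    y_cov = sum(w * np.outer(y - y_mean, y - y_mean)
                for w, y in zip(weights, ys))
    return y_mean, y_cov
```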

Particle Filter

A Monte Carlo based method that allows a complete representation of the state distribution function. Unlike the EKF and UKF, particle filters do not require the Gaussian assumptions.
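For contrast with the Kalman recursions above, here is a sketch of a single bootstrap (sampling-importance-resampling) particle filter step, assuming additive Gaussian measurement noise; the dynamics `f` and measurement `h` are user-supplied and may be arbitrarily non-linear:

```python
import numpy as np

def bootstrap_pf_step(particles, f, h, R, z, rng=None):
    """One bootstrap particle filter step: propagate each particle through
    the dynamics f (which injects its own process noise), weight by the
    Gaussian measurement likelihood, and resample."""
    rng = np.random.default_rng() if rng is None else rng
    N = particles.shape[0]
    # Prediction: propagate particles through the dynamics
    particles = np.array([f(p, rng) for p in particles])
    # Weight by likelihood p(z | x) under Gaussian measurement noise R
    Rinv = np.linalg.inv(R)
    innov = np.array([z - h(p) for p in particles])
    logw = -0.5 * np.einsum('ij,jk,ik->i', innov, Rinv, innov)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample (multinomial) to combat weight degeneracy
    idx = rng.choice(N, size=N, p=w)
    return particles[idx]
```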

Reference: Bayesian Filtering: From Kalman Filters to Particle Filters, and Beyond, by Zhe Chen.