Introduction to Kalman Filtering
1 Introduction to Kalman Filtering. A set of two lectures. Maria Isabel Ribeiro, Associate Professor, Instituto Superior Técnico / Instituto de Sistemas e Robótica. June. All rights reserved.
2 INTRODUCTION TO KALMAN FILTERING - Contents: What is a Kalman Filter? (Introduction to the Concept; Which is the best estimate?; Basic Assumptions). Discrete Kalman Filter (Problem Formulation; From the Assumptions to the Problem Solution; Towards the Solution; Filter dynamics: Prediction cycle, Filtering cycle, Summary). Properties of the Discrete KF. A simple example. The meaning of the error covariance matrix. The Extended Kalman Filter.
3 WHAT IS A KALMAN FILTER? - Optimal Recursive Data Processing Algorithm. Typical Kalman filter application (block diagram): controls drive the system, which is also driven by system error sources; the system state is desired but not known. Measuring devices, affected by measurement error sources, produce the observed measurements. The Kalman filter combines the controls and the observed measurements into an optimal estimate of the system state.
4 WHAT IS A KALMAN FILTER? Introduction to the Concept - Optimal Recursive Data Processing Algorithm. Optimality is dependent upon the criteria chosen to evaluate performance; under certain assumptions, the KF is optimal with respect to virtually any criterion that makes sense. The KF incorporates all available information: knowledge of the system and measurement device dynamics; statistical description of the system noises, measurement errors, and uncertainty in the dynamics models; any available information about the initial conditions of the variables of interest.
5 WHAT IS A KALMAN FILTER? Introduction to the concept - Optimal Recursive Data Processing Algorithm.
x(k+1) = f(x(k), u(k), w(k))
z(k+1) = h(x(k+1), v(k+1))
x - state; f - system dynamics; h - measurement function; u - controls; w - system error sources; v - measurement error sources; z - observed measurements.
Given f, h, the noise characterization, the initial conditions, and the measurements z(1), z(2), ..., z(k), obtain the best estimate of x(k).
6 WHAT IS A KALMAN FILTER? Introduction to the concept - Optimal Recursive Data Processing Algorithm. The KF does not require all previous data to be kept in storage and reprocessed every time a new measurement is taken:
z(1), z(2), ..., z(k) -> KF -> x̂(k|k)
z(1), z(2), ..., z(k), z(k+1) -> KF -> x̂(k+1|k+1)
To evaluate x̂(k+1|k+1) the KF only requires x̂(k|k) and z(k+1).
7 WHAT IS A KALMAN FILTER? Introduction to the concept - Optimal Recursive Data Processing Algorithm. The KF is a data processing algorithm: the KF is a computer program running in a central processor.
8 WHAT IS THE KALMAN FILTER? Which is the best estimate? - Any type of filter tries to obtain an optimal estimate of desired quantities from data provided by a noisy environment. Best = minimizing errors in some respect. Bayesian viewpoint: the filter propagates the conditional probability density of the desired quantities, conditioned on the knowledge of the actual data coming from the measuring devices. Why base the state estimation on the conditional probability density function?
9 WHAT IS A KALMAN FILTER? Which is the best estimate? - Example: x(i) is the one-dimensional position of a vehicle at time instant i; z(j) is a two-dimensional vector describing the measurements of position at time j by two separate radars. If z(1) = z1, z(2) = z2, ..., z(i) = zi, then p(x(i) | z1, z2, ..., zi) represents all the information we have on x(i), based (conditioned) on the measurements acquired up to time i: given the value of all measurements taken up to time i, this conditional pdf indicates what the probability would be of x(i) assuming any particular value or range of values.
10 WHAT IS A KALMAN FILTER? Which is the best estimate? - The shape of p(x(i) | z1, z2, ..., zi) conveys the amount of certainty we have in the knowledge of the value x. Based on this conditional pdf, the estimate can be: the mean - the center of probability mass (MMSE); the mode - the value of x that has the highest probability (MAP); the median - the value of x such that half the probability weight lies to the left and half to the right of it.
11 WHAT IS THE KALMAN FILTER? Basic Assumptions - The Kalman Filter performs the conditional probability density propagation for systems that can be described through a LINEAR model in which the system and measurement noises are WHITE and GAUSSIAN. Under these assumptions, the conditional pdf is Gaussian: mean = mode = median; there is a unique best estimate of the state; the KF is the best filter among all possible filter types. What happens if these assumptions are relaxed? Is the KF still an optimal filter? In which class of filters?
12 DISCRETE KALMAN FILTER Problem Formulation - MOTIVATION: given a discrete-time, linear, time-varying plant, with random initial state, driven by white plant noise; given noisy measurements of linear combinations of the plant state variables; determine the best estimate of the system state variables.
STATE DYNAMICS AND MEASUREMENT EQUATION:
x(k+1) = A(k) x(k) + B(k) u(k) + G(k) w(k)
z(k) = C(k) x(k) + v(k)
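As a concrete companion to this formulation, the following Python sketch simulates one such plant. The matrices A, B, G, C, Q, R and all numerical values are hypothetical choices for a toy position-velocity model, not part of the lectures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy plant (position and velocity, unit sampling period).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])      # state transition matrix A
B = np.array([[0.5],
              [1.0]])           # input matrix B
G = np.eye(2)                   # noise input matrix G
C = np.array([[1.0, 0.0]])      # we measure position only
Q = 0.01 * np.eye(2)            # system noise covariance
R = np.array([[0.25]])          # measurement noise covariance

def simulate(n_steps, x0):
    """Simulate x(k+1) = A x(k) + B u(k) + G w(k), z(k) = C x(k) + v(k)."""
    x, xs, zs = x0, [], []
    for _ in range(n_steps):
        u = np.array([0.0])     # null input, for simplicity
        w = rng.multivariate_normal(np.zeros(2), Q)
        v = rng.multivariate_normal(np.zeros(1), R)
        x = A @ x + B @ u + G @ w
        zs.append(C @ x + v)
        xs.append(x)
    return np.array(xs), np.array(zs)
```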
13 DISCRETE KALMAN FILTER Problem Formulation - VARIABLE DEFINITIONS:
x in R^n - state vector (stochastic, non-white process)
u in R^m - deterministic input sequence
w in R^n - white Gaussian system noise (assumed with zero mean)
v in R^r - white Gaussian measurement noise (assumed with zero mean)
z in R^r - measurement vector (stochastic, non-white sequence)
14 DISCRETE KALMAN FILTER Problem Formulation - INITIAL CONDITIONS: x(0) is a Gaussian random vector, with mean E[x(0)] = x̄0 and covariance matrix E[(x(0) - x̄0)(x(0) - x̄0)ᵀ] = P0.
STATE AND MEASUREMENT NOISE: zero mean, E[w(k)] = E[v(k)] = 0; {w(k)}, {v(k)} are white Gaussian sequences with E[w(k) w(k)ᵀ] = Q and E[v(k) v(k)ᵀ] = R; x(0), w(k) and v(j) are independent for all k and j.
15 DISCRETE KALMAN FILTER Problem Formulation - DEFINITION OF THE FILTERING PROBLEM: let k denote the present value of time. Given the sequence of past inputs U(k) = {u(0), u(1), ..., u(k-1)} and the sequence of past measurements Z(k) = {z(1), z(2), ..., z(k)}, evaluate the best estimate of the state x(k).
16 DISCRETE KALMAN FILTER Problem Formulation - Given x(0): nature applies w(0), we apply u(0); the system (x(k+1) = A x(k) + B u(k) + G w(k), z(k) = C x(k) + v(k)) moves to state x(1); we make a measurement z(1). Question: which is the best estimate of x(1)? Answer: obtained from p(x(1)|Z(1)). Nature applies w(1), we apply u(1); the system moves to state x(2); we make a measurement z(2). Question: which is the best estimate of x(2)? Answer: obtained from p(x(2)|Z(2)). ...
17 DISCRETE KALMAN FILTER Problem Formulation - ... Question: which is the best estimate of x(k-1)? Answer: obtained from p(x(k-1)|Z(k-1)). Nature applies w(k-1), we apply u(k-1); the system moves to state x(k); we make a measurement z(k). Question: which is the best estimate of x(k)? Answer: obtained from p(x(k)|Z(k)).
18 DISCRETE KALMAN FILTER Towards the Solution - The filter has to propagate the conditional probability density functions: p(x(0)); p(x(1)|Z(1)), from which x̂(1|1); p(x(2)|Z(2)), from which x̂(2|2); ...; p(x(k)|Z(k)), from which x̂(k|k); ...
19 DISCRETE KALMAN FILTER From the Assumptions to the Problem Solution - The LINEARITY of the system state equation and of the system observation equation, and the GAUSSIAN nature of the initial state x(0), the system white noise w(k), and the measurement white noise v(k), imply that p(x(k)|Z(k)) is Gaussian, uniquely characterized by the conditional mean x̂(k|k) = E[x(k)|Z(k)] and the conditional covariance P(k|k) = cov[x(k); x(k)|Z(k)]:
p(x(k)|Z(k)) ~ N(x̂(k|k), P(k|k))
20 DISCRETE KALMAN FILTER Towards the Solution - As the conditional probability density functions are Gaussian, the Kalman filter only propagates the first two moments: for each k, p(x(k)|Z(k)) is summarized by E[x(k)|Z(k)] = x̂(k|k) and P(k|k).
21 DISCRETE KALMAN FILTER Towards the Solution - We stated that the state estimate equals the conditional mean, x̂(k|k) = E[x(k)|Z(k)]. Why? Why not the mode of p(x(k)|Z(k))? Why not the median of p(x(k)|Z(k))? As p(x(k)|Z(k)) is Gaussian, mean = mode = median.
22 DISCRETE KALMAN FILTER Filter dynamics - The KF dynamics is recursive. With Z(k) = {z(1), ..., z(k)} and U(k) = {u(0), ..., u(k-1)}, we have Z(k+1) = {Z(k), z(k+1)} and U(k+1) = {U(k), u(k)}.
Prediction cycle: p(x(k)|Z(k)) -> p(x(k+1)|Z(k)) - what can you say about x(k+1) before we make the measurement z(k+1)?
Filtering cycle: p(x(k+1)|Z(k)) -> p(x(k+1)|Z(k+1)) - how can we improve our information on x(k+1) after we make the measurement z(k+1)?
23 DISCRETE KALMAN FILTER Filter dynamics - p(x(0)) -prediction-> p(x(1)|Z(0)) -filtering-> p(x(1)|Z(1)) -prediction-> p(x(2)|Z(1)) -filtering-> p(x(2)|Z(2)) -> ... -> p(x(k)|Z(k)) -prediction-> p(x(k+1)|Z(k)) -filtering-> p(x(k+1)|Z(k+1)) -> ...
24 DISCRETE KALMAN FILTER Filter dynamics - Prediction cycle. p(x(k)|Z(k)) ~ N(x̂(k|k), P(k|k)) is assumed known. Is p(x(k+1)|Z(k)) Gaussian? x̂(k+1|k) = E[x(k+1)|Z(k)] = ? P(k+1|k) = cov[x(k+1); x(k+1)|Z(k)] = ?
E[x(k+1)|Z(k)] = E[A x(k) + B u(k) + G w(k) | Z(k)] = A E[x(k)|Z(k)] + B E[u(k)|Z(k)] + G E[w(k)|Z(k)], so
x̂(k+1|k) = A x̂(k|k) + B u(k)
25 DISCRETE KALMAN FILTER Filter dynamics - Prediction cycle. P(k+1|k) = cov[x(k+1); x(k+1)|Z(k)]. Prediction error:
x̃(k+1|k) = x(k+1) - x̂(k+1|k) = A x(k) + B u(k) + G w(k) - (A x̂(k|k) + B u(k)) = A x̃(k|k) + G w(k)
Then P(k+1|k) = E[x̃(k+1|k) x̃(k+1|k)ᵀ | Z(k)] (using cov[y; y] = E[(y - ȳ)(y - ȳ)ᵀ]), which yields
P(k+1|k) = A P(k|k) Aᵀ + G Q Gᵀ
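A minimal Python sketch of this prediction cycle, assuming the arrays follow the dimensions defined on slide 13; the function name and interface are our choices:

```python
import numpy as np

def predict(x_hat, P, A, B, G, Q, u):
    """Prediction cycle: from x̂(k|k), P(k|k) to x̂(k+1|k), P(k+1|k)."""
    x_pred = A @ x_hat + B @ u              # x̂(k+1|k) = A x̂(k|k) + B u(k)
    P_pred = A @ P @ A.T + G @ Q @ G.T      # P(k+1|k) = A P(k|k) Aᵀ + G Q Gᵀ
    return x_pred, P_pred
```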
26 DISCRETE KALMAN FILTER Filter dynamics - Filtering cycle. p(x(k+1)|Z(k)) ~ N(x̂(k+1|k), P(k+1|k)); given z(k+1), what is p(x(k+1)|Z(k+1))?
1st step - measurement prediction: what can you say about z(k+1) before we make the measurement z(k+1)? p(z(k+1)|Z(k)) = p(C(k+1) x(k+1) + v(k+1) | Z(k)), with
E[z(k+1)|Z(k)] = ẑ(k+1|k) = C(k+1) x̂(k+1|k)
cov[z(k+1); z(k+1)|Z(k)] = Pz(k+1|k) = C(k+1) P(k+1|k) C(k+1)ᵀ + R(k+1)
27 DISCRETE KALMAN FILTER Filter dynamics - Filtering cycle. 2nd step: E[x(k+1)|Z(k+1)] = E[x(k+1)|Z(k), z(k+1)]. Z(k+1) and {Z(k), z̃(k+1|k)} are equivalent from the point of view of the information they contain, so E[x(k+1)|Z(k+1)] = E[x(k+1)|Z(k), z̃(k+1|k)]. If x, y and z are jointly Gaussian and y and z are statistically independent, then E[x|y, z] = E[x|y] + E[x|z] - m_x, which is the required result.
28 DISCRETE KALMAN FILTER Filter dynamics - Filtering cycle.
x̂(k+1|k+1) = x̂(k+1|k) + P(k+1|k) C(k+1)ᵀ [C(k+1) P(k+1|k) C(k+1)ᵀ + R(k+1)]⁻¹ (z(k+1) - C(k+1) x̂(k+1|k))
where K(k+1) = P(k+1|k) C(k+1)ᵀ [C(k+1) P(k+1|k) C(k+1)ᵀ + R(k+1)]⁻¹ is the Kalman gain and ẑ(k+1|k) = C(k+1) x̂(k+1|k) is the measurement prediction. With the innovation z̃(k+1|k) = z(k+1) - ẑ(k+1|k),
x̂(k+1|k+1) = x̂(k+1|k) + K(k+1) z̃(k+1|k)
P(k+1|k+1) = P(k+1|k) - P(k+1|k) C(k+1)ᵀ [C(k+1) P(k+1|k) C(k+1)ᵀ + R(k+1)]⁻¹ C(k+1) P(k+1|k)
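The corresponding filtering (update) cycle as a Python sketch; the function name and interface are again our choices, and the gain is computed with a plain matrix inverse for clarity rather than numerical robustness:

```python
import numpy as np

def update(x_pred, P_pred, C, R, z):
    """Filtering cycle: correct x̂(k+1|k), P(k+1|k) with measurement z(k+1)."""
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain K(k+1)
    r = z - C @ x_pred                       # innovation z̃(k+1|k)
    x_filt = x_pred + K @ r                  # x̂(k+1|k+1)
    P_filt = P_pred - K @ C @ P_pred         # P(k+1|k+1)
    return x_filt, P_filt, r, S
```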
29 DISCRETE KALMAN FILTER Dynamics - Linear system: x(k+1) = A x(k) + B u(k) + G w(k), z(k) = C x(k) + v(k).
Discrete Kalman Filter:
Prediction: x̂(k+1|k) = A x̂(k|k) + B u(k); P(k+1|k) = A P(k|k) Aᵀ + G Q Gᵀ
Filtering: x̂(k+1|k+1) = x̂(k+1|k) + K(k+1)(z(k+1) - C(k+1) x̂(k+1|k)); P(k+1|k+1) = P(k+1|k) - P(k+1|k) C(k+1)ᵀ [C(k+1) P(k+1|k) C(k+1)ᵀ + R(k+1)]⁻¹ C(k+1) P(k+1|k); K(k+1) = P(k+1|k) C(k+1)ᵀ [C(k+1) P(k+1|k) C(k+1)ᵀ + R(k+1)]⁻¹
Initial conditions: x̂(0|0) = x̄0, P(0|0) = P0
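Putting the two cycles together, a sketch of the complete filter loop, reusing the simulate, predict, and update sketches above; the initial conditions x̄0 and P0 are arbitrary assumed values:

```python
import numpy as np

# Run the filter on simulated data (reuses A, B, G, C, Q, R, simulate,
# predict and update from the sketches above).
xs_true, zs = simulate(100, x0=np.array([0.0, 1.0]))
x_hat = np.zeros(2)            # x̂(0|0) = x̄0 (assumed value)
P = 10.0 * np.eye(2)           # P(0|0) = P0 (assumed value)
innovations = []
for z in zs:
    x_pred, P_pred = predict(x_hat, P, A, B, G, Q, u=np.array([0.0]))
    x_hat, P, r, S = update(x_pred, P_pred, C, R, z)
    innovations.append(r)      # kept for the whiteness test below
```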
30 DISCRETE KALMAN FILTER Properties - The Discrete KF is a time-varying linear system:
x̂(k+1|k+1) = (I - K(k+1) C(k+1)) (A x̂(k|k) + B u(k)) + K(k+1) z(k+1)
Even when the system is time-invariant and has stationary noise, x̂(k+1|k+1) = (I - K(k+1) C) (A x̂(k|k) + B u(k)) + K(k+1) z(k+1), the Kalman gain is not constant. Does the Kalman gain matrix converge to a constant matrix? Under which conditions?
31 DISCRETE KALMAN FILTER Properties - The state estimate is a linear function of the measurements. Writing the KF dynamics in terms of the filtering estimate, with Φ(k+1) = (I - K(k+1) C(k+1)) A, x̂(0|0) = x̄0, and assuming null inputs for the sake of simplicity:
x̂(1|1) = Φ(1) x̂(0|0) + K(1) z(1)
x̂(2|2) = Φ(2) Φ(1) x̂(0|0) + Φ(2) K(1) z(1) + K(2) z(2)
x̂(3|3) = Φ(3) Φ(2) Φ(1) x̂(0|0) + Φ(3) Φ(2) K(1) z(1) + Φ(3) K(2) z(2) + K(3) z(3)
32 DISCRETE KALMAN FILTER Properties - Innovation process: r(k+1) = z(k+1) - C(k+1) x̂(k+1|k), with x̂(k+1|k) = E[x(k+1)|Z(k)]. z(k+1) carries information on x(k+1) that was not available in Z(k); this new information is represented by r(k+1), the innovation process.
Properties of the innovation process: the innovations r(k) are orthogonal to z(i), E[r(k) z(i)ᵀ] = 0, i = 1, ..., k-1; the innovations are uncorrelated/white noise, E[r(k) r(i)ᵀ] = 0, k ≠ i. This test can be used to assess whether the filter is operating correctly.
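This whiteness test can be automated. Below is a sketch that estimates the sample autocorrelation of a recorded scalar innovation sequence; for a correctly operating filter, the values at nonzero lags should fall within roughly ±2/√N of zero. The function name and that bound are our choices:

```python
import numpy as np

def innovation_autocorrelation(residuals, max_lag=10):
    """Sample autocorrelation of a scalar innovation sequence.
    For a correctly operating filter, the values at lags >= 1 should
    lie within roughly +/- 2/sqrt(N) of zero."""
    r = np.asarray(residuals, dtype=float).ravel()
    r = r - r.mean()
    denom = float(np.sum(r * r))
    return np.array([np.sum(r[:r.size - lag] * r[lag:]) / denom
                     for lag in range(max_lag + 1)])
```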
33 DISCRETE KALMAN FILTER Properties - Covariance matrix of the innovation process:
S(k+1) = C(k+1) P(k+1|k) C(k+1)ᵀ + R(k+1)
so the Kalman gain K(k+1) = P(k+1|k) C(k+1)ᵀ [C(k+1) P(k+1|k) C(k+1)ᵀ + R(k+1)]⁻¹ can be written as
K(k+1) = P(k+1|k) C(k+1)ᵀ S(k+1)⁻¹
34 DISCRETE KALMAN FILTER Properties - The Discrete KF provides an unbiased estimate of the state: x̂(k+1|k+1) is an unbiased estimate of x(k+1), provided that the initial conditions are x̂(0|0) = x̄0 and P(0|0) = P0. Is this still true if the filter initial conditions are not the specified ones?
35 DISCRETE KALMAN FILTER Steady-state Kalman Filter - Time-invariant system and stationary white system and observation noise: x(k+1) = A x(k) + G w(k), z(k) = C x(k) + v(k), with E[w(k) w(k)ᵀ] = Q and E[v(k) v(k)ᵀ] = R.
Filter dynamics: x̂(k+1|k+1) = A x̂(k|k) + K(k+1) (z(k+1) - C A x̂(k|k))
Covariance propagation (Discrete Riccati Equation):
P(k+1|k) = A P(k|k-1) Aᵀ - A P(k|k-1) Cᵀ [C P(k|k-1) Cᵀ + R]⁻¹ C P(k|k-1) Aᵀ + G Q Gᵀ
36 DISCRETE KALMAN FILTER Steady-state Kalman Filter - If Q is positive definite, (A, G Q^(1/2)) is controllable, and (A, C) is observable, then the steady-state Kalman filter exists: the limit lim P(k+1|k) = P̄ exists as k goes to infinity; P̄ is the unique, finite, positive-semidefinite solution to the algebraic equation
P̄ = A P̄ Aᵀ - A P̄ Cᵀ [C P̄ Cᵀ + R]⁻¹ C P̄ Aᵀ + G Q Gᵀ
P̄ is independent of P0 provided that P0 ≥ 0; the steady-state Kalman filter is asymptotically unbiased, with constant gain K̄ = P̄ Cᵀ [C P̄ Cᵀ + R]⁻¹.
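One way to obtain K̄ numerically, sketched below, is simply to iterate the discrete Riccati equation until P(k+1|k) stops changing; the tolerance, iteration cap, and function name are our choices, and in practice a dedicated DARE solver would be preferred:

```python
import numpy as np

def steady_state_gain(A, C, G, Q, R, tol=1e-10, max_iter=100_000):
    """Iterate the discrete Riccati equation until P(k+1|k) converges to P̄,
    then return K̄ = P̄ Cᵀ (C P̄ Cᵀ + R)⁻¹ together with P̄."""
    P = np.eye(A.shape[0])                   # any P0 >= 0 will do
    for _ in range(max_iter):
        APCt = A @ P @ C.T
        P_next = (A @ P @ A.T
                  - APCt @ np.linalg.inv(C @ P @ C.T + R) @ APCt.T
                  + G @ Q @ G.T)
        if np.max(np.abs(P_next - P)) < tol:
            K_bar = P_next @ C.T @ np.linalg.inv(C @ P_next @ C.T + R)
            return K_bar, P_next
        P = P_next
    raise RuntimeError("Riccati iteration did not converge")
```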
37 MEANING OF THE COVARIANCE MATRIX Generalities on the Gaussian pdf - Let z be a Gaussian random vector of dimension n, with E[z] = m and E[(z - m)(z - m)ᵀ] = P; P is the covariance matrix, symmetric and positive definite. Probability density function:
p(z) = (2π)^(-n/2) (det P)^(-1/2) exp(-½ (z - m)ᵀ P⁻¹ (z - m))
(The slide illustrates the pdf for n = 1 and n = 2.)
38 MEANING OF THE COVARIANCE MATRIX Generalities on the Gaussian pdf - Locus of points where the pdf is greater than or equal to a given threshold:
(z - m)ᵀ P⁻¹ (z - m) ≤ K
n = 1: line segment; n = 2: ellipse and inner points; n = 3: 3D ellipsoid and inner points; n > 3: hyperellipsoid and inner points. If P = diag(σ1², σ2², ..., σn²), the ellipsoid axes are aligned with the axes of the reference frame where the vector z is defined:
sum over i of (zi - mi)²/σi² ≤ K
and the length of the i-th ellipse semi-axis is √K σi.
39 MEANING OF THE COVARIANCE MATRIX Generalities on the Gaussian pdf - Error ellipsoid. Example for n = 2: the slide plots error ellipses for diagonal covariance matrices P = diag(σ1², σ2²).
40 MEANING OF THE COVARIANCE MATRIX Generalities on the Gaussian pdf - Error ellipsoid and axis orientation. Error ellipsoid: (z - mz)ᵀ P⁻¹ (z - mz) ≤ K. Since P = Pᵀ, to distinct eigenvalues correspond orthogonal eigenvectors. Assuming that P is diagonalizable, P = T D Tᵀ with D = diag(λ1, λ2, ..., λn) and T Tᵀ = I. After the coordinate transformation w = Tᵀ z, the error ellipsoid becomes
(w - mw)ᵀ D⁻¹ (w - mw) ≤ K
In the new coordinate system, the ellipsoid axes are aligned with the axes of the new reference frame.
41 MEANING OF THE COVARIANCE MATRIX Generalities on the Gaussian pdf - Error ellipse and reference-frame axes. Example for n = 2, z = [x y]ᵀ, P = diag(σx², σy²):
(x - mx)²/σx² + (y - my)²/σy² ≤ K
is an ellipse centered at (mx, my) with semi-axes √K σx and √K σy, aligned with the x and y axes.
42 MEANING OF THE COVARIANCE MATRIX Generalities on the Gaussian pdf - Error ellipse and reference-frame axes. Example for n = 2, z = [x y]ᵀ, with correlated components:
P = [σx²  ρσxσy; ρσxσy  σy²]
The eigenvalues are λ1,2 = ½ [σx² + σy² ± √((σx² - σy²)² + 4ρ²σx²σy²)]. The ellipse [x y] P⁻¹ [x y]ᵀ ≤ K has semi-axes √(K λ1) and √(K λ2), and its major axis makes an angle α with the x axis given by tan(2α) = 2ρσxσy/(σx² - σy²), with -π/4 ≤ α ≤ π/4.
43 DISCRETE KALMAN FILTER Probabilistic interpretation of the error ellipsoid - p(x(k)|Z(k)) ~ N(x̂(k|k), P(k|k)). Given x̂(k|k) and P(k|k) it is possible to define the locus where, with a given probability, the values of the random vector x(k) lie: a hyperellipsoid with center at x̂(k|k) and with semi-axes proportional to the square roots of the eigenvalues of P(k|k).
44 DISCRETE KALMAN FILTER Probabilistic interpretation of the error ellipsoid - p(x(k)|Z(k)) ~ N(x̂(k|k), P(k|k)). Example for n = 2:
M = {x : [x - x̂(k|k)]ᵀ P(k|k)⁻¹ [x - x̂(k|k)] ≤ K}
Pr{x(k) ∈ M} is a function of K; a pre-specified value of this probability can be obtained by an appropriate choice of K.
45 DISCRETE KALMAN FILTER Probabilistic interpretation of the error ellipsoid - For x(k) in R^n with p(x(k)|Z(k)) ~ N(x̂(k|k), P(k|k)), the quantity
[x - x̂(k|k)]ᵀ P(k|k)⁻¹ [x - x̂(k|k)]
is a scalar random variable with a χ² distribution with n degrees of freedom. How to choose K for a desired probability? Just consult a chi-square distribution table. Probability = 90%: n = 1 gives K = 2.71, n = 2 gives K = 4.61. Probability = 95%: n = 1 gives K = 3.84, n = 2 gives K = 5.99.
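A sketch of this recipe, using scipy's χ² quantile function in place of the table (n = 2); the helper name confidence_ellipse and its return convention are our choices:

```python
import numpy as np
from scipy.stats import chi2

def confidence_ellipse(x_hat, P, prob=0.95):
    """Ellipse {x : (x - x̂)ᵀ P⁻¹ (x - x̂) <= K} that contains x(k) with
    the given probability, for n = 2. Returns center, semi-axes, angle."""
    K = chi2.ppf(prob, df=2)                   # e.g. K = 5.99 for 95%
    eigvals, eigvecs = np.linalg.eigh(P)       # P = T D Tᵀ, ascending λi
    semi_axes = np.sqrt(K * eigvals)           # semi-axis lengths √(K λi)
    angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])  # major-axis angle
    return x_hat, semi_axes, angle
```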
46 DISCRETE KALMAN FILTER The error ellipsoid and the filter dynamics - Prediction cycle (illustrated with error ellipses): starting from x̂(k|k) and P(k|k), the input u(k) and the system noise w(k) (covariance Q) drive the state through x(k+1) = A x(k) + B u(k) + G w(k), and the estimate and its ellipse are propagated through
x̂(k+1|k) = A x̂(k|k) + B u(k)
P(k+1|k) = A P(k|k) Aᵀ + G Q Gᵀ
47 DISCRETE KALMAN FILTER The error ellipsoid and the filter dynamics - Filtering cycle (illustrated with error ellipses): the measurement z(k+1) = C(k+1) x(k+1) + v(k+1) (noise covariance R) produces the innovation r(k+1) = z(k+1) - C(k+1) x̂(k+1|k), with covariance S(k+1) = C(k+1) P(k+1|k) C(k+1)ᵀ + R(k+1), and the estimate and its ellipse are updated through
x̂(k+1|k+1) = x̂(k+1|k) + K(k+1) r(k+1)
P(k+1|k+1) = P(k+1|k) - K(k+1) C(k+1) P(k+1|k)
48 EXTENDED KALMAN FILTER - Nonlinear dynamics, white Gaussian system and observation noise:
x(k+1) = f_k(x(k), u(k)) + w(k)
z(k) = h_k(x(k)) + v(k)
x(0) ~ N(x̄0, P0), E[w(k) w(j)ᵀ] = Q(k) δ_kj, E[v(k) v(j)ᵀ] = R(k) δ_kj.
QUESTION: which is the MMSE (minimum mean-square error) estimate of x(k+1)? The conditional mean x̂(k+1|k+1) = E[x(k+1)|Z(k+1)]. Due to the nonlinearity of the system, p(x(k)|Z(k)) and p(x(k+1)|Z(k)) are non-Gaussian.
49 EXTENDED KALMAN FILTER - (Optimal) ANSWER: the MMSE estimate is given by a nonlinear filter that propagates the conditional pdf. The EKF gives an approximation of the optimal estimate: the nonlinearities are approximated by a linearized version of the nonlinear model around the last state estimate. For this approximation to be valid, the linearization should be a good approximation of the nonlinear model in all the uncertainty domain associated with the state estimate.
50 EXTENDED KALMAN FILTER - One cycle of the EKF:
p(x(k)|Z(k)), x̂(k|k) -> linearize x(k+1) = f_k(x(k), u(k)) + w(k) around x̂(k|k) and apply the KF prediction to the linearized dynamics -> p(x(k+1)|Z(k)), x̂(k+1|k) -> linearize z(k+1) = h_{k+1}(x(k+1)) + v(k+1) around x̂(k+1|k) and apply the KF update -> p(x(k+1)|Z(k+1)), x̂(k+1|k+1)
51 EXTENDED KALMAN FILTER - Linearize x(k+1) = f_k(x(k), u(k)) + w(k) around x̂(k|k):
f_k(x(k), u(k)) ≈ f_k(x̂(k|k), u(k)) + ∇f_k (x(k) - x̂(k|k)) + ...
x(k+1) ≈ ∇f_k x(k) + w(k) + [f_k(x̂(k|k), u(k)) - ∇f_k x̂(k|k)]  (known input)
Prediction cycle of the KF applied to the linearized dynamics:
x̂(k+1|k) = ∇f_k x̂(k|k) + [f_k(x̂(k|k), u(k)) - ∇f_k x̂(k|k)] = f_k(x̂(k|k), u(k))
P(k+1|k) = ∇f_k P(k|k) ∇f_kᵀ + Q(k)
52 EXTENDED KALMAN FILTER - Linearize z(k+1) = h_{k+1}(x(k+1)) + v(k+1) around x̂(k+1|k):
h_{k+1}(x(k+1)) ≈ h_{k+1}(x̂(k+1|k)) + ∇h_{k+1} (x(k+1) - x̂(k+1|k))
z(k+1) ≈ ∇h_{k+1} x(k+1) + v(k+1) + [h_{k+1}(x̂(k+1|k)) - ∇h_{k+1} x̂(k+1|k)]  (known term)
Update cycle of the KF applied to the linearized measurement equation:
x̂(k+1|k+1) = x̂(k+1|k) + K(k+1) [z(k+1) - h_{k+1}(x̂(k+1|k))]
K(k+1) = P(k+1|k) ∇h_{k+1}ᵀ [∇h_{k+1} P(k+1|k) ∇h_{k+1}ᵀ + R(k+1)]⁻¹
P(k+1|k+1) = P(k+1|k) - P(k+1|k) ∇h_{k+1}ᵀ [∇h_{k+1} P(k+1|k) ∇h_{k+1}ᵀ + R(k+1)]⁻¹ ∇h_{k+1} P(k+1|k)
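A compact Python sketch of one full EKF cycle under this model; f, h and the Jacobian callbacks F_jac, H_jac are user-supplied, and all names are our choices:

```python
import numpy as np

def ekf_step(x_hat, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One EKF cycle for x(k+1) = f(x(k), u(k)) + w(k), z(k) = h(x(k)) + v(k).
    f and h are the nonlinear maps; F_jac and H_jac return their Jacobians
    (∇f evaluated at x̂(k|k), ∇h evaluated at x̂(k+1|k))."""
    # Prediction: propagate the mean through f, the covariance through ∇f.
    F = F_jac(x_hat, u)
    x_pred = f(x_hat, u)
    P_pred = F @ P @ F.T + Q
    # Update: linearize h around the predicted state x̂(k+1|k).
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # gain of the linearized filter
    x_new = x_pred + K @ (z - h(x_pred))      # x̂(k+1|k+1)
    P_new = P_pred - K @ H @ P_pred           # P(k+1|k+1)
    return x_new, P_new
```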
53 References -
Anderson, B. D. O., Moore, J. B., Optimal Filtering, Prentice-Hall, 1979.
Athans, M., Dynamic Stochastic Estimation, Prediction and Smoothing, Series of Lectures, Spring 1999.
Kamen, E. W., Su, J. K., Introduction to Optimal Estimation, Springer, 1999.
Maybeck, Peter S., The Kalman Filter: an Introduction to Concepts.
Mendel, Jerry M., Lessons in Digital Estimation Theory, Prentice-Hall, 1987.
Ribeiro, M. Isabel, Notas Dispersas sobre Filtragem de Kalman, CAPS, IST, June 1989.