# Master's thesis tutorial: part III


1 For the Autonomous Compliant Research group. Tinne De Laet, Wilm Decré, Diederik Verscheure. Katholieke Universiteit Leuven, Department of Mechanical Engineering, PMA Division. 30 October 2006

2 Outline

3 General: Outline
1. General
2. Basic concepts in probability
3. Recursive state estimation
4. Gaussian filters (Statistics-based methods)
5. Nonparametric methods (Sample-based filters)
6. Bayesian networks
7. BFL
8. On-line links
9. Further reading

4 General: Probabilistic state estimation
- Estimating state from sensor data.
- The state is often not fully observable.
- Sensor data are corrupted by noise.
- Example: ultrasonic (US) sensor measuring the distance to a wall (figure).

5 Basic concepts in probability: Outline

6 Basic concepts in probability: Random variables and probability
- Random variable $X$ with value $x$.
- Discrete case: probability $p(X = x) = p(x)$, with $\sum_x p(x) = 1$.
- Continuous case: probability density function (PDF) $p(x)$; the probability that $X \in (x, x + \delta x)$ equals $p(x)\,\delta x$ for $\delta x \to 0$, with $\int p(x)\,dx = 1$.
- Remark: the same holds for vector variables $\mathbf{X}$ and $\mathbf{x}$.

7 Basic concepts in probability: Gaussian
Example: Gaussian with mean $\mu$ and variance $\sigma^2$:
$$p(x) = \mathcal{N}(x;\, \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
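As a quick numerical check, the density above can be evaluated directly; the helper name `gaussian_pdf` is ours, and the coarse Riemann sum only sanity-checks that the density integrates to about 1.

```python
import math

def gaussian_pdf(x, mu, sigma2):
    """Evaluate the univariate Gaussian density N(x; mu, sigma^2)."""
    return math.exp(-0.5 * (x - mu) ** 2 / sigma2) / math.sqrt(2 * math.pi * sigma2)

# Coarse Riemann sum over [-6, 6]: the standard normal density integrates to ~1.
dx = 0.001
total = sum(gaussian_pdf(-6.0 + i * dx, 0.0, 1.0) * dx for i in range(12001))
```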

8 Basic concepts in probability: Probability distributions
- Joint distribution: $p(x, y) = p(X = x, Y = y)$.
- Conditional probability: $p(x \mid y)$.
- Two fundamental rules of probability:
  - Sum rule (theorem of total probability, marginalization):
    - Discrete case: $p(x) = \sum_y p(x \mid y)\, p(y) = \sum_y p(x, y)$
    - Continuous case: $p(x) = \int p(x \mid y)\, p(y)\, dy$
  - Product rule: $p(x, y) = p(x \mid y)\, p(y) = p(y \mid x)\, p(x)$
- Independence: $p(x, y) = p(x)\, p(y)$.
- Conditional independence: $p(x, y \mid z) = p(x \mid z)\, p(y \mid z)$.
- Bayes rule: $p(x \mid y) = \frac{p(y \mid x)\, p(x)}{p(y)}$.
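The sum rule, product rule, and Bayes rule can be verified numerically on a toy joint distribution over two binary variables (the numbers below are made up for illustration):

```python
# Toy joint distribution p(x, y) over two binary variables.
p_xy = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

# Sum rule (marginalization): p(x) = sum_y p(x, y)
p_x = {x: sum(p for (xx, _), p in p_xy.items() if xx == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yy), p in p_xy.items() if yy == y) for y in (0, 1)}

# Product rule: p(x | y) = p(x, y) / p(y)
p_x_given_y = {(x, y): p_xy[(x, y)] / p_y[y] for (x, y) in p_xy}
p_y_given_x = {(y, x): p_xy[(x, y)] / p_x[x] for (x, y) in p_xy}

# Bayes rule: p(x | y) = p(y | x) p(x) / p(y) -- must match the product rule
bayes = {(x, y): p_y_given_x[(y, x)] * p_x[x] / p_y[y] for (x, y) in p_xy}
```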

9 Basic concepts in probability: Terminology in estimation
- State $x$ and data or measurements $y$.
- Prior probability distribution: $p(x)$.
- What we want to know is the posterior probability distribution $p(x \mid y)$: use Bayes rule, $p(x \mid y) = \frac{p(y \mid x)\, p(x)}{p(y)}$.
- Expectation or expected value of a random variable $X$:
  - Discrete case: $E[X] = \sum_x x\, p(x)$
  - Continuous case: $E[X] = \int x\, p(x)\, dx$
- Variance of a random variable $X$: $\mathrm{var}[X] = E\left[(X - E[X])^2\right] = E[X^2] - E[X]^2$.
- Covariance matrix of two vector variables $\mathbf{X}$ and $\mathbf{Y}$: $\mathrm{cov}[\mathbf{X}, \mathbf{Y}] = E\left[(\mathbf{X} - E[\mathbf{X}])(\mathbf{Y} - E[\mathbf{Y}])^T\right]$; special case: $\mathrm{cov}[\mathbf{X}, \mathbf{X}] = \mathrm{cov}[\mathbf{X}]$.
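For a small discrete pmf (values invented for illustration), the expectation and the two equivalent variance formulas above can be checked directly:

```python
# Discrete random variable X with an invented pmf.
pmf = {1: 0.2, 2: 0.5, 3: 0.3}

mean = sum(x * p for x, p in pmf.items())                       # E[X]
second_moment = sum(x * x * p for x, p in pmf.items())          # E[X^2]
var_direct = sum((x - mean) ** 2 * p for x, p in pmf.items())   # E[(X - E[X])^2]
var_shortcut = second_moment - mean ** 2                        # E[X^2] - E[X]^2
```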

10 Basic concepts in probability Estimation and identification

11 Basic concepts in probability Parameter identification

12 Basic concepts in probability State estimation

13 Basic concepts in probability: The problem of on-line estimation
On-line estimation is generally less robust than off-line estimation, because statistics (mean and covariance for the (extended/unscented) Kalman filter) or samples (particles for the particle filter) are used to summarize the information gathered at a certain time step. Often, such summary statistics or samples do not fully describe the gathered knowledge. Hence, information is thrown away at every time step, and it cannot be recovered afterwards.

14 Basic concepts in probability On-line state estimation

15 Recursive state estimation: Outline

16 Recursive state estimation Recursive state estimation

17 Recursive state estimation: Recursive state estimation
- State $x$, measurements $z$, control $u$; the subscript $t$ denotes the time instant.
- Goal of recursive estimation: the posterior pdf $p(x_t \mid z_{1:t}, u_{1:t})$.
- Markov condition: $p(x_t \mid x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t)$.
- Two steps:
  - state transition probability $p(x_t \mid x_{t-1}, u_t)$: PREDICTION
  - measurement probability $p(z_t \mid x_t)$: CORRECTION
- This corresponds to a dynamic Bayesian network (DBN): $X_0 \to X_1 \to X_2 \to \dots \to X_k \to \dots$, with each state $X_k$ emitting a measurement $Z_k$.

18 Recursive state estimation: Belief
- The belief reflects the robot's internal knowledge about the state of the environment: $bel(x_t) = p(x_t \mid z_{1:t}, u_{1:t})$.
- The belief just before incorporating the latest measurement $z_t$, the prediction, is denoted $\overline{bel}(x_t) = p(x_t \mid z_{1:t-1}, u_{1:t})$.

19 Recursive state estimation: A general Bayes filter
Algorithm Bayes_filter($bel(x_{t-1})$, $u_t$, $z_t$):
  for all $x_t$ do
    $\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$ (prediction)
    $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$ (correction)
  endfor
  return $bel(x_t)$
Remark: an initial belief $bel(x_0)$ is needed in the first time step.
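The algorithm above can be sketched on a discretized 1-D state space; the random-walk transition model and Gaussian measurement likelihood below are illustrative assumptions, not part of the slide:

```python
import numpy as np

grid = np.linspace(-5.0, 5.0, 201)   # discretized 1-D state space

def predict(bel, motion_std=0.5):
    # bel_bar(x_t) = sum over x_{t-1} of p(x_t | x_{t-1}) * bel(x_{t-1})
    trans = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / motion_std) ** 2)
    trans /= trans.sum(axis=0, keepdims=True)   # each column is p(x_t | x_{t-1})
    return trans @ bel

def correct(bel_bar, z, meas_std=1.0):
    # bel(x_t) = eta * p(z_t | x_t) * bel_bar(x_t)
    lik = np.exp(-0.5 * ((z - grid) / meas_std) ** 2)
    post = lik * bel_bar
    return post / post.sum()

bel = np.full(grid.size, 1.0 / grid.size)       # uniform initial belief bel(x_0)
for z in (1.0, 1.2, 0.9):
    bel = correct(predict(bel), z)
estimate = float(grid[np.argmax(bel)])          # MAP estimate on the grid
```

The belief stays normalized after every correction, and the peak moves toward the region supported by the measurements.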

20 Gaussian filters (Statistics-based methods): Outline
1. General
2. Basic concepts in probability
3. Recursive state estimation
4. Gaussian filters (Statistics-based methods)
   - Kalman filter
   - Extended Kalman filter
   - Iterated extended Kalman filter
   - Unscented Kalman filter
   - Information filter
   - Extended information filter
   - Nonminimal state Kalman filter
5. Nonparametric methods (Sample-based filters)
6. Bayesian networks
7. BFL
8. On-line links
9. Further reading

21 Gaussian filters (Statistics-based methods): Gaussian filters
- Earliest tractable implementations of the Bayes filter for continuous spaces.
- Most popular, despite shortcomings.
- Basic idea: beliefs are represented by multivariate normal distributions.
  - Unimodal.
  - Characterized by two sets of parameters (moments parametrization): mean ($\mu$) and covariance ($\Sigma$).
  - Another parametrization is possible (canonical parametrization): see the information filter.
- Poor match for global estimation problems in which many distinct hypotheses exist!

22 Gaussian filters (Statistics-based methods): Gaussian filters
Different types of Gaussian filters: Kalman filter (KF), Extended Kalman filter (EKF), Iterated extended Kalman filter (IEKF), Unscented Kalman filter (UKF), Information filter (IF).

23 Gaussian filters (Statistics-based methods): Kalman filter assumptions
Assumptions:
- Linear Gaussian system:
  - Linear state transition: $x_t = A_t x_{t-1} + B_t u_t + \epsilon_t$, with additive Gaussian noise $\epsilon_t$.
  - Linear measurement model: $z_t = H_t x_t + \delta_t$, with additive Gaussian noise $\delta_t$.
- Initial belief $bel(x_0)$ is Gaussian.
Remark: comparable to a least-squares solution with stochastically inspired weights.

24 Gaussian filters (Statistics-based methods): Kalman filter algorithm
Algorithm Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
  PREDICTION:
    $\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t$
    $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$
  CORRECTION:
    $K_t = \bar{\Sigma}_t H_t^T \left( H_t \bar{\Sigma}_t H_t^T + Q_t \right)^{-1}$
    $\mu_t = \bar{\mu}_t + K_t (z_t - H_t \bar{\mu}_t)$
    $\Sigma_t = (I - K_t H_t)\, \bar{\Sigma}_t$
  return $\mu_t$, $\Sigma_t$
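A minimal sketch of one predict/correct cycle in the slide's notation ($R_t$ the process noise covariance, $Q_t$ the measurement noise covariance); the scalar constant-estimation example is our own:

```python
import numpy as np

def kalman_step(mu, Sigma, u, z, A, B, H, R, Q):
    """One Kalman predict/correct cycle."""
    # PREDICTION
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # CORRECTION
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    mu_new = mu_bar + K @ (z - H @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new

# Estimating a constant scalar state from noisy measurements (hypothetical setup).
A = np.eye(1); B = np.zeros((1, 1)); H = np.eye(1)
R = 1e-6 * np.eye(1)   # (almost) no process noise: the state is constant
Q = np.eye(1)          # measurement noise covariance
mu, Sigma = np.zeros(1), 10.0 * np.eye(1)
for z in (1.1, 0.9, 1.05, 0.95):
    mu, Sigma = kalman_step(mu, Sigma, np.zeros(1), np.array([z]), A, B, H, R, Q)
```

After a few measurements the mean approaches the true value and the covariance shrinks.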

25 Gaussian filters (Statistics-based methods): Extended Kalman filter
General: in practice the process and measurement models are rarely linear! The EKF relaxes the linearity assumption.
Assumptions:
- Nonlinear Gaussian system:
  - Nonlinear state transition: $x_t = g(u_t, x_{t-1}) + \epsilon_t$, with additive Gaussian noise $\epsilon_t$.
  - Nonlinear measurement model: $z_t = h(x_t) + \delta_t$, with additive Gaussian noise $\delta_t$.
- Initial belief $bel(x_0)$ is Gaussian.
Result: the true belief is no longer Gaussian.

26 Gaussian filters (Statistics-based methods) Extended Kalman filter Gaussian approximation The extended Kalman filter calculates a Gaussian approximation to the true belief.

27 Gaussian filters (Statistics-based methods): Extended Kalman filter: linearization effect (figure: Gaussian pdf, nonlinear function, transformed pdf, linearised approximation)

28 Gaussian filters (Statistics-based methods): Extended Kalman filter linearizations
- EKF uses a (first-order) Taylor approximation.
- Linearization in the most likely state, which for Gaussians is the mean of the posterior.
- Linearization of the system model: $g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + g'(u_t, \mu_{t-1})\,(x_{t-1} - \mu_{t-1})$, with $g'(u_t, x_{t-1}) = \frac{\partial g(u_t, x_{t-1})}{\partial x_{t-1}}$ and $A_t = g'(u_t, \mu_{t-1})$.
- Linearization of the measurement model: $h(x_t) \approx h(\bar{\mu}_t) + h'(\bar{\mu}_t)\,(x_t - \bar{\mu}_t)$, with $h'(x_t) = \frac{\partial h(x_t)}{\partial x_t}$ and $H_t = h'(\bar{\mu}_t)$.

29 Gaussian filters (Statistics-based methods): Extended Kalman filter algorithm
Algorithm extended_Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
  $\bar{\mu}_t = g(u_t, \mu_{t-1})$
  $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$
  $K_t = \bar{\Sigma}_t H_t^T \left( H_t \bar{\Sigma}_t H_t^T + Q_t \right)^{-1}$
  $\mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t))$
  $\Sigma_t = (I - K_t H_t)\, \bar{\Sigma}_t$
  return $\mu_t$, $\Sigma_t$
- Very similar to the Kalman filter algorithm!
- Linear predictions in the KF are replaced by their nonlinear generalizations in the EKF.
- The EKF uses Jacobians instead of the linear system matrices of the KF.
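One EKF cycle for a scalar state, assuming an identity process model and the illustrative nonlinear measurement $h(x) = x^2$ (neither is from the slides); the Jacobian $H_t$ replaces the linear measurement matrix:

```python
def ekf_step(mu, sigma2, z, r, q):
    """Scalar EKF: g(x) = x (so A_t = 1) and h(x) = x^2."""
    # PREDICTION
    mu_bar, sigma2_bar = mu, sigma2 + r
    # CORRECTION: Jacobian H_t = dh/dx evaluated at mu_bar
    H = 2.0 * mu_bar
    K = sigma2_bar * H / (H * sigma2_bar * H + q)
    mu_new = mu_bar + K * (z - mu_bar ** 2)
    sigma2_new = (1.0 - K * H) * sigma2_bar
    return mu_new, sigma2_new

mu, s2 = 1.5, 0.5              # initial belief near the true state x = 2
for z in (4.1, 3.9, 4.05):     # noisy measurements of x^2 = 4
    mu, s2 = ekf_step(mu, s2, z, r=0.01, q=0.1)
```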

30 Gaussian filters (Statistics-based methods): Extended Kalman filter: advantages and limitations
Advantages:
- Simplicity and computational efficiency (unimodal representation).
- If the nonlinear functions are approximately linear at the mean of the estimate and the covariance is small, the EKF performs well.
Limitation:
- The approximation of state transitions and measurements by a linear Taylor expansion can be insufficient. The goodness of the approximation depends on two main factors: the degree of uncertainty and the degree of nonlinearity of the functions.

31 Gaussian filters (Statistics-based methods): Iterated extended Kalman filter: general
The IEKF tries to do better than the EKF by linearizing the measurement model around the updated state estimate. This is achieved by iteration:
- First linearize around the predicted state estimate ($\bar{\mu}_t$) and do the measurement update.
- Linearize the measurement model around the newly obtained estimate ($\mu_t^1$, where the superscript 1 stands for the first iteration).
- Iterate this process.

32 Gaussian filters (Statistics-based methods): Iterated extended Kalman filter algorithm
Algorithm iterated_extended_Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
  $\bar{\mu}_t = g(u_t, \mu_{t-1})$
  $\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t$
  $\mu_t^0 = \bar{\mu}_t$
  for $i = 1 : n$ do
    $H_t^i = \frac{\partial h(x_t)}{\partial x_t}\Big|_{x_t = \mu_t^{i-1}}$
    $K_t^i = \bar{\Sigma}_t (H_t^i)^T \left( H_t^i \bar{\Sigma}_t (H_t^i)^T + Q_t \right)^{-1}$
    $\eta^i = z_t - h(\mu_t^{i-1}) - H_t^i (\bar{\mu}_t - \mu_t^{i-1})$
    $\mu_t^i = \bar{\mu}_t + K_t^i \eta^i$
  endfor
  $\Sigma_t = (I - K_t^n H_t^n)\, \bar{\Sigma}_t$
  $\mu_t = \mu_t^n$
  return $\mu_t$, $\Sigma_t$
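The iterated measurement update can be sketched for a scalar state with the illustrative measurement model $h(x) = x^2$ (our choice, not from the slides); each iteration re-linearizes around the newest estimate:

```python
def iekf_update(mu_bar, s2_bar, z, q, n_iter=5):
    """IEKF measurement update for h(x) = x^2: re-linearize around the
    latest iterate and repeat."""
    mu = mu_bar
    K = H = 0.0
    for _ in range(n_iter):
        H = 2.0 * mu                                  # Jacobian of h at mu_i
        K = s2_bar * H / (H * s2_bar * H + q)
        # IEKF innovation: z - h(mu_i) - H (mu_bar - mu_i)
        mu = mu_bar + K * (z - mu ** 2 - H * (mu_bar - mu))
    s2 = (1.0 - K * H) * s2_bar
    return mu, s2

# Prior near the true state x = 2; a single measurement of x^2 = 4.
mu, s2 = iekf_update(1.5, 0.5, 4.0, q=0.1)
```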

33 Gaussian filters (Statistics-based methods): Iterated extended Kalman filter: advantages and limitations
Advantages:
- Simplicity and computational efficiency (unimodal representation).
- Outperforms the EKF for certain nonlinear measurement models.
- The IEKF is the best way to handle nonlinear measurement models that fully observe the part of the state that makes the measurement model nonlinear.
Limitations:
- Computationally more involved than the extended Kalman filter.
- Unimodal representation.

34 Gaussian filters (Statistics-based methods): Unscented Kalman filter: general
- The EKF is only one way to linearize the transformation of a Gaussian.
- The unscented Kalman filter (UKF) performs a stochastic linearization through the use of a weighted statistical linear regression process.

35 Gaussian filters (Statistics-based methods): Unscented Kalman filter: illustration of the linearization (figure: Gaussian pdf, nonlinear function, transformed pdf, unscented approximation)

36 Gaussian filters (Statistics-based methods): Unscented Kalman filter procedure
Procedure:
- Extract sigma points from the Gaussian. These points are located at the mean and symmetrically along the main axes of the covariance (two per dimension).
- Two weights are associated with each sigma point (one for calculating the mean, one for the covariance).
- Pass the sigma points through the process model $g$.
- The parameters ($\mu$ and $\Sigma$) are extracted from the mapped sigma points.
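A sketch of the sigma-point construction and unscented transform for a 1-D Gaussian, using $\kappa$-style weights in which the mean and covariance weights coincide (one of several conventions in the literature); for a linear map the transform is exact, which the example exploits as a check:

```python
import numpy as np

def unscented_transform(mu, sigma2, g, kappa=2.0):
    """Sigma points and weights for a 1-D Gaussian (kappa parametrization)."""
    n = 1
    gamma = np.sqrt(n + kappa)
    chi = np.array([mu,
                    mu + gamma * np.sqrt(sigma2),
                    mu - gamma * np.sqrt(sigma2)])     # sigma points
    w = np.array([kappa / (n + kappa),
                  1.0 / (2 * (n + kappa)),
                  1.0 / (2 * (n + kappa))])            # weights (sum to 1)
    y = g(chi)                                         # map through the model
    mean = np.dot(w, y)                                # recovered mean
    cov = np.dot(w, (y - mean) ** 2)                   # recovered covariance
    return mean, cov

# For the linear map y = 3x + 1 with x ~ N(1, 0.25), the exact result is
# mean 4 and variance 9 * 0.25 = 2.25.
m, c = unscented_transform(1.0, 0.25, lambda x: 3.0 * x + 1.0)
```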

37 Gaussian filters (Statistics-based methods): Unscented Kalman filter algorithm
Algorithm unscented_Kalman_filter($\mu_{t-1}$, $\Sigma_{t-1}$, $u_t$, $z_t$):
  $\chi_{t-1} = \left( \mu_{t-1} \;\; \mu_{t-1} + \gamma\sqrt{\Sigma_{t-1}} \;\; \mu_{t-1} - \gamma\sqrt{\Sigma_{t-1}} \right)$
  $\bar{\chi}_t^{*} = g(u_t, \chi_{t-1})$
  $\bar{\mu}_t = \sum_{i=0}^{2n} w_m^i\, \bar{\chi}_t^{*i}$
  $\bar{\Sigma}_t = \sum_{i=0}^{2n} w_c^i \left( \bar{\chi}_t^{*i} - \bar{\mu}_t \right) \left( \bar{\chi}_t^{*i} - \bar{\mu}_t \right)^T + R_t$
  $\bar{\chi}_t = \left( \bar{\mu}_t \;\; \bar{\mu}_t + \gamma\sqrt{\bar{\Sigma}_t} \;\; \bar{\mu}_t - \gamma\sqrt{\bar{\Sigma}_t} \right)$
  $\bar{Z}_t = h(\bar{\chi}_t)$
  $\hat{z}_t = \sum_{i=0}^{2n} w_m^i\, \bar{Z}_t^i$
  $S_t = \sum_{i=0}^{2n} w_c^i \left( \bar{Z}_t^i - \hat{z}_t \right) \left( \bar{Z}_t^i - \hat{z}_t \right)^T + Q_t$
  $\bar{\Sigma}_t^{x,z} = \sum_{i=0}^{2n} w_c^i \left( \bar{\chi}_t^i - \bar{\mu}_t \right) \left( \bar{Z}_t^i - \hat{z}_t \right)^T$
  $K_t = \bar{\Sigma}_t^{x,z} S_t^{-1}$
  $\mu_t = \bar{\mu}_t + K_t (z_t - \hat{z}_t)$
  $\Sigma_t = \bar{\Sigma}_t - K_t S_t K_t^T$
  return $\mu_t$, $\Sigma_t$

38 Gaussian filters (Statistics-based methods): Unscented Kalman filter: advantages and limitations
Advantages:
- The UKF is more accurate than the first-order Taylor series expansion used by the EKF.
- The UKF performs better than the EKF and IEKF for the process update (it does not use only local information).
- No need to calculate derivatives of the functions (interesting when they are discontinuous, ...): a derivative-free filter.
Limitation:
- Slightly slower than the extended Kalman filter.

39 Gaussian filters (Statistics-based methods): Unscented Kalman filter: some remarks
Remark: there is a resemblance to the sample-based representation used in particle filters (see next section). Key difference: sigma points are determined deterministically, while particle filters draw samples randomly. Therefore the UKF is more efficient than the PF when the underlying distribution is approximately Gaussian. However, if the belief is highly non-Gaussian, the UKF's performance is poor.

40 Gaussian filters (Statistics-based methods): Information filter: general
- Dual of the Kalman filter.
- Represents the belief by a Gaussian, but in the canonical parametrization: information matrix and information vector.
- Same assumptions as the Kalman filter.
- Different update equations: what is computationally complex in one parametrization happens to be simple in the other (and vice versa).
- Canonical parametrization: information matrix (or precision matrix) $\Omega = \Sigma^{-1}$; information vector $\xi = \Sigma^{-1} \mu$.

41 Gaussian filters (Statistics-based methods): Information filter algorithm
Algorithm information_filter($\xi_{t-1}$, $\Omega_{t-1}$, $u_t$, $z_t$):
  PREDICTION:
    $\bar{\Omega}_t = \left( A_t \Omega_{t-1}^{-1} A_t^T + R_t \right)^{-1}$
    $\bar{\xi}_t = \bar{\Omega}_t \left( A_t \Omega_{t-1}^{-1} \xi_{t-1} + B_t u_t \right)$
  CORRECTION:
    $\Omega_t = H_t^T Q_t^{-1} H_t + \bar{\Omega}_t$
    $\xi_t = H_t^T Q_t^{-1} z_t + \bar{\xi}_t$
  return $\xi_t$, $\Omega_t$
- The computationally most involved step is the prediction.
- In the IF, measurement updates are additive; even more efficient if measurements carry information about only a subset of all state variables at a time.
- In the KF, process updates are additive; even more efficient if only a subset of variables is affected by a control, or if variables evolve independently of each other.
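The additive measurement update in the canonical parametrization can be checked against the equivalent Kalman correction (scalar example; the numbers are illustrative):

```python
import numpy as np

Sigma = np.array([[2.0]]); mu = np.array([1.0])   # moments parametrization
Omega = np.linalg.inv(Sigma)                      # information matrix
xi = Omega @ mu                                   # information vector

H = np.array([[1.0]]); Q = np.array([[0.5]]); z = np.array([2.0])

# Information filter correction: purely additive updates.
Omega_new = Omega + H.T @ np.linalg.inv(Q) @ H
xi_new = xi + H.T @ np.linalg.inv(Q) @ z
mu_if = np.linalg.solve(Omega_new, xi_new)        # recover the mean

# Kalman filter correction for comparison.
K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + Q)
mu_kf = mu + K @ (z - H @ mu)
```

Both parametrizations yield the same posterior mean, as the duality implies.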

42 Gaussian filters (Statistics-based methods): Extended information filter
General: extends the IF to the nonlinear case (similar to the EKF).
Algorithm extended_information_filter($\xi_{t-1}$, $\Omega_{t-1}$, $u_t$, $z_t$):
  PREDICTION:
    $\mu_{t-1} = \Omega_{t-1}^{-1} \xi_{t-1}$
    $\bar{\Omega}_t = \left( A_t \Omega_{t-1}^{-1} A_t^T + R_t \right)^{-1}$
    $\bar{\xi}_t = \bar{\Omega}_t\, g(u_t, \mu_{t-1})$
  CORRECTION:
    $\bar{\mu}_t = g(u_t, \mu_{t-1})$
    $\Omega_t = \bar{\Omega}_t + H_t^T Q_t^{-1} H_t$
    $\xi_t = \bar{\xi}_t + H_t^T Q_t^{-1} \left( z_t - h(\bar{\mu}_t) + H_t \bar{\mu}_t \right)$
  return $\xi_t$, $\Omega_t$

43 Gaussian filters (Statistics-based methods): Extended information filter: advantages
- Easy to represent global uncertainty: $\Omega = 0$.
- Tends to be numerically more stable than the Kalman filter in many applications.
- Allows information to be integrated without immediately resolving it into probabilities (interesting for large estimation problems). This can be done by adding new information locally to the system (an extension is necessary).
- Natural fit for multi-robot problems (adding information is commutative).

44 Gaussian filters (Statistics-based methods): Extended information filter: limitation
The need to recover a state estimate in the update step (inversion of the information matrix) is an important disadvantage. However, information matrices often exhibit a sparse structure (they can be thought of as sparse graphs: Markov random fields).

45 Gaussian filters (Statistics-based methods): Nonminimal state Kalman filter
- Transforms the original state into a higher-dimensional space where the measurement equations are linear.
- This avoids the accumulation of linearization errors (EKF, IEKF, IF, EIF).
- The transformation is, however, not always possible.

46 Nonparametric methods (Sample-based filters): Outline
1. General
2. Basic concepts in probability
3. Recursive state estimation
4. Gaussian filters (Statistics-based methods)
5. Nonparametric methods (Sample-based filters)
   - Histogram filter
   - Particle filter
6. Bayesian networks
7. BFL
8. On-line links
9. Further reading

47 Nonparametric methods (Sample-based filters): Sample-based filters
- Do not rely on a fixed functional form of the posterior (e.g. Gaussians).
- Approximate the posterior by a finite number of values (a discretization of the belief).
- Choice of the values:
  - Histogram filters: decompose the state space into finitely many regions and represent the cumulative posterior of each region by a single probability value.
  - Particle filters: represent the posterior by finitely many samples.

48 Nonparametric methods (Sample-based filters): Advantages and limitations
Advantages:
- No assumptions on the posterior density: well suited to represent complex multimodal beliefs.
Limitations:
- High computational cost; resource-adaptive algorithms exist.

49 Nonparametric methods (Sample-based filters): Histogram filter (grid-based methods)
- Decomposes the state space into finitely many regions and represents the cumulative posterior of each region by a single probability value.
- Discrete Bayes filters: finite spaces.
- Histogram filters: continuous spaces.

50 Nonparametric methods (Sample-based filters): Discrete Bayes filter
Random variable $X_t$ can take finitely many values.
Algorithm discrete_Bayes_filter($\{p_{k,t-1}\}$, $u_t$, $z_t$):
  for all $k$ do
    $\bar{p}_{k,t} = \sum_i p(X_t = x_k \mid u_t, X_{t-1} = x_i)\, p_{i,t-1}$ (PREDICTION)
    $p_{k,t} = \eta\, p(z_t \mid X_t = x_k)\, \bar{p}_{k,t}$ (CORRECTION)
  endfor
  return $\{p_{k,t}\}$
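A sketch of the discrete Bayes filter on a two-state world {open, closed}; the transition and measurement probabilities below are invented for illustration:

```python
# Two-state world {open, closed}; all probabilities are illustrative.
states = ("open", "closed")
p_trans = {  # p(X_t = s_new | u_t = "push", X_{t-1} = s_old)
    ("open", "open"): 1.0, ("open", "closed"): 0.8,
    ("closed", "open"): 0.0, ("closed", "closed"): 0.2,
}
p_meas = {"open": 0.6, "closed": 0.2}  # p(z_t = "sense_open" | X_t)

belief = {"open": 0.5, "closed": 0.5}  # initial belief
# PREDICTION after the action "push"
bel_bar = {s: sum(p_trans[(s, s0)] * belief[s0] for s0 in states) for s in states}
# CORRECTION after observing "sense_open"
unnorm = {s: p_meas[s] * bel_bar[s] for s in states}
eta = 1.0 / sum(unnorm.values())
belief = {s: eta * p for s, p in unnorm.items()}
```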

51 Nonparametric methods (Sample-based filters): Histogram filter
- Approximate inference tool for continuous state spaces.
- The continuous space is decomposed into finitely many ($K$) bins or regions: $\mathrm{dom}(X_t) = \mathbf{x}_{1,t} \cup \mathbf{x}_{2,t} \cup \dots \cup \mathbf{x}_{K,t}$.
- Trade-off between accuracy and computational burden.
- The posterior becomes a piecewise constant PDF, which assigns a uniform probability to each state $x_t$ within each region $\mathbf{x}_{k,t}$: $p(x_t) = \frac{p_{k,t}}{|\mathbf{x}_{k,t}|}$, with $|\mathbf{x}_{k,t}|$ the volume of the region $\mathbf{x}_{k,t}$.

52 Nonparametric methods (Sample-based filters): Histogram filter: illustration (figure: Gaussian pdf, nonlinear function, transformed pdf, samples)

53 Nonparametric methods (Sample-based filters): Histogram filter decomposition techniques
- Static: a fixed decomposition, chosen in advance, irrespective of the shape of the posterior being approximated.
  - Easier to implement.
  - Possibly wasteful with regard to computational resources.
- Dynamic: adapt the decomposition to the specific shape of the posterior distribution; the less likely a region, the coarser the decomposition.
  - More difficult to implement.
  - Able to make better use of computational resources.
- A similar effect is obtained by selective updating.

54 Nonparametric methods (Sample-based filters): Particle filter: general (sequential Monte Carlo methods)
- Nonparametric implementation of the Bayes filter.
- Approximation of the posterior by a finite number of values, randomly drawn from the posterior distribution (samples): $\mathcal{X}_t := x_t^1, x_t^2, \dots, x_t^M$, with $M$ the number of particles (often large, e.g. $M = 1000$ per dimension).
- Ideally, the likelihood for a state hypothesis $x_t$ to be included in the particle set $\mathcal{X}_t$ is proportional to its posterior belief: $x_t^i \sim p(x_t \mid z_{1:t}, u_{1:t})$.

55 Nonparametric methods (Sample-based filters): Particle filter: general
- Ideally, the likelihood for a state hypothesis $x_t$ to be included in the particle set $\mathcal{X}_t$ is proportional to its posterior belief: $x_t^i \sim p(x_t \mid z_{1:t}, u_{1:t})$.
- This posterior belief is, however, unknown: it is exactly what we want to calculate. Therefore we have to sample from an approximate distribution: importance sampling.

56 Nonparametric methods (Sample-based filters): Importance sampling
Algorithm importance_sampling:
  Require: $M \gg N$
  for $j = 1$ to $M$ do
    Sample $\tilde{x}^j \sim q(x)$
    $w^j = \frac{p(\tilde{x}^j)}{q(\tilde{x}^j)}$
  endfor
  for $i = 1$ to $N$ do
    Sample $x^i \sim \{(\tilde{x}^j, w^j)\}$, $1 \le j \le M$
  endfor
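A self-normalized importance-sampling sketch: estimate $E[X]$ under a target $p = \mathcal{N}(1, 1)$ using samples from a wider proposal $q = \mathcal{N}(0, 4)$; both densities are illustrative choices, not from the slides.

```python
import math
import random

random.seed(0)

def p(x):  # target density: N(1, 1)
    return math.exp(-0.5 * (x - 1.0) ** 2) / math.sqrt(2 * math.pi)

def q(x):  # proposal density: N(0, 4), i.e. standard deviation 2
    return math.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * math.sqrt(2 * math.pi))

xs = [random.gauss(0.0, 2.0) for _ in range(20000)]   # samples from q
ws = [p(x) / q(x) for x in xs]                        # importance weights
estimate = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
```

The self-normalized estimate should be close to the target mean 1.0, even though no sample was drawn from $p$ itself.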

57 Nonparametric methods (Sample-based filters): Particle filter
The particle filter recursively constructs the particle set $\mathcal{X}_t$ from the set $\mathcal{X}_{t-1}$.
Problem: a well-known issue in particle filtering is the degeneracy problem, also called particle deprivation/depletion or sample impoverishment: after a few iterations, all but one particle will have negligible weight. To reduce this effect:
- good choice of the importance density function,
- use of resampling.

58 Nonparametric methods (Sample-based filters): Particle filter algorithm
Algorithm particle_filter($\mathcal{X}_{t-1}$, $u_t$, $z_t$):
  $\bar{\mathcal{X}}_t = \mathcal{X}_t = \emptyset$
  for $m = 1$ to $M$ do
    sample $x_t^m \sim p(x_t \mid u_t, x_{t-1}^m)$
    $w_t^m = p(z_t \mid x_t^m)$
    $\bar{\mathcal{X}}_t = \bar{\mathcal{X}}_t + \langle x_t^m, w_t^m \rangle$
  endfor
  for $m = 1$ to $M$ do
    draw $i$ with probability $\propto w_t^i$
    add $x_t^i$ to $\mathcal{X}_t$
  endfor
  return $\mathcal{X}_t$
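The algorithm above as a minimal bootstrap particle filter; the random-walk process model and Gaussian measurement likelihood are illustrative assumptions:

```python
import math
import random

random.seed(1)

M = 500
particles = [random.uniform(-5.0, 5.0) for _ in range(M)]  # initial belief

def likelihood(z, x, std=0.5):
    # Gaussian measurement likelihood p(z_t | x_t), up to a constant factor
    return math.exp(-0.5 * ((z - x) / std) ** 2)

for z in (1.0, 1.1, 0.9, 1.05):
    # sample x_t^m ~ p(x_t | u_t, x_{t-1}^m): random-walk process model
    particles = [x + random.gauss(0.0, 0.2) for x in particles]
    # importance weights w_t^m = p(z_t | x_t^m), then normalize
    weights = [likelihood(z, x) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resample M particles with probability proportional to the weights
    particles = random.choices(particles, weights=weights, k=M)

estimate = sum(particles) / M   # posterior mean estimate
```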

59 Nonparametric methods (Sample-based filters): Particle filter (figure: Gaussian pdf, nonlinear function, transformed pdf, samples)

60 Nonparametric methods (Sample-based filters): Particle filter
- There are many variants of the particle filter (how particle deprivation is handled, a variable number of particles, ...).
- How many samples should be used?

61 Bayesian networks: Outline

62 Bayesian networks General Bayesian networks are graphical structures for representing the probabilistic relationships among a large number of variables and for doing probabilistic inference with those variables.

63 Bayesian networks: Definition
A Bayesian network consists of the following:
- A set of variables and a set of directed edges between variables.
- Each variable has a finite set of mutually exclusive states.
- The variables together with the directed edges form a directed acyclic graph (DAG).
- To each variable $A$ with parents $B_1, \dots, B_n$, the potential table $P(A \mid B_1, \dots, B_n)$ is attached.
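A minimal two-node network $B \to A$ with its potential tables; $P(A)$ then follows from the sum rule (the numbers are made up for illustration):

```python
# Two-node Bayesian network B -> A, each variable with states {True, False}.
p_b = {True: 0.3, False: 0.7}               # potential table P(B)
p_a_true_given_b = {True: 0.9, False: 0.2}  # potential table P(A=True | B)

# Marginalization (sum rule): P(A=True) = sum_b P(A=True | B=b) P(B=b)
p_a_true = sum(p_a_true_given_b[b] * p_b[b] for b in (True, False))
```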

64 BFL: Outline

65 BFL: Bayesian Filtering Library (BFL)
- Open-source project (C++), started by Klaas Gadeyne.
- State estimation software framework/library.
- BFL supports different filters (in particular particle filters and Kalman filters, but also e.g. grid-based methods) and is easily extensible towards other Bayesian methods.

66 BFL: Bayesian Filtering Library (BFL): What is BFL?
- Bayesian: a fully Bayesian software framework. Different Bayesian algorithms with a maximum of code reuse; easy comparison of the performance of different algorithms.
- Open: potential for maximum reuse of code and for studying algorithms.
- Independent: BFL is decoupled as much as possible from any particular numerical/stochastic library. Furthermore, BFL is independent of any particular application: both its interface and implementation are decoupled from the particular sensors, assumptions, algorithms, ... that are specific to a certain application.

67 BFL: Getting support: the BFL community
There are different ways to get help/support:
- A BFL tutorial: tdelaet/tutorialbfl.pdf.
- The website: kgadeyne/bfl.html (also source code).
- Klaas Gadeyne's PhD thesis (see website).
- The mailing list (see website).

68 BFL: An example: mobile robot tracking (figures: mobile robot dead reckoning; mobile robot with measurements)

69 On-line links: Outline

70 On-line links: On-line links
Estimation links:
- Wikipedia: Recursive Bayesian estimation and related filter articles.
- BFL (Bayesian Filtering Library): http://people.mech.kuleuven.be/kgadeyne/bfl.html.
- BNT (Bayes Net Toolbox):
- Sequential Monte Carlo Methods homepage:

71 Further reading: Outline

72 Further reading: Further reading
Kalman filtering:
- Kalman filters: a tutorial (be/ tdelaet/journala99.pdf)
- Nonminimal state Kalman filter: doctoral thesis of Tine Lefebvre, Contact modelling, parameter identification and task planning for autonomous compliant motion using elementary contacts, Dept. of Mechanical Engineering, KUL.
Particle filtering:
- A Particle Filter Tutorial for Mobile Robot Localization, I.M. Rekleitis (yiannis/ particletutorial.pdf)
- Sequential Monte Carlo Methods in Practice, A. Doucet et al., Springer, 2001

73 Further reading: Further reading
Bayesian networks:
- Bayesian Networks and Decision Graphs, F.V. Jensen, Springer, 2001
- Learning Bayesian Networks, R.E. Neapolitan, Prentice Hall, 2004
- Bayesian Nets and Causality, J. Williamson, Oxford University Press, 2005

74 Further reading: Presentation and article version
This presentation is available online: kuleuven.be/ tdelaet/estimation/part3.pdf. An article version of the presentation, including extra comments and explanations, is available online: tdelaet/ estimation/article3.pdf.

75 Bibliography


### Part 2: Kalman Filtering COS 323

Part 2: Kalman Filtering COS 323 On-Line Estimation Have looked at off-line model estimation: all data is available For many applications, want best estimate immediately when each new datapoint arrives

### Monte Carlo and Empirical Methods for Stochastic Inference (MASM11/FMS091)

Monte Carlo and Empirical Methods for Stochastic Inference (MASM11/FMS091) Magnus Wiktorsson Centre for Mathematical Sciences Lund University, Sweden Lecture 6 Sequential Monte Carlo methods II February

### Real-time Visual Tracker by Stream Processing

Real-time Visual Tracker by Stream Processing Simultaneous and Fast 3D Tracking of Multiple Faces in Video Sequences by Using a Particle Filter Oscar Mateo Lozano & Kuzahiro Otsuka presented by Piotr Rudol

### 11. Time series and dynamic linear models

11. Time series and dynamic linear models Objective To introduce the Bayesian approach to the modeling and forecasting of time series. Recommended reading West, M. and Harrison, J. (1997). models, (2 nd

### An Introduction to the Kalman Filter

An Introduction to the Kalman Filter Greg Welch 1 and Gary Bishop 2 TR 95041 Department of Computer Science University of North Carolina at Chapel Hill Chapel Hill, NC 275993175 Updated: Monday, July 24,

### Mathieu St-Pierre. Denis Gingras Dr. Ing.

Comparison between the unscented Kalman filter and the extended Kalman filter for the position estimation module of an integrated navigation information system Mathieu St-Pierre Electrical engineering

### Machine Learning and Data Analysis overview. Department of Cybernetics, Czech Technical University in Prague. http://ida.felk.cvut.

Machine Learning and Data Analysis overview Jiří Kléma Department of Cybernetics, Czech Technical University in Prague http://ida.felk.cvut.cz psyllabus Lecture Lecturer Content 1. J. Kléma Introduction,

### Chapter 14 Managing Operational Risks with Bayesian Networks

Chapter 14 Managing Operational Risks with Bayesian Networks Carol Alexander This chapter introduces Bayesian belief and decision networks as quantitative management tools for operational risks. Bayesian

### Lecture 11: Graphical Models for Inference

Lecture 11: Graphical Models for Inference So far we have seen two graphical models that are used for inference - the Bayesian network and the Join tree. These two both represent the same joint probability

### The Kalman Filter and its Application in Numerical Weather Prediction

Overview Kalman filter The and its Application in Numerical Weather Prediction Ensemble Kalman filter Statistical approach to prevent filter divergence Thomas Bengtsson, Jeff Anderson, Doug Nychka http://www.cgd.ucar.edu/

### Parametric Models Part I: Maximum Likelihood and Bayesian Density Estimation

Parametric Models Part I: Maximum Likelihood and Bayesian Density Estimation Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Fall 2015 CS 551, Fall 2015

### Summary of Probability

Summary of Probability Mathematical Physics I Rules of Probability The probability of an event is called P(A), which is a positive number less than or equal to 1. The total probability for all possible

### Probabilistic Models for Big Data. Alex Davies and Roger Frigola University of Cambridge 13th February 2014

Probabilistic Models for Big Data Alex Davies and Roger Frigola University of Cambridge 13th February 2014 The State of Big Data Why probabilistic models for Big Data? 1. If you don t have to worry about

### Learning Gaussian process models from big data. Alan Qi Purdue University Joint work with Z. Xu, F. Yan, B. Dai, and Y. Zhu

Learning Gaussian process models from big data Alan Qi Purdue University Joint work with Z. Xu, F. Yan, B. Dai, and Y. Zhu Machine learning seminar at University of Cambridge, July 4 2012 Data A lot of

### Probability Theory. Elementary rules of probability Sum rule. Product rule. p. 23

Probability Theory Uncertainty is key concept in machine learning. Probability provides consistent framework for the quantification and manipulation of uncertainty. Probability of an event is the fraction

### Square Root Propagation

Square Root Propagation Andrew G. Howard Department of Computer Science Columbia University New York, NY 10027 ahoward@cs.columbia.edu Tony Jebara Department of Computer Science Columbia University New

### CCNY. BME I5100: Biomedical Signal Processing. Linear Discrimination. Lucas C. Parra Biomedical Engineering Department City College of New York

BME I5100: Biomedical Signal Processing Linear Discrimination Lucas C. Parra Biomedical Engineering Department CCNY 1 Schedule Week 1: Introduction Linear, stationary, normal - the stuff biology is not

### Statistical Machine Learning

Statistical Machine Learning UoC Stats 37700, Winter quarter Lecture 4: classical linear and quadratic discriminants. 1 / 25 Linear separation For two classes in R d : simple idea: separate the classes

### Robotics. Chapter 25. Chapter 25 1

Robotics Chapter 25 Chapter 25 1 Outline Robots, Effectors, and Sensors Localization and Mapping Motion Planning Motor Control Chapter 25 2 Mobile Robots Chapter 25 3 Manipulators P R R R R R Configuration

### Sampling via Moment Sharing: A New Framework for Distributed Bayesian Inference for Big Data

Sampling via Moment Sharing: A New Framework for Distributed Bayesian Inference for Big Data (Oxford) in collaboration with: Minjie Xu, Jun Zhu, Bo Zhang (Tsinghua) Balaji Lakshminarayanan (Gatsby) Bayesian

### PS 271B: Quantitative Methods II. Lecture Notes

PS 271B: Quantitative Methods II Lecture Notes Langche Zeng zeng@ucsd.edu The Empirical Research Process; Fundamental Methodological Issues 2 Theory; Data; Models/model selection; Estimation; Inference.

### Monte Carlo-based statistical methods (MASM11/FMS091)

Monte Carlo-based statistical methods (MASM11/FMS091) Magnus Wiktorsson Centre for Mathematical Sciences Lund University, Sweden Lecture 6 Sequential Monte Carlo methods II February 7, 2014 M. Wiktorsson

### Model-based Synthesis. Tony O Hagan

Model-based Synthesis Tony O Hagan Stochastic models Synthesising evidence through a statistical model 2 Evidence Synthesis (Session 3), Helsinki, 28/10/11 Graphical modelling The kinds of models that

### Mapping and Localization with RFID Technology

Mapping and Localization with RFID Technology Matthai Philipose, Kenneth P Fishkin, Dieter Fox, Dirk Hahnel, Wolfram Burgard Presenter: Aniket Shah Outline Introduction Related Work Probabilistic Sensor

### Variational Mean Field for Graphical Models

Variational Mean Field for Graphical Models CS/CNS/EE 155 Baback Moghaddam Machine Learning Group baback @ jpl.nasa.gov Approximate Inference Consider general UGs (i.e., not tree-structured) All basic

### BNG 202 Biomechanics Lab. Descriptive statistics and probability distributions I

BNG 202 Biomechanics Lab Descriptive statistics and probability distributions I Overview The overall goal of this short course in statistics is to provide an introduction to descriptive and inferential

### PTE505: Inverse Modeling for Subsurface Flow Data Integration (3 Units)

PTE505: Inverse Modeling for Subsurface Flow Data Integration (3 Units) Instructor: Behnam Jafarpour, Mork Family Department of Chemical Engineering and Material Science Petroleum Engineering, HED 313,

### Bayes and Naïve Bayes. cs534-machine Learning

Bayes and aïve Bayes cs534-machine Learning Bayes Classifier Generative model learns Prediction is made by and where This is often referred to as the Bayes Classifier, because of the use of the Bayes rule

### Monte Carlo-based statistical methods (MASM11/FMS091)

Monte Carlo-based statistical methods (MASM11/FMS091) Jimmy Olsson Centre for Mathematical Sciences Lund University, Sweden Lecture 6 Sequential Monte Carlo methods II February 3, 2012 Changes in HA1 Problem

### Real-Time Monitoring of Complex Industrial Processes with Particle Filters

Real-Time Monitoring of Complex Industrial Processes with Particle Filters Rubén Morales-Menéndez Dept. of Mechatronics and Automation ITESM campus Monterrey Monterrey, NL México rmm@itesm.mx Nando de

### Joint parameter and state estimation algorithms for real-time traffic monitoring

USDOT Region V Regional University Transportation Center Final Report NEXTRANS Project No 097IY04. Joint parameter and state estimation algorithms for real-time traffic monitoring By Ren Wang PhD Candidate

### Introduction to General and Generalized Linear Models

Introduction to General and Generalized Linear Models General Linear Models - part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby

### ELEC-E8104 Stochastics models and estimation, Lecture 3b: Linear Estimation in Static Systems

Stochastics models and estimation, Lecture 3b: Linear Estimation in Static Systems Minimum Mean Square Error (MMSE) MMSE estimation of Gaussian random vectors Linear MMSE estimator for arbitrarily distributed

### Visual Tracking. Frédéric Jurie LASMEA. CNRS / Université Blaise Pascal. France. jurie@lasmea.univ-bpclermoaddressnt.fr

Visual Tracking Frédéric Jurie LASMEA CNRS / Université Blaise Pascal France jurie@lasmea.univ-bpclermoaddressnt.fr What is it? Visual tracking is the problem of following moving targets through an image

### Lecture 3 : Hypothesis testing and model-fitting

Lecture 3 : Hypothesis testing and model-fitting These dark lectures energy puzzle Lecture 1 : basic descriptive statistics Lecture 2 : searching for correlations Lecture 3 : hypothesis testing and model-fitting

### Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model

Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written

### Probability and statistics; Rehearsal for pattern recognition

Probability and statistics; Rehearsal for pattern recognition Václav Hlaváč Czech Technical University in Prague Faculty of Electrical Engineering, Department of Cybernetics Center for Machine Perception

### Example: Credit card default, we may be more interested in predicting the probabilty of a default than classifying individuals as default or not.

Statistical Learning: Chapter 4 Classification 4.1 Introduction Supervised learning with a categorical (Qualitative) response Notation: - Feature vector X, - qualitative response Y, taking values in C

### Recursive Estimation

Recursive Estimation Raffaello D Andrea Spring 04 Problem Set : Bayes Theorem and Bayesian Tracking Last updated: March 8, 05 Notes: Notation: Unlessotherwisenoted,x, y,andz denoterandomvariables, f x

### Linear Models for Classification

Linear Models for Classification Sumeet Agarwal, EEL709 (Most figures from Bishop, PRML) Approaches to classification Discriminant function: Directly assigns each data point x to a particular class Ci

### Bayesian Filters for Location Estimation

Bayesian Filters for Location Estimation Dieter Fo Jeffrey Hightower, Lin Liao Dirk Schulz Gaetano Borriello, University of Washington, Dept. of Computer Science & Engineering, Seattle, WA Intel Research

### CS 688 Pattern Recognition Lecture 4. Linear Models for Classification

CS 688 Pattern Recognition Lecture 4 Linear Models for Classification Probabilistic generative models Probabilistic discriminative models 1 Generative Approach ( x ) p C k p( C k ) Ck p ( ) ( x Ck ) p(

### FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem With Unknown Data Association

FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem With Unknown Data Association Michael Montemerlo 11th July 2003 CMU-RI-TR-03-28 The Robotics Institute Carnegie Mellon

### The Memory-Based Paradigm for Vision-Based Robot Localization

The Memory-Based Paradigm for Vision-Based Robot Localization D I S S E R T A T I O N zur Erlangung des akademischen Grades Dr. rer. nat. im Fach Informatik eingereicht an der Mathematisch-Naturwissenschaftlichen

### Java Modules for Time Series Analysis

Java Modules for Time Series Analysis Agenda Clustering Non-normal distributions Multifactor modeling Implied ratings Time series prediction 1. Clustering + Cluster 1 Synthetic Clustering + Time series

### Why Taking This Course? Course Introduction, Descriptive Statistics and Data Visualization. Learning Goals. GENOME 560, Spring 2012

Why Taking This Course? Course Introduction, Descriptive Statistics and Data Visualization GENOME 560, Spring 2012 Data are interesting because they help us understand the world Genomics: Massive Amounts

### Multivariate Normal Distribution

Multivariate Normal Distribution Lecture 4 July 21, 2011 Advanced Multivariate Statistical Methods ICPSR Summer Session #2 Lecture #4-7/21/2011 Slide 1 of 41 Last Time Matrices and vectors Eigenvalues

### Online Model Predictive Control of a Robotic System by Combining Simulation and Optimization

Mohammad Rokonuzzaman Pappu Online Model Predictive Control of a Robotic System by Combining Simulation and Optimization School of Electrical Engineering Department of Electrical Engineering and Automation

### Least Squares Estimation

Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David

### Learning outcomes. Knowledge and understanding. Competence and skills

Syllabus Master s Programme in Statistics and Data Mining 120 ECTS Credits Aim The rapid growth of databases provides scientists and business people with vast new resources. This programme meets the challenges

### Master s Theory Exam Spring 2006

Spring 2006 This exam contains 7 questions. You should attempt them all. Each question is divided into parts to help lead you through the material. You should attempt to complete as much of each problem

### INTRODUCTORY STATISTICS

INTRODUCTORY STATISTICS FIFTH EDITION Thomas H. Wonnacott University of Western Ontario Ronald J. Wonnacott University of Western Ontario WILEY JOHN WILEY & SONS New York Chichester Brisbane Toronto Singapore

### Linear and Piecewise Linear Regressions

Tarigan Statistical Consulting & Coaching statistical-coaching.ch Doctoral Program in Computer Science of the Universities of Fribourg, Geneva, Lausanne, Neuchâtel, Bern and the EPFL Hands-on Data Analysis

### Cell Phone based Activity Detection using Markov Logic Network

Cell Phone based Activity Detection using Markov Logic Network Somdeb Sarkhel sxs104721@utdallas.edu 1 Introduction Mobile devices are becoming increasingly sophisticated and the latest generation of smart

### Logic, Probability and Learning

Logic, Probability and Learning Luc De Raedt luc.deraedt@cs.kuleuven.be Overview Logic Learning Probabilistic Learning Probabilistic Logic Learning Closely following : Russell and Norvig, AI: a modern

### Artificial Intelligence Mar 27, Bayesian Networks 1 P (T D)P (D) + P (T D)P ( D) =

Artificial Intelligence 15-381 Mar 27, 2007 Bayesian Networks 1 Recap of last lecture Probability: precise representation of uncertainty Probability theory: optimal updating of knowledge based on new information

### A crash course in probability and Naïve Bayes classification

Probability theory A crash course in probability and Naïve Bayes classification Chapter 9 Random variable: a variable whose possible values are numerical outcomes of a random phenomenon. s: A person s

### A SURVEY ON CONTINUOUS ELLIPTICAL VECTOR DISTRIBUTIONS

A SURVEY ON CONTINUOUS ELLIPTICAL VECTOR DISTRIBUTIONS Eusebio GÓMEZ, Miguel A. GÓMEZ-VILLEGAS and J. Miguel MARÍN Abstract In this paper it is taken up a revision and characterization of the class of

### Spatial Statistics Chapter 3 Basics of areal data and areal data modeling

Spatial Statistics Chapter 3 Basics of areal data and areal data modeling Recall areal data also known as lattice data are data Y (s), s D where D is a discrete index set. This usually corresponds to data

### Exploiting A Constellation of Narrowband RF Sensors to Detect and Track Moving Targets

Exploiting A Constellation of Narrowband RF Sensors to Detect and Track Moving Targets Chris Kreucher a, J. Webster Stayman b, Ben Shapo a, and Mark Stuff c a Integrity Applications Incorporated 900 Victors

### Two Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering

Two Topics in Parametric Integration Applied to Stochastic Simulation in Industrial Engineering Department of Industrial Engineering and Management Sciences Northwestern University September 15th, 2014

### Exponential Random Graph Models for Social Network Analysis. Danny Wyatt 590AI March 6, 2009

Exponential Random Graph Models for Social Network Analysis Danny Wyatt 590AI March 6, 2009 Traditional Social Network Analysis Covered by Eytan Traditional SNA uses descriptive statistics Path lengths

### Service courses for graduate students in degree programs other than the MS or PhD programs in Biostatistics.

Course Catalog In order to be assured that all prerequisites are met, students must acquire a permission number from the education coordinator prior to enrolling in any Biostatistics course. Courses are

### Probability and Statistics

CHAPTER 2: RANDOM VARIABLES AND ASSOCIATED FUNCTIONS 2b - 0 Probability and Statistics Kristel Van Steen, PhD 2 Montefiore Institute - Systems and Modeling GIGA - Bioinformatics ULg kristel.vansteen@ulg.ac.be

### State Space Time Series Analysis

State Space Time Series Analysis p. 1 State Space Time Series Analysis Siem Jan Koopman http://staff.feweb.vu.nl/koopman Department of Econometrics VU University Amsterdam Tinbergen Institute 2011 State