# Learning for Control from Multiple Demonstrations


…good local models in the vicinity of the trajectory; such trajectory-specific local models tend to greatly improve control performance. Our algorithm allows one to easily incorporate prior knowledge to further improve the quality of the learned trajectory. For example, for a helicopter performing in-place flips, it is known that the helicopter can be roughly centered around the same position over the entire sequence of flips. Our algorithm incorporates this prior knowledge, and successfully factors out the position drift in the expert demonstrations.

We apply our algorithm to learn trajectories and dynamics models for aerobatic flight with a remote controlled helicopter. Our experimental results show that (i) our algorithm successfully extracts a good trajectory from the multiple sub-optimal demonstrations, and (ii) the resulting flight performance significantly extends the state of the art in aerobatic helicopter flight (Abbeel et al., 2007; Gavrilets et al., 2002). Most importantly, our resulting controllers are the first to perform as well as, and often even better than, our expert pilot. We posted movies of our autonomous helicopter flights at:

The remainder of this paper is organized as follows: Section 2 presents our generative model for (multiple) suboptimal demonstrations; Section 3 describes our trajectory learning algorithm in detail; Section 4 describes our local model learning algorithm; Section 5 describes our helicopter platform and experimental results; Section 6 discusses related work.

## 2. Generative Model

### 2.1. Basic Generative Model

We are given M demonstration trajectories of length $N^k$, for $k = 0, \ldots, M-1$. Each trajectory is a sequence of states, $s_j^k$, and control inputs, $u_j^k$, composed into a single state vector:

$$y_j^k = \begin{bmatrix} s_j^k \\ u_j^k \end{bmatrix}, \quad \text{for } j = 0, \ldots, N^k-1,\; k = 0, \ldots, M-1.$$

Our goal is to estimate a hidden target trajectory of length T, denoted similarly:

$$z_t = \begin{bmatrix} s_t \\ u_t \end{bmatrix}, \quad \text{for } t = 0, \ldots, T-1.$$

We use the following notation: $y = \{y_j^k \mid j = 0..N^k-1,\; k = 0..M-1\}$, $z = \{z_t \mid t = 0..T-1\}$, and similarly for other indexed variables.

The generative model for the ideal trajectory is given by an initial state distribution $z_0 \sim \mathcal{N}(\mu_0, \Sigma_0)$ and an approximate model of the dynamics

$$z_{t+1} = f(z_t) + \omega_t^{(z)}, \qquad \omega_t^{(z)} \sim \mathcal{N}(0, \Sigma^{(z)}). \tag{1}$$

The dynamics model does not need to be particularly accurate: in our experiments, we use a single generic model learned from a large corpus of data that is not specific to the trajectory we want to perform. In our experiments (Section 5) we provide some concrete examples showing how accurately the generic model captures the true dynamics for our helicopter.¹

Our generative model represents each demonstration as a set of independent observations of the hidden, ideal trajectory z. Specifically, our model assumes

$$y_j^k = z_{\tau_j^k} + \omega_j^{(y)}, \qquad \omega_j^{(y)} \sim \mathcal{N}(0, \Sigma^{(y)}). \tag{2}$$

Here $\tau_j^k$ is the time index in the hidden trajectory to which the observation $y_j^k$ is mapped. The noise term in the observation equation captures both inaccuracy in estimating the observed trajectories from sensor data and errors in the maneuver that are the result of the human pilot's imperfect demonstration.²

The time indices $\tau_j^k$ are unobserved, and our model assumes the following distribution with parameters $d_i^k$:

$$P(\tau_{j+1}^k \mid \tau_j^k) = \begin{cases} d_1^k & \text{if } \tau_{j+1}^k - \tau_j^k = 1 \\ d_2^k & \text{if } \tau_{j+1}^k - \tau_j^k = 2 \\ d_3^k & \text{if } \tau_{j+1}^k - \tau_j^k = 3 \\ 0 & \text{otherwise} \end{cases} \tag{3}$$

$$\tau_0^k \equiv 0. \tag{4}$$

To accommodate small, gradual shifts in time between the hidden and observed trajectories, our model assumes the observed trajectories are subsampled versions of the hidden trajectory. We found that having a hidden trajectory length equal to twice the average length of the demonstrations, i.e., $T = 2\left(\frac{1}{M}\sum_{k=1}^{M} N^k\right)$, gives sufficient resolution. Figure 1 depicts the graphical model corresponding to our basic generative model.
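As a concrete illustration, the basic generative model of Eqs. (1)-(4) can be sampled directly. The sketch below is not the paper's implementation: the dynamics `f`, the state dimension, and all noise scales are placeholder assumptions chosen only to show the structure (hidden trajectory, monotone time indices with steps in {1, 2, 3}, noisy observations).

```python
import numpy as np

def f(z):
    # Stand-in for the crude dynamics model (assumption, not the paper's f)
    return 0.99 * z

def sample_demonstration(T=200, dim=4, d=(0.3, 0.4, 0.3), seed=0):
    rng = np.random.default_rng(seed)
    # Hidden ideal trajectory z_0 .. z_{T-1}, Eq. (1)
    z = np.zeros((T, dim))
    z[0] = rng.normal(size=dim)            # z_0 ~ N(mu_0, Sigma_0)
    for t in range(T - 1):
        z[t + 1] = f(z[t]) + 0.01 * rng.normal(size=dim)
    # Time indices: tau_0 = 0 (Eq. 4); increments in {1,2,3} w.p. d (Eq. 3)
    tau = [0]
    while True:
        step = rng.choice([1, 2, 3], p=d)
        if tau[-1] + step >= T:
            break
        tau.append(tau[-1] + step)
    tau = np.array(tau)
    # Observations y_j = z_{tau_j} + Gaussian noise, Eq. (2)
    y = z[tau] + 0.05 * rng.normal(size=(len(tau), dim))
    return z, tau, y

z, tau, y = sample_demonstration()
```

Because the increments average about two hidden steps, a demonstration of length N "covers" roughly 2N hidden time steps, which is why the paper sets T to twice the average demonstration length.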
Note that each observation $y_j^k$ depends on the hidden trajectory's state at time $\tau_j^k$, which means that, for $\tau_j^k$ unobserved, $y_j^k$ depends on all states in the hidden trajectory that it could be associated with.

¹ The state transition model also predicts the controls as a function of the previous state and controls. In our experiments we predict $u_{t+1}$ as $u_t$ plus Gaussian noise.

² Even though our observations, y, are correlated over time with each other due to the dynamics governing the observed trajectory, our model assumes that the observations $y_j^k$ are independent for all $j = 0..N^k-1$ and $k = 0..M-1$.

Figure 1. Graphical model representing our trajectory assumptions. (Shaded nodes are observed.)

### 2.2. Extensions to the Generative Model

Thus far we have assumed that the expert demonstrations are misaligned copies of the ideal trajectory, merely corrupted by Gaussian noise. Listgarten et al. have used this same basic generative model (for the case where $f(\cdot)$ is the identity function) to align speech signals and biological data (Listgarten, 2006; Listgarten et al., 2005). We now augment the basic model to account for other sources of error which are important for modeling and control.

#### Learning Local Model Parameters

For many systems, we can substantially improve our modeling accuracy by using a time-varying model $f_t(\cdot)$ that is specific to the vicinity of the intended trajectory at each time t. We express $f_t$ as our crude model, $f$, augmented with a bias term,³ $\beta_t$:

$$z_{t+1} = f_t(z_t) + \omega_t^{(z)} \equiv f(z_t) + \beta_t + \omega_t^{(z)}.$$

To regularize our model, we assume that $\beta_t$ changes only slowly over time: $\beta_{t+1} \sim \mathcal{N}(\beta_t, \Sigma^{(\beta)})$. We incorporate the bias into our observation model by computing the observed bias $\beta_j^k = y_j^k - f(y_{j-1}^k)$ for each of the observed state transitions, and modeling this as a direct observation of the true model bias corrupted by Gaussian noise. The result of this modification is that the ideal trajectory must not only look similar to the demonstration trajectories, but it must also obey a dynamics model which includes those errors consistently observed in the demonstrations.

³ Our generative model can incorporate richer local models. We discuss our choice of merely using biases in our generative trajectory model in more detail in Section 4.

#### Factoring out Demonstration Drift

It is often difficult, even for an expert pilot, to keep the helicopter centered around a fixed position during aerobatic maneuvers. The recorded position trajectory will often drift around unintentionally. Since these position errors are highly correlated, they are not explained well by the Gaussian noise term in our observation model. To capture such slow drift in the demonstrated trajectories, we augment the latent trajectory's state with a drift vector $\delta_t^k$ for each time t and each demonstrated trajectory k. We model the drift as a zero-mean random walk with (relatively) small variance. The state observations are now noisy measurements of $z_t + \delta_t^k$ rather than merely $z_t$.

#### Incorporating Prior Knowledge

Even though it might be hard to specify the complete ideal trajectory in state space, we might still have prior knowledge about the trajectory. Hence, we introduce additional observations $\rho_t = \rho(z_t)$ corresponding to our prior knowledge about the ideal trajectory at time t. The function $\rho(z_t)$ computes some features of the hidden state $z_t$, and our expert supplies the value $\rho_t$ that this feature should take. For example, for the case of a helicopter performing an in-place flip, we use an observation that corresponds to our expert pilot's knowledge that the helicopter should stay at a fixed position while it is flipping. We assume that these observations may be corrupted by Gaussian noise, where the variance of the noise expresses our confidence in the accuracy of the expert's advice. In the case of the flip, the variance expresses our knowledge that it is, in fact, impossible to flip perfectly in place and that the actual position of the helicopter may vary slightly from the position given by the expert. Incorporating prior knowledge of this kind can greatly enhance the learned ideal trajectory. We give more detailed examples in Section 5.

### 2.3. Model Summary

In summary, we have the following generative model:

$$z_{t+1} = f(z_t) + \beta_t + \omega_t^{(z)}, \tag{5}$$
$$\beta_{t+1} = \beta_t + \omega_t^{(\beta)}, \tag{6}$$
$$\delta_{t+1}^k = \delta_t^k + \omega_t^{(\delta)}, \tag{7}$$
$$\rho_t = \rho(z_t) + \omega_t^{(\rho)}, \tag{8}$$
$$y_j^k = z_{\tau_j^k} + \delta_j^k + \omega_j^{(y)}, \tag{9}$$
$$\tau_{j+1}^k \sim P(\tau_{j+1}^k \mid \tau_j^k). \tag{10}$$

Here $\omega_t^{(z)}, \omega_t^{(\beta)}, \omega_t^{(\delta)}, \omega_t^{(\rho)}, \omega_j^{(y)}$ are zero-mean Gaussian random variables with respective covariance matrices $\Sigma^{(z)}, \Sigma^{(\beta)}, \Sigma^{(\delta)}, \Sigma^{(\rho)}, \Sigma^{(y)}$. The transition probabilities for $\tau_j^k$ are defined by Eqs. (3, 4) with parameters $d_1^k, d_2^k, d_3^k$ (collectively denoted d).

## 3. Trajectory Learning Algorithm

Our learning algorithm automatically finds the time-alignment indices τ, the time-index transition probabilities d, and the covariance matrices $\Sigma^{(\cdot)}$ by (approximately) maximizing the joint likelihood of the observed trajectories y and the observed prior knowledge about the ideal trajectory ρ, while marginalizing out over the unobserved, intended trajectory z. Concretely, our algorithm (approximately) solves

$$\max_{\tau,\, \Sigma^{(\cdot)},\, d} \; \log P(y, \rho, \tau \,;\; \Sigma^{(\cdot)}, d). \tag{11}$$

Then, once our algorithm has found $\tau, d, \Sigma^{(\cdot)}$, it finds the most likely hidden trajectory, namely the trajectory z that maximizes the joint likelihood of the observed trajectories y and the observed prior knowledge about the ideal trajectory ρ for the learned parameters $\tau, d, \Sigma^{(\cdot)}$.⁴

The joint optimization in Eq. (11) is difficult because (as can be seen in Figure 1) the lack of knowledge of the time-alignment index variables τ introduces a very large set of dependencies between all the variables. However, when τ is known, the optimization problem in Eq. (11) greatly simplifies thanks to context-specific independencies (Boutilier et al., 1996). When τ is fixed, we obtain a model such as the one shown in Figure 2. In this model we can directly estimate the multinomial parameters d in closed form; and we have a standard HMM parameter learning problem for the covariances $\Sigma^{(\cdot)}$, which can be solved using the EM algorithm (Dempster et al., 1977), often referred to as Baum-Welch in the context of HMMs. Concretely, for our setting, the EM algorithm's E-step computes the pairwise marginals over sequential hidden state variables by running an (extended) Kalman smoother; the M-step then uses these marginals to update the covariances $\Sigma^{(\cdot)}$.

Figure 2. Example of graphical model when τ is known. (Shaded nodes are observed.)

To also optimize over the time-indexing variables τ, we propose an alternating optimization procedure.
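The E-step above relies on a Kalman smoother over the hidden trajectory. For the linear-Gaussian special case (the paper linearizes f, giving an *extended* smoother), a minimal Rauch-Tung-Striebel smoother can be sketched as follows; the model matrices and dimensions here are illustrative assumptions, not the paper's helicopter model.

```python
import numpy as np

def rts_smooth(y, A, C, Q, R, mu0, S0):
    """Forward Kalman filter + backward RTS pass for z_{t+1} = A z + w,
    y_t = C z + v. Returns smoothed means and covariances."""
    T, n = len(y), len(mu0)
    mu_f = np.zeros((T, n)); S_f = np.zeros((T, n, n))   # filtered
    mu_p = np.zeros((T, n)); S_p = np.zeros((T, n, n))   # one-step predicted
    mu, S = mu0, S0
    for t in range(T):
        mu_p[t], S_p[t] = mu, S                          # prior at time t
        K = S @ C.T @ np.linalg.inv(C @ S @ C.T + R)     # Kalman gain
        mu = mu + K @ (y[t] - C @ mu)                    # measurement update
        S = (np.eye(n) - K @ C) @ S
        mu_f[t], S_f[t] = mu, S
        mu, S = A @ mu, A @ S @ A.T + Q                  # time update
    mu_s, S_s = mu_f.copy(), S_f.copy()
    for t in range(T - 2, -1, -1):                       # backward pass
        L = S_f[t] @ A.T @ np.linalg.inv(S_p[t + 1])     # smoother gain
        mu_s[t] = mu_f[t] + L @ (mu_s[t + 1] - mu_p[t + 1])
        S_s[t] = S_f[t] + L @ (S_s[t + 1] - S_p[t + 1]) @ L.T
    return mu_s, S_s
```

In the paper's setting the state additionally stacks the bias and drift terms of Eqs. (5)-(9), and A, C are Jacobians of f and of the (time-aligned) observation map.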
⁴ Note that maximizing over the hidden trajectory and the covariance parameters simultaneously introduces undesirable local maxima: the likelihood score would be highest (namely infinity) for a hidden trajectory with a sequence of states exactly corresponding to the (crude) dynamics model $f(\cdot)$ and state-transition covariance matrices equal to all-zeros, as long as the observation covariances are nonzero. Hence we marginalize out the hidden trajectory to find $\tau, d, \Sigma^{(\cdot)}$.

For fixed $\Sigma^{(\cdot)}$ and d, and for fixed z, we can find the optimal time-indexing variables τ using dynamic programming over the time-index assignments for each demonstration independently. The dynamic programming algorithm to find τ is known in the speech recognition literature as dynamic time warping (Sakoe & Chiba, 1978) and in the biological sequence alignment literature as the Needleman-Wunsch algorithm (Needleman & Wunsch, 1970). The fixed z we use is the one that maximizes the likelihood of the observations for the current setting of parameters $\tau, d, \Sigma^{(\cdot)}$.⁵ In practice, rather than alternating between complete optimizations over $\Sigma^{(\cdot)}, d$ and τ, we only partially optimize over $\Sigma^{(\cdot)}$, running only one iteration of the EM algorithm. We provide the complete details of our algorithm in the full paper (Coates et al., 2008).

⁵ Fixing z means the dynamic time warping step only approximately optimizes the original objective. Unfortunately, without fixing z, the independencies required to obtain an efficient dynamic programming algorithm do not hold. In practice we find our approximation works very well.

## 4. Local Model Learning

For complex dynamical systems, the state $z_t$ used in the dynamics model often does not correspond to the complete state of the system, since the latter could involve large numbers of previous states or unobserved variables that make modeling difficult.⁶ However, when we only seek to model the system dynamics along a specific trajectory, knowledge of both $z_t$ and how far we are along that trajectory is often sufficient to accurately predict the next state $z_{t+1}$. Once the alignments between the demonstrations are computed by our trajectory learning algorithm, we can use the time-aligned demonstration data to learn a sequence of trajectory-specific models. The time indices of the aligned demonstrations now accurately associate the demonstration data points with locations along the learned trajectory, allowing us to build models for the state at time t using the appropriate corresponding data from the demonstration trajectories.⁷

⁶ This is particularly true for helicopters. Whereas the state of the helicopter is very crudely captured by the 12-dimensional rigid-body state representation we use for our controllers, the true physical state of the system includes, among others, the airflow around the helicopter, the rotor head speed, and the actuator dynamics.

⁷ We could learn the richer local model within the trajectory alignment algorithm, updating the dynamics model during the M-step. We chose not to do so since these models are more computationally expensive to estimate. The richer models have minimal influence on the alignment because the biases capture the average model error; the richer models capture the derivatives around it. Given the limited influence on the alignment, we chose to save computational time and only estimate the richer models after alignment.

To construct an accurate nonlinear model to predict $z_{t+1}$ from $z_t$ using the aligned data, one could use locally weighted linear regression (Atkeson et al., 1997), where a linear model is learned based on a weighted dataset. Data points from our aligned demonstrations that are nearer to the current time index along the trajectory, t, and nearer the current state, $z_t$, would be weighted more highly than data far away. While this allows us to build a more accurate model from our time-aligned data, the weighted regression must be done online, since the weights depend on the current state, $z_t$. For performance reasons⁸ this may often be impractical. Thus, we weight data only based on the time index, and learn a parametric model in the remaining variables (which, in our experiments, has the same form as the global crude model, $f(\cdot)$). Concretely, when estimating the model for the dynamics at time t, we weight a data point at time t′ by:⁹

$$W(t') = \exp\left(-\frac{(t - t')^2}{\sigma^2}\right),$$

where σ is a bandwidth parameter. Typical values for σ are between one and two seconds in our experiments. Since the weights for the data points now only depend on the time index, we can precompute all models $f_t(\cdot)$ along the entire trajectory. The ability to precompute the models is crucial to our control algorithm, which relies heavily on fast simulation.

⁸ During real-time control execution, our model is queried many times per second. Even with KD-tree or cover-tree data structures, a full locally weighted model would be much too slow.

⁹ In practice, the data points along a short segment of the trajectory lie in a low-dimensional subspace of the state space. This sometimes leads to an ill-conditioned parameter estimation problem. To mitigate this problem, we regularize our models toward the crude model $f(\cdot)$.

Figure 3. Our XCell Tempest autonomous helicopter.

## 5. Experimental Results

### 5.1. Experimental Setup

To test our algorithm, we had our expert helicopter pilot fly our XCell Tempest helicopter (Figure 3), which can perform professional, competition-level maneuvers.¹⁰ We collected multiple demonstrations from our expert for a variety of aerobatic trajectories: continuous in-place flips and rolls, a continuous tail-down tic-toc, and an airshow, which consists of the following maneuvers in rapid sequence: split-s, snap roll, stall-turn, loop, loop with pirouette, stall-turn with pirouette, hurricane (fast backward funnel), knife-edge, flips and rolls, tic-toc and inverted hover.

¹⁰ We instrumented the helicopter with a Microstrain 3DM-GX1 orientation sensor. A ground-based camera system measures the helicopter's position. A Kalman filter uses these measurements to track the helicopter's position, velocity, orientation and angular rate.

The (crude) helicopter dynamics $f(\cdot)$ is constructed using the method of Abbeel et al. (2006a).¹¹ The helicopter dynamics model predicts linear and angular accelerations as a function of the current state and inputs. The next state is then obtained by integrating forward in time using the standard rigid-body equations. In the trajectory learning algorithm, we have bias terms $\beta_t$ for each of the predicted accelerations. We use the state-drift variables, $\delta_t^k$, for position only.

¹¹ The model of Abbeel et al. (2006a) naturally generalizes to any orientation of the helicopter, regardless of the flight regime from which data is collected. Hence, even without collecting data from aerobatic flight, we can reasonably attempt to use such a model for aerobatic flying, though we expect it to be relatively inaccurate.

For the flips, rolls, and tic-tocs we incorporated our prior knowledge that the helicopter should stay in place. We added a measurement of the form:

$$0 = p(z_t) + \omega^{(\rho_0)}, \qquad \omega^{(\rho_0)} \sim \mathcal{N}(0, \Sigma^{(\rho_0)}),$$

where $p(\cdot)$ is a function that returns the position coordinates of $z_t$, and $\Sigma^{(\rho_0)}$ is a diagonal covariance matrix. This measurement, which is a direct observation of the pilot's intended trajectory, is similar to advice given to a novice human pilot to describe the desired maneuver: a good flip, roll, or tic-toc trajectory stays close to the same position. We also used additional advice in the airshow to indicate that the vertical loops, stall-turns and split-s should all lie in a single vertical plane; that the hurricanes should lie in a horizontal plane; and that a good knife-edge stays in a vertical plane. These measurements take the form:

$$c = N^\top p(z_t) + \omega^{(\rho_1)}, \qquad \omega^{(\rho_1)} \sim \mathcal{N}(0, \Sigma^{(\rho_1)}),$$

where, again, $p(z_t)$ returns the position coordinates of $z_t$; N is a vector normal to the plane of the maneuver; c is a constant; and $\Sigma^{(\rho_1)}$ is a diagonal covariance matrix.
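Such a plane prior is just one extra scalar Gaussian measurement fused into the state estimate. A minimal sketch of that fusion step is shown below; the function name, the assumption that the first three state entries are position, and all variances are illustrative, not the paper's implementation.

```python
import numpy as np

def fuse_plane_prior(mu, Sigma, Np, c, var):
    """Fold the prior-knowledge measurement c = N^T p(z) + noise into a
    Gaussian state estimate N(mu, Sigma). Assumes (for illustration) the
    first three state entries are the position p(z)."""
    n = len(mu)
    H = np.zeros((1, n))
    H[0, :3] = Np                       # linear measurement picks out N^T p(z)
    S = H @ Sigma @ H.T + var           # innovation variance
    K = Sigma @ H.T / S                 # Kalman gain
    mu_new = mu + (K * (c - H @ mu)).ravel()
    Sigma_new = (np.eye(n) - K @ H) @ Sigma
    return mu_new, Sigma_new
```

A small `var` expresses strong confidence in the expert's advice and pulls the estimate close to the plane; a large `var` leaves the estimate nearly untouched, matching the paper's interpretation of the noise variance as confidence in the advice.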

Figure 4. Colored lines: demonstrations. Black dotted line: trajectory inferred by our algorithm. (See text for details.)

### 5.2. Trajectory Learning Results

Figure 4(a) shows the horizontal and vertical position of the helicopter during the two loops flown during the airshow. The colored lines show the expert pilot's demonstrations. The black dotted line shows the inferred ideal path produced by our algorithm. The loops are more rounded and more consistent in the inferred ideal path; we did not incorporate any prior knowledge to this effect. Figure 4(b) shows a top-down view of the same demonstrations and inferred trajectory. The prior successfully encouraged the inferred trajectory to lie in a vertical plane, while obeying the system dynamics.

Figure 4(c) shows one of the bias terms, namely the model prediction errors for the Z-axis acceleration of the helicopter, computed from the demonstrations before time-alignment. Figure 4(d) shows the result after alignment (in color), as well as the inferred acceleration error (black dotted). We see that the unaligned bias measurements allude to errors approximately in the -1G to -2G range for the first 40 seconds of the airshow (a period that involves high-G maneuvering that is not predicted accurately by the crude model). However, only the aligned biases precisely show the magnitudes and locations of these errors along the trajectory. The alignment allows us to build our ideal trajectory based upon a much more accurate model that is tailored to match the dynamics observed in the demonstrations. Results for other maneuvers and state variables are similar. At the URL provided in the introduction we posted movies which simultaneously replay the different demonstrations, before alignment and after alignment.
The movies visualize the alignment results in many state dimensions simultaneously.

### 5.3. Flight Results

After constructing the idealized trajectory and models using our algorithm, we attempted to fly the trajectory on the actual helicopter. Our helicopter uses a receding-horizon differential dynamic programming (DDP) controller (Jacobson & Mayne, 1970). DDP approximately solves general continuous state-space optimal control problems by taking advantage of the fact that optimal control problems with linear dynamics and a quadratic reward function (known as linear quadratic regulator (LQR) problems) can be solved efficiently. It is well known that the solution to the (time-varying, finite-horizon) LQR problem is a sequence of linear feedback controllers. In short, DDP iteratively approximates the general control problem with LQR problems until convergence, resulting in a sequence of linear feedback controllers that are approximately optimal. In the receding-horizon algorithm, we not only run DDP initially to design the sequence of controllers, but also re-run DDP during control execution at every time step, recomputing the optimal controller over a fixed-length time interval (the horizon) and assuming the precomputed controller and cost-to-go are correct after this horizon.

As described in Section 4, our algorithm outputs a sequence of learned local parametric models, each of the form described by Abbeel et al. (2006a). Our implementation linearizes these models on the fly with a 2-second horizon (at 20Hz). Our reward function penalizes error from the target trajectory, $s_t$, as well as deviation from the desired controls, $u_t$, and the desired control velocities, $u_{t+1} - u_t$.

First we compare our results with the previous state of the art in aerobatic helicopter flight, namely the in-place rolls and flips of Abbeel et al. (2007). That work used hand-specified target trajectories and a single nonlinear model for the entire trajectory.
Figure 5(a) shows the Y-Z position¹² and the collective (thrust) control inputs for the in-place rolls, for both their controller and ours. Our controller achieves (i) better position performance (standard deviation of approximately 2.3 meters in the Y-Z plane, compared to about 4.6 meters for the controller of Abbeel et al. (2007)) and (ii) lower overall collective control values (which roughly represent the amount of energy being used to fly the maneuver). Similarly, Figure 5(b) shows the X-Z position and the collective control inputs for the in-place flips for both controllers. As for the rolls, we see that our controller significantly outperforms that of Abbeel et al. (2007), both in position accuracy and in control energy expended.

¹² These are the position coordinates projected into a plane orthogonal to the axis of rotation.
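The receding-horizon DDP controller described above reduces, at each iteration, to a time-varying LQR problem whose solution is a sequence of linear feedback gains. The backward Riccati recursion for such a subproblem can be sketched as follows; the dynamics, costs, and horizon in the example are illustrative assumptions, not the flight system.

```python
import numpy as np

def lqr_backward(A_list, B_list, Q, R, Qf):
    """Finite-horizon, time-varying LQR: minimize sum of x^T Q x + u^T R u
    (terminal cost x^T Qf x) subject to x_{t+1} = A_t x_t + B_t u_t.
    Returns gains K_t such that u_t = -K_t x_t."""
    P = Qf                                      # cost-to-go at the horizon
    K_list = []
    for A, B in zip(reversed(A_list), reversed(B_list)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain
        P = Q + A.T @ P @ (A - B @ K)                       # Riccati update
        K_list.append(K)
    K_list.reverse()
    return K_list
```

DDP wraps such a recursion around linearizations of the learned local models $f_t(\cdot)$ along the current trajectory iterate; the receding-horizon variant re-solves it online over a short window at every control step.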

Figure 5. Flight results. (a), (b) Solid black: our results. Dashed red: Abbeel et al. (2007). (c) Dotted black: autonomous tic-toc. Solid colored: expert demonstrations. (See text for details.)

Besides flips and rolls, we also performed autonomous tic-tocs, widely considered to be an even more challenging aerobatic maneuver. During the (tail-down) tic-toc maneuver the helicopter pitches quickly backward and forward in place with the tail pointed toward the ground (resembling an inverted clock pendulum). The complex relationship between pitch angle, horizontal motion, vertical motion, and thrust makes it extremely difficult to create a feasible tic-toc trajectory by hand. Our attempts to use such a hand-coded trajectory with the DDP algorithm from Abbeel et al. (2007) failed repeatedly. By contrast, our algorithm readily yields an excellent feasible trajectory that was successfully flown on the first attempt. Figure 5(c) shows the expert trajectories (in color), and the autonomously flown tic-toc (black dotted). Our controller significantly outperforms the expert's demonstrations.

We also applied our algorithm to successfully fly the complete aerobatic airshow described in Section 5.1. The trajectory-specific local model learning typically captures the dynamics well enough to fly all the aforementioned maneuvers reliably. Moreover, since our controller flies the trajectory very consistently, we can repeatedly acquire data from the same vicinity of the target trajectory on the real helicopter. Similar to Abbeel et al. (2007), we incorporate this flight data into our model learning, allowing us to improve flight accuracy even further: during the first autonomous airshow our controller achieved an RMS position error of 3.29 meters, and this procedure improved performance to 1.75 meters RMS position error. Videos of all our flights are available at:

## 6. Related Work

Although no prior work spans our entire setting of learning for control from multiple demonstrations, there are separate pieces of work that relate to various components of our approach. Atkeson and Schaal (1997) use multiple demonstrations to learn a model for a robot arm, and then find an optimal controller in their simulator, initializing their optimal control algorithm with one of the demonstrations. Calinon et al. (2007) considered learning trajectories and constraints from demonstrations for robotic tasks; they do not, however, consider the system's dynamics or provide a clear mechanism for the inclusion of prior knowledge. Our formulation presents a principled, joint optimization which takes into account the multiple demonstrations, as well as the (complex) system dynamics and prior knowledge. While Calinon et al. (2007) also use some form of dynamic time warping, they do not try to optimize a joint objective capturing both the system dynamics and time-warping. Among others, An et al. (1988) and, more recently, Abbeel et al. (2006b) have exploited the idea of trajectory-indexed model learning for control. However, contrary to our setting, their algorithms do not time-align nor coherently integrate data from multiple trajectories. While the work by Listgarten et al. (Listgarten, 2006; Listgarten et al., 2005) does not consider robotic control and model learning, they also consider the problem of multiple continuous time-series alignment with a hidden time series.
Our work also has strong similarities with recent work on inverse reinforcement learning, which extracts a reward function (rather than a trajectory) from the expert demonstrations. See, e.g., Ng and Russell (2000); Abbeel and Ng (2004); Ratliff et al. (2006); Neu and Szepesvari (2007); Ramachandran and Amir (2007); Syed and Schapire (2008).

Most prior work on autonomous helicopter flight only considers the flight regime close to hover. There are three notable exceptions. The aerobatic work of Gavrilets et al. (2002) comprises three maneuvers: split-s, snap roll, and stall-turn, which we also include during the first 10 seconds of our airshow for comparison. They record pilot demonstrations, and then hand-engineer a sequence of desired angular rates and velocities, as well as transition points. Ng et al. (2004) have their autonomous helicopter perform sustained inverted hover. We compared the performance of our system with the work of Abbeel et al. (2007), by far the most advanced autonomous aerobatics results to date, in Section 5.3.

## 7. Conclusion

We presented an algorithm that takes advantage of multiple suboptimal trajectory demonstrations to (i) extract (an estimate of) the ideal demonstration, and (ii) learn a local model along this trajectory. Our algorithm is generally applicable for learning trajectories and dynamics models along trajectories from multiple demonstrations. We showed the effectiveness of our algorithm for control by applying it to the challenging problem of autonomous helicopter aerobatics. The ideal target trajectory and the local models output by our trajectory learning algorithm enable our controllers to significantly outperform the prior state of the art.

## Acknowledgments

We thank Garett Oku for piloting and building our helicopter. Adam Coates is supported by a Stanford Graduate Fellowship. This work was also supported in part by the DARPA Learning Locomotion program under contract number FA C.

## References

Abbeel, P., Coates, A., Quigley, M., & Ng, A. Y. (2007). An application of reinforcement learning to aerobatic helicopter flight. NIPS 19.

Abbeel, P., Ganapathi, V., & Ng, A. Y. (2006a). Learning vehicular dynamics with application to modeling helicopters. NIPS 18.

Abbeel, P., & Ng, A. Y. (2004). Apprenticeship learning via inverse reinforcement learning. Proc. ICML.
Abbeel, P., Quigley, M., & Ng, A. Y. (2006b). Using inaccurate models in reinforcement learning. Proc. ICML.

An, C. H., Atkeson, C. G., & Hollerbach, J. M. (1988). Model-based control of a robot manipulator. MIT Press.

Atkeson, C., & Schaal, S. (1997). Robot learning from demonstration. Proc. ICML.

Atkeson, C. G., Moore, A. W., & Schaal, S. (1997). Locally weighted learning for control. Artificial Intelligence Review, 11.

Boutilier, C., Friedman, N., Goldszmidt, M., & Koller, D. (1996). Context-specific independence in Bayesian networks. Proc. UAI.

Calinon, S., Guenter, F., & Billard, A. (2007). On learning, representing and generalizing a task in a humanoid robot. IEEE Trans. on Systems, Man and Cybernetics, Part B.

Coates, A., Abbeel, P., & Ng, A. Y. (2008). Learning for control from multiple demonstrations (full version).

Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. J. of the Royal Statistical Society.

Gavrilets, V., Martinos, I., Mettler, B., & Feron, E. (2002). Control logic for automated aerobatic flight of miniature helicopter. AIAA Guidance, Navigation and Control Conference.

Jacobson, D. H., & Mayne, D. Q. (1970). Differential dynamic programming. Elsevier.

Listgarten, J. (2006). Analysis of sibling time series data: alignment and difference detection. Doctoral dissertation, University of Toronto.

Listgarten, J., Neal, R. M., Roweis, S. T., & Emili, A. (2005). Multiple alignment of continuous time series. NIPS 17.

Needleman, S., & Wunsch, C. (1970). A general method applicable to the search for similarities in the amino acid sequence of two proteins. J. Mol. Biol.

Neu, G., & Szepesvari, C. (2007). Apprenticeship learning using inverse reinforcement learning and gradient methods. Proc. UAI.

Ng, A. Y., Coates, A., Diel, M., Ganapathi, V., Schulte, J., Tse, B., Berger, E., & Liang, E. (2004). Autonomous inverted helicopter flight via reinforcement learning. ISER.

Ng, A. Y., & Russell, S. (2000). Algorithms for inverse reinforcement learning. Proc. ICML.

Ramachandran, D., & Amir, E. (2007). Bayesian inverse reinforcement learning. Proc. IJCAI.

Ratliff, N., Bagnell, J., & Zinkevich, M. (2006). Maximum margin planning. Proc. ICML.

Sakoe, H., & Chiba, S. (1978). Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing.

Syed, U., & Schapire, R. E. (2008). A game-theoretic approach to apprenticeship learning. NIPS 20.

## A. Trajectory Learning Algorithm

As described in Section 3, our algorithm (approximately) solves

$$\max_{\tau,\, \Sigma^{(\cdot)},\, d} \; \log P(y, \rho, \tau \,;\; \Sigma^{(\cdot)}, d). \tag{12}$$

Then, once our algorithm has found $\tau, d, \Sigma^{(\cdot)}$, it finds the most likely hidden trajectory, namely the trajectory z that maximizes the joint likelihood of the observed trajectories y and the observed prior knowledge about the ideal trajectory ρ for the learned parameters $\tau, d, \Sigma^{(\cdot)}$. To optimize Eq. (12), we alternately optimize over $\Sigma^{(\cdot)}$, d and τ. Section 3 provides the high-level description; below we provide the detailed description of our algorithm.

1. Initialize the parameters to hand-chosen defaults. A typical choice: $\Sigma^{(\cdot)} = I$, $d_i^k = \frac{1}{3}$, $\tau_j^k = j\,\frac{T-1}{N^k-1}$.
2. E-step for the latent trajectory: for the current setting of $\tau, \Sigma^{(\cdot)}$, run an (extended) Kalman smoother to find the distributions for the latent states, $\mathcal{N}(\mu_{t|T-1}, \Sigma_{t|T-1})$.
3. M-step for the latent trajectory: update the covariances $\Sigma^{(\cdot)}$ using the standard EM update.
4. E-step for the time indexing (using hard assignments): run dynamic time warping to find τ that maximizes the joint probability $P(\hat{z}, y, \rho, \tau)$, where $\hat{z}$ is fixed to $\mu_{t|T-1}$, namely the mode of the distribution obtained from the Kalman smoother.
5. M-step for the time indexing: estimate d from τ.
6. Repeat steps 2-5 until convergence.

### A.1. Steps 2 and 3 details: EM for non-linear dynamical systems

Steps 2 and 3 in our algorithm correspond to the standard E and M steps of the EM algorithm applied to a non-linear dynamical system with Gaussian noise. For completeness we provide the details below. In particular, we have:

$$z_{t+1} = f(z_t) + \omega_t, \qquad \omega_t \sim \mathcal{N}(0, Q),$$
$$y_t = h(z_t) + \nu_t, \qquad \nu_t \sim \mathcal{N}(0, R).$$

In the E-step, for $t = 0..T-1$, the Kalman smoother computes the parameters $\mu_{t|t}$ and $\Sigma_{t|t}$ for the distribution $\mathcal{N}(\mu_{t|t}, \Sigma_{t|t})$, which is the distribution of $z_t$ conditioned on all observations up to and including time t. Along the way, the smoother also computes $\mu_{t+1|t}$ and $\Sigma_{t+1|t}$.
These are the parameters for the distribution of z_{t+1} given only the measurements up to time t. Finally, during the backward pass, the parameters \mu_{t|T-1} and \Sigma_{t|T-1} are computed, which give the distribution of z_t given all measurements.

After running the Kalman smoother (for the E-step), we can use the computed quantities to update Q and R in the M-step. In particular, we can compute:

    \delta\mu_t = \mu_{t+1|T-1} - f(\mu_{t|T-1}),
    A_t = Df(\mu_{t|T-1}),
    L_t = \Sigma_{t|t} A_t^\top \Sigma_{t+1|t}^{-1},
    P_t = \Sigma_{t+1|T-1} - \Sigma_{t+1|T-1} L_t^\top A_t^\top - A_t L_t \Sigma_{t+1|T-1},
    Q = \frac{1}{T-1} \sum_{t=0}^{T-2} \left( \delta\mu_t \delta\mu_t^\top + A_t \Sigma_{t|T-1} A_t^\top + P_t \right),
    \delta y_t = y_t - h(\mu_{t|T-1}),
    C_t = Dh(\mu_{t|T-1}),
    R = \frac{1}{T} \sum_{t=0}^{T-1} \left( \delta y_t \delta y_t^\top + C_t \Sigma_{t|T-1} C_t^\top \right).

(Here the notation Df(z) denotes the Jacobian of f evaluated at z.)

A.2. Steps 4 and 5 details: dynamic time warping

In Step 4 our goal is to compute \tau as:

    \tau = \arg\max_\tau \log P(\hat{z}, y, \rho, \tau; \Sigma^{(\cdot)}, d)
         = \arg\max_\tau \log P(y \mid \hat{z}, \tau) P(\rho \mid \hat{z}) P(\hat{z}) P(\tau)    (13)
         = \arg\max_\tau \log P(y \mid \hat{z}, \tau) P(\tau),

where \hat{z} is the mode of the distribution computed by the Kalman smoother (namely, \hat{z}_t = \mu_{t|T-1}) and Eq. (13) made use of independence assumptions implied by our model (see Figure 2). Again using independence properties, the log likelihood above can be expanded to:

    \tau = \arg\max_\tau \sum_{k=0}^{M-1} \sum_{j=0}^{N_k-1} \left[ l(y_j^k \mid \hat{z}_{\tau_j^k}, \tau_j^k) + l(\tau_j^k \mid \tau_{j-1}^k) \right].    (14)

Note that the inner summations in the above expression are independent: the likelihoods for each of the M observation sequences can be maximized separately. Hence, in the following, we omit the k superscript, as the algorithm can be applied separately to each sequence of observations and indices. At this point, we can solve the maximization over \tau using a dynamic programming algorithm known in the speech recognition literature as dynamic time warping (Sakoe & Chiba, 1978) and in the biological sequence alignment literature as the Needleman-Wunsch
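The M-step updates for Q and R translate almost line for line into code once the smoother quantities are available. The sketch below assumes a linear model f(z) = Fz, h(z) = Hz, so the Jacobians A_t and C_t reduce to the constant matrices F and H; function and variable names are illustrative, not the paper's:

```python
import numpy as np

def m_step_covariances(y, mu_s, P_s, P_f, P_p, F, H):
    """M-step estimates of Q and R from Kalman smoother output.

    mu_s, P_s : smoothed means/covariances  mu_{t|T-1}, Sigma_{t|T-1}
    P_f       : filtered covariances        Sigma_{t|t}
    P_p       : predicted covariances       Sigma_{t|t-1} (stored at index t)
    For a linear model the Jacobians A_t = F and C_t = H are constant.
    """
    T, n = mu_s.shape
    Q = np.zeros((n, n))
    for t in range(T - 1):
        dmu = mu_s[t + 1] - F @ mu_s[t]                # delta mu_t
        L = P_f[t] @ F.T @ np.linalg.inv(P_p[t + 1])   # L_t
        P = (P_s[t + 1] - P_s[t + 1] @ L.T @ F.T
             - F @ L @ P_s[t + 1])                     # P_t
        Q += np.outer(dmu, dmu) + F @ P_s[t] @ F.T + P
    Q /= T - 1
    m = y.shape[1]
    R = np.zeros((m, m))
    for t in range(T):
        dy = y[t] - H @ mu_s[t]                        # delta y_t
        R += np.outer(dy, dy) + H @ P_s[t] @ H.T
    R /= T
    return Q, R
```

Run on data simulated from a known linear-Gaussian system, these estimates land close to the true process and measurement noise covariances, which is the expected behavior of one exact EM M-step.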

algorithm (Needleman & Wunsch, 1970). For completeness, we provide the details for our setting below.

We define the quantity Q(s,t) to be the maximum obtainable value of the first s+1 terms of the inner summation if we choose \tau_s = t. For s = 0 we have:

    Q(0,t) = l(y_0 \mid \hat{z}_{\tau_0}, \tau_0 = t) + l(\tau_0 = t).    (15)

And for s > 0:

    Q(s,t) = l(y_s \mid \hat{z}_{\tau_s}, \tau_s = t) + \max_{\tau_0, \ldots, \tau_{s-1}} \left[ l(\tau_s = t \mid \tau_{s-1}) + \sum_{j=0}^{s-1} \left[ l(y_j \mid \hat{z}_{\tau_j}, \tau_j) + l(\tau_j \mid \tau_{j-1}) \right] \right].

The latter equation can be written recursively as:

    Q(s,t) = l(y_s \mid \hat{z}_{\tau_s}, \tau_s = t) + \max_{t'} \left[ l(\tau_s = t \mid \tau_{s-1} = t') + Q(s-1, t') \right].    (16)

Equations (15) and (16) can be used to compute \max_t Q(N_k - 1, t) for each observation sequence (and the maximizing solution \tau), which is exactly the maximizing value of the inner summation in Eq. (14). The maximization in Eq. (16) can be restricted to the relevant values of t'. In our application, we only allow t' \in \{t-3, t-2, t-1\}. As is common practice, we typically restrict the time-index assignments to a fixed-width band around the default, equally spaced alignment: in our case, we only compute Q(s,t) if 2s - C \le t \le 2s + C, for fixed C.

In Step 5 we compute the parameters d using standard maximum likelihood estimates for multinomial distributions.
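The recursion in Eqs. (15) and (16), with the restricted transitions t' in {t-3, t-2, t-1} and the band 2s - C <= t <= 2s + C, can be sketched as follows. The Gaussian emission term is a stand-in for the model's actual log-likelihood l(y_s | z_t), and the function name is ours:

```python
import numpy as np

def dtw_align(y, z, log_d, C=3):
    """Maximize the inner sum of Eq. (14) for one observation sequence.

    y     : (N, d) observations; z : (T, d) latent trajectory mode z-hat.
    log_d : log-probabilities of the time increments 1, 2, 3 (the d_i's).
    C     : half-width of the band around the default alignment t ~ 2s.
    Returns the maximizing time indices tau (monotone, increments in 1..3).
    """
    N, T = len(y), len(z)
    emis = lambda s, t: -0.5 * np.sum((y[s] - z[t]) ** 2)  # stand-in l(y_s | z_t)
    Q = np.full((N, T), -np.inf)
    back = np.zeros((N, T), dtype=int)
    for t in range(0, min(T, C + 1)):                      # Eq. (15), s = 0
        Q[0, t] = emis(0, t)
    for s in range(1, N):                                  # Eq. (16)
        for t in range(max(0, 2 * s - C), min(T, 2 * s + C + 1)):
            for dt in (1, 2, 3):                           # t' in {t-3, t-2, t-1}
                tp = t - dt
                if tp >= 0 and Q[s - 1, tp] > -np.inf:
                    val = emis(s, t) + log_d[dt - 1] + Q[s - 1, tp]
                    if val > Q[s, t]:
                        Q[s, t] = val
                        back[s, t] = tp
    tau = np.zeros(N, dtype=int)
    tau[-1] = int(np.argmax(Q[-1]))                        # max_t Q(N-1, t)
    for s in range(N - 1, 0, -1):
        tau[s - 1] = back[s, tau[s]]
    return tau
```

Because each Q(s,t) only looks back at three predecessors inside a band of width 2C+1, the whole pass is O(N C) rather than O(N T), which is what makes re-running the warping inside every EM iteration cheap.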
