Optimal Control. Lecture 3. Palle Andersen, Aalborg University. Opt lecture 3 p. 1/30
1 Optimal Control, Lecture 3. Palle Andersen (pa@control.aau.dk), Aalborg University
2 Stochastic Optimal Control

In this lecture we introduce disturbances, modelled by adding a stochastic input to a state space description:

    x(k+1) = Φ x(k) + Γ u(k) + e_x(k)

Assumptions about the noise:
- Expected value: E{e_x(k)} = 0
- Variance: E{e_x(k) e_x^T(k)} = R_ex (symmetric, n × n)
- Covariance function: E{e_x(k) e_x^T(k+l)} = R_ex δ(l)

Assumptions about the initial state:
- Expected value: E{x(0)} = x_m(0)
- Variance matrix: E{(x(0) − x_m(0))(x(0) − x_m(0))^T} = R_x(0)
3 Performance function

Because we cannot predict future values of the states, we change the performance function to use expectation:

    I_0^N = E{ Σ_{k=0}^{N−1} (x^T(k) Q_1 x(k) + u^T(k) Q_2 u(k)) + x^T(N) Q_N x(N) }
          = E{ Σ_{k=0}^{N} H(x(k), u(k)) }

As earlier, we denote the optimal value J_0^N:

    J_0^N = min_{u(0),…,u(N)} E{ Σ_{k=0}^{N} H(x(k), u(k)) }
4 Stochastic Optimal Control

Performance in the interval [k; N]:

    I_k^N = E{ Σ_{i=k}^{N} H(x(i), u(i)) }

    J_k^N = min_{u(k),…,u(N)} E{ Σ_{i=k}^{N} H(x(i), u(i)) }
5 Split up sum

    J_k^N(x(k)) = min_{u(k),…,u(N)} I_k^N(x(k))
                = min_{u(k)} ( H(x(k), u(k)) + E{ J_{k+1}^N(x(k+1)) } )

With

    H(x(k), u(k)) = x^T(k) Q_1 x(k) + u^T(k) Q_2 u(k)

this gives

    J_k^N(x(k)) = min_{u(k)} ( x^T(k) Q_1 x(k) + u^T(k) Q_2 u(k) + E{ J_{k+1}^N(x(k+1)) } )
6 Assumptions on J_{k+1}^N

Assumption in the deterministic case:

    J_k^N(x(k)) = x^T(k) S(k) x(k)

Assumption in the stochastic case:

    J_k^N(x(k)) = x^T(k) S(k) x(k) + w(k)

where the extra term w(k) is a scalar independent of x(k).

We will show how S(k) and w(k) relate to S(k+1) and w(k+1). Since the assumption holds for k = N, this can be seen as a proof by induction of the validity of the assumed structure of the optimal performance.
7 Insert assumption and model

    J_k^N(x(k)) = x^T(k) S(k) x(k) + w(k)
        = min_{u(k)} [ x^T(k) Q_1 x(k) + u^T(k) Q_2 u(k)
            + E{ x^T(k+1) S(k+1) x(k+1) + w(k+1) } ]
        = min_{u(k)} [ x^T(k) Q_1 x(k) + u^T(k) Q_2 u(k)
            + E{ (Φ x(k) + Γ u(k) + e_x(k))^T S(k+1) (Φ x(k) + Γ u(k) + e_x(k)) } + w(k+1) ]
        = min_{u(k)} [ x^T(k) Q_1 x(k) + u^T(k) Q_2 u(k)
            + (Φ x(k) + Γ u(k))^T S(k+1) (Φ x(k) + Γ u(k))
            + E{ e_x^T(k) S(k+1) e_x(k) } + w(k+1) ]

The cross terms vanish because e_x(k) is uncorrelated with x(k) and u(k).
8 Stochastic Optimal Control

Lemma on the expectation of a quadratic form E{v^T(k) A v(k)}, where v(k) has mean v_m(k) and variance R_v:

    E{v^T(k) A v(k)} = v_m^T(k) A v_m(k) + E{ (v(k) − v_m(k))^T A (v(k) − v_m(k)) }
                     = v_m^T(k) A v_m(k) + tr[ A E{ (v(k) − v_m(k))(v(k) − v_m(k))^T } ]
                     = v_m^T(k) A v_m(k) + tr[ A R_v ]

tr[A] is termed the trace of A and is defined as the sum of the diagonal elements of A.
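The lemma is easy to verify numerically. Below is a minimal Monte Carlo sketch; the matrices A and R_v and the mean v_m are arbitrary illustrative choices, not taken from the lecture:

```python
# Monte Carlo check of the quadratic-form lemma: E{v^T A v} = v_m^T A v_m + tr[A R_v].
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[2.0, 0.5], [0.5, 1.0]])     # arbitrary symmetric weight matrix
v_m = np.array([1.0, -2.0])                # mean of v
R_v = np.array([[0.3, 0.1], [0.1, 0.2]])   # variance (covariance matrix) of v

# Draw many samples of v ~ N(v_m, R_v) and average v^T A v.
v = rng.multivariate_normal(v_m, R_v, size=200_000)
empirical = np.mean(np.einsum('ni,ij,nj->n', v, A, v))

# Closed-form value from the lemma.
exact = v_m @ A @ v_m + np.trace(A @ R_v)

print(empirical, exact)   # the two values agree closely
```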
9 Stochastic Optimal Control

Using the lemma on the expectation of a quadratic form we find:

    J_k^N(x(k)) = min_{u(k)} [ x^T(k) Q_1 x(k) + u^T(k) Q_2 u(k)
        + (Φ x(k) + Γ u(k))^T S(k+1) (Φ x(k) + Γ u(k))
        + tr[ S(k+1) R_ex ] + w(k+1) ]

Setting the derivative with respect to u(k) to zero:

    dJ_k^N(x(k))/du(k) = 2 Q_2 u(k) + 2 Γ^T S(k+1) (Φ x(k) + Γ u(k)) = 0

    u*(k) = −[ Q_2 + Γ^T S(k+1) Γ ]^{−1} Γ^T S(k+1) Φ x(k)
10 Obtain the performance

    J_k^N(x(k)) = x^T(k) S(k) x(k) + w(k)
        = x^T(k) [ Q_1 + L^T(k) Q_2 L(k) + (Φ − Γ L(k))^T S(k+1) (Φ − Γ L(k)) ] x(k)
        + tr[ S(k+1) R_ex ] + w(k+1)

From this it can be seen that:

    S(k) = Q_1 + L^T(k) Q_2 L(k) + (Φ − Γ L(k))^T S(k+1) (Φ − Γ L(k))
    w(k) = tr[ S(k+1) R_ex ] + w(k+1)

with S(N) = Q_N and w(N) = 0.
11 LQ: stochastic, DT systems with complete state information

    x(k+1) = Φ x(k) + Γ u(k) + e_x(k)

with

    E{e_x(k)} = 0,  E{e_x(k) e_x^T(k)} = R_ex
    E{x(0)} = x_m(0),  E{(x(0) − x_m(0))(x(0) − x_m(0))^T} = R_x(0)

and a quadratic performance index

    I = E{ Σ_{k=0}^{N−1} (x^T(k) Q_1 x(k) + u^T(k) Q_2 u(k)) + x^T(N) Q_N x(N) }

where Q_1 and Q_N are positive semidefinite and Q_2 is positive definite.
12 LQ: stochastic, DT systems with complete state information

The optimal input sequence is given by:

    u(k) = −L(k) x(k)

with

    L(k) = [ Q_2 + Γ^T S(k+1) Γ ]^{−1} Γ^T S(k+1) Φ

and

    S(k) = Q_1 + L^T(k) Q_2 L(k) + (Φ − Γ L(k))^T S(k+1) (Φ − Γ L(k))
         = Q_1 + Φ^T S(k+1) (Φ − Γ L(k))

with S(N) = Q_N.
13 Obtain the performance index

    min I_0^N = x_m^T(0) S(0) x_m(0) + tr[ S(0) R_x(0) ] + Σ_{k=0}^{N−1} tr[ S(k+1) R_ex ]

The first term is the same as in the deterministic case; the remaining terms are due to the uncertain initial state and the state noise.
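The backward recursions for L(k), S(k) and w(k), together with the resulting optimal cost, can be sketched in a few lines of Python. The system and weight matrices below are illustrative assumptions, not values from the lecture:

```python
import numpy as np

# Illustrative (assumed) system and weights
Phi = np.array([[1.0, 0.1], [0.0, 0.9]])
Gamma = np.array([[0.0], [0.1]])
Q1, Q2, QN = np.eye(2), np.array([[0.1]]), np.eye(2)
R_ex = 0.01 * np.eye(2)
Rx0 = 0.1 * np.eye(2)
xm0 = np.array([1.0, 0.0])
N = 50

S, w = QN, 0.0                      # S(N) = Q_N, w(N) = 0
for k in range(N - 1, -1, -1):
    S_next = S
    # L(k) = [Q_2 + Gamma^T S(k+1) Gamma]^{-1} Gamma^T S(k+1) Phi
    L = np.linalg.solve(Q2 + Gamma.T @ S_next @ Gamma, Gamma.T @ S_next @ Phi)
    # S(k) = Q_1 + L^T(k) Q_2 L(k) + (Phi - Gamma L(k))^T S(k+1) (Phi - Gamma L(k))
    S = Q1 + L.T @ Q2 @ L + (Phi - Gamma @ L).T @ S_next @ (Phi - Gamma @ L)
    # w(k) = tr[S(k+1) R_ex] + w(k+1); w(0) accumulates the whole noise sum
    w = np.trace(S_next @ R_ex) + w

# min I_0^N = x_m^T(0) S(0) x_m(0) + tr[S(0) R_x(0)] + w(0)
cost = xm0 @ S @ xm0 + np.trace(S @ Rx0) + w
print(cost)
```

Note that w(0) collects exactly the noise sum in the performance expression above, so the noise raises the achievable cost but does not change the feedback gain.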
14 Stochastic LQ, incomplete information

Now we leave the assumption of full state information and combine the state equation with a stochastic output equation:

    x(k+1) = Φ x(k) + Γ u(k) + e_x(k)
    y(k) = H x(k) + e_y(k)

Stochastic properties:

    E{e_x(k)} = 0,  E{e_y(k)} = 0
    E{e_x(k) e_x^T(k)} = R_ex,  E{e_y(k) e_y^T(k)} = R_ey
    E{e_x(k) e_x^T(k+l)} = R_ex δ(l),  E{e_y(k) e_y^T(k+l)} = R_ey δ(l)
15 Stochastic LQ, incomplete information

The performance function is as in the case with complete state information:

    I = E{ Σ_{k=0}^{N−1} (x^T(k) Q_1 x(k) + u^T(k) Q_2 u(k)) + x^T(N) Q_N x(N) }
16 Stochastic LQ Control, Separation

The optimal control law for a linear system with stochastic disturbances and measurements contaminated by stochastic noise can be obtained as a combination of the solutions of two subproblems:

1. An estimator giving the optimal estimate of the system state vector from observations of the system input and output. This is also called an observer.
2. An optimal feedback law from the estimated states. This feedback law is the same as if complete state information were available.

Next we derive a prediction observer, assuming the signals up to time k−1 are available to calculate u(k). Similar results can be obtained with a current observer, assuming the signals at time k are available.
17 Stochastic Control, observation error

Observer equation for the prediction observer:

    x̂(k+1) = Φ x̂(k) + Γ u(k) + K(k) [ y(k) − H x̂(k) ]
    x(k+1) = Φ x(k) + Γ u(k) + e_x(k)

Observation error x̃(k) = x(k) − x̂(k):

    x̃(k+1) = (Φ − K(k) H) x̃(k) + e_x(k) − K(k) e_y(k)

Mean value of the observation error:

    E{x̃(k+1)} = (Φ − K(k) H) E{x̃(k)}

The mean value tends to zero when the eigenvalues of (Φ − K(k) H) are inside the unit circle.
18 Variance of the estimation error

The observer is designed to minimize the variance of the observation error:

    P(k+1) = E{ x̃(k+1) x̃^T(k+1) }
           = (Φ − K(k) H) P(k) (Φ − K(k) H)^T + R_ex + K(k) R_ey K^T(k)
           = K (H P H^T + R_ey) K^T − K H P Φ^T − Φ P H^T K^T + Φ P Φ^T + R_ex

Minimize by completing the squares: with M = H P H^T + R_ey and N = Φ P H^T,

    K M K^T − K N^T − N K^T = (K − N M^{−1}) M (K − N M^{−1})^T − N M^{−1} N^T

which has its minimum where the first term is zero: K = N M^{−1}.
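The completing-the-squares step can be checked numerically: for symmetric positive definite M, the matrix function f(K) = K M K^T − K N^T − N K^T exceeds its minimum f(N M^{−1}) by a positive semidefinite matrix for any other K. A small sketch with arbitrary illustrative matrices:

```python
# Numeric illustration of completing the squares in the matrix sense.
import numpy as np

rng = np.random.default_rng(1)
M = np.array([[2.0, 0.3], [0.3, 1.0]])    # symmetric positive definite (assumed)
N = rng.standard_normal((2, 2))

f = lambda K: K @ M @ K.T - K @ N.T - N @ K.T
K_star = N @ np.linalg.inv(M)             # the claimed minimizer K = N M^{-1}

K = rng.standard_normal((2, 2))           # arbitrary competitor
diff = f(K) - f(K_star)                   # equals (K - K_star) M (K - K_star)^T

print(np.all(np.linalg.eigvalsh(diff) >= -1e-10))   # prints True
```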
19 Observer Riccati equations

Substituting for M and N we obtain the minimizing observer gain:

    K(k) = Φ P(k) H^T [ R_ey + H P(k) H^T ]^{−1}

    P(k+1) = R_ex + K(k) R_ey K^T(k) + (Φ − K(k) H) P(k) (Φ − K(k) H)^T
           = R_ex + (Φ − K(k) H) P(k) Φ^T

with P(0) = R_x(0).
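The forward iteration of these equations is equally compact. A minimal sketch, where Φ, H and the noise variances are illustrative assumptions: iterating long enough brings P (and hence K) essentially to steady state, and the error dynamics Φ − KH end up with eigenvalues inside the unit circle.

```python
# Forward iteration of the observer Riccati equation (prediction observer).
import numpy as np

Phi = np.array([[1.0, 0.1], [0.0, 0.9]])
H = np.array([[1.0, 0.0]])
R_ex = 0.01 * np.eye(2)
R_ey = np.array([[0.1]])
P = 0.1 * np.eye(2)          # P(0) = R_x(0)

for k in range(200):
    # K(k) = Phi P(k) H^T [R_ey + H P(k) H^T]^{-1}
    K = Phi @ P @ H.T @ np.linalg.inv(R_ey + H @ P @ H.T)
    # P(k+1) = R_ex + (Phi - K(k) H) P(k) Phi^T  (compact form)
    P = R_ex + (Phi - K @ H) @ P @ Phi.T

# Magnitudes of the observer error-dynamics eigenvalues (all below 1 when stable)
print(np.abs(np.linalg.eigvals(Phi - K @ H)))
```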
20 Closed loop diagram

[Block diagram: the system (Φ, Γ, H with unit delay z^{−1}) is driven by u(k) and the noises e_x(k) and e_y(k) and produces y(k); an observer with the same structure and gain K produces x̂(k) from u(k) and y(k); the controller feeds back u(k) = −L x̂(k).]
21 Closed loop equations

A steady-state solution of the observer Riccati equation may be found by iterating the time-varying equations forward in time. This solution is also the one found by the Matlab commands lqe, dlqe or kalman.

The closed-loop state equations are (with steady-state L and K):

    x(k+1) = Φ x(k) − Γ L x̂(k) + e_x(k)
    x̂(k+1) = Φ x̂(k) − Γ L x̂(k) + K (H x(k) + e_y(k) − H x̂(k))
22 Separation: closed loop poles

The closed-loop poles are most easily recognized from the equations in x̃ = x − x̂ and x:

    x(k+1) = Φ x(k) − Γ L (x(k) − x̃(k)) + e_x(k)
    x̃(k+1) = (Φ − K H) x̃(k) + e_x(k) − K e_y(k)

or

    [ x(k+1) ]   [ Φ − ΓL     ΓL    ] [ x(k) ]   [ e_x(k)            ]
    [ x̃(k+1) ] = [   0      Φ − KH  ] [ x̃(k) ] + [ e_x(k) − K e_y(k) ]
23 Separation: closed loop poles

The poles are the eigenvalues of the state transition matrix, i.e. the values of z where

    det [ zI − Φ + ΓL       −ΓL      ]
        [     0          zI − Φ + KH ] = 0

or

    det(zI − Φ + ΓL) det(zI − Φ + KH) = 0
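The separation of the poles is easy to confirm numerically: the eigenvalues of the block-triangular closed-loop matrix are the union of the controller poles eig(Φ − ΓL) and the observer poles eig(Φ − KH). The gains L and K below are arbitrary illustrative choices:

```python
# Numerical check that closed-loop poles separate into controller and observer poles.
import numpy as np

Phi = np.array([[1.0, 0.1], [0.0, 0.9]])
Gamma = np.array([[0.0], [0.1]])
H = np.array([[1.0, 0.0]])
L = np.array([[1.0, 2.0]])       # assumed state-feedback gain
K = np.array([[0.5], [0.2]])     # assumed observer gain

# Block-triangular transition matrix in the coordinates (x, x_tilde)
A_cl = np.block([[Phi - Gamma @ L, Gamma @ L],
                 [np.zeros((2, 2)), Phi - K @ H]])

cl = np.sort_complex(np.linalg.eigvals(A_cl))
sep = np.sort_complex(np.concatenate([np.linalg.eigvals(Phi - Gamma @ L),
                                      np.linalg.eigvals(Phi - K @ H)]))
print(np.allclose(cl, sep))   # prints True
```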
24 Duality, Controller & Observer

Notice the duality between controllability and observability:

    x(k+1) = Φ x(k) + Γ u(k)
    x(k+2) = Φ² x(k) + Φ Γ u(k) + Γ u(k+1)
    ...
    x(k+n) = Φⁿ x(k) + [Γ, ΦΓ, …, Φ^{n−1}Γ] [ u(k+n−1); u(k+n−2); …; u(k) ]

The plant is controllable if you can reach the full state space, i.e. if the controllability matrix C has full rank (n):

    C(Φ, Γ) = [Γ, ΦΓ, …, Φ^{n−1}Γ]
25 Duality, Controller & Observer

In a similar way observability shows whether the state vector can be calculated using n subsequent observations of the output. Consider the case with no input and no noise:

    [ y(k−n+1) ]   [ H x(k−n+1)         ]   [ H         ]
    [ y(k−n+2) ] = [ H Φ x(k−n+1)       ] = [ H Φ       ] x(k−n+1)
    [   ...    ]   [   ...              ]   [   ...     ]
    [ y(k)     ]   [ H Φ^{n−1} x(k−n+1) ]   [ H Φ^{n−1} ]

The plant is observable if the observability matrix O has full rank (n):

    O(Φ, H) = [ H; HΦ; …; HΦ^{n−1} ] = [ H^T, Φ^T H^T, …, (Φ^T)^{n−1} H^T ]^T
26 Duality, Controller & Observer

Note the duality between controllability and observability: if you construct a plant with system matrix Φ^T and input matrix H^T, it has a controllability matrix equal to the transpose of the original observability matrix. We may write:

    C(Φ^T, H^T) = O(Φ, H)^T
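A quick numerical check of this identity for a small illustrative system (the matrices are assumptions, not from the lecture):

```python
# Check C(Phi^T, H^T) = O(Phi, H)^T for a small example system.
import numpy as np

Phi = np.array([[1.0, 0.1], [0.0, 0.9]])
H = np.array([[1.0, 0.0]])
n = Phi.shape[0]

def ctrb(A, B):
    # Controllability matrix [B, AB, ..., A^(n-1) B]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

def obsv(A, C):
    # Observability matrix [C; CA; ...; C A^(n-1)]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

print(np.allclose(ctrb(Phi.T, H.T), obsv(Phi, H).T))   # prints True
print(np.linalg.matrix_rank(obsv(Phi, H)))             # full rank n: observable
```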
27 Duality, Controller & Observer

Duality between the Riccati equations for controller and observer:

Controller:
    L = [ Q_2 + Γ^T S Γ ]^{−1} Γ^T S Φ
    S = Q_1 + L^T Q_2 L + (Φ − ΓL)^T S (Φ − ΓL),  S(N) = Q_N

Observer:
    K = Φ P H^T [ R_ey + H P H^T ]^{−1}
    P = R_ex + K R_ey K^T + (Φ − KH) P (Φ − KH)^T,  P(0) = R_x(0)

Immediately you see the following duality:

    Controller:  Q_2   Γ^T   S   Φ^T   Q_1   L^T   Q_N
    Observer:    R_ey  H     P   Φ     R_ex  K     R_x(0)
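The table can be verified directly: one observer Riccati update equals one controller Riccati update applied to the dual data (Φ → Φ^T, Γ → H^T, Q_1 → R_ex, Q_2 → R_ey), with the gains related by K = L^T. A sketch with illustrative matrices:

```python
# One observer step versus one dual controller step (compact Riccati forms).
import numpy as np

Phi = np.array([[1.0, 0.1], [0.0, 0.9]])
H = np.array([[1.0, 0.0]])
R_ex = 0.01 * np.eye(2)
R_ey = np.array([[0.1]])

def ctrl_step(S, Phi, Gamma, Q1, Q2):
    # One backward controller Riccati step
    L = np.linalg.solve(Q2 + Gamma.T @ S @ Gamma, Gamma.T @ S @ Phi)
    return Q1 + Phi.T @ S @ (Phi - Gamma @ L), L

def obs_step(P, Phi, H, R_ex, R_ey):
    # One forward observer Riccati step
    K = Phi @ P @ H.T @ np.linalg.inv(R_ey + H @ P @ H.T)
    return R_ex + (Phi - K @ H) @ P @ Phi.T, K

P0 = 0.1 * np.eye(2)                            # P(0) = R_x(0)
P1, K = obs_step(P0, Phi, H, R_ex, R_ey)
S1, L = ctrl_step(P0, Phi.T, H.T, R_ex, R_ey)   # dual substitution

print(np.allclose(P1, S1), np.allclose(K, L.T))   # prints True True
```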
28 How to tune the observer

The variance matrices R_ex and R_ey determine
- the observer gain K
- the closed-loop poles related to the observer

But how can we find the variances of these stochastic quantities? Recognize that R_ex and R_ey are part of a specification of the performance goal:
- R_ex and R_ey specify the disturbance e_x and the measurement noise e_y
- Q_1 and Q_2 specify the outputs to weight in the performance
- Separation splits the problem into a controller problem and an observer problem
29 How to tune the observer

Either the controller poles or the observer poles may determine the bandwidth.

R_ex and R_ey specify the classic tradeoff between noise immunity and speed of response:
- Large R_ex (or small R_ey) results in a fast observer
- Large R_ey (or small R_ex) results in a slow, noise-immune observer

Q_1 and Q_2 specify the classic tradeoff between the size of the control signal and the speed of response:
- Large Q_1 (or small Q_2) results in fast controller poles
- Large Q_2 (or small Q_1) results in a slow controller with small control signals
30 How to tune the observer

Disturbances can be
- easy to observe but difficult to reject (close to the measurements at the output): choose a fast observer and a slow controller; the controller limits the response
- difficult to observe but easy to reject (close to the control inputs): choose a fast controller and a slow observer; the observer limits the response

In pole placement design the observer poles are often chosen 4 times faster than the controller poles. A choice of controller poles 4 times faster than the observer poles can also be well suited.
More informationNonlinear Programming Methods.S2 Quadratic Programming
Nonlinear Programming Methods.S2 Quadratic Programming Operations Research Models and Methods Paul A. Jensen and Jonathan F. Bard A linearly constrained optimization problem with a quadratic objective
More informationFast Fourier Transform: Theory and Algorithms
Fast Fourier Transform: Theory and Algorithms Lecture Vladimir Stojanović 6.973 Communication System Design Spring 006 Massachusetts Institute of Technology Discrete Fourier Transform A review Definition
More informationImpulse Response Functions
Impulse Response Functions Wouter J. Den Haan University of Amsterdam April 28, 2011 General definition IRFs The IRF gives the j th -period response when the system is shocked by a one-standard-deviation
More informationIntroduction to Kalman Filtering
Introduction to Kalman Filtering A set of two lectures Maria Isabel Ribeiro Associate Professor Instituto Superior écnico / Instituto de Sistemas e Robótica June All rights reserved INRODUCION O KALMAN
More informationThe Bivariate Normal Distribution
The Bivariate Normal Distribution This is Section 4.7 of the st edition (2002) of the book Introduction to Probability, by D. P. Bertsekas and J. N. Tsitsiklis. The material in this section was not included
More informationDiffusion Adaptation Strategies for Distributed Optimization and Learning over Networks
Diffusion Adaptation Strategies for Distributed Optimization and Learning over Networks Jianshu Chen, Student Member, IEEE, and Ali H. Sayed, Fellow, IEEE 1 arxiv:1111.0034v3 [math.oc] 12 May 2012 Abstract
More information13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.
3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms
More informationSome probability and statistics
Appendix A Some probability and statistics A Probabilities, random variables and their distribution We summarize a few of the basic concepts of random variables, usually denoted by capital letters, X,Y,
More informationLecture 7: Finding Lyapunov Functions 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 7: Finding Lyapunov Functions 1
More informationElectrical Engineering 103 Applied Numerical Computing
UCLA Fall Quarter 2011-12 Electrical Engineering 103 Applied Numerical Computing Professor L Vandenberghe Notes written in collaboration with S Boyd (Stanford Univ) Contents I Matrix theory 1 1 Vectors
More informationMapping an Application to a Control Architecture: Specification of the Problem
Mapping an Application to a Control Architecture: Specification of the Problem Mieczyslaw M. Kokar 1, Kevin M. Passino 2, Kenneth Baclawski 1, and Jeffrey E. Smith 3 1 Northeastern University, Boston,
More information1 Determinants and the Solvability of Linear Systems
1 Determinants and the Solvability of Linear Systems In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns The section completely side-stepped
More informationMAN-BITES-DOG BUSINESS CYCLES ONLINE APPENDIX
MAN-BITES-DOG BUSINESS CYCLES ONLINE APPENDIX KRISTOFFER P. NIMARK The next section derives the equilibrium expressions for the beauty contest model from Section 3 of the main paper. This is followed by
More information(Quasi-)Newton methods
(Quasi-)Newton methods 1 Introduction 1.1 Newton method Newton method is a method to find the zeros of a differentiable non-linear function g, x such that g(x) = 0, where g : R n R n. Given a starting
More informationSF2940: Probability theory Lecture 8: Multivariate Normal Distribution
SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2015 Timo Koski Matematisk statistik 24.09.2015 1 / 1 Learning outcomes Random vectors, mean vector, covariance matrix,
More informationRefined enumerations of alternating sign matrices. Ilse Fischer. Universität Wien
Refined enumerations of alternating sign matrices Ilse Fischer Universität Wien 1 Central question Which enumeration problems have a solution in terms of a closed formula that (for instance) only involves
More informationGeneral Framework for an Iterative Solution of Ax b. Jacobi s Method
2.6 Iterative Solutions of Linear Systems 143 2.6 Iterative Solutions of Linear Systems Consistent linear systems in real life are solved in one of two ways: by direct calculation (using a matrix factorization,
More information