Optimal linear-quadratic control
Martin Ellison

1 Motivation

The lectures so far have described a general method - value function iteration - for solving dynamic programming problems. However, one problem alluded to at the end of the last lecture was that the method suffers from the curse of dimensionality. If the number of state variables is large then there are many arguments in the value function and it becomes computationally very intensive to iterate the value function. In practice, a state space with dimension of about four or five already tests the limits of the current generation of computers. In this lecture, we examine a particular class of dynamic programming problems that can be solved relatively easily. We focus on problems of linear-quadratic control, in which the payoff function is quadratic and the transition equation is linear. Many standard problems in economics can be cast in such a linear-quadratic framework. We will show how a combination of analytical and numerical analysis can be used to derive the solution to the linear-quadratic problem.
2 Key reading

The formal analysis for this lecture is taken from Dynamic Macroeconomic Theory by Tom Sargent, Harvard University Press.

3 Other reading

Optimal linear-quadratic control is discussed in most graduate macroeconomics textbooks, e.g. chapter 4 of Recursive Macroeconomic Theory, 2nd ed., by Lars Ljungqvist and Tom Sargent, MIT Press. The concepts are taken from the engineering theory of optimal control, so more sophisticated treatments can be found in books such as Analysis and Control of Dynamic Economic Systems by Gregory Chow. Gauss codes for matrix Riccati equation iterations in a dynamic general equilibrium context are available from Morten Ravn's homepage.

4 Linear-quadratic control

The general framework we will analyse is one in which the agent chooses a vector of controls u_t to influence a vector of state variables x_t. We do not limit the dimension of either of these vectors, although it is natural to consider cases where the number of state variables exceeds the number of controls; otherwise it may well be that there is a very simple, trivial solution which controls the states perfectly. In the context of linear-quadratic control, we assume that the transition equation governing the evolution of the state is linear: linear in past values of the state variables and linear in current values of the control variables. We allow the problem to be stochastic by including random shocks (with variance-covariance matrix Σ) to the state variables. The payoff function is assumed to be quadratic in the state and control variables,
giving quadratic forms in the objective. The fully-specified linear-quadratic control problem is stated below. The symmetric matrices R and Q are the weights of the state and control variables in the payoff function. Matrices A and B govern the linear evolution of the state variables in the transition equation.

    min_{u_t} E_0 \sum_{t=0}^{\infty} \beta^t [x_t' R x_t + u_t' Q u_t]
    s.t. x_{t+1} = A x_t + B u_t + \varepsilon_{t+1}

In dynamic programming form, the value function is defined over the state variables:

    V(x_t) = \min_{u_t} [x_t' R x_t + u_t' Q u_t + \beta E_t V(x_{t+1})]   (1)

It is certainly possible for us to proceed as before by discretising the state space for the value function and applying value function iteration to converge to the optimal policy. However, such a procedure is computationally very intensive and unnecessary in the special linear-quadratic case. Instead, we will use a different approach which combines analytical and numerical methods. The key to the method is that we know the general form of the policy and value functions for linear-quadratic control problems. Armed with this knowledge, it is much easier to proceed. We begin by postulating a quadratic form for the value function, in which P is a symmetric matrix, so P' = P:

    V(x) = x' P x + d

We proceed by substituting this form (with as yet undetermined matrix P and constant d) into the value function (1). For convenience of notation, we drop the time subscripts; in all cases, x and u refer to time-t dated variables.
    V(x) = \min_u { x' R x + u' Q u + \beta E[(A x + B u + \varepsilon)' P (A x + B u + \varepsilon) + d] }

Expanding the quadratic terms in brackets, while remembering that (A x)' = x' A', gives cross-terms between the states, controls and shocks. The expected values of the stochastic shocks are zero, so terms of the form E[x' A' P \varepsilon] and E[u' B' P \varepsilon] drop out. We are left with

    V(x) = \min_u { x' R x + u' Q u + \beta (x' A' P A x + 2 u' B' P A x + u' B' P B u + E[\varepsilon' P \varepsilon] + d) }   (2)

The first order condition with respect to u can be used to derive optimal policy. Note that \partial(u' Q u)/\partial u = 2 Q u, Q' = Q and P' = P.

    \partial V / \partial u = 2 Q u + 2 \beta B' P B u + 2 \beta B' P A x = 0

Solving in terms of u implies

    u = -\beta (Q + \beta B' P B)^{-1} B' P A x

Or, more succinctly,

    u = -F x,   where   F = \beta (Q + \beta B' P B)^{-1} B' P A
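The first-order condition can be verified numerically: for a fixed P, the objective in (2) is quadratic in u, so the control implied by the formula should weakly beat any perturbed control. A minimal sketch in Python/NumPy; the random matrices here are illustrative, not the lecture's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, beta = 3, 2, 0.95

def spd(m):
    # Random symmetric positive-definite matrix
    M = rng.standard_normal((m, m))
    return M @ M.T + m * np.eye(m)

R, Q, P = spd(n), spd(k), spd(n)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, k))
x = rng.standard_normal(n)

def J(u):
    # One-period objective for fixed P; shock terms drop out of the minimisation
    xn = A @ x + B @ u
    return x @ R @ x + u @ Q @ u + beta * (xn @ P @ xn)

# Control implied by the first-order condition: u = -beta (Q + beta B'PB)^{-1} B'PA x
F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
u_star = -F @ x

# Any perturbed control should do weakly worse
worse = all(J(u_star + 0.1 * rng.standard_normal(k)) >= J(u_star) - 1e-9
            for _ in range(200))
print(worse)   # prints True
```

Because the Hessian 2(Q + βB'PB) is positive definite here, u_star is the unique minimiser, so every perturbation raises the objective.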
Several things are worthy of note at this stage. Firstly, optimal control requires the control vector to react linearly to the state variables. We have yet to confirm that this implies a quadratic value function as first postulated, but it already suggests that the policy function has a very simple form. Secondly, the coefficient matrix F in the policy function is a non-linear function of the fundamental matrices A, B, Q, R and the matrix P in the postulated value function. We can therefore approach the problem as one of determining either F or P. Our choice is to calculate P, and then calculate the implied F, but other techniques take the opposite approach. Economically, the policy reaction function is interesting because it is independent of the stochastic shocks. This is because certainty equivalence holds in a linear-quadratic framework: uncertainty has no effect on policy, unless shocks enter multiplicatively or payoffs are not quadratic.

We continue next to demonstrate that the linear policy function (derived from a postulated quadratic value function) does actually imply a quadratic value function. In the process, we will be able to determine the two unknowns P and d. To do this, we substitute the policy function u = -F x back into the value function (2):

    x' P x + d = x' R x + x' F' Q F x + \beta [x' (A - B F)' P (A - B F) x + E(\varepsilon' P \varepsilon) + d]

Comparing coefficients on the constant terms,

    d = \beta E(\varepsilon' P \varepsilon) + \beta d

We simplify this equation by noting that \varepsilon' P \varepsilon is a scalar and so equal to its own trace, which gives E(\varepsilon' P \varepsilon) = E[tr(P \varepsilon \varepsilon')] = tr(P E[\varepsilon \varepsilon']) = tr(P Σ).
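The trace manipulation used here, E(ε'Pε) = tr(PΣ), is easy to confirm by Monte Carlo. A short sketch; the particular P and Σ are illustrative choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])                 # an arbitrary symmetric P
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])             # shock variance-covariance matrix

# Draw shocks e ~ N(0, Sigma) and average the quadratic form e'Pe
e = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)
mc = np.einsum("ti,ij,tj->t", e, P, e).mean()

exact = np.trace(P @ Sigma)                # = 2.8 for these matrices
print(mc, exact)
```

The sample average converges to tr(PΣ) as the number of draws grows, confirming that only the variance-covariance matrix of the shocks matters for the constant term.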
    d = \frac{\beta}{1 - \beta} tr(P Σ)

This equation shows how the additive uncertainty caused by the stochastic element does have an effect on the value function, but that this effect is limited to the constant term, which is independent of policy. Hence, certainty equivalence holds in this respect. Comparing coefficients on the terms quadratic in x,

    P = R + F' Q F + \beta (A - B F)' P (A - B F)

Rearranging,

    P = R + \beta A' P A - \beta A' P B F - \beta F' B' P A + F' (Q + \beta B' P B) F

We know that optimal policy defines F as \beta (Q + \beta B' P B)^{-1} B' P A. Hence, we have

    P = R + \beta A' P A - 2 \beta^2 A' P B (Q + \beta B' P B)^{-1} B' P A
        + \beta^2 A' P B ((Q + \beta B' P B)^{-1})' (Q + \beta B' P B) (Q + \beta B' P B)^{-1} B' P A

Using the facts that (X^{-1})' = (X')^{-1} and (Q + \beta B' P B)' = (Q + \beta B' P B), this reduces to

    P = R + \beta A' P A - \beta^2 A' P B (Q + \beta B' P B)^{-1} B' P A

This equation confirms that a linear policy function does imply a quadratic value function. It is often known as the algebraic matrix Riccati equation. At present, it implicitly defines the matrix P in the value function in terms of the structural matrices A, B, Q and R. The matrix Riccati equation is as far as we can go analytically in linear-quadratic control. It does define P as a function of A, B, Q and R, but the relationship is not linear and is potentially
highly non-linear. Fortunately, a relatively simple iterative technique based on a matrix Riccati difference equation can be applied. Instead of trying to solve the Riccati equation directly, we start from an initial guess P_0 of the matrix in the value function. The guess P_j is updated to P_{j+1} according to

    P_{j+1} = R + \beta A' P_j A - \beta^2 A' P_j B (Q + \beta B' P_j B)^{-1} B' P_j A

This equation is iterated until convergence, which is guaranteed to be unique under very weak conditions. Specifically, all eigenvalues of A having modulus less than unity is a sufficient condition. In fact, even explosive systems with eigenvalues greater than one in absolute value can be handled if some other weak conditions hold. Iteration of the matrix Riccati equation is directly analogous to the value function iterations we discussed in previous lectures. In fact, what we are doing is iterating over the value function, with each successive matrix P_j equivalent to our earlier iterations over V_j. Once P has converged, it is a simple matter to calculate F in the optimal policy function.

5 Numerical application

To illustrate the practicalities of matrix Riccati difference equation iterations, we discuss Matlab code to solve a simple example of linear-quadratic control. Our model is one in which a central bank is trying to simultaneously control inflation π_t and output y_t by choosing the interest rate i_t. The instantaneous payoff function for the central bank is assumed to be quadratic in inflation, output and the interest rate:

    L_t = π_t^2 + y_t^2 + 0.1 i_t^2
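The iteration just described can be sketched in a few lines of Python/NumPy; the function name and the scalar test problem at the end are my own, not from the lecture:

```python
import numpy as np

def riccati_iterate(A, B, Q, R, beta, tol=1e-10, max_iter=10_000):
    """Iterate P <- R + beta A'PA - beta^2 A'PB (Q + beta B'PB)^{-1} B'PA to convergence."""
    P = np.zeros_like(R)               # a zero guess is fine as long as Q is invertible
    for _ in range(max_iter):
        G = np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
        P_new = R + beta * A.T @ P @ A - beta**2 * A.T @ P @ B @ G
        if np.max(np.abs(P_new - P)) < tol:
            # Converged: return P and the policy matrix F (policy is u = -F x)
            F = beta * np.linalg.solve(Q + beta * B.T @ P_new @ B, B.T @ P_new @ A)
            return P_new, F
        P = P_new
    raise RuntimeError("Riccati iteration did not converge")

# Scalar illustration: x' = 0.9 x + u, loss x^2 + u^2, discount 0.95
P, F = riccati_iterate(np.array([[0.9]]), np.array([[1.0]]),
                       np.array([[1.0]]), np.array([[1.0]]), 0.95)
print(P, F)
```

At convergence the returned P satisfies the algebraic matrix Riccati equation, so the iteration doubles as a check on the analytical fixed point.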
We assume that the central bank places equal weight on inflation and output deviations from target (normalised to zero for convenience) and a smaller weight on deviations in the interest rate from target. The objective of the central bank is to minimise the present discounted value of expected losses, with discounting at the rate β. The structure of the economy is given by two equations:

    π_{t+1} = 0.75 π_t - 0.5 i_t + ε^π_{t+1}
    y_{t+1} = 0.25 y_t - 0.5 i_t + ε^y_{t+1}

It is not intended that these equations be considered a serious representation of the structure of the economy. Rather, the purpose is to illustrate our technique. The first equation determines inflation, which is assumed to be highly persistent and negatively correlated with interest rates. The timing is such that current interest rate decisions only affect inflation with a lag - a timing convention favoured by Athanasios Orphanides amongst others. The second equation determines output in a similar fashion. High interest rates depress output, but output itself is not as persistent as inflation. The timing convention remains the same, so interest rate decisions only affect output with a lag. Both inflation and output are subject to (potentially correlated) random disturbances in the form of the shocks ε^π and ε^y. The full minimisation problem is

    min_{i_t} E_0 \sum_{t=0}^{\infty} \beta^t (π_t^2 + y_t^2 + 0.1 i_t^2)
    s.t. π_{t+1} = 0.75 π_t - 0.5 i_t + ε^π_{t+1}
         y_{t+1} = 0.25 y_t - 0.5 i_t + ε^y_{t+1}
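To get a feel for the two transition equations, they can be simulated with the policy instrument held fixed and the shocks switched off; both simplifications are mine, purely to isolate the persistence difference between inflation and output:

```python
import numpy as np

rho_pi, rho_y, b = 0.75, 0.25, -0.5    # coefficients of the two transition equations
T = 12
pi = np.zeros(T)
y = np.zeros(T)
pi[0] = y[0] = 1.0                     # both start one unit above target
i_rate = 0.0                           # interest rate held at target; shocks off

for t in range(T - 1):
    pi[t + 1] = rho_pi * pi[t] + b * i_rate
    y[t + 1] = rho_y * y[t] + b * i_rate

# Inflation deviations die out far more slowly than output deviations
print(pi[4], y[4])    # 0.75^4 versus 0.25^4
```

After four periods an initial inflation deviation retains about 32% of its size, while an output deviation retains well under 1%, which is why inflation turns out to be the more problematic variable for the central bank.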
The general form of the optimal linear-quadratic control problem is

    min_{u_t} E_0 \sum_{t=0}^{\infty} \beta^t [x_t' R x_t + u_t' Q u_t]
    s.t. x_{t+1} = A x_t + B u_t + ε_{t+1}

To cast our model in this general form, we define the state variables as x_t = (π_t, y_t)', the control variable as u_t = i_t, and the disturbances as ε_t = (ε^π_t, ε^y_t)'. The matrices R, Q, A and B are given by

    R = [1 0; 0 1],   Q = 0.1,   A = [0.75 0; 0 0.25],   B = [-0.5; -0.5]

The theory discussed in the previous section implies that all we need to do is iterate the matrix Riccati equation to find P, then calculate the policy reaction coefficients F. The equations we will need are therefore

    P_{j+1} = R + β A' P_j A - β² A' P_j B (Q + β B' P_j B)^{-1} B' P_j A
    F = β (Q + β B' P B)^{-1} B' P A

The Matlab code to solve the optimal linear-quadratic control problem is discussed below. Firstly, a new program is started by clearing the workspace, and the discount factor is defined.

clear;
beta=0.99;

The matrices Q, R, A and B are first defined to be of the correct dimension and the non-zero elements are set.
Q=zeros(1,1);
R=zeros(2,2);
A=zeros(2,2);
B=zeros(2,1);
Q(1,1)=0.1;
R(1,1)=1; R(1,2)=0; R(2,1)=0; R(2,2)=1;
A(1,1)=0.75; A(1,2)=0; A(2,1)=0; A(2,2)=0.25;
B(1,1)=-0.5; B(2,1)=-0.5;

The next section initialises the matrix Riccati equation iterations. The variable d is used to measure the largest absolute difference in the elements of P between successive iterations. The variable i is simply a count of how many iterations have been carried out. The initial guess of the matrix P is contained in the matrix P0; as initial values, we use a small multiple of the identity matrix. These starting values are used rather than zero because, if Q = 0 and P0 is zero, then the matrix Q + βB'P0B in the Riccati equation is not invertible. In our example Q ≠ 0, and we could just as easily have used zeros as starting values. In practice, the algorithm is not sensitive to starting values in the vast majority of cases.
d=1;
i=0;
P0=0.01*eye(2);   % scalar multiple lost in transcription; any positive value works since Q is nonzero

Begin the matrix Riccati equation iterations. We continue iterating until the maximum absolute difference in the elements of P between iterations is less than a small tolerance. The new value P_{j+1} is stored in the matrix P1. After each iteration, the new value P1 is compared to the old value P0. The difference is contained in Pd, from which the maximum absolute value is extracted into d. If d is not sufficiently small then the guess P0 is updated and the iterations continue. For each iteration, the iteration number and maximum absolute deviation are collected in I and D respectively in order to be printed at the end.

while d > 1e-8   % convergence tolerance (the original value was lost in transcription)
    P1=R+beta*A'*P0*A-(beta*A'*P0*B)*inv(Q+beta*B'*P0*B)*(beta*B'*P0*A);
    Pd=P1-P0;
    d=max(max(abs(Pd)));
    P0=P1;
    i=i+1;
    I(i)=i; D(i)=d;
end

The matrix Riccati equation iterations are now complete. The policy function matrix F is calculated from the final iteration of the matrix P. Both the policy function matrix F and the value function matrix P are printed in the command window.

P=P0;
F=-inv(Q+beta*B'*P*B)*(beta*B'*P*A);   % coded with a leading minus, so the policy is i = F*x
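As a cross-check on the Matlab, the same iterations are easy to reproduce in Python/NumPy with the model's matrices (β = 0.99, Q = 0.1, R = I, A = diag(0.75, 0.25), B = (-0.5, -0.5)'); the convergence tolerance here is my choice:

```python
import numpy as np

beta = 0.99
Q = np.array([[0.1]])
R = np.eye(2)
A = np.diag([0.75, 0.25])
B = np.array([[-0.5],
              [-0.5]])

P0 = np.zeros((2, 2))      # Q is nonzero, so a zero initial guess is admissible
d, its = 1.0, 0
while d > 1e-10:
    # Matrix Riccati difference equation update
    P1 = (R + beta * A.T @ P0 @ A
          - (beta * A.T @ P0 @ B)
          @ np.linalg.solve(Q + beta * B.T @ P0 @ B, beta * B.T @ P0 @ A))
    d = np.max(np.abs(P1 - P0))
    P0 = P1
    its += 1

P = P0
F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)

print("iterations:", its)
print("reaction coefficients (-F):", -F)   # interest-rate response to (pi, y)
print("P:", P)
```

With the convention u = -Fx, both entries of -F come out positive (the interest rate rises when inflation or output is above target) and the reaction to inflation exceeds the reaction to output, matching the discussion of the results.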
ID=[I(2:length(I))' D(2:length(D))'];
disp('      i          d');
disp(ID);
disp('SOLUTIONS');
disp('F');
disp(F);
disp('P');
disp(P);

The output of the code is a table of the iteration count i against the maximum absolute deviation d, followed by the solution matrices F and P. [The numerical output was lost in transcription.]
As can be seen from the low number of iterations, the matrix Riccati equation iterations converge quickly. Returning to the context of our numerical model, the results imply policy and value functions of the form

    i_t = f_π π_t + f_y y_t
    V(π_t, y_t) = p_ππ π_t^2 + 2 p_πy π_t y_t + p_yy y_t^2 + d

with f_π > f_y > 0, p_ππ > p_yy > 0 and p_πy < 0. According to the policy function, the interest rate needs to rise whenever inflation or output is above target. The result is intuitively appealing, with the central bank deflating the economy when inflation and/or output is too high. The larger reaction to inflation than to output is due to our assumption that inflation is more persistent than output. Inflation is intrinsically more problematic in the model since, if inflation deviates from target in the current period, the deviation is likely to persist into the next period. The value function can be interpreted similarly. The coefficient on the square of inflation exceeds that on the square of output precisely because the higher persistence of inflation makes it more problematic. The negative coefficient on the cross-product of inflation and output reflects the fact that it is easier to control inflation and output when they are deviating from target in the same direction. A rise in the interest rate depresses both inflation and output, so if inflation is above target and output is below target (i.e. stagflation) then it is very difficult to stabilise the economy.
More informationNumerical Methods I Eigenvalue Problems
Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course G63.2010.001 / G22.2420-001, Fall 2010 September 30th, 2010 A. Donev (Courant Institute)
More information10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method
578 CHAPTER 1 NUMERICAL METHODS 1. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after
More information3. INNER PRODUCT SPACES
. INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.
More informationQUALITY ENGINEERING PROGRAM
QUALITY ENGINEERING PROGRAM Production engineering deals with the practical engineering problems that occur in manufacturing planning, manufacturing processes and in the integration of the facilities and
More informationStatistical Machine Learning
Statistical Machine Learning UoC Stats 37700, Winter quarter Lecture 4: classical linear and quadratic discriminants. 1 / 25 Linear separation For two classes in R d : simple idea: separate the classes
More informationEnhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm
1 Enhancing the SNR of the Fiber Optic Rotation Sensor using the LMS Algorithm Hani Mehrpouyan, Student Member, IEEE, Department of Electrical and Computer Engineering Queen s University, Kingston, Ontario,
More information0.8 Rational Expressions and Equations
96 Prerequisites 0.8 Rational Expressions and Equations We now turn our attention to rational expressions - that is, algebraic fractions - and equations which contain them. The reader is encouraged to
More informationMultiple Regression: What Is It?
Multiple Regression Multiple Regression: What Is It? Multiple regression is a collection of techniques in which there are multiple predictors of varying kinds and a single outcome We are interested in
More information5.1 Radical Notation and Rational Exponents
Section 5.1 Radical Notation and Rational Exponents 1 5.1 Radical Notation and Rational Exponents We now review how exponents can be used to describe not only powers (such as 5 2 and 2 3 ), but also roots
More informationRegression III: Advanced Methods
Lecture 16: Generalized Additive Models Regression III: Advanced Methods Bill Jacoby Michigan State University http://polisci.msu.edu/jacoby/icpsr/regress3 Goals of the Lecture Introduce Additive Models
More informationMISSING DATA TECHNIQUES WITH SAS. IDRE Statistical Consulting Group
MISSING DATA TECHNIQUES WITH SAS IDRE Statistical Consulting Group ROAD MAP FOR TODAY To discuss: 1. Commonly used techniques for handling missing data, focusing on multiple imputation 2. Issues that could
More informationAlgebraic Concepts Algebraic Concepts Writing
Curriculum Guide: Algebra 2/Trig (AR) 2 nd Quarter 8/7/2013 2 nd Quarter, Grade 9-12 GRADE 9-12 Unit of Study: Matrices Resources: Textbook: Algebra 2 (Holt, Rinehart & Winston), Ch. 4 Length of Study:
More information1.2 Solving a System of Linear Equations
1.. SOLVING A SYSTEM OF LINEAR EQUATIONS 1. Solving a System of Linear Equations 1..1 Simple Systems - Basic De nitions As noticed above, the general form of a linear system of m equations in n variables
More informationDepartment of Chemical Engineering ChE-101: Approaches to Chemical Engineering Problem Solving MATLAB Tutorial VI
Department of Chemical Engineering ChE-101: Approaches to Chemical Engineering Problem Solving MATLAB Tutorial VI Solving a System of Linear Algebraic Equations (last updated 5/19/05 by GGB) Objectives:
More informationVector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty
More informationOutline. Generalize Simple Example
Solving Simultaneous Nonlinear Algebraic Equations Larry Caretto Mechanical Engineering 309 Numerical Analysis of Engineering Systems March 5, 014 Outline Problem Definition of solving simultaneous nonlinear
More informationDirect Methods for Solving Linear Systems. Matrix Factorization
Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011
More informationQuadratic forms Cochran s theorem, degrees of freedom, and all that
Quadratic forms Cochran s theorem, degrees of freedom, and all that Dr. Frank Wood Frank Wood, fwood@stat.columbia.edu Linear Regression Models Lecture 1, Slide 1 Why We Care Cochran s theorem tells us
More informationMethod To Solve Linear, Polynomial, or Absolute Value Inequalities:
Solving Inequalities An inequality is the result of replacing the = sign in an equation with ,, or. For example, 3x 2 < 7 is a linear inequality. We call it linear because if the < were replaced with
More informationPrinciple Component Analysis and Partial Least Squares: Two Dimension Reduction Techniques for Regression
Principle Component Analysis and Partial Least Squares: Two Dimension Reduction Techniques for Regression Saikat Maitra and Jun Yan Abstract: Dimension reduction is one of the major tasks for multivariate
More informationEconomics 1011a: Intermediate Microeconomics
Lecture 12: More Uncertainty Economics 1011a: Intermediate Microeconomics Lecture 12: More on Uncertainty Thursday, October 23, 2008 Last class we introduced choice under uncertainty. Today we will explore
More informationMATH 551 - APPLIED MATRIX THEORY
MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points
More informationECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE
ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE YUAN TIAN This synopsis is designed merely for keep a record of the materials covered in lectures. Please refer to your own lecture notes for all proofs.
More informationSimple Regression Theory II 2010 Samuel L. Baker
SIMPLE REGRESSION THEORY II 1 Simple Regression Theory II 2010 Samuel L. Baker Assessing how good the regression equation is likely to be Assignment 1A gets into drawing inferences about how close the
More informationOn Marginal Effects in Semiparametric Censored Regression Models
On Marginal Effects in Semiparametric Censored Regression Models Bo E. Honoré September 3, 2008 Introduction It is often argued that estimation of semiparametric censored regression models such as the
More informationEigenvalues, Eigenvectors, Matrix Factoring, and Principal Components
Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components The eigenvalues and eigenvectors of a square matrix play a key role in some important operations in statistics. In particular, they
More informationThe Basic New Keynesian Model
The Basic New Keynesian Model January 11 th 2012 Lecture notes by Drago Bergholt, Norwegian Business School Drago.Bergholt@bi.no I Contents 1. Introduction... 1 1.1 Prologue... 1 1.2 The New Keynesian
More information4.5 Linear Dependence and Linear Independence
4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then
More informationAu = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.
Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry
More information