CONTROLLER DESIGN USING POLYNOMIAL MATRIX DESCRIPTION


Didier Henrion, Centre National de la Recherche Scientifique, France
Michael Šebek, Center for Applied Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic

Keywords: Polynomial methods, polynomial matrices, dynamics assignment, deadbeat regulation, H2 optimal control, polynomial Diophantine equation, polynomial spectral factorization, Polynomial Toolbox for Matlab

Contents
1. Introduction
2. Polynomial approach to three classical control problems
2.1. Dynamics assignment
2.2. Deadbeat regulation
2.3. H2 optimal control
3. Numerical methods for polynomial matrices
3.1. Diophantine equation
3.2. Spectral factorization equation
4. Conclusion
Bibliography

Glossary
Deadbeat regulation: A control problem that consists in driving the state of the system to the origin in the shortest possible time.
Diophantine equation: A linear polynomial matrix equation.
Dynamics assignment: A control problem that consists in assigning the closed-loop similarity invariants.
H2 optimal control: A control problem that consists in stabilizing a system while minimizing the H2 norm of some transfer function.
Interpolation: An algorithm that works with the values taken by polynomials at given points, rather than with polynomial coefficients.
Matrix fraction description: A ratio of two polynomial matrices describing a matrix transfer function.
Polynomial matrix: A matrix with polynomial entries or, equivalently, a polynomial with matrix coefficients.
Polynomial Toolbox: The Matlab toolbox for polynomials, polynomial matrices and their applications in systems, signals and control.
Similarity invariants: A set of polynomials describing the dynamics of a system.
Spectral factor: The solution of a quadratic polynomial matrix equation.
Spectral factorization: A quadratic polynomial matrix equation.
Sylvester matrix: A structured matrix arising when solving polynomial Diophantine equations.

Summary

Polynomial matrix techniques can be used as an alternative to state-space techniques when designing controllers for linear systems. In this article, we show how polynomial techniques can be invoked to solve three classical control problems: dynamics assignment, deadbeat control and H2 optimal control. We assume that the control problems are formulated in the state-space setting, and we show how to solve them in the polynomial setting, thus illustrating the linkage existing between the two approaches. Finally, we mention the numerical methods available to solve problems involving polynomial matrices.

1. Introduction

Polynomial matrices arise naturally when modeling physical systems. For example, many dynamical systems in mechanics, acoustics or linear stability of flows in fluid dynamics can be represented by a second-order vector differential equation

A_0 x(t) + A_1 ẋ(t) + A_2 ẍ(t) = 0.

Upon application of the Laplace transform, studying the above equation amounts to studying the characteristic matrix polynomial

A(s) = A_0 + s A_1 + s^2 A_2.

The constant coefficient matrices A_0, A_1 and A_2 are known as the stiffness, damping and inertia matrices, respectively, usually having some special structure depending on the type of loads acting on the system. For example, when A_0 is symmetric negative definite, A_1 is anti-symmetric and A_2 is symmetric positive definite, the equation models the so-called gyroscopic systems. In the same way, third-degree polynomial matrices arise in aero-acoustics. In fluid mechanics, the study of the spatial stability of the Orr-Sommerfeld equation yields a quartic matrix polynomial.
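As a small aside on computation, the zeros of the quadratic matrix polynomial A(s) can be found by a companion linearization that turns the quadratic problem into a generalized eigenvalue problem of doubled size. The following minimal Python sketch shows the idea; the data matrices are illustrative, not taken from a particular physical system.

import numpy as np
from scipy.linalg import eig

def polyeig2(A0, A1, A2):
    # Zeros of A(s) = A0 + s A1 + s^2 A2 via companion linearization:
    # A(s) x = 0 is equivalent to (F - s G) [x; s x] = 0 with
    # F = [0 I; -A0 -A1] and G = [I 0; 0 A2].
    n = A0.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    F = np.block([[Z, I], [-A0, -A1]])
    G = np.block([[I, Z], [Z, A2]])
    return eig(F, G, right=False)   # generalized eigenvalues only

# gyroscopic-like illustrative data: A0 symmetric negative definite,
# A1 anti-symmetric, A2 symmetric positive definite
A0 = -np.eye(2)
A1 = np.array([[0., 1.], [-1., 0.]])
A2 = np.eye(2)
print(polyeig2(A0, A1, A2))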

It is therefore not surprising that most control design problems boil down to solving mathematical equations involving polynomial matrices. For historical reasons, prior to the sixties most control problems were formulated for scalar plants, and they involved manipulations on scalar polynomials and scalar rational functions. The extension of these methods to the multivariable (multi-input, multi-output) case was not obvious at that time, and it was achieved only within the newly developed state-space setting. In the seventies, several multivariable results were therefore not available in the polynomial setting, which somehow renewed the interest in this approach. Now most of the results are available in both the state-space and polynomial settings.

In this article we study in detail three standard control problems and their solution by means of polynomial methods. The first problem is known as the dynamics assignment problem, a generalization of eigenvalue placement. The second problem is called the deadbeat regulation problem. It consists of finding a control law such that the state of a discrete-time system is steered to the origin as fast as possible, i.e. in a minimal number of steps. The third and last problem is H2 optimal control, where a stabilizing control law is sought that minimizes the H2 norm of some transfer function. All these problems are formulated in the state-space setting and then solved with the help of polynomial methods, to better illustrate the linkage between the two approaches. After describing these problems, we focus on more practical aspects, enumerating the numerical methods available to solve the polynomial equations.

2. Polynomial approach to three classical control problems

2.1. Dynamics assignment

We consider a discrete-time linear system

x_{k+1} = A x_k + B u_k

with n states and m inputs, and we study the effects of the linear static state feedback

u_k = -K x_k

on the system dynamics. One of the simplest problems we can think of is that of enforcing closed-loop eigenvalues, i.e. finding a matrix K such that the closed-loop system matrix A - BK has prescribed eigenvalues. In the polynomial setting, assigning the closed-loop eigenvalues amounts to assigning the characteristic polynomial det(zI - A + BK). This is possible if and only if (A, B) is a reachable pair. (See Pole placement control for more information on pole assignment, and System characteristics: stability, controllability, observability for more information on reachability.)

A more involved problem is that of enforcing not only the eigenvalues, but also the eigenstructure of the closed-loop system matrix. In the polynomial setting, the eigenstructure is captured by the so-called similarity invariants of the matrix A - BK, i.e. the polynomials that appear in the Smith diagonal form of the polynomial matrix zI - A + BK. Rosenbrock's fundamental theorem captures the degrees of freedom one has in enforcing these invariants.

Let (A, B) be a reachable pair with reachability indices k_1 >= ... >= k_m. Let c_1(z), ..., c_p(z) be monic polynomials such that c_{i+1}(z) divides c_i(z) and

sum_{i=1..p} deg c_i(z) = n.

Then there exists a feedback matrix K such that the closed-loop matrix A - BK has similarity invariants c_i(z) if and only if

sum_{i=1..k} deg c_i(z) >= sum_{i=1..k} k_i,  for k = 1, ..., p.

The above theorem basically says that one can place the eigenvalues at arbitrary specified locations, but the structure of each multiple eigenvalue is limited: one cannot split it into as many repeated eigenvalues as one might wish. Rosenbrock's result is constructive, and we now describe a procedure to assign a set of invariant polynomials by static state feedback.
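The admissibility test in the theorem is easy to mechanize. The following minimal Python sketch (an illustration, not part of the original development) checks the degree inequalities for given reachability indices and candidate invariant polynomial degrees; divisibility of the candidate polynomials is assumed to be checked separately.

def rosenbrock_feasible(reach_idx, inv_degs):
    # reach_idx: reachability indices k_1, ..., k_m
    # inv_degs:  degrees of the candidate invariant polynomials
    #            c_1(z), ..., c_p(z), with c_{i+1} dividing c_i
    k = sorted(reach_idx, reverse=True)
    d = sorted(inv_degs, reverse=True)
    p = max(len(k), len(d))
    k += [0] * (p - len(k))          # pad so partial sums are comparable
    d += [0] * (p - len(d))
    if sum(d) != sum(k):             # total degree must equal n
        return False
    return all(sum(d[:j]) >= sum(k[:j]) for j in range(1, p + 1))

# worked example below: k = (2, 2) and deg c = (3, 1)
print(rosenbrock_feasible([2, 2], [3, 1]))   # True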

First we must find right coprime polynomial matrices D_R(z) and N_R(z), with D_R(z) column-reduced and column-degree ordered, such that

(zI - A)^{-1} B = N_R(z) D_R^{-1}(z).    (1)

See Polynomial and matrix fraction description for the definition of a column-reduced matrix. A column-reduced matrix can be put into column-degree ordered form by suitable column permutations. Then we must form a column-reduced polynomial matrix C(z) with invariant polynomials c_1(z), ..., c_p(z) which has the same column degrees as D_R(z). Finally, we solve the equation

X_L D_R(z) + Y_L N_R(z) = C(z)    (2)

for constant matrices X_L and Y_L, and let

K = X_L^{-1} Y_L.

The above equation over polynomial matrices is called a polynomial Diophantine equation. Under the assumptions of Rosenbrock's theorem, there always exists a constant solution X_L and Y_L to this equation.
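Since X_L and Y_L are constant, equation (2) can be solved simply by matching like powers of z, which yields an ordinary linear system of equations. A minimal numpy sketch follows, assuming each polynomial matrix is stored as a 3-D array P with P[k] the matrix coefficient of z^k; it illustrates the principle rather than a production algorithm.

import numpy as np

def solve_constant_diophantine(D, N, C):
    # Solve X @ D(z) + Y @ N(z) = C(z) for CONSTANT matrices X, Y by
    # matching matrix coefficients of like powers of z.
    deg = max(len(D), len(N), len(C))
    coef = lambda P, k: P[k] if k < len(P) else np.zeros(P.shape[1:])
    # for each power k:  [X Y] @ [D_k; N_k] = C_k; stack over all k
    M = np.hstack([np.vstack([coef(D, k), coef(N, k)]) for k in range(deg)])
    Cc = np.hstack([coef(C, k) for k in range(deg)])
    XY, *_ = np.linalg.lstsq(M.T, Cc.T, rcond=None)
    XY = XY.T
    if not np.allclose(XY @ M, Cc):
        raise ValueError("no constant solution exists")
    return XY[:, :D.shape[1]], XY[:, D.shape[1]:]   # X, Y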

As an example, take a reachable pair (A, B) with n = 4 states and m = 2 inputs. We seek a feedback matrix K such that A - BK has similarity invariants

c_1(z) = z^3 + z^2,  c_2(z) = z.

Following a procedure of conversion from state-space form to matrix fraction description (MFD) form (see Polynomial and matrix fraction description) we obtain

D_R(z) = [ z^2  z           ]
         [ 0    z^2 + z + 1 ]

N_R(z) = [ 0  z ]
         [ 1  1 ]
         [ z  z ]
         [ 0  1 ]

as a right coprime pair satisfying relation (1). The reachability indices of (A, B) are the column degrees of the column-reduced matrix D_R(z), namely k_1 = 2 and k_2 = 2. Rosenbrock's inequalities are satisfied since

deg c_1(z) = 3 >= 2 = k_1,  deg c_1(z) + deg c_2(z) = 4 = k_1 + k_2.

A column-reduced matrix with column degrees k_1, k_2 and c_1(z), c_2(z) as similarity invariants is found to be

C(z) = [ z^2  0       ]
       [ z    z^2 + z ]

Then we have to solve the Diophantine equation

X_L [ z^2  z           ] + Y_L [ 0  z ] = [ z^2  0       ]
    [ 0    z^2 + z + 1 ]       [ 1  1 ]   [ z    z^2 + z ]
                               [ z  z ]
                               [ 0  1 ]

We find the constant solution

X_L = [ 1  0 ]   Y_L = [ -1  0  0   0 ]
      [ 0  1 ]         [ -1  0  1  -1 ]

corresponding to the feedback matrix K = X_L^{-1} Y_L = Y_L yielding the desired dynamics.

2.2. Deadbeat regulation

We consider a discrete-time linear system

x_{k+1} = A x_k + B u_k

with n states and m inputs, and our objective is to determine a linear state feedback

u_k = -K x_k

such that every initial state x_0 is driven to the origin in the shortest possible time. This is the deadbeat regulation problem.

Deadbeat regulation is a typical discrete-time control problem. Indeed, in the continuous-time setting, it is not possible to find a linear state feedback that steers the system state to the origin in finite time. The idea of driving the initial state to the origin is closely related to discrete-time controllability.

To solve the above problem formulated in the state-space setting, we will use polynomial techniques. We describe a procedure in the same spirit as in the previous paragraph. The fundamental constructive result on deadbeat control can be formulated as follows.

There exists a state-feedback deadbeat regulator if and only if the pair (A, B) is controllable (see System characteristics: stability, controllability, observability for the definition of controllability). Under this assumption, let X_R(z^{-1}), Y_R(z^{-1}) be a minimum-column-degree polynomial solution of the Diophantine equation

(I - A z^{-1}) X_R(z^{-1}) + B z^{-1} Y_R(z^{-1}) = I.    (3)

Then X_R(z^{-1}) is non-singular and

K = Y_R(z^{-1}) X_R^{-1}(z^{-1})

is a deadbeat gain.

The above equation is a matrix polynomial Diophantine equation in the indeterminate z^{-1}. Note that in the previous paragraph, we have shown that solving the dynamics assignment problem amounts to solving a matrix polynomial Diophantine equation in the indeterminate z, where the right-hand-side matrix contains all the required similarity invariants. It turns out that the deadbeat control problem can be viewed as a special case of dynamics assignment. To see this, build a right coprime MFD N_R(z), D_R(z) of the transfer function

(zI - A)^{-1} B = N_R(z) D_R^{-1}(z)

where the polynomial matrix D_R(z) is column-reduced with ordered column degrees k_1, ..., k_m (see Polynomial and matrix fraction description). Then define

C_1(z) = diag(z^{k_1}, ..., z^{k_m})

and

N̄_R(z^{-1}) = N_R(z) C_1^{-1}(z),  D̄_R(z^{-1}) = D_R(z) C_1^{-1}(z)

as right coprime polynomial matrices in the indeterminate z^{-1}. They satisfy the equation

(I - A z^{-1})^{-1} B z^{-1} = N̄_R(z^{-1}) D̄_R^{-1}(z^{-1})

and can be found in the generalized Bézout identity

[ I - A z^{-1}   B z^{-1} ] [ X_R(z^{-1})  -N̄_R(z^{-1}) ]   [ I  0 ]
[ -Y_L           X_L      ] [ Y_R(z^{-1})   D̄_R(z^{-1}) ] = [ 0  I ]

where the constant matrices X_L and Y_L are such that

X_L D̄_R(z^{-1}) + Y_L N̄_R(z^{-1}) = I.

Upon multiplication on the right by the matrix diag(z^{k_1}, ..., z^{k_m}), this latter equation is precisely the dynamics assignment Diophantine equation (2) with right-hand-side matrix

C(z) = diag(z^{k_1}, ..., z^{k_m}).

It means that the deadbeat regulator assigns all the closed-loop eigenvalues at the origin.

As an example, take a controllable pair (A, B) for which we seek a deadbeat gain matrix K. Diophantine equation (3) has a minimum-column-degree solution pair X_R(z^{-1}), Y_R(z^{-1}) corresponding to the deadbeat feedback gain K = Y_R(z^{-1}) X_R^{-1}(z^{-1}). Closed-loop dynamics are given by

x(z^{-1}) = sum_{k>=0} z^{-k} (A - BK)^k x_0

so every initial condition x_0 is steered to the origin in three steps at most, since (A - BK)^k = 0 for k >= 3.
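Whatever route produces the candidate gain, deadbeat behavior is easy to verify numerically: A - BK must be nilpotent, and the nilpotency index is the settling time in steps. A small numpy check (illustrative only):

import numpy as np

def deadbeat_steps(A, B, K, tol=1e-9):
    # smallest k with (A - B K)^k = 0, or None if K is not a deadbeat gain
    M = A - B @ K
    P = np.eye(A.shape[0])
    for k in range(1, A.shape[0] + 1):
        P = P @ M
        if np.linalg.norm(P) < tol:
            return k        # every x_0 reaches the origin in k steps
    return None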

2.3. H2 optimal control

The H2 optimal control problem consists of stabilizing a linear system in such a way that its transfer matrix attains a minimum norm in the Hardy space H2. The linear quadratic (LQ) optimal control problem (see Optimal linear quadratic control) is a special case of the H2 optimal control problem.

The continuous-time plant is modeled by the equations

ẋ(t) = A x(t) + B_1 v(t) + B_2 u(t)
z(t) = C_1 x(t) + D_11 v(t) + D_12 u(t)
y(t) = C_2 x(t) + D_21 v(t) + D_22 u(t)

giving rise to the plant transfer matrix

[ P_11(s)  P_12(s) ]   [ C_1 ]                             [ D_11  D_12 ]
[ P_21(s)  P_22(s) ] = [ C_2 ] (sI - A)^{-1} [ B_1  B_2 ] + [ D_21  D_22 ]

Denoting by K(s) the transfer matrix of the controller, the control system transfer matrix between v(t) and z(t) becomes

G(s) = P_11(s) + P_12(s) [I - K(s) P_22(s)]^{-1} K(s) P_21(s).

The standard H2 optimal control problem consists in finding a controller K(s) that stabilizes the control system and minimizes the squared H2 norm of G(s), defined as

(1/(2πj)) ∮ trace G'(-s) G(s) ds

(the prime denoting transposition) where the integral is a contour integral up the imaginary axis and then around an infinite semi-circle in the left half-plane.
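For a stable state-space realization G(s) = C (sI - A)^{-1} B, this contour integral can be evaluated without any integration, through the controllability Gramian: ||G||_2^2 = trace(C W_c C') with A W_c + W_c A' + B B' = 0. A minimal scipy sketch of this standard state-space formula (not specific to the polynomial method):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm_sq(A, B, C):
    # squared H2 norm of C (sI - A)^{-1} B, for Hurwitz A:
    # solve A Wc + Wc A' = -B B' for the controllability Gramian Wc
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    return np.trace(C @ Wc @ C.T)

# sanity check: G(s) = 1/(s+1) has ||G||_2^2 = 1/2
print(h2_norm_sq(np.array([[-1.]]), np.array([[1.]]), np.array([[1.]])))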

We make the following assumptions on the plant:

(A, B_1) is stabilizable, (C_1, A) is detectable, and D_11 = 0;
(A, B_2) is controllable, (C_2, A) is observable, and D_22 = 0;
D_12' C_1 = 0 and D_12' D_12 = I;
B_1 D_21' = 0 and D_21 D_21' = I.

See System characteristics: stability, controllability, observability for more information on these concepts. The above assumptions greatly simplify the exposition of the results. They can be relaxed, but at the price of additional technicalities.

Then write

C_2 (sI - A)^{-1} = D_L^{-1}(s) N_L(s)

where N_L(s) and D_L(s) are left coprime polynomial matrices such that D_L(s) is row-reduced. Further, write

(sI - A)^{-1} B_2 = N_R(s) D_R^{-1}(s)

where N_R(s) and D_R(s) are right coprime polynomial matrices such that D_R(s) is column-reduced. Let C_L(s) be a square polynomial matrix with Hurwitz determinant (i.e. all its roots have strictly negative real parts), such that

C_L(s) C_L'(-s) = [N_L(s) B_1 + D_L(s) D_21] [N_L(-s) B_1 + D_L(-s) D_21]'

and let C_R(s) be a square polynomial matrix with Hurwitz determinant, such that

C_R'(-s) C_R(s) = [C_1 N_R(-s) + D_12 D_R(-s)]' [C_1 N_R(s) + D_12 D_R(s)].

The matrices C_L(s) and C_R(s) are called spectral factors, and they are uniquely determined up to right and left orthogonal factors, respectively.
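In the scalar case, such a spectral factor can be read directly off the zeros of the para-Hermitian polynomial: they come in pairs symmetric with respect to the imaginary axis, and the Hurwitz factor collects those in the open left half-plane. A minimal numpy sketch for the scalar continuous-time case (positivity on the imaginary axis is assumed, not checked):

import numpy as np

def hurwitz_spectral_factor(b):
    # b: coefficients of b(s) = c(-s) c(s), highest power first;
    # returns the coefficients of the Hurwitz factor c(s)
    roots = np.roots(b)
    left = roots[roots.real < 0]        # zeros in the open left half-plane
    c = np.real(np.poly(left))          # monic polynomial with those zeros
    n = len(left)
    lead = b[0] * (-1) ** n             # b's leading coeff = (-1)^n c_n^2
    return np.sqrt(lead) * c

# example appearing later in the text: s^4 + 4 = (s^2 - 2s + 2)(s^2 + 2s + 2)
print(hurwitz_spectral_factor([1, 0, 0, 0, 4]))   # ~ [1, 2, 2]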

Under the above assumptions, there exists a unique constant solution X_R, Y_R to the polynomial Diophantine equation

D_L(s) X_R + N_L(s) Y_R = C_L(s)    (4)

from which we build the feedback matrix

F = Y_R X_R^{-1}.

Similarly, there exists a unique constant solution X_L, Y_L to the polynomial Diophantine equation

X_L D_R(s) + Y_L N_R(s) = C_R(s)    (5)

from which we build the feedback matrix

G = X_L^{-1} Y_L.

Then the unique H2 optimal controller is given by

K(s) = G (sI - A + F C_2 + B_2 G)^{-1} F.

A matrix fraction description can then be readily determined (see Polynomial and matrix fraction description). From equations (4) and (5) we see that H2 optimal control can be interpreted as a two-stage dynamics assignment problem, where the assigned optimal invariant polynomials are obtained by solving two polynomial matrix spectral factorization problems.

Now we treat a very simple example to illustrate the procedure. The plant data are given by

A = [ 0  1 ]   B_2 = [ 0 ]   C_2 = [ 1  0 ]
    [ 0  0 ]         [ 1 ]

together with matrices B_1, C_1, D_12 and D_21, and we can check that all the standard assumptions are satisfied. First we build a left MFD by extracting a minimal polynomial basis of the left null-space:

[ N_L(s)  -D_L(s) ] [ sI - A ] = 0
                    [ C_2    ]

We find

N_L(s) = [ s  1 ],  D_L(s) = s^2.

Then we build a right MFD by extracting a minimal polynomial basis of the right null-space:

[ sI - A  -B_2 ] [ N_R(s) ] = 0
                 [ D_R(s) ]

We find

N_R(s) = [ 1 ],  D_R(s) = s^2.
         [ s ]

Following this, we must perform the spectral factorization

C_L(s) C_L'(-s) = [N_L(s) B_1 + D_L(s) D_21] [N_L(-s) B_1 + D_L(-s) D_21]' = s^4 - 2 s^2 + 1 = (s + 1)^2 (s - 1)^2

and we find the Hurwitz spectral factor

C_L(s) = s^2 + 2 s + 1 = (s + 1)^2.

Then we perform the spectral factorization

C_R'(-s) C_R(s) = [C_1 N_R(-s) + D_12 D_R(-s)]' [C_1 N_R(s) + D_12 D_R(s)] = s^4 + 4 = (s^2 - 2 s + 2)(s^2 + 2 s + 2)

and we find the Hurwitz spectral factor

C_R(s) = s^2 + 2 s + 2 = (s + 1 + i)(s + 1 - i).

The next step consists in solving Diophantine equation (4), which here reads

s^2 X_R + [ s  1 ] Y_R = s^2 + 2 s + 1.

We easily find

X_R = 1,  Y_R = [ 2 ]
                [ 1 ]

so that

F = Y_R X_R^{-1} = [ 2 ]
                   [ 1 ]

Then we solve Diophantine equation (5), which here reads

X_L s^2 + Y_L [ 1 ] = s^2 + 2 s + 2.
              [ s ]

We easily find

X_L = 1,  Y_L = [ 2  2 ]

so that

G = X_L^{-1} Y_L = [ 2  2 ].

Finally, we can build a right MFD for the transfer function (sI - A + F C_2 + B_2 G)^{-1} F by extracting a minimal polynomial basis of the right null-space:

[ sI - A + F C_2 + B_2 G   -F ] [ N_K(s) ] = 0
                                [ D_K(s) ]

We find

N_K(s) = [ 2 s + 5 ],  D_K(s) = s^2 + 4 s + 7
         [ s - 4   ]

so that, with

N̄_K(s) = G N_K(s) = 6 s + 2,

we obtain the H2 optimal controller with transfer function

K(s) = N̄_K(s) D_K^{-1}(s) = (6 s + 2) / (s^2 + 4 s + 7).
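The two scalar factorizations appearing in this example are easy to confirm with numpy's polynomial arithmetic (a plain sanity check):

import numpy as np

# C_R'(-s) C_R(s): (s^2 - 2s + 2)(s^2 + 2s + 2) = s^4 + 4
print(np.polymul([1, -2, 2], [1, 2, 2]))   # [1 0 0 0 4]

# C_L'(-s) C_L(s): (s^2 - 2s + 1)(s^2 + 2s + 1) = s^4 - 2 s^2 + 1
print(np.polymul([1, -2, 1], [1, 2, 1]))   # [1 0 -2 0 1]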

3. Numerical methods for polynomial matrices

In the previous section we have shown that the main ingredients when solving control problems with polynomial techniques are the polynomial Diophantine equation (a linear system of equations over polynomial matrices) and the spectral factorization equation (a quadratic system of equations over polynomial matrices). In this section we briefly review the numerical routines that are available to address these problems.

3.1. Diophantine equation

A polynomial Diophantine equation generally has the form

A_L(s) X_R(s) + B_L(s) Y_R(s) = C_L(s)

where the polynomial matrices A_L(s), B_L(s) and C_L(s) are given, and the polynomial matrices X_R(s) and Y_R(s) are to be found. Usually, A_L(s) has special row-reducedness properties, and a solution X_R(s) of minimum column degrees is generally sought. One can also encounter the dual equation

X_L(s) A_R(s) + Y_L(s) B_R(s) = C_R(s)

or even two-sided equations of the type

X_L(s) A_R(s) + B_L(s) Y_R(s) = C(s)

where the unknown X_L(s) multiplies the matrix A_R(s) from the left, and the unknown Y_R(s) multiplies the matrix B_L(s) from the right. With the help of the Kronecker product, we can however always rewrite these equations in the general form

A(s) X(s) = B(s)    (6)

where A(s) and B(s) are given, and X(s) is to be found.
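The Kronecker reduction relies, coefficient by coefficient, on the classical vec/Kronecker identity vec(X A) = (A' ⊗ I) vec(X). A short numpy check of this identity, with random illustrative data:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 3))
A = rng.standard_normal((3, 4))
# column-major (Fortran-order) vectorization
lhs = (X @ A).flatten(order="F")
rhs = np.kron(A.T, np.eye(2)) @ X.flatten(order="F")
print(np.allclose(lhs, rhs))   # True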

There exist several methods to solve equation (6).

Sylvester matrix. We assume that the expected degree n of the solution X(s) is given, and we identify like powers of the indeterminate s to obtain the linear system of equations

[ A_0               ]             [ B_0     ]
[ A_1  A_0          ] [ X_0 ]     [ B_1     ]
[  .   A_1  .       ] [ X_1 ]     [   .     ]
[ A_m   .   .  A_0  ] [  .  ]  =  [   .     ]
[      A_m  .  A_1  ] [ X_n ]     [   .     ]
[            .  .   ]             [   .     ]
[              A_m  ]             [ B_{n+m} ]

The block matrix appearing on the left-hand side is called a Sylvester matrix. The linear system has a special banded block Toeplitz structure, so that fast algorithms can be designed for its solution (a code sketch of this method is given at the end of this subsection). If there is no solution of degree n, then we may try to find a solution of degree n + 1, and so on. Fast structured algorithms can be designed that use the information obtained at the previous step to improve performance. When properly implemented, these algorithms are numerically stable.

Interpolation. We assume that the polynomial matrices are known not by their coefficients but by the values they take at a certain number of points s_i. Then, from the matrix values A(s_i) and B(s_i) we build a linear system of equations. The block matrix appearing in this system can be shown to be the product of a block Vandermonde matrix times the above Sylvester matrix. Basically, this means that the problem will be badly conditioned unless we properly select the interpolation points. The best choice, corresponding to perfect conditioning of the Vandermonde matrix, is a set of complex points equidistantly spaced along the unit circle. The interpolated linear system can then be solved by any non-structured algorithm based on the numerically stable QR factorization.

Polynomial reduction. We reduce the matrix A(s) to some special form, such as a triangular Hermite form, with the help of successive elementary operations (see Polynomial and matrix fraction description for more information on the Hermite form and elementary operations). This method is not recommended from the numerical point of view.

Finally, we point out that in the special case that B(s) = 0 in equation (6), solving the Diophantine equation amounts to extracting a polynomial basis X(s) for the right null space of the polynomial matrix A(s), i.e. we have to solve the equation

A(s) X(s) = 0.

This problem typically arises when converting from state-space to matrix fraction description (MFD), or from right MFD to left MFD and vice-versa. There are many bases for the right null-space of A(s), and generally we are interested in the one which is row-reduced with minimum row degrees, the so-called minimal polynomial basis. There exists a specific algorithm to extract the minimal polynomial basis of a null space. It is based on numerically stable operations and takes advantage of the special problem structure. The degree of the polynomial basis, usually not known in advance, is found as a byproduct of the algorithm.
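Returning to the Sylvester method, here is the minimal numpy sketch announced above: it builds the banded block Toeplitz matrix for a prescribed solution degree n and solves the system in the least-squares sense. It illustrates the principle only; the fast structured and degree-stepping refinements mentioned above are not implemented.

import numpy as np

def solve_sylvester_diophantine(Acoefs, Bcoefs, n):
    # Acoefs = [A_0, ..., A_m], Bcoefs = [B_0, ..., B_{n+m}]:
    # coefficient matrices of A(s) and B(s); returns [X_0, ..., X_n]
    m = len(Acoefs) - 1
    p, q = Acoefs[0].shape
    r = Bcoefs[0].shape[1]
    S = np.zeros((p * (n + m + 1), q * (n + 1)))
    for j in range(n + 1):               # block column j carries X_j
        for i, Ai in enumerate(Acoefs):  # A_i lands in block row i + j
            S[(i + j) * p:(i + j + 1) * p, j * q:(j + 1) * q] = Ai
    rhs = np.vstack([Bcoefs[k] if k < len(Bcoefs) else np.zeros((p, r))
                     for k in range(n + m + 1)])
    X, *_ = np.linalg.lstsq(S, rhs, rcond=None)
    if not np.allclose(S @ X, rhs):
        raise ValueError("no solution of degree %d" % n)
    return [X[j * q:(j + 1) * q] for j in range(n + 1)]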

3.2. Spectral factorization equation

In the continuous-time spectral factorization problem, we are given a para-Hermitian polynomial matrix B(s), i.e. such that

B(s) = B'(-s)

and we try to find a Hurwitz stable polynomial matrix A(s) (i.e. all its zeros are located in the open left half-plane) such that

A'(-s) A(s) = B(s).

Under the assumption that B(s) is positive definite when evaluated along the imaginary axis, such a spectral factor always exists.

In the discrete-time case, the para-Hermitian polynomial matrix satisfies

B(z) = B'(z^{-1})

so B(z) features both positive and negative powers of z. When B(z) is positive definite along the unit circle, there always exists a Schur spectral factor A(z) (i.e. all its zeros are located in the open unit disk) such that

A'(z^{-1}) A(z) = B(z).

The problem is relatively easy to solve in the scalar case (when both A(s) and B(s) are scalar polynomials). In the matrix case the problem is more involved. Basically, we can distinguish the following approaches.

Diagonalization. It is based on the reduction of B(s) into a diagonal form with elementary operations. It is not recommended from the numerical point of view.

Successive factor extraction. It first requires computing the zeros of the polynomial matrix. Typically this is done with the numerically stable Schur decomposition. Then the zeros are extracted successively in a symmetric way.

Interpolation. It first requires computing the zeros of the polynomial matrix. Then, all the zeros are extracted simultaneously via an interpolation equation.

Algebraic Riccati equation. It is a reformulation of the problem in the state-space setting. There exist several numerically efficient algorithms to solve algebraic Riccati equations.

It must be pointed out that in the scalar discrete-time case there exists a still more efficient FFT-based algorithm to perform polynomial spectral factorization. Unfortunately, no nice extension of this result is available for the continuous-time case or the matrix case.
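As an alternative to the FFT-based route, a simple (if less efficient) scalar discrete-time factorization just selects the zeros inside the unit disk, mirroring the continuous-time root-selection sketch given earlier. A minimal numpy sketch for a scalar para-Hermitian b(z) given by its symmetric coefficient sequence (positivity on the unit circle assumed, not checked):

import numpy as np

def schur_spectral_factor(b):
    # b: real coefficients b_{-n}, ..., b_0, ..., b_n of the para-Hermitian
    # b(z) = a(z^{-1}) a(z); by symmetry these are also the coefficients of
    # the ordinary polynomial z^n b(z). Returns a_n, ..., a_0, highest first.
    roots = np.roots(b)                  # zeros come in pairs (r, 1/r)
    inside = roots[np.abs(roots) < 1]    # the n zeros in the open unit disk
    p = np.real(np.poly(inside))         # monic polynomial with those zeros
    gamma = np.sqrt(b[-1] / p[-1])       # matches b_n = gamma^2 * p(0)
    return gamma * p

# example: b(z) = 2 z^{-1} + 5 + 2 z = (1 + 2 z^{-1})(1 + 2 z)
print(schur_spectral_factor([2., 5., 2.]))   # [2., 1.], i.e. a(z) = 1 + 2 z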

Finally, a more general polynomial spectral factorization problem is encountered in LQG game theory and H∞ optimal control, where a stable polynomial matrix A(s) and a possibly indefinite constant matrix J are sought such that (in continuous time)

A'(-s) J A(s) = B(s)

for a given para-Hermitian matrix B(s). This is called the J-spectral factorization problem. There exists also a discrete-time version of the problem. All the algorithms mentioned above can handle J-spectral factorization.

4. Conclusion

With the help of three simple control problems, namely dynamics assignment, deadbeat regulation and H2 optimal control, we have shown how polynomial methods naturally arise as an alternative to state-space methods. Dynamics assignment amounts to solving a special linear equation over polynomial matrices called the Diophantine equation. Deadbeat control can be seen as a special case of dynamics assignment where all the poles are located at the origin. Minimization of a quadratic cost in H2 optimal control implies solving a special quadratic equation over polynomial matrices called the spectral factorization equation. Numerical methods available to solve these polynomial equations were then reviewed.

Although not mentioned in this article, several other control problems can be solved with polynomial techniques. Most notably, the H∞ problem and most robust control problems fit the approach. When applying polynomial methods to solve these problems, the designer will face linear and quadratic matrix polynomial equations similar to those that arose when solving the simpler problems described here.

Bibliography

Grimble M. J. and Kučera V. (Editors) (1996). Polynomial methods for control systems design. London: Springer Verlag. [Collection of papers on the polynomial approach to solving both H2 and H∞ control and filtering problems.]

Kučera V. (1979). Discrete linear control: the polynomial equation approach. London: Wiley. [This is a pioneering textbook that paved the way for the use of polynomials, polynomial matrices and polynomial equations to solve linear control problems.]

Kučera V. (1991). Analysis and design of discrete linear control systems. London: Prentice Hall. [Textbook that investigates connections between the state-space approach and the polynomial approach to several standard linear control problems. Most of the material proposed in this article was taken from this reference.]

Kwakernaak H. (1993). Robust control and H∞ optimization - Tutorial paper. Automatica 29(2). [Tutorial exposition of the polynomial approach to H∞ optimal regulation theory, with a focus on the mixed sensitivity problem for robust control system design.]

Kwakernaak H. and Šebek M. (1994). Polynomial J-spectral factorization. IEEE Trans. Autom. Control 39(2). [Collection of several algorithms for the J-spectral factorization of a polynomial matrix.]

Kwakernaak H. and Šebek M. (1998). The Polynomial Toolbox for Matlab. Prague, CZ: PolyX Ltd. Web site at www.polyx.com or www.polyx.cz. [The Matlab toolbox for polynomials, polynomial matrices and their applications in systems, signals and control. Contains various solvers for linear and quadratic matrix polynomial equations.]
