Chapter 7

Lyapunov Exponents

Lyapunov exponents tell us the rate of divergence of nearby trajectories, a key component of chaotic dynamics. For one dimensional maps the exponent is simply the average $\langle \log |df/dx| \rangle$ over the dynamics (chapter 4). In this chapter the concept is generalized to higher dimensional maps and flows. There is now a set of exponents, equal in number to the dimension of the phase space, $\lambda_1, \lambda_2, \ldots$, which we choose to order in decreasing value. The exponents can be understood intuitively in geometric terms: line lengths separating trajectories grow as $e^{\lambda_1 t}$ (where $t$ is the continuous time in flows and the iteration index for maps); areas grow as $e^{(\lambda_1+\lambda_2)t}$; volumes as $e^{(\lambda_1+\lambda_2+\lambda_3)t}$; etc. However, areas and volumes become strongly distorted over long times, since the dimension corresponding to $\lambda_1$ grows more rapidly than that corresponding to $\lambda_2$, etc., and so this is not immediately a practical way to calculate the exponents.

7.1 Maps

Consider the map

$$U_{n+1} = F(U_n) \qquad (7.1)$$

with $U$ the phase space vector. We want to know what happens to a small change in $U_0$. This is given by the iteration of the tangent space, governed by the Jacobian matrix

$$K_{ij}(U_n) = \left. \frac{\partial F_i}{\partial U^{(j)}} \right|_{U=U_n}. \qquad (7.2)$$

Then if the change in $U_n$ is $\varepsilon_n$,

$$\varepsilon_{n+1} = K(U_n)\,\varepsilon_n, \qquad (7.3)$$

so that

$$\frac{\partial U_n^{(i)}}{\partial U_0^{(j)}} = M^n_{ij} = \left[ K(U_{n-1})\,K(U_{n-2})\cdots K(U_0) \right]_{ij}. \qquad (7.4)$$
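As a quick check of the one dimensional statement above, the sketch below estimates $\langle \log |df/dx| \rangle$ along an orbit. The logistic map and the parameter value $a = 4$ (for which the exponent is known to equal $\log 2$) are illustrative choices, not taken from the text.

```python
import numpy as np

# Lyapunov exponent of a 1D map as the orbit average <log |df/dx|>.
# Illustrative example: the logistic map f(x) = a x (1 - x) at a = 4,
# where the exact exponent is log 2.
a = 4.0
f = lambda x: a * x * (1.0 - x)
df = lambda x: a * (1.0 - 2.0 * x)

x = 0.3
for _ in range(1000):              # discard a transient
    x = f(x)

n, total = 100_000, 0.0
for _ in range(n):
    total += np.log(abs(df(x)))
    x = f(x)

print("lambda =", total / n)       # close to log(2) = 0.6931...
```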
7.2 Flows

For continuous time systems

$$\frac{dU}{dt} = f(U) \qquad (7.5)$$

a change $\varepsilon(t)$ in $U(t)$ evolves as

$$\frac{d\varepsilon}{dt} = K(U)\,\varepsilon \quad\text{with}\quad K_{ij} = \left. \frac{\partial f_i}{\partial U^{(j)}} \right|_{U=U(t)}. \qquad (7.6)$$

Then

$$\frac{\partial U^{(i)}(t)}{\partial U^{(j)}(t_0)} = M_{ij}(t,t_0) \qquad (7.7)$$

with $M$ satisfying

$$\frac{dM}{dt} = K(U(t))\,M. \qquad (7.8)$$

7.3 Oseledec's Multiplicative Ergodic Theorem

Roughly, the eigenvalues of $M$ for large $t$ are $e^{\lambda_i n}$ or $e^{\lambda_i (t-t_0)}$ for maps and flows respectively. The existence of the appropriate limits is the content of Oseledec's multiplicative ergodic theorem [1]. The result is stated here in the language of flows, but the version for maps should then be obvious.
For almost any initial point $U(t_0)$ there exists an orthonormal set of vectors $v_i(t_0)$, $i = 1, \ldots, n$, with $n$ the dimension of the phase space, such that

$$\lambda_i = \lim_{t\to\infty} \frac{1}{t-t_0} \log \left\| M(t,t_0)\, v_i(t_0) \right\| \qquad (7.9)$$

exists. For ergodic systems the $\{\lambda_i\}$ do not depend on the initial point, and so are global properties of the dynamical system. The $\lambda_i$ may be calculated as the logs of the eigenvalues of

$$\left[ M^T(t,t_0)\, M(t,t_0) \right]^{1/2(t-t_0)}, \qquad (7.10)$$

with $T$ denoting the transpose. The $v_i(t_0)$ are the eigenvectors of $M^T(t,t_0)\,M(t,t_0)$ and are independent of $t$ for large $t$.

Some insight into this theorem can be obtained by considering the singular value decomposition (SVD) of $M = M(t,t_0)$ (figure 7.1a). Any real matrix can be decomposed as

$$M = W D V^T \qquad (7.11)$$

where $D$ is a diagonal matrix whose diagonal values $d_i$ are the square roots of the eigenvalues of $M^T M$, and $V$, $W$ are orthogonal matrices, with the columns $v_i$ of $V$ the orthonormal eigenvectors of $M^T M$ and the columns $w_i$ of $W$ the orthonormal eigenvectors of $M M^T$. Pictorially, this shows us that a unit circle of initial conditions is mapped by $M$ into an ellipse: the principal axes of the ellipse are the $w_i$, and the lengths of the semi-axes are the $d_i$. Furthermore the preimages of the $w_i$ are the $v_i$, i.e. the $v_i$ are the particular choice of orthonormal axes for the unit circle that are mapped onto the ellipse axes. The multiplicative ergodic theorem says that the vectors $v_i$ are independent of $t$ for large $t$, and the $d_i$ yield the Lyapunov exponents in this limit. The vector $v_i$ defines a direction such that an initial displacement in this direction is asymptotically amplified at the rate given by $\lambda_i$. For a fixed final point $U(t)$ one would similarly expect the $w_i$ to be independent of $t_0$ for most $t_0$ and large $t - t_0$. Either the $v_i$ or the $w_i$ may be called Lyapunov eigenvectors.
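The SVD picture can be tried directly on a low dimensional map by forming the product matrix (7.4) and taking the logs of its singular values, as in (7.10). The sketch below does this for the Hénon map with the standard parameter values $a = 1.4$, $b = 0.3$; the example and its parameters are illustrative choices, not from the text. Note that the product must be kept short: once its condition number approaches the inverse machine precision, round-off destroys the small singular value, which is precisely the practical difficulty taken up in section 7.4.

```python
import numpy as np

# Finite-time Lyapunov exponents from the SVD of M_n, eqs. (7.10)-(7.11),
# for the Henon map (an illustrative standard example):
#   x' = 1 - a x^2 + y,  y' = b x,  with Jacobian K = [[-2 a x, 1], [b, 0]].
a, b = 1.4, 0.3
step = lambda u: np.array([1.0 - a * u[0]**2 + u[1], b * u[0]])
jac = lambda u: np.array([[-2.0 * a * u[0], 1.0], [b, 0.0]])

u = np.array([0.1, 0.1])
for _ in range(1000):                # relax onto the attractor
    u = step(u)

# Keep n small: the condition number of M_n grows like e^{(lambda_1-lambda_2) n}
# and must stay below 1/machine-eps for the small singular value to survive.
n, segments = 15, 2000
est = np.zeros(2)
for _ in range(segments):
    M = np.eye(2)
    for _ in range(n):
        M = jac(u) @ M               # M_n = K(U_{n-1}) ... K(U_0), eq. (7.4)
        u = step(u)
    d = np.linalg.svd(M, compute_uv=False)
    est += np.log(d) / n
print("finite-time estimates:", est / segments)    # roughly (0.42, -1.62)
print("exact sum check: log|det K| =", np.log(b))  # lambda_1 + lambda_2 = log b
```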
Figure 7.1: Calculating Lyapunov exponents. (a) Oseledec's theorem (SVD picture): orthonormal vectors $v_1$, $v_2$ can be found at the initial time $t_0$ that $M(t,t_0)$ maps onto the orthogonal vectors $w_1$, $w_2$ along the axes of an ellipse. For large $t-t_0$ the $v_i$ are independent of $t$, and the lengths of the ellipse axes grow according to the Lyapunov exponents. (b) Gram-Schmidt procedure: arbitrary orthonormal vectors $O_1$, $O_2$ map to $P_1$, $P_2$, which are then orthogonalized by the Gram-Schmidt procedure, preserving the growing area of the parallelepiped.

7.4 Practical Calculation

The difficulty of the calculation is that for any initial displacement vector $v$ (which may be an attempt to approximate one of the $v_i$) any component along $v_1$ will be enormously amplified relative to the other components, so that the iterated displacement becomes almost parallel to the iterated $v_1$, with all the information on the other Lyapunov exponents contained in the tiny correction to this. Various numerical techniques have been implemented [2] to maintain control of the small correction, of which the most intuitive, although not necessarily the most accurate, is the method using Gram-Schmidt orthogonalization after a number of steps [3] (figure 7.1b). Orthogonal unit displacement vectors $O^{(1)}, O^{(2)}, \ldots$ are iterated according to the Jacobian to give, after some number of iterations $n_1$ (for a map) or some time $t_1$ (for a flow), $P^{(1)} = M O^{(1)}$, $P^{(2)} = M O^{(2)}$, etc. We will use $O^{(1)}$ to calculate $\lambda_1$, $O^{(2)}$ to calculate $\lambda_2$, etc. The vectors $P^{(i)}$ will all tend to align along a single direction. We keep track of the orthogonal components using Gram-Schmidt orthogonalization.
Write $P^{(1)} = N^{(1)} \hat P^{(1)}$ with $N^{(1)}$ the magnitude and $\hat P^{(1)}$ the unit vector giving the direction. Define $\tilde P^{(2)}$ as the component of $P^{(2)}$ normal to $P^{(1)}$,

$$\tilde P^{(2)} = P^{(2)} - \left( P^{(2)} \cdot \hat P^{(1)} \right) \hat P^{(1)}, \qquad (7.12)$$

and then write $\tilde P^{(2)} = N^{(2)} \hat P^{(2)}$. Notice that the area $|P^{(1)} \times P^{(2)}| = |P^{(1)}|\,|\tilde P^{(2)}|$ is preserved by this transformation, and so we can use $\tilde P^{(2)}$ (in fact its norm $N^{(2)}$) to calculate $\lambda_2$. For dimensions larger than 2 the further vectors $P^{(i)}$ are successively orthogonalized to all previous vectors. This process is then repeated, and the exponents are given by (quoting the case of maps)

$$e^{n\lambda_1} = N^{(1)}(n_1)\, N^{(1)}(n_2) \cdots$$
$$e^{n\lambda_2} = N^{(2)}(n_1)\, N^{(2)}(n_2) \cdots \qquad (7.13)$$

etc., with $n = n_1 + n_2 + \cdots$.

Comparing with the singular value decomposition, we can describe the Gram-Schmidt method as following the growth of the areas of parallelepipeds, whereas the SVD description follows the growth of ellipses.

Example 1: the Lorenz Model

The Lorenz equations (chapter 1) are

$$\dot X = -\sigma(X - Y)$$
$$\dot Y = rX - Y - XZ$$
$$\dot Z = XY - bZ. \qquad (7.14)$$

A perturbation $\varepsilon = (\delta X, \delta Y, \delta Z)$ evolves according to the tangent space equations given by linearizing (7.14):

$$\delta\dot X = -\sigma(\delta X - \delta Y)$$
$$\delta\dot Y = r\,\delta X - \delta Y - (Z\,\delta X + X\,\delta Z)$$
$$\delta\dot Z = Y\,\delta X + X\,\delta Y - b\,\delta Z \qquad (7.15)$$

or

$$\frac{d\varepsilon}{dt} = \begin{pmatrix} -\sigma & \sigma & 0 \\ r-Z & -1 & -X \\ Y & X & -b \end{pmatrix} \varepsilon, \qquad (7.16)$$

defining the Jacobian matrix $K$. To calculate the Lyapunov exponents start with three orthonormal unit vectors $t^{(1)} = (1,0,0)$, $t^{(2)} = (0,1,0)$ and $t^{(3)} = (0,0,1)$ and evolve the components of each vector according to the tangent equations (7.16). (Since the Jacobian depends on $X$, $Y$, $Z$ this means we evolve $(X,Y,Z)$ and the $t^{(i)}$ together as a twelve dimensional coupled system.) After a number of integration steps (chosen for numerical convenience) calculate the magnification of the vector $t^{(1)}$ and renormalize it to unit magnitude. Then project $t^{(2)}$ normal to $t^{(1)}$, calculate the magnification of the resulting vector, and renormalize to unit magnitude. Finally project $t^{(3)}$ normal to the preceding two orthogonal vectors and renormalize to unit magnitude. The product of the magnification factors for each vector over a large number of iterations of this procedure, evolving the equations for a total time $t$, leads to $e^{\lambda_i t}$. Note that in the case of the Lorenz model (and some other simple examples) the trace of $K$ is independent of the position on the attractor, in this case $-(1+\sigma+b)$, so that we immediately have the result for the sum of the exponents $\lambda_1 + \lambda_2 + \lambda_3$, a useful check of the algorithm. (The corresponding result for a map would be for a constant determinant of the Jacobian: $\sum_i \lambda_i = \ln |\det K|$.)
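A minimal implementation of this procedure is sketched below: the Lorenz equations and the three tangent vectors are integrated together as a twelve dimensional system with a fourth order Runge-Kutta step, and the Gram-Schmidt renormalization is applied at fixed intervals. The parameter values $\sigma = 10$, $r = 28$, $b = 8/3$ and all step sizes are illustrative choices, not specified in the text.

```python
import numpy as np

sigma, r, b = 10.0, 28.0, 8.0 / 3.0    # assumed standard chaotic parameter values

def flow(u):
    X, Y, Z = u
    return np.array([-sigma * (X - Y), r * X - Y - X * Z, X * Y - b * Z])

def jac(u):                             # the matrix K of eq. (7.16)
    X, Y, Z = u
    return np.array([[-sigma, sigma, 0.0],
                     [r - Z, -1.0, -X],
                     [Y, X, -b]])

def rhs(state):
    # Twelve dimensional coupled system: trajectory plus three tangent
    # vectors, stored as the columns of T.
    u, T = state[:3], state[3:].reshape(3, 3)
    return np.concatenate([flow(u), (jac(u) @ T).ravel()])

def rk4(state, h):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * h * k1)
    k3 = rhs(state + 0.5 * h * k2)
    k4 = rhs(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h, steps_per_gs, n_gs = 0.01, 10, 20000
state = np.concatenate([np.array([1.0, 1.0, 1.0]), np.eye(3).ravel()])
for _ in range(500):                    # relax onto the attractor first
    state = rk4(state, h)
state[3:] = np.eye(3).ravel()           # reset tangent vectors to t^(1..3)

logs = np.zeros(3)
for _ in range(n_gs):
    for _ in range(steps_per_gs):
        state = rk4(state, h)
    T = state[3:].reshape(3, 3).copy()
    for i in range(3):                  # Gram-Schmidt: project, record norm, renormalize
        for j in range(i):
            T[:, i] -= (T[:, j] @ T[:, i]) * T[:, j]
        N = np.linalg.norm(T[:, i])
        logs[i] += np.log(N)
        T[:, i] /= N
    state[3:] = T.ravel()

lam = logs / (n_gs * steps_per_gs * h)
print("lambda =", lam)                  # roughly (0.91, 0.0, -14.6)
print("sum =", lam.sum(), "vs -(1+sigma+b) =", -(1.0 + sigma + b))
```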
Example 2: the Baker's Map

For the Baker's map the Lyapunov exponents can be calculated analytically. For the map in the form

$$x_{n+1} = \begin{cases} \lambda_a x_n & \text{if } y_n < \alpha \\ (1-\lambda_b) + \lambda_b x_n & \text{if } y_n > \alpha \end{cases}$$
$$y_{n+1} = \begin{cases} y_n/\alpha & \text{if } y_n < \alpha \\ (y_n-\alpha)/\beta & \text{if } y_n > \alpha \end{cases} \qquad (7.17)$$

with $\beta = 1 - \alpha$, the exponents are

$$\lambda_1 = -\alpha \log \alpha - \beta \log \beta > 0$$
$$\lambda_2 = \alpha \log \lambda_a + \beta \log \lambda_b < 0. \qquad (7.18)$$

This follows easily since the stretching in the $y$ direction is $\alpha^{-1}$ or $\beta^{-1}$ depending on whether $y_n$ is less than or greater than $\alpha$, and the measure is uniform in the $y$ direction, so that the probabilities of an iteration falling in these two regions are just $\alpha$ and $\beta$ respectively. Similarly the contraction in the $x$ direction is $\lambda_a$ or $\lambda_b$ for these two cases.
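Because the Jacobian of the Baker's map is diagonal, the orbit averages of the logs of the stretching and contraction factors give the exponents directly, so the analytic result (7.18) can be checked in a few lines. The parameter values below are illustrative choices.

```python
import numpy as np

# Numerical check of eq. (7.18) for the Baker's map.
# Parameter values are illustrative, not taken from the text.
alpha, lam_a, lam_b = 0.4, 0.3, 0.25
beta = 1.0 - alpha

x, y = 0.3, 0.2337                     # arbitrary initial point
n, s1, s2 = 1_000_000, 0.0, 0.0
for _ in range(n):
    if y < alpha:
        s1 += np.log(1.0 / alpha)      # y-stretching factor 1/alpha
        s2 += np.log(lam_a)            # x-contraction factor lambda_a
        x, y = lam_a * x, y / alpha
    else:
        s1 += np.log(1.0 / beta)       # y-stretching factor 1/beta
        s2 += np.log(lam_b)            # x-contraction factor lambda_b
        x, y = (1.0 - lam_b) + lam_b * x, (y - alpha) / beta

print("lambda_1:", s1 / n, "analytic:", -alpha * np.log(alpha) - beta * np.log(beta))
print("lambda_2:", s2 / n, "analytic:", alpha * np.log(lam_a) + beta * np.log(lam_b))
```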
Numerical examples

Numerical examples on 2D maps are given in the demonstrations.

7.5 Other Methods

7.5.1 Householder transformation

The Gram-Schmidt orthogonalization is actually a method of implementing QR decomposition. Any matrix $M$ can be written

$$M = QR \qquad (7.19)$$

with $Q$ an orthogonal matrix

$$Q = \left[\, w_1 \;\; w_2 \;\cdots\; w_n \,\right]$$

and $R$ an upper triangular matrix

$$R = \begin{pmatrix} \nu_1 & * & \cdots & * \\ 0 & \nu_2 & & \vdots \\ \vdots & & \ddots & * \\ 0 & \cdots & 0 & \nu_n \end{pmatrix}, \qquad (7.20)$$

where $*$ denotes a nonzero (in general) element. In particular, for the tangent iteration matrix $M$ we can write

$$M = M_{N-1} M_{N-2} \cdots M_0 \qquad (7.21)$$

for the successive steps $t_i$ or $n_i$ for flows or maps. Then writing

$$M_0 = Q_1 R_0, \quad M_1 Q_1 = Q_2 R_1, \quad \text{etc.} \qquad (7.22)$$

we get

$$M = Q_N R_{N-1} R_{N-2} \cdots R_0, \qquad (7.23)$$

so that $Q = Q_N$ and $R = R_{N-1} R_{N-2} \cdots R_0$. Furthermore the exponents are

$$\lambda_i = \lim_{t\to\infty} \frac{1}{t-t_0} \ln R_{ii}. \qquad (7.24)$$

The correspondence with the Gram-Schmidt orthogonalization is that the columns of the $Q_i$ are the unit vectors $\hat P^{(1)}, \hat P^{(2)}, \ldots$ and the $\nu_i$ are the norms $N^{(i)}$. However an alternative procedure, known as the Householder transformation, may give better numerical convergence [1], [4].
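The stepwise factorization (7.22) is easy to implement with a library QR routine; the sketch below applies it to the Hénon map used earlier (again an illustrative choice). Note that numpy.linalg.qr may return negative diagonal entries in $R$, so absolute values are taken before the logs.

```python
import numpy as np

# Lyapunov exponents by repeated QR factorization, eqs. (7.22)-(7.24),
# for the Henon map with assumed standard values a = 1.4, b = 0.3.
a, b = 1.4, 0.3
step = lambda u: np.array([1.0 - a * u[0]**2 + u[1], b * u[0]])
jac = lambda u: np.array([[-2.0 * a * u[0], 1.0], [b, 0.0]])

u = np.array([0.1, 0.1])
for _ in range(1000):                   # relax onto the attractor
    u = step(u)

n = 100_000
Q = np.eye(2)
logs = np.zeros(2)
for _ in range(n):
    Q, R = np.linalg.qr(jac(u) @ Q)     # M_k Q_k = Q_{k+1} R_k, eq. (7.22)
    logs += np.log(np.abs(np.diag(R)))  # accumulate log nu_i
    u = step(u)

print("lambda =", logs / n)             # roughly (0.42, -1.62)
```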
7.5.2 Evolution of the singular value decomposition

The trick of this method is to find a way to evolve the matrices $W$, $D$ in the singular value decomposition (7.11) directly. This appears to be possible only for continuous time systems, and has been implemented by Kim and Greene [5].

7.6 Significance of Lyapunov Exponents

A positive Lyapunov exponent may be taken as the defining signature of chaos. For attractors of maps or flows, the Lyapunov exponents also sharply discriminate between the different types of dynamics: a fixed point will have all negative exponents; a limit cycle will have one zero exponent, with all the rest negative; and an $m$-frequency quasiperiodic orbit (motion on an $m$-torus) will have $m$ zero exponents, with all the rest negative. (Note, of course, that a fixed point of a map that is a Poincaré section of a flow corresponds to a periodic orbit of the flow.) For a flow there is in fact always one zero exponent, except for fixed point attractors. This is shown by noting that the phase space velocity $\dot U$ satisfies the tangent equations:

$$\frac{d\dot U^{(i)}}{dt} = \frac{\partial f_i}{\partial U^{(j)}}\, \dot U^{(j)}, \qquad (7.25)$$

so that for this direction

$$\lambda = \lim_{t\to\infty} \frac{1}{t} \log \left| \dot U(t) \right|, \qquad (7.26)$$

which tends to zero except on the approach to a fixed point.
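Since the velocity solves the tangent equation exactly, the zero exponent can be verified without any extra machinery: on a bounded attractor that is not a fixed point, $|\dot U(t)|$ remains bounded above and away from zero, so $(1/t)\log|\dot U(t)| \to 0$. A minimal self-contained check on the Lorenz model (same assumed parameters as in Example 1):

```python
import numpy as np

# Zero exponent along the flow direction: epsilon(t) = dU/dt solves eq. (7.25),
# and |dU/dt| stays bounded on the attractor, so (1/t) log|dU/dt| -> 0.
sigma, r, b = 10.0, 28.0, 8.0 / 3.0
f = lambda u: np.array([-sigma * (u[0] - u[1]),
                        r * u[0] - u[1] - u[0] * u[2],
                        u[0] * u[1] - b * u[2]])

def rk4(u, h):
    k1 = f(u); k2 = f(u + 0.5*h*k1); k3 = f(u + 0.5*h*k2); k4 = f(u + h*k3)
    return u + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

h, u = 0.01, np.array([1.0, 1.0, 1.0])
for k in range(1, 100_001):
    u = rk4(u, h)
    if k % 20_000 == 0:
        t = k * h
        print(f"t = {t:6.0f}   (1/t) log|dU/dt| = {np.log(np.linalg.norm(f(u))) / t:.4f}")
```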
7.7 Lyapunov Eigenvectors

This section is included because I became curious about the vectors defined in the Oseledec theorem, and found little discussion of them in the literature. It can well be skipped on a first reading (and probably subsequent ones, as well!).

The vectors $v_i$ (the directions of initial vectors giving exponential growth) seem not immediately accessible from the numerical methods for the exponents (except the SVD method for continuous time systems [5]). However the $w_i$ are naturally produced by the Gram-Schmidt orthogonalization. The relationship of these orthogonal vectors to the natural stretching and contraction directions seems quite subtle, however.

Figure 7.2: Stretching direction $e^u$ and contracting direction $e^s$ at the points $U_0$ and $U_N = F^N(U_0)$. The vector $e^u$ at $U_0$ is mapped to a vector along $e^u$ at $U_N$ by the tangent map $M_N$, etc. The adjoint vectors $e^{u+}$, $e^{s+}$ are defined perpendicular to $e^s$ and $e^u$ respectively. An orthogonal pair of directions close to $e^s$, $e^{u+}$ is mapped by $M_N$ to an orthogonal pair close to $e^u$, $e^{s+}$.

The relationship can be illustrated in the case of a map with one stretching direction $e^u$ and one contracting direction $e^s$ in the tangent space. These are unit vectors at each point on the attractor, conveniently defined so that separations along $e^s$ asymptotically contract exponentially at the rate $e^{\lambda_-}$ per iteration under forward iteration, and separations along $e^u$ asymptotically contract exponentially at the rate $e^{-\lambda_+}$ under backward iteration. Here $\lambda_+$, $\lambda_-$ are the positive and negative Lyapunov exponents. The vectors $e^s$ and $e^u$ are tangent to the stable and unstable manifolds to be discussed in chapter 22, and have an easily interpreted physical significance. How are the orthogonal Lyapunov eigenvectors related to these directions?
Since $e^s$ and $e^u$ are not orthogonal, it is useful to define the adjoint unit vectors $e^{u+}$ and $e^{s+}$ as in figure 7.2, so that

$$e^s \cdot e^{u+} = e^u \cdot e^{s+} = 0. \qquad (7.27)$$

Then under some fixed large number of iterations $N$ it is easy to convince oneself that orthogonal vectors $e^{(0)}_1$, $e^{(0)}_2$ asymptotically close to the orthogonal pair $e^s$, $e^{u+}$ at a point $U_0$ on the attractor are mapped by the tangent map $M_N$ to directions $e^{(N)}_1$, $e^{(N)}_2$ asymptotically close to the orthogonal pair $e^u$, $e^{s+}$ at the iterated point $U_N = F^N(U_0)$, with expansion factors given asymptotically by the Lyapunov exponents (see figure 7.2). For example, $e^s$ is mapped to $e^{N\lambda_-} e^s$. However a small deviation from $e^s$ will be amplified by the amount $e^{N\lambda_+}$. This means that we can find an $e^{(0)}_1$, given by a carefully chosen deviation of order $e^{-N(\lambda_+ - \lambda_-)}$ from $e^s$, that will be mapped onto $e^{s+}$. Similarly almost all initial directions will be mapped very close to $e^u$ because of the strong expansion in this direction, with deviations in direction of order $e^{-N(\lambda_+ - \lambda_-)}$. In particular an $e^{(0)}_2$ chosen orthogonal to $e^{(0)}_1$, i.e. very close to $e^{u+}$, will be mapped very close to $e^u$. Thus vectors very close to $e^s$, $e^{u+}$ at the point $U_0$ satisfy the requirements for the $v_i$ of Oseledec's theorem, and $e^u$, $e^{s+}$ at the iterated point $F^N(U_0)$ are the $w_i$ of the SVD and the vectors of the Gram-Schmidt procedure.

It should be noted that for $2N$ iterations rather than $N$ (for example) the vectors $e^{(0)}_1$, $e^{(0)}_2$ mapping to $e^u$, $e^{s+}$ at the iterated point $U_{2N}$ must be chosen as very slightly different perturbations from $e^s$, $e^{u+}$; equivalently, the vectors $e^{(N)}_1$, $e^{(N)}_2$ at $U_N$ will not be mapped under a further $N$ iterations to $e^u$, $e^{s+}$ at the iterated point $U_{2N}$. It is apparent that even in this very simple two dimensional case neither the $v_i$ nor the $w_i$ separately give us the directions of both $e^u$ and $e^s$. The significance of the orthogonal Lyapunov eigenvectors in higher dimensional systems remains unclear.

January 26, 2000
Bibliography

[1] J.-P. Eckmann and D. Ruelle, Rev. Mod. Phys. 57, 617 (1985).

[2] K. Geist, U. Parlitz, and W. Lauterborn, Prog. Theor. Phys. 83, 875 (1990).

[3] A. Wolf, J.B. Swift, H.L. Swinney, and J.A. Vastano, Physica 16D, 285 (1985).

[4] P. Manneville, in Dissipative Structures and Weak Turbulence (Academic, Boston, 1990), p. 280.

[5] J.M. Greene and J.-S. Kim, Physica 24D, 213 (1987).
