7. THE CLASSICAL APPROACH TO TIME-VARIANT (NON-AUTONOMOUS), LINEAR & NONLINEAR SYSTEMS


7.1 Introduction to the chapter

In this chapter, we briefly discuss time-varying (TV) or non-autonomous systems and present some basic concepts of stability for time-varying systems. Again, the emphasis is on presenting the basic results (that are derived from a mathematical/analytical perspective) as simply as possible. In the next chapter, we will look at non-autonomous systems from a physical realization perspective and derive a number of new results and techniques for designing stable non-autonomous systems.

The reason for considering non-autonomous systems is that in a number of physical systems, the parameters associated with the systems (which may or may not appear in the dynamical equations used to represent the systems) may vary with time. Two example parameters that play a major role in many aeronautical systems are temperature and pressure. The key here are the two statements: 1) "parameters associated with physical systems," and 2) "vary with time." Usually, greater emphasis is placed on the latter statement in the analytical approach to dealing with non-autonomous systems. Thus, time-varying is used to imply the appearance of terms such as t, exp[t], sin[t], etc. in the dynamical equations, and results based on such modeling techniques appear in the literature. In this chapter, results based on such an approach are presented. In the next chapter, we will discuss a new approach that makes use of the above two key statements in defining physically meaningful devices and their use in forming complex systems and/or dynamics.

The chapter is organized as follows. In section 7.2, we provide the general mathematical model commonly used and a number of first- or second-order examples. It should be obvious from these examples and definitions that we do not have any physical guiding principles, other than our mastery of differential equations, in arriving at the dynamics. In section 7.3, we introduce the concept of equilibrium points of non-autonomous systems and the definitions for stability. We will find that, by necessity, we need to include the initial time in these definitions, which complicates the analysis of TV systems. Thus, we will also learn how to overcome this problem by adding additional constraints on the system. In section 7.4, we discuss the extension of the Lyapunov direct method for stability analysis of non-autonomous systems. We will learn again how the initial time is excluded from the various definitions. Finally, in section 7.5, we look at the extension of some autonomous systems' analysis tools to non-autonomous systems. From these limited discussions, it should become clear that the top-down approach is not an efficient way for the design and application of non-autonomous systems.

7.2 Non-autonomous System Models and Examples

We can simply extend the state-space representation to model non-autonomous systems as:

x′ = f[x, t]    (7.1)

Here, the independent variable t appears along with the state vector x in the nonlinear vector function f[ ], indicating that f[ ] depends on the time variable (or is an explicit function of the time variable). That is, f[ ] is a vector function that varies with respect to time. Examples of non-autonomous systems appearing in the circuits and control theory literature are:

x′(t) = a(t) x(t)    (7.2)

x′(t) = a(t) x(t) / (1 + x²(t))    (7.3)

x′(t) = −t x(t)    (7.4)

x′(t) = −x(t) / (1 + t)²    (7.5)

x′(t) = −x(t) / (1 + t)    (7.6)

x′(t) = −x(t) / (1 + sin²[x(t)])    (7.7)

x′(t) = −g(t) x(t)    (7.8a)

with

∫₀^∞ g²(τ) dτ < ∞    (7.8b)

[x′(t)]   [      0          1  ] [x(t)]   [     0      ]
[y′(t)] = [ −(x²(t)+1)     −ε ] [y(t)] + [ r cos[ωt] ] ;  ε, r > 0    (7.9)

[x₁′(t)]   [ −1   e^{2t} ] [x₁(t)]
[x₂′(t)] = [  0     −1   ] [x₂(t)]    (7.10)

x″(t) + c[t] x′(t) + x(t) = 0    (7.11a)

with c[t] = 2 + e^t > 0 for all t    (7.11b)

x″(t) + (2 + 8t) x′(t) + 5x(t) = 0    (7.12)

We have given these examples to illustrate a number of key points. The examples used to describe non-autonomous systems are mostly limited to first- and second-order dynamic equations, since we can obtain closed-form solutions only for first-order dynamic equations and some second-order dynamic equations. For example, the solution of a first-order dynamic equation expressed in the form given in equation (7.2) is:

x(t) = x(t₀) exp[ ∫_{t₀}^{t} a(τ) dτ ]    (7.13)

which can be used to obtain the closed-form solutions for equations (7.3) to (7.7). When higher-order dynamic equations are used, they are carefully handcrafted based on these equations to drive home some point. An example belonging to this category is:

[x₁′(t)]   [ −(5 + x₂³(t))   x₂⁴(t) x₁(t)        0         ] [x₁(t)]
[x₂′(t)] = [       0              −1           4x₃(t)      ] [x₂(t)]
[x₃′(t)]   [       0               0       −(2 + sin[t])   ] [x₃(t)]    (7.14)

In this model, the derivative of the third state variable depends on this (and only this) state variable and a time-varying term. That is, we have a third-order system in which there is no coupling from the first two state variables to the dynamics of the third state variable. Of course, all these examples are used to prove a number of properties of non-autonomous systems, as we will demonstrate later. They all use the maxim that, though a large number of examples cannot conclusively prove a proposition, a single counterexample can disprove one.
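The closed-form solution (7.13) can be checked numerically. The sketch below (the integrator and the choice a(t) = cos t are ours, purely for illustration) integrates the first-order system (7.2) with a classical Runge–Kutta scheme and compares the result against x(t) = x(t₀) exp[∫ a(τ) dτ]:

```python
import math

def rk4(f, x0, t0, t1, h=1e-3):
    """Integrate the scalar ODE x' = f(x, t) with the classical RK4 method."""
    t, x = t0, x0
    for _ in range(int(round((t1 - t0) / h))):
        k1 = f(x, t)
        k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(x + h * k3, t + h)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# Equation (7.2) with a(t) = cos(t); the closed form (7.13) gives
# x(t) = x(0) * exp(sin t - sin 0).
x_num = rk4(lambda x, t: math.cos(t) * x, 2.0, 0.0, 5.0)
x_exact = 2.0 * math.exp(math.sin(5.0))
print(abs(x_num - x_exact))  # the difference should be very small
```

The same comparison works for any a(t), which is why equations (7.3) to (7.7) can be reduced to quadratures in the same way.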
Thus, a standard set of mathematical techniques is used to highlight key aspects of non-autonomous systems. Another conclusion that can be drawn from the above examples is that there is no clear delineation between time-varying linear systems (equation 7.2, for example), nonlinear autonomous systems driven by an external source (as we can interpret equation 7.9, also known as Duffing's equation), and non-autonomous systems. For example, Chua used non-autonomous systems with a term that changes periodically and showed that such systems can be converted to an autonomous system of one order higher (than the order of the non-autonomous system) by appending an extra state given by:

θ = ωt = 2πt / T    (7.15)

where T is the period of the time-periodic term. Thus, equation (7.9) can be converted to a third-order autonomous system given by:

x′(t) = y(t)
y′(t) = −x(t)(1 + x²(t)) − ε y(t) + r cos[θ(t)]
θ′(t) = 2π / T    (7.16)

Finally, the examples given here use time functions such as t, sin[ω₀t], 1/(1+t): functions that are motivated from an analytical perspective, and often no connection is made between such terms and physical devices.

7.3 Equilibrium Points and Stability Concepts

Similar to the definitions for autonomous systems, we can define equilibrium points and consider their stability, etc. However, when we define equilibrium point(s) and their stability, their dependence on time has to be shown explicitly. Thus, equilibrium points x_equ of non-autonomous systems can be defined as those for which

f[x_equ, t] = 0 for all t ≥ t₀    (7.17)

That is, the vector function f[ ] becomes zero for x = x_equ and for all t ≥ t₀. Similarly, we can consider the stability of an equilibrium point by considering an initial time, the norm of the initial state (which may explicitly depend on this initial time), and the norm of the state at any time after the initial time. Thus:
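The state-augmentation trick of (7.15)–(7.16) is easy to verify numerically: integrating the driven (non-autonomous) form and the augmented third-order autonomous form from matching initial conditions must produce the same trajectory. The sketch below assumes illustrative parameter values ε = 0.25, r = 0.3, T = 2π (so ω = 1); these are our choices, not values from the text:

```python
import math

def step(f, s, t, h):
    # One RK4 step for a vector field f(state, t) -> list of derivatives.
    k1 = f(s, t)
    k2 = f([si + 0.5*h*ki for si, ki in zip(s, k1)], t + 0.5*h)
    k3 = f([si + 0.5*h*ki for si, ki in zip(s, k2)], t + 0.5*h)
    k4 = f([si + h*ki for si, ki in zip(s, k3)], t + h)
    return [si + (h/6)*(a + 2*b + 2*c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

eps, r, T = 0.25, 0.3, 2*math.pi     # assumed illustrative parameters
omega = 2*math.pi / T

def duffing_na(s, t):                # non-autonomous form, as in (7.9)
    x, y = s
    return [y, -x*(1 + x*x) - eps*y + r*math.cos(omega*t)]

def duffing_aut(s, t):               # augmented autonomous form, as in (7.16)
    x, y, th = s
    return [y, -x*(1 + x*x) - eps*y + r*math.cos(th), omega]

h = 1e-3
s2, s3, t = [0.1, 0.0], [0.1, 0.0, 0.0], 0.0
for _ in range(5000):
    s2 = step(duffing_na, s2, t, h)
    s3 = step(duffing_aut, s3, t, h)
    t += h
print(abs(s2[0] - s3[0]))            # the two representations agree
```

Since θ′ = ω is integrated exactly by the scheme, the augmented system reproduces the driven system to within floating-point roundoff.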

The equilibrium point, assumed to be the origin, is stable at t₀ if for any R_f > 0 there exists a positive scalar r_i[R_f, t₀] such that

‖x(t₀)‖ < r_i[R_f, t₀]  ⟹  ‖x(t)‖ < R_f for all t > t₀    (7.18)

Otherwise, the equilibrium point is unstable. Similarly, we can define asymptotic stability and exponential stability for non-autonomous systems as:

The equilibrium point 0 is asymptotically stable at time t₀ if: 1) it is stable, and 2) there exists r_i[t₀] > 0 such that

‖x(t₀)‖ < r_i[t₀]  ⟹  x(t) → 0 as t → ∞    (7.19)

The equilibrium point is exponentially stable if there exist two positive scalars α and λ such that

‖x(t)‖ ≤ α ‖x(t₀)‖ exp[−λ(t − t₀)] for all t > t₀    (7.20)

A major limitation of these definitions, as far as application to practical systems is concerned, is the dependence of the behavior on the initial time. It is highly desirable to have definitions that describe the behavior of non-autonomous systems without involving the starting time of operation, as follows:

The equilibrium point is uniformly stable if the scalar r_i[R_f, t₀] used in the definition of stability can be chosen independent of the value of t₀. That is, if r_i[R_f, t₀] = r_i[R_f].

The equilibrium point is uniformly asymptotically stable 1) if it is uniformly stable, and 2) if the state converges to 0, the origin, uniformly. That is, the convergence rate and time are independent of the initial time t₀.

Thus, finally, we have arrived at definitions which require a) independence from the initial time, and b) the state vector to become zero (assuming 0 to be the equilibrium point). We will use these statements, which perhaps look useless or redundant at this point, to define nonlinear and time-varying circuit elements in the next chapter.
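The role of t₀ in these definitions can be made concrete with the scalar system x′ = −x/(1+t), whose closed form is elementary (the helper names below are ours): it is asymptotically stable at every t₀, yet the time needed for the state to halve grows with t₀, so the convergence is not uniform in the initial time.

```python
def x(t, t0, x0):
    # Closed-form solution of x' = -x/(1+t): x(t) = x0*(1+t0)/(1+t).
    return x0 * (1.0 + t0) / (1.0 + t)

def time_to_halve(t0):
    # Smallest t - t0 at which |x| has dropped to half its initial value:
    # (1+t0)/(1+t) = 1/2  =>  t = 1 + 2*t0, so the elapsed time is 1 + t0.
    return (1.0 + 2.0 * t0) - t0

for t0 in (0.0, 10.0, 100.0):
    print(t0, time_to_halve(t0))   # prints 1.0, 11.0, 101.0: grows with t0
```

No single r_i or convergence time works for all starting times, which is exactly what the uniform definitions above rule out.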
7.4 Lyapunov's Direct Method for Stability Analysis of Non-autonomous Systems

The success of using energy-like functions, and of studying their derivatives along the system trajectory, in Lyapunov's direct method for the stability analysis of autonomous systems makes a similar approach a natural candidate for the stability analysis of non-autonomous systems. First, we start with a scalar energy-like function V[x, t], which of course has to show explicitly the time-dependence of non-autonomous systems. Thus, we define a time-varying positive definite function as one which is characterized by:

1) V[0, t] = 0    (7.21a)

and

2) it is lower bounded by (i.e., it dominates) a time-invariant positive definite function for all time t ≥ t₀. That is:

V[x, t] ≥ V_L[x] for all t ≥ t₀    (7.21b)

Modifiers such as local or global can be added, as necessary, to this definition. Note the requirement that the function become zero when the value of the state vector equals zero (or equals the equilibrium point), regardless of the value of time. If our interest were only in defining a time-varying positive definite function, the above definition, which requires a lower bound in the form of a time-invariant positive definite function, would suffice. However, we are interested in defining a function that will serve as an energy-like function. Thus, we also need to bound this function on the upper, or maximum, side. Thus, we require:

V[x, t] ≤ V_u[x] for all t ≥ t₀    (7.22)

That is, V[x, t] is dominated by another time-invariant positive definite function V_u[x]. Such a time-varying function V[x, t] is called a decrescent function. Given a time-varying positive definite function V[x, t] of the state variables x of a non-autonomous system that fits the description of the energy left within that system, we can calculate its derivative along the system trajectory as:

V′[x, t] = dV[x, t]/dt evaluated along x′ = f[x, t]
         = ∂V/∂t + (∂V/∂x) x′ = ∂V/∂t + (∂V/∂x) f[x, t]    (7.23)

Using the scalar function V[x, t] of the state and its derivative along the system trajectory, we can state the main Lyapunov stability results for non-autonomous systems as follows:

A non-autonomous system is stable if 1) V[x, t] is positive definite, 2) V′[x, t] is negative semi-definite, and 3) V′[x, t] is uniformly continuous in time. [A differentiable function f(t) which has a finite limit as t → ∞ will be uniformly continuous if its derivative is bounded.] By this condition, we imply that V′[x, t] along the system trajectory → 0 as t → ∞. Thus, V[x, t] will approach a finite limiting value V_∞, where V_∞ is less than or equal to V[x(0), 0].

It is uniformly stable if, in addition to 1), 2) and 3) above: 4) V[x, t] is decrescent.

It is uniformly asymptotically stable if V′[x, t] is not just negative semi-definite but is negative definite.

It is globally uniformly asymptotically stable if: 5) V[x, t] is radially unbounded in x.

Note that the radial unboundedness applies to the state variables only and not to the time variable t. Otherwise, the requirement that V[x, t] is dominated by a time-invariant function V_u[x] would be violated. Note, in the above definitions, that the decrescent condition comes immediately after the negative semi-definiteness and uniform-continuity conditions, and only then comes the negative definiteness condition. Thus, just changing the negative semi-definiteness requirement to negative definiteness, without the decrescent condition, does not guarantee asymptotic stability, as it does for nonlinear autonomous systems. We omit presenting here the analytical proofs of these theorems, as they are not really needed for this book's objectives. Further, we have modified the theorems slightly so that they conform to results that can be obtained from a network or systems perspective.
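The chain rule in (7.23) can be checked on a small example. Below we pick a sample decrescent candidate V[x, t] = (1 + e^{−t}) x² and sample dynamics x′ = −x (both our choices, for illustration only), and compare the analytic V′ = ∂V/∂t + (∂V/∂x) f against a finite-difference derivative of V along the exact trajectory:

```python
import math

def V(x, t):
    # Sample time-varying Lyapunov candidate (decrescent: V <= 2*x*x).
    return (1.0 + math.exp(-t)) * x * x

def f(x, t):
    # Sample dynamics x' = -x, whose exact trajectory is x0*exp(-t).
    return -x

def x_traj(t, x0=2.0):
    return x0 * math.exp(-t)

t, eps = 1.0, 1e-6
x = x_traj(t)
# Analytic derivative along the trajectory, per (7.23):
# dV/dt = dV/dt|_explicit + (dV/dx) * f[x, t]
analytic = -math.exp(-t) * x * x + 2.0 * (1.0 + math.exp(-t)) * x * f(x, t)
# Central finite difference of V(x(t), t) along the trajectory:
numeric = (V(x_traj(t + eps), t + eps) - V(x_traj(t - eps), t - eps)) / (2 * eps)
print(abs(analytic - numeric))  # should be close to zero
```

Both the explicit ∂V/∂t term and the flow term (∂V/∂x) f are needed; dropping either one makes the comparison fail.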
Because of the complexities involved in non-autonomous systems, these subtle changes may enable one to come up with counter-examples to prove or disprove the necessity of some of these conditions. However, these conditions make sense from a practical perspective. In the next chapter, we will discuss these requirements and the various definitions from a network or systems perspective.

7.5 Analysis of Non-Autonomous Systems

Having seen the conditions for the stability of non-autonomous systems, we can ask how we can analyze non-autonomous systems to characterize their behavior. We will discuss some of the possibilities. The aim here is to provide just enough information to highlight the problems in dealing with non-autonomous systems using the classical analytical approach.

We can use computer simulation and draw phase-plane portraits (in the case of lower-order systems) to get some feel for the response of non-autonomous systems. However, the dependence on the initial condition and the presence of time-varying terms make the situation complicated. One can study the time-varying component(s) in particular and use intuition to come up with useful conclusions. However, even here many problems can be found. For example, equation (7.4), reproduced below:

x′(t) = −t x(t)    (7.4)

may give the impression, because of the unbounded coefficient t, that the behavior of x(t) becomes difficult to characterize as t becomes large. However, it can be shown that this system is exponentially stable; its solution is x(t) = x(t₀) exp[−(t² − t₀²)/2]. On the other hand, the systems (7.11) and (7.12), reproduced below:

x″(t) + c[t] x′(t) + x(t) = 0    (7.11a)

with c[t] = 2 + e^t > 0 for all t    (7.11b)

and

x″(t) + (2 + 8t) x′(t) + 5x(t) = 0    (7.12)

correspond to second-order systems with coefficients that are always positive. Thus, one may (and indeed can) associate the time-varying terms (that multiply the first derivative) with power-dissipative elements and hence conclude that both systems are asymptotically stable.
However, it can be shown that the system with the dynamics in equation (7.11) is not asymptotically stable, whereas the system (7.12) is.
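One way to see the claim for (7.11): the function x(t) = 1 + e^{−t} satisfies the equation with c[t] = 2 + e^t yet converges to 1 rather than 0, so the origin cannot be asymptotically stable. This is easy to verify by substitution (the check below is ours; it simply evaluates the residual of the differential equation at several times):

```python
import math

def residual(t):
    # Substitute x(t) = 1 + exp(-t) into x'' + (2 + e^t) x' + x.
    x   = 1.0 + math.exp(-t)
    xd  = -math.exp(-t)          # x'(t)
    xdd =  math.exp(-t)          # x''(t)
    return xdd + (2.0 + math.exp(t)) * xd + x

worst = max(abs(residual(t)) for t in [0.0, 1.0, 5.0, 10.0])
print(worst)  # ~0: x(t) = 1 + exp(-t) really is a solution
```

Intuitively, the damping c[t] grows so fast that it freezes the motion before the "spring" term can pull the state back to the origin.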

Due to our experience with eigen analysis, we may think of obtaining the time-varying eigenvalues of a linear non-autonomous system

x′(t) = A(t) x(t)

and checking whether stability can be established when all the eigenvalues of A(t) have negative real parts for all time t ≥ t₀. For example, the second-order system

x₁′(t) = −a₁ x₁(t) + e^{bt} x₂(t)
x₂′(t) = −a₂ x₂(t);   a₁, a₂, b > 0    (7.24)

has two eigenvalues which are negative and constant (−a₁, −a₂). However, we can show that the state variable x₁(t) → ∞ as t → ∞ (for b > a₂), indicating that the system is unstable.

Figure 7-1. Flowgraph of a second-order time-varying dynamics, showing possible problems in the use of eigen analysis for finding the stability of non-autonomous systems. [The flowgraph passes x₂(t) = x₂(0)e^{−a₂t} through the time-varying gain e^{bt} and the block 1/(s + a₁) to produce x₁(t).]

By looking carefully at the dynamics and the flowgraph of the dynamics in Fig. 7-1, we can note that we have two stable linear systems that are not fully coupled. The state variable x₂(t) (which tends to zero as t → ∞), modulated by exp[bt], becomes the input (which tends to infinity as t → ∞) to another stable linear system. The lack of coupling between the state variables is indicated by the A(t) matrix being upper or lower triangular, and it prevents the use of eigen analysis for testing asymptotic stability. We can overcome this problem by studying the eigenvalues of A(t) + Aᵀ(t). We can show that a sufficient condition for asymptotic stability is that the eigenvalues of this new matrix all have negative real parts for all t. We can see that it is not easy to establish this when we deal with fairly high-order systems, and incorporating this criterion in any design will be much more difficult, if not altogether impossible.

We can also use the linearization approach to study the stability of nonlinear non-autonomous systems. We can write a Taylor expansion of f[ ] near the equilibrium point 0 as:

x′(t) = f[x, t] = f[0, t] + (∂f[x, t]/∂x)|_{x=0} x + higher-order terms    (7.25)

where it can be noted that the higher-order terms we need to ignore are not just functions of the state x, but also of time t. Thus, we need to ensure that they are small enough for x close to 0, the equilibrium point, and for all t. A commonly used criterion based on the L₂-norm is:

sup_t ‖f_hot[x, t]‖ / ‖x‖ → 0 as ‖x‖ → 0    (7.26)

Assuming that the above equation is satisfied, we get the approximate state equation around the equilibrium point as:

x′(t) = A(t) x(t)    (7.27)

That is, the linearized model can in general be non-autonomous. We can use methods developed for linear time-varying systems, such as the eigen-analysis approach discussed above, to ascertain the stability of this approximate model and hence the local stability of the original non-autonomous system. Needless to say, caution needs to be exercised before reaching any conclusion, due to the approximation, as we noted for the case of nonlinear autonomous systems.

From the above discussions, we can appreciate the problems involved. Some of the problems are due to the complexity inherent in the kind of system (nonlinear, non-autonomous) that we are dealing with. However, the top-down approach, that is, starting from the abstract mathematical relationship at a global level and trying to establish stability, aggravates the situation further. Given a non-autonomous I/O relationship, we need first to find a function that qualifies as a Lyapunov function. There is no systematic procedure to arrive at a Lyapunov function even for a system known to be stable. To assure us that Lyapunov functions do exist for such systems (and, note, they still do not tell us how to arrive at the Lyapunov function; they merely serve as a first step of assurance), a number of theorems called converse Lyapunov theorems have been developed. For example:
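The failure of pointwise eigen analysis in (7.24) can be seen directly from the closed-form solution. The sketch below picks the assumed values a₁ = a₂ = 1, b = 2 (so b > a₂ and the instability shows up): both eigenvalues of A(t) are −1 for every t, yet x₁(t) diverges.

```python
import math

a1, a2, b = 1.0, 1.0, 2.0    # assumed illustrative values with b > a2

def x2(t, x20=1.0):
    # Second state decays: x2' = -a2*x2.
    return x20 * math.exp(-a2 * t)

def x1(t, x10=1.0, x20=1.0):
    # For these values, x1' = -x1 + exp(2t)*x2(t) = -x1 + exp(t), whose
    # exact solution is x1(t) = (x10 - x20/2)*exp(-t) + (x20/2)*exp(t).
    return (x10 - 0.5 * x20) * math.exp(-t) + 0.5 * x20 * math.exp(t)

# Eigenvalues of A(t) are -a1 and -a2 (negative, constant) for every t,
# yet the first state grows roughly like exp(t)/2:
print(x1(5.0), x1(10.0))
print(x2(10.0))  # while the second state decays to zero
```

The decaying state x₂, multiplied by the gain e^{bt}, feeds an unbounded input into the otherwise stable x₁ subsystem, which is exactly the mechanism shown in Fig. 7-1.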

If the vector function f[x, t] of the non-autonomous system (7.1) has continuous and bounded partial derivatives with respect to x and t, for all x in a ball B_r and for all t ≥ 0, then the equilibrium point x_e = 0 is exponentially stable if and only if there exist a function V[x, t] and strictly positive constants α₁, α₂, α₃ and α₄ such that for all x in B_r and for all t ≥ 0:

α₁ ‖x‖² ≤ V[x, t] ≤ α₂ ‖x‖²    (7.28a)

V′[x, t] ≤ −α₃ ‖x‖²    (7.28b)

‖∂V/∂x‖ ≤ α₄ ‖x‖    (7.28c)

Results such as these can be used to construct a Lyapunov function for a subsystem of a nonlinear system which may be known to possess some stability properties. This, in turn, may enable us to find a Lyapunov function for the whole system. In summary, we find that designing a stable system is our main goal. In the next chapter, we will use the building-block approach to achieve it.

7.6 Summary

In this chapter, we briefly considered time-varying or non-autonomous systems and basic concepts of stability for time-varying systems. Again, the emphasis was on the presentation of basic results (that are derived from a mathematical/analytical perspective) in as simple terms as possible. The problems in dealing with time-varying systems from an analytical perspective, without intuition and guidance from the physical world, are obvious from the results presented. In the next chapter, we will look at non-autonomous systems from a physical realization perspective and derive a number of new results and techniques for designing stable non-autonomous systems.

7.7 Notes and References

Comprehensive information on non-autonomous systems, including the proofs of the various results and theorems presented here, can be found in a number of books; see, for example, [Vidyasagar, 1978]. The material presented here is based on the book by [Slotine and Li, 1991].


a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

More information

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively.

Au = = = 3u. Aw = = = 2w. so the action of A on u and w is very easy to picture: it simply amounts to a stretching by 3 and 2, respectively. Chapter 7 Eigenvalues and Eigenvectors In this last chapter of our exploration of Linear Algebra we will revisit eigenvalues and eigenvectors of matrices, concepts that were already introduced in Geometry

More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

3. Mathematical Induction

3. Mathematical Induction 3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)

More information

FEGYVERNEKI SÁNDOR, PROBABILITY THEORY AND MATHEmATICAL

FEGYVERNEKI SÁNDOR, PROBABILITY THEORY AND MATHEmATICAL FEGYVERNEKI SÁNDOR, PROBABILITY THEORY AND MATHEmATICAL STATIsTICs 4 IV. RANDOm VECTORs 1. JOINTLY DIsTRIBUTED RANDOm VARIABLEs If are two rom variables defined on the same sample space we define the joint

More information

Lies My Calculator and Computer Told Me

Lies My Calculator and Computer Told Me Lies My Calculator and Computer Told Me 2 LIES MY CALCULATOR AND COMPUTER TOLD ME Lies My Calculator and Computer Told Me See Section.4 for a discussion of graphing calculators and computers with graphing

More information

88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a

88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a 88 CHAPTER. VECTOR FUNCTIONS.4 Curvature.4.1 Definitions and Examples The notion of curvature measures how sharply a curve bends. We would expect the curvature to be 0 for a straight line, to be very small

More information

What is Linear Programming?

What is Linear Programming? Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to

More information

EXIT TIME PROBLEMS AND ESCAPE FROM A POTENTIAL WELL

EXIT TIME PROBLEMS AND ESCAPE FROM A POTENTIAL WELL EXIT TIME PROBLEMS AND ESCAPE FROM A POTENTIAL WELL Exit Time problems and Escape from a Potential Well Escape From a Potential Well There are many systems in physics, chemistry and biology that exist

More information

Scalar Valued Functions of Several Variables; the Gradient Vector

Scalar Valued Functions of Several Variables; the Gradient Vector Scalar Valued Functions of Several Variables; the Gradient Vector Scalar Valued Functions vector valued function of n variables: Let us consider a scalar (i.e., numerical, rather than y = φ(x = φ(x 1,

More information

Part IB Paper 6: Information Engineering LINEAR SYSTEMS AND CONTROL Dr Glenn Vinnicombe HANDOUT 3. Stability and pole locations.

Part IB Paper 6: Information Engineering LINEAR SYSTEMS AND CONTROL Dr Glenn Vinnicombe HANDOUT 3. Stability and pole locations. Part IB Paper 6: Information Engineering LINEAR SYSTEMS AND CONTROL Dr Glenn Vinnicombe HANDOUT 3 Stability and pole locations asymptotically stable marginally stable unstable Imag(s) repeated poles +

More information

Speech at IFAC2014 BACKGROUND

Speech at IFAC2014 BACKGROUND Speech at IFAC2014 Thank you Professor Craig for the introduction. IFAC President, distinguished guests, conference organizers, sponsors, colleagues, friends; Good evening It is indeed fitting to start

More information

HW6 Solutions. MATH 20D Fall 2013 Prof: Sun Hui TA: Zezhou Zhang (David) November 14, 2013. Checklist: Section 7.8: 1c, 2, 7, 10, [16]

HW6 Solutions. MATH 20D Fall 2013 Prof: Sun Hui TA: Zezhou Zhang (David) November 14, 2013. Checklist: Section 7.8: 1c, 2, 7, 10, [16] HW6 Solutions MATH D Fall 3 Prof: Sun Hui TA: Zezhou Zhang David November 4, 3 Checklist: Section 7.8: c,, 7,, [6] Section 7.9:, 3, 7, 9 Section 7.8 In Problems 7.8. thru 4: a Draw a direction field and

More information

Quotes from Object-Oriented Software Construction

Quotes from Object-Oriented Software Construction Quotes from Object-Oriented Software Construction Bertrand Meyer Prentice-Hall, 1988 Preface, p. xiv We study the object-oriented approach as a set of principles, methods and tools which can be instrumental

More information

Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

More information

A First Course in Elementary Differential Equations. Marcel B. Finan Arkansas Tech University c All Rights Reserved

A First Course in Elementary Differential Equations. Marcel B. Finan Arkansas Tech University c All Rights Reserved A First Course in Elementary Differential Equations Marcel B. Finan Arkansas Tech University c All Rights Reserved 1 Contents 1 Basic Terminology 4 2 Qualitative Analysis: Direction Field of y = f(t, y)

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

Understanding Poles and Zeros

Understanding Poles and Zeros MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF MECHANICAL ENGINEERING 2.14 Analysis and Design of Feedback Control Systems Understanding Poles and Zeros 1 System Poles and Zeros The transfer function

More information

Nonlinear Systems and Control Lecture # 15 Positive Real Transfer Functions & Connection with Lyapunov Stability. p. 1/?

Nonlinear Systems and Control Lecture # 15 Positive Real Transfer Functions & Connection with Lyapunov Stability. p. 1/? Nonlinear Systems and Control Lecture # 15 Positive Real Transfer Functions & Connection with Lyapunov Stability p. 1/? p. 2/? Definition: A p p proper rational transfer function matrix G(s) is positive

More information

Zeros of a Polynomial Function

Zeros of a Polynomial Function Zeros of a Polynomial Function An important consequence of the Factor Theorem is that finding the zeros of a polynomial is really the same thing as factoring it into linear factors. In this section we

More information

Solving Linear Programs

Solving Linear Programs Solving Linear Programs 2 In this chapter, we present a systematic procedure for solving linear programs. This procedure, called the simplex method, proceeds by moving from one feasible solution to another,

More information

Linear-Quadratic Optimal Controller 10.3 Optimal Linear Control Systems

Linear-Quadratic Optimal Controller 10.3 Optimal Linear Control Systems Linear-Quadratic Optimal Controller 10.3 Optimal Linear Control Systems In Chapters 8 and 9 of this book we have designed dynamic controllers such that the closed-loop systems display the desired transient

More information

Network Traffic Modelling

Network Traffic Modelling University of York Dissertation submitted for the MSc in Mathematics with Modern Applications, Department of Mathematics, University of York, UK. August 009 Network Traffic Modelling Author: David Slade

More information

NOTES ON LINEAR TRANSFORMATIONS

NOTES ON LINEAR TRANSFORMATIONS NOTES ON LINEAR TRANSFORMATIONS Definition 1. Let V and W be vector spaces. A function T : V W is a linear transformation from V to W if the following two properties hold. i T v + v = T v + T v for all

More information

3.2 Sources, Sinks, Saddles, and Spirals

3.2 Sources, Sinks, Saddles, and Spirals 3.2. Sources, Sinks, Saddles, and Spirals 6 3.2 Sources, Sinks, Saddles, and Spirals The pictures in this section show solutions to Ay 00 C By 0 C Cy D 0. These are linear equations with constant coefficients

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

Lyapunov Stability Analysis of Energy Constraint for Intelligent Home Energy Management System

Lyapunov Stability Analysis of Energy Constraint for Intelligent Home Energy Management System JAIST Reposi https://dspace.j Title Lyapunov stability analysis for intelligent home energy of energ manageme Author(s)Umer, Saher; Tan, Yasuo; Lim, Azman Citation IEICE Technical Report on Ubiquitous

More information

The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method

The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Line-search Method Robert M. Freund February, 004 004 Massachusetts Institute of Technology. 1 1 The Algorithm The problem

More information

ECO 199 B GAMES OF STRATEGY Spring Term 2004 PROBLEM SET 4 B DRAFT ANSWER KEY 100-3 90-99 21 80-89 14 70-79 4 0-69 11

ECO 199 B GAMES OF STRATEGY Spring Term 2004 PROBLEM SET 4 B DRAFT ANSWER KEY 100-3 90-99 21 80-89 14 70-79 4 0-69 11 The distribution of grades was as follows. ECO 199 B GAMES OF STRATEGY Spring Term 2004 PROBLEM SET 4 B DRAFT ANSWER KEY Range Numbers 100-3 90-99 21 80-89 14 70-79 4 0-69 11 Question 1: 30 points Games

More information

ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE

ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE YUAN TIAN This synopsis is designed merely for keep a record of the materials covered in lectures. Please refer to your own lecture notes for all proofs.

More information

Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS

Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS Sensitivity Analysis 3 We have already been introduced to sensitivity analysis in Chapter via the geometry of a simple example. We saw that the values of the decision variables and those of the slack and

More information

1 Sufficient statistics

1 Sufficient statistics 1 Sufficient statistics A statistic is a function T = rx 1, X 2,, X n of the random sample X 1, X 2,, X n. Examples are X n = 1 n s 2 = = X i, 1 n 1 the sample mean X i X n 2, the sample variance T 1 =

More information

Mechanics 1: Conservation of Energy and Momentum

Mechanics 1: Conservation of Energy and Momentum Mechanics : Conservation of Energy and Momentum If a certain quantity associated with a system does not change in time. We say that it is conserved, and the system possesses a conservation law. Conservation

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver. Finite Difference Methods for Partial Differential Equations As you are well aware, most differential equations are much too complicated to be solved by

More information

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method

10.2 ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS. The Jacobi Method 578 CHAPTER 1 NUMERICAL METHODS 1. ITERATIVE METHODS FOR SOLVING LINEAR SYSTEMS As a numerical technique, Gaussian elimination is rather unusual because it is direct. That is, a solution is obtained after

More information

Numerical Methods for Differential Equations

Numerical Methods for Differential Equations Numerical Methods for Differential Equations Chapter 1: Initial value problems in ODEs Gustaf Söderlind and Carmen Arévalo Numerical Analysis, Lund University Textbooks: A First Course in the Numerical

More information

Chapter 20. Vector Spaces and Bases

Chapter 20. Vector Spaces and Bases Chapter 20. Vector Spaces and Bases In this course, we have proceeded step-by-step through low-dimensional Linear Algebra. We have looked at lines, planes, hyperplanes, and have seen that there is no limit

More information

Duality of linear conic problems

Duality of linear conic problems Duality of linear conic problems Alexander Shapiro and Arkadi Nemirovski Abstract It is well known that the optimal values of a linear programming problem and its dual are equal to each other if at least

More information

15 Kuhn -Tucker conditions

15 Kuhn -Tucker conditions 5 Kuhn -Tucker conditions Consider a version of the consumer problem in which quasilinear utility x 2 + 4 x 2 is maximised subject to x +x 2 =. Mechanically applying the Lagrange multiplier/common slopes

More information

The Effects of Start Prices on the Performance of the Certainty Equivalent Pricing Policy

The Effects of Start Prices on the Performance of the Certainty Equivalent Pricing Policy BMI Paper The Effects of Start Prices on the Performance of the Certainty Equivalent Pricing Policy Faculty of Sciences VU University Amsterdam De Boelelaan 1081 1081 HV Amsterdam Netherlands Author: R.D.R.

More information

Lecture 8 : Dynamic Stability

Lecture 8 : Dynamic Stability Lecture 8 : Dynamic Stability Or what happens to small disturbances about a trim condition 1.0 : Dynamic Stability Static stability refers to the tendency of the aircraft to counter a disturbance. Dynamic

More information

(Refer Slide Time: 01:11-01:27)

(Refer Slide Time: 01:11-01:27) Digital Signal Processing Prof. S. C. Dutta Roy Department of Electrical Engineering Indian Institute of Technology, Delhi Lecture - 6 Digital systems (contd.); inverse systems, stability, FIR and IIR,

More information

Fixed Point Theorems

Fixed Point Theorems Fixed Point Theorems Definition: Let X be a set and let T : X X be a function that maps X into itself. (Such a function is often called an operator, a transformation, or a transform on X, and the notation

More information

Figure 1.1 Vector A and Vector F

Figure 1.1 Vector A and Vector F CHAPTER I VECTOR QUANTITIES Quantities are anything which can be measured, and stated with number. Quantities in physics are divided into two types; scalar and vector quantities. Scalar quantities have

More information

Brief Introduction to Vectors and Matrices

Brief Introduction to Vectors and Matrices CHAPTER 1 Brief Introduction to Vectors and Matrices In this chapter, we will discuss some needed concepts found in introductory course in linear algebra. We will introduce matrix, vector, vector-valued

More information

Lecture 5 Principal Minors and the Hessian

Lecture 5 Principal Minors and the Hessian Lecture 5 Principal Minors and the Hessian Eivind Eriksen BI Norwegian School of Management Department of Economics October 01, 2010 Eivind Eriksen (BI Dept of Economics) Lecture 5 Principal Minors and

More information

Online Appendix. Supplemental Material for Insider Trading, Stochastic Liquidity and. Equilibrium Prices. by Pierre Collin-Dufresne and Vyacheslav Fos

Online Appendix. Supplemental Material for Insider Trading, Stochastic Liquidity and. Equilibrium Prices. by Pierre Collin-Dufresne and Vyacheslav Fos Online Appendix Supplemental Material for Insider Trading, Stochastic Liquidity and Equilibrium Prices by Pierre Collin-Dufresne and Vyacheslav Fos 1. Deterministic growth rate of noise trader volatility

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

The last three chapters introduced three major proof techniques: direct,

The last three chapters introduced three major proof techniques: direct, CHAPTER 7 Proving Non-Conditional Statements The last three chapters introduced three major proof techniques: direct, contrapositive and contradiction. These three techniques are used to prove statements

More information

Multi-variable Calculus and Optimization

Multi-variable Calculus and Optimization Multi-variable Calculus and Optimization Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Multi-variable Calculus and Optimization 1 / 51 EC2040 Topic 3 - Multi-variable Calculus

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

Chapter 3. Cartesian Products and Relations. 3.1 Cartesian Products

Chapter 3. Cartesian Products and Relations. 3.1 Cartesian Products Chapter 3 Cartesian Products and Relations The material in this chapter is the first real encounter with abstraction. Relations are very general thing they are a special type of subset. After introducing

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

The Quantum Harmonic Oscillator Stephen Webb

The Quantum Harmonic Oscillator Stephen Webb The Quantum Harmonic Oscillator Stephen Webb The Importance of the Harmonic Oscillator The quantum harmonic oscillator holds a unique importance in quantum mechanics, as it is both one of the few problems

More information

10 Evolutionarily Stable Strategies

10 Evolutionarily Stable Strategies 10 Evolutionarily Stable Strategies There is but a step between the sublime and the ridiculous. Leo Tolstoy In 1973 the biologist John Maynard Smith and the mathematician G. R. Price wrote an article in

More information

About the Gamma Function

About the Gamma Function About the Gamma Function Notes for Honors Calculus II, Originally Prepared in Spring 995 Basic Facts about the Gamma Function The Gamma function is defined by the improper integral Γ) = The integral is

More information

Solving simultaneous equations using the inverse matrix

Solving simultaneous equations using the inverse matrix Solving simultaneous equations using the inverse matrix 8.2 Introduction The power of matrix algebra is seen in the representation of a system of simultaneous linear equations as a matrix equation. Matrix

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 10

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 10 Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 10 Boundary Value Problems for Ordinary Differential Equations Copyright c 2001. Reproduction

More information