
Technische Universiteit Eindhoven, Faculteit Elektrotechniek
NIET-LINEAIRE SYSTEMEN / NEURALE NETWERKEN (P6) — Nonlinear Systems / Neural Networks
Exam held on Thursday … March 2007, from 9:… to …:… . This exam consists of 8 problems. /SOLUTIONS/

Problem 1 (1 point)

Consider the system:
$$\dot x_1 = a x_1 - b x_1 x_2$$
$$\dot x_2 = c x_1 x_2 - d x_2$$
where $a$, $b$, $c$ and $d$ are positive constants. Study the stability of the equilibria of this system based on linearization.

Solution

Equilibrium points:
$$a x_1 - b x_1 x_2 = 0, \quad c x_1 x_2 - d x_2 = 0 \iff x_1(a - b x_2) = 0, \quad x_2(c x_1 - d) = 0.$$
The real solutions of this system are
$$[x_1\ x_2]^T = [0\ 0]^T \quad \text{and} \quad [d/c\ \ a/b]^T.$$
Jacobian matrix:
$$J = \begin{pmatrix} a - b x_2 & -b x_1 \\ c x_2 & c x_1 - d \end{pmatrix}$$
Stability of the equilibrium points:
For $[0\ 0]^T$: $J = \begin{pmatrix} a & 0 \\ 0 & -d \end{pmatrix}$, $\lambda_1 = a > 0$, $\lambda_2 = -d < 0$, so $[0\ 0]^T$ is unstable (a saddle point).
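A quick numerical cross-check of the linearization (not part of the original solution; the values $a = b = c = d = 1$ are an arbitrary assumption):

```python
# Eigenvalues of the Jacobian at both equilibria (illustrative values a=b=c=d=1).
import numpy as np

a, b, c, d = 1.0, 1.0, 1.0, 1.0   # any positive constants

def jacobian(x1, x2):
    # Jacobian of (a*x1 - b*x1*x2, c*x1*x2 - d*x2)
    return np.array([[a - b * x2, -b * x1],
                     [c * x2,      c * x1 - d]])

for eq in [(0.0, 0.0), (d / c, a / b)]:
    print(eq, np.linalg.eigvals(jacobian(*eq)))
# (0, 0)     -> eigenvalues a and -d: a saddle, hence unstable.
# (d/c, a/b) -> eigenvalues +/- j*sqrt(a*d): purely imaginary, linearization inconclusive.
```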

For $[d/c\ \ a/b]^T$: $J = \begin{pmatrix} 0 & -bd/c \\ ac/b & 0 \end{pmatrix}$, $\lambda_{1,2} = \pm j\sqrt{ad}$. The eigenvalues lie on the imaginary ($j\omega$) axis, so the equilibrium is not hyperbolic and, by the Hartman–Grobman theorem, we cannot study the stability of $[d/c\ \ a/b]^T$ based on linearization.

Problem 2 (… points)

Consider the following systems:
(a) $\dot x_1 = -x_1^3$, $\dot x_2 = x_1$
(b) $\dot x_1 = -x_1^3 - x_2$, $\dot x_2 = x_1$
These systems have an equilibrium at the origin. The second system can be considered identical to the first one, but controlled by a signal $u = -x_2$.
(a) Is the origin of system (a) stable in the sense of Lyapunov?
(b) Show that for system (b) the origin is globally asymptotically stable.
Hint (especially for the second system): try the Lyapunov function $V = \tfrac{1}{2}(x_1^2 + x_2^2)$ and apply LaSalle's invariance principle.

Solution

(a) Jacobian matrix of (a):
$$J = \begin{pmatrix} -3x_1^2 & 0 \\ 1 & 0 \end{pmatrix}$$
At the origin $J = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$ with $\lambda_1 = \lambda_2 = 0$: the eigenvalues lie on the imaginary axis, so the origin is not hyperbolic and we cannot study its stability with the indirect Lyapunov method (linearization).
We can study the stability by direct inspection. The right-hand side of the first equation of (a) does not contain $x_2$, so $dx_1/dt$ depends only on $x_1$: if $x_1 > 0$ then $dx_1/dt < 0$, if $x_1 < 0$ then $dx_1/dt > 0$, and if $x_1 = 0$ then $dx_1/dt = 0$. Therefore the $x_1$ coordinate of every trajectory tends to 0. However, $x_1$ keeps its sign and reaches zero only in the limit, so $dx_2/dt = x_1$ never changes sign and the second coordinate $x_2$ either increases or decreases monotonically. Indeed, solving $\dot x_1 = -x_1^3$ gives $x_1(t) = x_1(0)/\sqrt{1 + 2x_1(0)^2 t}$, which decays only like $t^{-1/2}$; its integral diverges, so $x_2(t)$ grows without bound. Therefore system (a) is not stable in the sense of Lyapunov: stability in the sense of Lyapunov requires that trajectories starting close enough to the origin remain close to the origin.

(b) Let $V(x_1, x_2) = \tfrac{1}{2}(x_1^2 + x_2^2)$. Then
$$\dot V(x_1, x_2) = x_1 \dot x_1 + x_2 \dot x_2 = -x_1^4 - x_1 x_2 + x_1 x_2 = -x_1^4 \le 0 \quad \text{for every } (x_1, x_2) \in \mathbb{R}^2.$$
Hence the origin of system (b) is globally stable in the sense of Lyapunov. Because $\dot V$ is only negative semidefinite ($\dot V = 0$ not only at the origin but on the whole set $\{(x_1, x_2) : x_1 = 0\}$), we can claim asymptotic stability only via LaSalle's theorem. Let
$$S = \{(x_1, x_2) \in \mathbb{R}^2 : \dot V = 0\} = \{(x_1, x_2) : x_1 = 0\}.$$
But if $x_1 \equiv 0$ then $\dot x_1 = -x_2 = 0$, so $x_2 = 0$. Letting $M$ be the largest invariant set in $S$, we have $M = \{(0, 0)\}$, i.e. only the origin, and by LaSalle's theorem (with $V$ radially unbounded) the origin is globally asymptotically stable.
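An illustrative simulation of the two systems (not part of the exam; the initial state $(0.1, 0)$ and the time horizon are arbitrary choices):

```python
# Simulate systems (a) and (b) from the same initial state.
import numpy as np
from scipy.integrate import solve_ivp

def sys_a(t, x):   # (a): x1' = -x1^3,       x2' = x1
    return [-x[0] ** 3, x[0]]

def sys_b(t, x):   # (b): x1' = -x1^3 - x2,  x2' = x1
    return [-x[0] ** 3 - x[1], x[0]]

for name, f in [("(a)", sys_a), ("(b)", sys_b)]:
    sol = solve_ivp(f, (0.0, 1000.0), [0.1, 0.0], rtol=1e-8, atol=1e-10)
    print(name, "final state:", sol.y[:, -1])
# (a): x1 -> 0 but x2 keeps drifting away (no Lyapunov stability).
# (b): both components decay towards the origin (asymptotic stability).
```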

Problem 3 (1 point)

Draw the phase portraits of the following one-dimensional system as the parameter $\alpha$ changes:
$$\dot x(t) = \alpha x + x^3 - x^5.$$

Solution

$$\dot x = x(\alpha + x^2 - x^4) = f(x).$$
Equilibria: $x = 0$ and the real solutions of $x^4 - x^2 - \alpha = 0$, i.e.
$$x = \pm\sqrt{\frac{1 + \sqrt{1 + 4\alpha}}{2}} \ \text{(the outer pair)}, \qquad x = \pm\sqrt{\frac{1 - \sqrt{1 + 4\alpha}}{2}} \ \text{(the inner pair)}.$$
Linearization: $D_x f = \alpha + 3x^2 - 5x^4$; there is a zero eigenvalue at $(x, \alpha) = (0, 0)$.
Stability:
$x = 0$: $D_x f = \alpha$, stable for $\alpha < 0$, unstable for $\alpha > 0$;
outer pair (exists for $\alpha > -1/4$): substituting $\alpha = x^4 - x^2$ gives $D_x f = 2x^2(1 - 2x^2) < 0$ since $x^2 > 1/2$: stable;
inner pair (exists for $-1/4 < \alpha < 0$): here $x^2 < 1/2$, so $D_x f = 2x^2(1 - 2x^2) > 0$: unstable.

[Phase portraits, one per parameter regime: for $\alpha < -1/4$ a single stable equilibrium at $x = 0$; for $-1/4 < \alpha < 0$ stable equilibria at $x = 0$ and at the outer pair, separated by the unstable inner pair; for $\alpha > 0$ the origin is unstable and all trajectories converge to the outer pair. This is a subcritical pitchfork with a fold at $\alpha = -1/4$.]
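A sketch that tabulates the equilibria and their stability for a few parameter values (the chosen $\alpha$ values are illustrative only):

```python
# Equilibria of xdot = alpha*x + x^3 - x^5 and their stability via D_x f.
import numpy as np

def f_prime(x, alpha):
    return alpha + 3 * x ** 2 - 5 * x ** 4

for alpha in (-0.5, -0.15, 0.5):   # one alpha per regime; none hits D_x f = 0
    # real roots of -x^5 + x^3 + alpha*x = 0
    roots = np.roots([-1.0, 0.0, 1.0, 0.0, alpha, 0.0])
    real = sorted(set(np.round(r.real, 6) for r in roots if abs(r.imag) < 1e-9))
    for x_eq in real:
        kind = "stable" if f_prime(x_eq, alpha) < 0 else "unstable"
        print(f"alpha = {alpha:+.2f}   x* = {x_eq:+.4f}   {kind}")
```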

Problem 4 (… points)

Consider the nonlinear system in feedback form as given in the figure below: the reference is $r = 0$, the error $e$ drives a static nonlinearity NL whose output $v$ is the input of the linear part $g(s)$, and the loop is closed with negative feedback.

[Figure: NL is a single-valued, skew-symmetric function with a jump from $-h$ to $h$ at $e = 0$ and slope $\beta > 0$ elsewhere, i.e. $v = \beta e + h\,\mathrm{sgn}(e)$.]

The parameters of the nonlinear function NL are $h = 1$ and $\beta = 1$, and the transfer function of the linear part is
$$g(s) = \frac{1}{s(s+1)^2}.$$
Use describing function analysis to predict whether or not the system will oscillate. If yes, what are the predicted amplitude $a$ and frequency $\omega$ of the oscillation (amplitude and frequency of the error signal $e$)?

Note: the describing function corresponding to the nonlinear function NL is
$$N(a) = \beta + \frac{4h}{\pi a}.$$

Solution

Oscillations are predicted if the harmonic-balance equation $g(j\omega) = -1/N(a)$ has a real solution for the amplitude $a$ and frequency $\omega$ of the error signal $e = a\sin(\omega t)$.
$$g(s) = \frac{1}{s(s+1)^2} = \frac{1}{s^3 + 2s^2 + s},$$
$$g(j\omega) = \frac{1}{-2\omega^2 + j\omega(1 - \omega^2)} = \frac{-2\omega^2 - j\omega(1 - \omega^2)}{4\omega^4 + \omega^2(1 - \omega^2)^2}.$$
Because $-1/N(a)$ is real, the intersection must occur where $\mathrm{Im}\,g(j\omega) = 0$:
$$\omega(1 - \omega^2) = 0 \implies \omega_0 = 1\ \mathrm{rad/s} \quad (f = \omega_0/2\pi \approx 0.159\ \mathrm{Hz}).$$
Thus at the intersection point of $g(j\omega)$ and $-1/N(a)$:
$$g(j\omega_0) = \frac{1}{-2\omega_0^2} = -\frac{1}{2}, \qquad N(a) = 1 + \frac{4}{\pi a}.$$
Hence from $g(j\omega_0) = -1/N(a)$ it follows that $N(a) = 2$, i.e. $1 + 4/(\pi a) = 2$, so
$$a = \frac{4}{\pi} \approx 1.27.$$
Therefore the amplitude and frequency of the oscillating error signal $e$ are predicted as $a \approx 1.27$ and $\omega_0 = 1$ rad/s (period $T = 2\pi \approx 6.28$ s).
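A numerical check of the harmonic-balance solution (a sketch; the bracketing intervals passed to the root finder are assumptions):

```python
# Solve Im g(jw) = 0 and g(jw0) = -1/N(a) numerically.
import numpy as np
from scipy.optimize import brentq

h, beta = 1.0, 1.0
g = lambda w: 1.0 / (1j * w * (1j * w + 1.0) ** 2)   # g(jw) for w > 0
N = lambda a: beta + 4.0 * h / (np.pi * a)           # describing function of NL

w0 = brentq(lambda w: g(w).imag, 0.5, 2.0)                    # frequency: Im g(jw) = 0
a0 = brentq(lambda a: g(w0).real + 1.0 / N(a), 1e-3, 100.0)   # amplitude
print(w0, a0)   # ~1.0 rad/s and ~1.273 (= 4/pi)
```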

Problem 5 (… points)

Consider a two-input perceptron without bias input. The output activation function of the perceptron is
$$y = \varphi\Big(\sum_{j=1}^{2} w_j x_j\Big) = \begin{cases} 1 & \text{if } \sum_{j=1}^{2} w_j x_j \ge 0 \\ -1 & \text{if } \sum_{j=1}^{2} w_j x_j < 0 \end{cases}$$
Given the following input-output training pairs:
$$x(1) = \begin{pmatrix}1\\2\end{pmatrix},\ d(1) = 1; \quad x(2) = \begin{pmatrix}-2\\1\end{pmatrix},\ d(2) = 1; \quad x(3) = \begin{pmatrix}2\\1\end{pmatrix},\ d(3) = -1; \quad x(4) = \begin{pmatrix}1\\-2\end{pmatrix},\ d(4) = -1.$$
Apply the perceptron learning rule to adjust the weights $w_1$ and $w_2$, starting with the initial weight vector $w = [w_1\ w_2]^T = [1\ {-1}]^T$.

Solution

Adjustment of the weights:
if $d = 1$ and $\varphi(w^T x) = -1$ then $w := w + x$;
if $d = -1$ and $\varphi(w^T x) = 1$ then $w := w - x$.

Apply the first training sample $x(1) = [1\ 2]^T$: $\varphi(w^T x(1)) = \varphi(-1) = -1$, $d(1) = 1$. This is wrong, so we move the weight vector towards the sample: $w := w + x(1) = [1\ {-1}]^T + [1\ 2]^T = [2\ 1]^T$.
Apply the second training sample $x(2) = [-2\ 1]^T$: $\varphi(w^T x(2)) = \varphi(-3) = -1$, $d(2) = 1$. This is wrong, so again $w := w + x(2) = [2\ 1]^T + [-2\ 1]^T = [0\ 2]^T$.
Apply the third training sample $x(3) = [2\ 1]^T$: $\varphi(w^T x(3)) = \varphi(2) = 1$, $d(3) = -1$. This is wrong, so we move the weight vector away from the sample: $w := w - x(3) = [0\ 2]^T - [2\ 1]^T = [-2\ 1]^T$.
Apply the fourth training sample $x(4) = [1\ {-2}]^T$: $\varphi(w^T x(4)) = \varphi(-4) = -1$, $d(4) = -1$. This is correct, so we do not adjust the weights.
Apply again the first training sample: $\varphi(w^T x(1)) = \varphi(0) = 1$, $d(1) = 1$: correct, no adjustment.
Apply again the second training sample: $\varphi(w^T x(2)) = \varphi(5) = 1$, $d(2) = 1$: correct, no adjustment.
Apply again the third training sample: $\varphi(w^T x(3)) = \varphi(-3) = -1$, $d(3) = -1$: correct, no adjustment.
Apply again the fourth training sample: $\varphi(w^T x(4)) = \varphi(-4) = -1$, $d(4) = -1$: correct, no adjustment.
The solution is $w = [w_1\ w_2]^T = [-2\ 1]^T$, because all training pairs are now correctly classified.
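The same training loop as a short sketch (epochs are repeated until an entire pass makes no weight change):

```python
# Perceptron learning rule on the four training pairs (bipolar activation, no bias).
import numpy as np

X = np.array([[1, 2], [-2, 1], [2, 1], [1, -2]], dtype=float)
d = np.array([1, 1, -1, -1], dtype=float)
w = np.array([1.0, -1.0])                    # initial weights

phi = lambda s: 1.0 if s >= 0 else -1.0      # bipolar step activation

changed = True
while changed:                               # one epoch per pass over the data
    changed = False
    for x, target in zip(X, d):
        if phi(w @ x) != target:             # misclassified: move w towards/away from x
            w = w + target * x
            changed = True
print(w)                                     # -> [-2.  1.]
```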

Problem 6 (1 point)

Consider an autoassociative net with the bipolar step function as the activation function and weights set by the Hebb rule (outer products), with the main diagonal of the weight matrix set to zero.
a) Find the weight matrix to store the bipolar vector X(1) (seven components, exactly one of them $-1$);
b) test the net using X(1) as input;
c) test the net using Y(1), a copy of X(1) with one component flipped, as input;
d) find the weight matrix to store the bipolar vector X(2) (seven components, two of them $-1$);
e) test the net using X(2) as input;
f) test the net using Y(2), a copy of X(2) with one component flipped, as input;
g) find the weight matrix to store both X(1) and X(2);
h) test the new net on X(1), X(2), Y(1) and Y(2) as inputs.

Solution

a) The weight matrix to store X(1) is computed by the Hebbian rule (outer product), with the diagonal zeroed:
$$W_1 = X(1)X(1)^T - I.$$

b) The bipolar step function is
$$f(\mathrm{in}_i) = \begin{cases} 1 & \text{if } \mathrm{in}_i \ge 0 \\ -1 & \text{if } \mathrm{in}_i < 0 \end{cases}$$
Since $X(1)^T X(1) = 7$,
$$W_1 X(1) = X(1)\big(X(1)^T X(1)\big) - X(1) = 6\,X(1), \qquad f\big(6X(1)\big) = X(1):$$
X(1) is correctly associated with X(1).

c) Y(1) agrees with X(1) in all but one component, so $X(1)^T Y(1) = 5$ and
$$W_1 Y(1) = 5X(1) - Y(1),$$
whose $i$-th component is $4X(1)_i$ where Y(1) and X(1) agree and $6X(1)_i$ at the flipped position. Every component carries the sign of X(1), so $f(W_1 Y(1)) = X(1)$: Y(1) is correctly associated with X(1), i.e. the net corrects the single flipped component.

d) In the same way, the weight matrix to store X(2) is
$$W_2 = X(2)X(2)^T - I.$$

e) $W_2 X(2) = 6\,X(2)$ and $f(6X(2)) = X(2)$: X(2) is correctly associated with X(2).

f) As in c), $W_2 Y(2) = 5X(2) - Y(2)$, all components of which carry the sign of X(2), so $f(W_2 Y(2)) = X(2)$: Y(2) is correctly associated with X(2).

g) The weight matrix to store both X(1) and X(2) is the sum of the Hebbian outer products, again with the diagonal zeroed:
$$W = X(1)X(1)^T + X(2)X(2)^T - 2I.$$

h) Storing two patterns introduces a cross-talk term. With $s = X(2)^T X(1)$ (here $|s| = 1$):
$$W X(1) = 5X(1) + s\,X(2), \qquad W X(2) = 5X(2) + s\,X(1);$$
every component has magnitude 4 or 6 and the sign of the stored pattern, so X(1) and X(2) are still correctly associated with themselves. For the noisy input Y(2) the cross-talk coefficient $X(1)^T Y(2)$ still has magnitude 1, so $f(W Y(2)) = X(2)$: Y(2) is correctly associated with X(2). For Y(1), however, the cross-talk coefficient grows to $|X(2)^T Y(1)| = 3$; then
$$W Y(1) = 5X(1) + \big(X(2)^T Y(1)\big)X(2) - 2Y(1)$$
contains components in which the cross-talk cancels the stored pattern, and $f(W Y(1)) \ne X(1)$: Y(1) is no longer correctly associated with X(1).
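A runnable sketch of parts g)–h). The exact exam vectors were lost in this transcription, so X1 and X2 below are hypothetical stand-ins with the structure described above:

```python
# Autoassociative Hebb net, parts g)-h). X1/X2 are assumed stand-in patterns.
import numpy as np

X1 = np.array([1, 1, -1, 1, 1, 1, 1])    # one -1 component
X2 = np.array([1, -1, 1, 1, -1, 1, 1])   # two -1 components, X2.X1 = 1
Y1 = X1.copy(); Y1[1] = -1               # X1 with one component flipped
Y2 = X2.copy(); Y2[0] = -1               # X2 with one component flipped

def store(*patterns):
    # Hebbian outer products with the self-connections (diagonal) zeroed
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

recall = lambda W, v: np.where(W @ v >= 0, 1, -1)   # bipolar step output

W = store(X1, X2)
for name, v in [("X1", X1), ("X2", X2), ("Y1", Y1), ("Y2", Y2)]:
    print(name, "->", recall(W, v))
# X1, X2 and Y2 are recalled correctly; Y1 is not (cross-talk too large).
```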

Problem 7 (… points)

Give a brief description of the following items related to Radial Basis Function Networks (RBFN): the linear separability problem, Cover's theorem on the separability of patterns, RBFN architecture, radial basis functions, free parameters in an RBFN, learning algorithms, features.

Solution

Linear separability problem: linear classifiers are easy to use, but in reality there are many cases that linear classifiers cannot handle. We therefore look for a network that nonlinearly converts the input to a higher dimension, after which the patterns can be classified using only one layer of neurons with linear activation functions.

Cover's theorem on the separability of patterns: a complex pattern-classification problem that is not linearly separable in a low-dimensional space is more likely to be linearly separable when cast nonlinearly into a high-dimensional space.

RBFN architecture: a simple three-layer structure: an input layer; a hidden layer with nonlinear activation functions; an output layer.

Radial basis functions: the Gaussian radial-basis function is used most often:
$$g(x) = \exp\!\big[-\big(\lVert x - m \rVert / \sigma\big)^2\big].$$

Free parameters: three different sets of variables affect the performance of an RBF network with Gaussian functions: the centre $m$ of each radial-basis activation function, the width $\sigma$ of each radial-basis activation function, and the weights of the output layer.

Learning algorithms: modelling of the radial basis functions: centres (random initialisation, self-organised learning, or supervised learning) and widths (often unsupervised); supervised learning of the output-layer weights.

Features: easier interpretation of the system's results; universal approximation ability; much faster training than back-propagation neural networks.
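A minimal RBFN sketch matching this description (the toy data, the random choice of centres, the common width $\sigma = 1$ and the least-squares fit of the output weights are all assumptions):

```python
# Minimal RBF network: random centres, fixed width, linear output layer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])                                   # toy regression target

centres = X[rng.choice(len(X), 20, replace=False)]    # hidden-layer centres
sigma = 1.0                                           # common width

def hidden(X):
    # Gaussian radial basis: g(x) = exp(-(||x - m|| / sigma)^2)
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return np.exp(-(d / sigma) ** 2)

G = hidden(X)
w, *_ = np.linalg.lstsq(G, y, rcond=None)             # supervised output weights
print("train RMS error:", np.sqrt(np.mean((G @ w - y) ** 2)))
```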

Problem 8 (1 point)

Given a Maxnet with 6 neurons with self-feedback weights $w_{ii} = 1$. The output activation function of each neuron is
$$f(x\_\mathrm{in}) = \begin{cases} x\_\mathrm{in} & \text{if } x\_\mathrm{in} > 0 \\ 0 & \text{otherwise} \end{cases}$$
Choose the inhibitory weights $w_{ij}$, $i \ne j$, and iterate the network until it stabilizes, given the initial states $x_1(0) = 0.…$, $x_2(0) = 0.…$, $x_3(0) = 0.7$, $x_4(0) = 0.…$, $x_5(0) = 0.8$, $x_6(0) = 0.6$.

Solution

Take $w_{ii} = \theta = 1$ and $w_{ij} = -\varepsilon$ for $i \ne j$, with $0 < \varepsilon < 1/6$ (i.e. less than $1/m$ for $m = 6$ neurons). The neuron states are updated synchronously by
$$x_i(t+1) = f\Big(\sum_{j=1}^{m} w_{ij}\, x_j(t)\Big) = f\Big(\theta\, x_i(t) - \varepsilon \sum_{j \ne i} x_j(t)\Big).$$
Each iteration suppresses every neuron by $\varepsilon$ times the total activation of its competitors, so the smallest activations are driven to zero first: already after the first iteration $x_1$ and $x_2$ are zero, and the following iterations successively eliminate the remaining competitors in order of their initial activations. After six iterations only neuron 5, which started with the largest activation $x_5(0) = 0.8$, is still positive, and $x(7) = x(6)$: the network has stabilized with neuron 5 as the winner.
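A runnable sketch of the competition ($\varepsilon = 0.1$ and the three initial values lost from the transcription are assumptions):

```python
# Maxnet winner-take-all competition.
import numpy as np

m, eps = 6, 0.1                                  # need 0 < eps < 1/m
x = np.array([0.3, 0.5, 0.7, 0.4, 0.8, 0.6])     # x3(0)=0.7, x5(0)=0.8, x6(0)=0.6 as in
                                                 # the exam; the other entries are assumed

f = lambda v: np.maximum(v, 0.0)                 # f(x_in) = x_in if positive, else 0

t = 0
while True:
    x_new = f(x - eps * (x.sum() - x))           # theta*x_i - eps*sum_{j != i} x_j, theta=1
    t += 1
    if np.allclose(x_new, x):                    # stabilized: x(t+1) = x(t)
        break
    x = x_new
print(t, x)                                      # only neuron 5 (index 4) stays positive
```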
