
Technische Universiteit Eindhoven, Faculteit Elektrotechniek
NONLINEAR SYSTEMS / NEURAL NETWORKS (P6), held on Thursday, March 7. This examination consists of 8 problems.

/SOLUTIONS/

Problem 1 (1 point)
Consider the system

  ẋ₁ = a·x₁ − b·x₁x₂        (1)
  ẋ₂ = c·x₁x₂ − d·x₂

where a, b, c and d are positive constants. Study the stability of the equilibria of this system based on linearization.

Solution
Equilibrium points:

  a·x₁ − b·x₁x₂ = 0        x₁(a − b·x₂) = 0
  c·x₁x₂ − d·x₂ = 0   ⇔    x₂(c·x₁ − d) = 0

The real solutions of this system are [x₁ x₂]ᵀ = [0 0]ᵀ and [x₁ x₂]ᵀ = [d/c a/b]ᵀ.

Jacobian matrix:

  J = [ a − b·x₂    −b·x₁    ]
      [ c·x₂        c·x₁ − d ]

Stability of the equilibrium points:

For [0 0]ᵀ:  J = [ a 0 ; 0 −d ],  λ₁ = a > 0, λ₂ = −d < 0, so [0 0]ᵀ is unstable (a saddle point).
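The linearization of Problem 1 can be cross-checked symbolically; a minimal sketch with SymPy (symbol names are illustrative):

```python
import sympy as sp

# Positive system parameters and real state variables
a, b, c, d = sp.symbols('a b c d', positive=True)
x1, x2 = sp.symbols('x1 x2', real=True)

f1 = a*x1 - b*x1*x2          # dx1/dt
f2 = c*x1*x2 - d*x2          # dx2/dt
J = sp.Matrix([f1, f2]).jacobian([x1, x2])

# Equilibria: (0, 0) and (d/c, a/b)
equilibria = sp.solve([f1, f2], [x1, x2], dict=True)

J0 = J.subs({x1: 0, x2: 0})        # diag(a, -d): one positive, one negative eigenvalue
Jc = J.subs({x1: d/c, x2: a/b})    # eigenvalues +/- i*sqrt(a*d): purely imaginary
print(J0.eigenvals(), Jc.eigenvals())
```

The purely imaginary pair at (d/c, a/b) is what makes linearization inconclusive at that equilibrium.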

For [d/c a/b]ᵀ:

  J = [ 0      −b·d/c ]
      [ a·c/b  0      ]

λ₁,₂ = ± j·√(ad). The eigenvalues lie on the imaginary axis jω, and because of the Hartman-Grobman theorem we cannot study the stability of the equilibrium point [d/c a/b]ᵀ based on linearization.

Problem 2 (1.5 points)
Consider the following systems:

  (a)  ẋ₁ = −x₁³        (b)  ẋ₁ = −x₁³ − x₂
       ẋ₂ = x₁               ẋ₂ = x₁

Both systems have an equilibrium at the origin. The second system can be considered identical to the first one, but controlled by a signal u = −x₂.
(a) Is the origin of system (a) stable in the sense of Lyapunov?
(b) Show that for system (b) the origin is globally asymptotically stable.
Hint (especially for the second system): try the Lyapunov function V(x₁, x₂) = ½(x₁² + x₂²) and apply the LaSalle principle.

Solution
(a) The Jacobian matrix of (a):

  J = [ −3x₁²  0 ]
      [ 1      0 ]

Stability of the origin of (a): at [0 0]ᵀ, J = [0 0 ; 1 0], so λ₁ = λ₂ = 0. The eigenvalues lie on the imaginary axis jω, and because of the Hartman-Grobman theorem we cannot study the stability of the origin by the indirect Lyapunov method (linearization).

We can study the stability by direct inspection. The right-hand side of the first equation of (a) does not contain x₂, so dx₁/dt depends only on x₁. If x₁ > 0 then dx₁/dt < 0; if x₁ < 0 then dx₁/dt > 0; and if x₁ = 0 then dx₁/dt = 0. Therefore the x₁ coordinate of the system trajectory tends to 0. However, because x₁ is either positive or negative and becomes zero only in the limit, dx₂/dt does not

change its sign, and thus the second coordinate x₂ of the system trajectory either increases or decreases monotonically. Therefore system (a) is not stable in the sense of Lyapunov, because stability in the sense of Lyapunov implies that if the initial point of the system trajectory is near enough to the origin, the trajectory remains close to the origin.

(b) Let V(x₁, x₂) = ½(x₁² + x₂²). Then

  V̇(x₁, x₂) = x₁ẋ₁ + x₂ẋ₂ = −x₁⁴ − x₁x₂ + x₂x₁ = −x₁⁴ ≤ 0  for every (x₁, x₂) ∈ R².

Therefore the origin of system (b) is globally stable in the sense of Lyapunov, because V̇(x₁, x₂) is negative semi-definite (V̇(x₁, x₂) = 0 not only at the origin but for all {(x₁, x₂) | x₁ = 0 and x₂ ≠ 0}). Since V̇ = 0 at more points than (0,0), we can claim asymptotic stability only if we can apply LaSalle's theorem. Let

  S = {(x₁, x₂) ∈ R² ; V̇ = 0} = {(x₁, x₂) | x₁ = 0}.

But if x₁ ≡ 0, then ẋ₁ = −x₁³ − x₂ = −x₂ = 0, so x₂ = 0. Letting M be the largest invariant set in S, we have M = {(0, 0)}, i.e. only the origin, and by LaSalle's theorem the origin is globally asymptotically stable.

Problem 3 (1 point)
Draw phase portraits of the following one-dimensional system as the parameter μ changes:

  ẋ = μx + x³ − x⁵        (1)

Solution

  ẋ = x(μ + x² − x⁴) = f(x)

Equilibria: x = 0 and x² = (1 ± √(1 + 4μ))/2, i.e. x ∈ {0, ±x₋, ±x₊}.

  Dₓf = μ + 3x² − 5x⁴

There is a zero eigenvalue at (x, μ) = (0, 0).

Stability:
  x = 0: Dₓf = μ, stable for μ < 0, unstable for μ > 0;
  x = ±x₊ (outer equilibria): Dₓf < 0, stable wherever they exist (μ > −1/4);
  x = ±x₋ (inner equilibria): Dₓf > 0, unstable (−1/4 < μ < 0).
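The equilibria and their stability for the one-dimensional system ẋ = μx + x³ − x⁵ can be spot-checked numerically; the parameter values below are chosen for illustration only:

```python
import numpy as np

def equilibria(mu):
    """Real roots of f(x) = mu*x + x**3 - x**5."""
    roots = np.roots([-1.0, 0.0, 1.0, 0.0, mu, 0.0])  # -x^5 + x^3 + mu*x
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.unique(np.round(real, 9))

def is_stable(x, mu):
    """A hyperbolic equilibrium is stable when f'(x) = mu + 3x^2 - 5x^4 < 0."""
    return mu + 3*x**2 - 5*x**4 < 0

for mu in (-0.5, -0.15, 0.5):   # below, inside and above the bistable range
    eq = equilibria(mu)
    print(mu, [(round(x, 3), is_stable(x, mu)) for x in eq])
```

For μ = −0.5 only x = 0 remains (stable); for μ = −0.15 five equilibria coexist; for μ = 0.5 the origin is unstable and the two outer equilibria are stable.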

Phase portrait: (figure: one-dimensional phase portraits for representative values of μ; for μ < −1/4 all trajectories tend to the single equilibrium x = 0; for −1/4 < μ < 0 the stable equilibria x = 0 and x = ±x₊ coexist, separated by the unstable points x = ±x₋; for μ > 0 the origin is unstable and trajectories converge to x = ±x₊)

Problem 4 (1.5 points)
Consider the nonlinear system in the feedback form given in the figure below: a loop with reference r = 0, error signal e, a static nonlinearity NL with output v, and the linear part g(s). NL is a single-valued skew-symmetric function with output jumps ±h at e = 0 and slope β > 0 elsewhere; its parameters are h = 1 and β = 1. The transfer function of the linear part is

  g(s) = 1/(s(s + 1)²)

Use describing function analysis to predict whether or not the system will oscillate. If yes, what are the predicted amplitude a and frequency ω of oscillation (amplitude and frequency of the error signal e)?

Note: the describing function corresponding to the nonlinear function NL is

  N(a) = β + 4h/(πa)

Solution
Oscillations appear if the harmonic balance equation g(jω) = −1/N(a) has real solutions with respect to the amplitude a and frequency ω of the error signal e = a·sin(ωt).

  g(s) = 1/(s(s + 1)²) = 1/(s³ + 2s² + s)

  g(jω) = 1/(−jω³ − 2ω² + jω) = 1/(−2ω² + jω(1 − ω²))

Because −1/N(a) is real,

  Im(g(jω)) = 0  ⇒  ω(1 − ω²) = 0  ⇒  ω₀ = 1 rad/s;  f = ω₀/2π ≈ 0.159 Hz.

Thus at the intersection point of g(jω) and −1/N(a) we have ω = ω₀ and

  g(jω₀) = −1/2,    N(a) = 1 + 4/(πa).

Hence from g(jω₀) = −1/N(a) it follows that

  1 + 4/(πa) = 2  ⇒  4/(πa) = 1  ⇒  a = 4/π ≈ 1.27.

Therefore the amplitude and frequency of the oscillating error signal e are a ≈ 1.27 and ω₀ = 1 rad/s (period 2π ≈ 6.28 s).
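The harmonic-balance solution can be verified numerically; a short sketch with h = 1 and β = 1 as in the problem:

```python
import numpy as np

h, beta = 1.0, 1.0

def g(s):
    """Linear part g(s) = 1/(s(s+1)^2)."""
    return 1.0 / (s * (s + 1.0)**2)

def N(a):
    """Describing function of the jump-plus-slope nonlinearity."""
    return beta + 4.0*h / (np.pi * a)

# Frequency where the Nyquist plot crosses the real axis: Im g(jw) = 0
w0 = 1.0
assert abs(g(1j*w0).imag) < 1e-12          # crossing at w0 = 1 rad/s

# Harmonic balance g(jw0) = -1/N(a)  =>  N(a) = -1/g(jw0) = 2
a = 4.0*h / (np.pi * (-1.0/g(1j*w0).real - beta))
print(a)   # 4/pi ~ 1.27
```

The predicted limit cycle amplitude a = 4/π and frequency ω₀ = 1 rad/s agree with the hand calculation above.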

Problem 5 (1.5 points)
Consider a two-input perceptron without a bias input. The output activation function of the perceptron is

  φ(Σⱼ wⱼxⱼ) = 1   if Σⱼ wⱼxⱼ ≥ 0
  φ(Σⱼ wⱼxⱼ) = −1  if Σⱼ wⱼxⱼ < 0

Given four input-output training pairs (x(1), d(1)), (x(2), d(2)), (x(3), d(3)), (x(4), d(4)), apply the perceptron learning rule to adjust the weights w₁ and w₂, starting with the given initial weight vector w = [w₁ w₂]ᵀ.

Solution
Adjustment of the weights:
  if d = 1 and φ(wᵀx) = −1, then w := w + x
  if d = −1 and φ(wᵀx) = 1, then w := w − x

Apply the first training sample x(1): φ(wᵀx(1)) = −1, while d(1) = 1. This is wrong, so we move the weight vector towards the sample: w := w + x(1).
Apply the second training sample x(2): φ(wᵀx(2)) = −1, while d(2) = 1. This is wrong, so we again move the weight vector towards the sample: w := w + x(2).
Apply the third training sample x(3): φ(wᵀx(3)) = 1, while d(3) = −1. This is wrong, so we move the weight vector away from the sample: w := w − x(3).
Apply the fourth training sample x(4) to produce:

φ(wᵀx(4)) = −1, equal to d(4). This is correct, so we do not adjust the weights.
Apply again the first training sample x(1): φ(wᵀx(1)) = 1 = d(1). Correct, so we do not adjust the weights.
Apply again the second training sample x(2): φ(wᵀx(2)) = 1 = d(2). Correct, so we do not adjust the weights.
Apply again the third training sample x(3): φ(wᵀx(3)) = −1 = d(3). Correct, so we do not adjust the weights.
Apply again the fourth training sample x(4): φ(wᵀx(4)) = −1 = d(4). Correct, so we do not adjust the weights.
The resulting weight vector w = [w₁ w₂]ᵀ is the solution, because all training pairs are now correctly classified.

Problem 6 (1 point)
Consider an autoassociative net with the bipolar step function as the activation function and weights set by the Hebb rule (outer products), with the main diagonal of the weight matrix set to zero.
a) Find the weight matrix to store the vector X(1);
b) Test the net using X(1) as input;
c) Test the net using the noisy vector Y(1) as input;
d) Find the weight matrix to store the vector X(2);
e) Test the net using X(2) as input;
f) Test the net using the noisy vector Y(2) as input;
g) Find the weight matrix to store both X(1) and X(2);
h) Test the new net on X(1), X(2), Y(1) and Y(2) as inputs.

Solution
a) The weight matrix to store X(1) is computed by the Hebb rule as the outer product:

  W₁ = X(1)·X(1)ᵀ − I   (outer product with the main diagonal set to zero)

b) The bipolar step function is

  f(in_i) = 1   if in_i ≥ 0
  f(in_i) = −1  if in_i < 0

Test with X(1) as input: because the main diagonal of W₁ is zero and X(1) is bipolar,

  W₁·X(1) = X(1)(X(1)ᵀX(1)) − X(1) = (n − 1)·X(1),

a positive multiple of X(1), where n is the vector dimension. Applying the bipolar step function componentwise therefore returns X(1): the input is correctly associated with X(1).

c) Test with Y(1) as input: computing W₁·Y(1) and applying the bipolar step function again yields X(1), so the noisy input Y(1) is correctly associated with X(1).

d) The weight matrix to store X(2) is likewise

  W₂ = X(2)·X(2)ᵀ − I

e) Test with X(2) as input:

W₂·X(2) = (n − 1)·X(2), so the bipolar step function returns X(2): correctly associated with X(2).

f) Test with Y(2) as input: W₂·Y(2) passed through the bipolar step function again yields X(2), so Y(2) is correctly associated with X(2).

g) The weight matrix to store both X(1) and X(2) is the sum of the two outer products, with the main diagonal set to zero:

  W = X(1)·X(1)ᵀ + X(2)·X(2)ᵀ − 2I

h) Test of the new net with X(1) as input: f(W·X(1)) = X(1), correctly associated with X(1).

Test with X(2) as input: f(W·X(2)) = X(2), correctly associated with X(2).
Test with Y(1) as input: f(W·Y(1)) ≠ X(1); the noisy input Y(1) is no longer correctly associated with X(1), because the crosstalk from the second stored pattern corrupts the recall.
Test with Y(2) as input: f(W·Y(2)) = X(2), correctly associated with X(2).

Problem 7 (1.5 points)
Give a brief description of the following items related to Radial Basis Function Networks (RBFN): linear separability problem, Cover's theorem on the separability of patterns, RBFN architecture, radial basis functions, free parameters in an RBFN, learning algorithms, features.

Solution
Linear separability problem: linear classifiers are easy to use, but in reality there are many cases that linear classifiers cannot handle. We therefore look for a network that nonlinearly maps the input to a higher dimension, after which the data can be classified using only one layer of neurons with linear activation functions.

Cover's theorem on the separability of patterns: a complex pattern classification problem that is nonlinearly separable in a low-dimensional space is more likely to be linearly separable in a high-dimensional space.

RBFN architecture: a simple three-layer structure: an input layer; a hidden layer with nonlinear activation functions; an output layer.

Radial basis functions: the Gaussian radial basis function is used most often: g(x) = exp[−(‖x − m‖/σ)²].

Free parameters: three different sets of variables affect the performance of an RBF network with Gaussian functions: the centre of each radial basis activation function, the width of each radial basis activation function, and the weights of the output layer.

Learning algorithms: modelling of the radial basis functions: centres (random initialisation of centres, self-organised learning of centres, or supervised learning) and widths (often unsupervised); supervised learning of the output weights.

Features: easier interpretation of the system's results; universal approximation ability; much faster training than back-propagation neural networks.

Problem 8 (1 point)
Given a Maxnet with 6 neurons with self-feedback weights wᵢᵢ = 1. The output activation function of each neuron is

  f(x_in) = x_in  if x_in > 0
  f(x_in) = 0     otherwise

Choose the inhibitory weights wᵢⱼ, i ≠ j, and iterate the network until it stabilizes, given the initial states of the neurons x₃(0) = 0.7, x₅(0) = 0.8, x₆(0) = 0.6, and x₁(0), x₂(0), x₄(0) all smaller than 0.6.

Solution
wᵢᵢ = θ = 1; wᵢⱼ = −ε for i ≠ j, with 0 < ε < 1/6; we choose ε = 0.1. The neural network states update as

  xᵢ(t+1) = f( wᵢᵢ·xᵢ(t) + Σⱼ≠ᵢ wᵢⱼ·xⱼ(t) ) = f( θ·xᵢ(t) − ε·Σⱼ≠ᵢ xⱼ(t) ),  i = 1, …, 6.

x(0) = [x₁(0) x₂(0) 0.7 x₄(0) 0.8 0.6] are the initial states.

In the first iteration the net inputs of the weakest neurons are already negative, so x₁(1) = x₂(1) = 0, while the stronger neurons keep positive but reduced activations. We calculate the neuron states in the same way for the next iterations: at every step each neuron is inhibited by ε times the sum of all other activations, so the smaller activations are successively driven to zero while the largest one decays most slowly. After six iterations the state vector no longer changes, x(7) = x(6): the Maxnet has stabilized, and the only neuron still active is neuron 5, which had the largest initial state x₅(0) = 0.8. The Maxnet has thus selected the maximum.
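The competition can be reproduced in a few lines; a sketch with illustrative initial states (the first, second and fourth values below are made up, since only x₃(0), x₅(0) and x₆(0) are fixed by the problem):

```python
import numpy as np

def maxnet(x, eps=0.1, max_iter=100):
    """Iterate x_i <- max(0, x_i - eps * sum_{j != i} x_j) until the state stops changing."""
    x = np.asarray(x, dtype=float)
    for t in range(max_iter):
        new = x - eps * (x.sum() - x)      # self-weight 1, mutual inhibition -eps
        new = np.maximum(new, 0.0)         # f(v) = v if v > 0 else 0
        if np.allclose(new, x):
            return new, t
        x = new
    return x, max_iter

# x3(0) = 0.7, x5(0) = 0.8, x6(0) = 0.6 from the problem; the rest are illustrative
x0 = [0.3, 0.5, 0.7, 0.1, 0.8, 0.6]
xf, iters = maxnet(x0)
print(np.flatnonzero(xf))   # only neuron index 4 (the fifth neuron) survives
```

Whatever the three unknown initial values (all below 0.6), the winner is the neuron with the largest initial activation, here x₅.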