Adaptive Control Using Combined Online and Background Learning Neural Network


Eric N. Johnson and Seung-Min Oh

Abstract — A new adaptive neural network (NN) control concept is proposed, with a proof of its stability properties. The NN learns the plant dynamics through online training and combines this with background learning from previously recorded data, which can be advantageous to the convergence characteristics of the NN adaptation. The adaptation characteristics of the new combined online and background learning adaptive NN are demonstrated through simulations.

I. INTRODUCTION

Recently, artificial neural networks, which mimic the biological neuronal mechanisms of the human intelligence system, the brain, have been used successfully in various fields, including pattern recognition, signal processing, and adaptive control [1]. A neural network can be thought of as a parameterized class of nonlinear maps. Throughout the 1980s and the early 1990s, numerous researchers showed that multilayer feedforward neural networks are capable of approximating any continuous unknown nonlinear function, or mapping, on a compact set [1], [2], and that neural networks have an online learning adaptation capability that does not require preliminary off-line tuning [3]. As a result, this architecture represents a successful framework for use in adaptive nonlinear control systems.

Online adaptive neural network controllers have been extensively studied and successfully applied to robot control by Lewis and others [3], who have provided many feasible online learning algorithms accompanied by mathematical stability analyses. Online learning architectures are used to compensate for dynamic inversion model error caused by system uncertainties. Current adaptive neural network online control methods have been diversified into various forms by using techniques from classical adaptive control, including σ-modification [1], ε-modification [1], [4], the dead-zone method [5], and the projection method [6]. Most of these NN training laws have NN weight dynamics of low rank, mostly unity [7]. NN training could potentially be of full rank, and these additional degrees of freedom could be utilized to improve system performance.

In this paper, we propose a new approach to neural network adaptive control that overcomes the rank-one limitation and exhibits semi-global learning properties. This is accomplished by combining online learning algorithms with a background learning concept.

(This work was supported in part by NSF #ECS. E. N. Johnson is the Lockheed Martin Assistant Professor of Avionics Integration, School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA. S.-M. Oh is a Graduate Research Assistant at the same school.)

Fig. 1. Neural network adaptive control, including an approximate dynamic inversion. (Block diagram: external commands x_c, ẋ_c drive the reference model; the reference-model states x_rm, ẋ_rm and the plant states x, ẋ feed an error calculation and P-D compensator; the pseudo-control components ν_crm, ν_pd, ν_ad enter the approximate dynamic inversion, whose output drives the plant; the adaptation law updates the neural network.)

II. ADAPTIVE CONTROL ARCHITECTURE

The block diagram in Fig. 1 illustrates the key elements of the baseline controller architecture: the plant; the reference model that provides the desired response; the approximate dynamic inversion block; the linear (proportional/derivative, or P-D) controller that is used to track the reference model; and the online learning NN that corrects for errors/uncertainty in the approximate dynamic inversion.
A. Dynamic Inversion-Based Adaptive Control

For simplicity, consider the case of full model inversion, in which the representative n-degree-of-freedom multi-input multi-output (MIMO) plant dynamics are given as

ẍ = f(x, ẋ, δ),  (1)

where x, ẋ, δ ∈ R^n. We introduce a pseudo-control input ν, which represents a desired ẍ and is expected to be approximately achieved by the actuating signal δ: ẍ = ν, where ν = f(x, ẋ, δ). Ideally, the actual control input δ is obtained by inverting f. Since the exact function f(x, ẋ, δ) is usually unknown or difficult to invert, an approximation ν = f̂(x, ẋ, δ) is introduced, which results in a modeling error in the system dynamics

ẍ = ν + Δ(x, ẋ, δ),  (2)

where Δ(x, ẋ, δ) = f(x, ẋ, δ) − f̂(x, ẋ, δ). Based on this approximation of ν, the actuator command is determined by an approximate dynamic inversion of the form

δ_cmd = f̂⁻¹(x, ẋ, ν),  (3)

where ν is the pseudo-control and represents a desired ẍ that is expected to be approximately achieved by δ_cmd.
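As a concrete illustration of (1)–(3), the following minimal Python sketch computes a pseudo-control and the inverted actuator command for a hypothetical scalar plant. The functions f_true and f_hat below are illustrative assumptions, not the paper's plant; the point is only the relationship among ν, δ_cmd = f̂⁻¹(x, ẋ, ν), and the resulting model error Δ of (2).

```python
import numpy as np

# Hypothetical scalar plant: x_ddot = f(x, xdot, delta).  The controller only
# knows the approximate model f_hat and inverts it for the pseudo-control nu.
def f_true(x, xd, delta):
    return delta + np.sin(x) - xd * np.abs(xd)   # assumed "true" dynamics (illustrative)

def f_hat(x, xd, delta):
    return delta                                  # crude approximate model

def f_hat_inverse(x, xd, nu):
    # Solve f_hat(x, xd, delta) = nu for delta; trivial for this f_hat.
    return nu

x, xd = 0.3, -0.1
nu = 1.0                                          # desired acceleration (pseudo-control)
delta_cmd = f_hat_inverse(x, xd, nu)              # eq. (3): delta_cmd = f_hat^{-1}(x, xd, nu)
x_ddot = f_true(x, xd, delta_cmd)                 # acceleration actually achieved
model_error = x_ddot - nu                         # Delta(x, xd, delta) in eq. (2)
print(delta_cmd, x_ddot, model_error)
```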

The reference model dynamics are given as

ẍ_rm = ν_crm = f_rm(x_rm, ẋ_rm, x_c, ẋ_c),  (4)

where x_c, ẋ_c represent external commands.

B. Model Tracking Error Dynamics

The total pseudo-control signal for the system is constructed from three components,

ν = ν_crm + ν_pd − ν_ad,  (5)

where ν_crm is the pseudo-control signal generated by the reference model in (4), ν_pd is the output of the linear compensator, and ν_ad is the NN adaptation signal. The linear compensator term ν_pd can be designed by any standard linear control design technique, and is most often implemented by PD (proportional-derivative) compensation, as long as the linearized closed-loop system is stable. For the second-order system, PD compensation is expressed by ν_pd = [K_p K_d] e, where the reference model tracking error is defined by e = [(x_rm − x)^T (ẋ_rm − ẋ)^T]^T, and the compensator gain matrices K_p > 0 and K_d > 0, both in R^{n×n}, are diagonal matrices to be designed. The model tracking error dynamics are found by differentiating e:

ė = Ae + B [ν_ad(x, ẋ, δ) − Δ(x, ẋ, δ)],  (6)

where

A = [ 0  I ; −K_p  −K_d ],  B = [ 0 ; I ],

and Δ(x, ẋ, δ) = f(x, ẋ, δ) − f̂(x, ẋ, δ) is the model error to be approximated and canceled by ν_ad, the output of the NN. The linear PD compensator gains K_p, K_d are chosen such that A is Hurwitz. These dynamics form the basis of the NN adaptive law.

C. Neural-Network-Based Adaptation

Single hidden layer (SHL) perceptron NNs are universal approximators, in that they can approximate any smooth nonlinear function to within arbitrary accuracy, given a sufficient number of hidden-layer neurons and input information [2]. Here the SHL NN is trained online and adapted to cancel the model error with feedback. The input-output map of the SHL NN can be expressed as [7]

ν_ad,k = b_w θ_w,k + Σ_{j=1}^{n_2} w_{j,k} σ_j( b_v θ_v,j + Σ_{i=1}^{n_1} v_{i,j} x_i ),  k = 1, ..., n_3,

or, in matrix form, ν_ad(W, V, x̄) = W^T σ(V^T x̄) ∈ R^{n_3}, where x̄ = [b_v x_1 x_2 ... x_{n_1}]^T is the NN input vector, σ(z) = [b_w σ_1(z_1) σ_2(z_2) ... σ_{n_2}(z_{n_2})]^T is the sigmoidal activation function vector, V is the input-layer-to-hidden-layer weight matrix, W is the hidden-layer-to-output-layer weight matrix, and ν_ad is the NN output. Here n_1, n_2, and n_3 are the numbers of variable input nodes, variable hidden-layer nodes, and outputs, respectively. The input vector to the hidden-layer neurons is z = V^T x̄ = [z_1 z_2 ... z_{n_2}]^T, and the input-output map in the hidden layer is defined by the sigmoidal activation function σ_j(z_j) = 1 / (1 + e^{−a_j z_j}), j = 1, ..., n_2. A matrix containing the derivatives of the sigmoid vector is denoted σ′(z).
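A short sketch of this input-output map is given below. The layer sizes, bias values, and random weights are illustrative assumptions; the code only shows the structure ν_ad = W^T σ(V^T x̄), with the bias terms b_v and b_w occupying the first entries of x̄ and σ as defined above.

```python
import numpy as np

def shl_forward(W, V, x, a=1.0, b_v=1.0, b_w=1.0):
    """Single-hidden-layer NN: nu_ad = W^T sigma(V^T x_bar)  (illustrative sketch).

    V : (n1+1, n2) input-to-hidden weights (first row multiplies the input bias b_v)
    W : (n2+1, n3) hidden-to-output weights (first row multiplies the hidden bias b_w)
    x : (n1,) NN input
    """
    x_bar = np.concatenate(([b_v], x))            # x_bar = [b_v, x_1, ..., x_n1]^T
    z = V.T @ x_bar                               # hidden-layer pre-activations
    sigma = np.concatenate(([b_w], 1.0 / (1.0 + np.exp(-a * z))))  # sigma(z) with bias entry
    return W.T @ sigma                            # nu_ad in R^{n3}

rng = np.random.default_rng(0)
n1, n2, n3 = 2, 5, 1
V = 0.1 * rng.standard_normal((n1 + 1, n2))
W = 0.1 * rng.standard_normal((n2 + 1, n3))
print(shl_forward(W, V, np.array([0.3, -0.1])))
```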
The universal approximation property of NNs ensures that for x̄ ∈ D, where D is a bounded domain, there exist an N and an ideal set of weights (W*, V*) such that Δ = W*^T σ(V*^T x̄) + ε, with ‖ε‖ ≤ ε̄ and n_2 ≥ N. We introduce the following assumptions.

Assumption 1. All external command signals are bounded: ‖[x_c^T ẋ_c^T ẍ_c^T]‖ ≤ x̄_c.

Assumption 2. The input vector to the NN is uniformly bounded: ‖x̄‖ ≤ x̄*, x̄* > 0.

Assumption 3. The norm of the ideal NN weights is bounded: ‖Z*‖_F ≤ Z̄.

Define W̃ ≡ W − W*, Ṽ ≡ V − V*, and Z̃ ≡ diag(W̃, Ṽ). W̄ and V̄ are the upper bounds on the ideal NN weights: ‖W*‖_F < W̄, ‖V*‖_F < V̄. Using the Taylor series expansion of σ(z) around σ(z*), we get

σ* ≡ σ(z*) = σ(z − z̃) = σ(z) − σ′(z) z̃ + O(z̃²),

where O(·) represents the higher-order terms, z = V^T x̄, z* = V*^T x̄, and z̃ = z − z* = Ṽ^T x̄.

Expanding the NN/model-error cancellation error [6], we have

ν_ad − Δ = W^T σ(V^T x̄) − W*^T σ(V*^T x̄) − ε = W̃^T σ + W^T σ′ Ṽ^T x̄ + w,

where w = W*^T(σ − σ*) − W^T σ′ Ṽ^T x̄ − ε. The following bounds are useful in proving the stability of the adaptive law: ‖W*^T σ‖ ≤ (b_w + √n_2) W̄, |z_j σ′_j(z_j)| ≤ δ̄, ‖σ′ V^T x̄‖ ≤ δ̄ √n_2, and ‖σ′‖ ≤ (ā/4) √n_2.

D. Online Learning NN Adaptive Control and the Rank-1 Limitation

An appropriate use of NNs is nonlinear multidimensional curve fitting, which can be applied to approximating the error in a model f̂ of f, as described above. The NN is normally trained offline based on some form of training data, or online while controlling the plant. The online NN weight adaptation laws tap only a small amount of the adaptation potential of SHL perceptron NNs. This limitation occurs as a consequence of how the adaptive law is developed (backstepping), and results in an adaptive law of rank one, i.e., a rank of at most unity. Consider the nonnegative definite Lyapunov function candidate

L(e, W̃, Ṽ) = ½ e^T P e + ½ tr(W̃ Γ_w⁻¹ W̃^T) + ½ tr(Ṽ^T Γ_v⁻¹ Ṽ),

where Γ_w and Γ_v are positive definite learning-rate weighting matrices. One obtains the following online adaptive law from the time derivative of this Lyapunov function candidate [3]:

Ẇ = −σ(V^T x̄) r Γ_w,  (7)
V̇ = −Γ_v x̄ r W^T σ′(V^T x̄),  (8)

where r = e^T P B and P ∈ R^{2n×2n} is the positive definite solution to A^T P + P A + Q = 0.
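The following sketch evaluates one step of (7)–(8) for a single-degree-of-freedom example, solving A^T P + P A + Q = 0 with SciPy, and then confirms numerically that both weight-derivative matrices have rank at most one. Gains, learning rates, layer sizes, and the sign convention are illustrative assumptions rather than the authors' exact tuning.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch of the structure of the online adaptation laws (7)-(8).  Each
# weight-derivative matrix is the outer product of two vectors, hence rank <= 1.
n1, n2, n3 = 2, 5, 1
n = 1                                              # one degree of freedom
Kp, Kd = 4.0 * np.eye(n), 2.0 * np.eye(n)
A = np.block([[np.zeros((n, n)), np.eye(n)], [-Kp, -Kd]])
B = np.vstack([np.zeros((n, n)), np.eye(n)])
Q = np.eye(2 * n)
P = solve_continuous_lyapunov(A.T, -Q)            # solves A^T P + P A + Q = 0

rng = np.random.default_rng(1)
V = 0.1 * rng.standard_normal((n1 + 1, n2))
W = 0.1 * rng.standard_normal((n2 + 1, n3))
x_bar = np.array([1.0, 0.3, -0.1])                # [b_v, x, xdot]
z = V.T @ x_bar
sig = np.concatenate(([1.0], 1.0 / (1.0 + np.exp(-z))))
sig_prime = np.vstack([np.zeros((1, n2)),         # derivative of the sigmoid vector w.r.t. z
                       np.diag(sig[1:] * (1.0 - sig[1:]))])
e = np.array([0.2, -0.05])                        # tracking error [x_rm - x, xdot_rm - xdot]
r = e @ P @ B                                     # r = e^T P B
Gam_w, Gam_v = np.eye(n3), np.eye(n1 + 1)

W_dot = -np.outer(sig, r) @ Gam_w                 # eq. (7): outer product -> rank <= 1
V_dot = -Gam_v @ np.outer(x_bar, r @ (W.T @ sig_prime))   # eq. (8): also rank <= 1
print(np.linalg.matrix_rank(W_dot), np.linalg.matrix_rank(V_dot))
```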

Since this original backpropagation law is the basis for developing the combined online and background learning weight adaptation law, the fundamental form of this online learning NN law has been introduced here for comparison purposes.

Fact 1: Every matrix of the form A = u v^T has rank at most one, where A is an n × m matrix, u is an n-vector, and v is an m-vector [8].

Since σ is an (n_2 + 1)-dimensional column vector and rΓ_w is an n_3-dimensional row vector, Ẇ is always a matrix of rank at most one. Similarly, V̇ is also a matrix of rank at most one, because Γ_v x̄ is an (n_1 + 1)-dimensional column vector and r W^T σ′ is an n_2-dimensional row vector. Even though the online NN weight adaptation laws have matrix forms, the rank of the gradient matrices is always at most one. This implies that the adaptation performance of the NN weights might be improved by taking advantage of the remaining subspace.

III. COMBINED ONLINE AND BACKGROUND LEARNING ADAPTIVE CONTROL ARCHITECTURE

A simultaneous batch (or background) and instantaneous (or online) learning NN adaptation law is proposed. This law trains both on a set of data taken at a number of points at different times and on purely current state/control information. This is done by utilizing a combination of a priori information, recorded online data or a history stack [9], and the instantaneous current data, as in (7) and (8). Both should be applied concurrently with real-time control. They provide the same guarantees of boundedness as earlier online training approaches. The approach and theory are given in the following subsections.

A. Selection of NN Inputs for Background Learning

One reasonable choice for the background learning adaptive law is to train the neural network based on previously stored information, such as a priori stored information or state/control data stored in real time. One potential technique for conducting this training is presented here. With reference to the online learning NN adaptation law in (7)–(8) and the model tracking error dynamics in (6), we need the input vector x̄ to the NN and the corresponding model error Δ in order to train the NN. It is assumed that, for times sufficiently far in the past, the model error can be observed or otherwise measured, and that the corresponding inputs to the NN, such as parameters, states, and controls, are stored. This storage can be done for a number of data points, i = 1, 2, ..., p, where Δ_i = f_i − f̂(x_i, ẋ_i, δ_i) is the model error that will be used in the background adaptation. Regarding the selection of data points for the background learning NN adaptation, one may raise the following questions (a sketch of the corresponding bookkeeping follows this list):

1) How can we calculate the model error Δ_i for the i-th stored data point? We normally know neither the exact model nor the model error. One easily implemented method for estimating the model error Δ_i, which will be saved and used in background learning, is to utilize the residual signal r_i from the online NN adaptation. The model error Δ_i for the i-th stored data point x̄_i (i = 1, 2, ..., p) is estimated by the following equation at the time of each data storage:

Δ_i = W^T σ(V^T x̄_i) − r_i^T,  (9)

where r_i is the residual signal. The current-time r_i for background learning is obtained through simulation of the tracking error dynamics,

r_i = e_i^T P B,  (10)

where ė_i = A e_i + B [W^T σ(V^T x̄_i) − Δ_i].

2) Which data points x̄_i (i = 1, 2, ..., p) should be stored for use in the background learning adaptation? One possible choice is to store the current point whenever

(x̄ − x̄_p)^T (x̄ − x̄_p) / (x̄^T x̄) > ε_x.  (11)

This implies that new points are stored whenever the difference between the current input (states and controls) and the last stored data point is greater than some specified amount. Saving only sufficiently different data points maximizes the input-domain space of the NN mapping, which is spanned by the x̄_i (i = 1, 2, ..., p) and in which the background adaptation is performed.

3) When should a data point be removed from storage, and which point should be removed first? As a practical consideration, the total number of stored points can be fixed; when a new point is added, the oldest or least representative point of the overall space can be dropped.
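The bookkeeping implied by questions 1)–3) can be sketched as a small history-stack class: record the estimated model error with each sufficiently different input point, and discard the oldest point when the stack is full. The capacity, threshold ε_x, and class interface below are illustrative assumptions, not anything specified by the paper.

```python
import numpy as np

# Sketch of the data-point selection logic of Section III-A: store the current
# NN input x_bar (with its estimated model error) only when it differs enough
# from the last stored point, and drop the oldest point when the stack is full.
class HistoryStack:
    def __init__(self, capacity=20, eps_x=0.1):
        self.capacity, self.eps_x = capacity, eps_x
        self.points, self.errors = [], []          # stored x_bar_i and Delta_i estimates

    def maybe_store(self, x_bar, delta_est):
        if self.points:
            d = x_bar - self.points[-1]
            # normalized distance test, cf. eq. (11)
            if (d @ d) / max(x_bar @ x_bar, 1e-9) <= self.eps_x:
                return False
        if len(self.points) >= self.capacity:      # drop the oldest point first
            self.points.pop(0)
            self.errors.pop(0)
        self.points.append(x_bar.copy())
        self.errors.append(delta_est)
        return True

stack = HistoryStack()
print(stack.maybe_store(np.array([1.0, 0.3, -0.1]), 0.2))   # True: first point stored
print(stack.maybe_store(np.array([1.0, 0.31, -0.1]), 0.2))  # False: too close to last point
```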
B. Combined Online and Background Learning NN Adaptation

One approach to utilizing the stored data points in the background learning NN adaptation is to apply the same adaptive-law structure as the online learning weight adaptation law. All the stored data points x̄_i (i = 1, 2, ..., p) used in the background learning adaptation are weighted equally and summed with the online learning adaptation. The model tracking error for each stored data point is simulated by model tracking error dynamics with the same structure as those of the online learning adaptation. A projection operator [6] is employed to constrain the learned NN weight estimates inside a known convex bounded set in the weight space that contains the unknown optimal weights.

For the boundedness proof of the combined online and background learning NN adaptation law, we need a boundedness theorem for the state-dependent impulsive dynamical system in state-space form

ẋ(t) = f_c(x(t)),  x(0) = x_0,  x(t) ∉ Z,  (12)
Δx(t) = f_d(x(t)),  x(t) ∈ Z,  (13)

where x(t) ∈ D ⊆ R^n, D is an open set with 0 ∈ D, Δx(t) ≡ x(t⁺) − x(t), f_c : D → R^n is Lipschitz continuous with f_c(0) = 0, f_d : Z → R^n is continuous, and Z ⊂ D is the resetting set. We refer to the differential equation (12) as the continuous-time dynamics between resetting events, and we refer to the difference equation (13) as the resetting law.
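For intuition about the system class (12)–(13), the sketch below integrates an illustrative continuous flow f_c and applies a jump f_d whenever the state enters a resetting set Z. All of f_c, f_d, and Z here are hypothetical examples, chosen only to show the flow/reset structure that the boundedness theorem below addresses.

```python
import numpy as np

# Minimal sketch of a state-dependent impulsive system (12)-(13): integrate the
# continuous dynamics f_c until the state enters the resetting set Z, then apply
# the jump x <- x + f_d(x).  The particular f_c, f_d and Z are illustrative.
def f_c(x):
    return np.array([x[1], -2.0 * x[0] - 0.5 * x[1]])   # stable continuous flow

def in_Z(x):
    return x[0] > 1.0                                    # illustrative resetting set

def f_d(x):
    return np.array([-0.5, 0.0])                         # jump Delta x applied on Z

x, dt = np.array([1.2, 0.0]), 0.01
for _ in range(1000):
    if in_Z(x):
        x = x + f_d(x)                                   # resetting law (13)
    x = x + dt * f_c(x)                                  # continuous-time dynamics (12)
print(x)
```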

The stability of the zero solution of a state-dependent impulsive dynamical system is treated in detail by Haddad et al. [10]. We slightly extend that result for the proof of boundedness.

Theorem 1: Suppose there exists a piecewise continuously differentiable function L : D → [0, ∞) such that

L(0) = 0,  (14)
L(x) > 0,  x ∈ D \ {0},  (15)
L̇(x) = L′(x) f_c(x) < 0,  x ∉ Z, x ∈ Ω \ {0},  (16)
ΔL ≡ L(x + Δx(t)) − L(x) ≤ 0,  x ∈ Z,  (17)

where Ω ⊂ D is a compact set. Then the solution x(t) of (12), (13) is bounded outside of the compact set Ω.

Proof: Assume that the resetting times τ_k(x_0) are well defined and distinct for every trajectory of (12), (13) [10]. Before the first resetting time (0 ≤ t ≤ τ_1(x_0)), L(x(t)) can be obtained from the integral equation

L(x(t)) = L(x(0)) + ∫_0^t L′(x(τ)) f_c(x(τ)) dτ,  t ∈ [0, τ_1(x_0)].

Between two consecutive resetting times τ_k(x_0) and τ_{k+1}(x_0) (τ_k(x_0) < t ≤ τ_{k+1}(x_0), k = 1, 2, ...), we get

L(x(t)) = L(x(τ_k(x_0))) + [L(x(τ_k(x_0)) + Δx(τ_k(x_0))) − L(x(τ_k(x_0)))] + ∫_{τ_k(x_0)}^{t} L′(x(τ)) f_c(x(τ)) dτ,  t ∈ (τ_k(x_0), τ_{k+1}(x_0)].

At time t = τ_k(x_0),

L(x(τ_k(x_0))) = L(x(τ_{k−1}(x_0))) + [L(x(τ_{k−1}(x_0)) + Δx(τ_{k−1}(x_0))) − L(x(τ_{k−1}(x_0)))] + ∫_{τ_{k−1}(x_0)}^{τ_k(x_0)} L′(x(τ)) f_c(x(τ)) dτ.

By recursive substitution of this expression into the previous one, we get

L(x(t)) = L(x(τ_1(x_0))) + Σ_{i=1}^{k} [L(x(τ_i(x_0)) + Δx(τ_i(x_0))) − L(x(τ_i(x_0)))] + ∫_{τ_1(x_0)}^{t} L′(x(τ)) f_c(x(τ)) dτ,  t ∈ (τ_k(x_0), τ_{k+1}(x_0)].  (18)

Since the first interval starts at t_0 = 0 and each jump term satisfies ΔL = L(x(τ_i(x_0)) + Δx(τ_i(x_0))) − L(x(τ_i(x_0))) ≤ 0, L(x(t)) becomes

L(x(t)) ≤ L(x(0)) + ∫_0^t L′(x(τ)) f_c(x(τ)) dτ,  t ∈ (τ_k(x_0), τ_{k+1}(x_0)].

Since L̇(x) = L′(x) f_c(x) < 0 for all x ∈ Ω \ {0} and x ∉ Z, it follows that

L(x(t)) ≤ L(x(0)) for all t ≥ 0.  (19)

Hence Lyapunov stability is established. For some time s < t we obtain an expression similar to (18),

L(x(s)) = L(x(τ_1(x_0))) + Σ_{i=1}^{k−1} [L(x(τ_i(x_0)) + Δx(τ_i(x_0))) − L(x(τ_i(x_0)))] + ∫_{τ_1(x_0)}^{s} L′(x(τ)) f_c(x(τ)) dτ.

Subtracting this from (18), we have

L(x(t)) − L(x(s)) ≤ ∫_s^t L′(x(τ)) f_c(x(τ)) dτ < 0,  t > s, x ∈ Ω.  (20)

Hence L(x(t)) < L(x(s)) for all t > s, x ∈ Ω. As long as x(t) lies in the region Ω, the trajectory moves so as to reduce L(x(t)) as time increases, until x(t) leaves the region Ω. Hence x is bounded by the region outside of the compact set Ω.

We will derive the combined online and background learning NN adaptation law and prove its boundedness by using Theorem 1.

Theorem 2: Consider the system in (1) with the inverting controller in (3). The following combined online and background learning NN adaptation law guarantees the boundedness of all system signals:

Ẇ = Proj(W, ξ) Γ_w,  (21)
V̇ = Γ_v Proj(V, ζ),  (22)

where

ė_i = A e_i + B (W^T σ(V^T x̄_i) − Δ_i),  i = 1, ..., p,  (23)
ξ = −σ r − Σ_{i=1}^{p} σ(V^T x̄_i) r_i,  (24)
ζ = −x̄ r W^T σ′(V^T x̄) − Σ_{i=1}^{p} x̄_i r_i W^T σ′(V^T x̄_i),  (25)
r = e^T P B,  r_i = e_i^T P B,  i = 1, ..., p,  (26)
Δ_i = W^T σ(V^T x̄_i) − r_i^T,  i = 1, ..., p,  (27)
A^T P + P A + Q = 0.  (28)

The projection operator is defined on column vectors. For the weight matrices, the following definitions are used: Proj(W, ξ) = [Proj(W_1, ξ_1) ... Proj(W_{n_3}, ξ_{n_3})] ∈ R^{(n_2+1)×n_3} and Proj(V, ζ) = [Proj(V_1, ζ_1) ... Proj(V_{n_2}, ζ_{n_2})] ∈ R^{(n_1+1)×n_2}, where W_i, ξ_i, V_i, ζ_i are the i-th columns of the matrices W, ξ, V, ζ, respectively. The projection operator concept, illustrated in Fig. 2, is defined as

Proj(W_i, ξ_i) = ξ_i − (∇g ∇g^T / ‖∇g‖²) ξ_i g(W_i),  if g(W_i) > 0 and ∇g^T ξ_i > 0,
Proj(W_i, ξ_i) = ξ_i,  otherwise,  i = 1, ..., n_3.

Fig. 2. Projection operator. (Near the boundary of the convex set {g(W_i) ≤ 1}, the component of ξ_i along ∇g(W_i) is attenuated.)

Here we introduce a convex set with a smooth boundary, defined by Ω_{i,c} = {W_i ∈ R^{n_2+1} : g(W_i) ≤ c}, 0 ≤ c ≤ 1, where g : R^{n_2+1} → R is a smooth known function, g(W_i) = (W_i^T W_i − W̄_i²) / (ε W̄_i²), i = 1, ..., n_3. Here W̄_i is the estimated bound on the weight vector W_i, and ε > 0 denotes the projection tolerance. The gradient of the convex function is the column vector ∇g(W_i) = 2 W_i / (ε W̄_i²), i = 1, ..., n_3.
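A sketch of the column-wise projection operator follows. It uses the form of g and ∇g reconstructed above (bound W̄_i and tolerance ε) with illustrative numerical values; only the two-case logic of Proj(W_i, ξ_i) is the point.

```python
import numpy as np

# Sketch of the projection operator used in (21)-(22): when the weight column W_i
# approaches the boundary of the convex set {g(W_i) <= 1} and the raw update xi_i
# points outward, the component of xi_i along grad g is scaled back by g(W_i).
def g(Wi, W_bar=10.0, eps=0.1):
    return (Wi @ Wi - W_bar**2) / (eps * W_bar**2)

def grad_g(Wi, W_bar=10.0, eps=0.1):
    return 2.0 * Wi / (eps * W_bar**2)

def proj(Wi, xi, W_bar=10.0, eps=0.1):
    gv, dg = g(Wi, W_bar, eps), grad_g(Wi, W_bar, eps)
    if gv > 0.0 and dg @ xi > 0.0:
        return xi - (np.outer(dg, dg) / (dg @ dg)) @ xi * gv
    return xi

Wi = np.array([9.9, 1.5, 0.0])          # near the boundary ||W_i|| = W_bar
xi = np.array([1.0, 0.2, 0.0])          # raw update pointing outward
print(proj(Wi, xi))                      # outward component attenuated
```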

Proof: This theorem can be proved by introducing the following Lyapunov function candidate:

L(e, e_i, W̃, Ṽ) = ½ e^T P e + ½ tr(W̃ Γ_w⁻¹ W̃^T) + ½ tr(Ṽ^T Γ_v⁻¹ Ṽ) + ½ Σ_{i=1}^{p} e_i^T P e_i.  (29)

Remark 1: When a data point is added, the discrete change in the Lyapunov function value L is zero, since the initial condition for each tracking error dynamics (23) is set to zero (e_i(0) = 0). When a data point is dropped, the discrete change in the Lyapunov function value L is negative.

With the Lyapunov function candidate in (29), the positive definiteness conditions (14), (15) of Theorem 1 are satisfied. In addition, the nonincreasing condition (17) on the Lyapunov function value at the resetting set is guaranteed by Remark 1. Finally, we need to prove the boundedness condition (16) between the resetting points and find the compact set Ω, outside of which the bounded trajectory resides.

The boundedness of the NN weights W is shown by defining, for each column, a function of the form [6] L_{w_i} = g(W_i), i = 1, ..., n_3, whose time rate of change along (21) is

L̇_{w_i} = ∇g^T Ẇ_i = ∇g^T Proj(W_i, ξ_i) Γ_{w,i} = Γ_{w,i} ∇g^T ξ_i (1 − g(W_i)) if g(W_i) > 0 and ∇g^T ξ_i > 0, and Γ_{w,i} ∇g^T ξ_i otherwise, i = 1, ..., n_3.

Hence L̇_{w_i} ≤ 0 outside Ω_{i,1}, and W_i is bounded in the compact set Ω_{i,1}. Denote the maximum value of the norm of W as W̄* ≡ max_{W_i ∈ Ω_{i,1}, i=1,...,n_3} ‖W_i(t)‖. Similarly, V is bounded, and the maximum value of its norm is denoted V̄* ≡ max_{V_i ∈ Ω_{i,1}, i=1,...,n_2} ‖V_i(t)‖. Using these bounds on W and V, the disturbances w and w_i can be bounded as follows:

‖w‖ = ‖W*^T(σ − σ*) − W^T σ′ Ṽ^T x̄ − ε‖ ≤ (b_w + √n_2) W̄ + (ā/4) √n_2 W̄* (V̄* + V̄) x̄* + ε̄ ≡ w̄,
‖w_i‖ = ‖W*^T(σ_i − σ_i*) − W^T σ_i′ Ṽ^T x̄_i − ε_i‖ ≤ (b_w + √n_2) W̄ + (ā/4) √n_2 W̄* (V̄* + V̄) ‖x̄_i‖ + ε̄_i ≡ w̄_i,  i = 1, ..., p.

The tracking error dynamics for the current states can be expressed as

ė = Ae + B(ν_ad − Δ) = Ae + B(W̃^T σ(V^T x̄) + W^T σ′(V^T x̄) Ṽ^T x̄ + w).

Similarly, the simulated tracking error dynamics for the i-th stored data point x̄_i, i = 1, ..., p, are

ė_i = Ae_i + B(ν_ad,i − Δ_i) = Ae_i + B(W̃^T σ(V^T x̄_i) + W^T σ′(V^T x̄_i) Ṽ^T x̄_i + w_i).

Now, with these tracking error dynamics, the time derivative of the Lyapunov function candidate in (29) can be expressed as

L̇ = −½ e^T Q e + r(ν_ad − Δ) + tr(Ẇ Γ_w⁻¹ W̃^T) + tr(Ṽ^T Γ_v⁻¹ V̇) − ½ Σ_{i=1}^{p} e_i^T Q e_i + Σ_{i=1}^{p} r_i(ν_ad,i − Δ_i)
  = −½ e^T Q e − ½ Σ_{i=1}^{p} e_i^T Q e_i + r w + Σ_{i=1}^{p} r_i w_i + Σ_{i=1}^{n_3} {(W_i − W_i*)^T [Proj(W_i, ξ_i) − ξ_i]} + Σ_{i=1}^{n_2} {(V_i − V_i*)^T [Proj(V_i, ζ_i) − ζ_i]}.

By the definition of the projection operator,

Proj(W_i, ξ_i) − ξ_i = −(∇g ∇g^T / ‖∇g‖²) ξ_i g(W_i) if g(W_i) > 0 and ∇g^T ξ_i > 0, and 0 otherwise, i = 1, ..., n_3.

Since g is a convex function, ∇g always points outward. Hence, referring to Fig. 2, we get (W_i − W_i*)^T ∇g(W_i) ≥ 0. Therefore, the following quantities are always less than or equal to zero:

(W_i − W_i*)^T (Proj(W_i, ξ_i) − ξ_i) ≤ 0.  (30)

By a similar argument, since (V_i − V_i*)^T ∇h(V_i) ≥ 0, the next inequalities are always true:

(V_i − V_i*)^T (Proj(V_i, ζ_i) − ζ_i) ≤ 0.  (31)

Using (30) and (31),

L̇ ≤ −½ e^T Q e − ½ Σ_{i=1}^{p} e_i^T Q e_i + ‖r‖ w̄ + Σ_{i=1}^{p} ‖r_i‖ w̄_i
  ≤ −½ λ_min(Q) (‖e‖ − w̄ ‖P B‖ / λ_min(Q))² − ½ λ_min(Q) Σ_{i=1}^{p} (‖e_i‖ − w̄_i ‖P B‖ / λ_min(Q))² + γ,

where γ = (‖P B‖² / (2 λ_min(Q))) (w̄² + Σ_{i=1}^{p} w̄_i²). Hence

L̇(x) < 0,  x ∉ Z, x ∈ Ω \ {0},  (32)

where Ω = {x : ‖e‖ − w̄ ‖P B‖ / λ_min(Q) > √(2γ/λ_min(Q)) or ‖e_i‖ − w̄_i ‖P B‖ / λ_min(Q) > √(2γ/λ_min(Q)) for some i}.

Using Assumption 3 and the result that the columns of W(t), namely the W_i, are bounded in the compact sets Ω_{i,1}, the weight error matrix between the hidden and output layers, W̃(t), is clearly bounded. A similar argument applies to the boundedness of the weight error matrix between the input and hidden layers, Ṽ(t). Since all the conditions of Theorem 1 are satisfied, the ultimate boundedness of e, e_i, W̃, and Ṽ is established. Here, the following definitions are used:

Z = {x(t) ∈ D : (x̄(t) − x̄_i)^T (x̄(t) − x̄_i) / (x̄(t)^T x̄(t)) > ε},  (33)

where x̄(t) is the input vector to the NN,

f_d = 0 for an added point, and f_d = −e_i for a subtracted point,  (34)

x = [(vec Ṽ)^T (vec W̃)^T e^T e_1^T ... e_p^T]^T.  (35)
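The structural difference from the online-only law can be seen in a few lines of code: the combined gradients ξ and ζ of (24)–(25) are sums of one rank-one term for the current point plus one per stored point, so their rank can exceed one. The sketch below omits the projection operator and uses random illustrative data, layer sizes, and the sign conventions assumed earlier.

```python
import numpy as np

# Sketch of the combined update terms (24)-(25): summing one rank <= 1 term per
# data point lets the gradient matrices xi and zeta reach rank min(p + 1, dims).
rng = np.random.default_rng(2)
n1, n2, n3, p = 2, 5, 2, 4
V = 0.1 * rng.standard_normal((n1 + 1, n2))
W = 0.1 * rng.standard_normal((n2 + 1, n3))

def sigma(z):
    return np.concatenate(([1.0], 1.0 / (1.0 + np.exp(-z))))

def rank1_terms(x_bar, r):
    sig = sigma(V.T @ x_bar)
    sig_prime = np.vstack([np.zeros((1, n2)), np.diag(sig[1:] * (1.0 - sig[1:]))])
    xi = -np.outer(sig, r)                                   # contribution to xi
    zeta = -np.outer(x_bar, r @ (W.T @ sig_prime))           # contribution to zeta
    return xi, zeta

# Current point plus p stored points, each with its own residual r_i = e_i^T P B.
points = [(np.concatenate(([1.0], rng.standard_normal(n1))), rng.standard_normal(n3))
          for _ in range(p + 1)]
xi = sum(rank1_terms(xb, r)[0] for xb, r in points)
zeta = sum(rank1_terms(xb, r)[1] for xb, r in points)
print(np.linalg.matrix_rank(xi), np.linalg.matrix_rank(zeta))  # ranks exceed one here
```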
IV. SIMULATION RESULTS

The overall approach of combining instantaneous online and background learning is conceptually appealing, because current data dominate when the tracking error is large. Also, past data are used to train the NN in any case, so the controller skill (its performance when a problem experienced earlier is re-encountered) is improved even when no excitation or tracking error is present. It is also an obvious extension to allow the stored data to be developed from certain types of a priori information regarding the plant, such as data recorded during previous use of the plant.

To illustrate the method, a local learning problem was induced by using a combination of a relatively low learning rate and a model error that was large and strongly dependent on the plant state variables. A low-dimensional problem is used as an illustration. Note that the greater the dimension of the problem, the greater the benefit we expect from these methods due to the larger subspace, so, in a sense, this is a worst-case test. The plant is described by

ẍ = δ + sin(x) − ẋ|ẋ|,  (36)

where the last two terms, regarded as unknown, represent a significant model error. The desired dynamics are those of a linear second-order system.
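A minimal closed-loop sketch of this scenario is given below: a second-order reference model, PD compensation, approximate inversion with f̂(x, ẋ, δ) = δ, and a square-wave command, with the NN adaptation omitted so that only the baseline structure appears. The reference-model parameters, gains, command period, and the reconstructed unknown terms of (36) are all illustrative assumptions.

```python
import numpy as np

# Baseline (no NN) closed loop for the Section IV example, Euler-integrated.
dt, T = 0.01, 20.0
wn, zeta = 1.0, 0.8                                   # reference-model parameters (assumed)
Kp, Kd = 4.0, 2.0                                     # PD compensator gains (assumed)
x = xd = xrm = xrmd = 0.0
for k in range(int(T / dt)):
    t = k * dt
    xc = 1.0 if (t % 10.0) < 5.0 else -1.0            # square-wave external command
    nu_crm = wn**2 * (xc - xrm) - 2.0 * zeta * wn * xrmd
    nu_pd = Kp * (xrm - x) + Kd * (xrmd - xd)
    nu = nu_crm + nu_pd                               # eq. (5) with nu_ad = 0
    delta = nu                                        # approximate inversion, f_hat = delta
    xdd = delta + np.sin(x) - xd * abs(xd)            # plant with unmodeled terms (illustrative)
    x, xd = x + dt * xd, xd + dt * xdd
    xrm, xrmd = xrm + dt * xrmd, xrmd + dt * nu_crm
print(x, xrm)                                          # final plant and reference-model states
```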

First, a square-wave external command that repeats with a fixed period is simulated with the online learning NN adaptation alone. There is only minor improvement over the course of the trajectory shown in Fig. 3(a). The weight histories, shown in Fig. 3(b), slowly approach constant values, with a partially periodic response at the same period as the command and state response, indicating that the NN must re-learn much of the model error each time the external command is cycled. The control input history is provided in Fig. 3(c).

Fig. 3. Online NN adaptation law: (a) comparison of states x and x_rm, (b) weights V, (c) control input (torque, δ).

Simulation results with the combined online and background learning NN adaptation are presented in Figs. 4(a)–4(e). From Fig. 4(a), the combined learning NN adaptation provides better global convergence, except during the initial phase. In Fig. 4(b), note that the weights assume smooth and nearly constant values sooner.

Fig. 4. Combined NN adaptation law: (a) comparison of states x and x_rm, (b) weights V, (c) control input (torque, δ), (d) total number of stored points, (e) tracking error history.

V. CONCLUSIONS

A new adaptive neural network (NN) control concept has been proposed. The NN retains the advantages of the existing online-trained NN, which enables the system to learn over the complete plant state and control space, and adds the capability of background learning using information from a priori recorded data together with the current state. A proof of boundedness of all system signals is provided. The characteristics of the algorithm were demonstrated using a simple plant model simulation.

REFERENCES

[1] J. T. Spooner, M. Maggiore, R. Ordonez, and K. M. Passino, Stable Adaptive Control and Estimation for Nonlinear Systems: Neural and Fuzzy Approximator Techniques, John Wiley & Sons, 2002.
[2] K. Hornik, M. Stinchcombe, and H. White, "Multilayer Feedforward Networks are Universal Approximators," Neural Networks, Vol. 2, 1989, pp. 359-366.
[3] F. Lewis, "Nonlinear Network Structures for Feedback Control," Asian Journal of Control, Vol. 1, No. 4, December 1999.
[4] K. Narendra and A. Annaswamy, "A New Adaptive Law for Robust Adaptation Without Persistent Excitation," IEEE Transactions on Automatic Control, Vol. 32, No. 2, Feb. 1987, pp. 134-145.
[5] B. Peterson and K. Narendra, "Bounded Error Adaptive Control," IEEE Transactions on Automatic Control, Vol. 27, No. 6, Dec. 1982, pp. 1161-1168.
[6] N. Kim, Improved Methods in Neural Network-Based Adaptive Output Feedback Control, with Applications to Flight Control, Ph.D. Thesis, Georgia Institute of Technology, 2003.
[7] E. N. Johnson, Limited Authority Adaptive Flight Control, Ph.D. Thesis, Georgia Institute of Technology, 2000.
[8] G. Strang, Linear Algebra and Its Applications, 3rd Ed., Harcourt College Publishers, 1988.
[9] P. M. Mills, A. Y. Zomaya, and M. O. Tade, Neuro-Adaptive Process Control: A Practical Approach, Wiley, 1996.
[10] W. M. Haddad and V. Chellaboina, Nonlinear Dynamical Systems and Control, preprint.
