AT2 PROBLEM SET #3


Problem 1

We want to simulate a neuron with an autapse. We integrate the system with the Euler method and simulate it for different initial values.

(a) Neuron's activation function

from pylab import *

f = [50*(1 + tanh(x)) for x in linspace(-5, 5, 1000)]   # f(x) sampled on [-5, 5]
plot(linspace(-5, 5, 1000), f)   # plot the curve
title("Neuron's activation function")
ylabel('firing rate [Hz]')
xlabel('current [nA]')
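In equation form, the model used throughout this problem (implied by the derivative computed in part (b)) is

    dx/dt = -x + f(w*x + I),   f(x) = 50*(1 + tanh(x))

so the firing rate is confined to (0, 100) Hz, and parts (b)-(d) integrate this ODE with the forward Euler step

    x_{n+1} = x_n + dt*(-x_n + f(w*x_n + I))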

The neuron's activation increases with the injected current, but not linearly. For weak input (below about -3 nA) the neuron does not spike. The firing rate then rises, with the steepest increase around zero current (the curve is the hyperbolic tangent used to define the neuron's dynamics). Finally we observe saturation: for currents above about 3 nA the firing rate is close to its maximum of 100 Hz.

(b) Plot of dx/dt

w = 0.04    # strength of the synaptic connection
I = -2      # input current [nA]
rates = linspace(0, 120, 10000)   # range of firing-rate values x

def f(x):   # sigmoidal activation function
    return 50*(1 + tanh(x))

dx = [-x + f(w*x + I) for x in rates]   # dx/dt for each value of x
plot(rates, dx)
title("dx/dt as a function of the neuron's firing rate")
ylabel('dx/dt')
xlabel('firing rate [Hz]')

On this plot, the zero-crossings are the points where dx/dt vanishes. These are the fixed points of the dynamical system, since x remains unchanged there.
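Equivalently, a fixed point x* satisfies x* = f(w*x* + I). As a minimal numeric cross-check, assuming scipy is available (it is not used in the original code), one can bracket the three zero-crossings seen on the plot:

from numpy import tanh
from scipy.optimize import brentq

w, I = 0.04, -2

def f(x):
    return 50*(1 + tanh(x))

def g(x):   # right-hand side of dx/dt
    return -x + f(w*x + I)

for lo, hi in [(0, 10), (40, 60), (90, 100)]:   # brackets around the sign changes
    print(brentq(g, lo, hi))   # roughly 2, exactly 50, and roughly 98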

(c) Simulation of the system

w = 0.04        # strength of the synaptic connection
I = -2          # input current [nA]
dt = 0.01       # time step
sim_time = 50   # simulation time

def f(x):   # sigmoidal activation function
    return 50*(1 + tanh(x))

for x0 in [49, 50, 51]:   # simulate for different initial values x0
    x = [x0]    # state trajectory
    dx = [0]    # increments, recorded to plot the orbit
    for t in arange(dt, sim_time, dt):
        x.append(x[-1] + (-x[-1] + f(w*x[-1] + I))*dt)   # Euler step
        dx.append(dt*(-x[-1] + f(w*x[-1] + I)))
    plot(arange(0, sim_time, dt), x)
    title('Simulation of x over time for x0 = ' + str(x0))
    ylabel('firing rate [Hz]')
    xlabel('time [s]')
    plot(x, dx, 'o')
    title('Trajectory of x for x0 = ' + str(x0))
    ylabel("x'")
    xlabel('x')

We simulate the system for three initial conditions:

- x(0) = 49. The point 49 is not a fixed point: the trajectory is repelled by 50 and converges to the fixed point near zero. The orbit shows that x decreases faster and faster just after leaving 50; then,

as it approaches zero, the convergence slows down. The speed of convergence peaks near x = 30. This particular velocity profile is due to the hyperbolic tangent in the computation of the derivative.
- x(0) = 50. Here 50 is a fixed point, so the firing rate stays the same throughout the simulation and the orbit does not move in the trajectory plane.
- x(0) = 51. With an initial condition of 51 Hz, the firing rate converges to the fixed point near 100 Hz. As for x(0) = 49, the speed increases and then decreases along the trajectory, again because of the hyperbolic tangent.

(d) Noise in the system
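For reference, the discretized update used below adds the noise as sigma*xi_n*dt, with xi_n drawn from N(0, 1):

    x_{n+1} = x_n + dt*(-x_n + f(w*x_n + I) + sigma*xi_n)

As a side note, a standard Euler-Maruyama discretization of white noise would scale the noise term by sqrt(dt) rather than dt, so sigma here is a noise amplitude tied to this particular step size rather than a diffusion coefficient; the code follows the update above.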

import random

w = 0.04        # strength of the synaptic connection
I = -2          # input current [nA]
dt = 0.01       # time step
sim_time = 50   # simulation time

def f(x):   # sigmoidal activation function
    return 50*(1 + tanh(x))

for sigma in [5, 80]:           # noise amplitudes
    for x0 in [49, 50, 51]:     # simulate for different initial values x0
        x = [x0]
        for t in arange(dt, sim_time, dt):
            # Euler step with an additive Gaussian noise term of amplitude sigma
            x.append(x[-1] + (-x[-1] + f(w*x[-1] + I) + sigma*random.gauss(0, 1))*dt)
        plot(arange(0, sim_time, dt), x)
        title('Simulation of x over time for x0 = ' + str(x0) + ' with noise sigma = ' + str(sigma))
        ylabel('firing rate [Hz]')
        xlabel('time [s]')

[Figures: simulations for x0 = 49, 50 and 51, with sigma = 5 (left column) and sigma = 80 (right column)]

For a weak sigma the noise variance is small, so we obtain the same curves as in question (c), with small fluctuations due to the white noise. For sigma equal to 80 the fluctuations are large: the white noise dominates, and in this run the signal converges to zero in all three cases, but it could equally well have converged to 100 Hz. The outcome depends on which side of 50 Hz the noisy trajectory happens to sit early in the simulation, which is random because of the white noise.
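To quantify this, one could rerun the noisy simulation many times and count how often the trajectory ends up near each stable fixed point. A minimal sketch, not part of the original code (the helper final_state is mine):

import random
from numpy import tanh, arange

w, I, dt, sim_time = 0.04, -2, 0.01, 50

def f(x):
    return 50*(1 + tanh(x))

def final_state(x0, sigma):
    # same Euler-plus-noise update as above, returning only the end state
    x = x0
    for t in arange(dt, sim_time, dt):
        x += (-x + f(w*x + I) + sigma*random.gauss(0, 1))*dt
    return x

trials = 200
high = sum(final_state(50, 80) > 50 for _ in range(trials))
print('fraction ending near 100 Hz:', high / trials)

By the symmetry of the system around 50 Hz, this fraction should be close to 0.5 when starting exactly at x0 = 50.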

Problem 2

We now simulate a two-neuron network: each neuron inhibits the other, and both discharge spontaneously.

(a) Nullclines of the system

w = -0.1    # strength of the synaptic connection
I = 5       # input current [nA]
dt = 0.1    # time step [ms]
rates = linspace(0.001, 99.999, 1000)   # range of x1 values

def f(x):   # sigmoidal activation function
    return 50*(1 + tanh(x))

# By symmetry the two nullclines are the same curve: x2 = f(w*x1 + I) and x1 = f(w*x2 + I)
nullcline = [f(w*r + I) for r in rates]

# Direction vectors for the quiver plot
L = 100
Dt = 5
u, v = [], []
for i in range(0, L+1, Dt):
    u.append([])
    v.append([])
    for j in range(0, L+1, Dt):
        u[i//Dt].append((-i + f(w*j + I))*dt)
        v[i//Dt].append((-j + f(w*i + I))*dt)

plot(rates, nullcline, nullcline, rates, rates, [50 for r in rates], 'k--')
title('Nullclines of the system')
ylabel('x2')
xlabel('x1')
axis([-0.5, 100.5, -0.5, 100.5])
t = range(0, L+1, Dt)
z = range(0, L+1, Dt)
quiver(array(t), array(z), array(v), array(u), angles='xy', scale_units='xy', scale=1.5)
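In equation form, the nullclines plotted above are the curves where one of the two derivatives vanishes:

    dx1/dt = 0  <=>  x1 = f(w*x2 + I)
    dx2/dt = 0  <=>  x2 = f(w*x1 + I)

and the fixed points of the system are the intersections of these two curves.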

There seem to be three fixed points in this system: (0, 100), (100, 0) and (50, 50). Note that the fixed point (50, 50) is attractive when the trajectory lies on the diagonal (x1 = x2) but repulsive otherwise.
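This can be checked by linearization. Since dx1/dt = -x1 + f(w*x2 + I) (and symmetrically for x2), the Jacobian has -1 on the diagonal and w*f'(.) off the diagonal, with f'(x) = 50/cosh(x)^2. At (50, 50) we have w*50 + I = 0 and f'(0) = 50, so a quick numeric check (numpy only, not in the original code) gives:

from numpy import array, cosh
from numpy.linalg import eigvals

w, I = -0.1, 5

def fprime(x):   # derivative of 50*(1 + tanh(x))
    return 50/cosh(x)**2

x1 = x2 = 50     # w*50 + I = 0, so fprime(0) = 50
J = array([[-1, w*fprime(w*x2 + I)],
           [w*fprime(w*x1 + I), -1]])
print(eigvals(J))   # approximately [4, -6]

The eigenvector (1, 1) (the diagonal) carries the eigenvalue -6, so (50, 50) attracts along the diagonal, while (1, -1) carries +4, so it repels transversally: a saddle point, as observed.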

(b) Simulation of the system

w = -0.1        # strength of the synaptic connection
I = 5           # input current [nA]
dt = 0.1        # time step [ms]
sim_time = 10   # simulation time
rates = linspace(0.001, 99.999, 1000)   # range of x1 values

def f(x):   # sigmoidal activation function
    return 50*(1 + tanh(x))

nullcline = [f(w*r + I) for r in rates]
plot(rates, nullcline, 'y', nullcline, rates, 'y')   # plot the two nullclines

initial_values = [(60, 80), (99, 99), (20, 5)]   # initial values for (x1, x2)
for a, b in initial_values:
    x1s, x2s = [a], [b]
    for t in arange(dt, sim_time, dt):
        x1 = x1s[-1] + (-x1s[-1] + f(w*x2s[-1] + I))*dt   # Euler step for x1
        x2 = x2s[-1] + (-x2s[-1] + f(w*x1s[-1] + I))*dt   # Euler step for x2
        x1s.append(x1)
        x2s.append(x2)
    plot(x1s, x2s, 'o')   # trajectory in the (x1, x2) plane

# Direction vectors for the quiver plot, as in (a)
L = 100
Dt = 5
u, v = [], []
for i in range(0, L+1, Dt):
    u.append([])
    v.append([])
    for j in range(0, L+1, Dt):
        u[i//Dt].append((-i + f(w*j + I))*dt)
        v[i//Dt].append((-j + f(w*i + I))*dt)

title('Simulation of the system')
ylabel('x2')
xlabel('x1')
axis([-0.5, 100.5, -0.5, 100.5])
t = range(0, L+1, Dt)
z = range(0, L+1, Dt)
quiver(array(t), array(z), array(v), array(u), angles='xy', scale_units='xy', scale=1.5)

Here the initial conditions are x(0) = (60, 80) (blue), x(0) = (99, 99) (green) and x(0) = (20, 5) (red). If the initial condition lies below the diagonal x1 = x2, the system converges to (100, 0): neuron x1 fires strongly while x2 stops firing. If the initial condition lies above the diagonal, the opposite occurs: x2 fires at about 100 Hz and x1 stops firing. Finally, if x1 = x2 at the start, the system converges to (50, 50): the two neurons fire at the same rate.

(c) Matrix version
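In matrix form the same dynamics read

    dx/dt = -x + f(W*x + I),   W = [[0, w], [w, 0]],   I = (I1, I2)^T

with f applied elementwise; the code below implements this vectorized update.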

w = -0.1        # strength of the synaptic connection
I1 = 5          # input current to neuron 1 [nA]
I2 = 5          # input current to neuron 2 [nA]
rates = linspace(0.001, 99.999, 1000)   # range of x1 values
dt = 0.1        # time step [ms]
sim_time = 10   # simulation time

def f(x):   # sigmoidal activation function, applied elementwise
    return 50*(1 + tanh(x))

W = array([[0, w], [w, 0]])   # weight matrix
I = array([[I1], [I2]])       # input vector

# Nullclines, computed with the matrix form
nullcline = [f(dot(W, array([[r], [r]])) + I)[1][0] for r in rates]
plot(rates, nullcline, 'y', nullcline, rates, 'y')   # plot the two nullclines

initial_values = [(60, 80), (99, 99), (20, 5)]   # initial values for (x1, x2)
for a, b in initial_values:
    x = [array([[float(a)], [float(b)]])]   # column-vector state
    for t in arange(dt, sim_time, dt):
        x.append(x[-1] + (-x[-1] + f(dot(W, x[-1]) + I))*dt)   # vectorized Euler step
    plot([v[0][0] for v in x], [v[1][0] for v in x], '.')
title('Simulation of the system')
ylabel('x2')
xlabel('x1')
axis([-0.5, 100.5, -0.5, 100.5])

Problem 3

We construct a Hopfield network in which every neuron is connected to every other neuron. With this type of network we can store patterns, and the system can converge to them.

(a) An example of a pattern

import random

N = 64
p = array([-1 if random.randint(0, 1) == 0 else 1 for i in range(N)])   # random +/-1 vector of length 64
p = p.reshape((8, 8))   # reshape as an 8x8 matrix
figure()
pcolor(p)
cbar = colorbar(ticks=[-1, 0, 1])
cbar.ax.set_yticklabels(['-1', '0', '1'])   # vertically oriented colorbar
title('Pattern matrix')

A blue square corresponds to an entry of -1 and a red square to an entry of +1.

(b) Evolution with one pattern stored in the weight matrix
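The weight matrices below follow the standard Hebbian outer-product storage rule (written here for patterns stored as row vectors, matching the code):

    W = (1/N) * sum_mu p_mu^T p_mu

with a single pattern p in (b), and the sum of the p and q terms in (c) and (d).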

N = 64
Ns = int(sqrt(N))   # 8
dt = 0.1
T = 3.0
sigma = 0.1

p = sign(rand(1, Ns) - 0.5)   # random +/-1 row vector of length 8 (centered values)
W = (1.0/N)*p.T*p             # weight matrix storing the pattern p.T*p

figure()
pcolor(p.T*p)
cbar = colorbar(ticks=[-1, 0, 1])
cbar.ax.set_yticklabels(['-1', '0', '1'])   # vertically oriented colorbar
title('p pattern')

# Initial condition: the stored pattern plus a random perturbation, flattened to a 64-vector
x = p.T*p + 2*rand(Ns, Ns)*rand(Ns, Ns)
x = reshape(x, N)
figure()
pcolor(reshape(x, (Ns, -1)))
cbar = colorbar(ticks=[min(x), 0, max(x)])
cbar.ax.set_yticklabels([str(round(min(x), 2)), '0', str(round(max(x), 2))])
title('Evolution of x at the beginning')

for j in range(1, int(T/dt) + 1):   # run the simulation
    # Euler step of x' = -x + sign(W x), plus Gaussian noise of amplitude sigma
    x += (-x + sign(reshape(dot(W, reshape(x, (Ns, Ns))), N)) + sigma*randn(size(x)))*dt
    if j % int(T/dt/10) == 0:   # plot only about 10 snapshots per simulation
        figure()
        pcolor(reshape(x, (Ns, -1)))
        cbar = colorbar(ticks=[min(x), 0, max(x)])
        cbar.ax.set_yticklabels([str(round(min(x), 2)), '0', str(round(max(x), 2))])
        title('Evolution of x at time ' + str(j*dt))
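A standard diagnostic for Hopfield dynamics, not computed in the original code, is the energy of the binarized state, which should tend to decrease as the network settles onto a stored pattern. A minimal sketch adapted to the matrix-shaped state used above (the helper name is mine):

from numpy import sign, reshape, dot

def energy(x, W, Ns=8):
    # Hopfield energy -1/2 * s^T W s of the binarized state, summed over the columns of the 8x8 state
    s = sign(reshape(x, (Ns, Ns)))
    return -0.5*(s*dot(W, s)).sum()

# Usage after the loop above: print(energy(x, W))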

We see that the initial pattern, which starts near p.T*p, converges to p.T*p.

(c) Evolution with two patterns stored in the weight matrix

N = 64
Ns = int(sqrt(N))   # 8
dt = 0.1
T = 10.0
sigma = 0.1

p = sign(rand(1, Ns) - 0.5)   # random +/-1 row vector of length 8 (centered values)
q = sign(rand(1, Ns) - 0.5)   # a second random +/-1 row vector
W = (1.0/N)*(q.T*q + p.T*p)   # weight matrix storing both patterns

figure()
pcolor(q.T*q)
cbar = colorbar(ticks=[-1, 0, 1])
cbar.ax.set_yticklabels(['-1', '0', '1'])   # vertically oriented colorbar
title('q matrix')

figure()
pcolor(p.T*p)
cbar = colorbar(ticks=[-1, 0, 1])
cbar.ax.set_yticklabels(['-1', '0', '1'])   # vertically oriented colorbar
title('p matrix')

figure()
pcolor(W)
cbar = colorbar(ticks=[min(W[0]), 0, max(W[0])])
cbar.ax.set_yticklabels([str(round(min(W[0]), 2)), '0', str(round(max(W[0]), 2))])
title('W matrix')

# Initial condition: near the stored pattern q.T*q, flattened to a 64-vector
x = q.T*q + 2*rand(Ns, Ns)*rand(Ns, Ns)
x = reshape(x, N)
figure()
pcolor(reshape(x, (Ns, -1)))
cbar = colorbar(ticks=[min(x), 0, max(x)])

cbar.ax.set_yticklabels([str(round(min(x), 2)), '0', str(round(max(x), 2))])
title('Evolution of x at the beginning')

for j in range(1, int(T/dt) + 1):   # run the simulation
    x += (-x + sign(reshape(dot(W, reshape(x, (Ns, Ns))), N)) + sigma*randn(size(x)))*dt
    if j % int(T/dt/10) == 0:   # plot only about 10 snapshots per simulation
        figure()
        pcolor(reshape(x, (Ns, -1)))
        cbar = colorbar(ticks=[min(x), 0, max(x)])
        cbar.ax.set_yticklabels([str(round(min(x), 2)), '0', str(round(max(x), 2))])
        title('Evolution of x at time ' + str(j*dt))

The matrix W superimposes the two stored patterns, so each connection weight combines the contributions of p and q. If we start from a state near one of the stored patterns, the system converges to it.
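One way to quantify which pattern the network settles into is to track the normalized overlap between the state and each stored pattern, a standard Hopfield diagnostic (the helper name is mine):

from numpy import sign, reshape, dot

def overlap(x, pattern):
    # normalized overlap in [-1, 1] between the binarized state and a stored pattern
    a = sign(reshape(x, -1))
    b = reshape(pattern, -1)
    return dot(a, b)/len(b)

# Usage after the simulation above; a value near +/-1 means convergence to that pattern:
# print(overlap(x, p.T*p), overlap(x, q.T*q))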

(d) General rule

In Hertz, John A., Anders S. Krogh, and Richard G. Palmer, Introduction to the Theory of Neural Computation, the authors show that a Hopfield network can store about 0.138 patterns per neuron: with 1000 neurons, roughly 138 patterns. Here we have 64 neurons, so we can store about 0.138 * 64, i.e. 8 patterns.

In the code, if I replace sign by tanh, the pattern converges with less precision and the whole matrix drifts towards zero.

N = 64
Ns = int(sqrt(N))   # 8
dt = 0.1
T = 10.0
sigma = 0.1

p = sign(rand(1, Ns) - 0.5)   # random +/-1 row vector of length 8 (centered values)
q = sign(rand(1, Ns) - 0.5)   # a second random +/-1 row vector
W = (1.0/N)*(q.T*q + p.T*p)   # weight matrix storing both patterns

figure()
pcolor(q.T*q)
cbar = colorbar(ticks=[-1, 0, 1])
cbar.ax.set_yticklabels(['-1', '0', '1'])   # vertically oriented colorbar
title('q matrix')

figure()
pcolor(p.T*p)
cbar = colorbar(ticks=[-1, 0, 1])
cbar.ax.set_yticklabels(['-1', '0', '1'])   # vertically oriented colorbar

title('p matrix')

figure()
pcolor(W)
cbar = colorbar(ticks=[min(W[0]), 0, max(W[0])])
cbar.ax.set_yticklabels([str(round(min(W[0]), 2)), '0', str(round(max(W[0]), 2))])
title('W matrix')

# Initial condition: near the stored pattern q.T*q, flattened to a 64-vector
x = q.T*q + 2*rand(Ns, Ns)*rand(Ns, Ns)
x = reshape(x, N)
figure()
pcolor(reshape(x, (Ns, -1)))
cbar = colorbar(ticks=[min(x), 0, max(x)])
cbar.ax.set_yticklabels([str(round(min(x), 2)), '0', str(round(max(x), 2))])
title('Evolution of x at the beginning')

for j in range(1, int(T/dt) + 1):   # run the simulation
    # same update as in (c), but with tanh in place of sign (and no noise term)
    x += (-x + tanh(reshape(dot(W, reshape(x, (Ns, Ns))), N)))*dt
    if j % int(T/dt/10) == 0:   # plot only about 10 snapshots per simulation
        figure()
        pcolor(reshape(x, (Ns, -1)))
        cbar = colorbar(ticks=[min(x), 0, max(x)])
        cbar.ax.set_yticklabels([str(round(min(x), 2)), '0', str(round(max(x), 2))])
        title('Evolution of x at time ' + str(j*dt))
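A common way to relate the two update rules is to give tanh a gain parameter beta, since sign(x) is the limit of tanh(beta*x) as beta grows. With beta = 1 the entries of W*x are small (W scales as 1/N), so tanh(W*x) stays well below +/-1 and the state shrinks towards zero, consistent with the behaviour observed above. A sketch of the gained update (beta is a hypothetical knob, not part of the original code):

from numpy import tanh, dot, reshape

beta = 20   # gain; the sign rule is recovered as beta -> infinity

def gained_step(x, W, dt, Ns=8, N=64):
    # Euler step of x' = -x + tanh(beta * W x); the reshaping matches the code above
    return x + (-x + tanh(beta*reshape(dot(W, reshape(x, (Ns, Ns))), N)))*dt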