
A stochastic calculus approach of Learning in Spike Models

Adriana Climescu-Haulica
Laboratoire de Modélisation et Calcul
Institut d'Informatique et Mathématiques Appliquées de Grenoble
51, rue des Mathématiques, 38041 Grenoble cedex 9, France
email: adriana.climescu@imag.fr

Abstract

Some aspects of Hebb's rule are formalized by means of an SRM_0 model in which refractoriness and external input are neglected. Using tools from stochastic calculus, it is shown explicitly that Hebb's rule is a 0-1 rule based on a local learning window. Assuming knowledge of the membrane potential, learning inequalities are proposed, reflecting the existence of a delay within which learning can be accomplished. The model allows a natural space-time generalization.

1 The model

We consider a neuron i that receives spike input from N neurons. The membrane potential u_i(t) is described by an SRM_0 model (Gerstner):

    u_i(t) = \frac{1}{N} \sum_{j=1}^{N} \int_0^{\infty} w_{ij}(t-s)\, \epsilon(s)\, S_j(t-s)\, ds

where refractoriness and external input are neglected. Here

- w_{ij}(t) is the weight of the (i, j) synapse at time t,
- \epsilon(s) is the response kernel modelling the postsynaptic potential,
- S_j is defined as S_j(t) = \sum_{f \in F_j} \delta(t - t_j^f), where F_j is the set of spiking times of neuron j.

Using the change of variable t - s = x, the membrane potential is expressed as a stochastic integral with respect to the Poisson process \Pi_j(t) = \sum_{f \in F_j} 1_{\{t \ge t_j^f\}}:

    u_i(t) = \frac{1}{N} \sum_{j=1}^{N} \int_0^t w_{ij}(x)\, \epsilon(t-x)\, d\Pi_j(x)
           = \frac{1}{N} \sum_{j=1}^{N} \sum_{f \in F_j} w_{ij}(t_j^f)\, \epsilon(t - t_j^f)
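For intuition, the sum-over-spikes form of u_i(t) is easy to evaluate numerically. The following Python sketch is not part of the paper: it assumes an exponential response kernel \epsilon(s) = e^{-s/\tau}, constant synaptic weights, and homogeneous Poisson input spike trains; all names and parameter values (`eps`, `spike_trains`, `rate`, `tau`) are illustrative choices.

```python
import numpy as np

# Minimal numerical sketch of the SRM_0 membrane potential
#   u_i(t) = (1/N) * sum_j sum_{t_j^f <= t} w_ij(t_j^f) * eps(t - t_j^f)
# Assumptions (not from the paper): exponential response kernel,
# constant synaptic weights, homogeneous Poisson input spike trains.

rng = np.random.default_rng(0)

N    = 50        # number of presynaptic neurons
T    = 1.0       # observation window [0, T] in seconds
rate = 20.0      # Poisson firing rate of each input (Hz)
tau  = 0.02      # time constant of the response kernel (s)
w    = rng.uniform(0.5, 1.5, size=N)     # constant weights w_ij

def eps(s, tau=tau):
    """Postsynaptic response kernel eps(s), zero for s < 0."""
    return np.where(s >= 0.0, np.exp(-s / tau), 0.0)

# Homogeneous Poisson spike trains: F_j = set of spike times of neuron j
spike_trains = [np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
                for _ in range(N)]

def u_i(t):
    """Membrane potential at time t (sum-over-spikes form of the integral)."""
    total = 0.0
    for j, spikes in enumerate(spike_trains):
        past = spikes[spikes <= t]
        total += np.sum(w[j] * eps(t - past))
    return total / N

print("u_i(T/2) =", u_i(T / 2))
print("u_i(T)   =", u_i(T))
```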

Figure 1: N spike neurons as input.

2 0-1 Hebbian Learning rule

Proposition. There is a vector-valued stochastic process m(t) = (m_1(t), m_2(t), ..., m_N(t)) such that the conditional probability

    P\big( \Pi_j(t) \in A \mid u_i[0,t] = h[0,t] \big) = \lambda_A\big( m_j(h[0,t]) \big)
      = \begin{cases} 1 & \text{if } m_j(h[0,t]) \in A \\ 0 & \text{if } m_j(h[0,t]) \notin A \end{cases}

where each component of the stochastic process m(t) is constructed by the relation

    m_j(h[0,t]) = \sum_{k=1}^{\infty} \frac{1}{\lambda_k^j} \big\langle H_j(I[0,t]),\, e_k^j \big\rangle_{L^2[\tilde{\nu}_j(t)]} \big\langle h[0,t],\, e_k^j \big\rangle_{L^2[\tilde{\nu}_j(t)]}

with

- H_j = \sqrt{R_u^j} the square-root functional associated with the covariance functional of the membrane potential, whose kernel is

      R_u^j(h)(t) = \int_0^t C_u(t,x)\, h(x)\, d\tilde{\nu}_j(x) = H_j H_j^*(h)(t),
      C_u(t,x) = \int_0^{t \wedge x} w_{ij}(s)\, \epsilon(t-s)\, w_{ij}(s)\, \epsilon(x-s)\, d\nu_j(s),

- \lambda_k^j and e_k^j the eigenvalues and eigenvectors of the covariance functional R_u^j.
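The eigenvalues \lambda_k^j and eigenvectors e_k^j entering m_j can be approximated by discretizing the covariance kernel C_u on a time grid. The sketch below is an illustration, not the paper's construction: it assumes d\nu_j(s) \approx rate · ds for a homogeneous Poisson input, a constant weight w_ij, the same exponential kernel as above, and replaces the integral operator by a matrix eigendecomposition.

```python
import numpy as np

# Sketch: discretized covariance kernel of the membrane potential and its
# eigendecomposition (the lambda_k^j, e_k^j entering m_j).  Assumptions
# (not from the paper): d nu_j(s) ~ rate * ds for a homogeneous Poisson
# input, exponential kernel eps, constant weight w_ij; grid quadrature
# replaces the integral.

rate, tau, w_ij, T, M = 20.0, 0.02, 1.0, 1.0, 200
ts = np.linspace(0.0, T, M)
ds = ts[1] - ts[0]

def eps(s, tau=tau):
    return np.where(s >= 0.0, np.exp(-s / tau), 0.0)

# C_u(t, x) = int_0^{min(t,x)} w(s) eps(t-s) w(s) eps(x-s) d nu_j(s)
C = np.zeros((M, M))
for a, t in enumerate(ts):
    for b, x in enumerate(ts):
        s = ts[ts <= min(t, x)]
        C[a, b] = np.sum(w_ij * eps(t - s) * w_ij * eps(x - s)) * rate * ds

# Eigenvalues / eigenvectors of the (discretized) covariance functional,
# sorted in decreasing order.
lam, e = np.linalg.eigh(C * ds)      # quadrature weight folded in
lam, e = lam[::-1], e[:, ::-1]
print("largest eigenvalues:", lam[:5])
```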

Definition. An interval W_L = [n - D, n] is a counting window learning set associated with the spike train S_j if there is a spike time t_i^f of neuron i such that

    \Pi_j(t_i^f) = n   and   \Pi_j(t_i^f) - \Pi_j(t_i^f - t) = D - 1.

Hebb's rule becomes

    P\big( \Pi_j(t) \in W_L \mid u_i[0,t] = h[0,t] \big) = \begin{cases} 1 & \text{if } m_j(h[0,t]) \in W_L \\ 0 & \text{if } m_j(h[0,t]) \notin W_L \end{cases}

With an a priori interpretation, the Hebb rule is a detection criterion: among the N neurons firing into neuron i, which ones become wired with i?

- If m_j(u_i[0,t]) \in W_L then the synapse (i, j) is strengthened.
- If m_j(u_i[0,t]) \notin W_L then the synapse (i, j) vanishes.
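The decision itself is a simple membership test against the counting window. In the hypothetical sketch below, the raw spike count \Pi_j(t_i^f) stands in for m_j(u_i[0,t]); this substitution is made only to keep the example short, whereas the paper builds m_j from the eigen-expansion of the covariance functional of the membrane potential.

```python
import numpy as np

# Sketch of the 0-1 Hebbian decision: a synapse (i, j) is strengthened
# exactly when the statistic m_j falls in the counting window W_L = [n - D, n].
# Assumption (not from the paper): the raw spike count Pi_j(t_i^f) is used
# as a stand-in for m_j(u_i[0, t]).

def hebb_01_rule(m_values, n, D):
    """Indices j whose synapse is strengthened (m_j inside [n - D, n])."""
    return [j for j, m in enumerate(m_values) if n - D <= m <= n]

rng = np.random.default_rng(1)
N, T, rate = 50, 1.0, 20.0
spike_trains = [np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
                for _ in range(N)]

t_f_i = 0.5                                               # hypothetical spike time of neuron i
counts = [int(np.sum(s <= t_f_i)) for s in spike_trains]  # Pi_j(t_i^f)
n, D = max(counts), 3                                     # window W_L = [n - D, n]
print("strengthened synapses:", hebb_01_rule(counts, n, D))
```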

3 Learning Inequalities

With an a posteriori interpretation, the Hebb rule gives access to the synaptic weights modified by a learning process. Assume that the synapse (i, j) was strengthened. Then

    n - D \le m_j(u_i[0,t]) \le n    (1)

with

    \Pi_j(t_i^f) = n   and   \Pi_j(t_i^f) - \Pi_j(t_i^f - t) = D - 1.

Since the stochastic process m_j(t) can be identified with a Poisson process, the study of the inequalities (1) reduces to the study of the equation

    \sum_{k=1}^{\infty} \frac{1}{\lambda_k^j} \big\langle H_j(I[0,t]),\, e_k^j \big\rangle_{L^2[\tilde{\nu}_j(t)]} \big\langle u_i[0,t],\, e_k^j \big\rangle_{L^2[\tilde{\nu}_j(t)]} = l

with l an integer in the interval [n - D, n]. In order to determine w_{ij}(t) from this equation, an extraction algorithm is needed. The learning inequalities reflect the existence of a delay within which learning can be accomplished.
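The paper does not specify the extraction algorithm. As an elementary illustration only, under strong simplifying assumptions that are not made in the paper (constant weights, fully observed membrane potential, exponential kernel), the sum-over-spikes form of u_i(t) from Section 1 is linear in the weights, so they can be recovered by least squares:

```python
import numpy as np

# Elementary illustration, NOT the paper's extraction algorithm.
# Assumptions: constant weights w_ij, fully observed membrane potential u_i,
# exponential kernel.  Then u_i(t) = (1/N) sum_j w_ij * phi_j(t) with
# phi_j(t) = sum_f eps(t - t_j^f), linear in the weights.

rng = np.random.default_rng(2)
N, T, rate, tau = 20, 1.0, 30.0, 0.02

def eps(s):
    return np.where(s >= 0.0, np.exp(-s / tau), 0.0)

spike_trains = [np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
                for _ in range(N)]
w_true = rng.uniform(0.5, 1.5, N)

ts = np.linspace(0.0, T, 500)
# Design matrix: Phi[a, j] = (1/N) * sum_f eps(ts[a] - t_j^f)
Phi = np.stack([np.sum(eps(ts[:, None] - s[None, :]), axis=1) / N
                for s in spike_trains], axis=1)
u_obs = Phi @ w_true                     # "observed" membrane potential

w_hat, *_ = np.linalg.lstsq(Phi, u_obs, rcond=None)
print("max weight error:", np.max(np.abs(w_hat - w_true)))
```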

4 Generalization to a space-time approach

In this framework, the membrane potential can be modelled as a stochastic integral with respect to a space-time Poisson process:

    u_i(t, x) = \int_{[0,t] \times D_x} w_{ij}(s, y)\, \epsilon(t - s, x - y)\, d\Pi_j(s, y)

with

    \Pi_j(t, x) = \sum_{f \in F_j} \delta(t - t_j^f)\, \delta(x - x_j^f).

Definition. An interval W_L = [n - D, n] is a counting window learning set associated with the spike train S_j if there is a spike time t_i^f and a spike location x_i^f for neuron i such that

    \Pi_j(t_i^f, x_i^f) = n   and   \Pi_j(t_i^f, x_i^f) - \Pi_j(t_i^f - t, x_i^f - x) = D - 1.

Hebb's rule becomes:

- If m_j(u_i([0,t] \times D_x)) \in W_L then the synapse (i, j) is strengthened.
- If m_j(u_i([0,t] \times D_x)) \notin W_L then the synapse (i, j) vanishes.

The above formulation exhibits the local aspect, in time and space, of the Hebb rule.
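A space-time Poisson input is straightforward to simulate. The sketch below is illustrative only and assumes a one-dimensional spatial domain D_x = [0, L], a separable kernel \epsilon(s, d) = e^{-s/\tau} e^{-d^2/(2\sigma^2)}, and a constant weight; none of these choices come from the paper.

```python
import numpy as np

# Sketch of the space-time generalization: spikes carry a time t_j^f and a
# location x_j^f, and u_i(t, x) sums a space-time kernel over past spikes.
# Assumptions (not from the paper): 1-D spatial domain D_x = [0, L],
# separable kernel, constant weight, homogeneous space-time Poisson input.

rng = np.random.default_rng(3)
T, L, tau, sigma, w_ij = 1.0, 1.0, 0.02, 0.1, 1.0
rate_density = 100.0                     # expected spikes per unit time * space

def eps_st(s, d):
    """Space-time response kernel, zero for s < 0."""
    return np.where(s >= 0.0,
                    np.exp(-s / tau) * np.exp(-d**2 / (2 * sigma**2)),
                    0.0)

# Space-time Poisson process Pi_j on [0, T] x [0, L]
n_spikes = rng.poisson(rate_density * T * L)
t_spikes = rng.uniform(0.0, T, n_spikes)
x_spikes = rng.uniform(0.0, L, n_spikes)

def u_i(t, x):
    """u_i(t, x) = sum over past spikes of w_ij * eps(t - t^f, x - x^f)."""
    return float(np.sum(w_ij * eps_st(t - t_spikes, x - x_spikes)))

print("u_i(0.5, 0.5) =", u_i(0.5, 0.5))
```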

5 Conclusions

- The stochastic integral with respect to a Poisson process is a natural framework for spike response models.
- The Hebb rule is expressed as a detection criterion depending on the membrane potential.
- As kernels of the covariance functional of the membrane potential, the synaptic weights are involved in learning inequalities. In order to extract them, an algorithmic solution is needed.
- Space-time spiking neurons can be modelled by means of a space-time Poisson process; the above remarks apply to the space-time model as well.

References

[1] A. Climescu-Haulica, Calcul stochastique appliqué aux problèmes de détection des signaux aléatoires, Ph.D. Thesis, EPFL, Lausanne, 1999.
[2] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, 2002.