An Associative Neural Network Model of Classical Conditioning


Department of Numerical Analysis and Computer Science
TRITA-NA-P27
ISSN -225
ISRN KTH/NA/P-2/7SE

An Associative Neural Network Model of Classical Conditioning

Christopher Johansson and Anders Lansner

Report from Studies of Artificial Neural Systems (SANS)

Numerical Analysis and Computer Science (Nada)
Royal Institute of Technology (KTH)
S-100 44 Stockholm, Sweden

An Associative Neural Network Model of Classical Conditioning

Christopher Johansson* and Anders Lansner

TRITA-NA-P27

Abstract

In this paper we present a new associative model of classical conditioning based on a neural network. The new model is compared with a number of other well-known models of classical conditioning. The experiments used to evaluate the new model are commonly used and represent the set of tasks that a model of classical conditioning needs to address in order to be successful. The new neural network based model is composed of a number of interconnected Bayesian confidence propagating neural networks (BCPNNs). The BCPNN implements Hebbian learning. The new BCPNN based model falls under the category of associative models. A key concept of this model is to make a closer tie between the output and the underlying neural activity. The BCPNN model does not use delay lines, as many other models of conditioning do. The output from the BCPNN model fits the results of classical conditioning experiments.

Keywords: Classical Conditioning; BCPNN; Neural Network; Learning; Associative Model; Hebbian Learning

* cjo@nada.kth.se

Introduction

Classical conditioning is when an animal learns to associate a stimulus with a reinforcement or aversion. The study of conditioning goes back to the beginning of the 20th century and the experiments with dogs performed by the Russian physiologist Pavlov. When dogs are presented with food, which is an unconditioned stimulus, US, they start to salivate, and this response is called the unconditioned response, UR. In Pavlov's experiment the dogs heard a tone (conditioned stimulus, CS) before they were presented with the food. After a number of training trials the dogs were able to pair the CS with the US and started to salivate when only the CS was presented (conditioned response, CR). If the training is continued after the initial learning period, the intensity or probability of a response is increased. The distinction between UR and CR is less relevant in many parts of this text and hence we only use the term response, R.

The learning processes involved in classical conditioning have been studied for a long time and many models of these processes have been developed (for a review see the Ph.D. thesis by J. Morén [1]). Classical conditioning has attracted model-makers since there exists a lot of data on the phenomenon and the basic workings of conditioning are easy to understand. But conditioning is also a very difficult subject to study in the sense that the underlying biological system that is studied, i.e. the animal, is a very complex system. Small variations or changes in the experimental settings may have a large impact on the result. The type of sensory modality used for the conditioning stimulus plays an important role, e.g. animals cannot learn to associate a flavour with a shock. There is a general consensus that what we call learning is an intricate process of both explicit and implicit nature, and many of these learning processes are closely connected to and affected by emotions. This means that the effects seen in classical conditioning experiments do not originate from a single learning system but from several different interconnected learning mechanisms. A conclusion from this is that in order to describe all aspects of conditioning experiments one needs a model of considerable complexity; a single differential equation will not do, e.g. when several stimuli are present and the desired response depends on their relative activity.

The models used for describing the processes of classical conditioning are divided into two classes: computational models and associative models. Roughly speaking, a computational model can be said to be a black box with statistical computations used to analyse and predict future data. In this paper we are concerned with associative models and not computational models. Associative models are based on the idea that the inner workings of the biological system that is modelled are based on associations of events. This means that the model can be used not only for predicting output data but also for predicting how this data was created. Computational models on the other hand do not consider how the output data was created; they are only concerned with predicting the output data. A more comprehensive distinction between these two types of models is provided by Gallistel [2].

There are two types of associative models: trace-based models, like Sutton and Barto's, and neural network based models. The trace-based models are not able to predict the inner workings of the biological system they model. Many of the neural network based models use delay lines to handle the aspects of time, i.e. to create the correct dynamics of the response. An issue with delay lines is that they introduce a large amount of hardwiring into a model. The new model we present in this paper is a fusion between these two types of associative models, trace based and neural network based. From this fusion we get a more biologically realistic model, and at the same time it retains the capabilities possessed by these other models.

In this paper we aim at making a model with a simple phenotype but with a rich set of behaviours. The model is built from fairly simple building blocks: an advanced model of the perceptron. But creating an overall spanning model of classical conditioning is also hard since there is a wide variety of conditioning experiments, which often give contradictory results. Traditionally classical conditioning has been an area studied by psychologists, and many of the models of classical conditioning lack neural correlates. A key issue in the new model presented in this paper is to identify the correlation between the neural activity in the model and the results seen in the experiments. The model we present is built with a number of interconnected Bayesian confidence propagating neural networks (BCPNNs) [3-6]. The BCPNN has been used to model a variety of different systems and phenomena [7-9].

Classical Conditioning Experiments

In the following sections a number of typical experiments used to test classical conditioning models are presented. The outline and description of the experiments follows closely that of Balkenius [10]. In many texts about learning and acquisition the figures measure time on the x-axis. Time is measured in units of varying size, from seconds to days, and the meaning of time is very general, e.g. it could mean number of trials. In this paper time always means a measure on the order of magnitude of seconds. The speed of learning varies largely between different experiments, i.e. associations to food can be learnt after a single trial while associations to an air puff in the eye may take 5 trials or more to learn. In most of the experiments in this paper we assume that learning is complete after one trial, except for the experiments on S-shaped learning curves and reacquisition. We use one-trial learning because it enables the display of the dynamics of the activity during the conditioning experiments. With the explicit display of the activity it is relatively easy to grasp how the US, CS and R interact.

Acquisition and Learning Curve

The most important process in classical conditioning is acquisition, where an association between a stimulus and a response is made. Pavlov's experiment with dogs, described in the introduction, is a typical example of an acquisition process.

CS + US  =>  CS -> R

First a CS is presented; it is then followed by a US. This process is repeated a number of times, and as a result a CS that is later presented on its own will be able to produce R.

[Figure 1 shows three timing diagrams: Delay Conditioning A, Delay Conditioning B and Trace Conditioning, each plotting the CS and US against time with the ISI marked.]

Figure 1 Three different types of training procedures. There are two versions of delay conditioning: in A the CS disappears immediately before the US is presented; in B the CS is still present when the US is presented. In trace conditioning there is a silent period before the US is presented. The ISI interval is computed as the difference between the start of the CS and the start of the US.

In this paper we assume that the learning is complete after one trial, but in reality it often takes more than one trial for an animal to acquire an association. When the learning process occurs over multiple trials, a desirable property of a model is to show an S-shaped learning curve.
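To make the three training procedures above concrete, the sketch below generates binary CS and US time series for delay A, delay B and trace conditioning. It is only an illustration: the function name, the default stimulus lengths and the one-step time resolution are assumptions and are not taken from this report; only the definition of the ISI follows the text.

```python
import numpy as np

def make_trial(procedure, isi, cs_len=4, us_len=2, total=20):
    """Return binary CS and US time series for one conditioning trial.

    procedure: 'delay_A' (CS ends immediately before US onset),
               'delay_B' (CS stays on until the US ends), or
               'trace'   (CS ends, silent period, then US).
    isi: time from CS onset to US onset, as defined in the text above.
    """
    cs, us = np.zeros(total), np.zeros(total)
    cs_on, us_on = 0, isi            # the CS starts the trial; the US starts one ISI later
    us[us_on:us_on + us_len] = 1.0
    if procedure == "delay_A":       # CS lasts right up to the US onset
        cs[cs_on:us_on] = 1.0
    elif procedure == "delay_B":     # CS overlaps the US and ends with it
        cs[cs_on:us_on + us_len] = 1.0
    elif procedure == "trace":       # CS has a fixed length, leaving a silent period
        cs[cs_on:cs_on + cs_len] = 1.0
    else:
        raise ValueError("unknown procedure: " + procedure)
    return cs, us

cs, us = make_trial("trace", isi=8)
print("CS:", cs.astype(int))
print("US:", us.astype(int))
```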

Inter-Stimulus Interval Effects

The inter-stimulus interval (ISI) is central in classical conditioning. The ISI period measures the time between the start of the CS-stimulus and the US-stimulus. The basic acquisition process can be performed in three different setups (Figure 1): Delay A conditioning, where the CS ends immediately before the onset of the US; Delay B conditioning, where the CS does not end until the US ends; and Trace conditioning, where there is a silent period between the presentation of the CS and the US. Trace conditioning is different from delay conditioning in that the CS needs to be memorised. In many of the models we present in this paper the memory of the CS is formed by a decaying trace. In trace conditioning, the ISI can be both positive and negative. If the US appears before the CS is presented, the association becomes negative, i.e. an inhibitory connection is formed. The desirable behavior is for the response level to have a single peak at small positive ISIs, no response at all for negative ISIs and an asymptotically declining value as the ISI grows large.

Extinction

Extinction is when an acquired association between CS and R is erased or suppressed. The ability to relearn and extinguish previously made associations is often important in order to adapt to changes in the environment. Extinction of a previously made association is simply done by presenting the CS on its own without any following US.

CS (no US)  =>  CS -> no R

Reacquisition Effects

Reacquisition effects appear when an animal relearns a previously extinguished association. The acquisition is faster during the succeeding trials after the initial acquisition. Here, the reacquisition effect was tested by running an acquisition-extinction cycle four times. Sometimes the term savings effect is used instead of reacquisition. In the reacquisition experiment one is looking for the remains of the previously suppressed association. The remains of the first learning trial have the effect of speeding up the learning in the following trials.

Blocking

Blocking means that when a first stimulus, CS1, has been conditioned, i.e. associated with a response, a second stimulus, CS2, will be blocked from being associated with the response, R. The blocking experiment shows that acquisition of an association is not independent of earlier learning. In blocking no learning occurs when the second stimulus, CS2, is presented since CS1 has already been associated with R.

CS1 + US  =>  CS1 -> R
CS1 + CS2 + US  =>  CS2 -> no R

Conditioned Inhibition

In conditioned inhibition two stimuli, CS1 and CS2, are conditioned on R. A third stimulus, CS3, is then presented together with one of the previously conditioned stimuli, CS1, and no US is given. The result is that CS3 takes on inhibitory properties, which has as a result that if CS3 is presented together with CS2 there will be no response, R.

Phase I:  CS1 + US,  CS2 + US
Phase II: CS1 + CS3, no US
Test:     CS3 + CS2  ->  no R

Conditioned inhibition is an active form of extinction and it requires the existence of inhibitory associations between the conditioned stimulus and the response.

Secondary Conditioning

Secondary conditioning is when a conditioned stimulus CS1 is used to condition a second stimulus CS2 on the response, R. The effect is typically weak and highly dependent on the exact timing of CS1 and CS2, as CS1 will extinguish at the same time that CS2 is reinforced.

CS1 + US  =>  CS1 -> R
CS2 + CS1  =>  CS2 -> R

Secondary conditioning is by some considered to be an important part of a model of instrumental learning. Models of secondary conditioning or instrumental learning have been used to solve path-finding problems [11]. But secondary conditioning need not play a role in instrumental learning since there are neural models [12] that can solve the path-finding problem by other means than secondary conditioning.

Facilitation by an Intermittent Stimulus

A second type of acquisition can be seen in trace conditioning when an extra stimulus CS2 is introduced during the silent period between the presentation of CS1 and the US. If the conditioning to CS1 is weak due to a long ISI, the extra CS2 will facilitate conditioning to CS1.

Normal:      CS1 + US  =>  CS1 -> weak R
Facilitated: CS1 + CS2 + US  =>  CS1 -> strong R

Models of Classical Conditioning

In the paper by Balkenius [10] five different models are analysed: the Sutton-Barto, the Temporal-Difference, the Klopf, the Balkenius and the Schmajuk-DiCarlo model. These five models have roughly equal complexity and they also make roughly the same claims about their capabilities. All of these models are associative models. The

Klopf and Balkenius models have been influenced by neural network ideas. In all of these models the learning is initiated by changes in the input/output values.

The Sutton-Barto (SB) model is built around a single, first order, differential equation and it works in real time. The model is an extension of a previous model by Rescorla-Wagner, which did not work in real time. The SB model is over 20 years old and it has been a precursor of many later associative models. The Temporal-Difference (TD) model is an extension of the SB model where the concept of discounted rewards has been introduced.

The Klopf model incorporates some basic ideas from neural networks, and the equation describing the model could also be used to describe the relationship between units in a feed-forward neural network with delay lines. The output, R, is computed by weighting the incoming CS(s) both in time and for each stimulus. The values of the weights are changed by training with a set of time constants (the delay lines). Excitatory and inhibitory training is separated and there are both negative and positive weights.

Neural networks have also influenced the Balkenius model and, like the Klopf model, it also uses delay lines. The separate training of excitatory and inhibitory connections is even more pronounced than in the Klopf model. The equation for the weight computation is very similar to the differential equation used in the Sutton-Barto model. The Balkenius model can be seen as an attempt to combine TD-learning and neural networks. The Schmajuk-DiCarlo (SD) model is based on differential equations, and the concept of short-term memory is modelled by a trace in a fashion similar to that used in the BCPNN-model.

Bayesian Confidence Propagating Neural Network

The Bayesian confidence propagating neural network (BCPNN) algorithm is derived from Bayesian statistics [13]. The weights are computed according to Hebb's principle [14] of strengthening co-activated units. A population is defined as a set of units, and a projection as the computation needed to derive the weights and biases of a connection between two populations. A projection has a direction, and the weights and biases are included in it. A very important feature of the BCPNN is the partitioning of units into hypercolumns, but in this paper we will only be concerned with single hypercolumn populations. More information about hypercolumns is found in the references on the BCPNN [3, 5, 13]. In the following equations N is used to denote the total number of units in a population and H (which always equals 1) to denote the number of hypercolumns in a population. In a population with hypercolumns indexed by k, each hypercolumn has U_k units, which gives N = Σ_k U_k. The differential equations are preferably solved by Euler's method since it is the simplest method available.

The first two equations, eq. (1) and (2), represent the synaptic potential in the pre- and postsynaptic synapses. The build-up of this potential is controlled by the parameters τ_Zpre and τ_Zpost. The inputs to the pre- and postsynaptic units are represented by the variables S_i and S_j. If τ_Z = 0 the synaptic potential is instantaneously built up but it disappears when the input disappears. If τ_Z is small,

slightly larger than 0, the synaptic potential builds up and disappears rapidly, and if τ_Z is large the build-up and decay of the synaptic potential is slow.

\frac{dZ_i(t)}{dt} = \frac{S_i(t) - Z_i(t)}{\tau_{Zpre}}    (1)

\frac{dZ_j(t)}{dt} = \frac{S_j(t) - Z_j(t)}{\tau_{Zpost}}    (2)

The second set of equations, eq. (3)-(5), represents the synaptic trace in a connection between two units. The function of these three equations is to delay the storage of correlations. The synaptic trace, stored in the E-variables, is thought to correspond to the Ca2+ influx in a synapse after it has been activated, which is necessary for synaptic potentiation [15]. If τ_E = 0 there is no synaptic trace; the activity of Z_i and Z_j is instantaneously propagated to the memory in eq. (6)-(8). If τ_E is small, slightly larger than 0, the synaptic trace is short, and if τ_E is large the synaptic trace is consequently long.

\frac{dE_i(t)}{dt} = \frac{Z_i(t) - E_i(t)}{\tau_E}    (3)

\frac{dE_j(t)}{dt} = \frac{Z_j(t) - E_j(t)}{\tau_E}    (4)

\frac{dE_{ij}(t)}{dt} = \frac{Z_i(t) Z_j(t) - E_{ij}(t)}{\tau_E}    (5)

In the following three equations, eq. (6)-(8), the P-variables constitute the memory of a connection and are thus intended to correspond to the long-term potentiation (LTP) in a synaptic coupling [15]. When the printnow-signal is activated, i.e. the variable κ is greater than zero, the information stored in the state of the synaptic coupling (the E-variables) is transferred into the memory (the P-variables) by eq. (6)-(8). The activation of the printnow-signal is thought to correspond to the release of intercellular neuromodulator substances [16], e.g. dopamine [17]. The printnow-signal is in the range of (0, 1) and a large printnow-signal induces a large change in the memory. If τ_P is small the memory is short-term and only a few patterns or correlations are remembered, and if τ_P is large the memory is long-term.

\frac{dP_i(t)}{dt} = \kappa \, \frac{E_i(t) - P_i(t)}{\tau_P}    (6)

\frac{dP_j(t)}{dt} = \kappa \, \frac{E_j(t) - P_j(t)}{\tau_P}    (7)

\frac{dP_{ij}(t)}{dt} = \kappa \, \frac{E_{ij}(t) - P_{ij}(t)}{\tau_P}    (8)

At each time-step the P-variables are used to compute the weights and biases. The constant λ is used to avoid the logarithm of zero and it can be thought of as the bias activity or as the noise in a synapse.
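Before turning to the weight computation, here is a minimal sketch of how eqs. (1)-(8) can be integrated with Euler's method for one projection between a presynaptic and a postsynaptic population. The time-step and the time constants used as defaults are illustrative assumptions; only the update rules themselves follow the equations above.

```python
import numpy as np

def euler_step(S_pre, S_post, state, kappa, dt=0.1,
               tau_zpre=1.0, tau_zpost=1.0, tau_e=1.0, tau_p=20.0):
    """One Euler step of eqs. (1)-(8) for a single projection.

    S_pre, S_post: input vectors of the pre- and postsynaptic populations.
    state: tuple (Z_i, Z_j, E_i, E_j, E_ij, P_i, P_j, P_ij) of numpy arrays.
    kappa: the printnow-signal gating the transfer of the E-traces into the P-memory.
    """
    Z_i, Z_j, E_i, E_j, E_ij, P_i, P_j, P_ij = state

    Z_i = Z_i + dt * (S_pre - Z_i) / tau_zpre                  # eq. (1)
    Z_j = Z_j + dt * (S_post - Z_j) / tau_zpost                # eq. (2)
    E_i = E_i + dt * (Z_i - E_i) / tau_e                       # eq. (3)
    E_j = E_j + dt * (Z_j - E_j) / tau_e                       # eq. (4)
    E_ij = E_ij + dt * (np.outer(Z_i, Z_j) - E_ij) / tau_e     # eq. (5)
    P_i = P_i + dt * kappa * (E_i - P_i) / tau_p               # eq. (6)
    P_j = P_j + dt * kappa * (E_j - P_j) / tau_p               # eq. (7)
    P_ij = P_ij + dt * kappa * (E_ij - P_ij) / tau_p           # eq. (8)

    return Z_i, Z_j, E_i, E_j, E_ij, P_i, P_j, P_ij
```

A non-zero κ moves the E-traces into the P-memory, which matches the description of the printnow-signal as the trigger for storage.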

\beta_i(t) = P_i(t) + \lambda    (9)

w_{ij}(t) = \frac{P_{ij}(t) + \lambda^2}{\left(P_i(t) + \lambda\right)\left(P_j(t) + \lambda\right)}    (10)

The retrieval process is divided into two parts: first the potential is computed, then the new activity. The potential h_i is computed by integrating over the support given by the incoming connections. Before the retrieval process is started the potential is initialised to zero. The potential is then integrated, by eq. (11), until it settles on a stable set of values. The constant τ_C is the potential integration factor or, in more neurophysiological terms, the membrane time constant. It determines how fast the potential of a unit adapts to new input. We define a set C(k) that consists of all units, j, in a hypercolumn k. In the following two equations we use the set C(k) to describe how the new activity, o_i, is computed. The new activity, o_i, is computed by normalising the support within each hypercolumn.

\frac{dh_i(t)}{dt} = \frac{\log\left(\beta_i(t)\right) + \sum_{k=1}^{H} \log\left( \sum_{j \in C(k)} w_{ij}(t) \, o_j(t) \right) - h_i(t)}{\tau_C}    (11)

o_i(t) = \frac{e^{G h_i(t)}}{\sum_{j \in C(k)} e^{G h_j(t)}} \quad \text{for each hypercolumn } k    (12)

The characteristics of the BCPNN are determined by the values of τ_Zpre, τ_Zpost, τ_E, τ_P, τ_C and G. The values of the τ parameters are in the range (0, ∞) and the G parameter is in the range [0, ∞). The gain, G, is used to control the shape of the Gibbs distribution used to compute the output activity from a population. The differential equations were solved with Euler's method using a fixed time-step Δt. In the implementation the lower bound of the τ parameters then becomes Δt instead of 0.

If two populations are of equal size, a non-plastic one-to-one projection can be created between these populations. The one-to-one projection connects unit n in the first population with unit n in the second population, and so on. A one-to-one projection is created with the following settings: β = λ and w = 1/λ.

A population has two modes of operation: silent and active. When there is no activity in a population, i.e. the activity is zero for each unit, we say that the population is silent. A silent population does not affect any other population. If there is activity in a population, i.e. all units have bias activity, we say that the population is active. If the units in a population have bias activity they do affect other populations because of the bias β.
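A corresponding sketch of eqs. (9)-(12) is given below: the weights and biases are computed from the P-variables, and retrieval integrates the support h and normalises it with a softmax of gain G. The value of λ, the integration time-step and the number of settling steps are assumptions made for the example, and the sketch adopts the convention that β and h belong to the receiving population while the sum in eq. (11) runs over the activities of the sending population.

```python
import numpy as np

def weights_and_bias(P_pre, P_post, P_joint, lam=0.01):
    """Eqs. (9)-(10): bias of the receiving units and weights of one projection.

    P_pre, P_post: P-variables of the sending and receiving units.
    P_joint: joint P-variables, shape (n_pre, n_post).
    lam: small constant that avoids log(0) at retrieval (value assumed here).
    """
    beta = P_post + lam                                              # eq. (9)
    w = (P_joint + lam ** 2) / np.outer(P_pre + lam, P_post + lam)   # eq. (10)
    return beta, w

def retrieve(beta, w, o_pre, dt=0.1, tau_c=5.0, G=1.0, n_steps=200):
    """Eqs. (11)-(12) for a single hypercolumn (H = 1): integrate the support h
    until it settles, then normalise the activity with a softmax of gain G."""
    h = np.zeros(len(beta))
    for _ in range(n_steps):
        support = np.log(beta) + np.log(w.T @ o_pre)   # eq. (11), one incoming projection
        h += dt * (support - h) / tau_c
    e = np.exp(G * h)
    return e / e.sum()                                  # eq. (12)
```

With several incoming projections, each projection would contribute one log-sum term to the support, which is what the sum over hypercolumns k in eq. (11) expresses.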

Method

The Neural Model

The neural network based model we used had three populations: a CS-population, a US-population and an R-population. The CS- and US-populations were used to represent incoming stimuli and they were both connected with the R-population (Figure 2). The two connections between the CS-population and the R-population were plastic, while the connection between the US-population and the R-population was a non-plastic direct projection.

[Figure 2 shows the three populations of 20 units each: the CS-population connected to the R-population by an inhibitory and an excitatory all-to-all plastic projection, and the US-population connected to the R-population by a non-plastic one-to-one projection.]

Figure 2 The BCPNN model of classical conditioning. Three populations were used; each represented an input or output node of the system, e.g. a CS input was set in the CS-population. There were two plastic projections from the CS-population to the R-population, one inhibitory and the other excitatory. The projection between the US-population and the R-population was static.

The model had 20 units in each population. At least 20 units should be used in each population in order to attain a low bias level of activity, as the activity was normalised within each population.² If e.g. two units had been used, the bias level of activity would have been 50% or 0.5; with twenty units the bias level of activity becomes 5% or 0.05.

² In this special case there was only one hypercolumn in the population and therefore we normalised over the entire population.
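The wiring in Figure 2 can be summarised as a small configuration structure. This is only a descriptive sketch of the architecture; the field names are illustrative and not part of any implementation described in the report.

```python
N_UNITS = 20   # units per population, as stated above

populations = ["CS", "US", "R"]

projections = [
    # two all-to-all plastic projections from the CS-population to the R-population
    {"source": "CS", "target": "R", "connectivity": "all-to-all", "plastic": True,  "sign": "excitatory"},
    {"source": "CS", "target": "R", "connectivity": "all-to-all", "plastic": True,  "sign": "inhibitory"},
    # one static one-to-one projection from the US-population to the R-population
    {"source": "US", "target": "R", "connectivity": "one-to-one", "plastic": False, "sign": "excitatory"},
]
```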

The non-plastic connection had the property that if unit n in the US-population was active then unit n in the R-population was strongly excited. This meant that by setting a unit active in the US-population a corresponding unit also became active in the R-population. One of the two plastic projections was excitatory and the other was inhibitory. Which of the two projections was updated was determined by the printnow-signal. If the printnow-signal was positive then the excitatory projection was updated, and if the printnow-signal was negative then the inhibitory projection was updated.

If only one stimulus was present, the activity of this stimulus / unit was set to 1 - (N-1)λ and the remaining stimuli / units were set to λ. When two stimuli were present, the two active stimuli shared the activity 1 - (N-2)λ and the remaining stimuli were set to λ. This meant that when two stimuli were present the activity of each active stimulus was about 0.5.

A central part in all models of classical conditioning is the triggering of learning, i.e. detecting that new information is available and that it should be learnt. In all of the models mentioned, the rate of learning is an artifact of the past and current level of change in the CS and US inputs. In the BCPNN-model the printnow-signal controls the learning rate. The printnow-signal was computed according to eq. (13), where o was the activity in one of the units in the R-population and k was a scale factor. In all experiments except one, k = 5. We assumed a small noise level of at least 2% in the activity and therefore changes smaller than 0.2 in the derivative of the activity were neglected. In the implementation, the 0.2 threshold enabled us to avoid boundary problems between the different phases of input to the model. The printnow-signal was gated by the US. If no US was present, the printnow-signal, and hence the learning rate, was zero. This can be motivated by direction of attention: if no attention is paid to the stimulus then it has no relevance and hence should not affect the learning.

\text{printnow-signal} = \begin{cases} k \, \frac{\partial o(t)}{\partial t} & \text{if } \left| \frac{\partial o(t)}{\partial t} \right| > 0.2 \\ 0 & \text{otherwise} \end{cases}    (13)

In all of the models, except the Klopf model, there exists a trace of the CS-input. In the Klopf model the trace of the CS-input is replaced by a summation over delay lines. The corresponding trace in the BCPNN-model is found in the Z-variables. The time-dynamics of the response was controlled by the membrane potential, eq. (11), in the BCPNN-model, while in the Klopf and Balkenius models the response dynamics is created by the delay lines.

The following parameter settings were used in the model; τ_Zpre =, τ_Zpost =, τ_E =, τ_P = 2, τ_C = 5 and λ = . All populations except the R-population had G =; the R-population had G = 3.
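A minimal sketch of eq. (13) together with the US gating described above is shown below. The finite-difference step and the helper name are assumptions; k and the 0.2 threshold are the values quoted in the text.

```python
def printnow_signal(o_prev, o_curr, us_present, dt=0.1, k=5.0, threshold=0.2):
    """Eq. (13): printnow-signal from the change of activity o in an R-population unit.

    The signal is gated by the US (no US means no learning) and changes in the
    derivative smaller than the noise threshold are neglected.
    """
    if not us_present:
        return 0.0                      # gating: no attention to the stimulus, no learning
    do_dt = (o_curr - o_prev) / dt      # finite-difference estimate of the derivative of o
    if abs(do_dt) <= threshold:
        return 0.0                      # below the assumed noise level
    return k * do_dt                    # sign selects the projection: positive -> excitatory,
                                        # negative -> inhibitory
```

The gating makes the US act like an attention signal: only when a US is present can a change in the response activity drive plasticity in either of the CS-to-R projections.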

Results

The results of the BCPNN model applied to the previously described experiments are grouped in four sections. The time given on the x-axis is intended to be of the same order of magnitude as seconds. When an acquisition was repeated, the label on the x-axis was changed from time to number of trials. All of the plots follow the same layout: the first graph shows the printnow-signal, the second graph shows the response, the third (and following) graph(s) show the CS(s) and the graph at the bottom shows the US. As mentioned earlier, the printnow-signal controlled the learning rate of the CS-to-R projections. A positive printnow-signal meant that the excitatory projection was updated and a negative printnow-signal meant that the inhibitory projection was updated.

Figure 3 The learning curve over 5 training trials. The printnow-signal was set to a small percentage of its normal strength. Trace conditioning with an ISI of 4 units was used. The learning curve was very slightly S-shaped.

Acquisition and Inter-Stimulus Interval

First we studied the effects of repeated learning and the form of the learning curve (Figure 3). In order to avoid instantaneous learning the printnow-signal was set to a small percentage of its original strength (eq. (13)). We used trace conditioning and ran 5 training trials, and the result was a slightly S-shaped learning curve.

The length of the inter-stimulus interval (ISI) affects the acquisition of a stimulus-response association (Figure 4). Three different types of acquisition were considered

and delay B conditioning gave the highest probability / strongest response. Delay A and trace conditioning were both identical for small ISI-periods, less than 2 time-units. The sharp rise of the response seen for delay A and trace conditioning was due to the build-up of the potential; the US had to be present for at least a few time-units in order to induce a potential in the R-population. When the ISI-period was larger than 2 time-units the effects of trace conditioning started to become apparent, with a lower level of response as a consequence. If the ISI period was 8 time-units long, the trace or memory of an input had almost completely disappeared. Delay A and B conditioning had almost identical response levels when the ISI was larger than 2 time-units.

[Plot: response probability as a function of the ISI (time) for Delay A, Delay B and Trace conditioning.]

Figure 4 The inter-stimulus interval (ISI) effect in delay and trace conditioning experiments. During the first 2 time-units the delay A and trace conditioning curves were almost identical, and later the curves of delay A and B conditioning were identical. The sharp rise of the response during the first time-units seen in delay A and trace conditioning occurred because of the build-up of potential in the R-population.

In Figure 5, Figure 6 and Figure 7 a detailed view of the acquisition process is presented. The ISI-period was set to 4 time-units. The strongest response was achieved when delay A or B conditioning was used. Trace conditioning produced a slightly weaker response. The initial 7 time-units in Figure 5, Figure 6 and Figure 7 were used to acquire the association. These three figures illustrate the three different acquisition processes described in Figure 1. Notice how the changed level of response results in a printnow-signal. After the acquisition of the association there was a silent period of 4 time-units. The silent period allowed the activity in the network to go down towards its bias level. At time 12 the CS was given to test the strength of the acquired association. The length of this test period was 3 time-units.

Figure 5 Delay A conditioning with an ISI-period of 4. The CS ended before the US input. The first 7 time-units were used to acquire the association and the remaining 8 time-units were used to test the association.

Figure 6 Delay B conditioning with an ISI-period of 4. The CS did not end at the onset of the US but ended at the same time as the US. The first 7 time-units were used to acquire the association and the remaining 8 time-units were used to test the association.

Extinction and Reacquisition

The extinction experiment in Figure 8 started with the acquisition of an association during the first 5 time-units. This was followed by a silent period.

At time 9 the extinction process was initiated. The extinction process worked almost like the acquisition process, with the only difference that the US was not presented.³ As a result the response level, when tested again at time 25, was much lower than previously (at time 8). Notice that a negative printnow-signal was created because the US was absent.

The reacquisition effects were tested by four consecutive acquisition-extinction cycles (Figure 9). After the first acquisition-extinction cycle the response was harder to facilitate, the complete opposite of the desired behavior where the response is more easily facilitated in the following cycles.

Figure 7 Trace conditioning with an ISI-period of 4. The CS ended long before the US became active. The first 7 time-units were used to acquire the association and the remaining 8 time-units were used to test the association.

Blocking

Figure 10 shows the effect of blocking. First an association was acquired for CS1. This was followed by a silent period of 5 time-units. A new acquisition was then initiated and a second stimulus, CS2, was introduced. During this new acquisition process the printnow-signal was zero. The printnow-signal was zero because the first stimulus, CS1, already predicted a 100% response and no change in the level of response occurred when the second stimulus, CS2, was introduced. Hence when the US was activated the response was already at 100% and therefore no printnow-signal was produced. During the last 5 time-units the response given by the two stimuli was tested. The response level of CS1 remained high and the response level of CS2 remained low.

³ The very small activity of the US was the bias activity. The bias activity was used to indicate that something was expected to happen.

Figure 8 Extinction of an acquired response. The response given by the CS at time 8 was almost maximal. After the extinction process, at time 25, the level of response following a CS was clearly reduced.

Figure 9 The reacquisition effects were tested by running four acquisition-extinction cycles. After the first cycle the response was harder to facilitate. The printnow-signal was set to a small percentage of its normal strength.

Figure 10 Blocking of a second stimulus. An association was first acquired to CS1. Then both CS1 and CS2 were conditioned on the US. This did not associate CS2 with the response.

Figure 11 Conditioned inhibition. Activity in CS1 or CS3 had an excitatory influence on the response. At time 25 to 28 CS2 was trained to have an inhibitory effect on the response.

Conditioned Inhibition

Conditioned inhibition shows that a stimulus can attain an inhibitory influence on the response. Figure 11 shows the conditioned inhibition experiment. In this experiment CS1 and CS3 were positively associated with the response, i.e. they had an excitatory influence on the response (this was done during the first 7 time-units). After the initial acquisitions the stimuli CS1 and CS2 were presented simultaneously (between time 7 and 22). During the time from 22 to 25 the inhibitory connection was created, since no US was presented, and associated with stimulus CS2. Finally, from time 25 onwards, the response levels of different stimuli were again tested. The activation of stimulus CS2 had an inhibitory effect on the response.

Secondary Conditioning

Secondary conditioning was strong. Figure 12 shows how secondary conditioning was used to transfer the association from CS1 to CS2. The transfer was made with delay A type conditioning. The second printnow-signal was not as strong as the first because it was induced by CS1. Finally the response induced by CS2 was tested.

Figure 12 Secondary conditioning of CS2 from CS1. Stimulus CS2 gave a response that was as intense as that given by CS1.

Facilitated Acquisition

In this experiment the effect of an intermittent stimulus during the delay period was evaluated. Figure 13 shows trace conditioning with an ISI-period of 6. Figure 14 shows trace conditioning but with an intermittent stimulus during the silent period

between time 2 and 6. The conditioning with an intermittent stimulus during the silent period gave a slightly higher level of response (compare the levels of response at time 3 in Figure 13 and Figure 14).

Figure 13 Normal acquisition of a conditioned stimulus. The ISI-period of this trace conditioning was 6 time-units.

Figure 14 Facilitated acquisition of a conditioned stimulus. The ISI-period was 6 time-units. At the start of the silent period, at time 2, a second stimulus was activated.

Comparison to Other Models

The results from the experiments are summarized in Table 1. The BCPNN-model was able to perform most of the tasks.

Models compared: SB, TD, Klopf, Balkenius, SD, BCPNN.

Trace Conditioning: * * * * * *
Delay Conditioning: * O * * *
ISI-curve: * O o * o
S-shaped Acquisition: * * *
Extinction: * * * * * *
Reacquisition: o *
Blocking: * * * * * *
Secondary Conditioning: O o * * *
Conditioned Inhibition: * * * * * *
Facilitation: * * * * * o

Table 1 This table was copied from Balkenius' paper [10]. The last column contains data on the BCPNN-model. A '*' means that the feature was explained by the model and an 'o' means that the feature could be partially explained by the model.

Discussion

This new model has a different origin than many other models of classical conditioning. The BCPNN model was developed from the perspective: we have a biologically plausible neural network model, can it be used to build a system that models the data seen in classical conditioning experiments? Usually, models of classical conditioning have been developed based on a desire to model the data in the experiments. An implication of this is that the modeling of the underlying neural processes has had to stand back in order for the models to meet their expected output. The approach to modeling where no attention is paid to the underlying mechanisms is taken to its full extent in the computational models. The computational models, which are completely based on statistical considerations without any biological considerations, represent the complete opposite of our modeling approach.

The Balkenius model has large similarities to the BCPNN model. The Balkenius model uses one layer, or set, of differential equations to compute the weights. As in most of the models, including the BCPNN, this first set of differential equations enables the model to handle asynchronous associations. In order to handle the dynamics of the response the Balkenius model uses delay lines. What we have done in the BCPNN model is to combine these two aspects of the dynamics, the associative and the response dynamics, into a single system of coupled differential equations. This means that the acquisition process has been modeled by the dynamics of synapses and membrane potentials.

A hallmark of this model is the unified view that has been used when constructing the model. All three populations are built with the same neural model, the BCPNN. The system can easily be extended with more populations and incorporated with other BCPNN neural systems. In the current model the populations are only groups of units, but in a more elaborate model these populations could be attractor memories or complex sensory or output systems.

The BCPNN model modelled extinction, blocking, inhibition and secondary conditioning excellently. The model did not fully model the ISI curve of trace conditioning. The model could not handle the situation where the US was presented before the CS. In a successful model, a reversed order of stimulus presentation results in the creation of an inhibitory connection. By setting τ_Zpost to a value larger than one, the connection between the CS- and R-populations would probably become inhibitory.

An important part of the BCPNN model is the printnow-signal. There are many ways to compute the printnow-signal and the behavior of the model is heavily dependent on it. In this paper we did not use a neural circuit to compute the derivative of the activity in the R-population. This is an area of future studies. A future study should also address the issue with reacquisition. The experiments in this report showed the opposite behavior to the desired one: the effect of repeated acquisition-extinction was that the learning became slower after the first learning trial instead of faster.

References

1. Morén, J., Emotion and Learning - A Computational Model of the Amygdala, in Cognitive Studies. 2002, Lund University: Lund.
2. Gallistel, C.R. and J. Gibbon, Computational Versus Associative Models of Simple Conditioning. Current Directions in Psychological Science, 2001.
3. Johansson, C., A. Sandberg, and A. Lansner, A Neural Network with Hypercolumns, in ICANN 2002. Madrid, Spain: Springer-Verlag, Berlin.
4. Lansner, A. and Ö. Ekeberg, A one-layer feedback artificial neural network with a Bayesian learning rule. Int. J. Neural Systems, 1989.
5. Sandberg, A., et al., A Bayesian attractor network with incremental learning. Network: Computation in Neural Systems, 2002. 13(2).
6. Holst, A. and A. Lansner, A Higher Order Bayesian Neural Network for Classification and Diagnosis, in Applied Decision Technologies: Computational Learning and Probabilistic Reasoning, A. Gammerman, Editor. 1996, John Wiley & Sons Ltd.: New York.
7. Sandberg, A., A. Lansner, and K.-M. Petersson, A Bayesian Connectionist Model for Memory Scanning, in XXVII International Congress of Psychology.
8. Sandberg, A. and A. Lansner, Synaptic Depression as an Intrinsic Driver of Reinstatement Dynamics in an Attractor Network. Neurocomputing.
9. Sandberg, A., K.M. Petersson, and A. Lansner, Selective Enhancement of Recall through Plasticity Modulation in an Autoassociative Memory. Neurocomputing.
10. Balkenius, C. and J. Morén, Computational Models of Classical Conditioning: A Comparative Study. 1998, Lund University Cognitive Science: Lund.
11. Balkenius, C., Generalization in Instrumental Learning, in From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior. 1996: The MIT Press/Bradford Books.
12. Johansson, C. and A. Lansner, A Neural Reinforcement Learning System. 2002, Nada, SANS: Stockholm.
13. Holst, A., The Use of a Bayesian Neural Network Model for Classification Tasks, in Dept. of Numerical Analysis and Computing Science. 1997, Kungl. Tekniska Högskolan: Stockholm.
14. Hebb, D.O., The Organization of Behavior. 1949, New York: John Wiley.
15. Kandel, E.R., J.H. Schwartz, and T.M. Jessell, The Anatomical Organization of the CNS; Coding of Sensory Information, in Principles of Neural Science. 2000, McGraw-Hill.
16. Marder, E. and V. Thirumalai, Cellular, synaptic and network effects of neuromodulation. Neural Networks, 2002. 15(4-6).
17. Reynolds, J.N.J. and J.R. Wickens, Dopamine-dependent plasticity of corticostriatal synapses. Neural Networks, 2002. 15(4-6).


More information

Decentralized Method for Traffic Monitoring

Decentralized Method for Traffic Monitoring Decentralized Method for Traffic Monitoring Guillaume Sartoretti 1,2, Jean-Luc Falcone 1, Bastien Chopard 1, and Martin Gander 2 1 Computer Science Department 2 Department of Mathematics, University of

More information

CS311 Lecture: Sequential Circuits

CS311 Lecture: Sequential Circuits CS311 Lecture: Sequential Circuits Last revised 8/15/2007 Objectives: 1. To introduce asynchronous and synchronous flip-flops (latches and pulsetriggered, plus asynchronous preset/clear) 2. To introduce

More information

Empirical Background for Skinner s Basic Arguments Regarding Selection by Consequences

Empirical Background for Skinner s Basic Arguments Regarding Selection by Consequences Empirical Background for Skinner s Basic Arguments Regarding Selection by Consequences Iver Iversen University of North Florida, Jacksonville Presentation at NAFO, April 2016 Gol, Norway Skinner was Controvercial

More information

Continuous Performance Test 3 rd Edition. C. Keith Conners, Ph.D.

Continuous Performance Test 3 rd Edition. C. Keith Conners, Ph.D. Continuous Performance Test 3 rd Edition C. Keith Conners, Ph.D. Assessment Report Name/ID: Alexandra Sample Age: 16 Gender: Female Birth Date: February 16, 1998 Grade: 11 Administration Date: February

More information

Modern Construction Materials Prof. Ravindra Gettu Department of Civil Engineering Indian Institute of Technology, Madras

Modern Construction Materials Prof. Ravindra Gettu Department of Civil Engineering Indian Institute of Technology, Madras Modern Construction Materials Prof. Ravindra Gettu Department of Civil Engineering Indian Institute of Technology, Madras Module - 2 Lecture - 2 Part 2 of 2 Review of Atomic Bonding II We will continue

More information

Neural Network and Genetic Algorithm Based Trading Systems. Donn S. Fishbein, MD, PhD Neuroquant.com

Neural Network and Genetic Algorithm Based Trading Systems. Donn S. Fishbein, MD, PhD Neuroquant.com Neural Network and Genetic Algorithm Based Trading Systems Donn S. Fishbein, MD, PhD Neuroquant.com Consider the challenge of constructing a financial market trading system using commonly available technical

More information

Neurophysiology. 2.1 Equilibrium Potential

Neurophysiology. 2.1 Equilibrium Potential 2 Neurophysiology 2.1 Equilibrium Potential An understanding of the concepts of electrical and chemical forces that act on ions, electrochemical equilibrium, and equilibrium potential is a powerful tool

More information

Flip-Flops, Registers, Counters, and a Simple Processor

Flip-Flops, Registers, Counters, and a Simple Processor June 8, 22 5:56 vra235_ch7 Sheet number Page number 349 black chapter 7 Flip-Flops, Registers, Counters, and a Simple Processor 7. Ng f3, h7 h6 349 June 8, 22 5:56 vra235_ch7 Sheet number 2 Page number

More information

NEURAL NETWORK FUNDAMENTALS WITH GRAPHS, ALGORITHMS, AND APPLICATIONS

NEURAL NETWORK FUNDAMENTALS WITH GRAPHS, ALGORITHMS, AND APPLICATIONS NEURAL NETWORK FUNDAMENTALS WITH GRAPHS, ALGORITHMS, AND APPLICATIONS N. K. Bose HRB-Systems Professor of Electrical Engineering The Pennsylvania State University, University Park P. Liang Associate Professor

More information

Psychology 3720. Learning. Dr. r. D

Psychology 3720. Learning. Dr. r. D Psychology 3720 Learning Dr. r. D Lecture 13 Acquisition Spontaneous recovery, resurgence Reinforcement/punishment, positive/negative Immediate vs delayed reinforcement Primary vs secondary reinforcement

More information

Electrical Resonance

Electrical Resonance Electrical Resonance (R-L-C series circuit) APPARATUS 1. R-L-C Circuit board 2. Signal generator 3. Oscilloscope Tektronix TDS1002 with two sets of leads (see Introduction to the Oscilloscope ) INTRODUCTION

More information

On the Interaction and Competition among Internet Service Providers

On the Interaction and Competition among Internet Service Providers On the Interaction and Competition among Internet Service Providers Sam C.M. Lee John C.S. Lui + Abstract The current Internet architecture comprises of different privately owned Internet service providers

More information

Another Look at Sensitivity of Bayesian Networks to Imprecise Probabilities

Another Look at Sensitivity of Bayesian Networks to Imprecise Probabilities Another Look at Sensitivity of Bayesian Networks to Imprecise Probabilities Oscar Kipersztok Mathematics and Computing Technology Phantom Works, The Boeing Company P.O.Box 3707, MC: 7L-44 Seattle, WA 98124

More information

UNIVERSITY OF BOLTON SCHOOL OF ENGINEERING MS SYSTEMS ENGINEERING AND ENGINEERING MANAGEMENT SEMESTER 1 EXAMINATION 2015/2016 INTELLIGENT SYSTEMS

UNIVERSITY OF BOLTON SCHOOL OF ENGINEERING MS SYSTEMS ENGINEERING AND ENGINEERING MANAGEMENT SEMESTER 1 EXAMINATION 2015/2016 INTELLIGENT SYSTEMS TW72 UNIVERSITY OF BOLTON SCHOOL OF ENGINEERING MS SYSTEMS ENGINEERING AND ENGINEERING MANAGEMENT SEMESTER 1 EXAMINATION 2015/2016 INTELLIGENT SYSTEMS MODULE NO: EEM7010 Date: Monday 11 th January 2016

More information

An Empirical Study of Two MIS Algorithms

An Empirical Study of Two MIS Algorithms An Empirical Study of Two MIS Algorithms Email: Tushar Bisht and Kishore Kothapalli International Institute of Information Technology, Hyderabad Hyderabad, Andhra Pradesh, India 32. tushar.bisht@research.iiit.ac.in,

More information

Learning to classify complex patterns using a VLSI network of spiking neurons

Learning to classify complex patterns using a VLSI network of spiking neurons Learning to classify complex patterns using a VLSI network of spiking neurons Srinjoy Mitra, Giacomo Indiveri and Stefano Fusi Institute of Neuroinformatics, UZH ETH, Zurich Center for Theoretical Neuroscience,

More information

3.2 LOGARITHMIC FUNCTIONS AND THEIR GRAPHS. Copyright Cengage Learning. All rights reserved.

3.2 LOGARITHMIC FUNCTIONS AND THEIR GRAPHS. Copyright Cengage Learning. All rights reserved. 3.2 LOGARITHMIC FUNCTIONS AND THEIR GRAPHS Copyright Cengage Learning. All rights reserved. What You Should Learn Recognize and evaluate logarithmic functions with base a. Graph logarithmic functions.

More information

Behaviorism & Education

Behaviorism & Education Behaviorism & Education Early Psychology (the use of nonobjective methods such as Introspection) Learning = behavior change movement toward objective methods Behaviorism Pavlov, Skinner (Focus on Sà R)

More information

Biological Neurons and Neural Networks, Artificial Neurons

Biological Neurons and Neural Networks, Artificial Neurons Biological Neurons and Neural Networks, Artificial Neurons Neural Computation : Lecture 2 John A. Bullinaria, 2015 1. Organization of the Nervous System and Brain 2. Brains versus Computers: Some Numbers

More information

Industry Environment and Concepts for Forecasting 1

Industry Environment and Concepts for Forecasting 1 Table of Contents Industry Environment and Concepts for Forecasting 1 Forecasting Methods Overview...2 Multilevel Forecasting...3 Demand Forecasting...4 Integrating Information...5 Simplifying the Forecast...6

More information

Stabilization by Conceptual Duplication in Adaptive Resonance Theory

Stabilization by Conceptual Duplication in Adaptive Resonance Theory Stabilization by Conceptual Duplication in Adaptive Resonance Theory Louis Massey Royal Military College of Canada Department of Mathematics and Computer Science PO Box 17000 Station Forces Kingston, Ontario,

More information

CHAPTER 5 PREDICTIVE MODELING STUDIES TO DETERMINE THE CONVEYING VELOCITY OF PARTS ON VIBRATORY FEEDER

CHAPTER 5 PREDICTIVE MODELING STUDIES TO DETERMINE THE CONVEYING VELOCITY OF PARTS ON VIBRATORY FEEDER 93 CHAPTER 5 PREDICTIVE MODELING STUDIES TO DETERMINE THE CONVEYING VELOCITY OF PARTS ON VIBRATORY FEEDER 5.1 INTRODUCTION The development of an active trap based feeder for handling brakeliners was discussed

More information

Impedance 50 (75 connectors via adapters)

Impedance 50 (75 connectors via adapters) VECTOR NETWORK ANALYZER PLANAR TR1300/1 DATA SHEET Frequency range: 300 khz to 1.3 GHz Measured parameters: S11, S21 Dynamic range of transmission measurement magnitude: 130 db Measurement time per point:

More information

We employed reinforcement learning, with a goal of maximizing the expected value. Our bot learns to play better by repeated training against itself.

We employed reinforcement learning, with a goal of maximizing the expected value. Our bot learns to play better by repeated training against itself. Date: 12/14/07 Project Members: Elizabeth Lingg Alec Go Bharadwaj Srinivasan Title: Machine Learning Applied to Texas Hold 'Em Poker Introduction Part I For the first part of our project, we created a

More information

Parallel Ray Tracing using MPI: A Dynamic Load-balancing Approach

Parallel Ray Tracing using MPI: A Dynamic Load-balancing Approach Parallel Ray Tracing using MPI: A Dynamic Load-balancing Approach S. M. Ashraful Kadir 1 and Tazrian Khan 2 1 Scientific Computing, Royal Institute of Technology (KTH), Stockholm, Sweden smakadir@csc.kth.se,

More information

DEVELOPMENT OF MULTI INPUT MULTI OUTPUT COUPLED PROCESS CONTROL LABORATORY TEST SETUP

DEVELOPMENT OF MULTI INPUT MULTI OUTPUT COUPLED PROCESS CONTROL LABORATORY TEST SETUP International Journal of Advanced Research in Engineering and Technology (IJARET) Volume 7, Issue 1, Jan-Feb 2016, pp. 97-104, Article ID: IJARET_07_01_012 Available online at http://www.iaeme.com/ijaret/issues.asp?jtype=ijaret&vtype=7&itype=1

More information

Section A. Index. Section A. Planning, Budgeting and Forecasting Section A.2 Forecasting techniques... 1. Page 1 of 11. EduPristine CMA - Part I

Section A. Index. Section A. Planning, Budgeting and Forecasting Section A.2 Forecasting techniques... 1. Page 1 of 11. EduPristine CMA - Part I Index Section A. Planning, Budgeting and Forecasting Section A.2 Forecasting techniques... 1 EduPristine CMA - Part I Page 1 of 11 Section A. Planning, Budgeting and Forecasting Section A.2 Forecasting

More information

NEURAL networks [5] are universal approximators [6]. It

NEURAL networks [5] are universal approximators [6]. It Proceedings of the 2013 Federated Conference on Computer Science and Information Systems pp. 183 190 An Investment Strategy for the Stock Exchange Using Neural Networks Antoni Wysocki and Maciej Ławryńczuk

More information

Learning in Abstract Memory Schemes for Dynamic Optimization

Learning in Abstract Memory Schemes for Dynamic Optimization Fourth International Conference on Natural Computation Learning in Abstract Memory Schemes for Dynamic Optimization Hendrik Richter HTWK Leipzig, Fachbereich Elektrotechnik und Informationstechnik, Institut

More information

Neural Network Design in Cloud Computing

Neural Network Design in Cloud Computing International Journal of Computer Trends and Technology- volume4issue2-2013 ABSTRACT: Neural Network Design in Cloud Computing B.Rajkumar #1,T.Gopikiran #2,S.Satyanarayana *3 #1,#2Department of Computer

More information

The primary goal of this thesis was to understand how the spatial dependence of

The primary goal of this thesis was to understand how the spatial dependence of 5 General discussion 5.1 Introduction The primary goal of this thesis was to understand how the spatial dependence of consumer attitudes can be modeled, what additional benefits the recovering of spatial

More information

Critical Branching Neural Computation, Neural Avalanches, and 1/f Scaling

Critical Branching Neural Computation, Neural Avalanches, and 1/f Scaling Critical Branching Neural Computation, Neural Avalanches, and 1/f Scaling Christopher T. Kello (ckello@ucmerced.edu) Bryan Kerster (bkerster@ucmerced.edu) Eric Johnson (ejohnson5@ucmerced.edu) Cognitive

More information

TD(0) Leads to Better Policies than Approximate Value Iteration

TD(0) Leads to Better Policies than Approximate Value Iteration TD(0) Leads to Better Policies than Approximate Value Iteration Benjamin Van Roy Management Science and Engineering and Electrical Engineering Stanford University Stanford, CA 94305 bvr@stanford.edu Abstract

More information

An approach of detecting structure emergence of regional complex network of entrepreneurs: simulation experiment of college student start-ups

An approach of detecting structure emergence of regional complex network of entrepreneurs: simulation experiment of college student start-ups An approach of detecting structure emergence of regional complex network of entrepreneurs: simulation experiment of college student start-ups Abstract Yan Shen 1, Bao Wu 2* 3 1 Hangzhou Normal University,

More information

Knowledge Management in Call Centers: How Routing Rules Influence Expertise and Service Quality

Knowledge Management in Call Centers: How Routing Rules Influence Expertise and Service Quality Knowledge Management in Call Centers: How Routing Rules Influence Expertise and Service Quality Christoph Heitz Institute of Data Analysis and Process Design, Zurich University of Applied Sciences CH-84

More information

Name: Teacher: Olsen Hour:

Name: Teacher: Olsen Hour: Name: Teacher: Olsen Hour: The Nervous System: Part 1 Textbook p216-225 41 In all exercises, quizzes and tests in this class, always answer in your own words. That is the only way that you can show that

More information

Computational modeling of pair-association memory in inferior temporal cortex

Computational modeling of pair-association memory in inferior temporal cortex Title page (i) Computational modeling of pair-association memory in inferior temporal cortex (ii) Masahiko MORITA(1) and Atsuo SUEMITSU(2) (iii) (1) Institute of Engineering Mechanics and Systems, University

More information

Low Cost Correction of OCR Errors Using Learning in a Multi-Engine Environment

Low Cost Correction of OCR Errors Using Learning in a Multi-Engine Environment 2009 10th International Conference on Document Analysis and Recognition Low Cost Correction of OCR Errors Using Learning in a Multi-Engine Environment Ahmad Abdulkader Matthew R. Casey Google Inc. ahmad@abdulkader.org

More information

Neu. al Network Analysis of Distributed Representations of Dynamical Sensory-Motor rrransformations in the Leech

Neu. al Network Analysis of Distributed Representations of Dynamical Sensory-Motor rrransformations in the Leech 28 Lockery t Fang and Sejnowski Neu. al Network Analysis of Distributed Representations of Dynamical Sensory-Motor rrransformations in the Leech Shawn R. LockerYt Van Fangt and Terrence J. Sejnowski Computational

More information

Perceptual Processes in Matching and Recognition of Complex Pictures

Perceptual Processes in Matching and Recognition of Complex Pictures Journal of Experimental Psychology: Human Perception and Performance 2002, Vol. 28, No. 5, 1176 1191 Copyright 2002 by the American Psychological Association, Inc. 0096-1523/02/$5.00 DOI: 10.1037//0096-1523.28.5.1176

More information

Chapter 5: Learning I. Introduction: What Is Learning? learning Conditioning II. Classical Conditioning: Associating Stimuli Ivan Pavlov

Chapter 5: Learning I. Introduction: What Is Learning? learning Conditioning II. Classical Conditioning: Associating Stimuli Ivan Pavlov Chapter 5: Learning I. Introduction: What Is Learning? A. Psychologists define learning as a process that produces a relatively enduring change in behavior or knowledge as a result of an individual s experience.

More information

Chapter 2 The Research on Fault Diagnosis of Building Electrical System Based on RBF Neural Network

Chapter 2 The Research on Fault Diagnosis of Building Electrical System Based on RBF Neural Network Chapter 2 The Research on Fault Diagnosis of Building Electrical System Based on RBF Neural Network Qian Wu, Yahui Wang, Long Zhang and Li Shen Abstract Building electrical system fault diagnosis is the

More information

Okami Study Guide: Chapter 7

Okami Study Guide: Chapter 7 1 Chapter Test 1. Knowing how to do something, like drive a car or play a sport, is referred to as a. explicit knowledge b. behavioral knowledge c. procedural knowledge d. implicit knowledge 2. All of

More information