4 Perceptron Learning Rule


Objectives
Theory and Examples
Learning Rules
Perceptron Architecture
Single-Neuron Perceptron
Multiple-Neuron Perceptron
Perceptron Learning Rule
Test Problem
Constructing Learning Rules
Unified Learning Rule
Training Multiple-Neuron Perceptrons
Proof of Convergence
Notation
Proof
Limitations
Summary of Results
Solved Problems
Epilogue
Further Reading
Exercises

Objectives

One of the questions we raised in Chapter 3 was: "How do we determine the weight matrix and bias for perceptron networks with many inputs, where it is impossible to visualize the decision boundaries?" In this chapter we will describe an algorithm for training perceptron networks, so that they can learn to solve classification problems. We will begin by explaining what a learning rule is and will then develop the perceptron learning rule. We will conclude by discussing the advantages and limitations of the single-layer perceptron network. This discussion will lead us into future chapters.

Theory and Examples

In 1943, Warren McCulloch and Walter Pitts introduced one of the first artificial neurons [McPi43]. The main feature of their neuron model is that a weighted sum of input signals is compared to a threshold to determine the neuron output. When the sum is greater than or equal to the threshold, the output is 1. When the sum is less than the threshold, the output is 0. They went on to show that networks of these neurons could, in principle, compute any arithmetic or logical function. Unlike biological networks, the parameters of their networks had to be designed, as no training method was available. However, the perceived connection between biology and digital computers generated a great deal of interest.

In the late 1950s, Frank Rosenblatt and several other researchers developed a class of neural networks called perceptrons. The neurons in these networks were similar to those of McCulloch and Pitts. Rosenblatt's key contribution was the introduction of a learning rule for training perceptron networks to solve pattern recognition problems [Rose58]. He proved that his learning rule will always converge to the correct network weights, if weights exist that solve the problem. Learning was simple and automatic. Examples of proper behavior were presented to the network, which learned from its mistakes. The perceptron could even learn when initialized with random values for its weights and biases.

Unfortunately, the perceptron network is inherently limited. These limitations were widely publicized in the book Perceptrons [MiPa69] by Marvin Minsky and Seymour Papert. They demonstrated that the perceptron networks were incapable of implementing certain elementary functions. It was not until the 1980s that these limitations were overcome with improved (multilayer) perceptron networks and associated learning rules. We will discuss these improvements in Chapters 11 and 12.

Today the perceptron is still viewed as an important network. It remains a fast and reliable network for the class of problems that it can solve. In addition, an understanding of the operations of the perceptron provides a good basis for understanding more complex networks. Thus, the perceptron network, and its associated learning rule, are well worth discussing here.

In the remainder of this chapter we will define what we mean by a learning rule, explain the perceptron network and learning rule, and discuss the limitations of the perceptron network.

Learning Rules

As we begin our discussion of the perceptron learning rule, we want to discuss learning rules in general. By learning rule we mean a procedure for modifying the weights and biases of a network. (This procedure may also be referred to as a training algorithm.)

The purpose of the learning rule is to train the network to perform some task. There are many types of neural network learning rules. They fall into three broad categories: supervised learning, unsupervised learning and reinforcement (or graded) learning.

In supervised learning, the learning rule is provided with a set of examples (the training set) of proper network behavior:

$\{p_1, t_1\}, \{p_2, t_2\}, \ldots, \{p_Q, t_Q\}$,   (4.1)

where $p_q$ is an input to the network and $t_q$ is the corresponding correct (target) output. As the inputs are applied to the network, the network outputs are compared to the targets. The learning rule is then used to adjust the weights and biases of the network in order to move the network outputs closer to the targets. The perceptron learning rule falls in this supervised learning category. We will also investigate supervised learning algorithms in Chapters 7–12.

Reinforcement learning is similar to supervised learning, except that, instead of being provided with the correct output for each network input, the algorithm is only given a grade. The grade (or score) is a measure of the network performance over some sequence of inputs. This type of learning is currently much less common than supervised learning. It appears to be most suited to control system applications (see [BaSu83], [WhSo92]).

In unsupervised learning, the weights and biases are modified in response to network inputs only. There are no target outputs available. At first glance this might seem to be impractical. How can you train a network if you don't know what it is supposed to do? Most of these algorithms perform some kind of clustering operation. They learn to categorize the input patterns into a finite number of classes. This is especially useful in such applications as vector quantization. We will see in Chapters 13–16 that there are a number of unsupervised learning algorithms.

Perceptron Architecture

Before we present the perceptron learning rule, let's expand our investigation of the perceptron network, which we began in Chapter 3. The general perceptron network is shown in Figure 4.1. The output of the network is given by

$a = \mathrm{hardlim}(Wp + b)$.   (4.2)

(Note that in Chapter 3 we used the hardlims transfer function, instead of hardlim. This does not affect the capabilities of the network. See Exercise E4.6.)

Figure 4.1 Perceptron Network (a hard limit layer: input vector $p$ of dimension $R$, weight matrix $W$ of dimension $S \times R$, bias vector $b$ of dimension $S \times 1$, and output $a = \mathrm{hardlim}(Wp + b)$)

It will be useful in our development of the perceptron learning rule to be able to conveniently reference individual elements of the network output. Let's see how this can be done. First, consider the network weight matrix:

$W = \begin{bmatrix} w_{1,1} & w_{1,2} & \cdots & w_{1,R} \\ w_{2,1} & w_{2,2} & \cdots & w_{2,R} \\ \vdots & \vdots & & \vdots \\ w_{S,1} & w_{S,2} & \cdots & w_{S,R} \end{bmatrix}$.   (4.3)

We will define a vector composed of the elements of the $i$th row of $W$:

${}_iw = \begin{bmatrix} w_{i,1} \\ w_{i,2} \\ \vdots \\ w_{i,R} \end{bmatrix}$.   (4.4)

Now we can partition the weight matrix:

$W = \begin{bmatrix} {}_1w^T \\ {}_2w^T \\ \vdots \\ {}_Sw^T \end{bmatrix}$.   (4.5)

This allows us to write the $i$th element of the network output vector as

$a_i = \mathrm{hardlim}(n_i) = \mathrm{hardlim}({}_iw^T p + b_i)$.   (4.6)
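It may help to see Eq. (4.2) and Eq. (4.6) in executable form. The following MATLAB sketch (MATLAB is used later in this chapter's solved problems) defines hardlim inline so that no toolbox is required; the particular W, b and p values here are arbitrary placeholders, not taken from the text.

    hardlim = @(n) double(n >= 0);   % 1 where the net input is >= 0, else 0

    W = [1 2; -1 0.5];               % S x R weight matrix (placeholder values)
    b = [0.5; -1];                   % S x 1 bias vector (placeholder values)
    p = [1; -1];                     % R x 1 input vector (placeholder values)

    a = hardlim(W*p + b);            % Eq. (4.2): all S outputs at once

    % Equivalent row-by-row computation, using the rows of W:
    a_i = zeros(size(W,1), 1);
    for i = 1:size(W,1)
        a_i(i) = hardlim(W(i,:)*p + b(i));   % Eq. (4.6): a_i = hardlim(iw'p + b_i)
    end

Both computations produce the same output vector; the loop merely makes the row partition of Eq. (4.5) explicit.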

Recall that the hardlim transfer function is defined as

$a = \mathrm{hardlim}(n) = \begin{cases} 1 & \text{if } n \ge 0 \\ 0 & \text{otherwise.} \end{cases}$   (4.7)

Therefore, if the inner product of the $i$th row of the weight matrix with the input vector is greater than or equal to $-b_i$, the output will be 1; otherwise the output will be 0. Thus each neuron in the network divides the input space into two regions. It is useful to investigate the boundaries between these regions. We will begin with the simple case of a single-neuron perceptron with two inputs.

Single-Neuron Perceptron

Let's consider a two-input perceptron with one neuron, as shown in Figure 4.2.

Figure 4.2 Two-Input/Single-Output Perceptron

The output of this network is determined by

$a = \mathrm{hardlim}(n) = \mathrm{hardlim}(Wp + b) = \mathrm{hardlim}({}_1w^T p + b) = \mathrm{hardlim}(w_{1,1}p_1 + w_{1,2}p_2 + b)$.   (4.8)

The decision boundary is determined by the input vectors for which the net input $n$ is zero:

$n = {}_1w^T p + b = w_{1,1}p_1 + w_{1,2}p_2 + b = 0$.   (4.9)

To make the example more concrete, let's assign the following values for the weights and bias:

$w_{1,1} = 1$, $w_{1,2} = 1$, $b = -1$.   (4.10)

The decision boundary is then

$n = {}_1w^T p + b = p_1 + p_2 - 1 = 0$.   (4.11)

This defines a line in the input space. On one side of the line the network output will be 0; on the line and on the other side of the line the output will be 1. To draw the line, we can find the points where it intersects the $p_1$ and $p_2$ axes. To find the $p_2$ intercept, set $p_1 = 0$:

$p_2 = -\dfrac{b}{w_{1,2}} = -\dfrac{-1}{1} = 1 \quad \text{if } p_1 = 0$.   (4.12)

To find the $p_1$ intercept, set $p_2 = 0$:

$p_1 = -\dfrac{b}{w_{1,1}} = -\dfrac{-1}{1} = 1 \quad \text{if } p_2 = 0$.   (4.13)

The resulting decision boundary is illustrated in Figure 4.3. To find out which side of the boundary corresponds to an output of 1, we just need to test one point. For the input $p = \begin{bmatrix} 2 & 0 \end{bmatrix}^T$, the network output will be

$a = \mathrm{hardlim}({}_1w^T p + b) = \mathrm{hardlim}\left( \begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 0 \end{bmatrix} - 1 \right) = 1$.   (4.14)

Therefore, the network output will be 1 for the region above and to the right of the decision boundary. This region is indicated by the shaded area in Figure 4.3.

Figure 4.3 Decision Boundary for Two-Input Perceptron

We can also find the decision boundary graphically. The first step is to note that the boundary is always orthogonal to ${}_1w$, as illustrated in the adjacent figures. The boundary is defined by

${}_1w^T p + b = 0$.   (4.15)

For all points on the boundary, the inner product of the input vector with the weight vector is the same. This implies that these input vectors will all have the same projection onto the weight vector, so they must lie on a line orthogonal to the weight vector. (These concepts will be covered in more detail in Chapter 5.) In addition, any vector in the shaded region of Figure 4.3 will have an inner product greater than $-b$, and vectors in the unshaded region will have inner products less than $-b$. Therefore the weight vector ${}_1w$ will always point toward the region where the neuron output is 1.

After we have selected a weight vector with the correct angular orientation, the bias value can be computed by selecting a point on the boundary and satisfying Eq. (4.15).

Let's apply some of these concepts to the design of a perceptron network to implement a simple logic function: the AND gate. The input/target pairs for the AND gate are

$\left\{ p_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, t_1 = 0 \right\}, \left\{ p_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, t_2 = 0 \right\}, \left\{ p_3 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, t_3 = 0 \right\}, \left\{ p_4 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, t_4 = 1 \right\}$.

The figure to the left illustrates the problem graphically. It displays the input space, with each input vector labeled according to its target. The dark circles indicate that the target is 1, and the light circles indicate that the target is 0.

The first step of the design is to select a decision boundary. We want to have a line that separates the dark circles and the light circles. There are an infinite number of solutions to this problem. It seems reasonable to choose the line that falls "halfway" between the two categories of inputs, as shown in the adjacent figure.

Next we want to choose a weight vector that is orthogonal to the decision boundary. The weight vector can be any length, so there are infinite possibilities. One choice is

${}_1w = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$,   (4.16)

as displayed in the figure to the left.

Finally, we need to find the bias, $b$. We can do this by picking a point on the decision boundary and satisfying Eq. (4.15). If we use $p = \begin{bmatrix} 1.5 & 0 \end{bmatrix}^T$ we find

${}_1w^T p + b = \begin{bmatrix} 2 & 2 \end{bmatrix} \begin{bmatrix} 1.5 \\ 0 \end{bmatrix} + b = 3 + b = 0 \quad\Rightarrow\quad b = -3$.   (4.17)

We can now test the network on one of the input/target pairs. If we apply $p_2$ to the network, the output will be

$a = \mathrm{hardlim}({}_1w^T p_2 + b) = \mathrm{hardlim}\left( \begin{bmatrix} 2 & 2 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} - 3 \right) = \mathrm{hardlim}(-1) = 0$,   (4.18)

which is equal to the target output $t_2$. Verify for yourself that all inputs are correctly classified.
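As a quick check of this design, the following MATLAB sketch (using the same inline hardlim definition as before, so that no toolbox is needed) applies the chosen weights and bias to all four AND-gate inputs at once:

    hardlim = @(n) double(n >= 0);
    w = [2; 2];  b = -3;          % the AND-gate design of Eqs. (4.16)-(4.17)
    P = [0 0 1 1; 0 1 0 1];       % the four input vectors as columns
    a = hardlim(w'*P + b)         % should print 0 0 0 1, matching the targets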

To experiment with decision boundaries, use the Neural Network Design Demonstration Decision Boundaries (nnd4db).

Multiple-Neuron Perceptron

Note that for perceptrons with multiple neurons, as in Figure 4.1, there will be one decision boundary for each neuron. The decision boundary for neuron $i$ will be defined by

${}_iw^T p + b_i = 0$.   (4.19)

A single-neuron perceptron can classify input vectors into two categories, since its output can be either 0 or 1. A multiple-neuron perceptron can classify inputs into many categories. Each category is represented by a different output vector. Since each element of the output vector can be either 0 or 1, there are a total of $2^S$ possible categories, where $S$ is the number of neurons.

Perceptron Learning Rule

Now that we have examined the performance of perceptron networks, we are in a position to introduce the perceptron learning rule. This learning rule is an example of supervised training, in which the learning rule is provided with a set of examples of proper network behavior:

$\{p_1, t_1\}, \{p_2, t_2\}, \ldots, \{p_Q, t_Q\}$,   (4.20)

where $p_q$ is an input to the network and $t_q$ is the corresponding target output. As each input is applied to the network, the network output is compared to the target. The learning rule then adjusts the weights and biases of the network in order to move the network output closer to the target.

Test Problem

In our presentation of the perceptron learning rule we will begin with a simple test problem and will experiment with possible rules to develop some intuition about how the rule should work. The input/target pairs for our test problem are

$\left\{ p_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, t_1 = 1 \right\}, \left\{ p_2 = \begin{bmatrix} -1 \\ 2 \end{bmatrix}, t_2 = 0 \right\}, \left\{ p_3 = \begin{bmatrix} 0 \\ -1 \end{bmatrix}, t_3 = 0 \right\}$.

The problem is displayed graphically in the adjacent figure, where the two input vectors whose target is 0 are represented with a light circle, and the vector whose target is 1 is represented with a dark circle. This is a very simple problem, and we could almost obtain a solution by inspection. This simplicity will help us gain some intuitive understanding of the basic concepts of the perceptron learning rule.

The network for this problem should have two inputs and one output. To simplify our development of the learning rule, we will begin with a network without a bias. The network will then have just two parameters, $w_{1,1}$ and $w_{1,2}$, as shown in Figure 4.4.

Figure 4.4 Test Problem Network (a no-bias neuron with output $a = \mathrm{hardlim}(Wp)$)

By removing the bias we are left with a network whose decision boundary must pass through the origin. We need to be sure that this network is still able to solve the test problem. There must be an allowable decision boundary that can separate the vectors $p_2$ and $p_3$ from the vector $p_1$. The figure to the left illustrates that there are indeed an infinite number of such boundaries.

The adjacent figure shows the weight vectors that correspond to the allowable decision boundaries. (Recall that the weight vector is orthogonal to the decision boundary.) We would like a learning rule that will find a weight vector that points in one of these directions. Remember that the length of the weight vector does not matter; only its direction is important.

Constructing Learning Rules

Training begins by assigning some initial values for the network parameters. In this case we are training a two-input/single-output network without a bias, so we only have to initialize its two weights. Here we set the elements of the weight vector, ${}_1w$, to the following randomly generated values:

${}_1w^T = \begin{bmatrix} 1.0 & -0.8 \end{bmatrix}$.   (4.21)

We will now begin presenting the input vectors to the network. We begin with $p_1$:

$a = \mathrm{hardlim}({}_1w^T p_1) = \mathrm{hardlim}\left( \begin{bmatrix} 1.0 & -0.8 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right) = \mathrm{hardlim}(-0.6) = 0$.   (4.22)

The network has not returned the correct value. The network output is 0, while the target response, $t_1$, is 1.

We can see what happened by looking at the adjacent diagram. The initial weight vector results in a decision boundary that incorrectly classifies the vector $p_1$. We need to alter the weight vector so that it points more toward $p_1$, so that in the future it has a better chance of classifying it correctly.

One approach would be to set ${}_1w$ equal to $p_1$. This is simple and would ensure that $p_1$ was classified properly in the future. Unfortunately, it is easy to construct a problem for which this rule cannot find a solution. The diagram to the lower left shows a problem that cannot be solved with the weight vector pointing directly at either of the two class-1 vectors. If we apply the rule ${}_1w = p$ every time one of these vectors is misclassified, the network's weights will simply oscillate back and forth and will never find a solution.

Another possibility would be to add $p_1$ to ${}_1w$. Adding $p_1$ to ${}_1w$ would make ${}_1w$ point more in the direction of $p_1$. Repeated presentations of $p_1$ would cause the direction of ${}_1w$ to asymptotically approach the direction of $p_1$. This rule can be stated:

If $t = 1$ and $a = 0$, then ${}_1w^{new} = {}_1w^{old} + p$.   (4.23)

Applying this rule to our test problem results in new values for ${}_1w$:

${}_1w^{new} = {}_1w^{old} + p_1 = \begin{bmatrix} 1.0 \\ -0.8 \end{bmatrix} + \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2.0 \\ 1.2 \end{bmatrix}$.   (4.24)

This operation is illustrated in the adjacent figure. We now move on to the next input vector and will continue making changes to the weights and cycling through the inputs until they are all classified correctly.

The next input vector is $p_2$. When it is presented to the network we find:

$a = \mathrm{hardlim}({}_1w^T p_2) = \mathrm{hardlim}\left( \begin{bmatrix} 2.0 & 1.2 \end{bmatrix} \begin{bmatrix} -1 \\ 2 \end{bmatrix} \right) = \mathrm{hardlim}(0.4) = 1$.   (4.25)

The target $t_2$ associated with $p_2$ is 0 and the output $a$ is 1. A class-0 vector was misclassified as a 1. Since we would now like to move the weight vector ${}_1w$ away from the input, we can simply change the addition in Eq. (4.23) to subtraction:

If $t = 0$ and $a = 1$, then ${}_1w^{new} = {}_1w^{old} - p$.   (4.26)

If we apply this to the test problem we find:

${}_1w^{new} = {}_1w^{old} - p_2 = \begin{bmatrix} 2.0 \\ 1.2 \end{bmatrix} - \begin{bmatrix} -1 \\ 2 \end{bmatrix} = \begin{bmatrix} 3.0 \\ -0.8 \end{bmatrix}$,   (4.27)

which is illustrated in the adjacent figure.

Now we present the third vector $p_3$:

$a = \mathrm{hardlim}({}_1w^T p_3) = \mathrm{hardlim}\left( \begin{bmatrix} 3.0 & -0.8 \end{bmatrix} \begin{bmatrix} 0 \\ -1 \end{bmatrix} \right) = \mathrm{hardlim}(0.8) = 1$.   (4.28)

The current ${}_1w$ results in a decision boundary that misclassifies $p_3$. This is a situation for which we already have a rule, so ${}_1w$ will be updated again, according to Eq. (4.26):

${}_1w^{new} = {}_1w^{old} - p_3 = \begin{bmatrix} 3.0 \\ -0.8 \end{bmatrix} - \begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 3.0 \\ 0.2 \end{bmatrix}$.   (4.29)

The diagram to the left shows that the perceptron has finally learned to classify the three vectors properly. If we present any of the input vectors to the neuron, it will output the correct class for that input vector.

This brings us to our third and final rule: if it works, don't fix it.

If $t = a$, then ${}_1w^{new} = {}_1w^{old}$.   (4.30)

Here are the three rules, which cover all possible combinations of output and target values:

If $t = 1$ and $a = 0$, then ${}_1w^{new} = {}_1w^{old} + p$.
If $t = 0$ and $a = 1$, then ${}_1w^{new} = {}_1w^{old} - p$.
If $t = a$, then ${}_1w^{new} = {}_1w^{old}$.   (4.31)

Unified Learning Rule

The three rules in Eq. (4.31) can be rewritten as a single expression. First we will define a new variable, the perceptron error $e$:

$e = t - a$.   (4.32)

We can now rewrite the three rules of Eq. (4.31) as:

If $e = 1$, then ${}_1w^{new} = {}_1w^{old} + p$.
If $e = -1$, then ${}_1w^{new} = {}_1w^{old} - p$.
If $e = 0$, then ${}_1w^{new} = {}_1w^{old}$.   (4.33)

Looking carefully at the first two rules in Eq. (4.33) we can see that the sign of $p$ is the same as the sign of the error, $e$. Furthermore, the absence of $p$ in the third rule corresponds to an $e$ of 0. Thus, we can unify the three rules into a single expression:

${}_1w^{new} = {}_1w^{old} + ep = {}_1w^{old} + (t - a)p$.   (4.34)

This rule can be extended to train the bias by noting that a bias is simply a weight whose input is always 1. We can thus replace the input $p$ in Eq. (4.34) with the input to the bias, which is 1. The result is the perceptron rule for a bias:

$b^{new} = b^{old} + e$.   (4.35)
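To see the unified rule of Eq. (4.34) in action, here is a short MATLAB sketch that reproduces the three weight updates made on the test problem above. It handles only the no-bias case used in this example; hardlim is again defined inline.

    hardlim = @(n) double(n >= 0);
    P = [1 -1 0; 2 2 -1];          % test-problem inputs p1, p2, p3 as columns
    t = [1 0 0];                   % corresponding targets
    w = [1.0; -0.8];               % initial weights from Eq. (4.21)

    for q = 1:3                    % one pass through the three vectors
        a = hardlim(w'*P(:,q));    % present the input
        e = t(q) - a;              % perceptron error, Eq. (4.32)
        w = w + e*P(:,q);          % unified rule, Eq. (4.34), no bias
    end
    w                              % should print [3.0; 0.2], as in Eq. (4.29)

One pass through the three vectors happens to be enough here; in general the loop would repeat until a full pass produces no errors.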

Training Multiple-Neuron Perceptrons

The perceptron rule, as given by Eq. (4.34) and Eq. (4.35), updates the weight vector of a single-neuron perceptron. We can generalize this rule for the multiple-neuron perceptron of Figure 4.1 as follows. To update the $i$th row of the weight matrix use:

${}_iw^{new} = {}_iw^{old} + e_i p$.   (4.36)

To update the $i$th element of the bias vector use:

$b_i^{new} = b_i^{old} + e_i$.   (4.37)

The perceptron rule can be written conveniently in matrix notation:

$W^{new} = W^{old} + ep^T$   (4.38)

and

$b^{new} = b^{old} + e$.   (4.39)

To test the perceptron learning rule, consider again the apple/orange recognition problem of Chapter 3. The input/output prototype vectors will be

$\left\{ p_1 = \begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix}, t_1 = 0 \right\}, \left\{ p_2 = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}, t_2 = 1 \right\}$.   (4.40)

(Note that we are using 0 as the target output for the orange pattern, $p_1$, instead of −1, as was used in Chapter 3. This is because we are using the hardlim transfer function, instead of hardlims.)

Typically the weights and biases are initialized to small random numbers. Suppose that here we start with the initial weight matrix and bias:

$W = \begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix}$, $b = 0.5$.   (4.41)

The first step is to apply the first input vector, $p_1$, to the network:

$a = \mathrm{hardlim}(Wp_1 + b) = \mathrm{hardlim}\left( \begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix} + 0.5 \right) = \mathrm{hardlim}(2.5) = 1$.   (4.42)

Then we calculate the error:

$e = t_1 - a = 0 - 1 = -1$.   (4.43)

The weight update is

$W^{new} = W^{old} + ep^T = \begin{bmatrix} 0.5 & -1 & -0.5 \end{bmatrix} + (-1)\begin{bmatrix} 1 & -1 & -1 \end{bmatrix} = \begin{bmatrix} -0.5 & 0 & 0.5 \end{bmatrix}$.   (4.44)

The bias update is

$b^{new} = b^{old} + e = 0.5 + (-1) = -0.5$.   (4.45)

This completes the first iteration. The second iteration of the perceptron rule is:

$a = \mathrm{hardlim}(Wp_2 + b) = \mathrm{hardlim}\left( \begin{bmatrix} -0.5 & 0 & 0.5 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} + (-0.5) \right) = \mathrm{hardlim}(-1.5) = 0$   (4.46)

$e = t_2 - a = 1 - 0 = 1$   (4.47)

$W^{new} = W^{old} + ep^T = \begin{bmatrix} -0.5 & 0 & 0.5 \end{bmatrix} + (1)\begin{bmatrix} 1 & 1 & -1 \end{bmatrix} = \begin{bmatrix} 0.5 & 1 & -0.5 \end{bmatrix}$   (4.48)

$b^{new} = b^{old} + e = -0.5 + 1 = 0.5$.   (4.49)

The third iteration begins again with the first input vector:

$a = \mathrm{hardlim}(Wp_1 + b) = \mathrm{hardlim}\left( \begin{bmatrix} 0.5 & 1 & -0.5 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \\ -1 \end{bmatrix} + 0.5 \right) = \mathrm{hardlim}(0.5) = 1$   (4.50)

$e = t_1 - a = 0 - 1 = -1$   (4.51)

$W^{new} = W^{old} + ep^T = \begin{bmatrix} 0.5 & 1 & -0.5 \end{bmatrix} + (-1)\begin{bmatrix} 1 & -1 & -1 \end{bmatrix} = \begin{bmatrix} -0.5 & 2 & 0.5 \end{bmatrix}$   (4.52)

$b^{new} = b^{old} + e = 0.5 + (-1) = -0.5$.   (4.53)
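The remaining iterations can be automated. The sketch below (an illustration using the same inline hardlim as earlier) cycles through the two prototype vectors until a complete pass produces no errors, applying Eq. (4.38) and Eq. (4.39) after each presentation:

    hardlim = @(n) double(n >= 0);
    P = [1 1; -1 1; -1 -1];         % orange and apple prototypes as columns
    t = [0 1];                      % targets from Eq. (4.40)
    W = [0.5 -1 -0.5];  b = 0.5;    % initial values from Eq. (4.41)

    errors = 1;
    while errors > 0                % repeat until a clean pass
        errors = 0;
        for q = 1:2
            a = hardlim(W*P(:,q) + b);
            e = t(q) - a;
            W = W + e*P(:,q)';      % Eq. (4.38)
            b = b + e;              % Eq. (4.39)
            errors = errors + abs(e);
        end
    end
    W, b                            % one solution; the boundary differs from Chapter 3's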

If you continue with the iterations you will find that both input vectors will now be correctly classified. The algorithm has converged to a solution. Note that the final decision boundary is not the same as the one we developed in Chapter 3, although both boundaries correctly classify the two input vectors.

To experiment with the perceptron learning rule, use the Neural Network Design Demonstration Perceptron Rule (nnd4pr).

Proof of Convergence

Although the perceptron learning rule is simple, it is quite powerful. In fact, it can be shown that the rule will always converge to weights that accomplish the desired classification (assuming that such weights exist). In this section we will present a proof of convergence for the perceptron learning rule for the single-neuron perceptron shown in Figure 4.5.

Figure 4.5 Single-Neuron Perceptron

The output of this perceptron is obtained from

$a = \mathrm{hardlim}({}_1w^T p + b)$.   (4.54)

The network is provided with the following examples of proper network behavior:

$\{p_1, t_1\}, \{p_2, t_2\}, \ldots, \{p_Q, t_Q\}$,   (4.55)

where each target output, $t_q$, is either 0 or 1.

Notation

To conveniently present the proof we will first introduce some new notation. We will combine the weight matrix and the bias into a single vector:

$x = \begin{bmatrix} {}_1w \\ b \end{bmatrix}$.   (4.56)

We will also augment the input vectors with a 1, corresponding to the bias input:

$z_q = \begin{bmatrix} p_q \\ 1 \end{bmatrix}$.   (4.57)

Now we can express the net input to the neuron as follows:

$n = {}_1w^T p + b = x^T z$.   (4.58)

The perceptron learning rule for a single-neuron perceptron (Eq. (4.34) and Eq. (4.35)) can now be written

$x^{new} = x^{old} + ez$.   (4.59)

The error $e$ can be either 1, −1 or 0. If $e = 0$, then no change is made to the weights. If $e = 1$, then the input vector is added to the weight vector. If $e = -1$, then the negative of the input vector is added to the weight vector. If we count only those iterations for which the weight vector is changed, the learning rule becomes

$x(k) = x(k-1) + z'(k-1)$,   (4.60)

where $z'(k-1)$ is the appropriate member of the set

$\{z_1, z_2, \ldots, z_Q, -z_1, -z_2, \ldots, -z_Q\}$.   (4.61)

We will assume that a weight vector exists that can correctly categorize all $Q$ input vectors. This solution will be denoted $x^*$. For this weight vector we will assume that

$x^{*T} z_q > \delta > 0 \quad \text{if } t_q = 1$,   (4.62)

and

$x^{*T} z_q < -\delta < 0 \quad \text{if } t_q = 0$.   (4.63)

Proof

We are now ready to begin the proof of the perceptron convergence theorem. The objective of the proof is to find upper and lower bounds on the length of the weight vector at each stage of the algorithm.

Assume that the algorithm is initialized with the zero weight vector: $x(0) = 0$. (This does not affect the generality of our argument.) Then, after $k$ iterations (changes to the weight vector), we find from Eq. (4.60):

$x(k) = z'(0) + z'(1) + \cdots + z'(k-1)$.   (4.64)

If we take the inner product of the solution weight vector with the weight vector at iteration $k$ we obtain

$x^{*T} x(k) = x^{*T} z'(0) + x^{*T} z'(1) + \cdots + x^{*T} z'(k-1)$.   (4.65)

From Eq. (4.61)–Eq. (4.63) we can show that

$x^{*T} z'(i) > \delta$.   (4.66)

Therefore

$x^{*T} x(k) > k\delta$.   (4.67)

From the Cauchy-Schwartz inequality (see [Brog91]),

$(x^{*T} x(k))^2 \le \|x^*\|^2 \|x(k)\|^2$,   (4.68)

where

$\|x\|^2 = x^T x$.   (4.69)

If we combine Eq. (4.67) and Eq. (4.68) we can put a lower bound on the squared length of the weight vector at iteration $k$:

$\|x(k)\|^2 \ge \dfrac{(x^{*T} x(k))^2}{\|x^*\|^2} > \dfrac{(k\delta)^2}{\|x^*\|^2}$.   (4.70)

Next we want to find an upper bound for the length of the weight vector. We begin by finding the change in the length at iteration $k$:

$\|x(k)\|^2 = x^T(k)x(k) = [x(k-1) + z'(k-1)]^T [x(k-1) + z'(k-1)] = x^T(k-1)x(k-1) + 2x^T(k-1)z'(k-1) + z'^T(k-1)z'(k-1)$.   (4.71)

Note that

$x^T(k-1) z'(k-1) \le 0$,   (4.72)

since the weights would not be updated unless the previous input vector had been misclassified. Now Eq. (4.71) can be simplified to

$\|x(k)\|^2 \le \|x(k-1)\|^2 + \|z'(k-1)\|^2$.   (4.73)

We can repeat this process for $\|x(k-1)\|^2$, $\|x(k-2)\|^2$, etc., to obtain

$\|x(k)\|^2 \le \|z'(0)\|^2 + \cdots + \|z'(k-1)\|^2$.   (4.74)

If $\Pi = \max\{\|z'(i)\|^2\}$, this upper bound can be simplified to

$\|x(k)\|^2 \le k\Pi$.   (4.75)

We now have an upper bound (Eq. (4.75)) and a lower bound (Eq. (4.70)) on the squared length of the weight vector at iteration $k$. If we combine the two inequalities we find

$k\Pi \ge \|x(k)\|^2 > \dfrac{(k\delta)^2}{\|x^*\|^2} \quad\text{or}\quad k < \dfrac{\Pi \|x^*\|^2}{\delta^2}$.   (4.76)

Because $k$ has an upper bound, this means that the weights will only be changed a finite number of times. Therefore, the perceptron learning rule will converge in a finite number of iterations.

The maximum number of iterations (changes to the weight vector) is inversely related to the square of $\delta$. This parameter is a measure of how close the solution decision boundary is to the input patterns. This means that if the input classes are difficult to separate (are close to the decision boundary) it will take many iterations for the algorithm to converge.

Note that there are only three key assumptions required for the proof:

1. A solution to the problem exists, so that Eq. (4.66) is satisfied.
2. The weights are only updated when the input vector is misclassified, therefore Eq. (4.72) is satisfied.
3. An upper bound, $\Pi$, exists for the length of the input vectors.

Because of the generality of the proof, there are many variations of the perceptron learning rule that can also be shown to converge. (See Exercise E4.9.)
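As a rough numerical illustration of the bound in Eq. (4.76) (a sketch, with δ and Π estimated from a particular known solution rather than derived analytically), consider the no-bias test problem solved earlier. Taking x* = [3.0; 0.2], the weight vector the rule itself found, δ can be taken as (just under) the smallest margin |x*ᵀz_q| and Π as the largest squared input length:

    Z = [1 -1 0; 2 2 -1];                % test-problem inputs (no bias, so z = p)
    xstar = [3.0; 0.2];                  % a known solution weight vector
    delta = min(abs(xstar'*Z));          % (just under) the smallest margin: 0.2 here
    Pi = max(sum(Z.^2));                 % largest squared input length: 5 here
    kmax = Pi*(xstar'*xstar)/delta^2     % bound on weight changes: 1130

The rule actually converged after only three weight changes, so the bound is very loose here; it guarantees finiteness rather than predicting the actual count.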

Limitations

The perceptron learning rule is guaranteed to converge to a solution in a finite number of steps, so long as a solution exists. This brings us to an important question. What problems can a perceptron solve? Recall that a single-neuron perceptron is able to divide the input space into two regions. The boundary between the regions is defined by the equation

${}_1w^T p + b = 0$.   (4.77)

This is a linear boundary (hyperplane). The perceptron can be used to classify input vectors that can be separated by a linear boundary. We call such vectors linearly separable. The logical AND gate example on page 4-7 illustrates a two-dimensional example of a linearly separable problem. The apple/orange recognition problem of Chapter 3 was a three-dimensional example.

Unfortunately, many problems are not linearly separable. The classic example is the XOR gate. The input/target pairs for the XOR gate are

$\left\{ p_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, t_1 = 0 \right\}, \left\{ p_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, t_2 = 1 \right\}, \left\{ p_3 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, t_3 = 1 \right\}, \left\{ p_4 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, t_4 = 0 \right\}$.

This problem is illustrated graphically on the left side of Figure 4.6, which also shows two other linearly inseparable problems. Try drawing a straight line between the vectors with targets of 1 and those with targets of 0 in any of the diagrams of Figure 4.6.

Figure 4.6 Linearly Inseparable Problems

It was the inability of the basic perceptron to solve such simple problems that led, in part, to a reduction in interest in neural network research during the 1970s. Rosenblatt had investigated more complex networks, which he felt would overcome the limitations of the basic perceptron, but he was never able to effectively extend the perceptron rule to such networks. In Chapter 11 we will introduce multilayer perceptrons, which can solve arbitrary classification problems, and will describe the backpropagation algorithm, which can be used to train them.
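The XOR limitation is easy to demonstrate empirically. The following MATLAB sketch (illustrative only; the pass limit of 100 is an arbitrary cap, needed precisely because the rule cannot converge here) runs the perceptron rule on the XOR pairs and then counts the patterns that remain misclassified:

    hardlim = @(n) double(n >= 0);
    P = [0 0 1 1; 0 1 0 1];  t = [0 1 1 0];    % XOR input/target pairs
    w = [0; 0];  b = 0;

    for epoch = 1:100                          % arbitrary cap on the passes
        for q = 1:4
            e = t(q) - hardlim(w'*P(:,q) + b);
            w = w + e*P(:,q);  b = b + e;      % perceptron rule
        end
    end
    misclassified = sum(hardlim(w'*P + b) ~= t)

Since no linear boundary separates the two classes, misclassified is never 0, no matter how long the loop runs; the weights simply cycle.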

Summary of Results

Perceptron Architecture

(Figure: single-layer hard limit network; input vector $p$ ($R \times 1$), weight matrix $W$ ($S \times R$), bias vector $b$ ($S \times 1$), output $a$ ($S \times 1$).)

$a = \mathrm{hardlim}(Wp + b)$

$W = \begin{bmatrix} {}_1w^T \\ {}_2w^T \\ \vdots \\ {}_Sw^T \end{bmatrix}$

$a_i = \mathrm{hardlim}(n_i) = \mathrm{hardlim}({}_iw^T p + b_i)$

Decision Boundary

${}_iw^T p + b_i = 0$

The decision boundary is always orthogonal to the weight vector. Single-layer perceptrons can only classify linearly separable vectors.

Perceptron Learning Rule

$W^{new} = W^{old} + ep^T$

$b^{new} = b^{old} + e$

where $e = t - a$.

Solved Problems

P4.1 Solve the three simple classification problems shown in Figure P4.1 by drawing a decision boundary. Find weight and bias values that result in single-neuron perceptrons with the chosen decision boundaries.

Figure P4.1 Simple Classification Problems (three panels, (a), (b) and (c), each showing dark and light data points)

First we draw a line between each set of dark and light data points.

The next step is to find the weights and biases. The weight vectors must be orthogonal to the decision boundaries, and must point in the direction of the points to be classified as 1 (the dark points). The weight vectors can have any length we like.

Here is one set of choices for the weight vectors:

(a) ${}_1w^T = \begin{bmatrix} -2 & 1 \end{bmatrix}$, (b) ${}_1w^T = \begin{bmatrix} 0 & -2 \end{bmatrix}$, (c) ${}_1w^T = \begin{bmatrix} 2 & -2 \end{bmatrix}$.

Now we find the bias values for each perceptron by picking a point on the decision boundary and satisfying Eq. (4.15):

${}_1w^T p + b = 0 \quad\Rightarrow\quad b = -{}_1w^T p$.

This gives us the following three biases:

(a) $b = 0$, (b) $b = 0$, (c) $b = 6$.

We can now check our solutions against the original points. Here we test the first network on the input vector $p = \begin{bmatrix} -2 & 2 \end{bmatrix}^T$:

$a = \mathrm{hardlim}({}_1w^T p + b) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & 1 \end{bmatrix} \begin{bmatrix} -2 \\ 2 \end{bmatrix} + 0 \right) = \mathrm{hardlim}(6) = 1$.

We can use MATLAB to automate the testing process and to try new points. Here the first network is used to classify a point that was not in the original problem:

    w = [-2 1]; b = 0;
    a = hardlim(w*[1;1]+b)
    a =
         0

P4.2 Convert the classification problem defined below into an equivalent problem definition consisting of inequalities constraining weight and bias values.

$\left\{ p_1 = \begin{bmatrix} 0 \\ 2 \end{bmatrix}, t_1 = 1 \right\}, \left\{ p_2 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, t_2 = 1 \right\}, \left\{ p_3 = \begin{bmatrix} 0 \\ -2 \end{bmatrix}, t_3 = 0 \right\}, \left\{ p_4 = \begin{bmatrix} 2 \\ 0 \end{bmatrix}, t_4 = 0 \right\}$

Each target $t_i$ indicates whether the net input in response to $p_i$ must be less than 0, or greater than or equal to 0.

For example, since $t_1$ is 1, we know that the net input corresponding to $p_1$ must be greater than or equal to 0. Thus we get the following inequality:

$Wp_1 + b \ge 0$
$0 w_{1,1} + 2 w_{1,2} + b \ge 0$
$2 w_{1,2} + b \ge 0$.

Applying the same procedure to the input/target pairs for $\{p_2, t_2\}$, $\{p_3, t_3\}$ and $\{p_4, t_4\}$ results in the following set of inequalities:

$2 w_{1,2} + b \ge 0 \quad (i)$
$w_{1,1} + b \ge 0 \quad (ii)$
$-2 w_{1,2} + b < 0 \quad (iii)$
$2 w_{1,1} + b < 0 \quad (iv)$

Solving a set of inequalities is more difficult than solving a set of equalities. One added complexity is that there are often an infinite number of solutions (just as there are often an infinite number of linear decision boundaries that can solve a linearly separable classification problem).

However, because of the simplicity of this problem, we can solve it by graphing the solution spaces defined by the inequalities. Note that $w_{1,1}$ only appears in inequalities (ii) and (iv), and $w_{1,2}$ only appears in inequalities (i) and (iii). We can plot each pair of inequalities with two graphs.

Any weight and bias values that fall in both dark gray regions will solve the classification problem. Here is one such solution:

$W = \begin{bmatrix} -2 & 3 \end{bmatrix}$, $b = 3$.

P4.3 We have a classification problem with four classes of input vector. The four classes are

class 1: $\left\{ p_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, p_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right\}$, class 2: $\left\{ p_3 = \begin{bmatrix} 2 \\ -1 \end{bmatrix}, p_4 = \begin{bmatrix} 2 \\ 0 \end{bmatrix} \right\}$, class 3: $\left\{ p_5 = \begin{bmatrix} -1 \\ 2 \end{bmatrix}, p_6 = \begin{bmatrix} -2 \\ 1 \end{bmatrix} \right\}$, class 4: $\left\{ p_7 = \begin{bmatrix} -1 \\ -1 \end{bmatrix}, p_8 = \begin{bmatrix} -2 \\ -2 \end{bmatrix} \right\}$.

Design a perceptron network to solve this problem.

To solve a problem with four classes of input vector we will need a perceptron with at least two neurons, since an $S$-neuron perceptron can categorize $2^S$ classes. The two-neuron perceptron is shown in Figure P4.2.

Figure P4.2 Two-Neuron Perceptron (input vector $p$ of dimension 2, weight matrix $W$ of dimension $2 \times 2$, bias vector $b$ of dimension $2 \times 1$, and output $a = \mathrm{hardlim}(Wp + b)$)

Let's begin by displaying the input vectors, as in Figure P4.3. The light circles indicate class 1 vectors, the light squares indicate class 2 vectors, the dark circles indicate class 3 vectors, and the dark squares indicate class 4 vectors.

A two-neuron perceptron creates two decision boundaries. Therefore, to divide the input space into the four categories, we need to have one decision boundary divide the four classes into two sets of two. The remaining boundary must then isolate each class. Two such boundaries are illustrated in Figure P4.4. We now know that our patterns are linearly separable.

Figure P4.3 Input Vectors for Problem P4.3

Figure P4.4 Tentative Decision Boundaries for Problem P4.3

The weight vectors should be orthogonal to the decision boundaries and should point toward the regions where the neuron outputs are 1. The next step is to decide which side of each boundary should produce a 1. One choice is illustrated in Figure P4.5, where the shaded areas represent outputs of 1. The darkest shading indicates that both neuron outputs are 1. Note that this solution corresponds to target values of

class 1: $\left\{ t_1 = t_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right\}$, class 2: $\left\{ t_3 = t_4 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}$, class 3: $\left\{ t_5 = t_6 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\}$, class 4: $\left\{ t_7 = t_8 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\}$.

We can now select the weight vectors:

${}_1w = \begin{bmatrix} -3 \\ -1 \end{bmatrix}$ and ${}_2w = \begin{bmatrix} 1 \\ -2 \end{bmatrix}$.

Note that the lengths of the weight vectors are not important; only their directions matter. They must be orthogonal to the decision boundaries.

Now we can calculate the biases by picking a point on each boundary and satisfying Eq. (4.15):

$b_1 = -{}_1w^T p = -\begin{bmatrix} -3 & -1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = 1$,

$b_2 = -{}_2w^T p = -\begin{bmatrix} 1 & -2 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = 0$.

Figure P4.5 Decision Regions for Problem P4.3

In matrix form we have

$W = \begin{bmatrix} {}_1w^T \\ {}_2w^T \end{bmatrix} = \begin{bmatrix} -3 & -1 \\ 1 & -2 \end{bmatrix}$ and $b = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$,

which completes our design.
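The completed design can be checked numerically. The sketch below (using the same inline hardlim as in the chapter examples) applies W and b to all eight input vectors at once; each column of the result should match the target assigned to that vector's class:

    hardlim = @(n) double(n >= 0);
    W = [-3 -1; 1 -2];  b = [1; 0];                  % the design found above
    P = [1 1 2 2 -1 -2 -1 -2; 1 2 -1 0 2 1 -1 -2];   % p1 ... p8 as columns
    A = hardlim(W*P + b)
    % columns of A: [0;0] [0;0] [0;1] [0;1] [1;0] [1;0] [1;1] [1;1]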

27 Solved Problems, t, t 3, t 3, t Use the initial weights and bias: W( ) b( ). We start by calculating the ercetronõs outut a, using the initial weights and bias. for the first inut vector a hardlim( W( ) + b( ) ) hardlim + hardlim( ) The outut a does not equal the target value t, so we use the ercetron rule to find new weights and biases based on the error. e t a T W( ) W( ) + e + ( ) b( ) b( ) + e + ( ) We now aly the second inut vector bias., using the udated weights and a hardlim( W( ) + b( ) ) hardlim hardlim( ) This time the outut a is equal to the target t. Alication of the ercetron rule will not result in any changes. W( ) W( ) b( ) b( ) We now aly the third inut vector. -7

$a = \mathrm{hardlim}(W(2)p_3 + b(2)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -2 \end{bmatrix} \begin{bmatrix} -2 \\ 2 \end{bmatrix} - 1 \right) = \mathrm{hardlim}(-1) = 0$.

The output in response to input vector $p_3$ is equal to the target $t_3$, so there will be no changes:

$W(3) = W(2)$, $b(3) = b(2)$.

We now move on to the last input vector, $p_4$:

$a = \mathrm{hardlim}(W(3)p_4 + b(3)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -2 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \end{bmatrix} - 1 \right) = \mathrm{hardlim}(-1) = 0$.

This time the output $a$ does not equal the appropriate target $t_4$. The perceptron rule will result in a new set of values for $W$ and $b$:

$e = t_4 - a = 1 - 0 = 1$,

$W(4) = W(3) + ep_4^T = \begin{bmatrix} -2 & -2 \end{bmatrix} + (1)\begin{bmatrix} -1 & 1 \end{bmatrix} = \begin{bmatrix} -3 & -1 \end{bmatrix}$,

$b(4) = b(3) + e = -1 + 1 = 0$.

We now must check the first vector $p_1$ again. This time the output $a$ is equal to the associated target $t_1$:

$a = \mathrm{hardlim}(W(4)p_1 + b(4)) = \mathrm{hardlim}\left( \begin{bmatrix} -3 & -1 \end{bmatrix} \begin{bmatrix} 2 \\ 2 \end{bmatrix} + 0 \right) = \mathrm{hardlim}(-8) = 0$.

Therefore there are no changes:

$W(5) = W(4)$, $b(5) = b(4)$.

The second presentation of $p_2$ results in an error and therefore a new set of weight and bias values:

$a = \mathrm{hardlim}(W(5)p_2 + b(5)) = \mathrm{hardlim}\left( \begin{bmatrix} -3 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \end{bmatrix} + 0 \right) = \mathrm{hardlim}(-1) = 0$.

Here are those new values:

$e = t_2 - a = 1 - 0 = 1$,

$W(6) = W(5) + ep_2^T = \begin{bmatrix} -3 & -1 \end{bmatrix} + (1)\begin{bmatrix} 1 & -2 \end{bmatrix} = \begin{bmatrix} -2 & -3 \end{bmatrix}$,

$b(6) = b(5) + e = 0 + 1 = 1$.

Cycling through each input vector once more results in no errors:

$a = \mathrm{hardlim}(W(6)p_3 + b(6)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -3 \end{bmatrix} \begin{bmatrix} -2 \\ 2 \end{bmatrix} + 1 \right) = \mathrm{hardlim}(-1) = 0 = t_3$
$a = \mathrm{hardlim}(W(6)p_4 + b(6)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -3 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \end{bmatrix} + 1 \right) = \mathrm{hardlim}(0) = 1 = t_4$
$a = \mathrm{hardlim}(W(6)p_1 + b(6)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -3 \end{bmatrix} \begin{bmatrix} 2 \\ 2 \end{bmatrix} + 1 \right) = \mathrm{hardlim}(-9) = 0 = t_1$
$a = \mathrm{hardlim}(W(6)p_2 + b(6)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & -3 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \end{bmatrix} + 1 \right) = \mathrm{hardlim}(5) = 1 = t_2$

Therefore the algorithm has converged. The final solution is:

$W = \begin{bmatrix} -2 & -3 \end{bmatrix}$, $b = 1$.

Now we can graph the training data and the decision boundary of the solution. The decision boundary is given by

$n = Wp + b = w_{1,1}p_1 + w_{1,2}p_2 + b = -2p_1 - 3p_2 + 1 = 0$.

To find the $p_2$ intercept of the decision boundary, set $p_1 = 0$:

$p_2 = -\dfrac{b}{w_{1,2}} = -\dfrac{1}{-3} = \dfrac{1}{3} \quad \text{if } p_1 = 0$.

To find the $p_1$ intercept, set $p_2 = 0$:

$p_1 = -\dfrac{b}{w_{1,1}} = -\dfrac{1}{-2} = \dfrac{1}{2} \quad \text{if } p_2 = 0$.

The resulting decision boundary is illustrated in Figure P4.6.

Figure P4.6 Decision Boundary for Problem P4.4

Note that the decision boundary falls across one of the training vectors. This is acceptable, given the problem definition, since the hard limit function returns 1 when given an input of 0, and the target for the vector in question is indeed 1.
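The hand iterations of this problem can be replicated in a few lines of MATLAB (a sketch with the inline hardlim used earlier). The loop presents the vectors in order, repeating until a complete pass produces no errors, and should stop at the same solution found above:

    hardlim = @(n) double(n >= 0);
    P = [2 1 -2 -1; 2 -2 2 1];  t = [0 1 0 1];   % the training set of P4.4
    W = [0 0];  b = 0;                           % initial values

    errors = 1;
    while errors > 0
        errors = 0;
        for q = 1:4
            e = t(q) - hardlim(W*P(:,q) + b);
            W = W + e*P(:,q)';                   % perceptron rule
            b = b + e;
            errors = errors + abs(e);
        end
    end
    W, b                                         % should print W = [-2 -3], b = 1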

P4.5 Consider again the four-class decision problem that we introduced in Problem P4.3. Train a perceptron network to solve this problem using the perceptron learning rule.

If we use the same target vectors that we introduced in Problem P4.3, the training set will be:

$\left\{ p_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, t_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right\}, \left\{ p_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, t_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right\}, \left\{ p_3 = \begin{bmatrix} 2 \\ -1 \end{bmatrix}, t_3 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}, \left\{ p_4 = \begin{bmatrix} 2 \\ 0 \end{bmatrix}, t_4 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\},$
$\left\{ p_5 = \begin{bmatrix} -1 \\ 2 \end{bmatrix}, t_5 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\}, \left\{ p_6 = \begin{bmatrix} -2 \\ 1 \end{bmatrix}, t_6 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \right\}, \left\{ p_7 = \begin{bmatrix} -1 \\ -1 \end{bmatrix}, t_7 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\}, \left\{ p_8 = \begin{bmatrix} -2 \\ -2 \end{bmatrix}, t_8 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right\}.$

Let's begin the algorithm with the following initial weights and biases:

$W(0) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, $b(0) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.

The first iteration is

$a = \mathrm{hardlim}(W(0)p_1 + b(0)) = \mathrm{hardlim}\left( \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right) = \mathrm{hardlim}\left( \begin{bmatrix} 2 \\ 2 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$,

$e = t_1 - a = \begin{bmatrix} 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ -1 \end{bmatrix}$,

$W(1) = W(0) + ep_1^T = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} -1 \\ -1 \end{bmatrix} \begin{bmatrix} 1 & 1 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}$,

$b(1) = b(0) + e = \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} -1 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$.

The second iteration is

$a = \mathrm{hardlim}(W(1)p_2 + b(1)) = \mathrm{hardlim}\left( \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right) = \mathrm{hardlim}\left( \begin{bmatrix} -2 \\ -1 \end{bmatrix} \right) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$,

$e = t_2 - a = \begin{bmatrix} 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$,

$W(2) = W(1) + ep_2^T = W(1) = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}$,

$b(2) = b(1) + e = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$.

The third iteration is

$a = \mathrm{hardlim}(W(2)p_3 + b(2)) = \mathrm{hardlim}\left( \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix} \begin{bmatrix} 2 \\ -1 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix} \right) = \mathrm{hardlim}\left( \begin{bmatrix} 1 \\ -2 \end{bmatrix} \right) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$,

$e = t_3 - a = \begin{bmatrix} 0 \\ 1 \end{bmatrix} - \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$,

$W(3) = W(2) + ep_3^T = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix} + \begin{bmatrix} -1 \\ 1 \end{bmatrix} \begin{bmatrix} 2 & -1 \end{bmatrix} = \begin{bmatrix} -2 & 0 \\ 1 & -1 \end{bmatrix}$,

$b(3) = b(2) + e = \begin{bmatrix} 0 \\ 0 \end{bmatrix} + \begin{bmatrix} -1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$.

Iterations four through eight produce no changes in the weights:

$W(8) = W(7) = W(6) = W(5) = W(4) = W(3)$,
$b(8) = b(7) = b(6) = b(5) = b(4) = b(3)$.

The ninth iteration produces

$a = \mathrm{hardlim}(W(8)p_1 + b(8)) = \mathrm{hardlim}\left( \begin{bmatrix} -2 & 0 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} -1 \\ 1 \end{bmatrix} \right) = \mathrm{hardlim}\left( \begin{bmatrix} -3 \\ 1 \end{bmatrix} \right) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$,

$e = t_1 - a = \begin{bmatrix} 0 \\ 0 \end{bmatrix} - \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \end{bmatrix}$,

$W(9) = W(8) + ep_1^T = \begin{bmatrix} -2 & 0 \\ 1 & -1 \end{bmatrix} + \begin{bmatrix} 0 \\ -1 \end{bmatrix} \begin{bmatrix} 1 & 1 \end{bmatrix} = \begin{bmatrix} -2 & 0 \\ 0 & -2 \end{bmatrix}$,

$b(9) = b(8) + e = \begin{bmatrix} -1 \\ 1 \end{bmatrix} + \begin{bmatrix} 0 \\ -1 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \end{bmatrix}$.

At this point the algorithm has converged, since all input patterns will be correctly classified. The final decision boundaries are displayed in Figure P4.7. Compare this result with the network we designed in Problem P4.3.

Figure P4.7 Final Decision Boundaries for Problem P4.5

Epilogue

In this chapter we have introduced our first learning rule: the perceptron learning rule. It is a type of learning called supervised learning, in which the learning rule is provided with a set of examples of proper network behavior. As each input is applied to the network, the learning rule adjusts the network parameters so that the network output will move closer to the target.

The perceptron learning rule is very simple, but it is also quite powerful. We have shown that the rule will always converge to a correct solution, if such a solution exists. The weakness of the perceptron network lies not with the learning rule, but with the structure of the network. The standard perceptron is only able to classify vectors that are linearly separable. We will see in Chapter 11 that the perceptron architecture can be generalized to multilayer perceptrons, which can solve arbitrary classification problems. The backpropagation learning rule, which is introduced in Chapter 11, can be used to train these networks.

In Chapters 3 and 4 we have used many concepts from the field of linear algebra, such as inner product, projection, distance (norm), etc. We will find in later chapters that a good foundation in linear algebra is essential to our understanding of all neural networks. In Chapters 5 and 6 we will review some of the key concepts from linear algebra that will be most important in our study of neural networks. Our objective will be to obtain a fundamental understanding of how neural networks work.

Further Reading

[BaSu83] A. Barto, R. Sutton and C. Anderson, "Neuron-like adaptive elements can solve difficult learning control problems," IEEE Transactions on Systems, Man and Cybernetics, Vol. 13, No. 5, pp. 834–846, 1983.
A classic paper in which a reinforcement learning algorithm is used to train a neural network to balance an inverted pendulum.

[Brog91] W. L. Brogan, Modern Control Theory, 3rd Ed., Englewood Cliffs, NJ: Prentice-Hall, 1991.
A well-written book on the subject of linear systems. The first half of the book is devoted to linear algebra. It also has good sections on the solution of linear differential equations and the stability of linear and nonlinear systems. It has many worked problems.

[McPi43] W. McCulloch and W. Pitts, "A logical calculus of the ideas immanent in nervous activity," Bulletin of Mathematical Biophysics, Vol. 5, pp. 115–133, 1943.
This article introduces the first mathematical model of a neuron, in which a weighted sum of input signals is compared to a threshold to determine whether or not the neuron fires.

[MiPa69] M. Minsky and S. Papert, Perceptrons, Cambridge, MA: MIT Press, 1969.
A landmark book that contains the first rigorous study devoted to determining what a perceptron network is capable of learning. A formal treatment of the perceptron was needed both to explain the perceptron's limitations and to indicate directions for overcoming them. Unfortunately, the book pessimistically predicted that the limitations of perceptrons indicated that the field of neural networks was a dead end. Although this was not true, it temporarily cooled research and funding for research for several years.

[Rose58] F. Rosenblatt, "The perceptron: A probabilistic model for information storage and organization in the brain," Psychological Review, Vol. 65, pp. 386–408, 1958.
This paper presents the first practical artificial neural network: the perceptron.

[Rose61] F. Rosenblatt, Principles of Neurodynamics, Washington, DC: Spartan Press, 1961.
One of the first books on neurocomputing.

[WhSo92] D. White and D. Sofge (Eds.), Handbook of Intelligent Control, New York: Van Nostrand Reinhold, 1992.
Collection of articles describing current research and applications of neural networks and fuzzy logic to control systems.

Exercises

E4.1 Consider the classification problem defined below:

$\left\{ p_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, t_1 = 1 \right\}, \left\{ p_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, t_2 = 1 \right\}, \left\{ p_3 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, t_3 = 1 \right\}, \left\{ p_4 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, t_4 = 0 \right\}, \left\{ p_5 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, t_5 = 0 \right\}$.

i. Draw a diagram of the single-neuron perceptron you would use to solve this problem. How many inputs are required?

ii. Draw a graph of the data points, labeled according to their targets. Is this problem solvable with the network you defined in part (i)? Why or why not?

E4.2 Consider the classification problem defined below:

$\left\{ p_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, t_1 = 1 \right\}, \left\{ p_2 = \begin{bmatrix} -1 \\ -1 \end{bmatrix}, t_2 = 1 \right\}, \left\{ p_3 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, t_3 = 0 \right\}, \left\{ p_4 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, t_4 = 0 \right\}$.

i. Design a single-neuron perceptron to solve this problem. Design the network graphically, by choosing weight vectors that are orthogonal to the decision boundaries.

ii. Test your solution with all four input vectors.

iii. Classify the following input vectors with your solution. You can either perform the calculations manually or with MATLAB:

$p_5 = \begin{bmatrix} -2 \\ 0 \end{bmatrix}$, $p_6 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, $p_7 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$, $p_8 = \begin{bmatrix} -1 \\ -2 \end{bmatrix}$.

iv. Which of the vectors in part (iii) will always be classified the same way, regardless of the solution values for $W$ and $b$? Which may vary depending on the solution? Why?

E4.3 Solve the classification problem in Exercise E4.2 by solving inequalities (as in Problem P4.2), and repeat parts (ii) and (iii) with the new solution. (The solution is more difficult than Problem P4.2, since you can't isolate the weights and biases in a pairwise manner.)

E4.4 Solve the classification problem in Exercise E4.2 by applying the perceptron rule to the following initial parameters, and repeat parts (ii) and (iii) with the new solution:

$W(0) = \begin{bmatrix} 0 & 0 \end{bmatrix}$, $b(0) = 0$.

E4.5 Prove mathematically (not graphically) that the following problem is unsolvable for a two-input/single-neuron perceptron:

$\left\{ p_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, t_1 = 1 \right\}, \left\{ p_2 = \begin{bmatrix} -1 \\ -1 \end{bmatrix}, t_2 = 0 \right\}, \left\{ p_3 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, t_3 = 1 \right\}, \left\{ p_4 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, t_4 = 0 \right\}$.

(Hint: start by rewriting the input/target requirements as inequalities that constrain the weight and bias values.)

E4.6 The symmetric hard limit function, $a = \mathrm{hardlims}(n)$, is sometimes used in perceptron networks, instead of the hard limit function. Target values are then taken from the set [-1, 1] instead of [0, 1].

i. Write a simple expression that maps numbers in the ordered set [0, 1] into the ordered set [-1, 1]. Write the expression that performs the inverse mapping.

ii. Consider two single-neuron perceptrons with the same weight and bias values. The first network uses the hard limit function ([0, 1] values), and the second network uses the symmetric hard limit function. If the two networks are given the same input $p$, and updated with the perceptron learning rule, will their weights continue to have the same values?

iii. If the changes to the weights of the two neurons are different, how do they differ? Why?

iv. Given initial weight and bias values for a standard hard limit perceptron, create a method for initializing a symmetric hard limit perceptron so that the two neurons will always respond identically when trained on identical data.

E4.7 The vectors in the ordered set defined below were obtained by measuring the weight and ear lengths of toy rabbits and bears in the Fuzzy Wuzzy Animal Factory. The target values indicate whether the respective input vector was taken from a rabbit (0) or a bear (1). The first element of the input vector is the weight of the toy, and the second element is the ear length.

$\left\{ p_1 = \begin{bmatrix} 1 \\ 4 \end{bmatrix}, t_1 = 0 \right\}, \left\{ p_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, t_2 = 0 \right\}, \left\{ p_3 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}, t_3 = 0 \right\}, \left\{ p_4 = \begin{bmatrix} 2 \\ 2 \end{bmatrix}, t_4 = 0 \right\},$

$\left\{ p_5 = \begin{bmatrix} 3 \\ 1 \end{bmatrix}, t_5 = 1 \right\}, \left\{ p_6 = \begin{bmatrix} 3 \\ 2 \end{bmatrix}, t_6 = 1 \right\}, \left\{ p_7 = \begin{bmatrix} 4 \\ 1 \end{bmatrix}, t_7 = 1 \right\}, \left\{ p_8 = \begin{bmatrix} 4 \\ 2 \end{bmatrix}, t_8 = 1 \right\}.$

i. Use MATLAB to initialize and train a network to solve this "practical" problem.

ii. Use MATLAB to test the resulting weight and bias values against the input vectors.

iii. Alter the input vectors to ensure that the decision boundary of any solution will not intersect one of the original input vectors (i.e., to ensure only robust solutions are found). Then retrain the network.

E4.8 Consider again the four-category classification problem described in Problems P4.3 and P4.5. Suppose that we change the input vector $p_3$ to

$p_3 = \begin{bmatrix} 2 \\ 2 \end{bmatrix}$.

i. Is the problem still linearly separable? Demonstrate your answer graphically.

ii. Use MATLAB to initialize and train a network to solve this problem. Explain your results.

iii. If $p_3$ is changed to

$p_3 = \begin{bmatrix} 2 \\ 1.5 \end{bmatrix}$,

is the problem linearly separable?

iv. With the $p_3$ from (iii), use MATLAB to initialize and train a network to solve this problem. Explain your results.

E4.9 One variation of the perceptron learning rule is

$W^{new} = W^{old} + \alpha e p^T$
$b^{new} = b^{old} + \alpha e$,

where $\alpha$ is called the learning rate. Prove convergence of this algorithm. Does the proof require a limit on the learning rate? Explain.


More information

Large-Scale IP Traceback in High-Speed Internet: Practical Techniques and Theoretical Foundation

Large-Scale IP Traceback in High-Speed Internet: Practical Techniques and Theoretical Foundation Large-Scale IP Traceback in High-Seed Internet: Practical Techniques and Theoretical Foundation Jun Li Minho Sung Jun (Jim) Xu College of Comuting Georgia Institute of Technology {junli,mhsung,jx}@cc.gatech.edu

More information

The Magnus-Derek Game

The Magnus-Derek Game The Magnus-Derek Game Z. Nedev S. Muthukrishnan Abstract We introduce a new combinatorial game between two layers: Magnus and Derek. Initially, a token is laced at osition 0 on a round table with n ositions.

More information

Risk in Revenue Management and Dynamic Pricing

Risk in Revenue Management and Dynamic Pricing OPERATIONS RESEARCH Vol. 56, No. 2, March Aril 2008,. 326 343 issn 0030-364X eissn 1526-5463 08 5602 0326 informs doi 10.1287/ore.1070.0438 2008 INFORMS Risk in Revenue Management and Dynamic Pricing Yuri

More information

An Introduction to Risk Parity Hossein Kazemi

An Introduction to Risk Parity Hossein Kazemi An Introduction to Risk Parity Hossein Kazemi In the aftermath of the financial crisis, investors and asset allocators have started the usual ritual of rethinking the way they aroached asset allocation

More information

Load Balancing Mechanism in Agent-based Grid

Load Balancing Mechanism in Agent-based Grid Communications on Advanced Comutational Science with Alications 2016 No. 1 (2016) 57-62 Available online at www.isacs.com/cacsa Volume 2016, Issue 1, Year 2016 Article ID cacsa-00042, 6 Pages doi:10.5899/2016/cacsa-00042

More information

Simulink Implementation of a CDMA Smart Antenna System

Simulink Implementation of a CDMA Smart Antenna System Simulink Imlementation of a CDMA Smart Antenna System MOSTAFA HEFNAWI Deartment of Electrical and Comuter Engineering Royal Military College of Canada Kingston, Ontario, K7K 7B4 CANADA Abstract: - The

More information

HALF-WAVE & FULL-WAVE RECTIFICATION

HALF-WAVE & FULL-WAVE RECTIFICATION HALF-WAE & FULL-WAE RECTIFICATION Objectives: HALF-WAE & FULL-WAE RECTIFICATION To recognize a half-wave rectified sinusoidal voltage. To understand the term mean value as alied to a rectified waveform.

More information

Automatic Search for Correlated Alarms

Automatic Search for Correlated Alarms Automatic Search for Correlated Alarms Klaus-Dieter Tuchs, Peter Tondl, Markus Radimirsch, Klaus Jobmann Institut für Allgemeine Nachrichtentechnik, Universität Hannover Aelstraße 9a, 0167 Hanover, Germany

More information

Time-Cost Trade-Offs in Resource-Constraint Project Scheduling Problems with Overlapping Modes

Time-Cost Trade-Offs in Resource-Constraint Project Scheduling Problems with Overlapping Modes Time-Cost Trade-Offs in Resource-Constraint Proect Scheduling Problems with Overlaing Modes François Berthaut Robert Pellerin Nathalie Perrier Adnène Hai February 2011 CIRRELT-2011-10 Bureaux de Montréal

More information

Stability Improvements of Robot Control by Periodic Variation of the Gain Parameters

Stability Improvements of Robot Control by Periodic Variation of the Gain Parameters Proceedings of the th World Congress in Mechanism and Machine Science ril ~4, 4, ianin, China China Machinery Press, edited by ian Huang. 86-8 Stability Imrovements of Robot Control by Periodic Variation

More information

On Multicast Capacity and Delay in Cognitive Radio Mobile Ad-hoc Networks

On Multicast Capacity and Delay in Cognitive Radio Mobile Ad-hoc Networks On Multicast Caacity and Delay in Cognitive Radio Mobile Ad-hoc Networks Jinbei Zhang, Yixuan Li, Zhuotao Liu, Fan Wu, Feng Yang, Xinbing Wang Det of Electronic Engineering Det of Comuter Science and Engineering

More information

http://www.ualberta.ca/~mlipsett/engm541/engm541.htm

http://www.ualberta.ca/~mlipsett/engm541/engm541.htm ENGM 670 & MECE 758 Modeling and Simulation of Engineering Systems (Advanced Toics) Winter 011 Lecture 9: Extra Material M.G. Lisett University of Alberta htt://www.ualberta.ca/~mlisett/engm541/engm541.htm

More information

Modeling and Simulation of an Incremental Encoder Used in Electrical Drives

Modeling and Simulation of an Incremental Encoder Used in Electrical Drives 10 th International Symosium of Hungarian Researchers on Comutational Intelligence and Informatics Modeling and Simulation of an Incremental Encoder Used in Electrical Drives János Jób Incze, Csaba Szabó,

More information

Learning Human Behavior from Analyzing Activities in Virtual Environments

Learning Human Behavior from Analyzing Activities in Virtual Environments Learning Human Behavior from Analyzing Activities in Virtual Environments C. BAUCKHAGE 1, B. GORMAN 2, C. THURAU 3 & M. HUMPHRYS 2 1) Deutsche Telekom Laboratories, Berlin, Germany 2) Dublin City University,

More information

DAY-AHEAD ELECTRICITY PRICE FORECASTING BASED ON TIME SERIES MODELS: A COMPARISON

DAY-AHEAD ELECTRICITY PRICE FORECASTING BASED ON TIME SERIES MODELS: A COMPARISON DAY-AHEAD ELECTRICITY PRICE FORECASTING BASED ON TIME SERIES MODELS: A COMPARISON Rosario Esínola, Javier Contreras, Francisco J. Nogales and Antonio J. Conejo E.T.S. de Ingenieros Industriales, Universidad

More information

CRITICAL AVIATION INFRASTRUCTURES VULNERABILITY ASSESSMENT TO TERRORIST THREATS

CRITICAL AVIATION INFRASTRUCTURES VULNERABILITY ASSESSMENT TO TERRORIST THREATS Review of the Air Force Academy No (23) 203 CRITICAL AVIATION INFRASTRUCTURES VULNERABILITY ASSESSMENT TO TERRORIST THREATS Cătălin CIOACĂ Henri Coandă Air Force Academy, Braşov, Romania Abstract: The

More information

Principles of Hydrology. Hydrograph components include rising limb, recession limb, peak, direct runoff, and baseflow.

Principles of Hydrology. Hydrograph components include rising limb, recession limb, peak, direct runoff, and baseflow. Princiles of Hydrology Unit Hydrograh Runoff hydrograh usually consists of a fairly regular lower ortion that changes slowly throughout the year and a raidly fluctuating comonent that reresents the immediate

More information

Finding a Needle in a Haystack: Pinpointing Significant BGP Routing Changes in an IP Network

Finding a Needle in a Haystack: Pinpointing Significant BGP Routing Changes in an IP Network Finding a Needle in a Haystack: Pinointing Significant BGP Routing Changes in an IP Network Jian Wu, Zhuoqing Morley Mao University of Michigan Jennifer Rexford Princeton University Jia Wang AT&T Labs

More information

Title: Stochastic models of resource allocation for services

Title: Stochastic models of resource allocation for services Title: Stochastic models of resource allocation for services Author: Ralh Badinelli,Professor, Virginia Tech, Deartment of BIT (235), Virginia Tech, Blacksburg VA 2461, USA, ralhb@vt.edu Phone : (54) 231-7688,

More information

A Novel Architecture Style: Diffused Cloud for Virtual Computing Lab

A Novel Architecture Style: Diffused Cloud for Virtual Computing Lab A Novel Architecture Style: Diffused Cloud for Virtual Comuting Lab Deven N. Shah Professor Terna College of Engg. & Technology Nerul, Mumbai Suhada Bhingarar Assistant Professor MIT College of Engg. Paud

More information

Risk and Return. Sample chapter. e r t u i o p a s d f CHAPTER CONTENTS LEARNING OBJECTIVES. Chapter 7

Risk and Return. Sample chapter. e r t u i o p a s d f CHAPTER CONTENTS LEARNING OBJECTIVES. Chapter 7 Chater 7 Risk and Return LEARNING OBJECTIVES After studying this chater you should be able to: e r t u i o a s d f understand how return and risk are defined and measured understand the concet of risk

More information

The Cubic Formula. The quadratic formula tells us the roots of a quadratic polynomial, a polynomial of the form ax 2 + bx + c. The roots (if b 2 b+

The Cubic Formula. The quadratic formula tells us the roots of a quadratic polynomial, a polynomial of the form ax 2 + bx + c. The roots (if b 2 b+ The Cubic Formula The quadratic formula tells us the roots of a quadratic olynomial, a olynomial of the form ax + bx + c. The roots (if b b+ 4ac 0) are b 4ac a and b b 4ac a. The cubic formula tells us

More information

The Priority R-Tree: A Practically Efficient and Worst-Case Optimal R-Tree

The Priority R-Tree: A Practically Efficient and Worst-Case Optimal R-Tree The Priority R-Tree: A Practically Efficient and Worst-Case Otimal R-Tree Lars Arge Deartment of Comuter Science Duke University, ox 90129 Durham, NC 27708-0129 USA large@cs.duke.edu Mark de erg Deartment

More information

Implementation of Statistic Process Control in a Painting Sector of a Automotive Manufacturer

Implementation of Statistic Process Control in a Painting Sector of a Automotive Manufacturer 4 th International Conference on Industrial Engineering and Industrial Management IV Congreso de Ingeniería de Organización Donostia- an ebastián, etember 8 th - th Imlementation of tatistic Process Control

More information

COST CALCULATION IN COMPLEX TRANSPORT SYSTEMS

COST CALCULATION IN COMPLEX TRANSPORT SYSTEMS OST ALULATION IN OMLEX TRANSORT SYSTEMS Zoltán BOKOR 1 Introduction Determining the real oeration and service costs is essential if transort systems are to be lanned and controlled effectively. ost information

More information

Local Connectivity Tests to Identify Wormholes in Wireless Networks

Local Connectivity Tests to Identify Wormholes in Wireless Networks Local Connectivity Tests to Identify Wormholes in Wireless Networks Xiaomeng Ban Comuter Science Stony Brook University xban@cs.sunysb.edu Rik Sarkar Comuter Science Freie Universität Berlin sarkar@inf.fu-berlin.de

More information

Fluent Software Training TRN-99-003. Solver Settings. Fluent Inc. 2/23/01

Fluent Software Training TRN-99-003. Solver Settings. Fluent Inc. 2/23/01 Solver Settings E1 Using the Solver Setting Solver Parameters Convergence Definition Monitoring Stability Accelerating Convergence Accuracy Grid Indeendence Adation Aendix: Background Finite Volume Method

More information

Feed-Forward mapping networks KAIST 바이오및뇌공학과 정재승

Feed-Forward mapping networks KAIST 바이오및뇌공학과 정재승 Feed-Forward mapping networks KAIST 바이오및뇌공학과 정재승 How much energy do we need for brain functions? Information processing: Trade-off between energy consumption and wiring cost Trade-off between energy consumption

More information

Improved Symmetric Lists

Improved Symmetric Lists Imroved Symmetric Lists Technical Reort MIP-49 October, 24 Christian Bachmaier and Marcus Raitner University of Passau, 943 Passau, Germany Fax: +49 85 59 332 {bachmaier,raitner}@fmi.uni-assau.de Abstract.

More information

Predicate Encryption Supporting Disjunctions, Polynomial Equations, and Inner Products

Predicate Encryption Supporting Disjunctions, Polynomial Equations, and Inner Products Predicate Encrytion Suorting Disjunctions, Polynomial Equations, and Inner Products Jonathan Katz Amit Sahai Brent Waters Abstract Predicate encrytion is a new aradigm for ublic-key encrytion that generalizes

More information

Migration to Object Oriented Platforms: A State Transformation Approach

Migration to Object Oriented Platforms: A State Transformation Approach Migration to Object Oriented Platforms: A State Transformation Aroach Ying Zou, Kostas Kontogiannis Det. of Electrical & Comuter Engineering University of Waterloo Waterloo, ON, N2L 3G1, Canada {yzou,

More information

type The annotations of the 62 samples with respect to the cancer types FL, CLL, DLBCL-A, DLBCL-G.

type The annotations of the 62 samples with respect to the cancer types FL, CLL, DLBCL-A, DLBCL-G. alizadeh Samle a from a lymhoma/leukemia gene exression study Samle a for the ISIS method Format x A 2000 x 62 gene exression a matrix of log-ratio values. 2,000 genes with the highest variance across

More information

2D Modeling of the consolidation of soft soils. Introduction

2D Modeling of the consolidation of soft soils. Introduction D Modeling of the consolidation of soft soils Matthias Haase, WISMUT GmbH, Chemnitz, Germany Mario Exner, WISMUT GmbH, Chemnitz, Germany Uwe Reichel, Technical University Chemnitz, Chemnitz, Germany Abstract:

More information

Re-Dispatch Approach for Congestion Relief in Deregulated Power Systems

Re-Dispatch Approach for Congestion Relief in Deregulated Power Systems Re-Disatch Aroach for Congestion Relief in Deregulated ower Systems Ch. Naga Raja Kumari #1, M. Anitha 2 #1, 2 Assistant rofessor, Det. of Electrical Engineering RVR & JC College of Engineering, Guntur-522019,

More information

The impact of metadata implementation on webpage visibility in search engine results (Part II) q

The impact of metadata implementation on webpage visibility in search engine results (Part II) q Information Processing and Management 41 (2005) 691 715 www.elsevier.com/locate/inforoman The imact of metadata imlementation on webage visibility in search engine results (Part II) q Jin Zhang *, Alexandra

More information

FREQUENCIES OF SUCCESSIVE PAIRS OF PRIME RESIDUES

FREQUENCIES OF SUCCESSIVE PAIRS OF PRIME RESIDUES FREQUENCIES OF SUCCESSIVE PAIRS OF PRIME RESIDUES AVNER ASH, LAURA BELTIS, ROBERT GROSS, AND WARREN SINNOTT Abstract. We consider statistical roerties of the sequence of ordered airs obtained by taking

More information

Interaction Expressions A Powerful Formalism for Describing Inter-Workflow Dependencies

Interaction Expressions A Powerful Formalism for Describing Inter-Workflow Dependencies Interaction Exressions A Powerful Formalism for Describing Inter-Workflow Deendencies Christian Heinlein, Peter Dadam Det. Databases and Information Systems University of Ulm, Germany {heinlein,dadam}@informatik.uni-ulm.de

More information

Software Cognitive Complexity Measure Based on Scope of Variables

Software Cognitive Complexity Measure Based on Scope of Variables Software Cognitive Comlexity Measure Based on Scoe of Variables Kwangmyong Rim and Yonghua Choe Faculty of Mathematics, Kim Il Sung University, D.P.R.K mathchoeyh@yahoo.com Abstract In this aer, we define

More information

Design of A Knowledge Based Trouble Call System with Colored Petri Net Models

Design of A Knowledge Based Trouble Call System with Colored Petri Net Models 2005 IEEE/PES Transmission and Distribution Conference & Exhibition: Asia and Pacific Dalian, China Design of A Knowledge Based Trouble Call System with Colored Petri Net Models Hui-Jen Chuang, Chia-Hung

More information

Comparing Dissimilarity Measures for Symbolic Data Analysis

Comparing Dissimilarity Measures for Symbolic Data Analysis Comaring Dissimilarity Measures for Symbolic Data Analysis Donato MALERBA, Floriana ESPOSITO, Vincenzo GIOVIALE and Valentina TAMMA Diartimento di Informatica, University of Bari Via Orabona 4 76 Bari,

More information

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore.

This document is downloaded from DR-NTU, Nanyang Technological University Library, Singapore. This document is downloaded from DR-NTU, Nanyang Technological University Library, Singaore. Title Automatic Robot Taing: Auto-Path Planning and Maniulation Author(s) Citation Yuan, Qilong; Lembono, Teguh

More information

MOS Transistors as Switches

MOS Transistors as Switches MOS Transistors as Switches G (gate) nmos transistor: Closed (conducting) when Gate = 1 (V DD ) D (drain) S (source) Oen (non-conducting) when Gate = 0 (ground, 0V) G MOS transistor: Closed (conducting)

More information

Two-resource stochastic capacity planning employing a Bayesian methodology

Two-resource stochastic capacity planning employing a Bayesian methodology Journal of the Oerational Research Society (23) 54, 1198 128 r 23 Oerational Research Society Ltd. All rights reserved. 16-5682/3 $25. www.algrave-journals.com/jors Two-resource stochastic caacity lanning

More information

Evaluating a Web-Based Information System for Managing Master of Science Summer Projects

Evaluating a Web-Based Information System for Managing Master of Science Summer Projects Evaluating a Web-Based Information System for Managing Master of Science Summer Projects Till Rebenich University of Southamton tr08r@ecs.soton.ac.uk Andrew M. Gravell University of Southamton amg@ecs.soton.ac.uk

More information

Assignment 9; Due Friday, March 17

Assignment 9; Due Friday, March 17 Assignment 9; Due Friday, March 17 24.4b: A icture of this set is shown below. Note that the set only contains oints on the lines; internal oints are missing. Below are choices for U and V. Notice that

More information

Pinhole Optics. OBJECTIVES To study the formation of an image without use of a lens.

Pinhole Optics. OBJECTIVES To study the formation of an image without use of a lens. Pinhole Otics Science, at bottom, is really anti-intellectual. It always distrusts ure reason and demands the roduction of the objective fact. H. L. Mencken (1880-1956) OBJECTIVES To study the formation

More information

Forensic Science International

Forensic Science International Forensic Science International 214 (2012) 33 43 Contents lists available at ScienceDirect Forensic Science International jou r nal h o me age: w ww.els evier.co m/lo c ate/fo r sc iin t A robust detection

More information

Computational Finance The Martingale Measure and Pricing of Derivatives

Computational Finance The Martingale Measure and Pricing of Derivatives 1 The Martingale Measure 1 Comutational Finance The Martingale Measure and Pricing of Derivatives 1 The Martingale Measure The Martingale measure or the Risk Neutral robabilities are a fundamental concet

More information

Mean shift-based clustering

Mean shift-based clustering Pattern Recognition (7) www.elsevier.com/locate/r Mean shift-based clustering Kuo-Lung Wu a, Miin-Shen Yang b, a Deartment of Information Management, Kun Shan University of Technology, Yung-Kang, Tainan

More information

Stat 134 Fall 2011: Gambler s ruin

Stat 134 Fall 2011: Gambler s ruin Stat 134 Fall 2011: Gambler s ruin Michael Lugo Setember 12, 2011 In class today I talked about the roblem of gambler s ruin but there wasn t enough time to do it roerly. I fear I may have confused some

More information

As we have seen, there is a close connection between Legendre symbols of the form

As we have seen, there is a close connection between Legendre symbols of the form Gauss Sums As we have seen, there is a close connection between Legendre symbols of the form 3 and cube roots of unity. Secifically, if is a rimitive cube root of unity, then 2 ± i 3 and hence 2 2 3 In

More information

A Virtual Machine Dynamic Migration Scheduling Model Based on MBFD Algorithm

A Virtual Machine Dynamic Migration Scheduling Model Based on MBFD Algorithm International Journal of Comuter Theory and Engineering, Vol. 7, No. 4, August 2015 A Virtual Machine Dynamic Migration Scheduling Model Based on MBFD Algorithm Xin Lu and Zhuanzhuan Zhang Abstract This

More information

One-Chip Linear Control IPS, F5106H

One-Chip Linear Control IPS, F5106H One-Chi Linear Control IPS, F5106H NAKAGAWA Sho OE Takatoshi IWAMOTO Motomitsu ABSTRACT In the fi eld of vehicle electrical comonents, the increasing demands for miniaturization, reliability imrovement

More information

X How to Schedule a Cascade in an Arbitrary Graph

X How to Schedule a Cascade in an Arbitrary Graph X How to Schedule a Cascade in an Arbitrary Grah Flavio Chierichetti, Cornell University Jon Kleinberg, Cornell University Alessandro Panconesi, Saienza University When individuals in a social network

More information