Recurrent networks and types of associative memories
12 Associative Networks

12.1 Associative pattern recognition

The previous chapters were devoted to the analysis of neural networks without feedback, capable of mapping an input space into an output space using only feed-forward computations. In the case of backpropagation networks we demanded continuity from the activation functions at the nodes. The neighborhood of a vector x in input space is therefore mapped to a neighborhood of the image y of x in output space. It is this property which gives its name to the continuous mapping networks we have considered up to this point.

In this chapter we deal with another class of systems, known generically as associative memories. The goal of learning is to associate known input vectors with given output vectors. Contrary to continuous mappings, the whole neighborhood of a known input vector x should also be mapped to the image y of x; that is, if B(x) denotes the set of all vectors whose distance from x (using a suitable metric) is lower than some positive constant ε, then we expect the network to map B(x) to y. Noisy input vectors can then be associated with the correct output.

Recurrent networks and types of associative memories

Associative memories can be implemented using networks with or without feedback, but those with feedback produce better results. In this chapter we deal with the simplest kind of feedback: the output of a network is used repetitively as a new input until the process converges. However, as we will see in the next chapter, not all networks converge to a stable state after having been set in motion; some restrictions on the network architecture are needed.

The function of an associative memory is to recognize previously learned input vectors, even in the case where some noise has been added. We have already discussed a similar problem when dealing with clusters of input vectors (Chap. 5). The approach we used there was to find cluster centroids in
input space. For an input x the weight vectors produced a maximal excitation at the unit representing the cluster assigned to x, whereas all other units remained silent. A simple approach to inhibit the other units is to use non-local information to decide which unit should fire. The advantage of associative memories compared to this approach is that only the local information stream must be considered: the response of each unit is determined exclusively by the information flowing through its own weights. If we take the biological analogy seriously, or if we want to implement these systems in VLSI, locality is always an important goal. And as we will see in this chapter, a learning algorithm derived from biological neurons can be used to train associative networks: it is called Hebbian learning.

The associative networks we want to consider should not be confused with conventional associative memory of the kind used in digital computers, which consists of content-addressable memory chips. With this in mind we can distinguish between three overlapping kinds of associative networks [294, 255]:

Heteroassociative networks map m input vectors x^1, x^2, ..., x^m in n-dimensional space to m output vectors y^1, y^2, ..., y^m in k-dimensional space, so that x^i → y^i. If ‖x − x^i‖ < ε, then x → y^i. This should be achieved by the learning algorithm, but becomes very hard when the number m of vectors to be learned is too high.

Autoassociative networks are a special subset of the heteroassociative networks, in which each vector is associated with itself, i.e., y^i = x^i for i = 1, ..., m. The function of such networks is to correct noisy input vectors.

Pattern recognition networks are also a special type of heteroassociative networks. Each vector x^i is associated with the scalar i. The goal of such a network is to identify the "name" of the input pattern.

Figure 12.1 shows the three kinds of networks and their intended use.
They can be understood as automata which produce the desired output whenever a given input is fed into the network.

Structure of an associative memory

The three kinds of associative networks discussed above can be implemented with a single layer of computing units. Figure 12.2 shows the structure of a heteroassociative network without feedback. Let w_ij be the weight between input site i and unit j, and let W denote the n × k weight matrix [w_ij]. The row vector x = (x_1, x_2, ..., x_n) produces the excitation vector e through the computation

  e = xW.

The activation function is computed next for each unit. If it is the identity, the units are just linear associators and the output y is simply xW. In general, m different n-dimensional row vectors x^1, x^2, ..., x^m have to be associated
Fig. 12.1. Types of associative networks

with m k-dimensional row vectors y^1, y^2, ..., y^m. Let X be the m × n matrix whose rows are the input vectors and let Y be the m × k matrix whose rows are the output vectors. We are looking for a weight matrix W for which

  XW = Y   (12.1)

holds. In the autoassociative case we associate each vector with itself and equation (12.1) becomes

  XW = X.   (12.2)

If m = n, then X is a square matrix. If it is invertible, the solution of (12.1) is W = X^{-1}Y, which means that finding W amounts to solving a linear system of equations.

What happens now if the output of the network is used as the new input? Figure 12.3 shows such a recurrent autoassociative network. We make the assumption that all units compute their outputs simultaneously, and we call such networks synchronous. In each step the network is fed an input vector x(i) and produces a new output x(i+1). The question with this class of networks is whether there is a fixed point ξ such that

  ξW = ξ.

The vector ξ is an eigenvector of W with eigenvalue 1. The network behaves as a first-order dynamical system, since each new state x(i+1) is completely determined by its most recent predecessor [30].
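For the invertible case, the weight matrix can be computed with a standard linear solver. A minimal sketch in NumPy; the two toy patterns are illustrative choices of my own, not taken from the text:

```python
import numpy as np

# Equation (12.1): with m = n linearly independent input patterns,
# the weight matrix of a linear associator satisfies XW = Y.
X = np.array([[1.0, 1.0],
              [1.0, -1.0]])          # input patterns as rows (invertible X)
Y = np.array([[1.0, 0.0],
              [0.0, 1.0]])           # desired output patterns as rows

W = np.linalg.solve(X, Y)            # solves XW = Y without forming X^(-1)

print(np.allclose(X @ W, Y))
```

Using `np.linalg.solve` is numerically preferable to computing X^{-1} explicitly, though both give the same W here.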
Fig. 12.2. Heteroassociative network without feedback

Fig. 12.3. Autoassociative network with feedback

The eigenvector automaton

Let W be the weight matrix of an autoassociative network, as shown in Figure 12.3. The individual units are linear associators. As we said before, we are interested in fixed points of the dynamical system, but not all weight matrices lead to convergence to a stable state. A simple example is a rotation by 90 degrees in two-dimensional space, given by the matrix

  W = [  0  1 ]
      [ -1  0 ]

Such a matrix has no non-trivial real eigenvectors. There is no fixed point for the dynamical system, but infinitely many cycles of length four. By changing the angle of rotation, arbitrarily long cycles can be produced, and by picking an irrational angle even a non-cyclic succession of states can be produced.
Square matrices with a complete set of eigenvectors are more useful for storage purposes. An n × n matrix W has at most n linearly independent eigenvectors and n eigenvalues. The eigenvectors x^1, x^2, ..., x^n satisfy the set of equations

  x^i W = λ_i x^i   for i = 1, ..., n,

where λ_1, ..., λ_n are the matrix eigenvalues. Each weight matrix with a full set of eigenvectors defines a kind of eigenvector automaton: given an initial vector, the eigenvector with the largest eigenvalue can be found (if it exists). Assume without loss of generality that λ_1 is the eigenvalue of W with the largest magnitude, that is, |λ_1| > |λ_j| for j ≠ 1, and let λ_1 > 0. Pick randomly a non-zero n-dimensional vector a_0. This vector can be expressed as a linear combination of the n eigenvectors of the matrix W:

  a_0 = α_1 x^1 + α_2 x^2 + ... + α_n x^n.

Assume that all constants α_i are non-zero. After the first iteration with the weight matrix W we get

  a_1 = a_0 W = (α_1 x^1 + α_2 x^2 + ... + α_n x^n) W
      = α_1 λ_1 x^1 + α_2 λ_2 x^2 + ... + α_n λ_n x^n.

After t iterations the result is

  a_t = α_1 λ_1^t x^1 + α_2 λ_2^t x^2 + ... + α_n λ_n^t x^n.

It is obvious that the eigenvalue λ_1, the one with the largest magnitude, dominates this expression for a big enough t. The vector a_t can be brought arbitrarily close to the eigenvector x^1 (with respect to the direction, without considering the relative lengths). In each iteration of the associative network, the vector x^1 attracts any other vector a_0 whose component α_1 is non-zero. An example is the following weight matrix:

  W = [ 2 0 ]
      [ 0 1 ]

Two eigenvectors of this matrix are (1, 0) and (0, 1), with respective eigenvalues 2 and 1. After t iterations any initial vector (x_1, x_2) with x_1 ≠ 0 is transformed into the vector (2^t x_1, x_2), which comes arbitrarily near to the direction of the vector (1, 0) for a large enough t.
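The iteration just described is the power method. A minimal sketch with the diagonal example matrix; the normalization step is my own addition, to keep only the direction:

```python
import numpy as np

# Eigenvector automaton with W = diag(2, 1): repeated application of W
# lets the eigenvalue of largest magnitude dominate, so any start vector
# with a nonzero first component is driven toward the direction (1, 0).
W = np.array([[2.0, 0.0],
              [0.0, 1.0]])

a = np.array([0.7, 0.7])            # alpha_1 != 0
for _ in range(30):
    a = a @ W                       # row-vector convention, as in the text
    a = a / np.linalg.norm(a)       # keep only the direction

print(np.round(a, 6))               # approaches the eigenvector (1, 0)
```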
In the language of the theory of dynamical systems, the vector (1, 0) is an attractor for all those two-dimensional vectors whose first component does not vanish.

12.2 Associative learning

The simple examples of the last section illustrate our goal: we want to use associative networks as dynamical systems whose attractors are exactly those
vectors we would like to store. In the case of the linear eigenvector automaton, unfortunately, just one vector absorbs almost the whole of input space. The secret of associative network design is locating as many attractors as possible in input space, each one of them with a well-defined and bounded influence region. To do this we must introduce a nonlinearity in the activation of the network units, so that the dynamical system becomes nonlinear.

Bipolar coding has some advantages over binary coding, as we have already seen several times, and this is still more noticeable in the case of associative networks. We will use a step function as nonlinearity, computing the output of each unit with the sign function

  sgn(x) = 1 if x ≥ 0, and −1 if x < 0.

The main advantage of bipolar coding is that random bipolar vectors have a greater probability of being orthogonal than random binary vectors. This will be an issue when we start storing vectors in the associative network.

Hebbian learning: the correlation matrix

Consider a single-layer network of k units with the sign function as activation function. We want to find the appropriate weights to map the n-dimensional input vector x to the k-dimensional output vector y. The simplest learning rule that can be used in this case is Hebbian learning, proposed in 1949 by the psychologist Donald Hebb [182]. His idea was that two neurons which are simultaneously active should develop a degree of interaction higher than those neurons whose activities are uncorrelated; in the latter case the interaction between the elements should be very low or zero [308].

Fig. 12.4. The Hebb rule: Δw_ij = γ x_i y_j

In the case of an associative network, Hebbian learning updates the weight w_ij by an increment Δw_ij which measures the correlation between the input x_i at site i and the output y_j of unit j. During learning, both input and output are clamped to the network and the weight update is given by

  Δw_ij = γ x_i y_j,

where the factor γ is a learning constant. The weight matrix W is set to zero before Hebbian learning is started. The learning rule is applied
to all weights, clamping the n-dimensional row vector x^1 to the input and the k-dimensional row vector y^1 to the output. The updated weight matrix W is the correlation matrix of the two vectors and has the form

  W = [w_ij]_{n×k} = [x_i^1 y_j^1]_{n×k}.

The matrix W maps the non-zero vector x^1 exactly to the vector y^1, as can be proved by looking at the excitation of the k output units of the network, which is given by the components of

  x^1 W = ( y_1^1 Σ_{i=1}^n x_i^1 x_i^1, y_2^1 Σ_{i=1}^n x_i^1 x_i^1, ..., y_k^1 Σ_{i=1}^n x_i^1 x_i^1 )
        = y^1 (x^1 · x^1).

For x^1 ≠ 0 it holds that x^1 · x^1 > 0, and the output of the network is

  sgn(x^1 W) = (y_1^1, y_2^1, ..., y_k^1) = y^1,

where the sign function is applied to each component of the vector of excitations.

In general, if we want to associate m n-dimensional non-zero vectors x^1, x^2, ..., x^m with m k-dimensional vectors y^1, y^2, ..., y^m, we apply Hebbian learning to each input-output pair and the resulting weight matrix is

  W = W_1 + W_2 + ... + W_m,   (12.3)

where each matrix W_l is the n × k correlation matrix of the vectors x^l and y^l, i.e., W_l = [x_i^l y_j^l]_{n×k}. If the input to the network is the vector x^p, the vector of unit excitations is

  x^p W = x^p (W_1 + W_2 + ... + W_m)
        = x^p W_p + Σ_{l≠p} x^p W_l
        = y^p (x^p · x^p) + Σ_{l≠p} y^l (x^l · x^p).

The excitation vector is equal to y^p multiplied by a positive constant, plus an additional perturbation term Σ_{l≠p} y^l (x^l · x^p) called the crosstalk. The network produces the desired vector y^p as output when the crosstalk is zero. This is the case whenever the input patterns x^1, x^2, ..., x^m are pairwise orthogonal. Yet the perturbation term can be different from zero and the mapping can still work; the crosstalk must only be sufficiently smaller than y^p (x^p · x^p). In that case the output of the network is
  sgn(x^p W) = sgn( y^p (x^p · x^p) + Σ_{l≠p} y^l (x^l · x^p) ).

Since x^p · x^p is a positive constant, it holds that

  sgn(x^p W) = sgn( y^p + Σ_{l≠p} y^l (x^l · x^p) / (x^p · x^p) ).

To produce the output y^p the equation

  y^p = sgn( y^p + Σ_{l≠p} y^l (x^l · x^p) / (x^p · x^p) )

must hold. This is the case when the absolute value of all components of the perturbation term

  Σ_{l≠p} y^l (x^l · x^p) / (x^p · x^p)

is smaller than 1. This means that the scalar products x^l · x^p must be small in comparison to the squared length of the vector x^p (equal to n for n-dimensional bipolar vectors). If randomly selected bipolar vectors are associated with other, also randomly selected, bipolar vectors, the probability is high that they will be nearly pairwise orthogonal, as long as not too many of them are selected. The crosstalk will then be small, and Hebbian learning will lead to an efficient set of weights for the associative network.

In the autoassociative case, in which a vector x^1 is to be associated with itself, the matrix W_1 can also be computed using Hebbian learning. The matrix is given by

  W_1 = (x^1)^T x^1.

Some authors define the autocorrelation matrix as W_1 = (1/n)(x^1)^T x^1. Since the additional positive constant does not change the sign of the excitation, we will not use it in our computations. If a set of m row vectors x^1, x^2, ..., x^m is to be autoassociated, the weight matrix W is given by

  W = (x^1)^T x^1 + (x^2)^T x^2 + ... + (x^m)^T x^m = X^T X,

where X is the m × n matrix whose rows are the m given vectors. The matrix W is now the autocorrelation matrix for the set of m given vectors. It is expected of W that it can lead to the reproduction of each one of the vectors x^1, x^2, ..., x^m when used as the weight matrix of the autoassociative network. This means that
  sgn(x^i W) = x^i   for i = 1, ..., m,   (12.4)

or alternatively

  sgn(XW) = X.   (12.5)

The function sgn(xW) can be considered to be a nonlinear operator. Equation (12.4) expresses the fact that the vectors x^1, x^2, ..., x^m are "eigenvectors" of this nonlinear operator. The learning problem for associative networks consists in constructing the matrix W for which the nonlinear operator sgn(xW) has these vectors as fixed points; this is just a generalization of the eigenvector concept to nonlinear functions. On the other hand, we do not want simply to use the identity matrix, since the objective of the whole exercise is to correct noisy input vectors: vectors located near to stored patterns should be attracted to them.

For W = X^T X, (12.5) demands that

  sgn(X X^T X) = X.   (12.6)

If the vectors x^1, x^2, ..., x^m are pairwise orthogonal, X X^T is the identity matrix multiplied by n and (12.6) is fulfilled. If the patterns are nearly pairwise orthogonal, X X^T is nearly the identity matrix (times n) and the associative network continues to work.

Geometric interpretation of Hebbian learning

The matrices W_1, W_2, ..., W_m in equation (12.3) can be interpreted geometrically. This provides us with a hint as to possible ways of improving the learning method. Consider first the matrix W_1, defined in the autoassociative case as W_1 = (x^1)^T x^1. Any input vector z fed to the network is projected into the linear subspace L_1 spanned by the vector x^1, since

  z W_1 = z (x^1)^T x^1 = (z (x^1)^T) x^1 = c_1 x^1,

where c_1 represents a constant. The matrix W_1 represents, in general, a non-orthogonal projection operator that projects the vector z into L_1. A similar interpretation can be derived for each weight matrix W_2 to W_m. The matrix W = Σ_{i=1}^m W_i is therefore a linear transformation which projects a vector z into the linear subspace spanned by the vectors x^1, x^2, ..., x^m, since

  zW = z W_1 + z W_2 + ... + z W_m = c_1 x^1 + c_2 x^2 + ... + c_m x^m,

with constants c_1, c_2, ..., c_m. In general, W does not represent an orthogonal projection.
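The autoassociative Hebbian storage scheme described above can be sketched as follows; the dimension, pattern count, and random seed are arbitrary choices of mine:

```python
import numpy as np

# Autoassociative Hebbian learning: W = X^T X, recall via sign(xW).
rng = np.random.default_rng(0)
n, m = 100, 3                         # dimension and number of patterns
X = rng.choice([-1, 1], size=(m, n))  # random bipolar patterns as rows

W = X.T @ X                           # autocorrelation (Hebbian) matrix

def recall(x, W):
    """One associative step: componentwise sign of the excitation xW."""
    return np.where(x @ W >= 0, 1, -1)

# With m much smaller than n the crosstalk is small with high probability,
# so every stored pattern is reproduced: sign(xW) = x for each row x of X.
stable = all(np.array_equal(recall(x, W), x) for x in X)
print(stable)
```

Raising m toward n makes the crosstalk term grow, and the stored patterns eventually stop being fixed points, as the experiments below illustrate.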
Networks as dynamical systems: some experiments

How good is Hebbian learning when applied to an associative network? Some empirical results can illustrate this point, as we proceed to show in this section. Since associative networks can be considered to be dynamical systems, one of the important issues is how to identify their attractors and how extended the basins of attraction are, that is, how large is the region of input space mapped to each one of the fixed points of the system. We have already discussed how to engineer a nonlinear operator whose "eigenvectors" are the patterns to be stored. Now we investigate this matter further and measure the changes in the basins of attraction when the patterns are learned one after the other using the Hebb rule.

In order to measure the size of the basins of attraction we must use a suitable metric. This will be the Hamming distance between bipolar vectors, which is just the number of components in which the two vectors differ.

Table 12.1. Percentage of 10-dimensional vectors with a Hamming distance (H) from 0 to 4 which converge to a stored vector in a single iteration. The number of stored vectors increases from 1 to 10.

Table 12.1 shows the results of a computer experiment. Ten randomly chosen vectors in 10-dimensional space were stored in the weight matrix of an associative network using Hebbian learning. The weight matrix was computed iteratively, first for one vector, then for two, etc. After each update of the weight matrix, the number of vectors with a Hamming distance from 0 to 4 to the stored vectors, and which could be mapped to each one of them by the network, was counted. The table shows the percentage of vectors that were mapped in a single iteration to a stored pattern. As can be seen, each new stored pattern reduces the size of the average basins of attraction, until with 10 stored vectors only 33% of the vectors differing in a single component are mapped to one of them. The table shows the average results for all stored patterns.
When only one vector has been stored, all vectors up to Hamming distance 4 converge to it. If two vectors have been stored, only 86.7% of the vectors with Hamming
distance 2 converge to the stored patterns. Figure 12.5 is a graphical representation of the changes to the basins of attraction after each new pattern has been stored. The basins of attraction become continuously smaller and less dense.

Fig. 12.5. The grey shading of each concentric circle represents the percentage of vectors with Hamming distance from 0 to 4 to the stored patterns which converge to them. The smallest circle represents the vectors with Hamming distance 0. Increasing the number of stored patterns from 1 to 8 reduces the size of the basins of attraction.

Table 12.1 shows only the results for vectors with a maximal Hamming distance of 4, because vectors with a Hamming distance greater than 5 are mapped not to x but to −x, where x is one of the stored patterns. This is so because

  sgn(−xW) = −sgn(xW) = −x.

These additional stored patterns are usually a nuisance. They are called spurious stable states. However, they are inevitable in the kind of associative networks considered in this chapter.

The results of Table 12.1 can be improved using a recurrent network of the type shown in Figure 12.3: the operator sgn(xW) is applied repetitively. If an input vector does not converge to a stored pattern in one iteration, its image is nevertheless rotated in the direction of the fixed point, and an additional iteration can then lead to convergence. The iterative method was described before for the eigenvector automaton. The disadvantage of that automaton was that a single vector attracts almost the whole of input space. The nonlinear operator sgn(xW) provides a solution for this problem. On the one hand, the stored vectors can be reproduced, i.e., sgn(xW) = x; there are no eigenvalues different from 1. On the other hand, the nonlinear sign function does not allow a single vector to attract most of input space: input space is divided more regularly into basins of attraction for the different stored vectors. Table 12.2 shows the convergence
of an autoassociative network when five iterations are used. Only the first seven vectors used in Table 12.1 were considered again for this table.

Table 12.2. Percentage of 10-dimensional vectors with a Hamming distance (H) from 0 to 4 which converge to a stored vector in five iterations. The number of stored vectors increases from 1 to 7.

The table shows an improvement in the convergence properties of the network, that is, an expansion of the average size and density of the basins of attraction. After six stored vectors the results of Table 12.2 become very similar to those of Table 12.1.

The results of Table 12.1 and Table 12.2 can be compared graphically. Figure 12.6 shows the percentage of vectors with a Hamming distance of 0 to 5 from a stored vector that converge to it. The broken line shows the results for the recurrent network of Table 12.2; the continuous line summarizes the results for a single iteration. Comparison of the single-shot method and the recurrent computation shows that the latter is more effective, at least as long as not too many vectors have been stored. When the capacity of the weight matrix begins to be put under strain, the differences between the two methods disappear.

The sizes of the basins of attraction in the feed-forward and the recurrent network can be compared using an index I for their size. This index can be defined as

  I = Σ_{h=0}^{5} h p_h,

where p_h represents the percentage of vectors with Hamming distance h from a stored vector which converge to it. Two indices were computed using the data in Table 12.1 and Table 12.2; the results are shown in Figure 12.7. Both indices are clearly different up to five stored vectors.

As all these comparisons show, the size of the basins of attraction falls very fast. This can be explained by remembering that five stored vectors imply at least ten actually stored patterns, since together with x the vector −x is also stored. Additionally, the distortion of the basins of attraction produced by
Hebbian learning leads to the appearance of other spurious stable states. A computer experiment was used to count the number of spurious states that appear when 2 to 7 random bipolar vectors are stored in an associative matrix. Table 12.3 shows the fast increase in the number of spurious states. Spurious states other than the negative stored patterns appear because the crosstalk becomes too large. An alternative way to minimize the crosstalk term is to use another kind of learning rule, as we discuss in the next section.

Fig. 12.6. Comparison of the percentage of vectors with a Hamming distance from 0 to 5 from the stored patterns which converge to them, for a single iteration and for the recursive computation
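The recurrent recall procedure compared above can be sketched as follows; the single stored pattern, the noise level, and the iteration limit are my own choices:

```python
import numpy as np

def recurrent_recall(x, W, max_iter=5):
    """Apply the operator sign(xW) repeatedly until a fixed point is hit."""
    for _ in range(max_iter):
        x_new = np.where(x @ W >= 0, 1, -1)
        if np.array_equal(x_new, x):     # converged to a fixed point
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(1)
n = 100
pattern = rng.choice([-1, 1], size=n)
W = np.outer(pattern, pattern)           # Hebbian storage of one pattern

noisy = pattern.copy()
noisy[:10] = -noisy[:10]                 # flip 10 of the 100 components
recovered = recurrent_recall(noisy, W)

print(np.array_equal(recovered, pattern))
```

With a single stored pattern the noisy input is corrected in one step; with more patterns stored, the extra iterations are what enlarge the basins of attraction.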
Fig. 12.7. Comparison of the indices of attraction for an associative network with and without feedback

Table 12.3. Increase in the number of spurious states when the number of stored patterns increases from 2 to 7 (dimension 10). For each number of stored vectors the table gives the number of negative vectors, the number of other spurious fixed points, and the total.

Another visualization

A second kind of visualization of the basins of attraction allows us to perceive their gradual fragmentation in an associative network. Figure 12.8 shows the result of an experiment using input vectors in a 100-dimensional space. Several vectors were stored, and the size of the basins of attraction of one of them and its bipolar complement were monitored using the following technique: the 100 components of each vector were divided into two groups of 50 components using random selection. The Hamming distance of a vector to the stored vector was measured by counting the number of different components in each group. In this way a vector z with Hamming distances 20 and 30 to the vector x, according to the chosen partition into two groups of components, is a vector with total Hamming distance 50 to the stored vector. The chosen partition allows us to represent the vector z as a point at the position (20, 30) in a two-dimensional grid. This point is colored black if z is associated with x by the associative network; otherwise it is left white. For each point in the grid a vector with the corresponding Hamming distances to x was generated. As Figure 12.8 shows, when only 4 vectors have been stored, the vector x and its complement −x attract a considerable portion of their neighborhood.
Fig. 12.8. Basin of attraction in 100-dimensional space of a stored vector. The number of stored patterns is 4, 6, 10 and 15, from top to bottom, left to right.

As the number of stored vectors increases, the basins of attraction become ragged and smaller, until with 15 stored vectors they are starting to become unusable. This is near to the theoretical limit for the capacity of an associative network.

12.3 The capacity problem

The experiments of the previous sections have shown that the basins of attraction of stored patterns deteriorate every time new vectors are presented to an associative network. If the crosstalk term becomes too large, it can even happen that previously stored patterns are lost, because when they are presented to the network one or more of their bits are flipped by the associative computation. We would like to keep the probability that this happens low, so that stored patterns can always be recalled. Depending on the upper bound that we want to put on this probability, some limits arise on the number of patterns m that can be stored safely in an autoassociative network with an n × n weight matrix W. This is called the maximum capacity of the network, and an often-quoted rule of thumb is m ≈ 0.18n. It is actually easy to derive this rule [189].
The crosstalk term for n-dimensional bipolar vectors and m patterns in the autoassociative case is

  (1/n) Σ_{l≠p} x^l (x^l · x^p).

This can flip a bit of a stored pattern if the magnitude of this term is larger than 1 and it is of opposite sign. Assume that the stored vectors are chosen randomly from input space. In this case the crosstalk term for bit i of the input vector is given by

  (1/n) Σ_{l≠p} x_i^l (x^l · x^p).

This is a sum of (m−1)n bipolar bits. Since the components of each pattern have been selected randomly, we can think of mn random bit selections (for large m and n). The expected value of the sum is zero. It can be shown that the sum has a binomial distribution, and for large mn we can approximate it with a normal distribution. The standard deviation of the distribution is σ = sqrt(m/n). The probability of error P that the sum becomes larger than 1 or smaller than −1 is given by the area under the Gaussian from 1 to ∞ or from −∞ to −1, that is,

  P = (1 / (sqrt(2π) σ)) ∫_1^∞ e^{−x²/(2σ²)} dx.

If we set the upper bound for a one-bit failure at 0.01, the above expression can be solved numerically to find the appropriate combination of m and n. The numerical solution leads to m ≈ 0.18n, as mentioned before.

Note that these are just probabilistic results. If the patterns are correlated, even m < 0.18n can produce problems. Also, we did not consider the case of a recursive computation, and we only considered the case of a single false bit. A more precise computation does not essentially change the order of magnitude of these results: an associative network remains statistically safe only if fewer than 0.18n patterns are stored. If we demand perfect recall of all patterns, more stringent bounds are needed [189]. The experiments in the previous section are on such a small scale that no problems were found with the recall of stored patterns even when m was much larger than 0.18n.

12.4 The pseudoinverse

Hebbian learning produces good results when the stored patterns are nearly orthogonal. This is the case when m bipolar vectors are selected randomly from an n-dimensional space, n is large enough and m much smaller than n.
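The numerical solution behind the 0.18n rule from the previous section can be sketched as follows; the bisection search is my own way of inverting the Gaussian tail probability:

```python
import math

# Capacity rule of thumb: find the load alpha = m/n at which the
# probability that the crosstalk exceeds 1 reaches 0.01, using the
# normal approximation with sigma = sqrt(m/n).
def bit_error_prob(alpha):
    sigma = math.sqrt(alpha)
    # tail area of N(0, sigma^2) above 1, via the complementary error function
    return 0.5 * math.erfc(1.0 / (sigma * math.sqrt(2.0)))

lo, hi = 0.01, 1.0                   # bracket for the load alpha = m/n
for _ in range(60):                  # bisection: the error grows with alpha
    mid = 0.5 * (lo + hi)
    if bit_error_prob(mid) < 0.01:
        lo = mid
    else:
        hi = mid

print(round(lo, 3))                  # close to 0.18, the quoted rule of thumb
```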
In real applications the patterns are almost always correlated, and the crosstalk in the expression
  x^p W = y^p (x^p · x^p) + Σ_{l≠p} y^l (x^l · x^p)

affects the recall process, because the scalar products x^l · x^p, for l ≠ p, are not small enough. This causes a reduction in the capacity of the associative network, that is, of the number of patterns that can be stored and later recalled. Consider the example of scanned letters of the alphabet, digitized using a 16 × 16 grid of pixels. The pattern vectors do not occupy input space homogeneously but concentrate around a small region. We must therefore look for alternative learning methods capable of minimizing the crosstalk between the stored patterns. One of the preferred methods is using the pseudoinverse of the pattern matrix instead of the correlation matrix.

Definition and properties of the pseudoinverse

Let x^1, x^2, ..., x^m be n-dimensional vectors to be associated with the m k-dimensional vectors y^1, y^2, ..., y^m. The matrix X is, as before, the m × n matrix whose rows are the vectors x^1, x^2, ..., x^m, and the rows of the m × k matrix Y are the vectors y^1, y^2, ..., y^m. We are looking for a weight matrix W such that XW = Y. Since in general m ≠ n, and the vectors x^1, x^2, ..., x^m are not necessarily linearly independent, the matrix X cannot be inverted. However, we can look for a matrix W which minimizes the quadratic norm of the matrix XW − Y, that is, the sum of the squares of all its elements. It is a well-known result of linear algebra that the expression ‖XW − Y‖² is minimized by the matrix W = X^+ Y, where X^+ is the so-called pseudoinverse of the matrix X. In some sense the pseudoinverse is the best approximation to an inverse that we can get, and if X^{-1} exists, then X^{-1} = X^+.

Definition 14. The pseudoinverse of a real m × n matrix X is the real matrix X^+ with the following properties:

a) X X^+ X = X.
b) X^+ X X^+ = X^+.
c) X^+ X and X X^+ are symmetrical.

It can be shown that the pseudoinverse of a matrix X always exists and is unique [14]. We can now prove the result mentioned before [185].

Proposition 18. Let X be an m × n real matrix and Y be an m × k real matrix.
The n × k matrix W = X^+ Y minimizes the quadratic norm of the matrix XW − Y.
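Before the formal proof, a quick numerical check (a sketch; the random matrices and perturbations are my own choices): perturbing the candidate minimizer W = X^+ Y in any direction can only increase the quadratic norm.

```python
import numpy as np

# Proposition 18: W = X^+ Y minimizes ||XW - Y||^2 over all W.
rng = np.random.default_rng(2)
X = rng.standard_normal((5, 3))       # m = 5 patterns of dimension n = 3
Y = rng.standard_normal((5, 2))       # k = 2 outputs

W_opt = np.linalg.pinv(X) @ Y         # Moore-Penrose pseudoinverse (SVD-based)
E_opt = np.sum((X @ W_opt - Y) ** 2)  # quadratic norm ||XW - Y||^2

# every random perturbation of the minimizer yields an error >= E_opt
no_better = all(
    np.sum((X @ (W_opt + 0.1 * rng.standard_normal(W_opt.shape)) - Y) ** 2)
    >= E_opt
    for _ in range(20)
)
print(no_better)
```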
Proof. Define E = ‖XW − Y‖². The value of E can be computed from the equation E = tr(S), where S = (XW − Y)^T (XW − Y) and tr(·) denotes the function that computes the trace of a matrix, that is, the sum of all its diagonal elements. The matrix S can be rewritten as

  S = (X^+ Y − W)^T X^T X (X^+ Y − W) + Y^T (I − X X^+) Y.   (12.7)

This can be verified by bringing S back to its original form. Equation (12.7) can be written as

  S = (X^+ Y − W)^T (X^T X X^+ Y − X^T X W) + Y^T (I − X X^+) Y.

Since the matrix X X^+ is symmetrical (from the definition of X^+), the above expression transforms to

  S = (X^+ Y − W)^T ((X X^+ X)^T Y − X^T X W) + Y^T (I − X X^+) Y.

Since from the definition of X^+ we know that X X^+ X = X, it follows that

  S = (X^+ Y − W)^T (X^T Y − X^T X W) + Y^T (I − X X^+) Y
    = (X^+ Y − W)^T X^T (Y − X W) + Y^T (I − X X^+) Y
    = (X X^+ Y − X W)^T (Y − X W) + Y^T (I − X X^+) Y
    = (−X W)^T (Y − X W) + Y^T X X^+ (Y − X W) + Y^T (I − X X^+) Y
    = (−X W)^T (Y − X W) + Y^T (−X W) + Y^T Y
    = (Y − X W)^T (Y − X W).

This confirms that equation (12.7) is correct. Using (12.7), the value of E can be written as

  E = tr( (X^+ Y − W)^T X^T X (X^+ Y − W) ) + tr( Y^T (I − X X^+) Y ),

since the trace of a sum of two matrices is the sum of their traces. The trace of the second matrix is a constant. The trace of the first is positive or zero, since the trace of a matrix of the form A^T A is always positive or zero. The minimum of E is therefore reached when W = X^+ Y, as we wanted to prove.

An interesting corollary of the above proposition is that the pseudoinverse of a matrix X minimizes the norm of the matrix X X^+ − I. This is what we mean when we say that the pseudoinverse is the second-best choice if the inverse of a matrix is not defined.

The quadratic norm E has a straightforward interpretation for our associative learning task. Minimizing E = ‖XW − Y‖² amounts to minimizing
the sum of the quadratic norms of the vectors x^i W − y^i, where (x^i, y^i), for i = 1, ..., m, are the vector pairs we want to associate. The identity

  x^i W = y^i + (x^i W − y^i)

always holds. Since the desired result of the computation is y^i, the term x^i W − y^i represents the deviation from the ideal situation, that is, the crosstalk in the associative network. The quadratic norm E defined before can also be written as

  E = Σ_{i=1}^m ‖x^i W − y^i‖².

Minimizing E amounts to minimizing the deviation from perfect recall. The pseudoinverse can thus be used to compute an optimal weight matrix W.

Table 12.4. Percentage of 10-dimensional vectors with a Hamming distance (H) from 0 to 4 which converge to a stored vector. The number of stored vectors increases from 1 to 7. The weight matrix is X^+ X.

An experiment similar to the ones performed before (Table 12.4) shows that using the pseudoinverse to compute the weight matrix produces better results than when the correlation matrix is used. Only one iteration was performed during the associative recall. The table also shows that if more than five vectors are stored, the quality of the results does not improve; this can be explained by noting that the capacity of the associative network has been exceeded. The stored vectors were again randomly chosen. The pseudoinverse method performs better than Hebbian learning when the patterns are correlated.

Orthogonal projections

In the section on the geometric interpretation of Hebbian learning we arrived at the conclusion that when an n-dimensional vector x^1 is autoassociated, the weight matrix (x^1)^T x^1 projects the whole of input space into the linear subspace spanned by
x_1. However, the projection is not an orthogonal projection. The pseudoinverse can be used to construct an operator capable of projecting the input vector orthogonally onto the subspace spanned by the stored patterns.

What is the advantage of performing an orthogonal projection? An autoassociative network must associate each input vector with the nearest stored pattern and must produce this as the result. In nonlinear associative networks this is done in two steps: the input vector x is first projected onto the linear subspace spanned by the stored patterns, and then the nonlinear function sgn(·) is used to find the bipolar vector nearest to the projection. If the projection falsifies the information about the distance of the input to the stored patterns, then a suboptimal selection could be the result. Figure 12.9 illustrates this problem. The vector x is projected orthogonally and non-orthogonally onto the space spanned by the vectors x_1 and x_2. The vector x is closer to x_2 than to x_1. The orthogonal projection of x is also closer to x_2. However, the non-orthogonal projection is closer to x_1 than to x_2. This is certainly a scenario we should try to avoid.

Fig. 12.9. Orthogonal and non-orthogonal projections of a vector x

In what follows we consider only the orthogonal projection. If we use the scalar product as the metric to assess the distance between patterns, the distance between the input vector x and the stored vector x_i is given by

   d_i = x · x_i = (x̃ + x̂) · x_i = x̃ · x_i,        (12.8)

where x̃ represents the orthogonal projection of x onto the vector subspace spanned by the stored vectors, and x̂ = x − x̃. Equation (12.8) shows that the original distance d_i is equal to the distance between the projection x̃ and the vector x_i. The orthogonal projection does not falsify the original information. We must therefore only show that the pseudoinverse provides a simple way
to find the orthogonal projection onto the linear subspace defined by the stored patterns.

Consider m n-dimensional vectors x_1, x_2,...,x_m, where m < n. Let x̃ be the orthogonal projection of an arbitrary input vector x onto the subspace spanned by the m given vectors. In this case x = x̃ + x̂, where x̂ is orthogonal to the vectors x_1, x_2,...,x_m. We want to find an n × n matrix W such that xW = x̃. Since x̂ = x − x̃, the matrix W must fulfill

   x̂ = x(I − W).        (12.9)

Let X be the m × n matrix whose rows are the vectors x_1, x_2,...,x_m. Then it holds for x̂ that X x̂^T = 0, since x̂ is orthogonal to all vectors x_1, x_2,...,x_m. A possible solution of this equation is

   x̂^T = (I − X^+ X) x^T,        (12.10)

because in this case X x̂^T = X (I − X^+ X) x^T = (X − X) x^T = 0. Since the matrix I − X^+ X is symmetric, it holds that

   x̂ = x (I − X^+ X).        (12.11)

A direct comparison of equations (12.9) and (12.11) shows that W = X^+ X, since the orthogonal projection x̂ is unique. This amounts to a proof that the orthogonal projection of a vector x can be computed by multiplying it by the matrix X^+ X.

What happens if the number of patterns m is larger than the dimension of the input space? Assume that the m n-dimensional vectors x_1, x_2,...,x_m are to be associated with the m real numbers y_1, y_2,...,y_m. Let X be the matrix whose rows are the given patterns and let y = (y_1, y_2,...,y_m). We are looking for the n × 1 matrix W such that XW = y^T. In general there is no exact solution for this system of m equations in n variables. The best approximation is given by W = X^+ y^T, which minimizes the quadratic error ||XW − y^T||^2. The solution X^+ y^T is therefore the same as the one we would find using linear regression.
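Both results of this section are easy to check numerically. The sketch below (the matrix sizes and random data are arbitrary, illustrative choices, not taken from the text) verifies that X^+ X behaves as an orthogonal projection, that scalar products with the stored vectors survive the projection as in equation (12.8), and that X^+ y^T reproduces the least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Case m < n: P = X^+ X is the orthogonal projection onto the row space of X.
X = rng.normal(size=(3, 8))          # three stored vectors in R^8 (rows of X)
P = np.linalg.pinv(X) @ X
assert np.allclose(P, P.T)           # symmetric
assert np.allclose(P @ P, P)         # idempotent: projecting twice changes nothing
assert np.allclose(X @ P, X)         # stored patterns are left unchanged

# Equation (12.8): scalar products with the stored vectors survive projection.
x = rng.normal(size=8)
assert np.allclose(X @ x, X @ (x @ P))

# Case m > n: W = X^+ y^T coincides with the linear regression solution.
X2 = rng.normal(size=(12, 4))        # twelve patterns of dimension four
y = rng.normal(size=12)
W = np.linalg.pinv(X2) @ y
W_lstsq, *_ = np.linalg.lstsq(X2, y, rcond=None)
assert np.allclose(W, W_lstsq)
```

If all assertions pass silently, both properties hold for the sampled data.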
Fig. 12.10. Backpropagation network for a linear associative memory

The only open question is how best to compute the pseudoinverse. We already discussed in Chap. 9 that there are several algorithms which can be used, but a simple approach is to compute an approximation using a backpropagation network. A network such as the one shown in Figure 12.10 can be used to find the appropriate weights for an associative memory. The units in the first layer are linear associators. The learning task consists in finding the weight matrix W with elements w_ij that produces the best mapping from the vectors x_1, x_2,...,x_m to the vectors y_1, y_2,...,y_m. The network is extended as shown in the figure: for the i-th input vector, the output of the network is compared to the vector y_i and the quadratic error E_i is computed. The total quadratic error E = Σ_{i=1}^{m} E_i is the quadratic norm of the matrix XW − Y. Backpropagation can find the matrix W which minimizes the expression ||XW − Y||^2. If the input vectors are linearly independent and m < n, then a solution with zero error can be found. If m ≥ n and Y = I, the network of Figure 12.10 can be used to find the pseudoinverse X^+ or the inverse X^−1 (if it exists). This algorithm for the computation of the pseudoinverse brings backpropagation networks into direct contact with the problem of associative networks. Strictly speaking, we should minimize both ||XW − Y||^2 and the quadratic norm of W. This is equivalent to introducing a weight decay term in the backpropagation computation.

Holographic memories

Associative networks have found many applications in pattern recognition tasks. The figure below shows the canonical example. The faces were scanned and encoded as vectors: a white pixel was encoded as 1 and a black pixel as −1. Some images were stored in an associative network. When a key, that is, an incomplete or noisy version of one of the stored images, is later presented to the network, the network completes the image and finds the stored image with the greatest similarity. This is called associative recall.
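The training scheme just described can be imitated with plain gradient descent on E = ||XW − Y||^2, since backpropagation through a single linear layer reduces to exactly this. In the sketch below the sizes, learning rate, and iteration count are illustrative assumptions; starting from W = 0 keeps every iterate in the row space of X, so with Y = I the descent converges to the minimum-norm solution, which is the pseudoinverse X^+ itself:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))            # m = 6 input patterns of dimension n = 4
Y = np.eye(6)                          # with Y = I the optimal weights are X^+

lr = 0.5 / np.linalg.norm(X, 2) ** 2   # step size safely below the stability limit
W = np.zeros((4, 6))                   # zero initialization: minimum-norm start
for _ in range(20000):
    grad = 2 * X.T @ (X @ W - Y)       # gradient of ||XW - Y||^2 with respect to W
    W -= lr * grad

assert np.allclose(W, np.linalg.pinv(X), atol=1e-6)
```

Adding an explicit weight decay term to the gradient would implement the regularized version mentioned above; here the zero initialization already selects the minimum-norm minimizer.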
Fig. Associative recall of stored patterns

This kind of application can be implemented on conventional workstations. If the dimension of the images becomes too large (for example 10^6 pixels per image), the only alternative is to use innovative computer architectures, such as so-called holographic memories. They work with methods similar to those described here, but the computation is performed in parallel using optical elements. We leave the discussion of these architectures for a later chapter.

Translation invariant pattern recognition

It would be nice if the process of associative recall could be made robust in the sense that patterns which are very similar, except for a translation in the plane, could be identified as being, in fact, similar. This can be done using the two-dimensional Fourier transform of the scanned images. The figure below shows an example of two identical patterns positioned with a small displacement from each other.

Fig. The pattern 0 at two positions of a grid

The two-dimensional discrete Fourier transform of a two-dimensional array is computed by first performing a one-dimensional Fourier transform of the rows of the array and then a one-dimensional Fourier transform of the columns of the result (or vice versa). It is easy to show that the absolute values of the Fourier coefficients do not change under a translation of the two-dimensional pattern. Since the Fourier transform also has the property that it preserves angles, that is, similar patterns in the original domain are also similar in
the Fourier domain, this preprocessing can be used to implement translation-invariant associative recall. In this case the vectors which are stored in the network are the absolute values of the Fourier coefficients of each pattern.

Definition 15. Let a(x, y) denote an array of real values, where x and y are integers such that 0 ≤ x ≤ N − 1 and 0 ≤ y ≤ N − 1. The two-dimensional discrete Fourier transform F_a(f_x, f_y) of a(x, y) is given by

   F_a(f_x, f_y) = (1/N) Σ_{y=0}^{N−1} Σ_{x=0}^{N−1} a(x, y) e^{−(2πi/N) x f_x} e^{−(2πi/N) y f_y},

where 0 ≤ f_x ≤ N − 1 and 0 ≤ f_y ≤ N − 1.

Consider two arrays a(x, y) and b(x, y) of real values. Assume that they represent the same pattern, but with a certain displacement, that is, b(x, y) = a(x + d_x, y + d_y), where d_x and d_y are two given integers. Note that the additions x + d_x and y + d_y are performed modulo N (that is, the displaced patterns wrap around the two-dimensional array). The two-dimensional Fourier transform of the array b(x, y) is therefore

   F_b(f_x, f_y) = (1/N) Σ_{y=0}^{N−1} Σ_{x=0}^{N−1} a(x + d_x, y + d_y) e^{−(2πi/N) x f_x} e^{−(2πi/N) y f_y}.

With the change of indices x' = (x + d_x) mod N and y' = (y + d_y) mod N the above expression transforms to

   F_b(f_x, f_y) = (1/N) Σ_{y'=0}^{N−1} Σ_{x'=0}^{N−1} a(x', y') e^{−(2πi/N) x' f_x} e^{−(2πi/N) y' f_y} e^{(2πi/N) d_x f_x} e^{(2πi/N) d_y f_y}.

The two sums can be rearranged and the constant factors taken out of the double sum to yield

   F_b(f_x, f_y) = (1/N) e^{(2πi/N) d_x f_x} e^{(2πi/N) d_y f_y} Σ_{y'=0}^{N−1} Σ_{x'=0}^{N−1} a(x', y') e^{−(2πi/N) x' f_x} e^{−(2πi/N) y' f_y},

which can be written as

   F_b(f_x, f_y) = e^{(2πi/N) d_x f_x} e^{(2πi/N) d_y f_y} F_a(f_x, f_y).

This expression tells us that the Fourier coefficients of the array b(x, y) are identical to the coefficients of the array a(x, y), except for a phase factor. Since taking the absolute values of the Fourier coefficients eliminates any phase factor, it holds that
   |F_b(f_x, f_y)| = |e^{(2πi/N) d_x f_x} e^{(2πi/N) d_y f_y}| |F_a(f_x, f_y)| = |F_a(f_x, f_y)|,

and this proves that the absolute values of the Fourier coefficients for both patterns are identical.

Fig. Absolute values of the two-dimensional Fourier transform of the two displaced patterns

The figure above shows the absolute values of the two-dimensional Fourier coefficients for the two displaced patterns. They are identical, as expected from the above considerations. Note that this type of comparison holds only as long as the background noise is low. If there is a background with a certain structure, it should be displaced together with the patterns; otherwise the analysis performed above does not hold. This is what makes the translation-invariant recognition of patterns a difficult problem when the background is structured. Recognizing faces in real photographs can become extremely difficult when a typical background (a forest or a street, for example) is taken into account.

12.5 Historical and bibliographical remarks

Associative networks have been studied for a long time. Donald Hebb considered in the 1950s how neural assemblies can self-organize into feedback circuits capable of recognizing patterns [308]. Hebbian learning has been interpreted in different ways and several modifications of the basic algorithm have been proposed, but they all have three aspects in common: Hebbian learning is a local, interactive, and time-dependent mechanism [73]. A synaptic phenomenon in the hippocampus, known as long-term potentiation, is thought to be produced by Hebbian modification of the synaptic strength.

There were also other experiments with associative memories in the 1960s, for example the hardware implementations by Karl Steinbuch of his learning matrix. A more precise mathematical description of associative networks was given by Kohonen in the 1970s [253]. His experiments with many different
classes of associative memories contributed enormously to the renewed surge of interest in neural models. Some researchers tried to find biologically plausible models of associative memories following his lead [89]. Some discrete variants of correlation matrices were analyzed in the 1980s, for example by Kanerva, who considered the case of weight matrices with only two classes of elements, 0 or 1 [233]. In recent times much work has been done on understanding the dynamical properties of recurrent associative networks [231]. It is important to find methods to describe the changes in the basins of attraction of the stored patterns [294]. More recently Haken has proposed his model of a synergetic computer, a kind of associative network with continuous dynamics in which synergetic effects play the crucial role [178]. The fundamental difference from conventional models is the continuous, instead of discrete, dynamics of the network, regulated by differential equations.

The Moore–Penrose pseudoinverse has become an essential instrument in linear algebra and statistics [14]. It has a simple geometric interpretation and can be computed using standard linear algebraic methods. It makes it possible to deal efficiently with correlated data of the kind found in real applications.

Exercises

1. The Hamming distance between two n-dimensional binary vectors x and y is given by h = Σ_{i=1}^{n} (x_i(1 − y_i) + y_i(1 − x_i)). Write h as a function of the scalar product x · y. Find a similar expression for bipolar vectors. Show that measuring the similarity of vectors using the Hamming distance is equivalent to measuring it using the scalar product.
2. Implement an associative memory capable of recognizing the ten digits from 0 to 9, even in the case of a noisy input.
3. Preprocess the data so that the pattern recognition process is made translation invariant.
4. Find the inverse of an invertible matrix using the backpropagation algorithm.
5. Show that the pseudoinverse of a matrix is unique.
6. Does the backpropagation algorithm always find the pseudoinverse of any given matrix?
7.
Show that the two-dimensional Fourier transform is a composition of two one-dimensional Fourier transforms.
8. Express the two-dimensional Fourier transform as the product of three matrices (see the expression for the one-dimensional FT in Chap. 9).
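As a starting point for exercises 3, 7, and 8, the translation invariance of the Fourier magnitudes proved in this chapter can be observed directly with NumPy (the grid size and displacement below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((8, 8))                      # a pattern on an 8x8 grid
b = np.roll(a, shift=(2, 3), axis=(0, 1))   # the same pattern, displaced with
                                            #   wrap-around, as in the text

Fa = np.fft.fft2(a)                         # fft2 composes two 1-d transforms,
Fb = np.fft.fft2(b)                         #   as exercise 7 asks you to show

assert not np.allclose(Fa, Fb)              # coefficients differ by a phase factor
assert np.allclose(np.abs(Fa), np.abs(Fb))  # but their absolute values coincide
```

Storing np.abs(Fa) instead of the raw pattern is exactly the translation-invariant preprocessing described in the text.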
CMSC828G Prncples of Data Mnng Lecture #9 Today s Readng: HMS, chapter 9 Today s Lecture: Descrptve Modelng Clusterng Algorthms Descrptve Models model presents the man features of the data, a global summary
More informationInstitute of Informatics, Faculty of Business and Management, Brno University of Technology,Czech Republic
Lagrange Multplers as Quanttatve Indcators n Economcs Ivan Mezník Insttute of Informatcs, Faculty of Busness and Management, Brno Unversty of TechnologCzech Republc Abstract The quanttatve role of Lagrange
More information2008/8. An integrated model for warehouse and inventory planning. Géraldine Strack and Yves Pochet
2008/8 An ntegrated model for warehouse and nventory plannng Géraldne Strack and Yves Pochet CORE Voe du Roman Pays 34 B-1348 Louvan-la-Neuve, Belgum. Tel (32 10) 47 43 04 Fax (32 10) 47 43 01 E-mal: corestat-lbrary@uclouvan.be
More information"Research Note" APPLICATION OF CHARGE SIMULATION METHOD TO ELECTRIC FIELD CALCULATION IN THE POWER CABLES *
Iranan Journal of Scence & Technology, Transacton B, Engneerng, ol. 30, No. B6, 789-794 rnted n The Islamc Republc of Iran, 006 Shraz Unversty "Research Note" ALICATION OF CHARGE SIMULATION METHOD TO ELECTRIC
More informationRealistic Image Synthesis
Realstc Image Synthess - Combned Samplng and Path Tracng - Phlpp Slusallek Karol Myszkowsk Vncent Pegoraro Overvew: Today Combned Samplng (Multple Importance Samplng) Renderng and Measurng Equaton Random
More informationHow To Understand The Results Of The German Meris Cloud And Water Vapour Product
Ttel: Project: Doc. No.: MERIS level 3 cloud and water vapour products MAPP MAPP-ATBD-ClWVL3 Issue: 1 Revson: 0 Date: 9.12.1998 Functon Name Organsaton Sgnature Date Author: Bennartz FUB Preusker FUB Schüller
More informationSolution: Let i = 10% and d = 5%. By definition, the respective forces of interest on funds A and B are. i 1 + it. S A (t) = d (1 dt) 2 1. = d 1 dt.
Chapter 9 Revew problems 9.1 Interest rate measurement Example 9.1. Fund A accumulates at a smple nterest rate of 10%. Fund B accumulates at a smple dscount rate of 5%. Fnd the pont n tme at whch the forces
More informationSPEE Recommended Evaluation Practice #6 Definition of Decline Curve Parameters Background:
SPEE Recommended Evaluaton Practce #6 efnton of eclne Curve Parameters Background: The producton hstores of ol and gas wells can be analyzed to estmate reserves and future ol and gas producton rates and
More informationPSYCHOLOGICAL RESEARCH (PYC 304-C) Lecture 12
14 The Ch-squared dstrbuton PSYCHOLOGICAL RESEARCH (PYC 304-C) Lecture 1 If a normal varable X, havng mean µ and varance σ, s standardsed, the new varable Z has a mean 0 and varance 1. When ths standardsed
More informationSIMPLE LINEAR CORRELATION
SIMPLE LINEAR CORRELATION Smple lnear correlaton s a measure of the degree to whch two varables vary together, or a measure of the ntensty of the assocaton between two varables. Correlaton often s abused.
More informationAn interactive system for structure-based ASCII art creation
An nteractve system for structure-based ASCII art creaton Katsunor Myake Henry Johan Tomoyuk Nshta The Unversty of Tokyo Nanyang Technologcal Unversty Abstract Non-Photorealstc Renderng (NPR), whose am
More informationExtending Probabilistic Dynamic Epistemic Logic
Extendng Probablstc Dynamc Epstemc Logc Joshua Sack May 29, 2008 Probablty Space Defnton A probablty space s a tuple (S, A, µ), where 1 S s a set called the sample space. 2 A P(S) s a σ-algebra: a set
More informationLatent Class Regression. Statistics for Psychosocial Research II: Structural Models December 4 and 6, 2006
Latent Class Regresson Statstcs for Psychosocal Research II: Structural Models December 4 and 6, 2006 Latent Class Regresson (LCR) What s t and when do we use t? Recall the standard latent class model
More informationPOLYSA: A Polynomial Algorithm for Non-binary Constraint Satisfaction Problems with and
POLYSA: A Polynomal Algorthm for Non-bnary Constrant Satsfacton Problems wth and Mguel A. Saldo, Federco Barber Dpto. Sstemas Informátcos y Computacón Unversdad Poltécnca de Valenca, Camno de Vera s/n
More informationSingle and multiple stage classifiers implementing logistic discrimination
Sngle and multple stage classfers mplementng logstc dscrmnaton Hélo Radke Bttencourt 1 Dens Alter de Olvera Moraes 2 Vctor Haertel 2 1 Pontfíca Unversdade Católca do Ro Grande do Sul - PUCRS Av. Ipranga,
More informationProduction. 2. Y is closed A set is closed if it contains its boundary. We need this for the solution existence in the profit maximization problem.
Producer Theory Producton ASSUMPTION 2.1 Propertes of the Producton Set The producton set Y satsfes the followng propertes 1. Y s non-empty If Y s empty, we have nothng to talk about 2. Y s closed A set
More informationNPAR TESTS. One-Sample Chi-Square Test. Cell Specification. Observed Frequencies 1O i 6. Expected Frequencies 1EXP i 6
PAR TESTS If a WEIGHT varable s specfed, t s used to replcate a case as many tmes as ndcated by the weght value rounded to the nearest nteger. If the workspace requrements are exceeded and samplng has
More informationStochastic epidemic models revisited: Analysis of some continuous performance measures
Stochastc epdemc models revsted: Analyss of some contnuous performance measures J.R. Artalejo Faculty of Mathematcs, Complutense Unversty of Madrd, 28040 Madrd, Span A. Economou Department of Mathematcs,
More informationRisk-based Fatigue Estimate of Deep Water Risers -- Course Project for EM388F: Fracture Mechanics, Spring 2008
Rsk-based Fatgue Estmate of Deep Water Rsers -- Course Project for EM388F: Fracture Mechancs, Sprng 2008 Chen Sh Department of Cvl, Archtectural, and Envronmental Engneerng The Unversty of Texas at Austn
More informationIMPACT ANALYSIS OF A CELLULAR PHONE
4 th ASA & μeta Internatonal Conference IMPACT AALYSIS OF A CELLULAR PHOE We Lu, 2 Hongy L Bejng FEAonlne Engneerng Co.,Ltd. Bejng, Chna ABSTRACT Drop test smulaton plays an mportant role n nvestgatng
More informationPerformance Analysis and Coding Strategy of ECOC SVMs
Internatonal Journal of Grd and Dstrbuted Computng Vol.7, No. (04), pp.67-76 http://dx.do.org/0.457/jgdc.04.7..07 Performance Analyss and Codng Strategy of ECOC SVMs Zhgang Yan, and Yuanxuan Yang, School
More informationSoftware project management with GAs
Informaton Scences 177 (27) 238 241 www.elsever.com/locate/ns Software project management wth GAs Enrque Alba *, J. Francsco Chcano Unversty of Málaga, Grupo GISUM, Departamento de Lenguajes y Cencas de
More informationAn MILP model for planning of batch plants operating in a campaign-mode
An MILP model for plannng of batch plants operatng n a campagn-mode Yanna Fumero Insttuto de Desarrollo y Dseño CONICET UTN yfumero@santafe-concet.gov.ar Gabrela Corsano Insttuto de Desarrollo y Dseño
More informationSoftware Alignment for Tracking Detectors
Software Algnment for Trackng Detectors V. Blobel Insttut für Expermentalphysk, Unverstät Hamburg, Germany Abstract Trackng detectors n hgh energy physcs experments requre an accurate determnaton of a
More informationComparison of Control Strategies for Shunt Active Power Filter under Different Load Conditions
Comparson of Control Strateges for Shunt Actve Power Flter under Dfferent Load Condtons Sanjay C. Patel 1, Tushar A. Patel 2 Lecturer, Electrcal Department, Government Polytechnc, alsad, Gujarat, Inda
More informationHowHow to Find the Best Online Stock Broker
A GENERAL APPROACH FOR SECURITY MONITORING AND PREVENTIVE CONTROL OF NETWORKS WITH LARGE WIND POWER PRODUCTION Helena Vasconcelos INESC Porto hvasconcelos@nescportopt J N Fdalgo INESC Porto and FEUP jfdalgo@nescportopt
More information