On Mean Squared Error of Hierarchical Estimator


S C H E D A E   I N F O R M A T I C A E
VOLUME 20  2011

On Mean Squared Error of Hierarchical Estimator

Stanisław Brodowski
Faculty of Physics, Astronomy, and Applied Computer Science, Jagiellonian University, Reymonta 4, Kraków, Poland
e-mail:

Abstract. In this paper a new theorem about the components of the mean squared error of Hierarchical Estimator is presented. Hierarchical Estimator is a machine learning meta-algorithm that attempts to build, in an incremental and hierarchical manner, a tree of relatively simple function estimators and combine their results to achieve better accuracy than any of the individual ones. The components of the error of a node of such a tree are: a weighted mean of the error of the estimator in the node and the errors of its children, a non-positive term that decreases below 0 if the children's responses on any example differ, and a term representing the relative quality of an internal weighting function, which can be conservatively kept at 0 if needed. Guidelines for achieving good results based on the theorem are briefly discussed.

Keywords: Hierarchical Estimator, hierarchical model, regression, function approximation, error, theorem.

1. Introduction

Machine learning is one of the classical topics in computer science [1, 2]. This paper presents some theoretical findings about a machine learning solution concerned with supervised learning, called Hierarchical Estimator. That meta-algorithm, presented in [3], arranges many simple, possibly relatively inaccurate, function estimators (approximators) into a tree structure and combines their results in an attempt to obtain a more accurate one. The basic general task of the mentioned technique is to predict values of a random variable Y with possible values in Y ⊆ R^r, being presented with values of another
variable X (with possible values in X ⊆ R^p) and knowing some set (or series) of values of X paired with values of Y, called the training set D = {(x^{(k)}, y^{(k)}), k ∈ {1, ..., |D|}, ∀k : x^{(k)} ∈ X, y^{(k)} ∈ Y}. This may be done by approximating a function f : X → Y such that

Y = f(X) + ε,    (1)

where ε is some error variable with certain properties (e.g. having 0 mean). Because the joint probability P_{X,Y} (so also ε) is not available, minimizing a loss function, e.g. the squared loss over D, is usually attempted instead [4]. As mentioned above, the main task is prediction, so working on examples not present in the training set is required. If a solution works well only on the training set, but poorly on unseen examples, it is described as having low generalization. If the technique that is used is parametric, i.e. first some model is selected and then its parameters are optimized, low generalization is often the result of selecting too complicated a model [5, 6, 7].

1.1. Similar solutions

Hierarchical Estimator attempts to combine many less accurate estimators into a more accurate one, so it is loosely related to the Theory of Weak Learnability [8]. Its execution may be seen as building a problem model in an incremental manner, starting from a simple one and increasing complexity. Because some parts of it guide the creation and operation of others, it may be considered hierarchical. It creates a tree structure that is automatically adapted to the problem being learned, so its similarity to the well-known AdaBoost [9] is at most moderate. Another difference is that while the original AdaBoost sets the weight of component models once for all examples, Hierarchical Estimator assigns different weights to the experts based on the example being evaluated. This makes it more similar to the Hierarchical Mixture of Experts (see [10]), but its operation differs significantly, even when constructive algorithms like [11] are considered. For example, HME has expert nodes only in leaf nodes, while Hierarchical Estimator has them in all nodes and they all solve some subproblem of the original problem.
The outputs of internal nodes can be used both for evaluating the result and, after additional processing and possibly including other variables, for weighting the results of component estimators. This also constitutes the most significant of many differences between Hierarchical Estimator and the M5 regression trees [12]. Probably the most similar solution to Hierarchical Estimator is the Hierarchical Classifier [13], based on similar premises. Its details are strongly connected to the classification task, though, and that forced many differences [3]. Hierarchical Estimator is designed for predicting continuously-valued number or vector outputs, so its scope is different from that of the Hierarchical Classifier. The meta-algorithm nature of Hierarchical Estimator is also more explicit than in the case of the Hierarchical Classifier.
1.2. Hierarchical Estimator

1.2.1. Basic definitions

In a very general sense, Hierarchical Estimator is a function HE : X → Y that uses a tree structure where node indices come from some index set I. Let n_i be the number of children (possibly 0) of the valid tree node with index i (called, for the sake of brevity, node i). Two functions are assigned to each valid node [3]:

1. a function estimator (approximator) g_i : X → Y that solves some subproblem of the original problem; in [3] simple neural networks are used for this task,
2. a competence function C_i : {0, ..., n_i} × X → [0, 1] whose values are used as weights for the results of the children nodes and the result of the estimator in node i when the value of the estimator for a given example is calculated, as described in Eq. (2).

We assign each child a number among its siblings. P : I × N → I is the function that returns the global node index of a child based on the parent's index and that child's number, i.e. P(i, j) gives the index of the j-th child of node i.

Definition 1 (Hierarchical Estimator node response). The recursive formula for retrieving the response of some node i on the k-th example is [3]:

\tilde{g}_i(x^{(k)}) = \sum_{j=1}^{n_i} \tilde{g}_{P(i,j)}(x^{(k)}) C_i(j, x^{(k)}) + C_i(0, x^{(k)}) g_i(x^{(k)}),    (2)

where

\sum_{j=0}^{n_i} C_i(j, x^{(k)}) = 1.    (3)

Definition 2 (Hierarchical Estimator response). The Hierarchical Estimator response for a given example is the response of the root (its index denoted here as r):

HE(x) = \tilde{g}_r(x).    (4)

For a leaf, C_i(0, x^{(k)}) = 1 and \tilde{g}_i(x^{(k)}) = g_i(x^{(k)}). A more compact version of the definition arises when we identify the result of the estimator in a given node, g_i(x^{(k)}), with the result of a virtual zeroth child \tilde{g}_{P(i,0)}:

\tilde{g}_i(x^{(k)}) = \sum_{j=0}^{n_i} \tilde{g}_{P(i,j)}(x^{(k)}) C_i(j, x^{(k)}).    (5)

When aggregating the result, an example is first propagated down the tree starting from the root. Weights are proposed for a given example and each child node by
the function C_i, and only those children that achieved a non-zero value are used. This means that an example is not propagated through the whole tree, but only along certain paths and branches. The propagation along a given path stops if it reaches a node in which C_i(0, x^{(k)}) = 1, usually, but not necessarily, a leaf node. It is very important that the function C_i depends on the example being evaluated. Therefore, although for any given example the response of the Hierarchical Estimator is a weighted mean of the responses of some nodes, the whole Hierarchical Estimator is not a linear combination of the estimators in the nodes.

1.2.2. Useful terms - competence

In the discussion about Hierarchical Estimator two more definitions will be very useful [3]:

Definition 3 (Competence area). The competence area is the set of all feature vectors that a given node may possibly be required to evaluate.

Definition 4 (Competence set). The competence set contains all examples from a given set (also if that set is only known from the context of the term's use) that fall into the competence area of the node.

An example can fall into the competence set or competence area of a node if it is a valid feature vector and the node is the root, or if, for some given set S_i (a set of all possible vectors for the competence area) and the given node being the j-th child of node i, the competence function from node i is non-zero. In the latter case the competence set is designated as S_{P(i,j)} and satisfies:

S_{P(i,j)} = {(x^{(k)}, y^{(k)}) : (x^{(k)}, y^{(k)}) ∈ S_i ∧ C_i(j, x^{(k)}) > 0}.    (6)

This can also be applied to the virtual child 0.

1.2.3. Learning

The whole structure of Hierarchical Estimator is found while learning from examples, so at least a brief description of the learning algorithm is needed for a full understanding of the consequences of the theoretical findings described in this article. The procedure of learning Hierarchical Estimator on a training set D is:

1. Create the root node and make D its training set.
2. Build a function estimator (possibly simple) in the processed node (later called node i).
3. Compute E(S_i, g_i), the mean squared error or some other error measure for the given node and its competence set (which is not necessarily identical to the training set D_i). If it is smaller than some preset value ("the goal"), stop the algorithm for this branch. If, on the other hand, this error is greater than that of its parent (on the same set), stop the algorithm for this branch, but also delete this node.
4. If the solution is becoming too complex with respect to some preset parameter (usually maximum tree depth), stop the algorithm (for this branch). This condition is placed to limit the learning time.
5. Build:
(a) Training sets for the children nodes {D_{P(i,1)}, ..., D_{P(i,n_i)}} (n_i also needs to be found). This is usually done by creating a function U_i such that (x^{(k)}, y^{(k)}) ∈ D_{P(i,j)} ⟺ U_i(j, x^{(k)}, y^{(k)}) > 0. Because competence sets generally should overlap (as indicated in [3]), training sets usually also will.
(b) The competence function C_i. As these tasks are closely related, they are usually performed together [3].
6. Run this algorithm for the children of the given node, from point 2.

In [3], the creation of the competence function C_i and the division of the training set is based primarily on the responses of the estimator in the node. Usually it involves some form of fuzzy clustering, e.g. Fuzzy C-Means [14] with the cluster number selection technique described in [15]. For example, in the simplest, but not very effective form, the outputs of the estimator in the node are fuzzy-clustered, each cluster constitutes one training set for a child, and the competence function value is the membership of the given example in the given cluster. In one of the more sophisticated methods, both the outputs of the estimator in the node and the true values are clustered by means of fuzzy clustering. Then a correlation matrix is made between clusters in estimator outputs and clusters in true values. Finally, the rows of such a matrix are clustered.
The competence function is based on finding the memberships of a given example in each row, by using the memberships of the example in the response clusters, the correlation matrix and a chosen set of fuzzy operators, and then combining this information, again using fuzzy operators, with the memberships of each row in the final clusters. The training sets are found in a similar way, but information about the true values is also used. Paper [3] presents this method in detail and in two variants, as well as one other method. It should be mentioned that because the solution presented in [3] uses Artificial Neural Networks as estimators in the nodes, the data given as their input should be adequately prepared, normalized among others. This is sometimes not a trivial task [16, 17], though usually standard normalization procedures are used.
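The learning procedure above (steps 1-6) can be sketched as a recursive function. This is only an illustrative skeleton, not the implementation from [3]: the names fit_estimator, make_children, error_goal and the helper mean_squared_error are hypothetical placeholders for the neural-network estimators and fuzzy-clustering split machinery described in the paper, and the toy split used below is a crisp median split rather than fuzzy clustering.

```python
# Hypothetical skeleton of learning steps 1-6; estimator fitting and
# child-set creation are pluggable placeholders, not the paper's methods.

def mean_squared_error(examples, g):
    """E(S, g) as in the paper: mean of squared residuals over the set."""
    return sum((y - g(x)) ** 2 for x, y in examples) / len(examples)

def train_node(train_set, fit_estimator, make_children, error_goal,
               max_depth, depth=0, parent_error=float("inf")):
    """Returns a tree node as a dict, or None if the node is worse than its parent."""
    g = fit_estimator(train_set)                        # step 2
    err = mean_squared_error(train_set, g)              # step 3
    if err > parent_error:                              # worse than parent: delete node
        return None
    node = {"estimator": g, "children": []}
    if err < error_goal or depth >= max_depth:          # steps 3-4: stop this branch
        return node
    for child_set in make_children(train_set, g):       # step 5: training sets may overlap
        child = train_node(child_set, fit_estimator, make_children,
                           error_goal, max_depth, depth + 1, parent_error=err)
        if child is not None:                           # step 6: recurse into children
            node["children"].append(child)
    return node

# Toy usage: a step function learned with constant-mean estimators and
# an overlapping median split (the shared median point mimics overlap).
data = [(x, float(x > 5)) for x in range(10)]
fit = lambda S: (lambda x, m=sum(y for _, y in S) / len(S): m)

def split(S, g):
    xs = sorted(x for x, _ in S)
    med = xs[len(xs) // 2]
    left = [(x, y) for x, y in S if x <= med]
    right = [(x, y) for x, y in S if x >= med]
    ok = left and right and len(left) < len(S) and len(right) < len(S)
    return [left, right] if ok else []

tree = train_node(data, fit, split, error_goal=1e-6, max_depth=3)
```

Note how step 3's deletion rule acts here: a candidate child whose error on its own set exceeds its parent's error on that set is discarded, so the tree only keeps subtrees that locally help.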
1.2.4. Details

As can be seen from the definitions above, certain important details have to be determined separately. This concerns not only the selection of estimators in nodes (as in many other solutions, e.g. AdaBoost) but also the exact form of the competence function and the creation of training sets for successor nodes. Several versions of such details are described in [3] and their performance evaluated on several datasets. They are inspired by the theorems cited in Section 2.2 and their proofs.

2. Error Structure of Hierarchical Estimator

2.1. Preliminary Notions

In this section, several notions will be used that were not explained above, because their scope is more limited. For convenience they are grouped here. Most of them appear in a similar form in [3].

S = {(x^{(k)}, y^{(k)}), k = 1, ..., |S|} is used for a set of examples on which the estimator (or a given node) is evaluated. |S| is the size of that set. Please recall that each x^{(k)} ∈ X ⊆ R^p, y^{(k)} ∈ Y ⊆ R^r.

|S_{P(i,j)}| is the size of the set S_{P(i,j)}, the competence set of the j-th child of node i within the set S.

e((x^{(k)}, y^{(k)}), g) is the squared error of estimator g on the example (x^{(k)}, y^{(k)}):

e((x^{(k)}, y^{(k)}), g) = \sum_{l=1}^{r} (y^{(k)}_l - g(x^{(k)})_l)^2.    (7)

This notation can be shortened as in:

e_{(i,j)}(k) = e((x^{(k)}, y^{(k)}), g_{P(i,j)}),    (8)

\tilde{e}_{(i,j)}(k) = e((x^{(k)}, y^{(k)}), \tilde{g}_{P(i,j)}),    (9)

e_{(i)}(k) = e((x^{(k)}, y^{(k)}), g_i).    (10)

\eta_{i,j}(k) is a short way of denoting the difference between the target function value on the k-th example of a given set and the result of the j-th child of node i for this example:

\eta_{i,j}(k) = \tilde{g}_{P(i,j)}(x^{(k)}) - y^{(k)},  for x^{(k)} ∈ S_{P(i,j)}.    (11)
The error function can easily be created from η:

\tilde{e}_{(i,j)}(k) = \sum_{l=1}^{r} \eta_{i,j}(k)_l^2.    (12)

E(S, g) is the mean squared error of estimator g on the set S:

E(S, g) = \frac{1}{|S|} \sum_{k=1}^{|S|} e((x^{(k)}, y^{(k)}), g).    (13)

\bar{C}_i is the characteristic (indicator) function of the competence set (or area):

\bar{C}_i(j, x^{(k)}) = 1 if C_i(j, x^{(k)}) > 0, and 0 otherwise.    (14)

Note that \bar{C}_i multiplied by C_i is still C_i.

n_k is the number of such j for which C_i(j, x^{(k)}) > 0, so it is the number of children actually used on a given example (possibly including the virtual child 0). n_max is its maximum over the whole set S: n_max = max_{k : (x^{(k)}, y^{(k)}) ∈ S} n_k. n is used if n_k is constant over all examples that are considered, so the index k can be omitted.

2.2. Existing theorems about the error of Hierarchical Estimator

In [3] several facts were proved about the squared error of Hierarchical Estimator. For the purposes of this article the first of them is of most interest.

Theorem 1. For any node i in Hierarchical Estimator suppose that:

S is a competence set of node i,

for each example in the set S, n_k is constant:

∀k : (x^{(k)}, y^{(k)}) ∈ S,  \sum_{j=0}^{n_i} \bar{C}_i(j, x^{(k)}) = n,    (15)

where n > 0,
C_i fulfils

\sum_{k=1}^{|S|} \sum_{l=1}^{r} \sum_{j : C_i(j, x^{(k)}) > 0} \eta_{i,j}(k)_l^2 \, C_i(j, x^{(k)}) \le \sum_{k=1}^{|S|} \sum_{l=1}^{r} \frac{1}{n} \sum_{j : C_i(j, x^{(k)}) > 0} \eta_{i,j}(k)_l^2.    (16)

Then

E(S, \tilde{g}_i) \le E(S, g_i) - \sum_{j=1}^{n_i} \frac{|S_{P(i,j)}|}{n |S|} \left( E(S_{P(i,j)}, g_i) - E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) \right).    (17)

In other words, if we always use n children for an example (or one less, but use the estimator in the given node) and the errors achieved when the given competence function is used are no greater than if the same children were used but weighted equally, then the error is no greater than the error of the estimator diminished by the differences between its error on the competence sets of the children and the children's errors on those sets. It is not a surprising result, but one of the corollaries proved in the article [3] (Corollary 1) states that the final inequality can easily be made strict: it is enough that the used children nodes have different errors on one example. This theorem and its proof brought some more detailed information on what is needed for the solution to work properly. The assumption (15) about a constant number of used children is inconvenient (though necessary for the given form of the theorem), so a modified version of the theorem was proved, exchanging it for another one (considered weaker by the author) [3]:

Theorem 2. Consider node i and example set S. Here points (15) and (16) from Theorem 1 are replaced by:

\sum_{k=1}^{|S|} \sum_{l=1}^{r} \sum_{j : C_i(j, x^{(k)}) > 0} \eta_{i,j}(k)_l^2 \, C_i(j, x^{(k)}) \le \sum_{k=1}^{|S|} \sum_{l=1}^{r} \left( \frac{1}{n_{max}} \sum_{j : C_i(j, x^{(k)}) > 0, \, j > 0} \eta_{i,j}(k)_l^2 + \frac{n_{max} - n_k + 1}{n_{max}} \eta_{i,0}(k)_l^2 \right),    (18)

and

∀k : (x^{(k)}, y^{(k)}) ∈ S,  C_i(0, x^{(k)}) > 0.    (19)

The conclusion is then

E(S, \tilde{g}_i) \le E(S, g_i) - \sum_{j=1}^{n_i} \frac{|S_{P(i,j)}|}{n_{max} |S|} \left( E(S_{P(i,j)}, g_i) - E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) \right).    (20)
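Theorem 1's bound can be illustrated numerically in the simplest setting that satisfies its assumptions: n = 2 nodes are used for every example (the estimator in the node, i.e. the virtual child 0, plus one real child), both weighted 1/2, so the relative-quality condition (16) holds with equality and the child's competence set is the whole of S. The estimators g_node and g_child below are arbitrary toy functions, not taken from the paper:

```python
# Toy illustration (not from the paper) of Theorem 1's bound, Eq. (17),
# with n = 2 and equal weights, so condition (16) holds with equality.

examples = [(float(x), float(x) ** 0.5) for x in range(1, 6)]   # y = sqrt(x)

g_node = lambda x: 0.5 * x         # estimator g_i in the node (toy choice)
g_child = lambda x: 1.0 + 0.2 * x  # sole real child's response (toy choice)

def mse(pred):
    """Mean squared error E(S, pred) over the example set, as in Eq. (13)."""
    return sum((pred(x) - y) ** 2 for x, y in examples) / len(examples)

# Node response, Eq. (2), with both weights equal to 1/2.
combined = lambda x: 0.5 * g_node(x) + 0.5 * g_child(x)

lhs = mse(combined)                                     # E(S, g~_i)
rhs = mse(g_node) - 0.5 * (mse(g_node) - mse(g_child))  # right side of Eq. (17)
assert lhs <= rhs
```

Because the two estimators' residuals differ on every example, the strictness condition mentioned after the theorem is met and the inequality is strict here (lhs < rhs).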
The assumption (15) about a constant number of used children is replaced by the requirement that the estimator in the given (parent) node is always used (19). It may be in many cases less restricting than that of the first theorem and possibly also more technical, as we can use arbitrarily small values of the competence function for that node. The thesis changed accordingly and can be called somewhat weaker. The sketch of the proof is also in [3]; the technical details are in [18]. These two theorems laid the foundation for several corollaries, also proved in [3]. One of the most important (apart from the one mentioned above, about the strict inequality) states that if the conditions of this theorem are met and each child node gives better average results than its parent on the child's competence set, then adding nodes to the tree decreases the error on the respective set (on which the conditions are met). Unfortunately, strictly meeting the assumptions of those theorems is not easy on examples that were not used for training. However, it was not established that those are necessary conditions, just sufficient ones, so, for example, it is not perfectly clear what really happens if one or more of them are not met. That is why a somewhat more detailed analysis is attempted in this paper.

2.3. The new theorem concerning error components

The theorems from [3] mentioned several conditions sufficient for the solution to work well and pointed at several places in which the inequality in Theorem 1 might be made strict, but did not formally answer the question about the performance of the solution when not all conditions are perfectly met, or how large the difference can be. The theorem presented below tries to formally shed some light on this. As Theorem 1 was the basic one, the new theorem is a modification of that one. Below is additional notation that was not needed for the previous theorems, but is necessary now.

τ is the notation corresponding to the assumption given in (16), the relative quality condition on the function C_i:
\tau = \frac{1}{|S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} \tau_{kl},

where \tau_{kl} is just the difference between the error on example k on coordinate l when the actual competence function C_i is used, and the error in the case when the same estimators would be used for the evaluation of that example (as \bar{C}_i is non-zero if and only if C_i is non-zero) but the results were weighted equally:

\tau_{kl} = \left( \sum_{j=0}^{n_i} \eta_{i,j}(k)_l \, C_i(j, x^{(k)}) \right)^2 - \left( \sum_{j=0}^{n_i} \eta_{i,j}(k)_l \, \frac{\bar{C}_i(j, x^{(k)})}{n} \right)^2.    (21)
The notation δ is used to describe one of the error components; its full meaning will be best explained during the proof:

\delta = \frac{1}{|S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} \delta_{kl},

\delta_{kl} = \frac{\left( \sum_{j=0}^{n_i} \eta_{i,j}(k)_l \, \bar{C}_i(j, x^{(k)}) \right)^2}{\sum_{j=0}^{n_i} \bar{C}_i(j, x^{(k)})} - \sum_{j=0}^{n_i} \eta_{i,j}(k)_l^2 \, \bar{C}_i(j, x^{(k)}).    (22)

According to the Cauchy-Schwarz inequality, \delta_{kl} is never positive. The new theorem is:

Theorem 3. For any node i in Hierarchical Estimator suppose that:

S is a competence set of node i,

as in Theorem 1, for each example in the set S, n_k is constant:

∀k : (x^{(k)}, y^{(k)}) ∈ S,  \sum_{j=0}^{n_i} \bar{C}_i(j, x^{(k)}) = n,    (23)

where n > 0.

Then

E(S, \tilde{g}_i) = \frac{1}{n |S|} \sum_{j=0}^{n_i} |S_{P(i,j)}| \, E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) + \tau + \frac{1}{n} \delta.    (24)

The first term is not surprising: a mean of the errors of the children weighted by the sizes of their competence sets within the main set. But there are two more: one that is never positive (δ; it is usually negative) and another one that corresponds to the quality of the competence function C_i relative to a function that chooses the same estimators for each example, but weights them equally (τ). This one can quite easily be kept at 0.

Proof. The proof is analogical to that of Theorem 1. First, we take the squared error definitions (including Eq. (7) and (13)):

E(S, \tilde{g}_i) = \frac{1}{|S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} \left( \tilde{g}_i(x^{(k)})_l - y^{(k)}_l \right)^2,

and apply the main equation for the response (5) to them:

E(S, \tilde{g}_i) = \frac{1}{|S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} \left( \sum_{j=0}^{n_i} \tilde{g}_{P(i,j)}(x^{(k)})_l \, C_i(j, x^{(k)}) - y^{(k)}_l \right)^2.
As the sum of C_i(j, x^{(k)}) by definition (Eq. (3)) equals 1 for each example, we can move y^{(k)}_l inside the sum and then collapse with the convenient notation η (see Eq. (11)):

E(S, \tilde{g}_i) = \frac{1}{|S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} \left( \sum_{j=0}^{n_i} \left( \tilde{g}_{P(i,j)}(x^{(k)})_l - y^{(k)}_l \right) C_i(j, x^{(k)}) \right)^2 = \frac{1}{|S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} \left( \sum_{j=0}^{n_i} \eta_{i,j}(k)_l \, C_i(j, x^{(k)}) \right)^2.

We can extract the term τ using its definition, Eq. (21):

E(S, \tilde{g}_i) = \frac{1}{|S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} \left( \left( \sum_{j=0}^{n_i} \eta_{i,j}(k)_l \, \frac{\bar{C}_i(j, x^{(k)})}{n} \right)^2 + \tau_{kl} \right) = \frac{1}{|S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} \left( \sum_{j=0}^{n_i} \eta_{i,j}(k)_l \, \frac{\bar{C}_i(j, x^{(k)})}{n} \right)^2 + \tau.    (25)

Because the values of \bar{C}_i are 0 or 1, raising them to any power greater than 0 does not change them:

E(S, \tilde{g}_i) = \frac{1}{n^2 |S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} \left( \sum_{j=0}^{n_i} \eta_{i,j}(k)_l \, \bar{C}_i(j, x^{(k)})^{1/2} \, \bar{C}_i(j, x^{(k)})^{1/2} \right)^2 + \tau.    (26)

At this point we can apply the notation δ (22):

E(S, \tilde{g}_i) = \frac{1}{n^2 |S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} \left( \sum_{j=0}^{n_i} \bar{C}_i(j, x^{(k)}) \right) \left( \sum_{j=0}^{n_i} \eta_{i,j}(k)_l^2 \, \bar{C}_i(j, x^{(k)}) + \delta_{kl} \right) + \tau.    (27)

The fact that, according to the Cauchy-Schwarz inequality, \delta_{kl} is never positive is quite important here. Assumption (23) requires that \sum_{j=0}^{n_i} \bar{C}_i(j, x^{(k)}) = n. So we can write:

E(S, \tilde{g}_i) = \frac{1}{n^2 |S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} n \left( \sum_{j=0}^{n_i} \eta_{i,j}(k)_l^2 \, \bar{C}_i(j, x^{(k)}) + \delta_{kl} \right) + \tau,    (28)
then extract δ, concurrently simplifying n/n^2 to 1/n:

E(S, \tilde{g}_i) = \frac{1}{n |S|} \sum_{k=1}^{|S|} \sum_{l=1}^{r} \sum_{j=0}^{n_i} \eta_{i,j}(k)_l^2 \, \bar{C}_i(j, x^{(k)}) + \tau + \frac{1}{n} \delta,    (29)

then reorder sums and factors:

E(S, \tilde{g}_i) = \frac{1}{n |S|} \sum_{j=0}^{n_i} \sum_{k=1}^{|S|} \bar{C}_i(j, x^{(k)}) \sum_{l=1}^{r} \eta_{i,j}(k)_l^2 + \tau + \frac{1}{n} \delta.

In this form it is easy to apply the definition of the squared error (9) and the observation about η (12), remembering that raising \bar{C}_i to a positive power does not change it:

E(S, \tilde{g}_i) = \frac{1}{n |S|} \sum_{j=0}^{n_i} \sum_{k=1}^{|S|} \bar{C}_i(j, x^{(k)}) \, \tilde{e}_{(i,j)}(k) + \tau + \frac{1}{n} \delta,    (30)

and use the fact that \bar{C}_i is the characteristic function of S_{P(i,j)} to apply Eq. (13):

E(S, \tilde{g}_i) = \frac{1}{n |S|} \sum_{j=0}^{n_i} |S_{P(i,j)}| \, E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) + \tau + \frac{1}{n} \delta,

which ends the proof.

Analogically to Corollary 1 in [3], we can show that δ = 0 is in fact a rather special case, so in most cases it is negative.

Corollary 1 (Of δ). \delta_{kl} is zero only if all the errors of the used approximators are the same:

\delta_{kl} = 0 ⟹ ∀j, o : \bar{C}_i(j, x^{(k)}) > 0 ∧ \bar{C}_i(o, x^{(k)}) > 0 ⟹ \eta_{i,o}(k)_l = \eta_{i,j}(k)_l.

Proof. Because the non-positiveness of \delta_{kl} (22) comes from the Cauchy-Schwarz theorem, it could only be 0 if the two vectors to which it is applied were linearly dependent. In the case of two real, non-null vectors, one of them would have to be identical to the second one, just scaled by some number. This applies to the vectors (\bar{C}_i(j, x^{(k)})^{1/2})_{j=0}^{n_i} and (\eta_{i,j}(k)_l \, \bar{C}_i(j, x^{(k)})^{1/2})_{j=0}^{n_i}, so each \eta_{i,j}(k)_l would have to have the same value, which is the thesis of the corollary.

Change of error in the node during the addition of a subtree. For assessing the plausibility of Hierarchical Estimator, the following observation, based on Theorem 3, may be of use. If the assumptions of Theorem 3 hold, then:

E(S, \tilde{g}_i) - E(S, g_i) = - \sum_{j=0}^{n_i} \frac{|S_{P(i,j)}|}{n |S|} \left( E(S_{P(i,j)}, g_i) - E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) \right) + \tau + \frac{1}{n} \delta.    (31)
That equation may be used to describe the difference of the error in the node with (E(S, \tilde{g}_i)) and without (E(S, g_i)) the subtree rooted in it, in a manner similar to the thesis of Theorem 3. One of the components of that difference ((1/n) δ) is never positive (see Eq. (22)), and is negative if only the children's errors differ on some examples, as indicated by Corollary 1. Another one (τ, the relative quality of the competence function, Eq. (21)) can be kept at 0 if needed. If the whole difference is negative, the existence of the subtree decreases the solution's error on the given set. Of course, for this to happen, the remaining component, a mean of the differences between the mean errors of the estimator in the node and the mean errors of the children of the node (with their subtrees, if they have them) on the children's competence sets, should not cause an increase greater than the decrease caused by τ + (1/n) δ. On the training set, this increase is guaranteed to be non-positive (see point 3 of the learning algorithm in Sect. 1.2.3). Keeping it low on unknown examples is one of the main concerns when creating competence functions and dividing the training set [3].

The proof begins with the addition of the term E(S, g_i) to both sides:

E(S, \tilde{g}_i) = E(S, g_i) - \sum_{j=0}^{n_i} \frac{|S_{P(i,j)}|}{n |S|} \left( E(S_{P(i,j)}, g_i) - E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) \right) + \tau + \frac{1}{n} \delta.

Then we can transform the left side according to Theorem 3 (Eq. (24)), achieving:

\frac{1}{n |S|} \sum_{j=0}^{n_i} |S_{P(i,j)}| \, E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) + \tau + \frac{1}{n} \delta = E(S, g_i) - \sum_{j=0}^{n_i} \frac{|S_{P(i,j)}|}{n |S|} \left( E(S_{P(i,j)}, g_i) - E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) \right) + \tau + \frac{1}{n} \delta.    (32)

Next we can subtract the term τ + (1/n) δ from both sides and arrange the sums differently on the right side:

\frac{1}{n |S|} \sum_{j=0}^{n_i} |S_{P(i,j)}| \, E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) = \frac{1}{n |S|} \sum_{j=0}^{n_i} |S_{P(i,j)}| \left( E(S_{P(i,j)}, g_i) - \left( E(S_{P(i,j)}, g_i) - E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) \right) \right) = \frac{1}{n |S|} \sum_{j=0}^{n_i} |S_{P(i,j)}| \, E(S_{P(i,j)}, g_i) - \sum_{j=0}^{n_i} \frac{|S_{P(i,j)}|}{n |S|} \left( E(S_{P(i,j)}, g_i) - E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) \right).

We have just extracted another term that is on the right side of (32), namely - \sum_{j=0}^{n_i} \frac{|S_{P(i,j)}|}{n |S|} ( E(S_{P(i,j)}, g_i) - E(S_{P(i,j)}, \tilde{g}_{P(i,j)}) ), so we can cancel it out of the equation, which then achieves the form:

\frac{1}{n |S|} \sum_{j=0}^{n_i} |S_{P(i,j)}| \, E(S_{P(i,j)}, g_i) = E(S, g_i).

Again, we will transform the left side. As \bar{C}_i is the characteristic function of S_{P(i,j)}, we can expand the mean squared error using definitions (10) and (13), then rearrange the sums:

\frac{1}{n |S|} \sum_{j=0}^{n_i} |S_{P(i,j)}| \, E(S_{P(i,j)}, g_i) = \frac{1}{n |S|} \sum_{j=0}^{n_i} \sum_{k=1}^{|S|} e_{(i)}(k) \, \bar{C}_i(j, x^{(k)}) = \frac{1}{n |S|} \sum_{k=1}^{|S|} e_{(i)}(k) \sum_{j=0}^{n_i} \bar{C}_i(j, x^{(k)}).

Because assumption (23), \sum_{j=0}^{n_i} \bar{C}_i(j, x^{(k)}) = n, still holds, we may use the definition of the mean squared error (13) and get

\frac{1}{n |S|} \sum_{k=1}^{|S|} e_{(i)}(k) \, n = \frac{1}{|S|} \sum_{k=1}^{|S|} e_{(i)}(k) = E(S, g_i).

So Equation (31) is true.

The change of error during tree growing. The last observation will be described informally here, but an analogical corollary with a more formal proof can be found in [3] (Corollaries 5 and 6). It concerns the change of the error of the whole tree when a new subtree is added at a given node i. Obviously, in such a case E(S, \tilde{g}_i) changes from E(S, g_i) to a different value, as described by Eq. (31). This causes a change in one of the E(S_{P(u,j)}, \tilde{g}_{P(u,j)}) of its parent u, proportionally to the size of the competence set. The same thing happens one level up, and the change is propagated to the root and the whole estimator.

3. Discussion

The theorem proved in this article specifically gives the components of the mean squared error of Hierarchical Estimator:

1. The errors of the estimators in nodes, E(S_i, g_i), both in leaves (where they are equal to E(S_i, \tilde{g}_i)) and internal nodes.
2. The relative quality of the competence function, τ. This quality is measured with respect to the reference function that selects the same children as the assessed one, but weights them equally (and has, by definition, τ = 0).
3. δ, which is never positive and is negative if only the children's results differ on an example, so it usually reduces the error.

The theorem requires the number of used estimators in a given node to be constant. This can easily be forced by always using the n estimators that are considered best and possibly giving some of them very low weights. However, this can influence the term τ, so developing a theorem lifting this requirement seems to be urgent. A possible way to do that may be to reuse the technique from Theorem 2 presented in [3].

Perhaps the most important conclusion that can be drawn from the theoretical considerations above, especially Theorem 3 and Corollary 1, is that the mean squared error of the whole will be lower than the weighted mean of the errors of the involved child nodes and the estimator in the node if more than one of them is used and they have non-identical errors. Though a similar conclusion may be drawn from the theorems in [3], here it is described a bit more precisely. This decrease in error can be reinforced if the competence function is able to assign greater weights to the children that give lower errors, but it is not necessary.

The improvement over the theoretical basis from [3] allows drawing the following conclusion, stronger than before. According to the observation from the end of the previous section and the other theorems, adding a subtree to a node in an existing tree can lower the mean squared error of the whole Hierarchical Estimator even if we are not able to assure that all children nodes have lower errors on their competence sets than their parent, or that the competence function offers a gain over the reference function (τ close to 0). It is just enough that the loss on them does not exceed the gain from δ. The theorems proved in [3] did not allow stating it so clearly.
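The decomposition of Theorem 3 (Eq. (24)) can also be checked numerically on a toy node. The sketch below is the author's own illustration, not code from the paper: it assumes one node with two children plus the virtual child 0, all competent on every example (so n = 3 and every competence set equals S, with r = 1); the predictors and weights are arbitrary toy choices.

```python
# Numerical check of Eq. (24): E(S, g~_i) equals the size-weighted mean of
# children's errors plus tau (Eq. (21)) plus delta/n (Eq. (22)).
# All three "children" (the node's estimator as child 0 and two real children)
# are competent on every example, so C-bar = 1 for every j and n = 3.

def decomposition_check(examples, predictors, weights):
    """examples: list of (x, y); predictors: g_j(x); weights: C(j, x), summing to 1."""
    n = len(predictors)                 # children used per example (constant)
    S = len(examples)
    E_direct = tau = delta = 0.0
    child_sse = [0.0] * n               # per-child sums of squared errors
    for x, y in examples:
        eta = [g(x) - y for g in predictors]            # residuals eta_{i,j}(k)
        C = [weights(j, x) for j in range(n)]
        weighted = sum(e * c for e, c in zip(eta, C))   # response residual, Eq. (2)
        equal = sum(eta) / n                            # equal-weight combination
        E_direct += weighted ** 2
        tau += weighted ** 2 - equal ** 2                       # Eq. (21)
        delta += sum(eta) ** 2 / n - sum(e * e for e in eta)    # Eq. (22)
        for j in range(n):
            child_sse[j] += eta[j] ** 2
    E_direct /= S
    tau /= S
    delta /= S
    rhs = sum(child_sse) / (n * S) + tau + delta / n    # right side of Eq. (24)
    return E_direct, rhs

examples = [(0.0, 1.0), (1.0, 2.0), (2.0, 0.5)]
preds = [lambda x: 0.8, lambda x: x, lambda x: 1.5]     # toy child 0, 1, 2
w = lambda j, x: [0.5, 0.3, 0.2][j]                     # weights sum to 1
lhs, rhs = decomposition_check(examples, preds, w)
assert abs(lhs - rhs) < 1e-12
```

Since every competence set equals S here, the size-weighted mean of children's errors reduces to the plain average of their mean squared errors; the check confirms that the two error components τ and δ account exactly for the rest.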
Such a conclusion is significant because it is generally not easy to guarantee that a child node has a lower error on examples that were not available during training. Mostly because it is a difficult task for a competence function to assign the examples to the right estimators, i.e. the ones that would make low errors on them. Failing to do that increases the errors of the approximators that actually received the example. Another, though maybe easier to avoid, problem is that in a given node there may not be any function estimators (neither in the children nor the approximator in the node) that would perform well on a given example, because of e.g. generalization problems.

Based on these conclusions, one may try to formulate practical guidelines for constructing detailed solutions, in a manner similar to [3]. For example:

1. It would be good if the competence area represented a truly smaller and somewhat separated problem, i.e. if the child were able to achieve greater accuracy without a significant threat of overfitting, increased learning time or the competence function assigning wrong examples.
2. An example should be evaluated by more than one child (possibly including the virtual one, the estimator in the given node) so that δ can be negative.
3. It is better if the children have errors different from each other on the given example, rather than similar, to make δ even lower.
4. Choosing the right children by the competence function seems to be a more important task than assigning them exact weights, because the solution can work well also if τ is 0, i.e. all chosen children are weighted equally. Still, a negative average τ can decrease the error.

Unsurprisingly, those guidelines are very similar to those from [3]. Some of them are approximated in [3] as the requirement that examples within one competence area should be similar (guideline 1) while training sets should be rather dissimilar (guidelines 2 and 3), and further considerations about what similarity measure to use follow. An important trait of all the error components found in the theorem described in this article is that they can be directly measured during training and validating, so it is possible to measure where the error comes from, at least to some degree. Refinements of the solution could even automatically use such measures to improve the solution's performance.

4. References

[1] Bishop C.; Pattern Recognition and Machine Learning, Springer, Berlin, Heidelberg, New York, 2006.
[2] Hand D., Mannila H., Smyth P.; Principles of Data Mining, MIT Press, 2001.
[3] Brodowski S., Podolak I. T.; Hierarchical Estimator, Expert Systems with Applications, 38(10), 2011.
[4] Hastie T., Tibshirani R., Friedman J.; The Elements of Statistical Learning, Springer, Berlin, Heidelberg, New York, 2001.
[5] Russell S. J., Norvig P.; Artificial Intelligence: A Modern Approach, Pearson Education, 2003.
[6] Cristianini N., Shawe-Taylor J.; Support Vector Machines and Other Kernel-Based Learning Methods, Cambridge University Press, 2000.
[7] Scholkopf B., Smola A.; Learning with Kernels, MIT Press, Cambridge, 2002.
[8] Schapire R. E.; The Strength of Weak Learnability, Machine Learning, 5(2), 1990.
[9] Freund Y., Schapire R.; A decision-theoretic generalization of on-line learning and an application to boosting, Journal of Computer and System Sciences, 55, 1997.
[10] Jordan M. I., Jacobs R.
A.; Hierarchical mixtures of experts and the EM algorithm, Neural Computation, 1994.
[11] Saito K., Nakano R.; A constructive learning algorithm for an HME, IEEE International Conference on Neural Networks, 3, 1996.
[12] Quinlan J. R.; Learning with continuous classes, Proceedings of the 5th Australian Conference on Artificial Intelligence, 1992.
[13] Podolak I. T.; Hierarchical classifier with overlapping class groups, Expert Systems with Applications, 34(1), 2008.
[14] Pal N., Bezdek J.; On cluster validity for the fuzzy c-means model, IEEE Transactions on Fuzzy Systems, 3(3), 1995.
[15] Brodowski S.; A Validity Criterion for Fuzzy Clustering, in: Jedrzejowicz P., Nguyen N. T., Hoang K. (eds.), Computational Collective Intelligence, ICCCI 2011, Springer, Berlin, Heidelberg, 2011.
[16] Bielecki A., Bielecka M., Chmielowiec A.; Input Signals Normalization in Kohonen Neural Networks, in: Rutkowski L., Tadeusiewicz R., Zadeh L., Zurada J. (eds.), Artificial Intelligence and Soft Computing, ICAISC 2008, Springer, Berlin, Heidelberg, 2008.
[17] Barszcz T., Bielecka M., Bielecki A., Wójcik M.; Wind turbines states classification by a fuzzy-ART neural network with a stereographic projection as a signal normalization, Proceedings of the 10th International Conference on Adaptive and Natural Computing Algorithms, 2011.
[18] Brodowski S.; Adaptujący się hierarchiczny aproksymator (Adaptive hierarchical approximator), Master's thesis, Jagiellonian University, 2007.

Received May 9, 2010
Aryabhata s Root Extracton Methods Abhshek Parakh Lousana State Unversty Aug 1 st 1 Introducton Ths artcle presents an analyss of the root extracton algorthms of Aryabhata gven n hs book Āryabhatīya [1,
More informationFeature selection for intrusion detection. Slobodan Petrović NISlab, Gjøvik University College
Feature selecton for ntruson detecton Slobodan Petrovć NISlab, Gjøvk Unversty College Contents The feature selecton problem Intruson detecton Traffc features relevant for IDS The CFS measure The mrmr measure
More informationBERNSTEIN POLYNOMIALS
OnLne Geometrc Modelng Notes BERNSTEIN POLYNOMIALS Kenneth I. Joy Vsualzaton and Graphcs Research Group Department of Computer Scence Unversty of Calforna, Davs Overvew Polynomals are ncredbly useful
More informationQuestions that we may have about the variables
Antono Olmos, 01 Multple Regresson Problem: we want to determne the effect of Desre for control, Famly support, Number of frends, and Score on the BDI test on Perceved Support of Latno women. Dependent
More information1 Approximation Algorithms
CME 305: Dscrete Mathematcs and Algorthms 1 Approxmaton Algorthms In lght of the apparent ntractablty of the problems we beleve not to le n P, t makes sense to pursue deas other than complete solutons
More informationCalculation of Sampling Weights
Perre Foy Statstcs Canada 4 Calculaton of Samplng Weghts 4.1 OVERVIEW The basc sample desgn used n TIMSS Populatons 1 and 2 was a twostage stratfed cluster desgn. 1 The frst stage conssted of a sample
More informationStudy on CET4 Marks in China s Graded English Teaching
Study on CET4 Marks n Chna s Graded Englsh Teachng CHE We College of Foregn Studes, Shandong Insttute of Busness and Technology, P.R.Chna, 264005 Abstract: Ths paper deploys Logt model, and decomposes
More informationLossless Data Compression
Lossless Data Compresson Lecture : Unquely Decodable and Instantaneous Codes Sam Rowes September 5, 005 Let s focus on the lossless data compresson problem for now, and not worry about nosy channel codng
More informationJ. Parallel Distrib. Comput.
J. Parallel Dstrb. Comput. 71 (2011) 62 76 Contents lsts avalable at ScenceDrect J. Parallel Dstrb. Comput. journal homepage: www.elsever.com/locate/jpdc Optmzng server placement n dstrbuted systems n
More informationLuby s Alg. for Maximal Independent Sets using Pairwise Independence
Lecture Notes for Randomzed Algorthms Luby s Alg. for Maxmal Independent Sets usng Parwse Independence Last Updated by Erc Vgoda on February, 006 8. Maxmal Independent Sets For a graph G = (V, E), an ndependent
More informationCalculating the high frequency transmission line parameters of power cables
< ' Calculatng the hgh frequency transmsson lne parameters of power cables Authors: Dr. John Dcknson, Laboratory Servces Manager, N 0 RW E B Communcatons Mr. Peter J. Ncholson, Project Assgnment Manager,
More informationPSYCHOLOGICAL RESEARCH (PYC 304C) Lecture 12
14 The Chsquared dstrbuton PSYCHOLOGICAL RESEARCH (PYC 304C) Lecture 1 If a normal varable X, havng mean µ and varance σ, s standardsed, the new varable Z has a mean 0 and varance 1. When ths standardsed
More information8.5 UNITARY AND HERMITIAN MATRICES. The conjugate transpose of a complex matrix A, denoted by A*, is given by
6 CHAPTER 8 COMPLEX VECTOR SPACES 5. Fnd the kernel of the lnear transformaton gven n Exercse 5. In Exercses 55 and 56, fnd the mage of v, for the ndcated composton, where and are gven by the followng
More informationIdentifying Workloads in Mixed Applications
, pp.395400 http://dx.do.org/0.4257/astl.203.29.8 Identfyng Workloads n Mxed Applcatons Jeong Seok Oh, Hyo Jung Bang, Yong Do Cho, Insttute of Gas Safety R&D, Korea Gas Safety Corporaton, ShghungSh,
More informationCS 2750 Machine Learning. Lecture 3. Density estimation. CS 2750 Machine Learning. Announcements
Lecture 3 Densty estmaton Mlos Hauskrecht mlos@cs.ptt.edu 5329 Sennott Square Next lecture: Matlab tutoral Announcements Rules for attendng the class: Regstered for credt Regstered for audt (only f there
More informationMAPP. MERIS level 3 cloud and water vapour products. Issue: 1. Revision: 0. Date: 9.12.1998. Function Name Organisation Signature Date
Ttel: Project: Doc. No.: MERIS level 3 cloud and water vapour products MAPP MAPPATBDClWVL3 Issue: 1 Revson: 0 Date: 9.12.1998 Functon Name Organsaton Sgnature Date Author: Bennartz FUB Preusker FUB Schüller
More informationAn InterestOriented Network Evolution Mechanism for Online Communities
An InterestOrented Network Evoluton Mechansm for Onlne Communtes Cahong Sun and Xaopng Yang School of Informaton, Renmn Unversty of Chna, Bejng 100872, P.R. Chna {chsun,yang}@ruc.edu.cn Abstract. Onlne
More informationCommunication Networks II Contents
8 / 1  Communcaton Networs II (Görg)  www.comnets.unbremen.de Communcaton Networs II Contents 1 Fundamentals of probablty theory 2 Traffc n communcaton networs 3 Stochastc & Marovan Processes (SP
More informationSupport Vector Machines
Support Vector Machnes Max Wellng Department of Computer Scence Unversty of Toronto 10 Kng s College Road Toronto, M5S 3G5 Canada wellng@cs.toronto.edu Abstract Ths s a note to explan support vector machnes.
More informationLatent Class Regression. Statistics for Psychosocial Research II: Structural Models December 4 and 6, 2006
Latent Class Regresson Statstcs for Psychosocal Research II: Structural Models December 4 and 6, 2006 Latent Class Regresson (LCR) What s t and when do we use t? Recall the standard latent class model
More informationA Novel Methodology of Working Capital Management for Large. Public Constructions by Using Fuzzy Scurve Regression
Novel Methodology of Workng Captal Management for Large Publc Constructons by Usng Fuzzy Scurve Regresson ChengWu Chen, Morrs H. L. Wang and TngYa Hseh Department of Cvl Engneerng, Natonal Central Unversty,
More information1 Example 1: Axisaligned rectangles
COS 511: Theoretcal Machne Learnng Lecturer: Rob Schapre Lecture # 6 Scrbe: Aaron Schld February 21, 2013 Last class, we dscussed an analogue for Occam s Razor for nfnte hypothess spaces that, n conjuncton
More informationForecasting the Demand of Emergency Supplies: Based on the CBR Theory and BP Neural Network
700 Proceedngs of the 8th Internatonal Conference on Innovaton & Management Forecastng the Demand of Emergency Supples: Based on the CBR Theory and BP Neural Network Fu Deqang, Lu Yun, L Changbng School
More informationGraph Theory and Cayley s Formula
Graph Theory and Cayley s Formula Chad Casarotto August 10, 2006 Contents 1 Introducton 1 2 Bascs and Defntons 1 Cayley s Formula 4 4 Prüfer Encodng A Forest of Trees 7 1 Introducton In ths paper, I wll
More informationOnline Learning from Experts: Minimax Regret
E0 370 tatstcal Learnng Theory Lecture 2 Nov 24, 20) Onlne Learnng from Experts: Mn Regret Lecturer: hvan garwal crbe: Nkhl Vdhan Introducton In the last three lectures we have been dscussng the onlne
More informationEE201 Circuit Theory I 2015 Spring. Dr. Yılmaz KALKAN
EE201 Crcut Theory I 2015 Sprng Dr. Yılmaz KALKAN 1. Basc Concepts (Chapter 1 of Nlsson  3 Hrs.) Introducton, Current and Voltage, Power and Energy 2. Basc Laws (Chapter 2&3 of Nlsson  6 Hrs.) Voltage
More informationHow Sets of Coherent Probabilities May Serve as Models for Degrees of Incoherence
1 st Internatonal Symposum on Imprecse Probabltes and Ther Applcatons, Ghent, Belgum, 29 June 2 July 1999 How Sets of Coherent Probabltes May Serve as Models for Degrees of Incoherence Mar J. Schervsh
More informationTrafficlight a stress test for life insurance provisions
MEMORANDUM Date 006097 Authors Bengt von Bahr, Göran Ronge Traffclght a stress test for lfe nsurance provsons Fnansnspetonen P.O. Box 6750 SE113 85 Stocholm [Sveavägen 167] Tel +46 8 787 80 00 Fax
More informationSIX WAYS TO SOLVE A SIMPLE PROBLEM: FITTING A STRAIGHT LINE TO MEASUREMENT DATA
SIX WAYS TO SOLVE A SIMPLE PROBLEM: FITTING A STRAIGHT LINE TO MEASUREMENT DATA E. LAGENDIJK Department of Appled Physcs, Delft Unversty of Technology Lorentzweg 1, 68 CJ, The Netherlands Emal: e.lagendjk@tnw.tudelft.nl
More informationECE544NA Final Project: Robust Machine Learning Hardware via Classifier Ensemble
1 ECE544NA Fnal Project: Robust Machne Learnng Hardware va Classfer Ensemble Sa Zhang, szhang12@llnos.edu Dept. of Electr. & Comput. Eng., Unv. of Illnos at UrbanaChampagn, Urbana, IL, USA Abstract In
More informationAnswer: A). There is a flatter IS curve in the high MPC economy. Original LM LM after increase in M. IS curve for low MPC economy
4.02 Quz Solutons Fall 2004 MultpleChoce Questons (30/00 ponts) Please, crcle the correct answer for each of the followng 0 multplechoce questons. For each queston, only one of the answers s correct.
More informationCHOLESTEROL REFERENCE METHOD LABORATORY NETWORK. Sample Stability Protocol
CHOLESTEROL REFERENCE METHOD LABORATORY NETWORK Sample Stablty Protocol Background The Cholesterol Reference Method Laboratory Network (CRMLN) developed certfcaton protocols for total cholesterol, HDL
More informationDEFINING %COMPLETE IN MICROSOFT PROJECT
CelersSystems DEFINING %COMPLETE IN MICROSOFT PROJECT PREPARED BY James E Aksel, PMP, PMISP, MVP For Addtonal Informaton about Earned Value Management Systems and reportng, please contact: CelersSystems,
More informationImplementation of Deutsch's Algorithm Using Mathcad
Implementaton of Deutsch's Algorthm Usng Mathcad Frank Roux The followng s a Mathcad mplementaton of Davd Deutsch's quantum computer prototype as presented on pages  n "Machnes, Logc and Quantum Physcs"
More informationMultivariate EWMA Control Chart
Multvarate EWMA Control Chart Summary The Multvarate EWMA Control Chart procedure creates control charts for two or more numerc varables. Examnng the varables n a multvarate sense s extremely mportant
More informationA Computer Technique for Solving LP Problems with Bounded Variables
Dhaka Unv. J. Sc. 60(2): 163168, 2012 (July) A Computer Technque for Solvng LP Problems wth Bounded Varables S. M. Atqur Rahman Chowdhury * and Sanwar Uddn Ahmad Department of Mathematcs; Unversty of
More informationMining Feature Importance: Applying Evolutionary Algorithms within a Webbased Educational System
Mnng Feature Importance: Applyng Evolutonary Algorthms wthn a Webbased Educatonal System Behrouz MINAEIBIDGOLI 1, and Gerd KORTEMEYER 2, and Wllam F. PUNCH 1 1 Genetc Algorthms Research and Applcatons
More informationAssessing Student Learning Through Keyword Density Analysis of Online Class Messages
Assessng Student Learnng Through Keyword Densty Analyss of Onlne Class Messages Xn Chen New Jersey Insttute of Technology xc7@njt.edu Brook Wu New Jersey Insttute of Technology wu@njt.edu ABSTRACT Ths
More information9.1 The Cumulative Sum Control Chart
Learnng Objectves 9.1 The Cumulatve Sum Control Chart 9.1.1 Basc Prncples: Cusum Control Chart for Montorng the Process Mean If s the target for the process mean, then the cumulatve sum control chart s
More informationMultiplePeriod Attribution: Residuals and Compounding
MultplePerod Attrbuton: Resduals and Compoundng Our revewer gave these authors full marks for dealng wth an ssue that performance measurers and vendors often regard as propretary nformaton. In 1994, Dens
More informationNew bounds in BalogSzemerédiGowers theorem
New bounds n BalogSzemerédGowers theorem By Tomasz Schoen Abstract We prove, n partcular, that every fnte subset A of an abelan group wth the addtve energy κ A 3 contans a set A such that A κ A and A
More informationTraffic State Estimation in the Traffic Management Center of Berlin
Traffc State Estmaton n the Traffc Management Center of Berln Authors: Peter Vortsch, PTV AG, Stumpfstrasse, D763 Karlsruhe, Germany phone ++49/72/965/35, emal peter.vortsch@ptv.de Peter Möhl, PTV AG,
More informationAnts Can Schedule Software Projects
Ants Can Schedule Software Proects Broderck Crawford 1,2, Rcardo Soto 1,3, Frankln Johnson 4, and Erc Monfroy 5 1 Pontfca Unversdad Católca de Valparaíso, Chle FrstName.Name@ucv.cl 2 Unversdad Fns Terrae,
More informationBayesian Network Based Causal Relationship Identification and Funding Success Prediction in P2P Lending
Proceedngs of 2012 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 25 (2012) (2012) IACSIT Press, Sngapore Bayesan Network Based Causal Relatonshp Identfcaton and Fundng Success
More informationNaive Rule Induction for Text Classification based on Keyphrases
Nave Rule Inducton for Text Classfcaton based on Keyphrases Nktas N. Karankolas & Chrstos Skourlas Department of Informatcs, Technologcal Educatonal Insttute of Athens, Greece. Abstract In ths paper,
More informationInterIng 2007. INTERDISCIPLINARITY IN ENGINEERING SCIENTIFIC INTERNATIONAL CONFERENCE, TG. MUREŞ ROMÂNIA, 1516 November 2007.
InterIng 2007 INTERDISCIPLINARITY IN ENGINEERING SCIENTIFIC INTERNATIONAL CONFERENCE, TG. MUREŞ ROMÂNIA, 1516 November 2007. UNCERTAINTY REGION SIMULATION FOR A SERIAL ROBOT STRUCTURE MARIUS SEBASTIAN
More informationCan Auto Liability Insurance Purchases Signal Risk Attitude?
Internatonal Journal of Busness and Economcs, 2011, Vol. 10, No. 2, 159164 Can Auto Lablty Insurance Purchases Sgnal Rsk Atttude? ChuShu L Department of Internatonal Busness, Asa Unversty, Tawan ShengChang
More informationStatistical Methods to Develop Rating Models
Statstcal Methods to Develop Ratng Models [Evelyn Hayden and Danel Porath, Österrechsche Natonalbank and Unversty of Appled Scences at Manz] Source: The Basel II Rsk Parameters Estmaton, Valdaton, and
More informationSingle and multiple stage classifiers implementing logistic discrimination
Sngle and multple stage classfers mplementng logstc dscrmnaton Hélo Radke Bttencourt 1 Dens Alter de Olvera Moraes 2 Vctor Haertel 2 1 Pontfíca Unversdade Católca do Ro Grande do Sul  PUCRS Av. Ipranga,
More informationFace Verification Problem. Face Recognition Problem. Application: Access Control. Biometric Authentication. Face Verification (1:1 matching)
Face Recognton Problem Face Verfcaton Problem Face Verfcaton (1:1 matchng) Querymage face query Face Recognton (1:N matchng) database Applcaton: Access Control www.vsage.com www.vsoncs.com Bometrc Authentcaton
More informationIntroduction: Analysis of Electronic Circuits
/30/008 ntroducton / ntroducton: Analyss of Electronc Crcuts Readng Assgnment: KVL and KCL text from EECS Just lke EECS, the majorty of problems (hw and exam) n EECS 3 wll be crcut analyss problems. Thus,
More informationThe Application of Fractional Brownian Motion in Option Pricing
Vol. 0, No. (05), pp. 738 http://dx.do.org/0.457/jmue.05.0..6 The Applcaton of Fractonal Brownan Moton n Opton Prcng Qngxn Zhou School of Basc Scence,arbn Unversty of Commerce,arbn zhouqngxn98@6.com
More informationFormula of Total Probability, Bayes Rule, and Applications
1 Formula of Total Probablty, Bayes Rule, and Applcatons Recall that for any event A, the par of events A and A has an ntersecton that s empty, whereas the unon A A represents the total populaton of nterest.
More informationgreatest common divisor
4. GCD 1 The greatest common dvsor of two ntegers a and b (not both zero) s the largest nteger whch s a common factor of both a and b. We denote ths number by gcd(a, b), or smply (a, b) when there s no
More informationNEUROFUZZY INFERENCE SYSTEM FOR ECOMMERCE WEBSITE EVALUATION
NEUROFUZZY INFERENE SYSTEM FOR EOMMERE WEBSITE EVALUATION Huan Lu, School of Software, Harbn Unversty of Scence and Technology, Harbn, hna Faculty of Appled Mathematcs and omputer Scence, Belarusan State
More informationLinear Circuits Analysis. Superposition, Thevenin /Norton Equivalent circuits
Lnear Crcuts Analyss. Superposton, Theenn /Norton Equalent crcuts So far we hae explored tmendependent (resste) elements that are also lnear. A tmendependent elements s one for whch we can plot an / cure.
More informationCS 2750 Machine Learning. Lecture 17a. Clustering. CS 2750 Machine Learning. Clustering
Lecture 7a Clusterng Mlos Hauskrecht mlos@cs.ptt.edu 539 Sennott Square Clusterng Groups together smlar nstances n the data sample Basc clusterng problem: dstrbute data nto k dfferent groups such that
More information6. EIGENVALUES AND EIGENVECTORS 3 = 3 2
EIGENVALUES AND EIGENVECTORS The Characterstc Polynomal If A s a square matrx and v s a nonzero vector such that Av v we say that v s an egenvector of A and s the correspondng egenvalue Av v Example :
More informationLecture 18: Clustering & classification
O CPS260/BGT204. Algorthms n Computatonal Bology October 30, 2003 Lecturer: Pana K. Agarwal Lecture 8: Clusterng & classfcaton Scrbe: Daun Hou Open Problem In HomeWor 2, problem 5 has an open problem whch
More informationDescriptive Models. Cluster Analysis. Example. General Applications of Clustering. Examples of Clustering Applications
CMSC828G Prncples of Data Mnng Lecture #9 Today s Readng: HMS, chapter 9 Today s Lecture: Descrptve Modelng Clusterng Algorthms Descrptve Models model presents the man features of the data, a global summary
More informationIMPROVEMENT OF CONVERGENCE CONDITION OF THE SQUAREROOT INTERVAL METHOD FOR MULTIPLE ZEROS 1
Nov Sad J. Math. Vol. 36, No. 2, 2006, 009 IMPROVEMENT OF CONVERGENCE CONDITION OF THE SQUAREROOT INTERVAL METHOD FOR MULTIPLE ZEROS Modrag S. Petkovć 2, Dušan M. Mloševć 3 Abstract. A new theorem concerned
More information1. Fundamentals of probability theory 2. Emergence of communication traffic 3. Stochastic & Markovian Processes (SP & MP)
6.3 /  Communcaton Networks II (Görg) SS20  www.comnets.unbremen.de Communcaton Networks II Contents. Fundamentals of probablty theory 2. Emergence of communcaton traffc 3. Stochastc & Markovan Processes
More informationNasdaq Iceland Bond Indices 01 April 2015
Nasdaq Iceland Bond Indces 01 Aprl 2015 Fxed duraton Indces Introducton Nasdaq Iceland (the Exchange) began calculatng ts current bond ndces n the begnnng of 2005. They were a response to recent changes
More informationImplementation and Evaluation of a Random Forest Machine Learning Algorithm
Implementaton and Evaluaton of a Random Forest Machne Learnng Algorthm Vachaslau Sazonau Unversty of Manchester, Oxford Road, Manchester, M13 9PL,UK sazonauv@cs.manchester.ac.uk Abstract hs work s amed
More informationGender Classification for RealTime Audience Analysis System
Gender Classfcaton for RealTme Audence Analyss System Vladmr Khryashchev, Lev Shmaglt, Andrey Shemyakov, Anton Lebedev Yaroslavl State Unversty Yaroslavl, Russa vhr@yandex.ru, shmaglt_lev@yahoo.com, andrey.shemakov@gmal.com,
More informationVision Mouse. Saurabh Sarkar a* University of Cincinnati, Cincinnati, USA ABSTRACT 1. INTRODUCTION
Vson Mouse Saurabh Sarkar a* a Unversty of Cncnnat, Cncnnat, USA ABSTRACT The report dscusses a vson based approach towards trackng of eyes and fngers. The report descrbes the process of locatng the possble
More informationQuality Adjustment of Secondhand Motor Vehicle Application of Hedonic Approach in Hong Kong s Consumer Price Index
Qualty Adustment of Secondhand Motor Vehcle Applcaton of Hedonc Approach n Hong Kong s Consumer Prce Index Prepared for the 14 th Meetng of the Ottawa Group on Prce Indces 20 22 May 2015, Tokyo, Japan
More informationANALYZING THE RELATIONSHIPS BETWEEN QUALITY, TIME, AND COST IN PROJECT MANAGEMENT DECISION MAKING
ANALYZING THE RELATIONSHIPS BETWEEN QUALITY, TIME, AND COST IN PROJECT MANAGEMENT DECISION MAKING Matthew J. Lberatore, Department of Management and Operatons, Vllanova Unversty, Vllanova, PA 19085, 6105194390,
More informationJoe Pimbley, unpublished, 2005. Yield Curve Calculations
Joe Pmbley, unpublshed, 005. Yeld Curve Calculatons Background: Everythng s dscount factors Yeld curve calculatons nclude valuaton of forward rate agreements (FRAs), swaps, nterest rate optons, and forward
More informationEfficient Project Portfolio as a tool for Enterprise Risk Management
Effcent Proect Portfolo as a tool for Enterprse Rsk Management Valentn O. Nkonov Ural State Techncal Unversty Growth Traectory Consultng Company January 5, 27 Effcent Proect Portfolo as a tool for Enterprse
More informationConversion between the vector and raster data structures using Fuzzy Geographical Entities
Converson between the vector and raster data structures usng Fuzzy Geographcal Enttes Cdála Fonte Department of Mathematcs Faculty of Scences and Technology Unversty of Combra, Apartado 38, 3 454 Combra,
More informationOn the Optimal Control of a Cascade of HydroElectric Power Stations
On the Optmal Control of a Cascade of HydroElectrc Power Statons M.C.M. Guedes a, A.F. Rbero a, G.V. Smrnov b and S. Vlela c a Department of Mathematcs, School of Scences, Unversty of Porto, Portugal;
More informationDesign and Development of a Security Evaluation Platform Based on International Standards
Internatonal Journal of Informatcs Socety, VOL.5, NO.2 (203) 780 7 Desgn and Development of a Securty Evaluaton Platform Based on Internatonal Standards Yuj Takahash and Yoshm Teshgawara Graduate School
More informationAn Analysis of Central Processor Scheduling in Multiprogrammed Computer Systems
STANCS73355 I SUSE73013 An Analyss of Central Processor Schedulng n Multprogrammed Computer Systems (Dgest Edton) by Thomas G. Prce October 1972 Techncal Report No. 57 Reproducton n whole or n part
More informationNPAR TESTS. OneSample ChiSquare Test. Cell Specification. Observed Frequencies 1O i 6. Expected Frequencies 1EXP i 6
PAR TESTS If a WEIGHT varable s specfed, t s used to replcate a case as many tmes as ndcated by the weght value rounded to the nearest nteger. If the workspace requrements are exceeded and samplng has
More information7.5. Present Value of an Annuity. Investigate
7.5 Present Value of an Annuty Owen and Anna are approachng retrement and are puttng ther fnances n order. They have worked hard and nvested ther earnngs so that they now have a large amount of money on
More informationL10: Linear discriminants analysis
L0: Lnear dscrmnants analyss Lnear dscrmnant analyss, two classes Lnear dscrmnant analyss, C classes LDA vs. PCA Lmtatons of LDA Varants of LDA Other dmensonalty reducton methods CSCE 666 Pattern Analyss
More informationRisk Model of LongTerm Production Scheduling in Open Pit Gold Mining
Rsk Model of LongTerm Producton Schedulng n Open Pt Gold Mnng R Halatchev 1 and P Lever 2 ABSTRACT Open pt gold mnng s an mportant sector of the Australan mnng ndustry. It uses large amounts of nvestments,
More informationFrequency Selective IQ Phase and IQ Amplitude Imbalance Adjustments for OFDM Direct Conversion Transmitters
Frequency Selectve IQ Phase and IQ Ampltude Imbalance Adjustments for OFDM Drect Converson ransmtters Edmund Coersmeer, Ernst Zelnsk Noka, Meesmannstrasse 103, 44807 Bochum, Germany edmund.coersmeer@noka.com,
More informationLogistic Regression. Steve Kroon
Logstc Regresson Steve Kroon Course notes sectons: 24.324.4 Dsclamer: these notes do not explctly ndcate whether values are vectors or scalars, but expects the reader to dscern ths from the context. Scenaro
More informationA Fast Incremental Spectral Clustering for Large Data Sets
2011 12th Internatonal Conference on Parallel and Dstrbuted Computng, Applcatons and Technologes A Fast Incremental Spectral Clusterng for Large Data Sets Tengteng Kong 1,YeTan 1, Hong Shen 1,2 1 School
More informationLogical Development Of Vogel s Approximation Method (LDVAM): An Approach To Find Basic Feasible Solution Of Transportation Problem
INTERNATIONAL JOURNAL OF SCIENTIFIC & TECHNOLOGY RESEARCH VOLUME, ISSUE, FEBRUARY ISSN 77866 Logcal Development Of Vogel s Approxmaton Method (LD An Approach To Fnd Basc Feasble Soluton Of Transportaton
More informationA study on the ability of Support Vector Regression and Neural Networks to Forecast Basic Time Series Patterns
A study on the ablty of Support Vector Regresson and Neural Networks to Forecast Basc Tme Seres Patterns Sven F. Crone, Jose Guajardo 2, and Rchard Weber 2 Lancaster Unversty, Department of Management
More informationA hybrid global optimization algorithm based on parallel chaos optimization and outlook algorithm
Avalable onlne www.ocpr.com Journal of Chemcal and Pharmaceutcal Research, 2014, 6(7):18841889 Research Artcle ISSN : 09757384 CODEN(USA) : JCPRC5 A hybrd global optmzaton algorthm based on parallel
More informationv a 1 b 1 i, a 2 b 2 i,..., a n b n i.
SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 455 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces we have studed thus far n the text are real vector spaces snce the scalars are
More informationExtending Probabilistic Dynamic Epistemic Logic
Extendng Probablstc Dynamc Epstemc Logc Joshua Sack May 29, 2008 Probablty Space Defnton A probablty space s a tuple (S, A, µ), where 1 S s a set called the sample space. 2 A P(S) s a σalgebra: a set
More informationCausal, Explanatory Forecasting. Analysis. Regression Analysis. Simple Linear Regression. Which is Independent? Forecasting
Causal, Explanatory Forecastng Assumes causeandeffect relatonshp between system nputs and ts output Forecastng wth Regresson Analyss Rchard S. Barr Inputs System Cause + Effect Relatonshp The job of
More information