Out-of-Sample Extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering


Yoshua Bengio, Jean-François Paiement, Pascal Vincent, Olivier Delalleau, Nicolas Le Roux and Marie Ouimet
Département d'Informatique et Recherche Opérationnelle
Université de Montréal
Montréal, Québec, Canada, H3C 3J7

Abstract

Several unsupervised learning algorithms based on an eigendecomposition provide either an embedding or a clustering only for given training points, with no straightforward extension for out-of-sample examples short of recomputing eigenvectors. This paper provides a unified framework for extending Local Linear Embedding (LLE), Isomap, Laplacian Eigenmaps, Multi-Dimensional Scaling (for dimensionality reduction) as well as for Spectral Clustering. This framework is based on seeing these algorithms as learning eigenfunctions of a data-dependent kernel. Numerical experiments show that the generalizations performed have a level of error comparable to the variability of the embedding algorithms due to the choice of training data.

1 Introduction

Many unsupervised learning algorithms have been recently proposed, all using an eigendecomposition for obtaining a lower-dimensional embedding of data lying on a non-linear manifold: Local Linear Embedding (LLE) (Roweis and Saul, 2000), Isomap (Tenenbaum, de Silva and Langford, 2000) and Laplacian Eigenmaps (Belkin and Niyogi, 2003). There are also many variants of Spectral Clustering (Weiss, 1999; Ng, Jordan and Weiss, 2002), in which such an embedding is an intermediate step before obtaining a clustering of the data that can capture flat, elongated and even curved clusters. The two tasks (manifold learning and clustering) are linked because the clusters found by spectral clustering can be arbitrary curved manifolds (as long as there is enough data to locally capture their curvature).

2 Common Framework

In this paper we consider five types of unsupervised learning algorithms that can be cast in the same framework, based on the computation of an embedding for the training points obtained from the principal eigenvectors of a symmetric matrix.

Algorithm 1

1.
Start from a data set $D = \{x_1, \ldots, x_n\}$ with $n$ points in $\mathbb{R}^d$. Construct an $n \times n$ neighborhood or similarity matrix $M$. Let us denote $K_D(\cdot, \cdot)$ (or $K$ for shorthand) the data-dependent function which produces $M$ by $M_{ij} = K_D(x_i, x_j)$.
2. Optionally transform $M$, yielding a normalized matrix $\tilde M$. Equivalently, this corresponds to generating $\tilde M$ from a $\tilde K_D$ by $\tilde M_{ij} = \tilde K_D(x_i, x_j)$.
3. Compute the $m$ largest positive eigenvalues $\lambda_k$ and eigenvectors $v_k$ of $\tilde M$.
4. The embedding of each example $x_i$ is the vector $y_i$ with $y_{ik}$ the $i$-th element of the $k$-th principal eigenvector $v_k$ of $\tilde M$. Alternatively (MDS and Isomap), the embedding is $e_i$, with $e_{ik} = \sqrt{\lambda_k}\, y_{ik}$. If the first $m$ eigenvalues are positive, then $e_i \cdot e_j$ is the best approximation of $\tilde M_{ij}$ using only $m$ coordinates, in the squared error sense.

In the following, we consider the specializations of Algorithm 1 for different unsupervised learning algorithms. Let $S_i$ be the $i$-th row sum of the affinity matrix $M$:
$$S_i = \sum_j M_{ij}. \quad (1)$$
We say that two points $(a, b)$ are $k$-nearest-neighbors of each other if $a$ is among the $k$ nearest neighbors of $b$ in $D \cup \{a\}$ or vice-versa. We denote by $x_{ij}$ the $j$-th coordinate of the vector $x_i$.

2.1 Multi-Dimensional Scaling

Multi-Dimensional Scaling (MDS) starts from a notion of distance or affinity $K$ that is computed between each pair of training examples. We consider here metric MDS (Cox and Cox, 1994). For the normalization step 2 in Algorithm 1, these distances are converted to equivalent dot products using the double-centering formula:
$$\tilde M_{ij} = -\frac{1}{2}\left(M_{ij} - \frac{1}{n} S_i - \frac{1}{n} S_j + \frac{1}{n^2} \sum_k S_k\right). \quad (2)$$
The embedding $e_{ik}$ of example $x_i$ is given by $\sqrt{\lambda_k}\, v_{ki}$.

2.2 Spectral Clustering

Spectral clustering (Weiss, 1999) can yield impressively good results where traditional clustering looking for round blobs in the data, such as K-means, would fail miserably. It is based on two main steps: first embedding the data points in a space in which clusters are more obvious (using the eigenvectors of a Gram matrix), and then applying a classical clustering algorithm such as K-means, e.g. as in (Ng, Jordan and Weiss, 2002). The affinity matrix $M$ is formed using a kernel such as the Gaussian kernel. Several normalization steps have been proposed. Among the most successful ones, as advocated in (Weiss, 1999; Ng, Jordan and Weiss, 2002), is the following:
$$\tilde M_{ij} = \frac{M_{ij}}{\sqrt{S_i S_j}}. \quad (3)$$
To obtain $m$ clusters, the first $m$ principal eigenvectors of $\tilde M$ are computed and K-means is applied on the unit-norm coordinates, obtained from the embedding $y_{ik} = v_{ki}$.
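The double-centering of eq. (2) followed by the spectral embedding of Algorithm 1 can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code; the function name and the toy example are ours.

```python
import numpy as np

def mds_embedding(D2, m=2):
    """Metric MDS sketch: double-center the squared-distance matrix D2
    (eq. 2), then embed with the top-m eigenpairs, e_ik = sqrt(lambda_k) v_ki."""
    n = D2.shape[0]
    S = D2.sum(axis=1)                          # row sums S_i (eq. 1)
    Mt = -0.5 * (D2 - S[:, None] / n - S[None, :] / n + S.sum() / n**2)
    lam, v = np.linalg.eigh(Mt)                 # eigenvalues in ascending order
    lam, v = lam[::-1][:m], v[:, ::-1][:, :m]   # keep the m largest
    return v * np.sqrt(np.maximum(lam, 0.0))    # n x m embedding

# Toy check: four collinear points are recovered up to sign and shift.
x = np.array([0.0, 1.0, 2.0, 3.0])
D2 = (x[:, None] - x[None, :]) ** 2
e = mds_embedding(D2, m=1)
```

For squared Euclidean distances the double-centered matrix is exactly the Gram matrix of the centered points, so the one-dimensional embedding reproduces the original pairwise distances.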
2.3 Laplacian Eigenmaps

Laplacian Eigenmaps is a recently proposed dimensionality reduction procedure (Belkin and Niyogi, 2003) that has also been applied to semi-supervised learning. The authors use an approximation of the Laplacian operator such as the Gaussian kernel or the matrix whose element $(i, j)$ is 1 if $x_i$ and $x_j$ are $k$-nearest-neighbors and 0 otherwise. Instead of solving an ordinary eigenproblem, the following generalized eigenproblem is solved:
$$(S - M) v_j = \lambda_j S v_j \quad (4)$$
with eigenvalues $\lambda_j$, eigenvectors $v_j$ and $S$ the diagonal matrix with entries given by eq. (1). The smallest eigenvalue is left out and the eigenvectors corresponding to the other small eigenvalues are used for the embedding. This is the same embedding that is computed with the spectral clustering algorithm from (Shi and Malik, 1997). As noted in (Weiss, 1999) (Normalization Lemma 1), an equivalent result (up to a component-wise scaling of the embedding) can be obtained by considering the principal eigenvectors of the normalized matrix defined in eq. (3).
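To make eq. (4) concrete, here is a small sketch (ours, not the authors' code). The generalized problem is reduced to an ordinary symmetric eigenproblem on $S^{-1/2}(S - M)S^{-1/2}$, a standard transformation that is equivalent up to the back-substitution $v = S^{-1/2} u$.

```python
import numpy as np

def laplacian_eigenmaps(M, m=2):
    """Sketch of eq. (4): solve (S - M) v = lambda S v by symmetrizing to
    S^{-1/2} (S - M) S^{-1/2} u = lambda u with v = S^{-1/2} u, then drop
    the trivial smallest eigenvalue (constant eigenvector)."""
    S = M.sum(axis=1)                            # diagonal of S, from eq. (1)
    d = 1.0 / np.sqrt(S)
    L = d[:, None] * (np.diag(S) - M) * d[None, :]
    lam, u = np.linalg.eigh(L)                   # ascending eigenvalues
    v = d[:, None] * u                           # back-transform eigenvectors
    return lam[1:m + 1], v[:, 1:m + 1]           # skip the smallest eigenvalue

# Two well-separated groups: the first embedding coordinate separates them.
X = np.array([0.0, 0.5, 5.0, 5.5])
M = np.exp(-(X[:, None] - X[None, :]) ** 2)      # Gaussian affinity
lam, v = laplacian_eigenmaps(M, m=1)
```

The returned coordinate is approximately piecewise-constant over the two groups, which is what makes a subsequent K-means step easy.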
2.4 Isomap

Isomap (Tenenbaum, de Silva and Langford, 2000) generalizes MDS to non-linear manifolds. It is based on replacing the Euclidean distance by an approximation of the geodesic distance on the manifold. We define the geodesic distance with respect to a data set $D$, a distance $d(u, v)$ and a neighborhood $k$ as follows:
$$\tilde D(a, b) = \min_p \sum_i d(p_i, p_{i+1}) \quad (5)$$
where $p$ is a sequence of points of length $l \geq 2$ with $p_1 = a$, $p_l = b$, $p_i \in D \ \forall i \in \{2, \ldots, l-1\}$ and $(p_i, p_{i+1})$ are $k$-nearest-neighbors. The length $l$ is free in the minimization. The Isomap algorithm obtains the normalized matrix $\tilde M$ from which the embedding is derived by transforming the raw pairwise distances matrix as follows: first compute the matrix $M_{ij} = \tilde D^2(x_i, x_j)$ of squared geodesic distances with respect to the data $D$, then apply to this matrix the distance-to-dot-product transformation (eq. (2)), as for MDS. As in MDS, the embedding is $e_{ik} = \sqrt{\lambda_k}\, v_{ki}$ rather than $y_{ik} = v_{ki}$.

2.5 LLE

The Local Linear Embedding (LLE) algorithm (Roweis and Saul, 2000) looks for an embedding that preserves the local geometry in the neighborhood of each data point. First, a sparse matrix of local predictive weights $W_{ij}$ is computed, such that $\sum_j W_{ij} = 1$, $W_{ij} = 0$ if $x_j$ is not a $k$-nearest-neighbor of $x_i$, and $\|\sum_j W_{ij} x_j - x_i\|^2$ is minimized. Then the matrix
$$M = (I - W)^\top (I - W) \quad (6)$$
is formed. The embedding is obtained from the lowest eigenvectors of $M$, except for the smallest eigenvector which is uninteresting because it is $(1, 1, \ldots, 1)$, with eigenvalue 0. Note that the lowest eigenvectors of $M$ are the largest eigenvectors of $\tilde M_\mu = \mu I - M$, which allows LLE to fit Algorithm 1 (the use of $\mu > 0$ will be discussed in section 4.4). The embedding is given by $y_{ik} = v_{ki}$, and is constant with respect to $\mu$.

3 From Eigenvectors to Eigenfunctions

To obtain an embedding for a new data point, we propose to use the Nyström formula (eq. 9) (Baker, 1977), which has been used successfully to speed up kernel methods computations by focussing the heavier computations (the eigendecomposition) on a subset of examples.
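The geodesic distance of eq. (5) is a shortest-path computation on the $k$-nearest-neighbor graph. A minimal sketch (ours; Floyd-Warshall is used for brevity, whereas practical implementations use Dijkstra's algorithm):

```python
import numpy as np

def geodesic_distances(X, k):
    """Sketch of eq. (5): build the k-nearest-neighbor graph with Euclidean
    edge lengths, then take all-pairs shortest paths (Floyd-Warshall)."""
    n = len(X)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    G = np.full((n, n), np.inf)
    order = np.argsort(d, axis=1)
    for i in range(n):
        for j in order[i, 1:k + 1]:           # edge if j is among i's k nearest
            G[i, j] = G[j, i] = d[i, j]       # (symmetrized, i.e. "or vice-versa")
    np.fill_diagonal(G, 0.0)
    for mid in range(n):                      # Floyd-Warshall relaxation
        G = np.minimum(G, G[:, mid:mid + 1] + G[mid:mid + 1, :])
    return G

# On an L-shaped point set, the geodesic between the endpoints follows the
# graph and therefore exceeds the straight-line Euclidean distance.
X = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [2.0, 1.0]])
G = geodesic_distances(X, k=2)
```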
The use of this formula can be justified by considering the convergence of eigenvectors and eigenvalues, as the number of examples increases (Baker, 1977; Williams and Seeger, 2000; Koltchinskii and Giné, 2000; Shawe-Taylor and Williams, 2003). Intuitively, the extensions to obtain the embedding for a new example require specifying a new column of the Gram matrix $M$, through a training-set dependent kernel function $K_D$, in which one of the arguments may be required to be in the training set. If we start from a data set $D$, obtain an embedding for its elements, and add more and more data, the embedding for the points in $D$ converges (for eigenvalues that are unique). (Shawe-Taylor and Williams, 2003) give bounds on the convergence error (in the case of kernel PCA). In the limit, we expect each eigenvector to converge to an eigenfunction for the linear operator defined below, in the sense that the $i$-th element of the $k$-th eigenvector converges to the application of the $k$-th eigenfunction to $x_i$ (up to a normalization factor).

Consider a Hilbert space $\mathcal{H}_p$ of functions with inner product $\langle f, g \rangle_p = \int f(x) g(x) p(x)\, dx$, with a density function $p(x)$. Associate with kernel $K$ a linear operator $K_p$ in $\mathcal{H}_p$:
$$(K_p f)(x) = \int K(x, y) f(y) p(y)\, dy. \quad (7)$$
We don't know the true density $p$ but we can approximate the above inner product and linear operator (and its eigenfunctions) using the empirical distribution $\hat p$. An empirical Hilbert space $\mathcal{H}_{\hat p}$ is thus defined using $\hat p$ instead of $p$. Note that the proposition below can be
applied even if the kernel is not positive semi-definite, although the embedding algorithms we have studied are restricted to using the principal coordinates associated with positive eigenvalues. For a more rigorous mathematical analysis, see (Bengio et al., 2003).

Proposition 1 Let $K(a, b)$ be a kernel function, not necessarily positive semi-definite, that gives rise to a symmetric matrix $M$ with entries $M_{ij} = K(x_i, x_j)$ on a data set $D = \{x_1, \ldots, x_n\}$. Let $(v_k, \lambda_k)$ be an (eigenvector, eigenvalue) pair that solves $M v_k = \lambda_k v_k$. Let $(f_k, \lambda'_k)$ be an (eigenfunction, eigenvalue) pair that solves $(K_{\hat p} f_k)(x) = \lambda'_k f_k(x)$ for any $x$, with $\hat p$ the empirical distribution over $D$. Let $e_k(x) = y_k(x) \sqrt{\lambda_k}$ or $y_k(x)$ denote the embedding associated with a new point $x$. Then
$$\lambda'_k = \frac{1}{n} \lambda_k \quad (8)$$
$$f_k(x) = \frac{\sqrt{n}}{\lambda_k} \sum_{i=1}^n v_{ki}\, K(x, x_i) \quad (9)$$
$$f_k(x_i) = \sqrt{n}\, v_{ki} \quad (10)$$
$$y_k(x) = \frac{f_k(x)}{\sqrt{n}} = \frac{1}{\lambda_k} \sum_{i=1}^n v_{ki}\, K(x, x_i) \quad (11)$$
$$y_k(x_i) = y_{ik}, \qquad e_k(x_i) = e_{ik} \quad (12)$$
See (Bengio et al., 2003) for a proof and further justifications of the above formulae. The generalized embedding for Isomap and MDS is $e_k(x) = \sqrt{\lambda_k}\, y_k(x)$ whereas the one for spectral clustering, Laplacian eigenmaps and LLE is $y_k(x)$.

Proposition 2 In addition, if the data-dependent kernel $K_D$ is positive semi-definite, then
$$f_k(x) = \sqrt{\frac{n}{\lambda_k}}\, \pi_k(x)$$
where $\pi_k(x)$ is the $k$-th component of the kernel PCA projection of $x$ obtained from the kernel $K_D$ (up to centering).

This relation with kernel PCA (Schölkopf, Smola and Müller, 1998), already pointed out in (Williams and Seeger, 2000), is further discussed in (Bengio et al., 2003).

4 Extending to new Points

Using Proposition 1, one obtains a natural extension of all the unsupervised learning algorithms mapped to Algorithm 1, provided we can write down a kernel function $K$ that gives rise to the matrix $M$ on $D$, and can be used in eq. (11) to generalize the embedding. We consider each of them in turn below. In addition to the convergence properties discussed in section 3, another justification for using equation (9) is given by the following proposition:

Proposition 3 If we define the $f_k(x_i)$ by eq.
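Equation (11) is straightforward to implement once the training Gram matrix has been eigendecomposed. The sketch below is our illustration (the function name is hypothetical): it takes the vector of kernel evaluations $K(x, x_i)$ for a new point and returns its $m$-dimensional embedding. By eq. (12), feeding in a row of the training Gram matrix must recover that training point's in-sample coordinates.

```python
import numpy as np

def nystrom_embedding(K_train, k_new, m=2):
    """Sketch of eq. (11): y_k(x) = (1/lambda_k) sum_i v_ki K(x, x_i),
    where (lambda_k, v_k) are the m principal eigenpairs of the n x n
    Gram matrix and k_new[i] = K(x, x_i) for the new point x."""
    lam, v = np.linalg.eigh(K_train)
    lam, v = lam[::-1][:m], v[:, ::-1][:, :m]    # m largest eigenpairs
    return (k_new @ v) / lam                     # length-m embedding y(x)

# Sanity check against eq. (12): for a training point x_i, the out-of-sample
# formula reproduces the in-sample coordinates v_ki.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))
K = X @ X.T                                      # a positive semi-definite kernel
y0 = nystrom_embedding(K, K[0], m=2)
```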
(10) and take a new point $x$, the value of $f_k(x)$ that minimizes
$$\sum_{i=1}^n \left( K(x, x_i) - \sum_{t=1}^m \lambda'_t\, f_t(x) f_t(x_i) \right)^2 \quad (13)$$
is given by eq. (9), for $m \geq 1$ and any $k \leq m$.

The proof is a direct consequence of the orthogonality of the eigenvectors $v_k$. This proposition links equations (9) and (10). Indeed, we can obtain eq. (10) when trying to approximate
$K$ at the data points by minimizing the cost
$$\sum_{i,j=1}^n \left( K(x_i, x_j) - \sum_{t=1}^m \lambda'_t\, f_t(x_i) f_t(x_j) \right)^2$$
for $m = 1, 2, \ldots$ When we add a new point $x$, it is thus natural to use the same cost to approximate the $K(x, x_i)$, which yields (13). Note that by doing so, we do not seek to approximate $K(x, x)$. Future work should investigate embeddings which minimize the empirical reconstruction error of $K$ but ignore the diagonal contributions.

4.1 Extending MDS

For MDS, a normalized kernel can be defined as follows, using a continuous version of the double-centering eq. (2):
$$\tilde K(a, b) = -\frac{1}{2}\left( d^2(a, b) - E_{x'}[d^2(x', b)] - E_{x'}[d^2(a, x')] + E_{x',x''}[d^2(x', x'')] \right) \quad (14)$$
where $d(a, b)$ is the original distance and the expectations are taken over the empirical data $D$. An extension of metric MDS to new points has already been proposed in (Gower, 1968), solving exactly for the embedding of $x$ to be consistent with its distances to training points, which in general requires adding a new dimension.

4.2 Extending Spectral Clustering and Laplacian Eigenmaps

Both the version of Spectral Clustering and Laplacian Eigenmaps described above are based on an initial kernel $K$, such as the Gaussian or nearest-neighbor kernel. An equivalent normalized kernel is:
$$\tilde K(a, b) = \frac{1}{n} \frac{K(a, b)}{\sqrt{E_x[K(a, x)]\, E_{x'}[K(b, x')]}}$$
where the expectations are taken over the empirical data $D$.

4.3 Extending Isomap

To extend Isomap, the test point is not used in computing the geodesic distance between training points, otherwise we would have to recompute all the geodesic distances. A reasonable solution is to use the definition of $\tilde D(a, b)$ in eq. (5), which only uses the training points as the intermediate points on the path from $a$ to $b$. We obtain a normalized kernel by applying the continuous double-centering of eq. (14) with $d = \tilde D$. A formula has already been proposed (de Silva and Tenenbaum, 2003) to approximate Isomap using only a subset of the examples (the landmark points) to compute the eigenvectors. Using our notations, this formula is
$$e'_k(x) = \frac{1}{2\sqrt{\lambda_k}} \sum_i v_{ki} \left( E_{x'}[\tilde D^2(x', x_i)] - \tilde D^2(x_i, x) \right) \quad (15)$$
where $E_{x'}$ is an average over the data set.
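For MDS (and for Isomap, with $d = \tilde D$), eq. (14) gives the kernel values that eq. (11) needs for a new point. A sketch under our naming, where `d2_new` holds $d^2(x, x_i)$ and the expectations are replaced by training-set averages:

```python
import numpy as np

def mds_kernel_new_point(d2_new, D2_train):
    """Sketch of eq. (14): K~(x, x_i) for a new point x. d2_new[i] = d^2(x, x_i)
    and D2_train is the n x n matrix of squared training distances."""
    col_mean = D2_train.mean(axis=0)     # E_x'[d^2(x', x_i)]
    grand_mean = D2_train.mean()         # E_x',x''[d^2(x', x'')]
    return -0.5 * (d2_new - d2_new.mean() - col_mean + grand_mean)

# When x coincides with a training point, this reduces to a row of the
# double-centered matrix of eq. (2).
x = np.array([0.0, 1.0, 3.0, 6.0])
D2 = (x[:, None] - x[None, :]) ** 2
k_row = mds_kernel_new_point(D2[1], D2)
```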
The formula is applied to obtain an embedding for the non-landmark examples.

Corollary 1 The embedding proposed in Proposition 1 for Isomap ($e_k(x)$) is equal to formula (15) (Landmark Isomap) when $\tilde K(x, y)$ is defined as in eq. (14) with $d = \tilde D$.

Proof: the proof relies on a property of the Gram matrix for Isomap: $\sum_i \tilde M_{ij} = 0$, by construction. Therefore $(1, 1, \ldots, 1)$ is an eigenvector with eigenvalue 0, and all the other eigenvectors $v_k$ have the property $\sum_i v_{ki} = 0$ because of the orthogonality with $(1, 1, \ldots, 1)$. Writing $(E_{x'}[\tilde D^2(x', x_i)] - \tilde D^2(x_i, x)) = 2\tilde K(x, x_i) + E_{x',x''}[\tilde D^2(x', x'')] - E_{x'}[\tilde D^2(x', x)]$ yields
$$e'_k(x) = \frac{2}{2\sqrt{\lambda_k}} \sum_i v_{ki}\, \tilde K(x, x_i) + \frac{1}{2\sqrt{\lambda_k}} \left( E_{x',x''}[\tilde D^2(x', x'')] - E_{x'}[\tilde D^2(x', x)] \right) \sum_i v_{ki} = e_k(x),$$
since the last sum is 0.
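Corollary 1 can be checked numerically. The sketch below is ours: it uses squared distances of one-dimensional points as a stand-in for the squared geodesic distances, treats a training point as the "new" point, and compares eq. (15) against $e_k(x) = \sqrt{\lambda_k}\, y_k(x)$ with $\tilde K$ from eq. (14).

```python
import numpy as np

# Our numerical check of Corollary 1 for the principal coordinate (k = 1).
x = np.array([0.0, 1.0, 2.5, 4.0, 7.0])
D2 = (x[:, None] - x[None, :]) ** 2            # stand-in for squared geodesics
Mt = -0.5 * (D2 - D2.mean(0)[None, :] - D2.mean(1)[:, None] + D2.mean())
lam_all, v_all = np.linalg.eigh(Mt)
lam, v = lam_all[-1], v_all[:, -1]             # principal eigenpair

j = 2                                          # treat x_j as the "new" point
k_new = -0.5 * (D2[j] - D2[j].mean() - D2.mean(0) + D2.mean())    # eq. (14)
e_prop1 = np.sqrt(lam) * (k_new @ v) / lam                        # eq. (11)
e_landmark = (v @ (D2.mean(0) - D2[j])) / (2 * np.sqrt(lam))      # eq. (15)
```

Because the double-centered matrix has zero row sums, $\sum_i v_{ki} = 0$ for the principal eigenvector, and the two expressions agree exactly, as the proof asserts.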
4.4 Extending LLE

The extension of LLE is the most challenging one because it does not fit as well the framework of Algorithm 1: the $M$ matrix for LLE does not have a clear interpretation in terms of distance or dot product. An extension has been proposed in (Saul and Roweis, 2002), but unfortunately it cannot be cast directly into the framework of Proposition 1. Their embedding of a new point $x$ is given by
$$y_k(x) = \sum_{i=1}^n y_{ik}\, w(x, x_i) \quad (16)$$
where $w(x, x_i)$ is the weight of $x_i$ in the reconstruction of $x$ by its $k$-nearest-neighbors in the training set (if $x = x_j \in D$, $w(x, x_i) = \delta_{ij}$). This is very close to eq. (11), but lacks the normalization by $\lambda_k$. However, we can see this embedding as a limit case of Proposition 1, as shown below.

We first need to define a kernel $K_\mu$ such that
$$K_\mu(x_i, x_j) = \tilde M_{\mu,ij} = (\mu - 1)\delta_{ij} + W_{ij} + W_{ji} - \sum_k W_{ki} W_{kj} \quad (17)$$
for $x_i, x_j \in D$. Let us define a kernel $K'$ by $K'(x_i, x) = K'(x, x_i) = w(x, x_i)$ and $K'(x, y) = 0$ when neither $x$ nor $y$ is in the training set $D$. Let $K''$ be defined by $K''(x_i, x_j) = W_{ij} + W_{ji} - \sum_k W_{ki} W_{kj}$ and $K''(x, y) = 0$ when either $x$ or $y$ isn't in $D$. Then, by construction, the kernel $K_\mu = (\mu - 1) K' + K''$ verifies eq. (17). Thus, we can apply eq. (11) to obtain an embedding of a new point $x$, which yields
$$y_{\mu,k}(x) = \frac{1}{\lambda_k} \sum_i y_{ik} \left( (\mu - 1) K'(x, x_i) + K''(x, x_i) \right)$$
with $\lambda_k = (\mu - \hat\lambda_k)$, and $\hat\lambda_k$ being the $k$-th lowest eigenvalue of $M$. This rewrites into
$$y_{\mu,k}(x) = \frac{\mu - 1}{\mu - \hat\lambda_k} \sum_i y_{ik}\, w(x, x_i) + \frac{1}{\mu - \hat\lambda_k} \sum_i y_{ik}\, K''(x, x_i).$$
Then when $\mu \to \infty$, $y_{\mu,k}(x) \to y_k(x)$ defined by eq. (16). Since the choice of $\mu$ is free, we can thus consider eq. (16) as approximating the use of the kernel $K_\mu$ with a large $\mu$ in Proposition 1. This is what we have done in the experiments described in the next section. Note however that we can find smoother kernels $K_\mu$ verifying eq. (17), giving other extensions of LLE from Proposition 1.
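The weights $w(x, x_i)$ of eq. (16) solve a small constrained least-squares problem in the neighborhood of $x$. A sketch (ours; the regularization constant is a common numerical stabilizer, not prescribed by the paper):

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Sketch of the reconstruction weights w(x, x_i) in eq. (16): minimize
    ||x - sum_i w_i eta_i||^2 over the neighbors eta_i, subject to
    sum_i w_i = 1, via the local Gram matrix of the centered neighbors."""
    Z = neighbors - x                             # neighbors relative to x
    C = Z @ Z.T                                   # local Gram matrix
    C = C + np.eye(len(C)) * reg * np.trace(C)    # regularize (C may be singular)
    w = np.linalg.solve(C, np.ones(len(C)))
    return w / w.sum()                            # enforce sum_i w_i = 1

# The midpoint of two neighbors is reconstructed with weights (1/2, 1/2).
w = lle_weights(np.array([1.0, 0.0]),
                np.array([[0.0, 0.0], [2.0, 0.0]]))
```

Plugging these weights into eq. (16) then gives the out-of-sample LLE coordinates as a convex-like combination of the training embeddings.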
It is out of the scope of this paper to study which kernel is best for generalization, but it seems desirable to use a smooth kernel that would take into account not only the reconstruction of $x$ by its neighbors $x_i$, but also the reconstruction of the $x_i$ by their neighbors including the new point $x$.

5 Experiments

We want to evaluate whether the precision of the generalizations suggested in the previous section is comparable to the intrinsic perturbations of the embedding algorithms. The perturbation analysis will be achieved by considering splits of the data in three sets, $D = F \cup R_1 \cup R_2$, and training either with $F \cup R_1$ or $F \cup R_2$, comparing the embeddings on $F$. For each algorithm described in section 2, we apply the following procedure:
Figure 1: Training set variability minus out-of-sample error, with respect to the proportion of training samples substituted. Top left: MDS. Top right: spectral clustering or Laplacian eigenmaps. Bottom left: Isomap. Bottom right: LLE. Error bars are 95% confidence intervals.

1. We choose $F \subset D$ with $m = |F|$ samples. The remaining $n - m$ samples in $D / F$ are split into two equal-size subsets $R_1$ and $R_2$. We train (obtain the eigenvectors) over $F \cup R_1$ and $F \cup R_2$. When eigenvalues are close, the estimated eigenvectors are unstable and can rotate in the subspace they span. Thus we estimate an affine alignment between the two embeddings using the points in $F$, and we calculate the Euclidean distance between the aligned embeddings obtained for each $s_i \in F$.
2. For each sample $s_i \in F$, we also train over $\{F \cup R_1\} / \{s_i\}$. We apply the extension to out-of-sample points to find the predicted embedding of $s_i$ and calculate the Euclidean distance between this embedding and the one obtained when training with $F \cup R_1$, i.e. with $s_i$ in the training set.
3. We calculate the mean difference (and its standard error, shown in the figure) between the distance obtained in step 1 and the one obtained in step 2 for each sample $s_i \in F$, and we repeat this experiment for various sizes of $F$.

The results obtained for MDS, Isomap, spectral clustering and LLE are shown in figure 1 for different values of $m$. Experiments are done over a database of 698 synthetic face images described by 4096 components. Qualitatively similar results have been obtained over other databases such as Ionosphere (http://www.ics.uci.edu/~mlearn/MLSummary.html) and swissroll (http://www.cs.toronto.edu/~roweis/lle/). Each algorithm generates a two-dimensional embedding of the images, following the experiments reported for Isomap. The number of neighbors is 10 for Isomap and LLE, and a Gaussian kernel with a standard deviation of 0.01 is used for spectral clustering / Laplacian eigenmaps. 95% confidence
intervals are drawn beside each mean difference of error on the figure. As expected, the mean difference between the two distances is almost monotonically increasing as the fraction of substituted examples grows (x-axis in the figure). In most cases, the out-of-sample error is less than or comparable to the training set embedding stability: it corresponds to substituting a fraction of between 1 and 4% of the training examples.

6 Conclusions

In this paper we have presented an extension to five unsupervised learning algorithms based on a spectral embedding of the data: MDS, spectral clustering, Laplacian eigenmaps, Isomap and LLE. This extension allows one to apply a trained model to out-of-sample points without having to recompute eigenvectors. It introduces a notion of function induction and generalization error for these algorithms. The experiments on real high-dimensional data show that the average distance between the out-of-sample and in-sample embeddings is comparable to or lower than the variation in in-sample embedding due to replacing a few points in the training set.

References

Baker, C. (1977). The numerical treatment of integral equations. Clarendon Press, Oxford.

Belkin, M. and Niyogi, P. (2003). Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6).

Bengio, Y., Vincent, P., Paiement, J., Delalleau, O., Ouimet, M., and Le Roux, N. (2003). Spectral clustering and kernel PCA are learning eigenfunctions. Technical report, Département d'informatique et recherche opérationnelle, Université de Montréal.

Cox, T. and Cox, M. (1994). Multidimensional Scaling. Chapman & Hall, London.

de Silva, V. and Tenenbaum, J. (2003). Global versus local methods in nonlinear dimensionality reduction. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems, volume 15, Cambridge, MA. The MIT Press.

Gower, J. (1968). Adding a point to vector diagrams in multivariate analysis. Biometrika, 55(3).

Koltchinskii, V. and Giné, E. (2000). Random matrix approximation of spectra of integral operators. Bernoulli, 6(1).

Ng, A.
Y., Jordan, M. I., and Weiss, Y. (2002). On spectral clustering: Analysis and an algorithm. In Dietterich, T. G., Becker, S., and Ghahramani, Z., editors, Advances in Neural Information Processing Systems 14, Cambridge, MA. MIT Press.

Roweis, S. and Saul, L. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500).

Saul, L. and Roweis, S. (2002). Think globally, fit locally: unsupervised learning of low dimensional manifolds. Journal of Machine Learning Research, 4.

Schölkopf, B., Smola, A., and Müller, K.-R. (1998). Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10.

Shawe-Taylor, J. and Williams, C. (2003). The stability of kernel principal components analysis and its relation to the process eigenspectrum. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems, volume 15. The MIT Press.

Shi, J. and Malik, J. (1997). Normalized cuts and image segmentation. In Proc. IEEE Conf. Computer Vision and Pattern Recognition.

Tenenbaum, J., de Silva, V., and Langford, J. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500).

Weiss, Y. (1999). Segmentation using eigenvectors: a unifying view. In Proceedings IEEE International Conference on Computer Vision.

Williams, C. and Seeger, M. (2000). The effect of the input density distribution on kernel-based classifiers. In Proceedings of the Seventeenth International Conference on Machine Learning. Morgan Kaufmann.
REVIEW OF RISK MANAGEMENT CONCEPTS LOSS DISTRIBUTIONS AND INSURANCE Loss and nsurance: When someone s subject to the rsk of ncurrng a fnancal loss, the loss s generally modeled usng a random varable or
More information+ + +   This circuit than can be reduced to a planar circuit
MeshCurrent Method The meshcurrent s analog of the nodeoltage method. We sole for a new set of arables, mesh currents, that automatcally satsfy KCLs. As such, meshcurrent method reduces crcut soluton to
More informationData Visualization by Pairwise Distortion Minimization
Communcatons n Statstcs, Theory and Methods 34 (6), 005 Data Vsualzaton by Parwse Dstorton Mnmzaton By Marc Sobel, and Longn Jan Lateck* Department of Statstcs and Department of Computer and Informaton
More informationCS 2750 Machine Learning. Lecture 3. Density estimation. CS 2750 Machine Learning. Announcements
Lecture 3 Densty estmaton Mlos Hauskrecht mlos@cs.ptt.edu 5329 Sennott Square Next lecture: Matlab tutoral Announcements Rules for attendng the class: Regstered for credt Regstered for audt (only f there
More informationOn Mean Squared Error of Hierarchical Estimator
S C H E D A E I N F O R M A T I C A E VOLUME 0 0 On Mean Squared Error of Herarchcal Estmator Stans law Brodowsk Faculty of Physcs, Astronomy, and Appled Computer Scence, Jagellonan Unversty, Reymonta
More informationHow Sets of Coherent Probabilities May Serve as Models for Degrees of Incoherence
1 st Internatonal Symposum on Imprecse Probabltes and Ther Applcatons, Ghent, Belgum, 29 June 2 July 1999 How Sets of Coherent Probabltes May Serve as Models for Degrees of Incoherence Mar J. Schervsh
More informationTransients Analysis of a Nuclear Power Plant Component for Fault Diagnosis
A publcaton of CHEMICAL ENGINEERING TRANSACTIONS VOL. 33, 213 Guest Edtors: Enrco Zo, Pero Barald Copyrght 213, AIDIC Servz S.r.l., ISBN 978889568242; ISSN 19749791 The Italan Assocaton of Chemcal
More informationPSYCHOLOGICAL RESEARCH (PYC 304C) Lecture 12
14 The Chsquared dstrbuton PSYCHOLOGICAL RESEARCH (PYC 304C) Lecture 1 If a normal varable X, havng mean µ and varance σ, s standardsed, the new varable Z has a mean 0 and varance 1. When ths standardsed
More informationProject Networks With MixedTime Constraints
Project Networs Wth MxedTme Constrants L Caccetta and B Wattananon Western Australan Centre of Excellence n Industral Optmsaton (WACEIO) Curtn Unversty of Technology GPO Box U1987 Perth Western Australa
More informationDescriptive Models. Cluster Analysis. Example. General Applications of Clustering. Examples of Clustering Applications
CMSC828G Prncples of Data Mnng Lecture #9 Today s Readng: HMS, chapter 9 Today s Lecture: Descrptve Modelng Clusterng Algorthms Descrptve Models model presents the man features of the data, a global summary
More informationModule 2 LOSSLESS IMAGE COMPRESSION SYSTEMS. Version 2 ECE IIT, Kharagpur
Module LOSSLESS IMAGE COMPRESSION SYSTEMS Lesson 3 Lossless Compresson: Huffman Codng Instructonal Objectves At the end of ths lesson, the students should be able to:. Defne and measure source entropy..
More informationLogistic Regression. Steve Kroon
Logstc Regresson Steve Kroon Course notes sectons: 24.324.4 Dsclamer: these notes do not explctly ndcate whether values are vectors or scalars, but expects the reader to dscern ths from the context. Scenaro
More informationAn Enhanced SuperResolution System with Improved Image Registration, Automatic Image Selection, and Image Enhancement
An Enhanced SuperResoluton System wth Improved Image Regstraton, Automatc Image Selecton, and Image Enhancement YuChuan Kuo ( ), ChenYu Chen ( ), and ChouShann Fuh ( ) Department of Computer Scence
More informationNPAR TESTS. OneSample ChiSquare Test. Cell Specification. Observed Frequencies 1O i 6. Expected Frequencies 1EXP i 6
PAR TESTS If a WEIGHT varable s specfed, t s used to replcate a case as many tmes as ndcated by the weght value rounded to the nearest nteger. If the workspace requrements are exceeded and samplng has
More informationMAPP. MERIS level 3 cloud and water vapour products. Issue: 1. Revision: 0. Date: 9.12.1998. Function Name Organisation Signature Date
Ttel: Project: Doc. No.: MERIS level 3 cloud and water vapour products MAPP MAPPATBDClWVL3 Issue: 1 Revson: 0 Date: 9.12.1998 Functon Name Organsaton Sgnature Date Author: Bennartz FUB Preusker FUB Schüller
More informationNew bounds in BalogSzemerédiGowers theorem
New bounds n BalogSzemerédGowers theorem By Tomasz Schoen Abstract We prove, n partcular, that every fnte subset A of an abelan group wth the addtve energy κ A 3 contans a set A such that A κ A and A
More informationDesign of Output Codes for Fast Covering Learning using Basic Decomposition Techniques
Journal of Computer Scence (7): 56557, 6 ISSN 5966 6 Scence Publcatons Desgn of Output Codes for Fast Coverng Learnng usng Basc Decomposton Technques Aruna Twar and Narendra S. Chaudhar, Faculty of Computer
More informationAn Alternative Way to Measure Private Equity Performance
An Alternatve Way to Measure Prvate Equty Performance Peter Todd Parlux Investment Technology LLC Summary Internal Rate of Return (IRR) s probably the most common way to measure the performance of prvate
More informationSVM Tutorial: Classification, Regression, and Ranking
SVM Tutoral: Classfcaton, Regresson, and Rankng Hwanjo Yu and Sungchul Km 1 Introducton Support Vector Machnes(SVMs) have been extensvely researched n the data mnng and machne learnng communtes for the
More informationDEFINING %COMPLETE IN MICROSOFT PROJECT
CelersSystems DEFINING %COMPLETE IN MICROSOFT PROJECT PREPARED BY James E Aksel, PMP, PMISP, MVP For Addtonal Informaton about Earned Value Management Systems and reportng, please contact: CelersSystems,
More informationJoe Pimbley, unpublished, 2005. Yield Curve Calculations
Joe Pmbley, unpublshed, 005. Yeld Curve Calculatons Background: Everythng s dscount factors Yeld curve calculatons nclude valuaton of forward rate agreements (FRAs), swaps, nterest rate optons, and forward
More information320 The Internatonal Arab Journal of Informaton Technology, Vol. 5, No. 3, July 2008 Comparsons Between Data Clusterng Algorthms Osama Abu Abbas Computer Scence Department, Yarmouk Unversty, Jordan Abstract:
More informationWhen Network Effect Meets Congestion Effect: Leveraging Social Services for Wireless Services
When Network Effect Meets Congeston Effect: Leveragng Socal Servces for Wreless Servces aowen Gong School of Electrcal, Computer and Energy Engeerng Arzona State Unversty Tempe, AZ 8587, USA xgong9@asuedu
More informationAN EFFECTIVE MATRIX GEOMETRIC MEAN SATISFYING THE ANDO LI MATHIAS PROPERTIES
MATHEMATICS OF COMPUTATION Volume, Number, Pages S 5578(XX) AN EFFECTIVE MATRIX GEOMETRIC MEAN SATISFYING THE ANDO LI MATHIAS PROPERTIES DARIO A. BINI, BEATRICE MEINI AND FEDERICO POLONI Abstract. We
More informationErrorPropagation.nb 1. Error Propagation
ErrorPropagaton.nb Error Propagaton Suppose that we make observatons of a quantty x that s subject to random fluctuatons or measurement errors. Our best estmate of the true value for ths quantty s then
More informationWe are now ready to answer the question: What are the possible cardinalities for finite fields?
Chapter 3 Fnte felds We have seen, n the prevous chapters, some examples of fnte felds. For example, the resdue class rng Z/pZ (when p s a prme) forms a feld wth p elements whch may be dentfed wth the
More informationSolution: Let i = 10% and d = 5%. By definition, the respective forces of interest on funds A and B are. i 1 + it. S A (t) = d (1 dt) 2 1. = d 1 dt.
Chapter 9 Revew problems 9.1 Interest rate measurement Example 9.1. Fund A accumulates at a smple nterest rate of 10%. Fund B accumulates at a smple dscount rate of 5%. Fnd the pont n tme at whch the forces
More informationQuantization Effects in Digital Filters
Quantzaton Effects n Dgtal Flters Dstrbuton of Truncaton Errors In two's complement representaton an exact number would have nfntely many bts (n general). When we lmt the number of bts to some fnte value
More informationInequality and The Accounting Period. Quentin Wodon and Shlomo Yitzhaki. World Bank and Hebrew University. September 2001.
Inequalty and The Accountng Perod Quentn Wodon and Shlomo Ytzha World Ban and Hebrew Unversty September Abstract Income nequalty typcally declnes wth the length of tme taen nto account for measurement.
More informationTime Series Analysis in Studies of AGN Variability. Bradley M. Peterson The Ohio State University
Tme Seres Analyss n Studes of AGN Varablty Bradley M. Peterson The Oho State Unversty 1 Lnear Correlaton Degree to whch two parameters are lnearly correlated can be expressed n terms of the lnear correlaton
More informationBrigid Mullany, Ph.D University of North Carolina, Charlotte
Evaluaton And Comparson Of The Dfferent Standards Used To Defne The Postonal Accuracy And Repeatablty Of Numercally Controlled Machnng Center Axes Brgd Mullany, Ph.D Unversty of North Carolna, Charlotte
More informationCHAPTER 7 VECTOR BUNDLES
CHAPTER 7 VECTOR BUNDLES We next begn addressng the queston: how do we assemble the tangent spaces at varous ponts of a manfold nto a coherent whole? In order to gude the decson, consder the case of U
More informationThe covariance is the two variable analog to the variance. The formula for the covariance between two variables is
Regresson Lectures So far we have talked only about statstcs that descrbe one varable. What we are gong to be dscussng for much of the remander of the course s relatonshps between two or more varables.
More informationFinancial Mathemetics
Fnancal Mathemetcs 15 Mathematcs Grade 12 Teacher Gude Fnancal Maths Seres Overvew In ths seres we am to show how Mathematcs can be used to support personal fnancal decsons. In ths seres we jon Tebogo,
More informationUsing Series to Analyze Financial Situations: Present Value
2.8 Usng Seres to Analyze Fnancal Stuatons: Present Value In the prevous secton, you learned how to calculate the amount, or future value, of an ordnary smple annuty. The amount s the sum of the accumulated
More informationSingle and multiple stage classifiers implementing logistic discrimination
Sngle and multple stage classfers mplementng logstc dscrmnaton Hélo Radke Bttencourt 1 Dens Alter de Olvera Moraes 2 Vctor Haertel 2 1 Pontfíca Unversdade Católca do Ro Grande do Sul  PUCRS Av. Ipranga,
More informationTHE DISTRIBUTION OF LOAN PORTFOLIO VALUE * Oldrich Alfons Vasicek
HE DISRIBUION OF LOAN PORFOLIO VALUE * Oldrch Alfons Vascek he amount of captal necessary to support a portfolo of debt securtes depends on the probablty dstrbuton of the portfolo loss. Consder a portfolo
More informationInstitute of Informatics, Faculty of Business and Management, Brno University of Technology,Czech Republic
Lagrange Multplers as Quanttatve Indcators n Economcs Ivan Mezník Insttute of Informatcs, Faculty of Busness and Management, Brno Unversty of TechnologCzech Republc Abstract The quanttatve role of Lagrange
More informationNonbinary Quantum ErrorCorrecting Codes from Algebraic Curves
Nonbnary Quantum ErrorCorrectng Codes from Algebrac Curves JonLark Km and Judy Walker Department of Mathematcs Unversty of NebraskaLncoln, Lncoln, NE 685880130 USA emal: {jlkm, jwalker}@math.unl.edu
More informationNMT EE 589 & UNM ME 482/582 ROBOT ENGINEERING. Dr. Stephen Bruder NMT EE 589 & UNM ME 482/582
NMT EE 589 & UNM ME 482/582 ROBOT ENGINEERING Dr. Stephen Bruder NMT EE 589 & UNM ME 482/582 7. Root Dynamcs 7.2 Intro to Root Dynamcs We now look at the forces requred to cause moton of the root.e. dynamcs!!
More informationwhere the coordinates are related to those in the old frame as follows.
Chapter 2  Cartesan Vectors and Tensors: Ther Algebra Defnton of a vector Examples of vectors Scalar multplcaton Addton of vectors coplanar vectors Unt vectors A bass of noncoplanar vectors Scalar product
More informationAnswer: A). There is a flatter IS curve in the high MPC economy. Original LM LM after increase in M. IS curve for low MPC economy
4.02 Quz Solutons Fall 2004 MultpleChoce Questons (30/00 ponts) Please, crcle the correct answer for each of the followng 0 multplechoce questons. For each queston, only one of the answers s correct.
More informationOptimal resource capacity management for stochastic networks
Submtted for publcaton. Optmal resource capacty management for stochastc networks A.B. Deker H. Mlton Stewart School of ISyE, Georga Insttute of Technology, Atlanta, GA 30332, ton.deker@sye.gatech.edu
More information21 Vectors: The Cross Product & Torque
21 Vectors: The Cross Product & Torque Do not use our left hand when applng ether the rghthand rule for the cross product of two vectors dscussed n ths chapter or the rghthand rule for somethng curl
More informationThe Application of Fractional Brownian Motion in Option Pricing
Vol. 0, No. (05), pp. 738 http://dx.do.org/0.457/jmue.05.0..6 The Applcaton of Fractonal Brownan Moton n Opton Prcng Qngxn Zhou School of Basc Scence,arbn Unversty of Commerce,arbn zhouqngxn98@6.com
More informationx f(x) 1 0.25 1 0.75 x 1 0 1 1 0.04 0.01 0.20 1 0.12 0.03 0.60
BIVARIATE DISTRIBUTIONS Let be a varable that assumes the values { 1,,..., n }. Then, a functon that epresses the relatve frequenc of these values s called a unvarate frequenc functon. It must be true
More informationProductForm Stationary Distributions for Deficiency Zero Chemical Reaction Networks
Bulletn of Mathematcal Bology (21 DOI 1.17/s11538195174 ORIGINAL ARTICLE ProductForm Statonary Dstrbutons for Defcency Zero Chemcal Reacton Networks Davd F. Anderson, Gheorghe Cracun, Thomas G. Kurtz
More informationMean Value Coordinates for Closed Triangular Meshes
Mean Value Coordnates for Closed Trangular Meshes Tao Ju, Scott Schaefer, Joe Warren Rce Unversty (a) (b) (c) (d) Fgure : Orgnal horse model wth enclosng trangle control mesh shown n black (a). Several
More informationUsing mixture covariance matrices to improve face and facial expression recognitions
Pattern Recognton Letters 24 (2003) 2159 2165 www.elsever.com/locate/patrec Usng mxture covarance matrces to mprove face and facal expresson recogntons Carlos E. Thomaz a, *, Duncan F. Glles a, Raul Q.
More informationNew Approaches to Support Vector Ordinal Regression
New Approaches to Support Vector Ordnal Regresson We Chu chuwe@gatsby.ucl.ac.uk Gatsby Computatonal Neuroscence Unt, Unversty College London, London, WCN 3AR, UK S. Sathya Keerth selvarak@yahoonc.com
More informationA study on the ability of Support Vector Regression and Neural Networks to Forecast Basic Time Series Patterns
A study on the ablty of Support Vector Regresson and Neural Networks to Forecast Basc Tme Seres Patterns Sven F. Crone, Jose Guajardo 2, and Rchard Weber 2 Lancaster Unversty, Department of Management
More informationn + d + q = 24 and.05n +.1d +.25q = 2 { n + d + q = 24 (3) n + 2d + 5q = 40 (2)
MATH 16T Exam 1 : Part I (InClass) Solutons 1. (0 pts) A pggy bank contans 4 cons, all of whch are nckels (5 ), dmes (10 ) or quarters (5 ). The pggy bank also contans a con of each denomnaton. The total
More informationCHAPTER 14 MORE ABOUT REGRESSION
CHAPTER 14 MORE ABOUT REGRESSION We learned n Chapter 5 that often a straght lne descrbes the pattern of a relatonshp between two quanttatve varables. For nstance, n Example 5.1 we explored the relatonshp
More informationPERRON FROBENIUS THEOREM
PERRON FROBENIUS THEOREM R. CLARK ROBINSON Defnton. A n n matrx M wth real entres m, s called a stochastc matrx provded () all the entres m satsfy 0 m, () each of the columns sum to one, m = for all, ()
More informationRegression Models for a Binary Response Using EXCEL and JMP
SEMATECH 997 Statstcal Methods Symposum Austn Regresson Models for a Bnary Response Usng EXCEL and JMP Davd C. Trndade, Ph.D. STATTECH Consultng and Tranng n Appled Statstcs San Jose, CA Topcs Practcal
More information