Dropout: A Simple Way to Prevent Neural Networks from Overfitting


Journal of Machine Learning Research 15 (2014). Submitted 11/13; Published 6/14.

Dropout: A Simple Way to Prevent Neural Networks from Overfitting

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
Department of Computer Science, University of Toronto, 10 King's College Road, Rm 3302, Toronto, Ontario, M5S 3G4, Canada.

Editor: Yoshua Bengio

(c) 2014 Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever and Ruslan Salakhutdinov.

Abstract

Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different thinned networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.

Keywords: neural networks, regularization, model combination, deep learning

1. Introduction

Deep neural networks contain multiple non-linear hidden layers and this makes them very expressive models that can learn very complicated relationships between their inputs and outputs. With limited training data, however, many of these complicated relationships will be the result of sampling noise, so they will exist in the training set but not in real test data even if it is drawn from the same distribution. This leads to overfitting and many methods have been developed for reducing it. These include stopping the training as soon as performance on a validation set starts to get worse, introducing weight penalties of various kinds such as L1 and L2 regularization and soft weight sharing (Nowlan and Hinton, 1992).

With unlimited computation, the best way to "regularize" a fixed-sized model is to average the predictions of all possible settings of the parameters, weighting each setting by its posterior probability given the training data.

Figure 1: Dropout Neural Net Model. Left: A standard neural net with 2 hidden layers. Right: An example of a thinned net produced by applying dropout to the network on the left. Crossed units have been dropped.

This can sometimes be approximated quite well for simple or small models (Xiong et al., 2011; Salakhutdinov and Mnih, 2008), but we would like to approach the performance of the Bayesian gold standard using considerably less computation. We propose to do this by approximating an equally weighted geometric mean of the predictions of an exponential number of learned models that share parameters.

Model combination nearly always improves the performance of machine learning methods. With large neural networks, however, the obvious idea of averaging the outputs of many separately trained nets is prohibitively expensive. Combining several models is most helpful when the individual models are different from each other, and in order to make neural net models different, they should either have different architectures or be trained on different data. Training many different architectures is hard because finding optimal hyperparameters for each architecture is a daunting task and training each large network requires a lot of computation. Moreover, large networks normally require large amounts of training data and there may not be enough data available to train different networks on different subsets of the data. Even if one was able to train many different large networks, using them all at test time is infeasible in applications where it is important to respond quickly.

Dropout is a technique that addresses both these issues. It prevents overfitting and provides a way of approximately combining exponentially many different neural network architectures efficiently. The term "dropout" refers to dropping out units (hidden and visible) in a neural network. By dropping a unit out, we mean temporarily removing it from the network, along with all its incoming and outgoing connections, as shown in Figure 1. The choice of which units to drop is random. In the simplest case, each unit is retained with a fixed probability p independent of other units, where p can be chosen using a validation set or can simply be set at 0.5, which seems to be close to optimal for a wide range of networks and tasks. For the input units, however, the optimal probability of retention is usually closer to 1 than to 0.5.

Figure 2: Left: A unit at training time that is present with probability p and is connected to units in the next layer with weights w. Right: At test time, the unit is always present and the weights are multiplied by p. The output at test time is the same as the expected output at training time.

Applying dropout to a neural network amounts to sampling a "thinned" network from it. The thinned network consists of all the units that survived dropout (Figure 1b). A neural net with n units can be seen as a collection of 2^n possible thinned neural networks. These networks all share weights so that the total number of parameters is still O(n^2), or less. For each presentation of each training case, a new thinned network is sampled and trained. So training a neural network with dropout can be seen as training a collection of 2^n thinned networks with extensive weight sharing, where each thinned network gets trained very rarely, if at all.

At test time, it is not feasible to explicitly average the predictions from exponentially many thinned models. However, a very simple approximate averaging method works well in practice. The idea is to use a single neural net at test time without dropout. The weights of this network are scaled-down versions of the trained weights. If a unit is retained with probability p during training, the outgoing weights of that unit are multiplied by p at test time as shown in Figure 2. This ensures that for any hidden unit the expected output (under the distribution used to drop units at training time) is the same as the actual output at test time. By doing this scaling, 2^n networks with shared weights can be combined into a single neural network to be used at test time. We found that training a network with dropout and using this approximate averaging method at test time leads to significantly lower generalization error on a wide variety of classification problems compared to training with other regularization methods.

The idea of dropout is not limited to feed-forward neural nets. It can be more generally applied to graphical models such as Boltzmann Machines. In this paper, we introduce the dropout Restricted Boltzmann Machine model and compare it to standard Restricted Boltzmann Machines (RBM). Our experiments show that dropout RBMs are better than standard RBMs in certain respects.

This paper is structured as follows. Section 2 describes the motivation for this idea. Section 3 describes relevant previous work. Section 4 formally describes the dropout model. Section 5 gives an algorithm for training dropout networks. In Section 6, we present our experimental results where we apply dropout to problems in different domains and compare it with other forms of regularization and model combination. Section 7 analyzes the effect of dropout on different properties of a neural network and describes how dropout interacts with the network's hyperparameters. Section 8 describes the Dropout RBM model. In Section 9 we explore the idea of marginalizing dropout. In Appendix A we present a practical guide for training dropout nets. This includes a detailed analysis of the practical considerations involved in choosing hyperparameters when training dropout networks.

2. Motivation

A motivation for dropout comes from a theory of the role of sex in evolution (Livnat et al., 2010). Sexual reproduction involves taking half the genes of one parent and half of the other, adding a very small amount of random mutation, and combining them to produce an offspring. The asexual alternative is to create an offspring with a slightly mutated copy of the parent's genes. It seems plausible that asexual reproduction should be a better way to optimize individual fitness because a good set of genes that have come to work well together can be passed on directly to the offspring. On the other hand, sexual reproduction is likely to break up these co-adapted sets of genes, especially if these sets are large and, intuitively, this should decrease the fitness of organisms that have already evolved complicated co-adaptations. However, sexual reproduction is the way most advanced organisms evolved.

One possible explanation for the superiority of sexual reproduction is that, over the long term, the criterion for natural selection may not be individual fitness but rather mix-ability of genes. The ability of a set of genes to be able to work well with another random set of genes makes them more robust. Since a gene cannot rely on a large set of partners to be present at all times, it must learn to do something useful on its own or in collaboration with a small number of other genes. According to this theory, the role of sexual reproduction is not just to allow useful new genes to spread throughout the population, but also to facilitate this process by reducing complex co-adaptations that would reduce the chance of a new gene improving the fitness of an individual. Similarly, each hidden unit in a neural network trained with dropout must learn to work with a randomly chosen sample of other units. This should make each hidden unit more robust and drive it towards creating useful features on its own without relying on other hidden units to correct its mistakes. However, the hidden units within a layer will still learn to do different things from each other. One might imagine that the net would become robust against dropout by making many copies of each hidden unit, but this is a poor solution for exactly the same reason as replica codes are a poor way to deal with a noisy channel.

A closely related, but slightly different motivation for dropout comes from thinking about successful conspiracies. Ten conspiracies each involving five people is probably a better way to create havoc than one big conspiracy that requires fifty people to all play their parts correctly. If conditions do not change and there is plenty of time for rehearsal, a big conspiracy can work well, but with non-stationary conditions, the smaller the conspiracy the greater its chance of still working. Complex co-adaptations can be trained to work well on a training set, but on novel test data they are far more likely to fail than multiple simpler co-adaptations that achieve the same thing.

3. Related Work

Dropout can be interpreted as a way of regularizing a neural network by adding noise to its hidden units. The idea of adding noise to the states of units has previously been used in the context of Denoising Autoencoders (DAEs) by Vincent et al. (2008, 2010) where noise is added to the input units of an autoencoder and the network is trained to reconstruct the noise-free input.

Our work extends this idea by showing that dropout can be effectively applied in the hidden layers as well and that it can be interpreted as a form of model averaging. We also show that adding noise is not only useful for unsupervised feature learning but can also be extended to supervised learning problems. In fact, our method can be applied to other neuron-based architectures, for example, Boltzmann Machines. While 5% noise typically works best for DAEs, we found that our weight scaling procedure applied at test time enables us to use much higher noise levels. Dropping out 20% of the input units and 50% of the hidden units was often found to be optimal.

Since dropout can be seen as a stochastic regularization technique, it is natural to consider its deterministic counterpart which is obtained by marginalizing out the noise. In this paper, we show that, in simple cases, dropout can be analytically marginalized out to obtain deterministic regularization methods. Recently, van der Maaten et al. (2013) also explored deterministic regularizers corresponding to different exponential-family noise distributions, including dropout (which they refer to as "blankout noise"). However, they apply noise to the inputs and only explore models with no hidden layers. Wang and Manning (2013) proposed a method for speeding up dropout by marginalizing dropout noise. Chen et al. (2012) explored marginalization in the context of denoising autoencoders.

In dropout, we minimize the loss function stochastically under a noise distribution. This can be seen as minimizing an expected loss function. Previous work of Globerson and Roweis (2006); Dekel et al. (2010) explored an alternate setting where the loss is minimized when an adversary gets to pick which units to drop. Here, instead of a noise distribution, the maximum number of units that can be dropped is fixed. However, this work also does not explore models with hidden units.

4. Model Description

This section describes the dropout neural network model. Consider a neural network with L hidden layers. Let l ∈ {1, ..., L} index the hidden layers of the network. Let z^(l) denote the vector of inputs into layer l, y^(l) denote the vector of outputs from layer l (y^(0) = x is the input). W^(l) and b^(l) are the weights and biases at layer l. The feed-forward operation of a standard neural network (Figure 3a) can be described as (for l ∈ {0, ..., L-1} and any hidden unit i)

    z_i^(l+1) = w_i^(l+1) y^(l) + b_i^(l+1),
    y_i^(l+1) = f(z_i^(l+1)),

where f is any activation function, for example, f(x) = 1/(1 + exp(-x)). With dropout, the feed-forward operation becomes (Figure 3b)

    r_j^(l) ~ Bernoulli(p),
    ỹ^(l) = r^(l) * y^(l),
    z_i^(l+1) = w_i^(l+1) ỹ^(l) + b_i^(l+1),
    y_i^(l+1) = f(z_i^(l+1)).

Figure 3: Comparison of the basic operations of a standard and dropout network.

Here * denotes an element-wise product. For any layer l, r^(l) is a vector of independent Bernoulli random variables each of which has probability p of being 1. This vector is sampled and multiplied element-wise with the outputs of that layer, y^(l), to create the thinned outputs ỹ^(l). The thinned outputs are then used as input to the next layer. This process is applied at each layer. This amounts to sampling a sub-network from a larger network. For learning, the derivatives of the loss function are backpropagated through the sub-network. At test time, the weights are scaled as W_test^(l) = p W^(l) as shown in Figure 2. The resulting neural network is used without dropout.
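As a concrete illustration, the following NumPy sketch implements the masked forward pass and the test-time weight scaling described above. It is not the authors' code; the layer sizes, the single retention probability p shared by all layers, and the helper name forward are assumptions made for the example.

```python
import numpy as np

def forward(x, weights, biases, p=0.5, train=True, rng=np.random.default_rng(0)):
    """Feed-forward pass with dropout masks at training time and
    weight scaling (W_test = p * W) at test time, as in Section 4."""
    y = x
    n_layers = len(weights)
    for l, (W, b) in enumerate(zip(weights, biases)):
        if train:
            # r^(l) ~ Bernoulli(p): sample a mask and thin this layer's input.
            r = rng.binomial(1, p, size=y.shape)
            y = r * y                      # y_tilde = r * y (element-wise)
            z = y @ W + b
        else:
            # At test time no units are dropped; weights are scaled by p so the
            # expected input to each unit matches what it saw during training.
            z = y @ (p * W) + b
        # ReLU for hidden layers, identity (logits) for the output layer.
        y = np.maximum(z, 0) if l < n_layers - 1 else z
    return y

# Toy usage: a 2-hidden-layer network on random data (illustrative shapes only).
rng = np.random.default_rng(0)
shapes = [(784, 256), (256, 256), (256, 10)]
weights = [rng.normal(0, 0.1, s) for s in shapes]
biases = [np.zeros(s[1]) for s in shapes]
x = rng.normal(size=(32, 784))
train_logits = forward(x, weights, biases, p=0.5, train=True)
test_logits = forward(x, weights, biases, p=0.5, train=False)
```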

5. Learning Dropout Nets

This section describes a procedure for training dropout neural nets.

5.1 Backpropagation

Dropout neural networks can be trained using stochastic gradient descent in a manner similar to standard neural nets. The only difference is that for each training case in a mini-batch, we sample a thinned network by dropping out units. Forward and backpropagation for that training case are done only on this thinned network. The gradients for each parameter are averaged over the training cases in each mini-batch. Any training case which does not use a parameter contributes a gradient of zero for that parameter. Many methods have been used to improve stochastic gradient descent such as momentum, annealed learning rates and L2 weight decay. Those were found to be useful for dropout neural networks as well.

One particular form of regularization was found to be especially useful for dropout: constraining the norm of the incoming weight vector at each hidden unit to be upper bounded by a fixed constant c. In other words, if w represents the vector of weights incident on any hidden unit, the neural network was optimized under the constraint ||w||_2 ≤ c. This constraint was imposed during optimization by projecting w onto the surface of a ball of radius c, whenever w went out of it. This is also called max-norm regularization since it implies that the maximum value that the norm of any weight can take is c. The constant c is a tunable hyperparameter, which is determined using a validation set. Max-norm regularization has been previously used in the context of collaborative filtering (Srebro and Shraibman, 2005). It typically improves the performance of stochastic gradient descent training of deep neural nets, even when no dropout is used.
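A minimal sketch of this projection step, assuming the weight matrix stores one hidden unit's incoming weights per column (the function name and the value of c are illustrative):

```python
import numpy as np

def max_norm_project(W, c=3.0):
    """Project each hidden unit's incoming weight vector back onto the
    ball of radius c, i.e. enforce ||w||_2 <= c after a gradient step.
    Columns of W are assumed to hold the incoming weights of one unit."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    # Rescale only the columns whose norm exceeds c; leave the rest untouched.
    scale = np.minimum(1.0, c / np.maximum(norms, 1e-12))
    return W * scale

# Typical use inside an SGD loop (sketch):
#   W -= learning_rate * grad_W
#   W = max_norm_project(W, c)
W = np.random.default_rng(0).normal(0, 2.0, (784, 256))
W = max_norm_project(W, c=3.0)
print(np.linalg.norm(W, axis=0).max())  # no column norm exceeds c
```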

Although dropout alone gives significant improvements, using dropout along with max-norm regularization, large decaying learning rates and high momentum provides a significant boost over just using dropout. A possible justification is that constraining weight vectors to lie inside a ball of fixed radius makes it possible to use a huge learning rate without the possibility of weights blowing up. The noise provided by dropout then allows the optimization process to explore different regions of the weight space that would have otherwise been difficult to reach. As the learning rate decays, the optimization takes shorter steps, thereby doing less exploration, and eventually settles into a minimum.

5.2 Unsupervised Pretraining

Neural networks can be pretrained using stacks of RBMs (Hinton and Salakhutdinov, 2006), autoencoders (Vincent et al., 2010) or Deep Boltzmann Machines (Salakhutdinov and Hinton, 2009). Pretraining is an effective way of making use of unlabeled data. Pretraining followed by finetuning with backpropagation has been shown to give significant performance boosts over finetuning from random initializations in certain cases.

Dropout can be applied to finetune nets that have been pretrained using these techniques. The pretraining procedure stays the same. The weights obtained from pretraining should be scaled up by a factor of 1/p. This makes sure that for each unit, the expected output from it under random dropout will be the same as the output during pretraining. We were initially concerned that the stochastic nature of dropout might wipe out the information in the pretrained weights. This did happen when the learning rates used during finetuning were comparable to the best learning rates for randomly initialized nets. However, when the learning rates were chosen to be smaller, the information in the pretrained weights seemed to be retained and we were able to get improvements in terms of the final generalization error compared to not using dropout when finetuning.
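A small illustration of the rescaling step before dropout finetuning (a sketch with made-up shapes and names, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
p_retain = 0.5  # retention probability that will be used during finetuning

# Stand-ins for weights produced by unsupervised pretraining (illustrative shapes).
pretrained_weights = [rng.normal(0, 0.1, (784, 500)),
                      rng.normal(0, 0.1, (500, 500))]

# Scale the pretrained weights up by 1/p so that each unit's expected input
# under dropout matches the input it saw during pretraining.
finetune_weights = [W / p_retain for W in pretrained_weights]
```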

6. Experimental Results

We trained dropout neural networks for classification problems on data sets in different domains. We found that dropout improved generalization performance on all data sets compared to neural networks that did not use dropout. Table 1 gives a brief description of the data sets. The data sets are:

MNIST: A standard toy data set of handwritten digits.
TIMIT: A standard speech benchmark for clean speech recognition.
CIFAR-10 and CIFAR-100: Tiny natural images (Krizhevsky, 2009).
Street View House Numbers data set (SVHN): Images of house numbers collected by Google Street View (Netzer et al., 2011).
ImageNet: A large collection of natural images.
Reuters-RCV1: A collection of Reuters newswire articles.
Alternative Splicing data set: RNA features for predicting alternative gene splicing (Xiong et al., 2011).

We chose a diverse set of data sets to demonstrate that dropout is a general technique for improving neural nets and is not specific to any particular application domain. In this section, we present some key results that show the effectiveness of dropout. A more detailed description of all the experiments and data sets is provided in Appendix B.

Table 1: Overview of the data sets used in this paper.
Data Set | Domain | Dimensionality | Training Set | Test Set
MNIST | Vision | 784 (28 x 28 grayscale) | 60K | 10K
SVHN | Vision | 3072 (32 x 32 color) | 600K | 26K
CIFAR-10/100 | Vision | 3072 (32 x 32 color) | 60K | 10K
ImageNet (ILSVRC-2012) | Vision | (color) | 1.2M | 150K
TIMIT | Speech | 2520 (120-dim, 21 frames) | 1.1M frames | 58K frames
Reuters-RCV1 | Text | | | 200K
Alternative Splicing | Genetics | | |

6.1 Results on Image Data Sets

We used five image data sets to evaluate dropout: MNIST, SVHN, CIFAR-10, CIFAR-100 and ImageNet. These data sets include different image types and training set sizes. Models which achieve state-of-the-art results on all of these data sets use dropout.

6.1.1 MNIST

The MNIST data set consists of 28 x 28 pixel handwritten digit images. The task is to classify the images into 10 digit classes. Table 2 compares the performance of dropout with other techniques.

Table 2: Comparison of different models on MNIST.
Method | Unit Type | Architecture | Error %
Standard Neural Net (Simard et al., 2003) | Logistic | 2 layers, 800 units | 1.60
SVM Gaussian kernel | NA | NA | 1.40
Dropout NN | Logistic | 3 layers, 1024 units | 1.35
Dropout NN | ReLU | 3 layers, 1024 units | 1.25
Dropout NN + max-norm constraint | ReLU | 3 layers, 1024 units | 1.06
Dropout NN + max-norm constraint | ReLU | 3 layers, 2048 units | 1.04
Dropout NN + max-norm constraint | ReLU | 2 layers, 4096 units | 1.01
Dropout NN + max-norm constraint | ReLU | 2 layers, 8192 units | 0.95
Dropout NN + max-norm constraint (Goodfellow et al., 2013) | Maxout | 2 layers, (5 x 240) units | 0.94
DBN + finetuning (Hinton and Salakhutdinov, 2006) | Logistic | |
DBM + finetuning (Salakhutdinov and Hinton, 2009) | Logistic | |
DBN + dropout finetuning | Logistic | |
DBM + dropout finetuning | Logistic | | 0.79

The best performing neural networks for the permutation invariant setting that do not use dropout or unsupervised pretraining achieve an error of about 1.60% (Simard et al., 2003).

With dropout the error reduces to 1.35%. Replacing logistic units with rectified linear units (ReLUs) (Jarrett et al., 2009) further reduces the error to 1.25%. Adding max-norm regularization again reduces it to 1.06%. Increasing the size of the network leads to better results. A neural net with 2 layers and 8192 units per layer gets down to 0.95% error. Note that this network has more than 65 million parameters and is being trained on a data set of size 60,000. Training a network of this size to give good generalization error is very hard with standard regularization methods and early stopping. Dropout, on the other hand, prevents overfitting, even in this case. It does not even need early stopping. Goodfellow et al. (2013) showed that results can be further improved to 0.94% by replacing ReLU units with maxout units. All dropout nets use p = 0.5 for hidden units and p = 0.8 for input units. More experimental details can be found in Appendix B.1.

Dropout nets pretrained with stacks of RBMs and Deep Boltzmann Machines also give improvements as shown in Table 2. DBM-pretrained dropout nets achieve a test error of 0.79% which is the best performance ever reported for the permutation invariant setting. We note that it is possible to obtain better results by using 2-D spatial information and augmenting the training set with distorted versions of images from the standard training set. We demonstrate the effectiveness of dropout in that setting on more interesting data sets.

In order to test the robustness of dropout, classification experiments were done with networks of many different architectures keeping all hyperparameters, including p, fixed. Figure 4 shows the test error rates obtained for these different architectures as training progresses. The same architectures trained with and without dropout have drastically different test errors, as seen by the two separate clusters of trajectories. Dropout gives a huge improvement across all architectures, without using hyperparameters that were tuned specifically for each architecture.

Figure 4: Test error for different architectures with and without dropout. The networks have 2 to 4 hidden layers each with 1024 to 2048 units.

6.1.2 Street View House Numbers

The Street View House Numbers (SVHN) Data Set (Netzer et al., 2011) consists of color images of house numbers collected by Google Street View. Figure 5a shows some examples of images from this data set. The part of the data set that we use in our experiments consists of 32 x 32 color images roughly centered on a digit in a house number. The task is to identify that digit.

For this data set, we applied dropout to convolutional neural networks (LeCun et al., 1989). The best architecture that we found has three convolutional layers followed by 2 fully connected hidden layers. All hidden units were ReLUs. Each convolutional layer was followed by a max-pooling layer.

Appendix B.2 describes the architecture in more detail. Dropout was applied to all the layers of the network with the probability of retaining a hidden unit being p = (0.9, 0.75, 0.75, 0.5, 0.5, 0.5) for the different layers of the network (going from input to convolutional layers to fully connected layers). Max-norm regularization was used for weights in both convolutional and fully connected layers.

Table 3 compares the results obtained by different methods. We find that convolutional nets outperform other methods. The best performing convolutional nets that do not use dropout achieve an error rate of 3.95%. Adding dropout only to the fully connected layers reduces the error to 3.02%. Adding dropout to the convolutional layers as well further reduces the error to 2.55%. Even more gains can be obtained by using maxout units.

Table 3: Results on the Street View House Numbers data set.
Method | Error %
Binary Features (WDCH) (Netzer et al., 2011) | 36.7
HOG (Netzer et al., 2011) | 15.0
Stacked Sparse Autoencoders (Netzer et al., 2011) | 10.3
KMeans (Netzer et al., 2011) | 9.4
Multi-stage Conv Net with average pooling (Sermanet et al., 2012) | 9.06
Multi-stage Conv Net + L2 pooling (Sermanet et al., 2012) | 5.36
Multi-stage Conv Net + L4 pooling + padding (Sermanet et al., 2012) | 4.90
Conv Net + max-pooling | 3.95
Conv Net + max pooling + dropout in fully connected layers | 3.02
Conv Net + stochastic pooling (Zeiler and Fergus, 2013) | 2.80
Conv Net + max pooling + dropout in all layers | 2.55
Conv Net + maxout (Goodfellow et al., 2013) | 2.47
Human Performance | 2.0

The additional gain in performance obtained by adding dropout in the convolutional layers (3.02% to 2.55%) is worth noting. One may have presumed that since the convolutional layers don't have a lot of parameters, overfitting is not a problem and therefore dropout would not have much effect. However, dropout in the lower layers still helps because it provides noisy inputs for the higher fully connected layers which prevents them from overfitting.

6.1.3 CIFAR-10 and CIFAR-100

The CIFAR-10 and CIFAR-100 data sets consist of 32 x 32 color images drawn from 10 and 100 categories respectively. Figure 5b shows some examples of images from this data set. A detailed description of the data sets, input preprocessing, network architectures and other experimental details is given in Appendix B.3. Table 4 shows the error rate obtained by different methods on these data sets. Without any data augmentation, Snoek et al. (2012) used Bayesian hyperparameter optimization to obtain an error rate of 14.98% on CIFAR-10. Using dropout in the fully connected layers reduces that to 14.32% and adding dropout in every layer further reduces the error to 12.61%. Goodfellow et al. (2013) showed that the error is further reduced to 11.68% by replacing ReLU units with maxout units. On CIFAR-100, dropout reduces the error from 43.48% to 37.20% which is a huge improvement. No data augmentation was used for either data set (apart from the input dropout).

Figure 5: Samples from image data sets. Each row corresponds to a different category. (a) Street View House Numbers (SVHN). (b) CIFAR-10.

Table 4: Error rates on CIFAR-10 and CIFAR-100.
Method | CIFAR-10 | CIFAR-100
Conv Net + max pooling (hand tuned) | |
Conv Net + stochastic pooling (Zeiler and Fergus, 2013) | |
Conv Net + max pooling (Snoek et al., 2012) | 14.98 |
Conv Net + max pooling + dropout fully connected layers | 14.32 |
Conv Net + max pooling + dropout in all layers | 12.61 | 37.20
Conv Net + maxout (Goodfellow et al., 2013) | 11.68 |

6.1.4 ImageNet

ImageNet is a data set of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. Starting in 2010, as part of the Pascal Visual Object Challenge, an annual competition called the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has been held. A subset of ImageNet with roughly 1000 images in each of 1000 categories is used in this challenge. Since the number of categories is rather large, it is conventional to report two error rates: top-1 and top-5, where the top-5 error rate is the fraction of test images for which the correct label is not among the five labels considered most probable by the model. Figure 6 shows some predictions made by our model on a few test images.

ILSVRC-2010 is the only version of ILSVRC for which the test set labels are available, so most of our experiments were performed on this data set. Table 5 compares the performance of different methods. Convolutional nets with dropout outperform other methods by a large margin. The architecture and implementation details are described in detail in Krizhevsky et al. (2012).

Figure 6: Some ImageNet test cases with the 4 most probable labels as predicted by our model. The length of the horizontal bars is proportional to the probability assigned to the labels by the model. Pink indicates ground truth.

Table 5: Results on the ILSVRC-2010 test set.
Model | Top-1 | Top-5
Sparse Coding (Lin et al., 2010) | |
SIFT + Fisher Vectors (Sanchez and Perronnin, 2011) | |
Conv Net + dropout (Krizhevsky et al., 2012) | |

Table 6: Results on the ILSVRC-2012 validation/test set.
Model | Top-1 (val) | Top-5 (val) | Top-5 (test)
SVM on Fisher Vectors of Dense SIFT and Color Statistics | | |
Avg of classifiers over FVs of SIFT, LBP, GIST and CSIFT | | |
Conv Net + dropout (Krizhevsky et al., 2012) | | |
Avg of 5 Conv Nets + dropout (Krizhevsky et al., 2012) | | |

Our model based on convolutional nets and dropout won the ILSVRC-2012 competition. Since the labels for the test set are not available, we report our results on the test set for the final submission and include the validation set results for different variations of our model. Table 6 shows the results from the competition. While the best methods based on standard vision features achieve a top-5 error rate of about 26%, convolutional nets with dropout achieve a test error of about 16% which is a staggering difference. Figure 6 shows some examples of predictions made by our model. We can see that the model makes very reasonable predictions, even when its best guess is not correct.

6.2 Results on TIMIT

Next, we applied dropout to a speech recognition task. We use the TIMIT data set which consists of recordings from 680 speakers covering 8 major dialects of American English reading ten phonetically-rich sentences in a controlled noise-free environment. Dropout neural networks were trained on windows of 21 log-filter bank frames to predict the label of the central frame. No speaker dependent operations were performed. Appendix B.4 describes the data preprocessing and training details. Table 7 compares dropout neural nets with other models.

A 6-layer net gives a phone error rate of 23.4%. Dropout further improves it to 21.8%. We also trained dropout nets starting from pretrained weights. A 4-layer net pretrained with a stack of RBMs gets a phone error rate of 22.7%. With dropout, this reduces to 19.7%. Similarly, for an 8-layer net the error reduces from 20.5% to 19.7%.

Table 7: Phone error rate on the TIMIT core test set.
Method | Phone Error Rate %
NN (6 layers) (Mohamed et al., 2010) | 23.4
Dropout NN (6 layers) | 21.8
DBN-pretrained NN (4 layers) | 22.7
DBN-pretrained NN (6 layers) (Mohamed et al., 2010) | 22.4
DBN-pretrained NN (8 layers) (Mohamed et al., 2010) | 20.7
mcRBM-DBN-pretrained NN (5 layers) (Dahl et al., 2010) | 20.5
DBN-pretrained NN (4 layers) + dropout | 19.7
DBN-pretrained NN (8 layers) + dropout | 19.7

6.3 Results on a Text Data Set

To test the usefulness of dropout in the text domain, we used dropout networks to train a document classifier. We used a subset of the Reuters-RCV1 data set which is a collection of over 800,000 newswire articles from Reuters. These articles cover a variety of topics. The task is to take a bag of words representation of a document and classify it into 50 disjoint topics. Appendix B.5 describes the setup in more detail. Our best neural net which did not use dropout obtained an error rate of 31.05%. Adding dropout reduced the error to 29.62%. We found that the improvement was much smaller compared to that for the vision and speech data sets.

6.4 Comparison with Bayesian Neural Networks

Dropout can be seen as a way of doing an equally-weighted averaging of exponentially many models with shared weights. On the other hand, Bayesian neural networks (Neal, 1996) are the proper way of doing model averaging over the space of neural network structures and parameters. In dropout, each model is weighted equally, whereas in a Bayesian neural network each model is weighted taking into account the prior and how well the model fits the data, which is the more correct approach. Bayesian neural nets are extremely useful for solving problems in domains where data is scarce such as medical diagnosis, genetics, drug discovery and other computational biology applications. However, Bayesian neural nets are slow to train and difficult to scale to very large network sizes. Besides, it is expensive to get predictions from many large nets at test time. On the other hand, dropout neural nets are much faster to train and use at test time. In this section, we report experiments that compare Bayesian neural nets with dropout neural nets on a small data set where Bayesian neural networks are known to perform well and obtain state-of-the-art results. The aim is to analyze how much dropout loses compared to Bayesian neural nets.

The data set that we use (Xiong et al., 2011) comes from the domain of genetics. The task is to predict the occurrence of alternative splicing based on RNA features. Alternative splicing is a significant cause of cellular diversity in mammalian tissues. Predicting the occurrence of alternative splicing in certain tissues under different conditions is important for understanding many human diseases.

Given the RNA features, the task is to predict the probability of three splicing related events that biologists care about. The evaluation metric is Code Quality which is a measure of the negative KL divergence between the target and the predicted probability distributions (higher is better). Appendix B.6 includes a detailed description of the data set and this performance metric.

Table 8 summarizes the performance of different models on this data set. Xiong et al. (2011) used Bayesian neural nets for this task. As expected, we found that Bayesian neural nets perform better than dropout. However, we see that dropout improves significantly upon the performance of standard neural nets and outperforms all other methods. The challenge in this data set is to prevent overfitting since the size of the training set is small. One way to prevent overfitting is to reduce the input dimensionality using PCA. Thereafter, standard techniques such as SVMs or logistic regression can be used. However, with dropout we were able to prevent overfitting without the need to do dimensionality reduction. The dropout nets are very large (1000s of hidden units) compared to a few tens of units in the Bayesian network. This shows that dropout has a strong regularizing effect.

Table 8: Results on the Alternative Splicing Data Set.
Method | Code Quality (bits)
Neural Network (early stopping) (Xiong et al., 2011) | 440
Regression, PCA (Xiong et al., 2011) | 463
SVM, PCA (Xiong et al., 2011) | 487
Neural Network with dropout | 567
Bayesian Neural Network (Xiong et al., 2011) | 623

6.5 Comparison with Standard Regularizers

Several regularization methods have been proposed for preventing overfitting in neural networks. These include L2 weight decay (more generally Tikhonov regularization (Tikhonov, 1943)), lasso (Tibshirani, 1996), KL-sparsity and max-norm regularization. Dropout can be seen as another way of regularizing neural networks. In this section we compare dropout with some of these regularization methods using the MNIST data set. The same network architecture with ReLUs was trained using stochastic gradient descent with different regularizations. Table 9 shows the results. The values of different hyperparameters associated with each kind of regularization (decay constants, target sparsity, dropout rate, max-norm upper bound) were obtained using a validation set. We found that dropout combined with max-norm regularization gives the lowest generalization error.

Table 9: Comparison of different regularization methods on MNIST.
Method | Test Classification error %
L2 |
L2 + L1 applied towards the end of training | 1.60
L2 + KL-sparsity | 1.55
Max-norm | 1.35
Dropout + L2 |
Dropout + Max-norm | 1.05

7. Salient Features

The experiments described in the previous section provide strong evidence that dropout is a useful technique for improving neural networks. In this section, we closely examine how dropout affects a neural network. We analyze the effect of dropout on the quality of features produced. We see how dropout affects the sparsity of hidden unit activations. We also see how the advantages obtained from dropout vary with the probability of retaining units, size of the network and the size of the training set. These observations give some insight into why dropout works so well.

7.1 Effect on Features

Figure 7: Features learned on MNIST with one hidden layer autoencoders having 256 rectified linear units. (a) Without dropout. (b) Dropout with p = 0.5.

In a standard neural network, the derivative received by each parameter tells it how it should change so the final loss function is reduced, given what all other units are doing. Therefore, units may change in a way that they fix up the mistakes of the other units. This may lead to complex co-adaptations. This in turn leads to overfitting because these co-adaptations do not generalize to unseen data. We hypothesize that for each hidden unit, dropout prevents co-adaptation by making the presence of other hidden units unreliable. Therefore, a hidden unit cannot rely on other specific units to correct its mistakes. It must perform well in a wide variety of different contexts provided by the other hidden units. To observe this effect directly, we look at the first level features learned by neural networks trained on visual tasks with and without dropout.

Figure 7a shows features learned by an autoencoder on MNIST with a single hidden layer of 256 rectified linear units without dropout. Figure 7b shows the features learned by an identical autoencoder which used dropout in the hidden layer with p = 0.5. Both autoencoders had similar test reconstruction errors. However, it is apparent that the features shown in Figure 7a have co-adapted in order to produce good reconstructions. Each hidden unit on its own does not seem to be detecting a meaningful feature. On the other hand, in Figure 7b, the hidden units seem to detect edges, strokes and spots in different parts of the image. This shows that dropout does break up co-adaptations, which is probably the main reason why it leads to lower generalization errors.

7.2 Effect on Sparsity

Figure 8: Effect of dropout on sparsity. ReLUs were used for both models. Left (without dropout): The histogram of mean activations shows that most units have a mean activation of about 2.0. The histogram of activations shows a huge mode away from zero. Clearly, a large fraction of units have high activation. Right (dropout with p = 0.5): The histogram of mean activations shows that most units have a smaller mean activation of about 0.7. The histogram of activations shows a sharp peak at zero. Very few units have high activation.

We found that as a side-effect of doing dropout, the activations of the hidden units become sparse, even when no sparsity inducing regularizers are present. Thus, dropout automatically leads to sparse representations. To observe this effect, we take the autoencoders trained in the previous section and look at the sparsity of hidden unit activations on a random mini-batch taken from the test set. Figure 8a and Figure 8b compare the sparsity for the two models. In a good sparse model, there should only be a few highly activated units for any data case. Moreover, the average activation of any unit across data cases should be low. To assess both of these qualities, we plot two histograms for each model. For each model, the histogram on the left shows the distribution of mean activations of hidden units across the minibatch. The histogram on the right shows the distribution of activations of the hidden units.

Comparing the histograms of activations we can see that fewer hidden units have high activations in Figure 8b compared to Figure 8a, as seen by the significant mass away from zero for the net that does not use dropout. The mean activations are also smaller for the dropout net. The overall mean activation of hidden units is close to 2.0 for the autoencoder without dropout but drops to around 0.7 when dropout is used.
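A sketch of how these two histograms and the overall mean activation could be computed from a matrix of hidden activations (the function name and the toy activations are assumptions for illustration, not the authors' analysis code):

```python
import numpy as np

def sparsity_summary(activations, bins=20):
    """Given a (num_cases, num_units) matrix of hidden activations on a
    mini-batch, return the histogram of per-unit mean activations, the
    histogram of all activations, and the overall mean activation."""
    mean_per_unit = activations.mean(axis=0)
    mean_hist, _ = np.histogram(mean_per_unit, bins=bins)
    act_hist, _ = np.histogram(activations.ravel(), bins=bins)
    return mean_hist, act_hist, activations.mean()

# Toy usage with random ReLU-like activations standing in for a real model.
rng = np.random.default_rng(0)
acts = np.maximum(rng.normal(0.5, 1.0, size=(100, 256)), 0)
_, _, overall = sparsity_summary(acts)
print(f"overall mean activation: {overall:.2f}")
```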

7.3 Effect of Dropout Rate

Dropout has a tunable hyperparameter p (the probability of retaining a unit in the network). In this section, we explore the effect of varying this hyperparameter. The comparison is done in two situations:

1. The number of hidden units is held constant.
2. The number of hidden units is changed so that the expected number of hidden units that will be retained after dropout is held constant.

In the first case, we train the same network architecture with different amounts of dropout. No input dropout was used. Figure 9a shows the test error obtained as a function of p. If the architecture is held constant, having a small p means very few units will turn on during training. It can be seen that this has led to underfitting since the training error is also high. We see that as p increases, the error goes down. It becomes flat when 0.4 ≤ p ≤ 0.8 and then increases as p becomes close to 1.

Figure 9: Effect of changing dropout rates on MNIST. (a) Keeping n fixed. (b) Keeping pn fixed.

Another interesting setting is the second case in which the quantity pn is held constant, where n is the number of hidden units in any particular layer. This means that networks that have small p will have a large number of hidden units. Therefore, after applying dropout, the expected number of units that are present will be the same across different architectures. However, the test networks will be of different sizes. In our experiments, we set pn = 256 for the first two hidden layers and pn = 512 for the last hidden layer. Figure 9b shows the test error obtained as a function of p. We notice that the magnitude of errors for small values of p has reduced by a lot compared to Figure 9a (for p = 0.1 it fell from 2.7% to 1.7%). Values of p that are close to 0.6 seem to perform best for this choice of pn, but our usual default value of 0.5 is close to optimal.
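A small sketch of how layer widths could be chosen in this second setting; the target values follow the pn = 256/512 choice above, while the rounding rule is an assumption made for the example:

```python
# For each dropout rate p, pick layer widths n so that the expected number of
# retained units p * n stays fixed across architectures.
targets = [256, 256, 512]  # desired expected retained units per hidden layer
for p in [0.1, 0.25, 0.5, 0.75, 1.0]:
    widths = [round(t / p) for t in targets]
    print(f"p = {p:.2f} -> hidden layer widths {widths}")
```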

7.4 Effect of Data Set Size

One test of a good regularizer is that it should make it possible to get good generalization error from models with a large number of parameters trained on small data sets. This section explores the effect of changing the data set size when dropout is used with feed-forward networks. Huge neural networks trained in the standard way overfit massively on small data sets. To see if dropout can help, we run classification experiments on MNIST and vary the amount of data given to the network. The results of these experiments are shown in Figure 10. The network was given data sets of size 100, 500, 1K, 5K, 10K and 50K chosen randomly from the MNIST training set. The same network architecture was used for all data sets. Dropout with p = 0.5 was performed at all the hidden layers and p = 0.8 at the input layer.

Figure 10: Effect of varying data set size.

It can be observed that for extremely small data sets (100, 500) dropout does not give any improvements. The model has enough parameters that it can overfit on the training data, even with all the noise coming from dropout. As the size of the data set is increased, the gain from doing dropout increases up to a point and then declines. This suggests that for any given architecture and dropout rate, there is a "sweet spot" corresponding to some amount of data that is large enough to not be memorized in spite of the noise but not so large that overfitting is not a problem anyways.

7.5 Monte-Carlo Model Averaging vs. Weight Scaling

The efficient test time procedure that we propose is to do an approximate model combination by scaling down the weights of the trained neural network. An expensive but more correct way of averaging the models is to sample k neural nets using dropout for each test case and average their predictions. As k → ∞, this Monte-Carlo model average gets close to the true model average. It is interesting to see empirically how many samples k are needed to match the performance of the approximate averaging method. By computing the error for different values of k we can see how quickly the error rate of the finite-sample average approaches the error rate of the true model average.

Figure 11: Monte-Carlo model averaging vs. weight scaling.

We again use the MNIST data set and do classification by averaging the predictions of k randomly sampled neural networks. Figure 11 shows the test error rate obtained for different values of k. This is compared with the error obtained using the weight scaling method (shown as a horizontal line). It can be seen that around k = 50, the Monte-Carlo method becomes as good as the approximate method. Thereafter, the Monte-Carlo method is slightly better than the approximate method but well within one standard deviation of it. This suggests that the weight scaling method is a fairly good approximation of the true model average.
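The sketch below contrasts the two test-time procedures on a toy one-hidden-layer softmax network; the weights, data and the helper name net are illustrative assumptions, dropout is applied only to the hidden layer, and this is not the experiment from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def net(x, weights, p, mask_rng=None):
    """One-hidden-layer softmax net. If mask_rng is given, sample a dropout
    mask on the hidden layer (Monte-Carlo mode); otherwise scale the hidden
    unit's outgoing weights by p (weight-scaling mode)."""
    (W1, b1), (W2, b2) = weights
    h = np.maximum(x @ W1 + b1, 0)
    if mask_rng is not None:
        h = h * mask_rng.binomial(1, p, size=h.shape)
        z = h @ W2 + b2
    else:
        z = h @ (p * W2) + b2
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy weights and data (illustrative only).
weights = [(rng.normal(0, 0.1, (20, 64)), np.zeros(64)),
           (rng.normal(0, 0.1, (64, 5)), np.zeros(5))]
x = rng.normal(size=(8, 20))
p = 0.5

approx = net(x, weights, p)                                    # weight scaling
k = 50
mc = np.mean([net(x, weights, p, mask_rng=rng) for _ in range(k)], axis=0)
print(np.abs(mc - approx).max())   # compare the two test-time predictions
```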

8. Dropout Restricted Boltzmann Machines

Besides feed-forward neural networks, dropout can also be applied to Restricted Boltzmann Machines (RBM). In this section, we formally describe this model and show some results to illustrate its key properties.

8.1 Model Description

Consider an RBM with visible units v ∈ {0, 1}^D and hidden units h ∈ {0, 1}^F. It defines the following probability distribution

    P(h, v; θ) = (1/Z(θ)) exp(v⊤Wh + a⊤h + b⊤v),

where θ = {W, a, b} represents the model parameters and Z is the partition function.

Dropout RBMs are RBMs augmented with a vector of binary random variables r ∈ {0, 1}^F. Each random variable r_j takes the value 1 with probability p, independent of others. If r_j takes the value 1, the hidden unit h_j is retained, otherwise it is dropped from the model. The joint distribution defined by a Dropout RBM can be expressed as

    P(r, h, v; p, θ) = P(r; p) P(h, v | r; θ),
    P(r; p) = ∏_{j=1}^{F} p^{r_j} (1 - p)^{1 - r_j},
    P(h, v | r; θ) = (1/Z'(θ, r)) exp(v⊤Wh + a⊤h + b⊤v) ∏_{j=1}^{F} g(h_j, r_j),
    g(h_j, r_j) = 1(r_j = 1) + 1(r_j = 0) 1(h_j = 0).

Z'(θ, r) is the normalization constant. g(h_j, r_j) imposes the constraint that if r_j = 0, h_j must be 0. The distribution over h, conditioned on v and r, is factorial

    P(h | r, v) = ∏_{j=1}^{F} P(h_j | r_j, v),
    P(h_j = 1 | r_j, v) = 1(r_j = 1) σ(b_j + Σ_i W_{ij} v_i).

The distribution over v conditioned on h is the same as that of an RBM

    P(v | h) = ∏_{i=1}^{D} P(v_i | h),
    P(v_i = 1 | h) = σ(a_i + Σ_j W_{ij} h_j).

Conditioned on r, the distribution over {v, h} is the same as the distribution that an RBM would impose, except that the units for which r_j = 0 are dropped from h. Therefore, the Dropout RBM model can be seen as a mixture of exponentially many RBMs with shared weights, each using a different subset of h.

8.2 Learning Dropout RBMs

Learning algorithms developed for RBMs such as Contrastive Divergence (Hinton et al., 2006) can be directly applied for learning Dropout RBMs. The only difference is that r is first sampled and only the hidden units that are retained are used for training. Similar to dropout neural networks, a different r is sampled for each training case in every minibatch. In our experiments, we use CD-1 for training dropout RBMs.
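A minimal sketch of one such CD-1 step with a dropout mask over the hidden units (illustrative shapes, variable names and learning rate; not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_dropout_step(v0, W, vbias, hbias, p=0.5, lr=0.1):
    """One CD-1 update for a binary RBM in which a dropout mask r over the
    hidden units is sampled first (one mask per training case) and dropped
    units are clamped to zero, following Section 8.2."""
    r = rng.binomial(1, p, size=(v0.shape[0], hbias.shape[0]))
    h0_prob = r * sigmoid(v0 @ W + hbias)     # P(h_j = 1 | r_j, v); dropped units stay 0
    h0 = rng.binomial(1, h0_prob)
    v1_prob = sigmoid(h0 @ W.T + vbias)       # reconstruct the visible units
    v1 = rng.binomial(1, v1_prob)
    h1_prob = r * sigmoid(v1 @ W + hbias)
    # Contrastive Divergence gradient estimates, restricted to retained units.
    W += lr * (v0.T @ h0_prob - v1.T @ h1_prob) / v0.shape[0]
    vbias += lr * (v0 - v1).mean(axis=0)
    hbias += lr * (h0_prob - h1_prob).mean(axis=0)
    return W, vbias, hbias

# Toy usage on random binary data.
D, F = 20, 16
W = rng.normal(0, 0.01, (D, F))
vbias, hbias = np.zeros(D), np.zeros(F)
v_batch = rng.binomial(1, 0.3, size=(32, D)).astype(float)
W, vbias, hbias = cd1_dropout_step(v_batch, W, vbias, hbias, p=0.5)
```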

8.3 Effect on Features

Dropout in feed-forward networks improved the quality of features by reducing co-adaptations. This section explores whether this effect transfers to Dropout RBMs as well.

Figure 12: Features learned on MNIST by 256 hidden unit RBMs. The features are ordered by L2 norm. (a) Without dropout. (b) Dropout with p = 0.5.

Figure 12a shows features learned by a binary RBM with 256 hidden units. Figure 12b shows features learned by a dropout RBM with the same number of hidden units.


More information

An Enhanced Super-Resolution System with Improved Image Registration, Automatic Image Selection, and Image Enhancement

An Enhanced Super-Resolution System with Improved Image Registration, Automatic Image Selection, and Image Enhancement An Enhanced Super-Resoluton System wth Improved Image Regstraton, Automatc Image Selecton, and Image Enhancement Yu-Chuan Kuo ( ), Chen-Yu Chen ( ), and Chou-Shann Fuh ( ) Department of Computer Scence

More information

ECE544NA Final Project: Robust Machine Learning Hardware via Classifier Ensemble

ECE544NA Final Project: Robust Machine Learning Hardware via Classifier Ensemble 1 ECE544NA Fnal Project: Robust Machne Learnng Hardware va Classfer Ensemble Sa Zhang, szhang12@llnos.edu Dept. of Electr. & Comput. Eng., Unv. of Illnos at Urbana-Champagn, Urbana, IL, USA Abstract In

More information

Project Networks With Mixed-Time Constraints

Project Networks With Mixed-Time Constraints Project Networs Wth Mxed-Tme Constrants L Caccetta and B Wattananon Western Australan Centre of Excellence n Industral Optmsaton (WACEIO) Curtn Unversty of Technology GPO Box U1987 Perth Western Australa

More information

On the Optimal Control of a Cascade of Hydro-Electric Power Stations

On the Optimal Control of a Cascade of Hydro-Electric Power Stations On the Optmal Control of a Cascade of Hydro-Electrc Power Statons M.C.M. Guedes a, A.F. Rbero a, G.V. Smrnov b and S. Vlela c a Department of Mathematcs, School of Scences, Unversty of Porto, Portugal;

More information

Can Auto Liability Insurance Purchases Signal Risk Attitude?

Can Auto Liability Insurance Purchases Signal Risk Attitude? Internatonal Journal of Busness and Economcs, 2011, Vol. 10, No. 2, 159-164 Can Auto Lablty Insurance Purchases Sgnal Rsk Atttude? Chu-Shu L Department of Internatonal Busness, Asa Unversty, Tawan Sheng-Chang

More information

Adaptive Fractal Image Coding in the Frequency Domain

Adaptive Fractal Image Coding in the Frequency Domain PROCEEDINGS OF INTERNATIONAL WORKSHOP ON IMAGE PROCESSING: THEORY, METHODOLOGY, SYSTEMS AND APPLICATIONS 2-22 JUNE,1994 BUDAPEST,HUNGARY Adaptve Fractal Image Codng n the Frequency Doman K AI UWE BARTHEL

More information

A hybrid global optimization algorithm based on parallel chaos optimization and outlook algorithm

A hybrid global optimization algorithm based on parallel chaos optimization and outlook algorithm Avalable onlne www.ocpr.com Journal of Chemcal and Pharmaceutcal Research, 2014, 6(7):1884-1889 Research Artcle ISSN : 0975-7384 CODEN(USA) : JCPRC5 A hybrd global optmzaton algorthm based on parallel

More information

Performance Analysis and Coding Strategy of ECOC SVMs

Performance Analysis and Coding Strategy of ECOC SVMs Internatonal Journal of Grd and Dstrbuted Computng Vol.7, No. (04), pp.67-76 http://dx.do.org/0.457/jgdc.04.7..07 Performance Analyss and Codng Strategy of ECOC SVMs Zhgang Yan, and Yuanxuan Yang, School

More information

Bayesian Network Based Causal Relationship Identification and Funding Success Prediction in P2P Lending

Bayesian Network Based Causal Relationship Identification and Funding Success Prediction in P2P Lending Proceedngs of 2012 4th Internatonal Conference on Machne Learnng and Computng IPCSIT vol. 25 (2012) (2012) IACSIT Press, Sngapore Bayesan Network Based Causal Relatonshp Identfcaton and Fundng Success

More information

CHAPTER 5 RELATIONSHIPS BETWEEN QUANTITATIVE VARIABLES

CHAPTER 5 RELATIONSHIPS BETWEEN QUANTITATIVE VARIABLES CHAPTER 5 RELATIONSHIPS BETWEEN QUANTITATIVE VARIABLES In ths chapter, we wll learn how to descrbe the relatonshp between two quanttatve varables. Remember (from Chapter 2) that the terms quanttatve varable

More information

MAPP. MERIS level 3 cloud and water vapour products. Issue: 1. Revision: 0. Date: 9.12.1998. Function Name Organisation Signature Date

MAPP. MERIS level 3 cloud and water vapour products. Issue: 1. Revision: 0. Date: 9.12.1998. Function Name Organisation Signature Date Ttel: Project: Doc. No.: MERIS level 3 cloud and water vapour products MAPP MAPP-ATBD-ClWVL3 Issue: 1 Revson: 0 Date: 9.12.1998 Functon Name Organsaton Sgnature Date Author: Bennartz FUB Preusker FUB Schüller

More information

) of the Cell class is created containing information about events associated with the cell. Events are added to the Cell instance

) of the Cell class is created containing information about events associated with the cell. Events are added to the Cell instance Calbraton Method Instances of the Cell class (one nstance for each FMS cell) contan ADC raw data and methods assocated wth each partcular FMS cell. The calbraton method ncludes event selecton (Class Cell

More information

Gender Classification for Real-Time Audience Analysis System

Gender Classification for Real-Time Audience Analysis System Gender Classfcaton for Real-Tme Audence Analyss System Vladmr Khryashchev, Lev Shmaglt, Andrey Shemyakov, Anton Lebedev Yaroslavl State Unversty Yaroslavl, Russa vhr@yandex.ru, shmaglt_lev@yahoo.com, andrey.shemakov@gmal.com,

More information

Joint Scheduling of Processing and Shuffle Phases in MapReduce Systems

Joint Scheduling of Processing and Shuffle Phases in MapReduce Systems Jont Schedulng of Processng and Shuffle Phases n MapReduce Systems Fangfe Chen, Mural Kodalam, T. V. Lakshman Department of Computer Scence and Engneerng, The Penn State Unversty Bell Laboratores, Alcatel-Lucent

More information

Mining Feature Importance: Applying Evolutionary Algorithms within a Web-based Educational System

Mining Feature Importance: Applying Evolutionary Algorithms within a Web-based Educational System Mnng Feature Importance: Applyng Evolutonary Algorthms wthn a Web-based Educatonal System Behrouz MINAEI-BIDGOLI 1, and Gerd KORTEMEYER 2, and Wllam F. PUNCH 1 1 Genetc Algorthms Research and Applcatons

More information

IMPACT ANALYSIS OF A CELLULAR PHONE

IMPACT ANALYSIS OF A CELLULAR PHONE 4 th ASA & μeta Internatonal Conference IMPACT AALYSIS OF A CELLULAR PHOE We Lu, 2 Hongy L Bejng FEAonlne Engneerng Co.,Ltd. Bejng, Chna ABSTRACT Drop test smulaton plays an mportant role n nvestgatng

More information

A study on the ability of Support Vector Regression and Neural Networks to Forecast Basic Time Series Patterns

A study on the ability of Support Vector Regression and Neural Networks to Forecast Basic Time Series Patterns A study on the ablty of Support Vector Regresson and Neural Networks to Forecast Basc Tme Seres Patterns Sven F. Crone, Jose Guajardo 2, and Rchard Weber 2 Lancaster Unversty, Department of Management

More information

Statistical Methods to Develop Rating Models

Statistical Methods to Develop Rating Models Statstcal Methods to Develop Ratng Models [Evelyn Hayden and Danel Porath, Österrechsche Natonalbank and Unversty of Appled Scences at Manz] Source: The Basel II Rsk Parameters Estmaton, Valdaton, and

More information

Using Mixture Covariance Matrices to Improve Face and Facial Expression Recognitions

Using Mixture Covariance Matrices to Improve Face and Facial Expression Recognitions Usng Mxture Covarance Matrces to Improve Face and Facal Expresson Recogntons Carlos E. homaz, Duncan F. Glles and Raul Q. Fetosa 2 Imperal College of Scence echnology and Medcne, Department of Computng,

More information

Clustering Gene Expression Data. (Slides thanks to Dr. Mark Craven)

Clustering Gene Expression Data. (Slides thanks to Dr. Mark Craven) Clusterng Gene Epresson Data Sldes thanks to Dr. Mark Craven Gene Epresson Proles we ll assume we have a D matr o gene epresson measurements rows represent genes columns represent derent eperments tme

More information

Robust Design of Public Storage Warehouses. Yeming (Yale) Gong EMLYON Business School

Robust Design of Public Storage Warehouses. Yeming (Yale) Gong EMLYON Business School Robust Desgn of Publc Storage Warehouses Yemng (Yale) Gong EMLYON Busness School Rene de Koster Rotterdam school of management, Erasmus Unversty Abstract We apply robust optmzaton and revenue management

More information

Descriptive Models. Cluster Analysis. Example. General Applications of Clustering. Examples of Clustering Applications

Descriptive Models. Cluster Analysis. Example. General Applications of Clustering. Examples of Clustering Applications CMSC828G Prncples of Data Mnng Lecture #9 Today s Readng: HMS, chapter 9 Today s Lecture: Descrptve Modelng Clusterng Algorthms Descrptve Models model presents the man features of the data, a global summary

More information

The covariance is the two variable analog to the variance. The formula for the covariance between two variables is

The covariance is the two variable analog to the variance. The formula for the covariance between two variables is Regresson Lectures So far we have talked only about statstcs that descrbe one varable. What we are gong to be dscussng for much of the remander of the course s relatonshps between two or more varables.

More information

A GENETIC ALGORITHM-BASED METHOD FOR CREATING IMPARTIAL WORK SCHEDULES FOR NURSES

A GENETIC ALGORITHM-BASED METHOD FOR CREATING IMPARTIAL WORK SCHEDULES FOR NURSES 82 Internatonal Journal of Electronc Busness Management, Vol. 0, No. 3, pp. 82-93 (202) A GENETIC ALGORITHM-BASED METHOD FOR CREATING IMPARTIAL WORK SCHEDULES FOR NURSES Feng-Cheng Yang * and We-Tng Wu

More information

Activity Scheduling for Cost-Time Investment Optimization in Project Management

Activity Scheduling for Cost-Time Investment Optimization in Project Management PROJECT MANAGEMENT 4 th Internatonal Conference on Industral Engneerng and Industral Management XIV Congreso de Ingenería de Organzacón Donosta- San Sebastán, September 8 th -10 th 010 Actvty Schedulng

More information

Exhaustive Regression. An Exploration of Regression-Based Data Mining Techniques Using Super Computation

Exhaustive Regression. An Exploration of Regression-Based Data Mining Techniques Using Super Computation Exhaustve Regresson An Exploraton of Regresson-Based Data Mnng Technques Usng Super Computaton Antony Daves, Ph.D. Assocate Professor of Economcs Duquesne Unversty Pttsburgh, PA 58 Research Fellow The

More information

Inequality and The Accounting Period. Quentin Wodon and Shlomo Yitzhaki. World Bank and Hebrew University. September 2001.

Inequality and The Accounting Period. Quentin Wodon and Shlomo Yitzhaki. World Bank and Hebrew University. September 2001. Inequalty and The Accountng Perod Quentn Wodon and Shlomo Ytzha World Ban and Hebrew Unversty September Abstract Income nequalty typcally declnes wth the length of tme taen nto account for measurement.

More information

Analysis of Premium Liabilities for Australian Lines of Business

Analysis of Premium Liabilities for Australian Lines of Business Summary of Analyss of Premum Labltes for Australan Lnes of Busness Emly Tao Honours Research Paper, The Unversty of Melbourne Emly Tao Acknowledgements I am grateful to the Australan Prudental Regulaton

More information

Efficient Striping Techniques for Variable Bit Rate Continuous Media File Servers æ

Efficient Striping Techniques for Variable Bit Rate Continuous Media File Servers æ Effcent Strpng Technques for Varable Bt Rate Contnuous Meda Fle Servers æ Prashant J. Shenoy Harrck M. Vn Department of Computer Scence, Department of Computer Scences, Unversty of Massachusetts at Amherst

More information

GRAVITY DATA VALIDATION AND OUTLIER DETECTION USING L 1 -NORM

GRAVITY DATA VALIDATION AND OUTLIER DETECTION USING L 1 -NORM GRAVITY DATA VALIDATION AND OUTLIER DETECTION USING L 1 -NORM BARRIOT Jean-Perre, SARRAILH Mchel BGI/CNES 18.av.E.Beln 31401 TOULOUSE Cedex 4 (France) Emal: jean-perre.barrot@cnes.fr 1/Introducton The

More information

Recurrence. 1 Definitions and main statements

Recurrence. 1 Definitions and main statements Recurrence 1 Defntons and man statements Let X n, n = 0, 1, 2,... be a MC wth the state space S = (1, 2,...), transton probabltes p j = P {X n+1 = j X n = }, and the transton matrx P = (p j ),j S def.

More information

Brigid Mullany, Ph.D University of North Carolina, Charlotte

Brigid Mullany, Ph.D University of North Carolina, Charlotte Evaluaton And Comparson Of The Dfferent Standards Used To Defne The Postonal Accuracy And Repeatablty Of Numercally Controlled Machnng Center Axes Brgd Mullany, Ph.D Unversty of North Carolna, Charlotte

More information

ANALYZING THE RELATIONSHIPS BETWEEN QUALITY, TIME, AND COST IN PROJECT MANAGEMENT DECISION MAKING

ANALYZING THE RELATIONSHIPS BETWEEN QUALITY, TIME, AND COST IN PROJECT MANAGEMENT DECISION MAKING ANALYZING THE RELATIONSHIPS BETWEEN QUALITY, TIME, AND COST IN PROJECT MANAGEMENT DECISION MAKING Matthew J. Lberatore, Department of Management and Operatons, Vllanova Unversty, Vllanova, PA 19085, 610-519-4390,

More information

Latent Class Regression. Statistics for Psychosocial Research II: Structural Models December 4 and 6, 2006

Latent Class Regression. Statistics for Psychosocial Research II: Structural Models December 4 and 6, 2006 Latent Class Regresson Statstcs for Psychosocal Research II: Structural Models December 4 and 6, 2006 Latent Class Regresson (LCR) What s t and when do we use t? Recall the standard latent class model

More information

THE DISTRIBUTION OF LOAN PORTFOLIO VALUE * Oldrich Alfons Vasicek

THE DISTRIBUTION OF LOAN PORTFOLIO VALUE * Oldrich Alfons Vasicek HE DISRIBUION OF LOAN PORFOLIO VALUE * Oldrch Alfons Vascek he amount of captal necessary to support a portfolo of debt securtes depends on the probablty dstrbuton of the portfolo loss. Consder a portfolo

More information

Data Visualization by Pairwise Distortion Minimization

Data Visualization by Pairwise Distortion Minimization Communcatons n Statstcs, Theory and Methods 34 (6), 005 Data Vsualzaton by Parwse Dstorton Mnmzaton By Marc Sobel, and Longn Jan Lateck* Department of Statstcs and Department of Computer and Informaton

More information

Big Data Deep Learning: Challenges and Perspectives

Big Data Deep Learning: Challenges and Perspectives Receved Aprl 20, 2014, accepted May 13, 2014, date of publcaton May 16, 2014, date of current verson May 28, 2014. Dgtal Object Identfer 10.1109/ACCESS.2014.2325029 Bg Data Deep Learnng: Challenges and

More information

The Analysis of Covariance. ERSH 8310 Keppel and Wickens Chapter 15

The Analysis of Covariance. ERSH 8310 Keppel and Wickens Chapter 15 The Analyss of Covarance ERSH 830 Keppel and Wckens Chapter 5 Today s Class Intal Consderatons Covarance and Lnear Regresson The Lnear Regresson Equaton TheAnalyss of Covarance Assumptons Underlyng the

More information

Tracking with Non-Linear Dynamic Models

Tracking with Non-Linear Dynamic Models CHAPTER 2 Trackng wth Non-Lnear Dynamc Models In a lnear dynamc model wth lnear measurements, there s always only one peak n the posteror; very small non-lneartes n dynamc models can lead to a substantal

More information

Georey E. Hinton. University oftoronto. Email: zoubin@cs.toronto.edu. Technical Report CRG-TR-96-1. May 21, 1996 (revised Feb 27, 1997) Abstract

Georey E. Hinton. University oftoronto. Email: zoubin@cs.toronto.edu. Technical Report CRG-TR-96-1. May 21, 1996 (revised Feb 27, 1997) Abstract The EM Algorthm for Mxtures of Factor Analyzers Zoubn Ghahraman Georey E. Hnton Department of Computer Scence Unversty oftoronto 6 Kng's College Road Toronto, Canada M5S A4 Emal: zoubn@cs.toronto.edu Techncal

More information

Politecnico di Torino. Porto Institutional Repository

Politecnico di Torino. Porto Institutional Repository Poltecnco d Torno Porto Insttutonal Repostory [Artcle] A cost-effectve cloud computng framework for acceleratng multmeda communcaton smulatons Orgnal Ctaton: D. Angel, E. Masala (2012). A cost-effectve

More information

Power-of-Two Policies for Single- Warehouse Multi-Retailer Inventory Systems with Order Frequency Discounts

Power-of-Two Policies for Single- Warehouse Multi-Retailer Inventory Systems with Order Frequency Discounts Power-of-wo Polces for Sngle- Warehouse Mult-Retaler Inventory Systems wth Order Frequency Dscounts José A. Ventura Pennsylvana State Unversty (USA) Yale. Herer echnon Israel Insttute of echnology (Israel)

More information

Time Series Analysis in Studies of AGN Variability. Bradley M. Peterson The Ohio State University

Time Series Analysis in Studies of AGN Variability. Bradley M. Peterson The Ohio State University Tme Seres Analyss n Studes of AGN Varablty Bradley M. Peterson The Oho State Unversty 1 Lnear Correlaton Degree to whch two parameters are lnearly correlated can be expressed n terms of the lnear correlaton

More information

CHOLESTEROL REFERENCE METHOD LABORATORY NETWORK. Sample Stability Protocol

CHOLESTEROL REFERENCE METHOD LABORATORY NETWORK. Sample Stability Protocol CHOLESTEROL REFERENCE METHOD LABORATORY NETWORK Sample Stablty Protocol Background The Cholesterol Reference Method Laboratory Network (CRMLN) developed certfcaton protocols for total cholesterol, HDL

More information

INVESTIGATION OF VEHICULAR USERS FAIRNESS IN CDMA-HDR NETWORKS

INVESTIGATION OF VEHICULAR USERS FAIRNESS IN CDMA-HDR NETWORKS 21 22 September 2007, BULGARIA 119 Proceedngs of the Internatonal Conference on Informaton Technologes (InfoTech-2007) 21 st 22 nd September 2007, Bulgara vol. 2 INVESTIGATION OF VEHICULAR USERS FAIRNESS

More information

Efficient Project Portfolio as a tool for Enterprise Risk Management

Efficient Project Portfolio as a tool for Enterprise Risk Management Effcent Proect Portfolo as a tool for Enterprse Rsk Management Valentn O. Nkonov Ural State Techncal Unversty Growth Traectory Consultng Company January 5, 27 Effcent Proect Portfolo as a tool for Enterprse

More information

Calculating the high frequency transmission line parameters of power cables

Calculating the high frequency transmission line parameters of power cables < ' Calculatng the hgh frequency transmsson lne parameters of power cables Authors: Dr. John Dcknson, Laboratory Servces Manager, N 0 RW E B Communcatons Mr. Peter J. Ncholson, Project Assgnment Manager,

More information

1 Example 1: Axis-aligned rectangles

1 Example 1: Axis-aligned rectangles COS 511: Theoretcal Machne Learnng Lecturer: Rob Schapre Lecture # 6 Scrbe: Aaron Schld February 21, 2013 Last class, we dscussed an analogue for Occam s Razor for nfnte hypothess spaces that, n conjuncton

More information

Quantization Effects in Digital Filters

Quantization Effects in Digital Filters Quantzaton Effects n Dgtal Flters Dstrbuton of Truncaton Errors In two's complement representaton an exact number would have nfntely many bts (n general). When we lmt the number of bts to some fnte value

More information

Realistic Image Synthesis

Realistic Image Synthesis Realstc Image Synthess - Combned Samplng and Path Tracng - Phlpp Slusallek Karol Myszkowsk Vncent Pegoraro Overvew: Today Combned Samplng (Multple Importance Samplng) Renderng and Measurng Equaton Random

More information

SPEE Recommended Evaluation Practice #6 Definition of Decline Curve Parameters Background:

SPEE Recommended Evaluation Practice #6 Definition of Decline Curve Parameters Background: SPEE Recommended Evaluaton Practce #6 efnton of eclne Curve Parameters Background: The producton hstores of ol and gas wells can be analyzed to estmate reserves and future ol and gas producton rates and

More information

Enterprise Master Patient Index

Enterprise Master Patient Index Enterprse Master Patent Index Healthcare data are captured n many dfferent settngs such as hosptals, clncs, labs, and physcan offces. Accordng to a report by the CDC, patents n the Unted States made an

More information

Enriching the Knowledge Sources Used in a Maximum Entropy Part-of-Speech Tagger

Enriching the Knowledge Sources Used in a Maximum Entropy Part-of-Speech Tagger Enrchng the Knowledge Sources Used n a Maxmum Entropy Part-of-Speech Tagger Krstna Toutanova Dept of Computer Scence Gates Bldg 4A, 353 Serra Mall Stanford, CA 94305 9040, USA krstna@cs.stanford.edu Chrstopher

More information

THE APPLICATION OF DATA MINING TECHNIQUES AND MULTIPLE CLASSIFIERS TO MARKETING DECISION

THE APPLICATION OF DATA MINING TECHNIQUES AND MULTIPLE CLASSIFIERS TO MARKETING DECISION Internatonal Journal of Electronc Busness Management, Vol. 3, No. 4, pp. 30-30 (2005) 30 THE APPLICATION OF DATA MINING TECHNIQUES AND MULTIPLE CLASSIFIERS TO MARKETING DECISION Yu-Mn Chang *, Yu-Cheh

More information

Improved Mining of Software Complexity Data on Evolutionary Filtered Training Sets

Improved Mining of Software Complexity Data on Evolutionary Filtered Training Sets Improved Mnng of Software Complexty Data on Evolutonary Fltered Tranng Sets VILI PODGORELEC Insttute of Informatcs, FERI Unversty of Marbor Smetanova ulca 17, SI-2000 Marbor SLOVENIA vl.podgorelec@un-mb.s

More information

A DATA MINING APPLICATION IN A STUDENT DATABASE

A DATA MINING APPLICATION IN A STUDENT DATABASE JOURNAL OF AERONAUTICS AND SPACE TECHNOLOGIES JULY 005 VOLUME NUMBER (53-57) A DATA MINING APPLICATION IN A STUDENT DATABASE Şenol Zafer ERDOĞAN Maltepe Ünversty Faculty of Engneerng Büyükbakkalköy-Istanbul

More information

Fast Fuzzy Clustering of Web Page Collections

Fast Fuzzy Clustering of Web Page Collections Fast Fuzzy Clusterng of Web Page Collectons Chrstan Borgelt and Andreas Nürnberger Dept. of Knowledge Processng and Language Engneerng Otto-von-Guercke-Unversty of Magdeburg Unverstätsplatz, D-396 Magdeburg,

More information

AN APPOINTMENT ORDER OUTPATIENT SCHEDULING SYSTEM THAT IMPROVES OUTPATIENT EXPERIENCE

AN APPOINTMENT ORDER OUTPATIENT SCHEDULING SYSTEM THAT IMPROVES OUTPATIENT EXPERIENCE AN APPOINTMENT ORDER OUTPATIENT SCHEDULING SYSTEM THAT IMPROVES OUTPATIENT EXPERIENCE Yu-L Huang Industral Engneerng Department New Mexco State Unversty Las Cruces, New Mexco 88003, U.S.A. Abstract Patent

More information

Abstract. Clustering ensembles have emerged as a powerful method for improving both the

Abstract. Clustering ensembles have emerged as a powerful method for improving both the Clusterng Ensembles: {topchyal, Models jan, of punch}@cse.msu.edu Consensus and Weak Parttons * Alexander Topchy, Anl K. Jan, and Wllam Punch Department of Computer Scence and Engneerng, Mchgan State Unversty

More information

ONE of the most crucial problems that every image

ONE of the most crucial problems that every image IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 23, NO. 10, OCTOBER 2014 4413 Maxmum Margn Projecton Subspace Learnng for Vsual Data Analyss Symeon Nktds, Anastasos Tefas, Member, IEEE, and Ioanns Ptas, Fellow,

More information

Lecture 5,6 Linear Methods for Classification. Summary

Lecture 5,6 Linear Methods for Classification. Summary Lecture 5,6 Lnear Methods for Classfcaton Rce ELEC 697 Farnaz Koushanfar Fall 2006 Summary Bayes Classfers Lnear Classfers Lnear regresson of an ndcator matrx Lnear dscrmnant analyss (LDA) Logstc regresson

More information

Staff Paper. Farm Savings Accounts: Examining Income Variability, Eligibility, and Benefits. Brent Gloy, Eddy LaDue, and Charles Cuykendall

Staff Paper. Farm Savings Accounts: Examining Income Variability, Eligibility, and Benefits. Brent Gloy, Eddy LaDue, and Charles Cuykendall SP 2005-02 August 2005 Staff Paper Department of Appled Economcs and Management Cornell Unversty, Ithaca, New York 14853-7801 USA Farm Savngs Accounts: Examnng Income Varablty, Elgblty, and Benefts Brent

More information

Chapter 4 ECONOMIC DISPATCH AND UNIT COMMITMENT

Chapter 4 ECONOMIC DISPATCH AND UNIT COMMITMENT Chapter 4 ECOOMIC DISATCH AD UIT COMMITMET ITRODUCTIO A power system has several power plants. Each power plant has several generatng unts. At any pont of tme, the total load n the system s met by the

More information

1. Introduction. Graham Kendall School of Computer Science and IT ASAP Research Group University of Nottingham Nottingham, NG8 1BB gxk@cs.nott.ac.

1. Introduction. Graham Kendall School of Computer Science and IT ASAP Research Group University of Nottingham Nottingham, NG8 1BB gxk@cs.nott.ac. The Co-evoluton of Tradng Strateges n A Mult-agent Based Smulated Stock Market Through the Integraton of Indvdual Learnng and Socal Learnng Graham Kendall School of Computer Scence and IT ASAP Research

More information

New Approaches to Support Vector Ordinal Regression

New Approaches to Support Vector Ordinal Regression New Approaches to Support Vector Ordnal Regresson We Chu chuwe@gatsby.ucl.ac.uk Gatsby Computatonal Neuroscence Unt, Unversty College London, London, WCN 3AR, UK S. Sathya Keerth selvarak@yahoo-nc.com

More information