Dropout: A Simple Way to Prevent Neural Networks from Overfitting
Journal of Machine Learning Research 15 (2014). Submitted 11/13; Published 6/14

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
Department of Computer Science, University of Toronto, 10 Kings College Road, Rm 3302, Toronto, Ontario, M5S 3G4, Canada.
[email protected] [email protected] [email protected] [email protected] [email protected]

Editor: Yoshua Bengio

Abstract

Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.

Keywords: neural networks, regularization, model combination, deep learning

1. Introduction

Deep neural networks contain multiple non-linear hidden layers and this makes them very expressive models that can learn very complicated relationships between their inputs and outputs. With limited training data, however, many of these complicated relationships will be the result of sampling noise, so they will exist in the training set but not in real test data even if it is drawn from the same distribution. This leads to overfitting, and many methods have been developed for reducing it. These include stopping the training as soon as performance on a validation set starts to get worse, introducing weight penalties of various kinds such as L1 and L2 regularization, and soft weight sharing (Nowlan and Hinton, 1992).

With unlimited computation, the best way to regularize a fixed-sized model is to average the predictions of all possible settings of the parameters, weighting each setting by
its posterior probability given the training data. This can sometimes be approximated quite well for simple or small models (Xiong et al., 2011; Salakhutdinov and Mnih, 2008), but we would like to approach the performance of the Bayesian gold standard using considerably less computation. We propose to do this by approximating an equally weighted geometric mean of the predictions of an exponential number of learned models that share parameters.

Figure 1: Dropout Neural Net Model. Left: A standard neural net with 2 hidden layers. Right: An example of a thinned net produced by applying dropout to the network on the left. Crossed units have been dropped.

Model combination nearly always improves the performance of machine learning methods. With large neural networks, however, the obvious idea of averaging the outputs of many separately trained nets is prohibitively expensive. Combining several models is most helpful when the individual models are different from each other, and in order to make neural net models different, they should either have different architectures or be trained on different data. Training many different architectures is hard because finding optimal hyperparameters for each architecture is a daunting task and training each large network requires a lot of computation. Moreover, large networks normally require large amounts of training data and there may not be enough data available to train different networks on different subsets of the data. Even if one was able to train many different large networks, using them all at test time is infeasible in applications where it is important to respond quickly.

Dropout is a technique that addresses both these issues. It prevents overfitting and provides a way of approximately combining exponentially many different neural network architectures efficiently. The term "dropout" refers to dropping out units (hidden and visible) in a neural network. By dropping a unit out, we mean temporarily removing it from the network, along with all its incoming and outgoing connections, as shown in Figure 1. The choice of which units to drop is random. In the simplest case, each unit is retained with a fixed probability p independent of other units, where p can be chosen using a validation set or can simply be set at 0.5, which seems to be close to optimal for a wide range of networks and tasks. For the input units, however, the optimal probability of retention is usually closer to 1 than to
0.5.

Figure 2: Left: A unit at training time that is present with probability p and is connected to units in the next layer with weights w. Right: At test time, the unit is always present and the weights are multiplied by p. The output at test time is the same as the expected output at training time.

Applying dropout to a neural network amounts to sampling a "thinned" network from it. The thinned network consists of all the units that survived dropout (Figure 1b). A neural net with n units can be seen as a collection of 2^n possible thinned neural networks. These networks all share weights so that the total number of parameters is still O(n^2), or less. For each presentation of each training case, a new thinned network is sampled and trained. So training a neural network with dropout can be seen as training a collection of 2^n thinned networks with extensive weight sharing, where each thinned network gets trained very rarely, if at all.

At test time, it is not feasible to explicitly average the predictions from exponentially many thinned models. However, a very simple approximate averaging method works well in practice. The idea is to use a single neural net at test time without dropout. The weights of this network are scaled-down versions of the trained weights. If a unit is retained with probability p during training, the outgoing weights of that unit are multiplied by p at test time, as shown in Figure 2. This ensures that for any hidden unit the expected output (under the distribution used to drop units at training time) is the same as the actual output at test time. By doing this scaling, 2^n networks with shared weights can be combined into a single neural network to be used at test time. We found that training a network with dropout and using this approximate averaging method at test time leads to significantly lower generalization error on a wide variety of classification problems compared to training with other regularization methods.

The idea of dropout is not limited to feed-forward neural nets. It can be more generally applied to graphical models such as Boltzmann Machines. In this paper, we introduce the dropout Restricted Boltzmann Machine model and compare it to standard Restricted Boltzmann Machines (RBMs). Our experiments show that dropout RBMs are better than standard RBMs in certain respects.

This paper is structured as follows. Section 2 describes the motivation for this idea. Section 3 describes relevant previous work. Section 4 formally describes the dropout model. Section 5 gives an algorithm for training dropout networks. In Section 6, we present our experimental results where we apply dropout to problems in different domains and compare it with other forms of regularization and model combination. Section 7 analyzes the effect of dropout on different properties of a neural network and describes how dropout interacts with the network's hyperparameters. Section 8 describes the Dropout RBM model. In Section 9 we explore the idea of marginalizing dropout. In Appendix A we present a practical guide
for training dropout nets. This includes a detailed analysis of the practical considerations involved in choosing hyperparameters when training dropout networks.

2. Motivation

A motivation for dropout comes from a theory of the role of sex in evolution (Livnat et al., 2010). Sexual reproduction involves taking half the genes of one parent and half of the other, adding a very small amount of random mutation, and combining them to produce an offspring. The asexual alternative is to create an offspring with a slightly mutated copy of the parent's genes. It seems plausible that asexual reproduction should be a better way to optimize individual fitness because a good set of genes that have come to work well together can be passed on directly to the offspring. On the other hand, sexual reproduction is likely to break up these co-adapted sets of genes, especially if these sets are large, and, intuitively, this should decrease the fitness of organisms that have already evolved complicated co-adaptations. However, sexual reproduction is the way most advanced organisms evolved.

One possible explanation for the superiority of sexual reproduction is that, over the long term, the criterion for natural selection may not be individual fitness but rather mix-ability of genes. The ability of a set of genes to be able to work well with another random set of genes makes them more robust. Since a gene cannot rely on a large set of partners to be present at all times, it must learn to do something useful on its own or in collaboration with a small number of other genes. According to this theory, the role of sexual reproduction is not just to allow useful new genes to spread throughout the population, but also to facilitate this process by reducing complex co-adaptations that would reduce the chance of a new gene improving the fitness of an individual. Similarly, each hidden unit in a neural network trained with dropout must learn to work with a randomly chosen sample of other units. This should make each hidden unit more robust and drive it towards creating useful features on its own without relying on other hidden units to correct its mistakes. However, the hidden units within a layer will still learn to do different things from each other. One might imagine that the net would become robust against dropout by making many copies of each hidden unit, but this is a poor solution for exactly the same reason that replica codes are a poor way to deal with a noisy channel.

A closely related, but slightly different motivation for dropout comes from thinking about successful conspiracies. Ten conspiracies each involving five people is probably a better way to create havoc than one big conspiracy that requires fifty people to all play their parts correctly. If conditions do not change and there is plenty of time for rehearsal, a big conspiracy can work well, but with non-stationary conditions, the smaller the conspiracy the greater its chance of still working. Complex co-adaptations can be trained to work well on a training set, but on novel test data they are far more likely to fail than multiple simpler co-adaptations that achieve the same thing.

3. Related Work

Dropout can be interpreted as a way of regularizing a neural network by adding noise to its hidden units. The idea of adding noise to the states of units has previously been used in the context of Denoising Autoencoders (DAEs) by Vincent et al. (2008, 2010), where noise
is added to the input units of an autoencoder and the network is trained to reconstruct the noise-free input. Our work extends this idea by showing that dropout can be effectively applied in the hidden layers as well and that it can be interpreted as a form of model averaging. We also show that adding noise is not only useful for unsupervised feature learning but can also be extended to supervised learning problems. In fact, our method can be applied to other neuron-based architectures, for example, Boltzmann Machines. While 5% noise typically works best for DAEs, we found that our weight scaling procedure applied at test time enables us to use much higher noise levels. Dropping out 20% of the input units and 50% of the hidden units was often found to be optimal.

Since dropout can be seen as a stochastic regularization technique, it is natural to consider its deterministic counterpart, which is obtained by marginalizing out the noise. In this paper, we show that, in simple cases, dropout can be analytically marginalized out to obtain deterministic regularization methods. Recently, van der Maaten et al. (2013) also explored deterministic regularizers corresponding to different exponential-family noise distributions, including dropout (which they refer to as "blankout noise"). However, they apply noise to the inputs and only explore models with no hidden layers. Wang and Manning (2013) proposed a method for speeding up dropout by marginalizing dropout noise. Chen et al. (2012) explored marginalization in the context of denoising autoencoders.

In dropout, we minimize the loss function stochastically under a noise distribution. This can be seen as minimizing an expected loss function. Previous work by Globerson and Roweis (2006) and Dekel et al. (2010) explored an alternate setting where the loss is minimized when an adversary gets to pick which units to drop. Here, instead of a noise distribution, the maximum number of units that can be dropped is fixed. However, this work also does not explore models with hidden units.

4. Model Description

This section describes the dropout neural network model. Consider a neural network with L hidden layers. Let l ∈ {1, ..., L} index the hidden layers of the network. Let z^(l) denote the vector of inputs into layer l and y^(l) denote the vector of outputs from layer l (y^(0) = x is the input). W^(l) and b^(l) are the weights and biases at layer l. The feed-forward operation of a standard neural network (Figure 3a) can be described as (for l ∈ {0, ..., L-1} and any hidden unit i)

    z_i^(l+1) = w_i^(l+1) y^(l) + b_i^(l+1),
    y_i^(l+1) = f(z_i^(l+1)),

where f is any activation function, for example, f(x) = 1/(1 + exp(-x)). With dropout, the feed-forward operation becomes (Figure 3b)

    r_j^(l) ~ Bernoulli(p),
    ỹ^(l) = r^(l) ∗ y^(l),
    z_i^(l+1) = w_i^(l+1) ỹ^(l) + b_i^(l+1),
    y_i^(l+1) = f(z_i^(l+1)).

Figure 3: Comparison of the basic operations of a standard network (a) and a dropout network (b).

Here ∗ denotes an element-wise product. For any layer l, r^(l) is a vector of independent Bernoulli random variables, each of which has probability p of being 1. This vector is sampled and multiplied element-wise with the outputs of that layer, y^(l), to create the thinned outputs ỹ^(l). The thinned outputs are then used as input to the next layer. This process is applied at each layer. This amounts to sampling a sub-network from a larger network. For learning, the derivatives of the loss function are backpropagated through the sub-network. At test time, the weights are scaled as W_test^(l) = p W^(l), as shown in Figure 2. The resulting neural network is used without dropout.
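As an illustration of these two modes of operation, here is a minimal NumPy sketch of the training-time and test-time forward passes. It is not the authors' implementation: the sigmoid activation, the single retention probability p shared by every layer (including the input), and the function names are assumptions made purely for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_train(x, weights, biases, p=0.5, rng=np.random.default_rng(0)):
    """One stochastic forward pass: each layer's outputs are thinned by an
    independent Bernoulli(p) mask before feeding the next layer."""
    y = x
    for W, b in zip(weights, biases):
        r = rng.binomial(1, p, size=y.shape)   # r^(l) ~ Bernoulli(p)
        y_tilde = r * y                        # thinned outputs
        y = sigmoid(y_tilde @ W + b)           # z = w y~ + b, y = f(z)
    return y

def forward_test(x, weights, biases, p=0.5):
    """Deterministic pass with the weights scaled down: W_test = p * W."""
    y = x
    for W, b in zip(weights, biases):
        y = sigmoid(y @ (p * W) + b)
    return y
```

An equivalent "inverted dropout" variant multiplies the retained activations by 1/p at training time and leaves the weights untouched at test time; as noted in Section 10, the two schemes are equivalent up to an appropriate rescaling of learning rates and weight initializations.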
5. Learning Dropout Nets

This section describes a procedure for training dropout neural nets.

5.1 Backpropagation

Dropout neural networks can be trained using stochastic gradient descent in a manner similar to standard neural nets. The only difference is that for each training case in a mini-batch, we sample a thinned network by dropping out units. Forward and backpropagation for that training case are done only on this thinned network. The gradients for each parameter are averaged over the training cases in each mini-batch. Any training case which does not use a parameter contributes a gradient of zero for that parameter. Many methods have been used to improve stochastic gradient descent, such as momentum, annealed learning rates and L2 weight decay. Those were found to be useful for dropout neural networks as well.

One particular form of regularization was found to be especially useful for dropout: constraining the norm of the incoming weight vector at each hidden unit to be upper bounded by a fixed constant c. In other words, if w represents the vector of weights incident on any hidden unit, the neural network was optimized under the constraint ||w||_2 ≤ c. This constraint was imposed during optimization by projecting w onto the surface of a ball of radius c whenever w went out of it. This is also called max-norm regularization, since it implies that the maximum value that the norm of any weight vector can take is c. The constant c is a tunable hyperparameter, which is determined using a validation set. Max-norm regularization has been previously used in the context of collaborative filtering (Srebro and Shraibman, 2005). It typically improves the performance of stochastic gradient descent training of deep neural nets, even when no dropout is used.
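A sketch of how the max-norm constraint can be enforced as a projection applied after each parameter update is shown below. This is an illustrative sketch rather than the authors' code: it assumes the incoming weights of each hidden unit are stored as columns of W, and the value of c is arbitrary.

```python
import numpy as np

def max_norm_project(W, c=3.0, eps=1e-8):
    """Project each hidden unit's incoming weight vector (a column of W)
    back onto the ball of radius c whenever its L2 norm exceeds c."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)   # one norm per hidden unit
    scale = np.minimum(1.0, c / (norms + eps))         # shrink only if norm > c
    return W * scale

# Typical use inside an SGD loop (sketch):
#   W -= learning_rate * grad_W
#   W = max_norm_project(W, c=3.0)
```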
Although dropout alone gives significant improvements, using dropout along with max-norm regularization, large decaying learning rates and high momentum provides a significant boost over just using dropout. A possible justification is that constraining weight vectors to lie inside a ball of fixed radius makes it possible to use a huge learning rate without the possibility of weights blowing up. The noise provided by dropout then allows the optimization process to explore different regions of the weight space that would otherwise have been difficult to reach. As the learning rate decays, the optimization takes shorter steps, thereby doing less exploration, and eventually settles into a minimum.

5.2 Unsupervised Pretraining

Neural networks can be pretrained using stacks of RBMs (Hinton and Salakhutdinov, 2006), autoencoders (Vincent et al., 2010) or Deep Boltzmann Machines (Salakhutdinov and Hinton, 2009). Pretraining is an effective way of making use of unlabeled data. Pretraining followed by finetuning with backpropagation has been shown to give significant performance boosts over finetuning from random initializations in certain cases.

Dropout can be applied to finetune nets that have been pretrained using these techniques. The pretraining procedure stays the same. The weights obtained from pretraining should be scaled up by a factor of 1/p. This makes sure that for each unit, the expected output from it under random dropout will be the same as the output during pretraining. We were initially concerned that the stochastic nature of dropout might wipe out the information in the pretrained weights. This did happen when the learning rates used during finetuning were comparable to the best learning rates for randomly initialized nets. However, when the learning rates were chosen to be smaller, the information in the pretrained weights seemed to be retained and we were able to get improvements in terms of the final generalization error compared to not using dropout when finetuning.

6. Experimental Results

We trained dropout neural networks for classification problems on data sets in different domains. We found that dropout improved generalization performance on all data sets compared to neural networks that did not use dropout. Table 1 gives a brief description of the data sets. The data sets are:

- MNIST: A standard toy data set of handwritten digits.
- TIMIT: A standard speech benchmark for clean speech recognition.
- CIFAR-10 and CIFAR-100: Tiny natural images (Krizhevsky, 2009).
- Street View House Numbers data set (SVHN): Images of house numbers collected by Google Street View (Netzer et al., 2011).
- ImageNet: A large collection of natural images.
- Reuters-RCV1: A collection of Reuters newswire articles.
- Alternative Splicing data set: RNA features for predicting alternative gene splicing (Xiong et al., 2011).

We chose a diverse set of data sets to demonstrate that dropout is a general technique for improving neural nets and is not specific to any particular application domain. In this section, we present some key results that show the effectiveness of dropout. A more detailed description of all the experiments and data sets is provided in Appendix B.

Table 1: Overview of the data sets used in this paper.

Data Set                 Domain    Dimensionality               Training Set   Test Set
MNIST                    Vision    784 (28 x 28 grayscale)      60K            10K
SVHN                     Vision    3072 (32 x 32 color)         600K           26K
CIFAR-10/100             Vision    3072 (32 x 32 color)         60K            10K
ImageNet (ILSVRC-2012)   Vision    (color)                      1.2M           150K
TIMIT                    Speech    2520 (120-dim, 21 frames)    1.1M frames    58K frames
Reuters-RCV1             Text      -                            -              200K
Alternative Splicing     Genetics  -                            -              -

6.1 Results on Image Data Sets

We used five image data sets to evaluate dropout: MNIST, SVHN, CIFAR-10, CIFAR-100 and ImageNet. These data sets include different image types and training set sizes. Models which achieve state-of-the-art results on all of these data sets use dropout.

6.1.1 MNIST

Table 2: Comparison of different models on MNIST.

Method                                                       Unit Type  Architecture               Error %
Standard Neural Net (Simard et al., 2003)                    Logistic   2 layers, 800 units        1.60
SVM Gaussian kernel                                          NA         NA                         1.40
Dropout NN                                                   Logistic   3 layers, 1024 units       1.35
Dropout NN                                                   ReLU       3 layers, 1024 units       1.25
Dropout NN + max-norm constraint                             ReLU       3 layers, 1024 units       1.06
Dropout NN + max-norm constraint                             ReLU       3 layers, 2048 units       1.04
Dropout NN + max-norm constraint                             ReLU       2 layers, 4096 units       1.01
Dropout NN + max-norm constraint                             ReLU       2 layers, 8192 units       0.95
Dropout NN + max-norm constraint (Goodfellow et al., 2013)   Maxout     2 layers, (5 x 240) units  0.94
DBN + finetuning (Hinton and Salakhutdinov, 2006)            Logistic   -                          -
DBM + finetuning (Salakhutdinov and Hinton, 2009)            Logistic   -                          -
DBN + dropout finetuning                                     Logistic   -                          -
DBM + dropout finetuning                                     Logistic   -                          0.79

The MNIST data set consists of 28 x 28 pixel handwritten digit images. The task is to classify the images into 10 digit classes. Table 2 compares the performance of dropout with other techniques. The best performing neural networks for the permutation invariant
setting that do not use dropout or unsupervised pretraining achieve an error of about 1.60% (Simard et al., 2003). With dropout the error reduces to 1.35%. Replacing logistic units with rectified linear units (ReLUs) (Jarrett et al., 2009) further reduces the error to 1.25%. Adding max-norm regularization again reduces it to 1.06%. Increasing the size of the network leads to better results. A neural net with 2 layers and 8192 units per layer gets down to 0.95% error. Note that this network has more than 65 million parameters and is being trained on a data set of size 60,000. Training a network of this size to give good generalization error is very hard with standard regularization methods and early stopping. Dropout, on the other hand, prevents overfitting, even in this case. It does not even need early stopping. Goodfellow et al. (2013) showed that results can be further improved to 0.94% by replacing ReLU units with maxout units. All dropout nets use p = 0.5 for hidden units and p = 0.8 for input units. More experimental details can be found in Appendix B.1.

Dropout nets pretrained with stacks of RBMs and Deep Boltzmann Machines also give improvements as shown in Table 2. DBM-pretrained dropout nets achieve a test error of 0.79%, which is the best performance ever reported for the permutation invariant setting. We note that it is possible to obtain better results by using 2-D spatial information and augmenting the training set with distorted versions of images from the standard training set. We demonstrate the effectiveness of dropout in that setting on more interesting data sets.

In order to test the robustness of dropout, classification experiments were done with networks of many different architectures keeping all hyperparameters, including p, fixed. Figure 4 shows the test error rates obtained for these different architectures as training progresses. The same architectures trained with and without dropout have drastically different test errors, as seen by the two separate clusters of trajectories. Dropout gives a huge improvement across all architectures, without using hyperparameters that were tuned specifically for each architecture.

Figure 4: Test error for different architectures with and without dropout. The networks have 2 to 4 hidden layers each with 1024 to 2048 units.

6.1.2 Street View House Numbers

The Street View House Numbers (SVHN) Data Set (Netzer et al., 2011) consists of color images of house numbers collected by Google Street View. Figure 5a shows some examples of images from this data set. The part of the data set that we use in our experiments consists of 32 x 32 color images roughly centered on a digit in a house number. The task is to identify that digit.

For this data set, we applied dropout to convolutional neural networks (LeCun et al., 1989). The best architecture that we found has three convolutional layers followed by 2 fully connected hidden layers. All hidden units were ReLUs. Each convolutional layer was
followed by a max-pooling layer. Appendix B.2 describes the architecture in more detail. Dropout was applied to all the layers of the network with the probability of retaining a hidden unit being p = (0.9, 0.75, 0.75, 0.5, 0.5, 0.5) for the different layers of the network (going from input to convolutional layers to fully connected layers). Max-norm regularization was used for weights in both convolutional and fully connected layers.

Table 3: Results on the Street View House Numbers data set.

Method                                                                Error %
Binary Features (WDCH) (Netzer et al., 2011)                          36.7
HOG (Netzer et al., 2011)                                             15.0
Stacked Sparse Autoencoders (Netzer et al., 2011)                     10.3
KMeans (Netzer et al., 2011)                                          9.4
Multi-stage Conv Net with average pooling (Sermanet et al., 2012)     9.06
Multi-stage Conv Net + L2 pooling (Sermanet et al., 2012)             5.36
Multi-stage Conv Net + L4 pooling + padding (Sermanet et al., 2012)   4.90
Conv Net + max-pooling                                                3.95
Conv Net + max pooling + dropout in fully connected layers            3.02
Conv Net + stochastic pooling (Zeiler and Fergus, 2013)               2.80
Conv Net + max pooling + dropout in all layers                        2.55
Conv Net + maxout (Goodfellow et al., 2013)                           2.47
Human Performance                                                     2.0

Table 3 compares the results obtained by different methods. We find that convolutional nets outperform other methods. The best performing convolutional nets that do not use dropout achieve an error rate of 3.95%. Adding dropout only to the fully connected layers reduces the error to 3.02%. Adding dropout to the convolutional layers as well further reduces the error to 2.55%. Even more gains can be obtained by using maxout units.

The additional gain in performance obtained by adding dropout in the convolutional layers (3.02% to 2.55%) is worth noting. One may have presumed that since the convolutional layers do not have a lot of parameters, overfitting is not a problem and therefore dropout would not have much effect. However, dropout in the lower layers still helps because it provides noisy inputs for the higher fully connected layers, which prevents them from overfitting.

6.1.3 CIFAR-10 and CIFAR-100

The CIFAR-10 and CIFAR-100 data sets consist of 32 x 32 color images drawn from 10 and 100 categories respectively. Figure 5b shows some examples of images from this data set. A detailed description of the data sets, input preprocessing, network architectures and other experimental details is given in Appendix B.3. Table 4 shows the error rates obtained by different methods on these data sets. Without any data augmentation, Snoek et al. (2012) used Bayesian hyperparameter optimization to obtain an error rate of 14.98% on CIFAR-10. Using dropout in the fully connected layers reduces that to 14.32% and adding dropout in every layer further reduces the error to 12.61%. Goodfellow et al. (2013) showed that the error is further reduced to 11.68% by replacing ReLU units with maxout units. On CIFAR-100, dropout reduces the error from 43.48% to 37.20%, which is a huge improvement. No data augmentation was used for either data set (apart from the input dropout).
Figure 5: Samples from image data sets: (a) Street View House Numbers (SVHN), (b) CIFAR-10. Each row corresponds to a different category.

Table 4: Error rates on CIFAR-10 and CIFAR-100.

Method                                                     CIFAR-10   CIFAR-100
Conv Net + max pooling (hand tuned)                        -          -
Conv Net + stochastic pooling (Zeiler and Fergus, 2013)    -          -
Conv Net + max pooling (Snoek et al., 2012)                14.98      -
Conv Net + max pooling + dropout fully connected layers    14.32      -
Conv Net + max pooling + dropout in all layers             12.61      37.20
Conv Net + maxout (Goodfellow et al., 2013)                11.68      -

6.1.4 ImageNet

ImageNet is a data set of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. Starting in 2010, as part of the Pascal Visual Object Challenge, an annual competition called the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has been held. A subset of ImageNet with roughly 1000 images in each of 1000 categories is used in this challenge. Since the number of categories is rather large, it is conventional to report two error rates: top-1 and top-5, where the top-5 error rate is the fraction of test images for which the correct label is not among the five labels considered most probable by the model. Figure 6 shows some predictions made by our model on a few test images.

ILSVRC-2010 is the only version of ILSVRC for which the test set labels are available, so most of our experiments were performed on this data set. Table 5 compares the performance of different methods. Convolutional nets with dropout outperform other methods by a large margin. The architecture and implementation details are described in detail in Krizhevsky et al. (2012).
Figure 6: Some ImageNet test cases with the 4 most probable labels as predicted by our model. The length of the horizontal bars is proportional to the probability assigned to the labels by the model. Pink indicates ground truth.

Table 5: Results on the ILSVRC-2010 test set, comparing Sparse Coding (Lin et al., 2010), SIFT + Fisher Vectors (Sanchez and Perronnin, 2011) and Conv Net + dropout (Krizhevsky et al., 2012) in terms of top-1 and top-5 error.

Table 6: Results on the ILSVRC-2012 validation/test set, comparing an SVM on Fisher Vectors of Dense SIFT and Color Statistics, an average of classifiers over FVs of SIFT, LBP, GIST and CSIFT, Conv Net + dropout (Krizhevsky et al., 2012), and an average of 5 Conv Nets + dropout (Krizhevsky et al., 2012), in terms of top-1 (val), top-5 (val) and top-5 (test) error.

Our model based on convolutional nets and dropout won the ILSVRC-2012 competition. Since the labels for the test set are not available, we report our results on the test set for the final submission and include the validation set results for different variations of our model. Table 6 shows the results from the competition. While the best methods based on standard vision features achieve a top-5 error rate of about 26%, convolutional nets with dropout achieve a test error of about 16%, which is a staggering difference. Figure 6 shows some examples of predictions made by our model. We can see that the model makes very reasonable predictions, even when its best guess is not correct.

6.2 Results on TIMIT

Next, we applied dropout to a speech recognition task. We use the TIMIT data set, which consists of recordings from 680 speakers covering 8 major dialects of American English reading ten phonetically-rich sentences in a controlled noise-free environment. Dropout neural networks were trained on windows of 21 log-filter bank frames to predict the label of the central frame. No speaker dependent operations were performed. Appendix B.4 describes the data preprocessing and training details. Table 7 compares dropout neural
nets with other models. A 6-layer net gives a phone error rate of 23.4%. Dropout further improves it to 21.8%. We also trained dropout nets starting from pretrained weights. A 4-layer net pretrained with a stack of RBMs gets a phone error rate of 22.7%. With dropout, this reduces to 19.7%. Similarly, for an 8-layer net the error reduces from 20.5% to 19.7%.

Table 7: Phone error rate on the TIMIT core test set.

Method                                                     Phone Error Rate %
NN (6 layers) (Mohamed et al., 2010)                       23.4
Dropout NN (6 layers)                                      21.8
DBN-pretrained NN (4 layers)                               22.7
DBN-pretrained NN (6 layers) (Mohamed et al., 2010)        22.4
DBN-pretrained NN (8 layers) (Mohamed et al., 2010)        20.7
mcRBM-DBN-pretrained NN (5 layers) (Dahl et al., 2010)     20.5
DBN-pretrained NN (4 layers) + dropout                     19.7
DBN-pretrained NN (8 layers) + dropout                     19.7

6.3 Results on a Text Data Set

To test the usefulness of dropout in the text domain, we used dropout networks to train a document classifier. We used a subset of the Reuters-RCV1 data set, which is a collection of over 800,000 newswire articles from Reuters. These articles cover a variety of topics. The task is to take a bag-of-words representation of a document and classify it into 50 disjoint topics. Appendix B.5 describes the setup in more detail. Our best neural net which did not use dropout obtained an error rate of 31.05%. Adding dropout reduced the error to 29.62%. We found that the improvement was much smaller compared to that for the vision and speech data sets.

6.4 Comparison with Bayesian Neural Networks

Dropout can be seen as a way of doing an equally-weighted averaging of exponentially many models with shared weights. On the other hand, Bayesian neural networks (Neal, 1996) are the proper way of doing model averaging over the space of neural network structures and parameters. In dropout, each model is weighted equally, whereas in a Bayesian neural network each model is weighted taking into account the prior and how well the model fits the data, which is the more correct approach. Bayesian neural nets are extremely useful for solving problems in domains where data is scarce, such as medical diagnosis, genetics, drug discovery and other computational biology applications. However, Bayesian neural nets are slow to train and difficult to scale to very large network sizes. Besides, it is expensive to get predictions from many large nets at test time. On the other hand, dropout neural nets are much faster to train and use at test time.

In this section, we report experiments that compare Bayesian neural nets with dropout neural nets on a small data set where Bayesian neural networks are known to perform well and obtain state-of-the-art results. The aim is to analyze how much dropout loses compared to Bayesian neural nets.

The data set that we use (Xiong et al., 2011) comes from the domain of genetics. The task is to predict the occurrence of alternative splicing based on RNA features. Alternative splicing is a significant cause of cellular diversity in mammalian tissues. Predicting the
occurrence of alternative splicing in certain tissues under different conditions is important for understanding many human diseases. Given the RNA features, the task is to predict the probability of three splicing-related events that biologists care about. The evaluation metric is Code Quality, which is a measure of the negative KL divergence between the target and the predicted probability distributions (higher is better). Appendix B.6 includes a detailed description of the data set and this performance metric.

Table 8: Results on the Alternative Splicing Data Set.

Method                                                  Code Quality (bits)
Neural Network (early stopping) (Xiong et al., 2011)    440
Regression, PCA (Xiong et al., 2011)                    463
SVM, PCA (Xiong et al., 2011)                           487
Neural Network with dropout                             567
Bayesian Neural Network (Xiong et al., 2011)            623

Table 8 summarizes the performance of different models on this data set. Xiong et al. (2011) used Bayesian neural nets for this task. As expected, we found that Bayesian neural nets perform better than dropout. However, we see that dropout improves significantly upon the performance of standard neural nets and outperforms all other methods. The challenge in this data set is to prevent overfitting since the size of the training set is small. One way to prevent overfitting is to reduce the input dimensionality using PCA. Thereafter, standard techniques such as SVMs or logistic regression can be used. However, with dropout we were able to prevent overfitting without the need to do dimensionality reduction. The dropout nets are very large (1000s of hidden units) compared to a few tens of units in the Bayesian network. This shows that dropout has a strong regularizing effect.

6.5 Comparison with Standard Regularizers

Several regularization methods have been proposed for preventing overfitting in neural networks. These include L2 weight decay (more generally Tikhonov regularization (Tikhonov, 1943)), lasso (Tibshirani, 1996), KL-sparsity and max-norm regularization. Dropout can be seen as another way of regularizing neural networks. In this section we compare dropout with some of these regularization methods using the MNIST data set. The same network architecture with ReLUs was trained using stochastic gradient descent with different regularizations. Table 9 shows the results. The values of different hyperparameters associated with each kind of regularization (decay constants, target sparsity, dropout rate, max-norm upper bound) were obtained using a validation set. We found that dropout combined with max-norm regularization gives the lowest generalization error.

7. Salient Features

The experiments described in the previous section provide strong evidence that dropout is a useful technique for improving neural networks. In this section, we closely examine how dropout affects a neural network. We analyze the effect of dropout on the quality of features produced. We see how dropout affects the sparsity of hidden unit activations. We
also see how the advantages obtained from dropout vary with the probability of retaining units, the size of the network and the size of the training set. These observations give some insight into why dropout works so well.

Table 9: Comparison of different regularization methods on MNIST.

Method                                          Test Classification Error %
L2                                              -
L2 + L1 applied towards the end of training     1.60
L2 + KL-sparsity                                1.55
Max-norm                                        1.35
Dropout + L2                                    -
Dropout + Max-norm                              1.05

7.1 Effect on Features

Figure 7: Features learned on MNIST with one-hidden-layer autoencoders having 256 rectified linear units: (a) without dropout, (b) dropout with p = 0.5.

In a standard neural network, the derivative received by each parameter tells it how it should change so the final loss function is reduced, given what all other units are doing. Therefore, units may change in a way that they fix up the mistakes of the other units. This may lead to complex co-adaptations. This in turn leads to overfitting because these co-adaptations do not generalize to unseen data. We hypothesize that for each hidden unit, dropout prevents co-adaptation by making the presence of other hidden units unreliable. Therefore, a hidden unit cannot rely on other specific units to correct its mistakes. It must perform well in a wide variety of different contexts provided by the other hidden units. To observe this effect directly, we look at the first level features learned by neural networks trained on visual tasks with and without dropout.
Figure 7a shows features learned by an autoencoder on MNIST with a single hidden layer of 256 rectified linear units without dropout. Figure 7b shows the features learned by an identical autoencoder which used dropout in the hidden layer with p = 0.5. Both autoencoders had similar test reconstruction errors. However, it is apparent that the features shown in Figure 7a have co-adapted in order to produce good reconstructions. Each hidden unit on its own does not seem to be detecting a meaningful feature. On the other hand, in Figure 7b, the hidden units seem to detect edges, strokes and spots in different parts of the image. This shows that dropout does break up co-adaptations, which is probably the main reason why it leads to lower generalization errors.

7.2 Effect on Sparsity

Figure 8: Effect of dropout on sparsity: (a) without dropout, (b) dropout with p = 0.5. ReLUs were used for both models. Left: The histogram of mean activations shows that most units have a mean activation of about 2.0, and the histogram of activations shows a huge mode away from zero; clearly, a large fraction of units have high activation. Right: The histogram of mean activations shows that most units have a smaller mean activation of about 0.7, and the histogram of activations shows a sharp peak at zero; very few units have high activation.

We found that as a side-effect of doing dropout, the activations of the hidden units become sparse, even when no sparsity-inducing regularizers are present. Thus, dropout automatically leads to sparse representations. To observe this effect, we take the autoencoders trained in the previous section and look at the sparsity of hidden unit activations on a random mini-batch taken from the test set. Figure 8a and Figure 8b compare the sparsity for the two models. In a good sparse model, there should only be a few highly activated units for any data case. Moreover, the average activation of any unit across data cases should be low. To assess both of these qualities, we plot two histograms for each model. For each model, the histogram on the left shows the distribution of mean activations of hidden units across the minibatch. The histogram on the right shows the distribution of activations of the hidden units.

Comparing the histograms of activations, we can see that fewer hidden units have high activations in Figure 8b compared to Figure 8a, as seen by the significant mass away from
zero for the net that does not use dropout. The mean activations are also smaller for the dropout net. The overall mean activation of hidden units is close to 2.0 for the autoencoder without dropout but drops to around 0.7 when dropout is used.

7.3 Effect of Dropout Rate

Dropout has a tunable hyperparameter p (the probability of retaining a unit in the network). In this section, we explore the effect of varying this hyperparameter. The comparison is done in two situations:

1. The number of hidden units is held constant.
2. The number of hidden units is changed so that the expected number of hidden units that will be retained after dropout is held constant.

In the first case, we train the same network architecture with different amounts of dropout; no input dropout was used. Figure 9a shows the test error obtained as a function of p. If the architecture is held constant, having a small p means very few units will turn on during training. It can be seen that this has led to underfitting since the training error is also high. We see that as p increases, the error goes down. It becomes flat when 0.4 ≤ p ≤ 0.8 and then increases as p becomes close to 1.

Figure 9: Effect of changing dropout rates on MNIST. (a) Keeping n fixed. (b) Keeping pn fixed. Both panels plot training and test classification error against the probability of retaining a unit (p).

Another interesting setting is the second case, in which the quantity pn is held constant, where n is the number of hidden units in any particular layer. This means that networks that have small p will have a large number of hidden units. Therefore, after applying dropout, the expected number of units that are present will be the same across different architectures. However, the test networks will be of different sizes. In our experiments, we set pn = 256 for the first two hidden layers and pn = 512 for the last hidden layer. Figure 9b shows the test error obtained as a function of p. We notice that the magnitude of errors for small values of p has reduced by a lot compared to Figure 9a (for p = 0.1 it fell from 2.7% to 1.7%). Values of p that are close to 0.6 seem to perform best for this choice of pn, but our usual default value of 0.5 is close to optimal.
7.4 Effect of Data Set Size

One test of a good regularizer is that it should make it possible to get good generalization error from models with a large number of parameters trained on small data sets. This section explores the effect of changing the data set size when dropout is used with feed-forward networks. Huge neural networks trained in the standard way overfit massively on small data sets. To see if dropout can help, we run classification experiments on MNIST and vary the amount of data given to the network.

Figure 10: Effect of varying data set size (classification error with and without dropout).

The results of these experiments are shown in Figure 10. The network was given data sets of size 100, 500, 1K, 5K, 10K and 50K chosen randomly from the MNIST training set. The same network architecture was used for all data sets. Dropout with p = 0.5 was performed at all the hidden layers and p = 0.8 at the input layer. It can be observed that for extremely small data sets (100, 500) dropout does not give any improvements. The model has enough parameters that it can overfit on the training data, even with all the noise coming from dropout. As the size of the data set is increased, the gain from doing dropout increases up to a point and then declines. This suggests that for any given architecture and dropout rate, there is a "sweet spot" corresponding to some amount of data that is large enough to not be memorized in spite of the noise but not so large that overfitting is not a problem anyway.

7.5 Monte-Carlo Model Averaging vs. Weight Scaling

The efficient test time procedure that we propose is to do an approximate model combination by scaling down the weights of the trained neural network. An expensive but more correct way of averaging the models is to sample k neural nets using dropout for each test case and average their predictions. As k → ∞, this Monte-Carlo model average gets close to the true model average. It is interesting to see empirically how many samples k are needed to match the performance of the approximate averaging method. By computing the error for different values of k we can see how quickly the error rate of the finite-sample average approaches the error rate of the true model average.

Figure 11: Monte-Carlo model averaging vs. weight scaling (test classification error against the number of samples k used for Monte-Carlo averaging).

We again use the MNIST data set and do classification by averaging the predictions of k randomly sampled neural networks. Figure 11 shows the test error rate obtained for different values of k. This is compared with the error obtained using the weight scaling method (shown as a horizontal line). It can be seen that around k = 50, the Monte-Carlo method becomes as good as the approximate method. Thereafter, the Monte-Carlo method is slightly better than the approximate method, but well within one standard deviation of it. This suggests that the weight scaling method is a fairly good approximation of the true model average.
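The two test-time procedures compared here can be written down directly. The sketch below reuses the hypothetical forward_train and forward_test helpers from the sketch in Section 4; it is meant only to make the comparison concrete, not to reproduce the MNIST numbers.

```python
import numpy as np

def predict_weight_scaling(x, weights, biases, p=0.5):
    # single deterministic pass with W_test = p * W
    return forward_test(x, weights, biases, p=p)

def predict_monte_carlo(x, weights, biases, p=0.5, k=50, rng=None):
    # average the predictions of k independently sampled thinned networks
    rng = rng or np.random.default_rng(0)
    preds = [forward_train(x, weights, biases, p=p, rng=rng) for _ in range(k)]
    return np.mean(preds, axis=0)
```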
8. Dropout Restricted Boltzmann Machines

Besides feed-forward neural networks, dropout can also be applied to Restricted Boltzmann Machines (RBMs). In this section, we formally describe this model and show some results to illustrate its key properties.

8.1 Model Description

Consider an RBM with visible units v ∈ {0, 1}^D and hidden units h ∈ {0, 1}^F. It defines the following probability distribution

    P(h, v; θ) = (1/Z(θ)) exp(v^T W h + a^T v + b^T h),

where θ = {W, a, b} represents the model parameters and Z is the partition function.

Dropout RBMs are RBMs augmented with a vector of binary random variables r ∈ {0, 1}^F. Each random variable r_j takes the value 1 with probability p, independent of the others. If r_j takes the value 1, the hidden unit h_j is retained, otherwise it is dropped from the model. The joint distribution defined by a Dropout RBM can be expressed as

    P(r, h, v; p, θ) = P(r; p) P(h, v | r; θ),
    P(r; p) = ∏_{j=1}^{F} p^{r_j} (1 − p)^{1 − r_j},
    P(h, v | r; θ) = (1/Z'(θ, r)) exp(v^T W h + a^T v + b^T h) ∏_{j=1}^{F} g(h_j, r_j),
    g(h_j, r_j) = 1(r_j = 1) + 1(r_j = 0) 1(h_j = 0).

Z'(θ, r) is the normalization constant. g(h_j, r_j) imposes the constraint that if r_j = 0, h_j must be 0. The distribution over h, conditioned on v and r, is factorial:

    P(h | r, v) = ∏_{j=1}^{F} P(h_j | r_j, v),
    P(h_j = 1 | r_j, v) = 1(r_j = 1) σ(b_j + Σ_i W_ij v_i).
Figure 12: Features learned on MNIST by 256-hidden-unit RBMs: (a) without dropout, (b) dropout with p = 0.5. The features are ordered by L2 norm.

The distribution over v conditioned on h is the same as that of a standard RBM:

    P(v | h) = ∏_{i=1}^{D} P(v_i | h),
    P(v_i = 1 | h) = σ(a_i + Σ_j W_ij h_j).

Conditioned on r, the distribution over {v, h} is the same as the distribution that an RBM would impose, except that the units for which r_j = 0 are dropped from h. Therefore, the Dropout RBM model can be seen as a mixture of exponentially many RBMs with shared weights, each using a different subset of h.

8.2 Learning Dropout RBMs

Learning algorithms developed for RBMs such as Contrastive Divergence (Hinton et al., 2006) can be directly applied for learning Dropout RBMs. The only difference is that r is first sampled and only the hidden units that are retained are used for training. Similar to dropout neural networks, a different r is sampled for each training case in every minibatch. In our experiments, we use CD-1 for training dropout RBMs.
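A minimal sketch of one CD-1 update for a dropout RBM, using the notation above (a for visible biases, b for hidden biases), is given below. The mini-batch interface, learning rate and sampling choices are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_dropout_rbm_step(v0, W, a, b, p=0.5, lr=0.1, rng=np.random.default_rng(0)):
    """One CD-1 update on a mini-batch v0 of binary visible vectors.
    r masks the hidden units: dropped units are clamped to 0 and therefore
    contribute nothing to the statistics or the parameter updates."""
    r = rng.binomial(1, p, size=b.shape)        # keep-mask over hidden units
    ph0 = r * sigmoid(b + v0 @ W)               # P(h_j = 1 | r_j, v0)
    h0 = rng.binomial(1, ph0)
    pv1 = sigmoid(a + h0 @ W.T)                 # P(v_i = 1 | h)
    v1 = rng.binomial(1, pv1)                   # one-step reconstruction
    ph1 = r * sigmoid(b + v1 @ W)
    # gradient estimates restricted to the retained hidden units
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / v0.shape[0]
    a += lr * (v0 - v1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b
```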
8.3 Effect on Features

Dropout in feed-forward networks improved the quality of features by reducing co-adaptations. This section explores whether this effect transfers to Dropout RBMs as well. Figure 12a shows features learned by a binary RBM with 256 hidden units. Figure 12b shows features learned by a dropout RBM with the same number of hidden units. Features learned by the dropout RBM appear qualitatively different in the sense that they seem to capture features that are coarser compared to the sharply defined stroke-like features in the standard RBM. There seem to be very few dead units in the dropout RBM relative to the standard RBM.

Figure 13: Effect of dropout on sparsity: (a) without dropout, (b) dropout with p = 0.5. Left: The activation histogram shows that a large number of units have activations away from zero. Right: A large number of units have activations close to zero and very few units have high activation.

8.4 Effect on Sparsity

Next, we investigate the effect of dropout RBM training on the sparsity of the hidden unit activations. Figure 13a shows the histograms of hidden unit activations and their means on a test mini-batch after training an RBM. Figure 13b shows the same for dropout RBMs. The histograms clearly indicate that the dropout RBMs learn much sparser representations than standard RBMs, even when no additional sparsity-inducing regularizer is present.

9. Marginalizing Dropout

Dropout can be seen as a way of adding noise to the states of hidden units in a neural network. In this section, we explore the class of models that arise as a result of marginalizing this noise. These models can be seen as deterministic versions of dropout. In contrast to standard ("Monte-Carlo") dropout, these models do not need random bits and it is possible to get gradients for the marginalized loss functions. In this section, we briefly explore these models.

Deterministic algorithms have been proposed that try to learn models that are robust to feature deletion at test time (Globerson and Roweis, 2006). Marginalization in the context of denoising autoencoders has been explored previously (Chen et al., 2012). The marginalization of dropout noise in the context of linear regression was discussed in Srivastava (2013). Wang and Manning (2013) further explored the idea of marginalizing dropout to speed up training. van der Maaten et al. (2013) investigated different input noise distributions and
the regularizers obtained by marginalizing this noise. Wager et al. (2013) describe how dropout can be seen as an adaptive regularizer.

9.1 Linear Regression

First we explore a very simple case of applying dropout to the classical problem of linear regression. Let X ∈ R^(N×D) be a data matrix of N data points and y ∈ R^N be a vector of targets. Linear regression tries to find a w ∈ R^D that minimizes ||y − Xw||^2.

When the input X is dropped out such that any input dimension is retained with probability p, the input can be expressed as R ∗ X, where R ∈ {0, 1}^(N×D) is a random matrix with R_ij ~ Bernoulli(p) and ∗ denotes an element-wise product. Marginalizing the noise, the objective function becomes

    minimize_w  E_{R ~ Bernoulli(p)} [ ||y − (R ∗ X)w||^2 ].

This reduces to

    minimize_w  ||y − pXw||^2 + p(1 − p) ||Γw||^2,

where Γ = (diag(X^T X))^(1/2). Therefore, dropout with linear regression is equivalent, in expectation, to ridge regression with a particular form for Γ. This form of Γ essentially scales the weight cost for weight w_i by the standard deviation of the i-th dimension of the data. If a particular data dimension varies a lot, the regularizer tries to squeeze its weight more.

Another interesting way to look at this objective is to absorb the factor of p into w. This leads to the following form

    minimize_w̃  ||y − Xw̃||^2 + ((1 − p)/p) ||Γw̃||^2,

where w̃ = pw. This makes the dependence of the regularization constant on p explicit. For p close to 1, all the inputs are retained and the regularization constant is small. As more dropout is done (by decreasing p), the regularization constant grows larger.

9.2 Logistic Regression and Deep Networks

For logistic regression and deep neural nets, it is hard to obtain a closed-form marginalized model. However, Wang and Manning (2013) showed that in the context of dropout applied to logistic regression, the corresponding marginalized model can be trained approximately. Under reasonable assumptions, the distributions over the inputs to the logistic unit and over the gradients of the marginalized model are Gaussian. Their means and variances can be computed efficiently. This approximate marginalization outperforms Monte-Carlo dropout in terms of training time and generalization performance. However, the assumptions involved in this technique become successively weaker as more layers are added. Therefore, the results are not directly applicable to deep networks.
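The equivalence derived in Section 9.1 is easy to check numerically: averaging the dropout objective over many sampled masks should approach the closed-form ridge-like expression. The following self-contained sketch uses synthetic data and is only a sanity check of the algebra, not an experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, p = 200, 5, 0.7
X = rng.normal(size=(N, D))
y = rng.normal(size=N)
w = rng.normal(size=D)

# Monte-Carlo estimate of E_R ||y - (R * X) w||^2 with R_ij ~ Bernoulli(p)
mc = np.mean([np.sum((y - (rng.binomial(1, p, size=X.shape) * X) @ w) ** 2)
              for _ in range(20000)])

# Closed form: ||y - p X w||^2 + p (1 - p) ||Gamma w||^2, Gamma = diag(X^T X)^(1/2)
gamma = np.sqrt(np.diag(X.T @ X))
closed = np.sum((y - p * X @ w) ** 2) + p * (1 - p) * np.sum((gamma * w) ** 2)

print(mc, closed)   # the two numbers agree up to Monte-Carlo error
```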
10. Multiplicative Gaussian Noise

Dropout involves multiplying hidden activations by Bernoulli distributed random variables which take the value 1 with probability p and 0 otherwise. This idea can be generalized by multiplying the activations with random variables drawn from other distributions. We recently discovered that multiplying by a random variable drawn from N(1, 1) works just as well as, or perhaps better than, using Bernoulli noise. This new form of dropout amounts to adding a Gaussian distributed random variable with zero mean and standard deviation equal to the activation of the unit. That is, each hidden activation h is perturbed to h + h r, where r ~ N(0, 1), or equivalently h r', where r' ~ N(1, 1). We can generalize this to r' ~ N(1, σ^2), where σ becomes an additional hyperparameter to tune, just like p was in the standard (Bernoulli) dropout. The expected value of the activations remains unchanged, therefore no weight scaling is required at test time.

Table 10: Comparison of classification error % with Bernoulli and Gaussian dropout. For MNIST, the Bernoulli model uses p = 0.5 for the hidden units and p = 0.8 for the input units. For CIFAR-10, we use p = (0.9, 0.75, 0.75, 0.5, 0.5, 0.5) going from the input layer to the top. The value of σ for the Gaussian dropout models was set to be √((1 − p)/p). Results were averaged over 10 different random seeds.

Data Set   Architecture                          Bernoulli dropout   Gaussian dropout
MNIST      2 layers, 1024 units each             1.08 ± -            - ± 0.04
CIFAR-10   3 conv + 2 fully connected layers     12.6 ± -            - ± 0.1

In this paper, we described dropout as a method where we retain units with probability p at training time and scale down the weights by multiplying them by a factor of p at test time. Another way to achieve the same effect is to scale up the retained activations by multiplying by 1/p at training time and not modifying the weights at test time. These methods are equivalent with appropriate scaling of the learning rate and weight initializations at each layer. Therefore, dropout can be seen as multiplying h by a Bernoulli random variable r_b that takes the value 1/p with probability p and 0 otherwise, so that E[r_b] = 1 and Var[r_b] = (1 − p)/p. For the Gaussian multiplicative noise, if we set σ^2 = (1 − p)/p, we end up multiplying h by a random variable r_g, where E[r_g] = 1 and Var[r_g] = (1 − p)/p. Therefore, both forms of dropout can be set up so that the random variable being multiplied by has the same mean and variance. However, given these first and second order moments, r_g has the highest entropy and r_b has the lowest. Both these extremes work well, although preliminary experimental results shown in Table 10 suggest that the high entropy case might work slightly better. For each layer, the value of σ in the Gaussian model was set to be √((1 − p)/p), using the p from the corresponding layer in the Bernoulli model.
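The two multiplicative noise schemes can be summarized in a few lines. The sketch below writes both in the form that needs no test-time rescaling (the 1/p-scaled Bernoulli multiplier and the N(1, σ²) multiplier with σ² = (1 − p)/p); the function names and interface are assumptions for illustration.

```python
import numpy as np

def bernoulli_dropout(h, p=0.5, rng=np.random.default_rng(0)):
    """Multiply activations by r_b in {0, 1/p}: E[r_b] = 1, Var[r_b] = (1-p)/p."""
    return h * rng.binomial(1, p, size=h.shape) / p

def gaussian_dropout(h, p=0.5, rng=np.random.default_rng(0)):
    """Multiply activations by r_g ~ N(1, sigma^2) with sigma^2 = (1-p)/p,
    matching the first two moments of the Bernoulli multiplier."""
    sigma = np.sqrt((1.0 - p) / p)
    return h * rng.normal(loc=1.0, scale=sigma, size=h.shape)

# At test time, activations are used unchanged: both multipliers have mean 1.
```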
11. Conclusion

Dropout is a technique for improving neural networks by reducing overfitting. Standard backpropagation learning builds up brittle co-adaptations that work for the training data but do not generalize to unseen data. Random dropout breaks up these co-adaptations by making the presence of any particular hidden unit unreliable. This technique was found to improve the performance of neural nets in a wide variety of application domains including object classification, digit recognition, speech recognition, document classification and analysis of computational biology data. This suggests that dropout is a general technique and is not specific to any domain. Methods that use dropout achieve state-of-the-art results on SVHN, ImageNet, CIFAR-100 and MNIST. Dropout considerably improved the performance of standard neural nets on other data sets as well.

This idea can be extended to Restricted Boltzmann Machines and other graphical models. The central idea of dropout is to take a large model that overfits easily and repeatedly sample and train smaller sub-models from it. RBMs easily fit into this framework. We developed Dropout RBMs and empirically showed that they have certain desirable properties.

One of the drawbacks of dropout is that it increases training time. A dropout network typically takes 2-3 times longer to train than a standard neural network of the same architecture. A major cause of this increase is that the parameter updates are very noisy. Each training case effectively tries to train a different random architecture. Therefore, the gradients that are being computed are not gradients of the final architecture that will be used at test time, so it is not surprising that training takes a long time. However, it is likely that this stochasticity prevents overfitting. This creates a trade-off between overfitting and training time. With more training time, one can use high dropout and suffer less overfitting. However, one way to obtain some of the benefits of dropout without stochasticity is to marginalize the noise to obtain a regularizer that does the same thing as the dropout procedure, in expectation. We showed that for linear regression this regularizer is a modified form of L2 regularization. For more complicated models, it is not obvious how to obtain an equivalent regularizer. Speeding up dropout is an interesting direction for future work.

Acknowledgments

This research was supported by OGS, NSERC and an Early Researcher Award.

Appendix A. A Practical Guide for Training Dropout Networks

Neural networks are infamous for requiring extensive hyperparameter tuning. Dropout networks are no exception. In this section, we describe heuristics that might be useful for applying dropout.

A.1 Network Size

It is to be expected that dropping units will reduce the capacity of a neural network. If n is the number of hidden units in any layer and p is the probability of retaining a unit, then instead of n hidden units, only pn units will be present after dropout, in expectation. Moreover, this set of pn units will be different each time and the units are not allowed to build co-adaptations freely. Therefore, if an n-sized layer is optimal for a standard neural net on any given task, a good dropout net should have at least n/p units. We found this to be a useful heuristic for setting the number of hidden units in both convolutional and fully connected networks.
A.2 Learning Rate and Momentum

Dropout introduces a significant amount of noise in the gradients compared to standard stochastic gradient descent. Therefore, a lot of gradients tend to cancel each other. In order to make up for this, a dropout net should typically use 10-100 times the learning rate that was optimal for a standard neural net. Another way to reduce the effect of the noise is to use a high momentum. While momentum values of 0.9 are common for standard nets, with dropout we found that values around 0.95 to 0.99 work quite a lot better. Using a high learning rate and/or momentum significantly speeds up learning.

A.3 Max-norm Regularization

Though a large momentum and learning rate speed up learning, they sometimes cause the network weights to grow very large. To prevent this, we can use max-norm regularization. This constrains the norm of the vector of incoming weights at each hidden unit to be bounded above by a constant c. Typical values of c range from 3 to 4.

A.4 Dropout Rate

Dropout introduces an extra hyperparameter: the probability of retaining a unit, p. This hyperparameter controls the intensity of dropout. Setting p = 1 implies no dropout, and low values of p mean more dropout. Typical values of p for hidden units are in the range 0.5 to 0.8. For input layers, the choice depends on the kind of input. For real-valued inputs (image patches or speech frames), a typical value is 0.8. For hidden layers, the choice of p is coupled with the choice of the number of hidden units n. Smaller p requires big n, which slows down training and leads to underfitting. Large p may not produce enough dropout to prevent overfitting.
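The max-norm constraint in A.3 is typically enforced as a projection after each weight update. The NumPy sketch below is a minimal illustration of that projection together with example settings drawn from the ranges in A.2-A.4; it is not the paper's implementation, and the function name, array shapes and particular default values are assumptions.

```python
import numpy as np

def max_norm_project(W, c=3.0):
    """Rescale each column of W (the incoming weights of one hidden unit)
    so that its L2 norm is at most c, as described in A.3."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    scale = np.minimum(1.0, c / np.maximum(norms, 1e-12))
    return W * scale

# Example training settings chosen from the ranges suggested in Appendix A:
settings = {
    "p_input": 0.8,         # retention probability for real-valued inputs
    "p_hidden": 0.5,        # retention probability for hidden units (0.5-0.8)
    "momentum": 0.95,       # 0.95-0.99 instead of the usual 0.9
    "lr_multiplier": 10.0,  # 10-100x the learning rate of a standard net
    "max_norm_c": 3.0,      # max-norm constant, typically 3-4
}

W = np.random.randn(256, 128) * 0.5
W = max_norm_project(W, c=settings["max_norm_c"])
assert np.all(np.linalg.norm(W, axis=0) <= settings["max_norm_c"] + 1e-6)
```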
Appendix B. Detailed Description of Experiments and Data Sets

This section describes the network architectures and training details for the experimental results reported in this paper. The code for reproducing these results can be obtained from the first author's web page. The implementation is GPU-based. We used the excellent CUDA libraries cudamat (Mnih, 2009) and cuda-convnet (Krizhevsky et al., 2012) to implement our networks.

B.1 MNIST

The MNIST data set consists of 60,000 training and 10,000 test examples, each representing a digit image. We held out 10,000 random training images for validation. Hyperparameters were tuned on the validation set such that the best validation error was produced after 1 million weight updates. The validation set was then combined with the training set and training was done for 1 million weight updates. This net was used to evaluate the performance on the test set. This way of using the validation set was chosen because we found that it was easy to set up hyperparameters so that early stopping was not required at all. Therefore, once the hyperparameters were fixed, it made sense to combine the validation and training sets and train for a very long time.

The architectures shown in Figure 4 include all combinations of 2, 3, and 4 layer networks with 1024 and 2048 units in each layer. Thus, there are six architectures in all. For all the architectures (including the ones reported in Table 2), we used p = 0.5 in all hidden layers and p = 0.8 in the input layer. A final momentum of 0.95 and weight constraints with c = 2 were used in all the layers.

To test the limits of dropout's regularization power, we also experimented with 2 and 3 layer nets having 4096 and 8192 units. The 2 layer nets gave improvements, as shown in Table 2. However, the 3 layer nets performed slightly worse than the 2 layer ones with the same level of dropout. When we increased dropout, performance improved, but not enough to outperform the 2 layer nets.

B.2 SVHN

The SVHN data set consists of approximately 600,000 training images and 26,000 test images. The training set consists of two parts: a standard labeled training set and an extra set of labeled examples that are easy. A validation set was constructed by taking examples from both parts. Two-thirds of it were taken from the standard set (400 per class) and one-third from the extra set (200 per class), a total of 6000 samples. This is the same process used by Sermanet et al. (2012). The inputs were RGB pixels normalized to have zero mean and unit variance. Other preprocessing techniques such as global or local contrast normalization or ZCA whitening did not give any noticeable improvements.

The best architecture that we found uses three convolutional layers, each followed by a max-pooling layer. The convolutional layers have 96, 128 and 256 filters respectively. Each convolutional layer has a 5 × 5 receptive field applied with a stride of 1 pixel. Each max-pooling layer pools 3 × 3 regions at strides of 2 pixels. The convolutional layers are followed by two fully connected hidden layers having 2048 units each. All units use the rectified linear activation function. Dropout was applied to all the layers of the network with the probability of retaining a unit being p = (0.9, 0.75, 0.75, 0.5, 0.5, 0.5) for the different layers of the network (going from input to convolutional layers to fully connected layers). In addition, the max-norm constraint with c = 4 was used for all the weights. A momentum of 0.95 was used in all the layers. These hyperparameters were tuned using a validation set. Since the training set was quite large, we did not combine the validation set with the training set for final training. We report the test error of the model that had the smallest validation error.
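To make the layer-by-layer description above easier to scan, here is the same SVHN architecture written out as a plain Python specification. This is an illustrative summary rather than the paper's code: the field names and the input image shape are assumptions, and the assignment of the six retention probabilities to the input, convolutional and fully connected layers follows the order given in the text.

```python
# Illustrative summary of the SVHN network from Appendix B.2 (not the paper's code).
# p_retain follows p = (0.9, 0.75, 0.75, 0.5, 0.5, 0.5), ordered from input to the top.
svhn_net = [
    {"layer": "input",   "shape": (32, 32, 3), "p_retain": 0.9},   # RGB crops (shape assumed)
    {"layer": "conv1",   "filters": 96,  "kernel": (5, 5), "stride": 1,
     "activation": "relu", "p_retain": 0.75},
    {"layer": "maxpool", "window": (3, 3), "stride": 2},
    {"layer": "conv2",   "filters": 128, "kernel": (5, 5), "stride": 1,
     "activation": "relu", "p_retain": 0.75},
    {"layer": "maxpool", "window": (3, 3), "stride": 2},
    {"layer": "conv3",   "filters": 256, "kernel": (5, 5), "stride": 1,
     "activation": "relu", "p_retain": 0.5},
    {"layer": "maxpool", "window": (3, 3), "stride": 2},
    {"layer": "fc1",     "units": 2048, "activation": "relu", "p_retain": 0.5},
    {"layer": "fc2",     "units": 2048, "activation": "relu", "p_retain": 0.5},
    {"layer": "softmax", "units": 10},
]
training = {"max_norm_c": 4.0, "momentum": 0.95}   # as described in the text
```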
B.3 CIFAR-10 and CIFAR-100

The CIFAR-10 and CIFAR-100 data sets consist of 50,000 training and 10,000 test images each. They have 10 and 100 image categories respectively. These are color images. We used 5,000 of the training images for validation. We followed a procedure similar to MNIST, where we found the best hyperparameters using the validation set and then combined it with the training set. The images were preprocessed by doing global contrast normalization in each color channel followed by ZCA whitening. Global contrast normalization means that for each image and each color channel in that image, we compute the mean of the pixel intensities and subtract it from the channel. ZCA whitening means that we mean-center the data, rotate it onto its principal components, normalize each component and then rotate it back. The network architecture and dropout rates are the same as those for SVHN, except for the learning rates for the input layer, which had to be set to smaller values.

B.4 TIMIT

The open source Kaldi toolkit (Povey et al., 2011) was used to preprocess the data into log-filter banks. A monophone system was trained to do a forced alignment and to get labels for speech frames. Dropout neural networks were trained on windows of 21 consecutive frames to predict the label of the central frame. No speaker dependent operations were performed. The inputs were mean-centered and normalized to have unit variance. We used a probability of retention p = 0.8 in the input layers and 0.5 in the hidden layers. A max-norm constraint with c = 4 was used in all the layers. A momentum of 0.95 with a high learning rate of 0.1 was used. The learning rate was decayed as ε_0 (1 + t/T)^(-1). For DBN pretraining, we trained RBMs using CD-1. The variance of each input unit for the Gaussian RBM was fixed to 1. For finetuning the DBN with dropout, we found that in order to get the best results it was important to use a smaller learning rate (about 0.01). Adding max-norm constraints did not give any improvements.

B.5 Reuters

The Reuters RCV1 corpus contains more than 800,000 documents categorized into 103 classes. These classes are arranged in a tree hierarchy. We created a subset of this data set consisting of 402,738 articles and a vocabulary of 2000 words, comprising 50 categories in which each document belongs to exactly one class. The data was split into equal-sized training and test sets. We tried many network architectures and found that dropout gave improvements in classification accuracy over all of them. However, the improvement was not as significant as that for the image and speech data sets. This might be explained by the fact that this data set is quite big (more than 200,000 training examples) and overfitting is not a very serious problem.

B.6 Alternative Splicing

The alternative splicing data set consists of data for 3665 cassette exons, 1014 RNA features and 4 tissue types derived from 27 mouse tissues. For each input, the target consists of 4 softmax units (one for each tissue type). Each softmax unit has 3 states (inc, exc, nc) which are of biological importance. For each softmax unit, the aim is to predict a distribution over these 3 states that matches the observed distribution from wet lab experiments as closely as possible. The evaluation metric is Code Quality, which is defined as

Code Quality = Σ_{i ∈ data points} Σ_{t ∈ tissue types} Σ_{s ∈ {inc, exc, nc}} p^i_{s,t} log( q^t_s(r_i) / p̄_s ),

where p^i_{s,t} is the target probability for state s and tissue type t in input i; q^t_s(r_i) is the predicted probability for state s in tissue type t for input r_i; and p̄_s is the average of p^i_{s,t} over i and t.

A two layer dropout network with 1024 units in each layer was trained on this data set. A value of p = 0.5 was used for the hidden layer and p = 0.7 for the input layer. Max-norm regularization with high, decaying learning rates was used. Results were averaged across the same 5 folds used by Xiong et al. (2011).
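As a concrete reading of this metric, here is a small NumPy sketch (not the authors' evaluation code; the array layout and function name are assumptions). Targets and predictions are stored as arrays of shape (num_inputs, num_tissues, 3) over the states (inc, exc, nc), and the baseline p̄_s is the target probability for state s averaged over inputs and tissue types.

```python
import numpy as np

def code_quality(p_target, q_pred, eps=1e-12):
    """Code Quality metric as described in Appendix B.6.

    p_target, q_pred: arrays of shape (num_inputs, num_tissues, 3), where the
    last axis holds probabilities over the states (inc, exc, nc).
    Returns sum_i sum_t sum_s p[i, t, s] * log(q[i, t, s] / p_bar[s]),
    with p_bar[s] the average of p[i, t, s] over inputs i and tissue types t.
    """
    p_bar = p_target.mean(axis=(0, 1))        # shape (3,)
    ratio = (q_pred + eps) / (p_bar + eps)    # broadcasts over inputs and tissues
    return float(np.sum(p_target * np.log(ratio)))

# Tiny illustrative example: 2 inputs, 4 tissue types, 3 states.
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(3), size=(2, 4))
q = rng.dirichlet(np.ones(3), size=(2, 4))
print(code_quality(p, q))   # higher is better; predicting p_bar everywhere gives ~0
```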
References

M. Chen, Z. Xu, K. Weinberger, and F. Sha. Marginalized denoising autoencoders for domain adaptation. In Proceedings of the 29th International Conference on Machine Learning. ACM, 2012.
G. E. Dahl, M. Ranzato, A. Mohamed, and G. E. Hinton. Phone recognition with the mean-covariance restricted Boltzmann machine. In Advances in Neural Information Processing Systems 23, 2010.
O. Dekel, O. Shamir, and L. Xiao. Learning to classify with missing and corrupted features. Machine Learning, 81(2), 2010.
A. Globerson and S. Roweis. Nightmare at test time: robust learning by feature deletion. In Proceedings of the 23rd International Conference on Machine Learning. ACM, 2006.
I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In Proceedings of the 30th International Conference on Machine Learning. ACM, 2013.
G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786), 2006.
G. E. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18, 2006.
K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In Proceedings of the International Conference on Computer Vision (ICCV'09). IEEE, 2009.
A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, 2012.
Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4), 1989.
Y. Lin, F. Lv, S. Zhu, M. Yang, T. Cour, K. Yu, L. Cao, Z. Li, M.-H. Tsai, X. Zhou, T. Huang, and T. Zhang. ImageNet classification: fast descriptor coding and large-scale SVM training. Large Scale Visual Recognition Challenge, 2010.
A. Livnat, C. Papadimitriou, N. Pippenger, and M. W. Feldman. Sex, mixability, and modularity. Proceedings of the National Academy of Sciences, 107(4), 2010.
V. Mnih. CUDAMat: a CUDA-based matrix class for Python. Technical Report UTML TR, Department of Computer Science, University of Toronto, November 2009.
A. Mohamed, G. E. Dahl, and G. E. Hinton. Acoustic modeling using deep belief networks. IEEE Transactions on Audio, Speech, and Language Processing, 2012.
R. M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag New York, Inc., 1996.
Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011.
S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4(4), 1992.
D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, and K. Vesely. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, 2011.
R. Salakhutdinov and G. Hinton. Deep Boltzmann machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 5, 2009.
R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th International Conference on Machine Learning. ACM, 2008.
J. Sanchez and F. Perronnin. High-dimensional signature compression for large-scale image classification. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, 2011.
P. Sermanet, S. Chintala, and Y. LeCun. Convolutional neural networks applied to house numbers digit classification. In International Conference on Pattern Recognition (ICPR 2012), 2012.
P. Simard, D. Steinkraus, and J. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, volume 2, 2003.
J. Snoek, H. Larochelle, and R. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems 25, 2012.
N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, COLT'05. Springer-Verlag, 2005.
N. Srivastava. Improving Neural Networks with Dropout. Master's thesis, University of Toronto, January 2013.
R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1), 1996.
A. N. Tikhonov. On the stability of inverse problems. Doklady Akademii Nauk SSSR, 39(5), 1943.
L. van der Maaten, M. Chen, S. Tyree, and K. Q. Weinberger. Learning with marginalized corrupted features. In Proceedings of the 30th International Conference on Machine Learning. ACM, 2013.
P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning. ACM, 2008.
P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. In Proceedings of the 27th International Conference on Machine Learning. ACM, 2010.
S. Wager, S. Wang, and P. Liang. Dropout training as adaptive regularization. In Advances in Neural Information Processing Systems 26, 2013.
S. Wang and C. D. Manning. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning. ACM, 2013.
H. Y. Xiong, Y. Barash, and B. J. Frey. Bayesian prediction of tissue-regulated splicing using RNA sequence and cellular context. Bioinformatics, 27(18), 2011.
M. D. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks. CoRR, 2013.