Journal of Machine Learning Research 15 (2014) 1929-1958. Submitted 11/13; Published 6/14

Dropout: A Simple Way to Prevent Neural Networks from Overfitting

Nitish Srivastava (nitish@cs.toronto.edu)
Geoffrey Hinton (hinton@cs.toronto.edu)
Alex Krizhevsky (kriz@cs.toronto.edu)
Ilya Sutskever (ilya@cs.toronto.edu)
Ruslan Salakhutdinov (rsalakhu@cs.toronto.edu)
Department of Computer Science, University of Toronto, 10 Kings College Road, Rm 3302, Toronto, Ontario, M5S 3G4, Canada.

Editor: Yoshua Bengio

Abstract
Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different thinned networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.

Keywords: neural networks, regularization, model combination, deep learning

1. Introduction
Deep neural networks contain multiple non-linear hidden layers and this makes them very expressive models that can learn very complicated relationships between their inputs and outputs. With limited training data, however, many of these complicated relationships will be the result of sampling noise, so they will exist in the training set but not in real test data even if it is drawn from the same distribution. This leads to overfitting and many methods have been developed for reducing it. These include stopping the training as soon as performance on a validation set starts to get worse, introducing weight penalties of various kinds such as L1 and L2 regularization and soft weight sharing (Nowlan and Hinton, 1992).
With unlimited computation, the best way to regularize a fixed-sized model is to average the predictions of all possible settings of the parameters, weighting each setting by
Figure 1: Dropout Neural Net Model. (a) A standard neural net with 2 hidden layers. (b) An example of a thinned net produced by applying dropout to the network on the left. Crossed units have been dropped.

its posterior probability given the training data. This can sometimes be approximated quite well for simple or small models (Xiong et al., 2011; Salakhutdinov and Mnih, 2008), but we would like to approach the performance of the Bayesian gold standard using considerably less computation. We propose to do this by approximating an equally weighted geometric mean of the predictions of an exponential number of learned models that share parameters.

Model combination nearly always improves the performance of machine learning methods. With large neural networks, however, the obvious idea of averaging the outputs of many separately trained nets is prohibitively expensive. Combining several models is most helpful when the individual models are different from each other and in order to make neural net models different, they should either have different architectures or be trained on different data. Training many different architectures is hard because finding optimal hyperparameters for each architecture is a daunting task and training each large network requires a lot of computation. Moreover, large networks normally require large amounts of training data and there may not be enough data available to train different networks on different subsets of the data. Even if one was able to train many different large networks, using them all at test time is infeasible in applications where it is important to respond quickly.

Dropout is a technique that addresses both these issues. It prevents overfitting and provides a way of approximately combining exponentially many different neural network architectures efficiently. The term dropout refers to dropping out units (hidden and visible) in a neural network. By dropping a unit out, we mean temporarily removing it from the network, along with all its incoming and outgoing connections, as shown in Figure 1. The choice of which units to drop is random. In the simplest case, each unit is retained with a fixed probability p independent of other units, where p can be chosen using a validation set or can simply be set at 0.5, which seems to be close to optimal for a wide range of networks and tasks. For the input units, however, the optimal probability of retention is usually closer to 1 than to 0.5.
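As a concrete illustration of these retention probabilities, the following minimal NumPy sketch (our own illustration, not the authors' code) samples a Bernoulli mask and applies it to a vector of unit activations:

```python
import numpy as np

def drop_units(activations, p, rng):
    # Retain each unit with probability p by sampling an independent
    # Bernoulli mask; dropped units are simply zeroed out.
    mask = rng.binomial(n=1, p=p, size=activations.shape)
    return activations * mask

rng = np.random.default_rng(0)
hidden = rng.standard_normal(8)
print(drop_units(hidden, p=0.5, rng=rng))  # typical setting for hidden units
print(drop_units(hidden, p=0.8, rng=rng))  # retention closer to 1 for input units
```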
Figure 2: Left: A unit at training time that is present with probability p and is connected to units in the next layer with weights w. Right: At test time, the unit is always present and the weights are multiplied by p. The output at test time is the same as the expected output at training time.

Applying dropout to a neural network amounts to sampling a thinned network from it. The thinned network consists of all the units that survived dropout (Figure 1b). A neural net with n units can be seen as a collection of 2^n possible thinned neural networks. These networks all share weights so that the total number of parameters is still O(n^2), or less. For each presentation of each training case, a new thinned network is sampled and trained. So training a neural network with dropout can be seen as training a collection of 2^n thinned networks with extensive weight sharing, where each thinned network gets trained very rarely, if at all.

At test time, it is not feasible to explicitly average the predictions from exponentially many thinned models. However, a very simple approximate averaging method works well in practice. The idea is to use a single neural net at test time without dropout. The weights of this network are scaled-down versions of the trained weights. If a unit is retained with probability p during training, the outgoing weights of that unit are multiplied by p at test time as shown in Figure 2. This ensures that for any hidden unit the expected output (under the distribution used to drop units at training time) is the same as the actual output at test time. By doing this scaling, 2^n networks with shared weights can be combined into a single neural network to be used at test time. We found that training a network with dropout and using this approximate averaging method at test time leads to significantly lower generalization error on a wide variety of classification problems compared to training with other regularization methods.

The idea of dropout is not limited to feed-forward neural nets. It can be more generally applied to graphical models such as Boltzmann Machines. In this paper, we introduce the dropout Restricted Boltzmann Machine model and compare it to standard Restricted Boltzmann Machines (RBM). Our experiments show that dropout RBMs are better than standard RBMs in certain respects.

This paper is structured as follows. Section 2 describes the motivation for this idea. Section 3 describes relevant previous work. Section 4 formally describes the dropout model. Section 5 gives an algorithm for training dropout networks. In Section 6, we present our experimental results where we apply dropout to problems in different domains and compare it with other forms of regularization and model combination. Section 7 analyzes the effect of dropout on different properties of a neural network and describes how dropout interacts with the network's hyperparameters. Section 8 describes the Dropout RBM model. In Section 9 we explore the idea of marginalizing dropout. In Appendix A we present a practical guide
for training dropout nets. This includes a detailed analysis of the practical considerations involved in choosing hyperparameters when training dropout networks.

2. Motivation
A motivation for dropout comes from a theory of the role of sex in evolution (Livnat et al., 2010). Sexual reproduction involves taking half the genes of one parent and half of the other, adding a very small amount of random mutation, and combining them to produce an offspring. The asexual alternative is to create an offspring with a slightly mutated copy of the parent's genes. It seems plausible that asexual reproduction should be a better way to optimize individual fitness because a good set of genes that have come to work well together can be passed on directly to the offspring. On the other hand, sexual reproduction is likely to break up these co-adapted sets of genes, especially if these sets are large and, intuitively, this should decrease the fitness of organisms that have already evolved complicated co-adaptations. However, sexual reproduction is the way most advanced organisms evolved.

One possible explanation for the superiority of sexual reproduction is that, over the long term, the criterion for natural selection may not be individual fitness but rather mix-ability of genes. The ability of a set of genes to be able to work well with another random set of genes makes them more robust. Since a gene cannot rely on a large set of partners to be present at all times, it must learn to do something useful on its own or in collaboration with a small number of other genes. According to this theory, the role of sexual reproduction is not just to allow useful new genes to spread throughout the population, but also to facilitate this process by reducing complex co-adaptations that would reduce the chance of a new gene improving the fitness of an individual. Similarly, each hidden unit in a neural network trained with dropout must learn to work with a randomly chosen sample of other units. This should make each hidden unit more robust and drive it towards creating useful features on its own without relying on other hidden units to correct its mistakes. However, the hidden units within a layer will still learn to do different things from each other. One might imagine that the net would become robust against dropout by making many copies of each hidden unit, but this is a poor solution for exactly the same reason as replica codes are a poor way to deal with a noisy channel.

A closely related, but slightly different motivation for dropout comes from thinking about successful conspiracies. Ten conspiracies each involving five people is probably a better way to create havoc than one big conspiracy that requires fifty people to all play their parts correctly. If conditions do not change and there is plenty of time for rehearsal, a big conspiracy can work well, but with non-stationary conditions, the smaller the conspiracy the greater its chance of still working. Complex co-adaptations can be trained to work well on a training set, but on novel test data they are far more likely to fail than multiple simpler co-adaptations that achieve the same thing.

3. Related Work
Dropout can be interpreted as a way of regularizing a neural network by adding noise to its hidden units. The idea of adding noise to the states of units has previously been used in the context of Denoising Autoencoders (DAEs) by Vincent et al. (2008, 2010) where noise
is added to the input units of an autoencoder and the network is trained to reconstruct the noise-free input. Our work extends this idea by showing that dropout can be effectively applied in the hidden layers as well and that it can be interpreted as a form of model averaging. We also show that adding noise is not only useful for unsupervised feature learning but can also be extended to supervised learning problems. In fact, our method can be applied to other neuron-based architectures, for example, Boltzmann Machines. While 5% noise typically works best for DAEs, we found that our weight scaling procedure applied at test time enables us to use much higher noise levels. Dropping out 20% of the input units and 50% of the hidden units was often found to be optimal.

Since dropout can be seen as a stochastic regularization technique, it is natural to consider its deterministic counterpart which is obtained by marginalizing out the noise. In this paper, we show that, in simple cases, dropout can be analytically marginalized out to obtain deterministic regularization methods. Recently, van der Maaten et al. (2013) also explored deterministic regularizers corresponding to different exponential-family noise distributions, including dropout (which they refer to as "blankout noise"). However, they apply noise to the inputs and only explore models with no hidden layers. Wang and Manning (2013) proposed a method for speeding up dropout by marginalizing dropout noise. Chen et al. (2012) explored marginalization in the context of denoising autoencoders.

In dropout, we minimize the loss function stochastically under a noise distribution. This can be seen as minimizing an expected loss function. Previous work of Globerson and Roweis (2006); Dekel et al. (2010) explored an alternate setting where the loss is minimized when an adversary gets to pick which units to drop. Here, instead of a noise distribution, the maximum number of units that can be dropped is fixed. However, this work also does not explore models with hidden units.

4. Model Description
This section describes the dropout neural network model. Consider a neural network with L hidden layers. Let l ∈ {1, ..., L} index the hidden layers of the network. Let z^{(l)} denote the vector of inputs into layer l, y^{(l)} denote the vector of outputs from layer l (y^{(0)} = x is the input). W^{(l)} and b^{(l)} are the weights and biases at layer l. The feed-forward operation of a standard neural network (Figure 3a) can be described as (for l ∈ {0, ..., L-1} and any hidden unit i)

    z_i^{(l+1)} = w_i^{(l+1)} y^{(l)} + b_i^{(l+1)},
    y_i^{(l+1)} = f(z_i^{(l+1)}),

where f is any activation function, for example, f(x) = 1/(1 + exp(-x)). With dropout, the feed-forward operation becomes (Figure 3b)

    r_j^{(l)} ∼ Bernoulli(p),
    ỹ^{(l)} = r^{(l)} ∗ y^{(l)},
    z_i^{(l+1)} = w_i^{(l+1)} ỹ^{(l)} + b_i^{(l+1)},
    y_i^{(l+1)} = f(z_i^{(l+1)}).
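The following short NumPy sketch mirrors these feed-forward equations for a single layer, with the logistic function used as the example activation f; it is an illustration under our own naming, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layer_forward_train(y_prev, W, b, p, rng):
    # r^(l) ~ Bernoulli(p), one variable per unit of the previous layer's output
    r = rng.binomial(1, p, size=y_prev.shape)
    y_tilde = r * y_prev                 # thinned outputs: r^(l) * y^(l)
    z = W @ y_tilde + b                  # z^(l+1) = W^(l+1) y_tilde^(l) + b^(l+1)
    return sigmoid(z)                    # y^(l+1) = f(z^(l+1))

def layer_forward_test(y_prev, W, b, p):
    # At test time no units are dropped; the weights are scaled by p instead.
    return sigmoid((p * W) @ y_prev + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
W, b = 0.1 * rng.standard_normal((32, 16)), np.zeros(32)
y_train = layer_forward_train(x, W, b, p=0.5, rng=rng)
y_test = layer_forward_test(x, W, b, p=0.5)
```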
Figure 3: Comparison of the basic operations of a standard and a dropout network. (a) Standard network; (b) Dropout network.

Here ∗ denotes an element-wise product. For any layer l, r^{(l)} is a vector of independent Bernoulli random variables each of which has probability p of being 1. This vector is sampled and multiplied element-wise with the outputs of that layer, y^{(l)}, to create the thinned outputs ỹ^{(l)}. The thinned outputs are then used as input to the next layer. This process is applied at each layer. This amounts to sampling a sub-network from a larger network. For learning, the derivatives of the loss function are backpropagated through the sub-network. At test time, the weights are scaled as W_test^{(l)} = pW^{(l)} as shown in Figure 2. The resulting neural network is used without dropout.

5. Learning Dropout Nets
This section describes a procedure for training dropout neural nets.

5.1 Backpropagation
Dropout neural networks can be trained using stochastic gradient descent in a manner similar to standard neural nets. The only difference is that for each training case in a mini-batch, we sample a thinned network by dropping out units. Forward and backpropagation for that training case are done only on this thinned network. The gradients for each parameter are averaged over the training cases in each mini-batch. Any training case which does not use a parameter contributes a gradient of zero for that parameter. Many methods have been used to improve stochastic gradient descent such as momentum, annealed learning rates and L2 weight decay. Those were found to be useful for dropout neural networks as well.

One particular form of regularization was found to be especially useful for dropout: constraining the norm of the incoming weight vector at each hidden unit to be upper bounded by a fixed constant c. In other words, if w represents the vector of weights incident on any hidden unit, the neural network was optimized under the constraint ||w||_2 ≤ c. This constraint was imposed during optimization by projecting w onto the surface of a ball of radius c, whenever w went out of it. This is also called max-norm regularization since it implies that the maximum value that the norm of any weight can take is c.
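A minimal sketch of this projection step, assuming for illustration that each row of W holds the incoming weights of one hidden unit:

```python
import numpy as np

def max_norm_project(W, c):
    # After a gradient update, project each unit's incoming weight vector
    # back onto the ball ||w||_2 <= c whenever it has moved outside it.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.minimum(1.0, c / np.maximum(norms, 1e-12))
    return W * scale

W = np.random.default_rng(0).standard_normal((4, 10)) * 3.0
W = max_norm_project(W, c=4.0)
assert np.all(np.linalg.norm(W, axis=1) <= 4.0 + 1e-6)
```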
The constant c is a tunable hyperparameter, which is determined using a validation set. Max-norm regularization has been previously used in the context of collaborative filtering (Srebro and Shraibman, 2005). It typically improves the performance of stochastic gradient descent training of deep neural nets, even when no dropout is used.

Although dropout alone gives significant improvements, using dropout along with max-norm regularization, large decaying learning rates and high momentum provides a significant boost over just using dropout. A possible justification is that constraining weight vectors to lie inside a ball of fixed radius makes it possible to use a huge learning rate without the possibility of weights blowing up. The noise provided by dropout then allows the optimization process to explore different regions of the weight space that would have otherwise been difficult to reach. As the learning rate decays, the optimization takes shorter steps, thereby doing less exploration and eventually settles into a minimum.

5.2 Unsupervised Pretraining
Neural networks can be pretrained using stacks of RBMs (Hinton and Salakhutdinov, 2006), autoencoders (Vincent et al., 2010) or Deep Boltzmann Machines (Salakhutdinov and Hinton, 2009). Pretraining is an effective way of making use of unlabeled data. Pretraining followed by finetuning with backpropagation has been shown to give significant performance boosts over finetuning from random initializations in certain cases.

Dropout can be applied to finetune nets that have been pretrained using these techniques. The pretraining procedure stays the same. The weights obtained from pretraining should be scaled up by a factor of 1/p. This makes sure that for each unit, the expected output from it under random dropout will be the same as the output during pretraining. We were initially concerned that the stochastic nature of dropout might wipe out the information in the pretrained weights. This did happen when the learning rates used during finetuning were comparable to the best learning rates for randomly initialized nets. However, when the learning rates were chosen to be smaller, the information in the pretrained weights seemed to be retained and we were able to get improvements in terms of the final generalization error compared to not using dropout when finetuning.

6. Experimental Results
We trained dropout neural networks for classification problems on data sets in different domains. We found that dropout improved generalization performance on all data sets compared to neural networks that did not use dropout. Table 1 gives a brief description of the data sets. The data sets are
- MNIST: A standard toy data set of handwritten digits.
- TIMIT: A standard speech benchmark for clean speech recognition.
- CIFAR-10 and CIFAR-100: Tiny natural images (Krizhevsky, 2009).
- Street View House Numbers data set (SVHN): Images of house numbers collected by Google Street View (Netzer et al., 2011).
- ImageNet: A large collection of natural images.
- Reuters-RCV1: A collection of Reuters newswire articles.
- Alternative Splicing data set: RNA features for predicting alternative gene splicing (Xiong et al., 2011).

We chose a diverse set of data sets to demonstrate that dropout is a general technique for improving neural nets and is not specific to any particular application domain. In this section, we present some key results that show the effectiveness of dropout. A more detailed description of all the experiments and data sets is provided in Appendix B.

Table 1: Overview of the data sets used in this paper.

Data Set                Domain    Dimensionality             Training Set  Test Set
MNIST                   Vision    784 (28 × 28 grayscale)    60K           10K
SVHN                    Vision    3072 (32 × 32 color)       600K          26K
CIFAR-10/100            Vision    3072 (32 × 32 color)       60K           10K
ImageNet (ILSVRC-2012)  Vision    65536 (256 × 256 color)    1.2M          150K
TIMIT                   Speech    2520 (120-dim, 21 frames)  1.1M frames   58K frames
Reuters-RCV1            Text      2000                       200K          200K
Alternative Splicing    Genetics  1014                       2932          733

6.1 Results on Image Data Sets
We used five image data sets to evaluate dropout: MNIST, SVHN, CIFAR-10, CIFAR-100 and ImageNet. These data sets include different image types and training set sizes. Models which achieve state-of-the-art results on all of these data sets use dropout.

6.1.1 MNIST

Table 2: Comparison of different models on MNIST.

Method                                                      Unit Type  Architecture               Error %
Standard Neural Net (Simard et al., 2003)                   Logistic   2 layers, 800 units        1.60
SVM Gaussian kernel                                         NA         NA                         1.40
Dropout NN                                                  Logistic   3 layers, 1024 units       1.35
Dropout NN                                                  ReLU       3 layers, 1024 units       1.25
Dropout NN + max-norm constraint                            ReLU       3 layers, 1024 units       1.06
Dropout NN + max-norm constraint                            ReLU       3 layers, 2048 units       1.04
Dropout NN + max-norm constraint                            ReLU       2 layers, 4096 units       1.01
Dropout NN + max-norm constraint                            ReLU       2 layers, 8192 units       0.95
Dropout NN + max-norm constraint (Goodfellow et al., 2013)  Maxout     2 layers, (5 × 240) units  0.94
DBN + finetuning (Hinton and Salakhutdinov, 2006)           Logistic   500-500-2000               1.18
DBM + finetuning (Salakhutdinov and Hinton, 2009)           Logistic   500-500-2000               0.96
DBN + dropout finetuning                                    Logistic   500-500-2000               0.92
DBM + dropout finetuning                                    Logistic   500-500-2000               0.79

The MNIST data set consists of 28 × 28 pixel handwritten digit images. The task is to classify the images into 10 digit classes. Table 2 compares the performance of dropout with other techniques. The best performing neural networks for the permutation invariant
setting that do not use dropout or unsupervised pretraining achieve an error of about 1.60% (Simard et al., 2003). With dropout the error reduces to 1.35%. Replacing logistic units with rectified linear units (ReLUs) (Jarrett et al., 2009) further reduces the error to 1.25%. Adding max-norm regularization again reduces it to 1.06%. Increasing the size of the network leads to better results. A neural net with 2 layers and 8192 units per layer gets down to 0.95% error. Note that this network has more than 65 million parameters and is being trained on a data set of size 60,000. Training a network of this size to give good generalization error is very hard with standard regularization methods and early stopping. Dropout, on the other hand, prevents overfitting, even in this case. It does not even need early stopping. Goodfellow et al. (2013) showed that results can be further improved to 0.94% by replacing ReLU units with maxout units. All dropout nets use p = 0.5 for hidden units and p = 0.8 for input units. More experimental details can be found in Appendix B.1.

Dropout nets pretrained with stacks of RBMs and Deep Boltzmann Machines also give improvements as shown in Table 2. DBM pretrained dropout nets achieve a test error of 0.79% which is the best performance ever reported for the permutation invariant setting. We note that it is possible to obtain better results by using 2-D spatial information and augmenting the training set with distorted versions of images from the standard training set. We demonstrate the effectiveness of dropout in that setting on more interesting data sets.

In order to test the robustness of dropout, classification experiments were done with networks of many different architectures keeping all hyperparameters, including p, fixed. Figure 4 shows the test error rates obtained for these different architectures as training progresses. The same architectures trained with and without dropout have drastically different test errors as seen by the two separate clusters of trajectories. Dropout gives a huge improvement across all architectures, without using hyperparameters that were tuned specifically for each architecture.

Figure 4: Test error for different architectures with and without dropout. The networks have 2 to 4 hidden layers each with 1024 to 2048 units. (Axes: classification error % vs. number of weight updates.)

6.1.2 Street View House Numbers
The Street View House Numbers (SVHN) Data Set (Netzer et al., 2011) consists of color images of house numbers collected by Google Street View. Figure 5a shows some examples of images from this data set. The part of the data set that we use in our experiments consists of 32 × 32 color images roughly centered on a digit in a house number. The task is to identify that digit.

For this data set, we applied dropout to convolutional neural networks (LeCun et al., 1989). The best architecture that we found has three convolutional layers followed by 2 fully connected hidden layers. All hidden units were ReLUs. Each convolutional layer was
followed by a max-pooling layer. Appendix B.2 describes the architecture in more detail. Dropout was applied to all the layers of the network with the probability of retaining a hidden unit being p = (0.9, 0.75, 0.75, 0.5, 0.5, 0.5) for the different layers of the network (going from input to convolutional layers to fully connected layers). Max-norm regularization was used for weights in both convolutional and fully connected layers.

Table 3: Results on the Street View House Numbers data set.

Method                                                               Error %
Binary Features (WDCH) (Netzer et al., 2011)                         36.7
HOG (Netzer et al., 2011)                                            15.0
Stacked Sparse Autoencoders (Netzer et al., 2011)                    10.3
KMeans (Netzer et al., 2011)                                         9.4
Multi-stage Conv Net with average pooling (Sermanet et al., 2012)    9.06
Multi-stage Conv Net + L2 pooling (Sermanet et al., 2012)            5.36
Multi-stage Conv Net + L4 pooling + padding (Sermanet et al., 2012)  4.90
Conv Net + max-pooling                                               3.95
Conv Net + max pooling + dropout in fully connected layers           3.02
Conv Net + stochastic pooling (Zeiler and Fergus, 2013)              2.80
Conv Net + max pooling + dropout in all layers                       2.55
Conv Net + maxout (Goodfellow et al., 2013)                          2.47
Human Performance                                                    2.0

Table 3 compares the results obtained by different methods. We find that convolutional nets outperform other methods. The best performing convolutional nets that do not use dropout achieve an error rate of 3.95%. Adding dropout only to the fully connected layers reduces the error to 3.02%. Adding dropout to the convolutional layers as well further reduces the error to 2.55%. Even more gains can be obtained by using maxout units.

The additional gain in performance obtained by adding dropout in the convolutional layers (3.02% to 2.55%) is worth noting. One may have presumed that since the convolutional layers don't have a lot of parameters, overfitting is not a problem and therefore dropout would not have much effect. However, dropout in the lower layers still helps because it provides noisy inputs for the higher fully connected layers which prevents them from overfitting.

6.1.3 CIFAR-10 and CIFAR-100
The CIFAR-10 and CIFAR-100 data sets consist of 32 × 32 color images drawn from 10 and 100 categories respectively. Figure 5b shows some examples of images from this data set. A detailed description of the data sets, input preprocessing, network architectures and other experimental details is given in Appendix B.3. Table 4 shows the error rate obtained by different methods on these data sets. Without any data augmentation, Snoek et al. (2012) used Bayesian hyperparameter optimization to obtain an error rate of 14.98% on CIFAR-10. Using dropout in the fully connected layers reduces that to 14.32% and adding dropout in every layer further reduces the error to 12.61%. Goodfellow et al. (2013) showed that the error is further reduced to 11.68% by replacing ReLU units with maxout units. On CIFAR-100, dropout reduces the error from 43.48% to 37.20% which is a huge improvement. No data augmentation was used for either data set (apart from the input dropout).
Figure 5: Samples from image data sets. Each row corresponds to a different category. (a) Street View House Numbers (SVHN); (b) CIFAR-10.

Table 4: Error rates on CIFAR-10 and CIFAR-100.

Method                                                   CIFAR-10  CIFAR-100
Conv Net + max pooling (hand tuned)                      15.60     43.48
Conv Net + stochastic pooling (Zeiler and Fergus, 2013)  15.13     42.51
Conv Net + max pooling (Snoek et al., 2012)              14.98     -
Conv Net + max pooling + dropout fully connected layers  14.32     41.26
Conv Net + max pooling + dropout in all layers           12.61     37.20
Conv Net + maxout (Goodfellow et al., 2013)              11.68     38.57

6.1.4 ImageNet
ImageNet is a data set of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. Starting in 2010, as part of the Pascal Visual Object Challenge, an annual competition called the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has been held. A subset of ImageNet with roughly 1000 images in each of 1000 categories is used in this challenge. Since the number of categories is rather large, it is conventional to report two error rates: top-1 and top-5, where the top-5 error rate is the fraction of test images for which the correct label is not among the five labels considered most probable by the model. Figure 6 shows some predictions made by our model on a few test images.

ILSVRC-2010 is the only version of ILSVRC for which the test set labels are available, so most of our experiments were performed on this data set. Table 5 compares the performance of different methods. Convolutional nets with dropout outperform other methods by a large margin. The architecture and implementation details are described in detail in Krizhevsky et al. (2012).
Figure 6: Some ImageNet test cases with the 4 most probable labels as predicted by our model. The length of the horizontal bars is proportional to the probability assigned to the labels by the model. Pink indicates ground truth.

Table 5: Results on the ILSVRC-2010 test set.

Model                                                Top-1  Top-5
Sparse Coding (Lin et al., 2010)                     47.1   28.2
SIFT + Fisher Vectors (Sanchez and Perronnin, 2011)  45.7   25.7
Conv Net + dropout (Krizhevsky et al., 2012)         37.5   17.0

Table 6: Results on the ILSVRC-2012 validation/test set.

Model                                                     Top-1 (val)  Top-5 (val)  Top-5 (test)
SVM on Fisher Vectors of Dense SIFT and Color Statistics  -            -            27.3
Avg of classifiers over FVs of SIFT, LBP, GIST and CSIFT  -            -            26.2
Conv Net + dropout (Krizhevsky et al., 2012)              40.7         18.2         -
Avg of 5 Conv Nets + dropout (Krizhevsky et al., 2012)    38.1         16.4         16.4

Our model based on convolutional nets and dropout won the ILSVRC-2012 competition. Since the labels for the test set are not available, we report our results on the test set for the final submission and include the validation set results for different variations of our model. Table 6 shows the results from the competition. While the best methods based on standard vision features achieve a top-5 error rate of about 26%, convolutional nets with dropout achieve a test error of about 16% which is a staggering difference. Figure 6 shows some examples of predictions made by our model. We can see that the model makes very reasonable predictions, even when its best guess is not correct.

6.2 Results on TIMIT
Next, we applied dropout to a speech recognition task. We use the TIMIT data set which consists of recordings from 680 speakers covering 8 major dialects of American English reading ten phonetically-rich sentences in a controlled noise-free environment. Dropout neural networks were trained on windows of 21 log-filter bank frames to predict the label of the central frame. No speaker dependent operations were performed. Appendix B.4 describes the data preprocessing and training details. Table 7 compares dropout neural
nets with other models. A 6-layer net gives a phone error rate of 23.4%. Dropout further improves it to 21.8%. We also trained dropout nets starting from pretrained weights. A 4-layer net pretrained with a stack of RBMs gets a phone error rate of 22.7%. With dropout, this reduces to 19.7%. Similarly, for an 8-layer net the error reduces from 20.5% to 19.7%.

Table 7: Phone error rate on the TIMIT core test set.

Method                                                  Phone Error Rate %
NN (6 layers) (Mohamed et al., 2010)                    23.4
Dropout NN (6 layers)                                   21.8
DBN-pretrained NN (4 layers)                            22.7
DBN-pretrained NN (6 layers) (Mohamed et al., 2010)     22.4
DBN-pretrained NN (8 layers) (Mohamed et al., 2010)     20.7
mcRBM-DBN-pretrained NN (5 layers) (Dahl et al., 2010)  20.5
DBN-pretrained NN (4 layers) + dropout                  19.7
DBN-pretrained NN (8 layers) + dropout                  19.7

6.3 Results on a Text Data Set
To test the usefulness of dropout in the text domain, we used dropout networks to train a document classifier. We used a subset of the Reuters-RCV1 data set which is a collection of over 800,000 newswire articles from Reuters. These articles cover a variety of topics. The task is to take a bag of words representation of a document and classify it into 50 disjoint topics. Appendix B.5 describes the setup in more detail. Our best neural net which did not use dropout obtained an error rate of 31.05%. Adding dropout reduced the error to 29.62%. We found that the improvement was much smaller compared to that for the vision and speech data sets.

6.4 Comparison with Bayesian Neural Networks
Dropout can be seen as a way of doing an equally-weighted averaging of exponentially many models with shared weights. On the other hand, Bayesian neural networks (Neal, 1996) are the proper way of doing model averaging over the space of neural network structures and parameters. In dropout, each model is weighted equally, whereas in a Bayesian neural network each model is weighted taking into account the prior and how well the model fits the data, which is the more correct approach. Bayesian neural nets are extremely useful for solving problems in domains where data is scarce such as medical diagnosis, genetics, drug discovery and other computational biology applications. However, Bayesian neural nets are slow to train and difficult to scale to very large network sizes. Besides, it is expensive to get predictions from many large nets at test time. On the other hand, dropout neural nets are much faster to train and use at test time. In this section, we report experiments that compare Bayesian neural nets with dropout neural nets on a small data set where Bayesian neural networks are known to perform well and obtain state-of-the-art results. The aim is to analyze how much dropout loses compared to Bayesian neural nets.

The data set that we use (Xiong et al., 2011) comes from the domain of genetics. The task is to predict the occurrence of alternative splicing based on RNA features. Alternative splicing is a significant cause of cellular diversity in mammalian tissues. Predicting the
occurrence of alternate splicing in certain tissues under different conditions is important for understanding many human diseases. Given the RNA features, the task is to predict the probability of three splicing related events that biologists care about. The evaluation metric is Code Quality which is a measure of the negative KL divergence between the target and the predicted probability distributions (higher is better). Appendix B.6 includes a detailed description of the data set and this performance metric.

Table 8: Results on the Alternative Splicing Data Set.

Method                                                Code Quality (bits)
Neural Network (early stopping) (Xiong et al., 2011)  440
Regression, PCA (Xiong et al., 2011)                  463
SVM, PCA (Xiong et al., 2011)                         487
Neural Network with dropout                           567
Bayesian Neural Network (Xiong et al., 2011)          623

Table 8 summarizes the performance of different models on this data set. Xiong et al. (2011) used Bayesian neural nets for this task. As expected, we found that Bayesian neural nets perform better than dropout. However, we see that dropout improves significantly upon the performance of standard neural nets and outperforms all other methods. The challenge in this data set is to prevent overfitting since the size of the training set is small. One way to prevent overfitting is to reduce the input dimensionality using PCA. Thereafter, standard techniques such as SVMs or logistic regression can be used. However, with dropout we were able to prevent overfitting without the need to do dimensionality reduction. The dropout nets are very large (1000s of hidden units) compared to a few tens of units in the Bayesian network. This shows that dropout has a strong regularizing effect.

6.5 Comparison with Standard Regularizers
Several regularization methods have been proposed for preventing overfitting in neural networks. These include L2 weight decay (more generally Tikhonov regularization (Tikhonov, 1943)), lasso (Tibshirani, 1996), KL-sparsity and max-norm regularization. Dropout can be seen as another way of regularizing neural networks. In this section we compare dropout with some of these regularization methods using the MNIST data set. The same network architecture (784-1024-1024-2048-10) with ReLUs was trained using stochastic gradient descent with different regularizations. Table 9 shows the results. The values of different hyperparameters associated with each kind of regularization (decay constants, target sparsity, dropout rate, max-norm upper bound) were obtained using a validation set. We found that dropout combined with max-norm regularization gives the lowest generalization error.

7. Salient Features
The experiments described in the previous section provide strong evidence that dropout is a useful technique for improving neural networks. In this section, we closely examine how dropout affects a neural network. We analyze the effect of dropout on the quality of features produced. We see how dropout affects the sparsity of hidden unit activations. We
also see how the advantages obtained from dropout vary with the probability of retaining units, size of the network and the size of the training set. These observations give some insight into why dropout works so well.

Table 9: Comparison of different regularization methods on MNIST.

Method                                         Test Classification error %
L2                                             1.62
L2 + L1 applied towards the end of training    1.60
L2 + KL-sparsity                               1.55
Max-norm                                       1.35
Dropout + L2                                   1.25
Dropout + Max-norm                             1.05

7.1 Effect on Features

Figure 7: Features learned on MNIST with one hidden layer autoencoders having 256 rectified linear units. (a) Without dropout; (b) Dropout with p = 0.5.

In a standard neural network, the derivative received by each parameter tells it how it should change so the final loss function is reduced, given what all other units are doing. Therefore, units may change in a way that they fix up the mistakes of the other units. This may lead to complex co-adaptations. This in turn leads to overfitting because these co-adaptations do not generalize to unseen data. We hypothesize that for each hidden unit, dropout prevents co-adaptation by making the presence of other hidden units unreliable. Therefore, a hidden unit cannot rely on other specific units to correct its mistakes. It must perform well in a wide variety of different contexts provided by the other hidden units. To observe this effect directly, we look at the first level features learned by neural networks trained on visual tasks with and without dropout.
Figure 7a shows features learned by an autoencoder on MNIST with a single hidden layer of 256 rectified linear units without dropout. Figure 7b shows the features learned by an identical autoencoder which used dropout in the hidden layer with p = 0.5. Both autoencoders had similar test reconstruction errors. However, it is apparent that the features shown in Figure 7a have co-adapted in order to produce good reconstructions. Each hidden unit on its own does not seem to be detecting a meaningful feature. On the other hand, in Figure 7b, the hidden units seem to detect edges, strokes and spots in different parts of the image. This shows that dropout does break up co-adaptations, which is probably the main reason why it leads to lower generalization errors.

7.2 Effect on Sparsity

Figure 8: Effect of dropout on sparsity. ReLUs were used for both models. Left (without dropout): The histogram of mean activations shows that most units have a mean activation of about 2.0. The histogram of activations shows a huge mode away from zero. Clearly, a large fraction of units have high activation. Right (dropout with p = 0.5): The histogram of mean activations shows that most units have a smaller mean activation of about 0.7. The histogram of activations shows a sharp peak at zero. Very few units have high activation.

We found that as a side-effect of doing dropout, the activations of the hidden units become sparse, even when no sparsity inducing regularizers are present. Thus, dropout automatically leads to sparse representations. To observe this effect, we take the autoencoders trained in the previous section and look at the sparsity of hidden unit activations on a random mini-batch taken from the test set. Figure 8a and Figure 8b compare the sparsity for the two models. In a good sparse model, there should only be a few highly activated units for any data case. Moreover, the average activation of any unit across data cases should be low. To assess both of these qualities, we plot two histograms for each model. For each model, the histogram on the left shows the distribution of mean activations of hidden units across the minibatch. The histogram on the right shows the distribution of activations of the hidden units.

Comparing the histograms of activations we can see that fewer hidden units have high activations in Figure 8b compared to Figure 8a, as seen by the significant mass away from
zero for the net that does not use dropout. The mean activations are also smaller for the dropout net. The overall mean activation of hidden units is close to 2.0 for the autoencoder without dropout but drops to around 0.7 when dropout is used.

7.3 Effect of Dropout Rate
Dropout has a tunable hyperparameter p (the probability of retaining a unit in the network). In this section, we explore the effect of varying this hyperparameter. The comparison is done in two situations.
1. The number of hidden units is held constant.
2. The number of hidden units is changed so that the expected number of hidden units that will be retained after dropout is held constant.
In the first case, we train the same network architecture with different amounts of dropout. We use a 784-2048-2048-2048-10 architecture. No input dropout was used. Figure 9a shows the test error obtained as a function of p. If the architecture is held constant, having a small p means very few units will turn on during training. It can be seen that this has led to underfitting since the training error is also high. We see that as p increases, the error goes down. It becomes flat when 0.4 ≤ p ≤ 0.8 and then increases as p becomes close to 1.

Figure 9: Effect of changing dropout rates on MNIST. (a) Keeping n fixed; (b) Keeping pn fixed. (Axes: test and training classification error % vs. probability of retaining a unit, p.)

Another interesting setting is the second case in which the quantity pn is held constant where n is the number of hidden units in any particular layer. This means that networks that have small p will have a large number of hidden units. Therefore, after applying dropout, the expected number of units that are present will be the same across different architectures. However, the test networks will be of different sizes. In our experiments, we set pn = 256 for the first two hidden layers and pn = 512 for the last hidden layer. Figure 9b shows the test error obtained as a function of p. We notice that the magnitude of errors for small values of p has reduced by a lot compared to Figure 9a (for p = 0.1 it fell from 2.7% to 1.7%). Values of p that are close to 0.6 seem to perform best for this choice of pn but our usual default value of 0.5 is close to optimal.
7.4 Effect of Data Set Size
One test of a good regularizer is that it should make it possible to get good generalization error from models with a large number of parameters trained on small data sets. This section explores the effect of changing the data set size when dropout is used with feed-forward networks. Huge neural networks trained in the standard way overfit massively on small data sets. To see if dropout can help, we run classification experiments on MNIST and vary the amount of data given to the network. The results of these experiments are shown in Figure 10. The network was given data sets of size 100, 500, 1K, 5K, 10K and 50K chosen randomly from the MNIST training set. The same network architecture (784-1024-1024-2048-10) was used for all data sets. Dropout with p = 0.5 was performed at all the hidden layers and p = 0.8 at the input layer.

Figure 10: Effect of varying data set size. (Classification error % vs. data set size, with and without dropout.)

It can be observed that for extremely small data sets (100, 500) dropout does not give any improvements. The model has enough parameters that it can overfit on the training data, even with all the noise coming from dropout. As the size of the data set is increased, the gain from doing dropout increases up to a point and then declines. This suggests that for any given architecture and dropout rate, there is a sweet spot corresponding to some amount of data that is large enough to not be memorized in spite of the noise but not so large that overfitting is not a problem anyways.

7.5 Monte-Carlo Model Averaging vs. Weight Scaling
The efficient test time procedure that we propose is to do an approximate model combination by scaling down the weights of the trained neural network. An expensive but more correct way of averaging the models is to sample k neural nets using dropout for each test case and average their predictions. As k → ∞, this Monte-Carlo model average gets close to the true model average. It is interesting to see empirically how many samples k are needed to match the performance of the approximate averaging method. By computing the error for different values of k we can see how quickly the error rate of the finite-sample average approaches the error rate of the true model average.

Figure 11: Monte-Carlo model averaging vs. weight scaling. (Test classification error % vs. number of samples k used for Monte-Carlo averaging.)
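The two test-time procedures compared in this section can be sketched as follows for a toy one-hidden-layer classifier with random weights; the layer sizes and names are our own illustrative choices, not the authors' code.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_weight_scaling(x, W1, b1, W2, b2, p):
    # Approximate average: one pass through the full net with the
    # hidden-to-output weights scaled by the retention probability p.
    h = np.maximum(0.0, W1 @ x + b1)
    return softmax((p * W2) @ h + b2)

def predict_monte_carlo(x, W1, b1, W2, b2, p, k, rng):
    # "True" average estimated by sampling k thinned networks and
    # averaging their predictive distributions.
    probs = np.zeros_like(b2)
    for _ in range(k):
        h = np.maximum(0.0, W1 @ x + b1)
        h = h * rng.binomial(1, p, size=h.shape)
        probs += softmax(W2 @ h + b2)
    return probs / k

rng = np.random.default_rng(0)
x = rng.standard_normal(20)
W1, b1 = 0.1 * rng.standard_normal((50, 20)), np.zeros(50)
W2, b2 = 0.1 * rng.standard_normal((10, 50)), np.zeros(10)
print(predict_weight_scaling(x, W1, b1, W2, b2, p=0.5))
print(predict_monte_carlo(x, W1, b1, W2, b2, p=0.5, k=50, rng=rng))
```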
We again use the MNIST data set and do classification by averaging the predictions of k randomly sampled neural networks. Figure 11 shows the test error rate obtained for different values of k. This is compared with the error obtained using the weight scaling method (shown as a horizontal line). It can be seen that around k = 50, the Monte-Carlo method becomes as good as the approximate method. Thereafter, the Monte-Carlo method is slightly better than the approximate method but well within one standard deviation of it. This suggests that the weight scaling method is a fairly good approximation of the true model average.

8. Dropout Restricted Boltzmann Machines
Besides feed-forward neural networks, dropout can also be applied to Restricted Boltzmann Machines (RBM). In this section, we formally describe this model and show some results to illustrate its key properties.

8.1 Model Description
Consider an RBM with visible units v ∈ {0, 1}^D and hidden units h ∈ {0, 1}^F. It defines the following probability distribution

    P(h, v; θ) = (1/Z(θ)) exp(v^⊤ W h + a^⊤ h + b^⊤ v),

where θ = {W, a, b} represents the model parameters and Z is the partition function.

Dropout RBMs are RBMs augmented with a vector of binary random variables r ∈ {0, 1}^F. Each random variable r_j takes the value 1 with probability p, independent of others. If r_j takes the value 1, the hidden unit h_j is retained, otherwise it is dropped from the model. The joint distribution defined by a Dropout RBM can be expressed as

    P(r, h, v; p, θ) = P(r; p) P(h, v | r; θ),
    P(r; p) = ∏_{j=1}^{F} p^{r_j} (1 - p)^{1 - r_j},
    P(h, v | r; θ) = (1/Z'(θ, r)) exp(v^⊤ W h + a^⊤ h + b^⊤ v) ∏_{j=1}^{F} g(h_j, r_j),
    g(h_j, r_j) = 1(r_j = 1) + 1(r_j = 0) 1(h_j = 0).

Z'(θ, r) is the normalization constant. g(h_j, r_j) imposes the constraint that if r_j = 0, h_j must be 0. The distribution over h, conditioned on v and r, is factorial

    P(h | r, v) = ∏_{j=1}^{F} P(h_j | r_j, v),
    P(h_j = 1 | r_j, v) = 1(r_j = 1) σ(b_j + Σ_i W_{ij} v_i).
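A small sketch of how this factorial conditional can be sampled (our own illustrative code, with arbitrary sizes):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_hidden_dropout_rbm(v, W, b_hid, p, rng):
    # Sample the retention mask r, then h_j given (r_j, v).
    # P(h_j = 1 | r_j, v) = 1(r_j = 1) * sigmoid(b_j + sum_i W_ij v_i),
    # so a dropped unit (r_j = 0) is clamped to 0.
    F = W.shape[1]
    r = rng.binomial(1, p, size=F)
    h_prob = r * sigmoid(b_hid + v @ W)
    return rng.binomial(1, h_prob), r

rng = np.random.default_rng(0)
D, F = 6, 4
W = 0.1 * rng.standard_normal((D, F))
b_hid = np.zeros(F)
v = rng.binomial(1, 0.5, size=D)
h, r = sample_hidden_dropout_rbm(v, W, b_hid, p=0.5, rng=rng)
```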
The distribution over v conditioned on h is the same as that of an RBM

    P(v | h) = ∏_{i=1}^{D} P(v_i | h),
    P(v_i = 1 | h) = σ(a_i + Σ_j W_{ij} h_j).

Conditioned on r, the distribution over {v, h} is the same as the distribution that an RBM would impose, except that the units for which r_j = 0 are dropped from h. Therefore, the Dropout RBM model can be seen as a mixture of exponentially many RBMs with shared weights each using a different subset of h.

8.2 Learning Dropout RBMs
Learning algorithms developed for RBMs such as Contrastive Divergence (Hinton et al., 2006) can be directly applied for learning Dropout RBMs. The only difference is that r is first sampled and only the hidden units that are retained are used for training. Similar to dropout neural networks, a different r is sampled for each training case in every minibatch. In our experiments, we use CD-1 for training dropout RBMs.

Figure 12: Features learned on MNIST by 256 hidden unit RBMs. The features are ordered by L2 norm. (a) Without dropout; (b) Dropout with p = 0.5.

8.3 Effect on Features
Dropout in feed-forward networks improved the quality of features by reducing co-adaptations. This section explores whether this effect transfers to Dropout RBMs as well. Figure 12a shows features learned by a binary RBM with 256 hidden units. Figure 12b shows features learned by a dropout RBM with the same number of hidden units. Features
learned by the dropout RBM appear qualitatively different in the sense that they seem to capture features that are coarser compared to the sharply defined stroke-like features in the standard RBM. There seem to be very few dead units in the dropout RBM relative to the standard RBM.

Figure 13: Effect of dropout on sparsity. Left (without dropout): The activation histogram shows that a large number of units have activations away from zero. Right (dropout with p = 0.5): A large number of units have activations close to zero and very few units have high activation.

8.4 Effect on Sparsity
Next, we investigate the effect of dropout RBM training on sparsity of the hidden unit activations. Figure 13a shows the histograms of hidden unit activations and their means on a test mini-batch after training an RBM. Figure 13b shows the same for dropout RBMs. The histograms clearly indicate that the dropout RBMs learn much sparser representations than standard RBMs even when no additional sparsity inducing regularizer is present.

9. Marginalizing Dropout
Dropout can be seen as a way of adding noise to the states of hidden units in a neural network. In this section, we explore the class of models that arise as a result of marginalizing this noise. These models can be seen as deterministic versions of dropout. In contrast to standard ("Monte-Carlo") dropout, these models do not need random bits and it is possible to get gradients for the marginalized loss functions. In this section, we briefly explore these models.

Deterministic algorithms have been proposed that try to learn models that are robust to feature deletion at test time (Globerson and Roweis, 2006). Marginalization in the context of denoising autoencoders has been explored previously (Chen et al., 2012). The marginalization of dropout noise in the context of linear regression was discussed in Srivastava (2013). Wang and Manning (2013) further explored the idea of marginalizing dropout to speed-up training. van der Maaten et al. (2013) investigated different input noise distributions and
the regularizers obtained by marginalizing this noise. Wager et al. (2013) describes how dropout can be seen as an adaptive regularizer.

9.1 Linear Regression
First we explore a very simple case of applying dropout to the classical problem of linear regression. Let X ∈ R^{N×D} be a data matrix of N data points and y ∈ R^N be a vector of targets. Linear regression tries to find a w ∈ R^D that minimizes ||y − Xw||^2.

When the input X is dropped out such that any input dimension is retained with probability p, the input can be expressed as R ∗ X where R ∈ {0, 1}^{N×D} is a random matrix with R_{ij} ∼ Bernoulli(p) and ∗ denotes an element-wise product. Marginalizing the noise, the objective function becomes

    minimize_w  E_{R ∼ Bernoulli(p)} [ ||y − (R ∗ X)w||^2 ].

This reduces to

    minimize_w  ||y − pXw||^2 + p(1 − p) ||Γw||^2,

where Γ = (diag(X^⊤X))^{1/2}. Therefore, dropout with linear regression is equivalent, in expectation, to ridge regression with a particular form for Γ. This form of Γ essentially scales the weight cost for weight w_i by the standard deviation of the i-th dimension of the data. If a particular data dimension varies a lot, the regularizer tries to squeeze its weight more.

Another interesting way to look at this objective is to absorb the factor of p into w. This leads to the following form

    minimize_{w̃}  ||y − Xw̃||^2 + ((1 − p)/p) ||Γw̃||^2,

where w̃ = pw. This makes the dependence of the regularization constant on p explicit. For p close to 1, all the inputs are retained and the regularization constant is small. As more dropout is done (by decreasing p), the regularization constant grows larger.

9.2 Logistic Regression and Deep Networks
For logistic regression and deep neural nets, it is hard to obtain a closed form marginalized model. However, Wang and Manning (2013) showed that in the context of dropout applied to logistic regression, the corresponding marginalized model can be trained approximately. Under reasonable assumptions, the distributions over the inputs to the logistic unit and over the gradients of the marginalized model are Gaussian. Their means and variances can be computed efficiently. This approximate marginalization outperforms Monte-Carlo dropout in terms of training time and generalization performance. However, the assumptions involved in this technique become successively weaker as more layers are added. Therefore, the results are not directly applicable to deep networks.
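The equivalence derived in Section 9.1 can be checked numerically; the sketch below (our own toy example, with arbitrary sizes and a fixed w) compares a Monte-Carlo estimate of the dropout objective with the closed-form ridge-style objective.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, p = 200, 5, 0.7
X = rng.standard_normal((N, D)) * rng.uniform(0.5, 2.0, size=D)  # columns with different scales
y = rng.standard_normal(N)
w = rng.standard_normal(D)

# Monte-Carlo estimate of E_R ||y - (R * X) w||^2 with R_ij ~ Bernoulli(p)
mc = np.mean([np.sum((y - (rng.binomial(1, p, size=X.shape) * X) @ w) ** 2)
              for _ in range(20000)])

# Closed form: ||y - p X w||^2 + p(1 - p) ||Gamma w||^2 with Gamma = diag(X^T X)^(1/2)
Gamma = np.sqrt(np.diag(X.T @ X))
closed = np.sum((y - p * X @ w) ** 2) + p * (1 - p) * np.sum((Gamma * w) ** 2)

print(mc, closed)  # the two numbers agree up to Monte-Carlo error
```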
10. Multiplicative Gaussian Noise
Dropout involves multiplying hidden activations by Bernoulli distributed random variables which take the value 1 with probability p and 0 otherwise. This idea can be generalized by multiplying the activations with random variables drawn from other distributions. We recently discovered that multiplying by a random variable drawn from N(1, 1) works just as well, or perhaps better than using Bernoulli noise. This new form of dropout amounts to adding a Gaussian distributed random variable with zero mean and standard deviation equal to the activation of the unit. That is, each hidden activation h_i is perturbed to h_i + h_i r where r ∼ N(0, 1), or equivalently h_i r where r ∼ N(1, 1). We can generalize this to r ∼ N(1, σ^2) where σ becomes an additional hyperparameter to tune, just like p was in the standard (Bernoulli) dropout. The expected value of the activations remains unchanged, therefore no weight scaling is required at test time.

In this paper, we described dropout as a method where we retain units with probability p at training time and scale down the weights by multiplying them by a factor of p at test time. Another way to achieve the same effect is to scale up the retained activations by multiplying by 1/p at training time and not modifying the weights at test time. These methods are equivalent with appropriate scaling of the learning rate and weight initializations at each layer. Therefore, dropout can be seen as multiplying h_i by a Bernoulli random variable r_b that takes the value 1/p with probability p and 0 otherwise, so that E[r_b] = 1 and Var[r_b] = (1 − p)/p. For the Gaussian multiplicative noise, if we set σ^2 = (1 − p)/p, we end up multiplying h_i by a random variable r_g, where E[r_g] = 1 and Var[r_g] = (1 − p)/p. Therefore, both forms of dropout can be set up so that the random variable being multiplied by has the same mean and variance. However, given these first and second order moments, r_g has the highest entropy and r_b has the lowest. Both these extremes work well, although preliminary experimental results shown in Table 10 suggest that the high entropy case might work slightly better. For each layer, the value of σ in the Gaussian model was set to be √((1 − p)/p) using the p from the corresponding layer in the Bernoulli model.

Table 10: Comparison of classification error % with Bernoulli and Gaussian dropout. For MNIST, the Bernoulli model uses p = 0.5 for the hidden units and p = 0.8 for the input units. For CIFAR-10, we use p = (0.9, 0.75, 0.75, 0.5, 0.5, 0.5) going from the input layer to the top. The value of σ for the Gaussian dropout models was set to be √((1 − p)/p). Results were averaged over 10 different random seeds.

Data Set  Architecture                       Bernoulli dropout  Gaussian dropout
MNIST     2 layers, 1024 units each          1.08 ± 0.04        0.95 ± 0.04
CIFAR-10  3 conv + 2 fully connected layers  12.6 ± 0.1         12.5 ± 0.1

11. Conclusion
Dropout is a technique for improving neural networks by reducing overfitting. Standard backpropagation learning builds up brittle co-adaptations that work for the training data but do not generalize to unseen data. Random dropout breaks up these co-adaptations by
making the presence of any particular hidden unit unreliable. This technique was found to improve the performance of neural nets in a wide variety of application domains including object classification, digit recognition, speech recognition, document classification and analysis of computational biology data. This suggests that dropout is a general technique and is not specific to any domain. Methods that use dropout achieve state-of-the-art results on SVHN, ImageNet, CIFAR-100 and MNIST. Dropout considerably improved the performance of standard neural nets on other data sets as well.

This idea can be extended to Restricted Boltzmann Machines and other graphical models. The central idea of dropout is to take a large model that overfits easily and repeatedly sample and train smaller sub-models from it. RBMs easily fit into this framework. We developed Dropout RBMs and empirically showed that they have certain desirable properties.

One of the drawbacks of dropout is that it increases training time. A dropout network typically takes 2-3 times longer to train than a standard neural network of the same architecture. A major cause of this increase is that the parameter updates are very noisy. Each training case effectively tries to train a different random architecture. Therefore, the gradients that are being computed are not gradients of the final architecture that will be used at test time. Therefore, it is not surprising that training takes a long time. However, it is likely that this stochasticity prevents overfitting. This creates a trade-off between overfitting and training time. With more training time, one can use high dropout and suffer less overfitting. However, one way to obtain some of the benefits of dropout without stochasticity is to marginalize the noise to obtain a regularizer that does the same thing as the dropout procedure, in expectation. We showed that for linear regression this regularizer is a modified form of L2 regularization. For more complicated models, it is not obvious how to obtain an equivalent regularizer. Speeding up dropout is an interesting direction for future work.

Acknowledgments
This research was supported by OGS, NSERC and an Early Researcher Award.

Appendix A. A Practical Guide for Training Dropout Networks
Neural networks are infamous for requiring extensive hyperparameter tuning. Dropout networks are no exception. In this section, we describe heuristics that might be useful for applying dropout.

A.1 Network Size
It is to be expected that dropping units will reduce the capacity of a neural network. If n is the number of hidden units in any layer and p is the probability of retaining a unit, then instead of n hidden units, only pn units will be present after dropout, in expectation. Moreover, this set of pn units will be different each time and the units are not allowed to build co-adaptations freely. Therefore, if an n-sized layer is optimal for a standard neural net on any given task, a good dropout net should have at least n/p units. We found this to be a useful heuristic for setting the number of hidden units in both convolutional and fully connected networks.
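This heuristic amounts to the following trivial computation (a sketch):

```python
import math

def dropout_layer_size(n_standard, p):
    # If an n-unit layer works well for a standard net, give the
    # dropout net at least n / p units in that layer.
    return math.ceil(n_standard / p)

print(dropout_layer_size(1024, 0.5))  # 2048
print(dropout_layer_size(1024, 0.8))  # 1280
```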
A.2 Learning Rate and Momentum

Dropout introduces a significant amount of noise in the gradients compared to standard stochastic gradient descent. Therefore, a lot of gradients tend to cancel each other. In order to make up for this, a dropout net should typically use 10–100 times the learning rate that was optimal for a standard neural net. Another way to reduce the effect of the noise is to use a high momentum. While momentum values of 0.9 are common for standard nets, with dropout we found that values around 0.95 to 0.99 work quite a lot better. Using a high learning rate and/or momentum significantly speeds up learning.

A.3 Max-norm Regularization

Though a large momentum and learning rate speed up learning, they sometimes cause the network weights to grow very large. To prevent this, we can use max-norm regularization. This constrains the norm of the vector of incoming weights at each hidden unit to be bounded by a constant c. Typical values of c range from 3 to 4.

A.4 Dropout Rate

Dropout introduces an extra hyperparameter: the probability of retaining a unit, p. This hyperparameter controls the intensity of dropout; p = 1 implies no dropout and low values of p mean more dropout. Typical values of p for hidden units are in the range 0.5 to 0.8. For input layers, the choice depends on the kind of input. For real-valued inputs (image patches or speech frames), a typical value is 0.8. For hidden layers, the choice of p is coupled with the choice of the number of hidden units n. A smaller p requires a larger n, which slows down training and can lead to underfitting. A large p may not produce enough dropout to prevent overfitting.
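A minimal sketch of how the learning rate, momentum and max-norm recommendations above might be combined is given below. This is an illustration under one common convention (W has shape (n_in, n_hidden), so the incoming weights of a hidden unit form a column), not the authors' GPU implementation, and the default values simply reflect the ranges suggested in this appendix.

```python
import numpy as np

def max_norm_project(W, c=3.0):
    """Constrain the L2 norm of each hidden unit's incoming weight vector
    (a column of W) to be at most c (A.3 suggests c between 3 and 4)."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    return W * np.minimum(1.0, c / np.maximum(norms, 1e-12))

def momentum_step(W, velocity, grad, lr, momentum=0.95, c=3.0):
    """One SGD-with-momentum update followed by the max-norm projection.
    With dropout, lr is typically 10-100x the standard-net learning rate
    and momentum is around 0.95-0.99 (A.2)."""
    velocity = momentum * velocity - lr * grad
    W = W + velocity
    return max_norm_project(W, c), velocity
```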
Appendix B. Detailed Description of Experiments and Data Sets

This section describes the network architectures and training details for the experimental results reported in this paper. The code for reproducing these results can be obtained from http://www.cs.toronto.edu/~nitish/dropout. The implementation is GPU-based. We used the excellent CUDA libraries cudamat (Mnih, 2009) and cuda-convnet (Krizhevsky et al., 2012) to implement our networks.

B.1 MNIST

The MNIST data set consists of 60,000 training and 10,000 test examples, each representing a 28 × 28 digit image. We held out 10,000 random training images for validation. Hyperparameters were tuned on the validation set such that the best validation error was produced after 1 million weight updates. The validation set was then combined with the training set and training was done for 1 million weight updates. This net was used to evaluate the performance on the test set. This way of using the validation set was chosen because we found that it was easy to set up the hyperparameters so that early stopping was not required at all. Therefore, once the hyperparameters were fixed, it made sense to combine the validation and training sets and train for a very long time.

The architectures shown in Figure 4 include all combinations of 2, 3 and 4 layer networks with 1024 and 2048 units in each layer. Thus, there are six architectures in all. For all the architectures (including the ones reported in Table 2), we used p = 0.5 in all hidden layers and p = 0.8 in the input layer. A final momentum of 0.95 and weight constraints with c = 2 were used in all the layers. To test the limits of dropout's regularization power, we also experimented with 2 and 3 layer nets having 4096 and 8192 units. The 2 layer nets gave improvements, as shown in Table 2. However, the 3 layer nets performed slightly worse than the 2 layer ones with the same level of dropout. When we increased dropout, performance improved but not enough to outperform the 2 layer nets.

B.2 SVHN

The SVHN data set consists of approximately 600,000 training images and 26,000 test images. The training set consists of two parts: a standard labeled training set and another set of labeled examples that are easy. A validation set was constructed by taking examples from both parts. Two-thirds of it were taken from the standard set (400 per class) and one-third from the extra set (200 per class), a total of 6000 samples. The same process is used by Sermanet et al. (2012). The inputs were RGB pixels normalized to have zero mean and unit variance. Other preprocessing techniques such as global or local contrast normalization or ZCA whitening did not give any noticeable improvements.

The best architecture that we found uses three convolutional layers, each followed by a max-pooling layer. The convolutional layers have 96, 128 and 256 filters respectively. Each convolutional layer has a 5 × 5 receptive field applied with a stride of 1 pixel. Each max-pooling layer pools 3 × 3 regions at strides of 2 pixels. The convolutional layers are followed by two fully connected hidden layers having 2048 units each. All units use the rectified linear activation function. Dropout was applied to all the layers of the network, with the probability of retaining a unit being p = (0.9, 0.75, 0.75, 0.5, 0.5, 0.5) for the different layers of the network (going from the input to the convolutional layers to the fully connected layers). In addition, the max-norm constraint with c = 4 was used for all the weights. A momentum of 0.95 was used in all the layers. These hyperparameters were tuned using a validation set. Since the training set was quite large, we did not combine the validation set with the training set for final training. We report the test error of the model that had the smallest validation error.

B.3 CIFAR-10 and CIFAR-100

The CIFAR-10 and CIFAR-100 data sets consist of 50,000 training and 10,000 test images each. They have 10 and 100 image categories respectively. These are 32 × 32 color images. We used 5,000 of the training images for validation. We followed a procedure similar to that for MNIST, where we found the best hyperparameters using the validation set and then combined it with the training set. The images were preprocessed by doing global contrast normalization in each color channel followed by ZCA whitening. Global contrast normalization means that for each image and each color channel in that image, we compute the mean of the pixel intensities and subtract it from the channel. ZCA whitening means that we mean-center the data, rotate it onto its principal components, normalize each component and then rotate it back.
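The preprocessing described above can be sketched as follows. This is an illustrative NumPy version, with a small eps added inside the whitening step for numerical stability; the text does not specify such a term, and the array shapes are assumptions.

```python
import numpy as np

def global_contrast_normalize(images):
    """Subtract, for each image and each color channel, the mean pixel
    intensity of that channel.  `images` has shape (N, H, W, 3)."""
    return images - images.mean(axis=(1, 2), keepdims=True)

def zca_whiten(X, eps=1e-5):
    """Mean-center the data, rotate it onto its principal components,
    normalize each component and rotate back.  `X` holds one flattened
    image per row."""
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return X @ whitener
```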
The network architecture and dropout rates are the same as those for SVHN, except the learning rates for the input layer, which had to be set to smaller values.

B.4 TIMIT

The open source Kaldi toolkit (Povey et al., 2011) was used to preprocess the data into log-filter banks. A monophone system was trained to do a forced alignment and to get labels for the speech frames. Dropout neural networks were trained on windows of 21 consecutive frames to predict the label of the central frame. No speaker-dependent operations were performed. The inputs were mean-centered and normalized to have unit variance. We used a probability of retention of p = 0.8 in the input layers and 0.5 in the hidden layers. A max-norm constraint with c = 4 was used in all the layers. A momentum of 0.95 with a high learning rate of 0.1 was used. The learning rate was decayed as $\epsilon_0(1 + t/T)^{-1}$. For DBN pretraining, we trained RBMs using CD-1. The variance of each input unit for the Gaussian RBM was fixed to 1. For finetuning the DBN with dropout, we found that in order to get the best results it was important to use a smaller learning rate (about 0.01). Adding max-norm constraints did not give any improvements.

B.5 Reuters

The Reuters RCV1 corpus contains more than 800,000 documents categorized into 103 classes. These classes are arranged in a tree hierarchy. We created a subset of this data set consisting of 402,738 articles and a vocabulary of 2000 words, comprising 50 categories in which each document belongs to exactly one class. The data was split into equal-sized training and test sets. We tried many network architectures and found that dropout gave improvements in classification accuracy over all of them. However, the improvement was not as significant as that for the image and speech data sets. This might be explained by the fact that this data set is quite big (more than 200,000 training examples) and overfitting is not a very serious problem.

B.6 Alternative Splicing

The alternative splicing data set consists of data for 3665 cassette exons, 1014 RNA features and 4 tissue types derived from 27 mouse tissues. For each input, the target consists of 4 softmax units (one for each tissue type). Each softmax unit has 3 states (inc, exc, nc) which are of biological importance. For each softmax unit, the aim is to predict a distribution over these 3 states that matches the observed distribution from wet lab experiments as closely as possible. The evaluation metric is Code Quality, which is defined as

$$\text{Code Quality} = \sum_{i=1}^{\text{data points}} \; \sum_{t \,\in\, \text{tissue types}} \; \sum_{s \,\in\, \{\text{inc, exc, nc}\}} p_{i,s,t} \log\!\left(\frac{q^t_s(r_i)}{\bar{p}_s}\right),$$

where p_{i,s,t} is the target probability for state s and tissue type t in input i, q^t_s(r_i) is the predicted probability for state s in tissue type t for input r_i, and \bar{p}_s is the average of p_{i,s,t} over i and t.

A two layer dropout network with 1024 units in each layer was trained on this data set. A value of p = 0.5 was used for the hidden layer and p = 0.7 for the input layer. Max-norm regularization with high decaying learning rates was used. Results were averaged across the same 5 folds used by Xiong et al. (2011).
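Reading the metric directly off the formula above, a NumPy translation might look like the following; the array layout (inputs × tissue types × states, holding p_{i,s,t} and q^t_s(r_i)) and the eps guard against log(0) are assumptions made for the sketch.

```python
import numpy as np

def code_quality(targets, predictions, eps=1e-12):
    """Code Quality = sum over i, t, s of p[i,t,s] * log(q[i,t,s] / p_bar[s]),
    where p_bar[s] is the average target probability of state s over all
    inputs i and tissue types t.  Both arrays have shape
    (num_inputs, num_tissue_types, 3)."""
    p_bar = targets.mean(axis=(0, 1))              # average over i and t
    ratio = (predictions + eps) / (p_bar + eps)
    return float(np.sum(targets * np.log(ratio)))
```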
References

M. Chen, Z. Xu, K. Weinberger, and F. Sha. Marginalized denoising autoencoders for domain adaptation. In Proceedings of the 29th International Conference on Machine Learning, pages 767–774. ACM, 2012.

G. E. Dahl, M. Ranzato, A. Mohamed, and G. E. Hinton. Phone recognition with the mean-covariance restricted Boltzmann machine. In Advances in Neural Information Processing Systems 23, pages 469–477, 2010.

O. Dekel, O. Shamir, and L. Xiao. Learning to classify with missing and corrupted features. Machine Learning, 81(2):149–178, 2010.

A. Globerson and S. Roweis. Nightmare at test time: robust learning by feature deletion. In Proceedings of the 23rd International Conference on Machine Learning, pages 353–360. ACM, 2006.

I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In Proceedings of the 30th International Conference on Machine Learning, pages 1319–1327. ACM, 2013.

G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.

G. E. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527–1554, 2006.

K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object recognition? In Proceedings of the International Conference on Computer Vision (ICCV '09). IEEE, 2009.

A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.

A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114, 2012.

Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.

Y. Lin, F. Lv, S. Zhu, M. Yang, T. Cour, K. Yu, L. Cao, Z. Li, M.-H. Tsai, X. Zhou, T. Huang, and T. Zhang. ImageNet classification: fast descriptor coding and large-scale SVM training. Large Scale Visual Recognition Challenge, 2010.

A. Livnat, C. Papadimitriou, N. Pippenger, and M. W. Feldman. Sex, mixability, and modularity. Proceedings of the National Academy of Sciences, 107(4):1452–1457, 2010.

V. Mnih. CUDAMat: a CUDA-based matrix class for Python. Technical Report UTML TR 2009-004, Department of Computer Science, University of Toronto, November 2009.
A. Mohamed, G. E. Dahl, and G. E. Hinton. Acoustic modeling using deep belief networks. IEEE Transactions on Audio, Speech, and Language Processing, 2010.

R. M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag New York, Inc., 1996.

Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011.

S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4(4), 1992.

D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, and K. Vesely. The Kaldi Speech Recognition Toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, 2011.

R. Salakhutdinov and G. Hinton. Deep Boltzmann machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 5, pages 448–455, 2009.

R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th International Conference on Machine Learning. ACM, 2008.

J. Sanchez and F. Perronnin. High-dimensional signature compression for large-scale image classification. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, pages 1665–1672, 2011.

P. Sermanet, S. Chintala, and Y. LeCun. Convolutional neural networks applied to house numbers digit classification. In International Conference on Pattern Recognition (ICPR 2012), 2012.

P. Simard, D. Steinkraus, and J. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, volume 2, pages 958–962, 2003.

J. Snoek, H. Larochelle, and R. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems 25, pages 2960–2968, 2012.

N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, COLT '05, pages 545–560. Springer-Verlag, 2005.

N. Srivastava. Improving Neural Networks with Dropout. Master's thesis, University of Toronto, January 2013.

R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.
A. N. Tikhonov. On the stability of inverse problems. Doklady Akademii Nauk SSSR, 39(5):195–198, 1943.

L. van der Maaten, M. Chen, S. Tyree, and K. Q. Weinberger. Learning with marginalized corrupted features. In Proceedings of the 30th International Conference on Machine Learning, pages 410–418. ACM, 2013.

P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096–1103. ACM, 2008.

P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11:3371–3408, 2010.

S. Wager, S. Wang, and P. Liang. Dropout training as adaptive regularization. In Advances in Neural Information Processing Systems 26, pages 351–359, 2013.

S. Wang and C. D. Manning. Fast dropout training. In Proceedings of the 30th International Conference on Machine Learning, pages 118–126. ACM, 2013.

H. Y. Xiong, Y. Barash, and B. J. Frey. Bayesian prediction of tissue-regulated splicing using RNA sequence and cellular context. Bioinformatics, 27(18):2554–2562, 2011.

M. D. Zeiler and R. Fergus. Stochastic pooling for regularization of deep convolutional neural networks. CoRR, abs/1301.3557, 2013.