Dropout: A Simple Way to Prevent Neural Networks from Overfitting

Journal of Machine Learning Research 15 (2014). Submitted 11/13; Published 6/14

Dropout: A Simple Way to Prevent Neural Networks from Overfitting

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov
Department of Computer Science, University of Toronto, 10 Kings College Road, Rm 3302, Toronto, Ontario, M5S 3G4, Canada.

Editor: Yoshua Bengio

(c) 2014 Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever and Ruslan Salakhutdinov.

Abstract

Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different thinned networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.

Keywords: neural networks, regularization, model combination, deep learning

1. Introduction

Deep neural networks contain multiple non-linear hidden layers and this makes them very expressive models that can learn very complicated relationships between their inputs and outputs. With limited training data, however, many of these complicated relationships will be the result of sampling noise, so they will exist in the training set but not in real test data even if it is drawn from the same distribution. This leads to overfitting and many methods have been developed for reducing it. These include stopping the training as soon as performance on a validation set starts to get worse, introducing weight penalties of various kinds such as L1 and L2 regularization and soft weight sharing (Nowlan and Hinton, 1992).

With unlimited computation, the best way to regularize a fixed-sized model is to average the predictions of all possible settings of the parameters, weighting each setting by its posterior probability given the training data.

Figure 1: Dropout Neural Net Model. Left: A standard neural net with 2 hidden layers. Right: An example of a thinned net produced by applying dropout to the network on the left. Crossed units have been dropped.

This can sometimes be approximated quite well for simple or small models (Xiong et al., 2011; Salakhutdinov and Mnih, 2008), but we would like to approach the performance of the Bayesian gold standard using considerably less computation. We propose to do this by approximating an equally weighted geometric mean of the predictions of an exponential number of learned models that share parameters.

Model combination nearly always improves the performance of machine learning methods. With large neural networks, however, the obvious idea of averaging the outputs of many separately trained nets is prohibitively expensive. Combining several models is most helpful when the individual models are different from each other and, in order to make neural net models different, they should either have different architectures or be trained on different data. Training many different architectures is hard because finding optimal hyperparameters for each architecture is a daunting task and training each large network requires a lot of computation. Moreover, large networks normally require large amounts of training data and there may not be enough data available to train different networks on different subsets of the data. Even if one was able to train many different large networks, using them all at test time is infeasible in applications where it is important to respond quickly.

Dropout is a technique that addresses both these issues. It prevents overfitting and provides a way of approximately combining exponentially many different neural network architectures efficiently. The term "dropout" refers to dropping out units (hidden and visible) in a neural network. By dropping a unit out, we mean temporarily removing it from the network, along with all its incoming and outgoing connections, as shown in Figure 1. The choice of which units to drop is random. In the simplest case, each unit is retained with a fixed probability p independent of other units, where p can be chosen using a validation set or can simply be set at 0.5, which seems to be close to optimal for a wide range of networks and tasks. For the input units, however, the optimal probability of retention is usually closer to 1 than to 0.5.

Figure 2: Left: A unit at training time that is present with probability p and is connected to units in the next layer with weights w. Right: At test time, the unit is always present and the weights are multiplied by p. The output at test time is the same as the expected output at training time.

Applying dropout to a neural network amounts to sampling a "thinned" network from it. The thinned network consists of all the units that survived dropout (Figure 1b). A neural net with n units can be seen as a collection of 2^n possible thinned neural networks. These networks all share weights so that the total number of parameters is still O(n^2), or less. For each presentation of each training case, a new thinned network is sampled and trained. So training a neural network with dropout can be seen as training a collection of 2^n thinned networks with extensive weight sharing, where each thinned network gets trained very rarely, if at all.

At test time, it is not feasible to explicitly average the predictions from exponentially many thinned models. However, a very simple approximate averaging method works well in practice. The idea is to use a single neural net at test time without dropout. The weights of this network are scaled-down versions of the trained weights. If a unit is retained with probability p during training, the outgoing weights of that unit are multiplied by p at test time, as shown in Figure 2. This ensures that for any hidden unit the expected output (under the distribution used to drop units at training time) is the same as the actual output at test time. By doing this scaling, 2^n networks with shared weights can be combined into a single neural network to be used at test time. We found that training a network with dropout and using this approximate averaging method at test time leads to significantly lower generalization error on a wide variety of classification problems compared to training with other regularization methods.
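As a concrete illustration of the weight-scaling rule described above, the sketch below shows how a unit retained with probability p during training has its outgoing weights multiplied by p for the test-time network. This is an illustrative NumPy fragment, not code from the paper; the layer sizes, names and the value of p are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5  # probability of retaining a hidden unit (assumed value)

# Hypothetical trained weights: W_trained[i, j] connects hidden unit i to output unit j.
W_trained = rng.standard_normal((256, 10))

# Training time: each hidden unit is present with probability p,
# so its outgoing weights only contribute when the unit survives.
mask = rng.random(256) < p                       # 1 = unit retained
contribution_train = mask[:, None] * W_trained

# Test time: use all units, but scale the outgoing weights by p so the
# expected input to the next layer matches the training-time expectation:
# E[mask * W] = p * W = W_test.
W_test = p * W_trained
```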

The idea of dropout is not limited to feed-forward neural nets. It can be more generally applied to graphical models such as Boltzmann Machines. In this paper, we introduce the dropout Restricted Boltzmann Machine model and compare it to standard Restricted Boltzmann Machines (RBM). Our experiments show that dropout RBMs are better than standard RBMs in certain respects.

This paper is structured as follows. Section 2 describes the motivation for this idea. Section 3 describes relevant previous work. Section 4 formally describes the dropout model. Section 5 gives an algorithm for training dropout networks. In Section 6, we present our experimental results where we apply dropout to problems in different domains and compare it with other forms of regularization and model combination. Section 7 analyzes the effect of dropout on different properties of a neural network and describes how dropout interacts with the network's hyperparameters. Section 8 describes the Dropout RBM model. In Section 9 we explore the idea of marginalizing dropout. In Appendix A we present a practical guide for training dropout nets. This includes a detailed analysis of the practical considerations involved in choosing hyperparameters when training dropout networks.

2. Motivation

A motivation for dropout comes from a theory of the role of sex in evolution (Livnat et al., 2010). Sexual reproduction involves taking half the genes of one parent and half of the other, adding a very small amount of random mutation, and combining them to produce an offspring. The asexual alternative is to create an offspring with a slightly mutated copy of the parent's genes. It seems plausible that asexual reproduction should be a better way to optimize individual fitness because a good set of genes that have come to work well together can be passed on directly to the offspring. On the other hand, sexual reproduction is likely to break up these co-adapted sets of genes, especially if these sets are large and, intuitively, this should decrease the fitness of organisms that have already evolved complicated co-adaptations. However, sexual reproduction is the way most advanced organisms evolved.

One possible explanation for the superiority of sexual reproduction is that, over the long term, the criterion for natural selection may not be individual fitness but rather mix-ability of genes. The ability of a set of genes to be able to work well with another random set of genes makes them more robust. Since a gene cannot rely on a large set of partners to be present at all times, it must learn to do something useful on its own or in collaboration with a small number of other genes. According to this theory, the role of sexual reproduction is not just to allow useful new genes to spread throughout the population, but also to facilitate this process by reducing complex co-adaptations that would reduce the chance of a new gene improving the fitness of an individual. Similarly, each hidden unit in a neural network trained with dropout must learn to work with a randomly chosen sample of other units. This should make each hidden unit more robust and drive it towards creating useful features on its own without relying on other hidden units to correct its mistakes. However, the hidden units within a layer will still learn to do different things from each other. One might imagine that the net would become robust against dropout by making many copies of each hidden unit, but this is a poor solution for exactly the same reason as replica codes are a poor way to deal with a noisy channel.

A closely related, but slightly different motivation for dropout comes from thinking about successful conspiracies. Ten conspiracies each involving five people is probably a better way to create havoc than one big conspiracy that requires fifty people to all play their parts correctly. If conditions do not change and there is plenty of time for rehearsal, a big conspiracy can work well, but with non-stationary conditions, the smaller the conspiracy the greater its chance of still working. Complex co-adaptations can be trained to work well on a training set, but on novel test data they are far more likely to fail than multiple simpler co-adaptations that achieve the same thing.

3. Related Work

Dropout can be interpreted as a way of regularizing a neural network by adding noise to its hidden units. The idea of adding noise to the states of units has previously been used in the context of Denoising Autoencoders (DAEs) by Vincent et al. (2008, 2010) where noise is added to the input units of an autoencoder and the network is trained to reconstruct the noise-free input.

Our work extends this idea by showing that dropout can be effectively applied in the hidden layers as well and that it can be interpreted as a form of model averaging. We also show that adding noise is not only useful for unsupervised feature learning but can also be extended to supervised learning problems. In fact, our method can be applied to other neuron-based architectures, for example, Boltzmann Machines. While 5% noise typically works best for DAEs, we found that our weight scaling procedure applied at test time enables us to use much higher noise levels. Dropping out 20% of the input units and 50% of the hidden units was often found to be optimal.

Since dropout can be seen as a stochastic regularization technique, it is natural to consider its deterministic counterpart which is obtained by marginalizing out the noise. In this paper, we show that, in simple cases, dropout can be analytically marginalized out to obtain deterministic regularization methods. Recently, van der Maaten et al. (2013) also explored deterministic regularizers corresponding to different exponential-family noise distributions, including dropout (which they refer to as "blankout noise"). However, they apply noise to the inputs and only explore models with no hidden layers. Wang and Manning (2013) proposed a method for speeding up dropout by marginalizing dropout noise. Chen et al. (2012) explored marginalization in the context of denoising autoencoders.

In dropout, we minimize the loss function stochastically under a noise distribution. This can be seen as minimizing an expected loss function. Previous work of Globerson and Roweis (2006); Dekel et al. (2010) explored an alternate setting where the loss is minimized when an adversary gets to pick which units to drop. Here, instead of a noise distribution, the maximum number of units that can be dropped is fixed. However, this work also does not explore models with hidden units.

4. Model Description

This section describes the dropout neural network model. Consider a neural network with L hidden layers. Let l ∈ {1, ..., L} index the hidden layers of the network. Let z^(l) denote the vector of inputs into layer l, and y^(l) denote the vector of outputs from layer l (y^(0) = x is the input). W^(l) and b^(l) are the weights and biases at layer l. The feed-forward operation of a standard neural network (Figure 3a) can be described as (for l ∈ {0, ..., L-1} and any hidden unit i)

z_i^(l+1) = w_i^(l+1) y^(l) + b_i^(l+1),
y_i^(l+1) = f(z_i^(l+1)),

where f is any activation function, for example, f(x) = 1/(1 + exp(-x)). With dropout, the feed-forward operation becomes (Figure 3b)

r_j^(l) ~ Bernoulli(p),
ỹ^(l) = r^(l) ∗ y^(l),
z_i^(l+1) = w_i^(l+1) ỹ^(l) + b_i^(l+1),
y_i^(l+1) = f(z_i^(l+1)).

Figure 3: Comparison of the basic operations of a standard and dropout network. (a) Standard network. (b) Dropout network.

Here ∗ denotes an element-wise product. For any layer l, r^(l) is a vector of independent Bernoulli random variables each of which has probability p of being 1. This vector is sampled and multiplied element-wise with the outputs of that layer, y^(l), to create the thinned outputs ỹ^(l). The thinned outputs are then used as input to the next layer. This process is applied at each layer. This amounts to sampling a sub-network from a larger network. For learning, the derivatives of the loss function are backpropagated through the sub-network. At test time, the weights are scaled as W_test^(l) = p W^(l) as shown in Figure 2. The resulting neural network is used without dropout.
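The following NumPy sketch mirrors the feed-forward equations above for a single layer: sample a Bernoulli(p) vector r, thin the incoming activations element-wise, and apply the affine map and nonlinearity; at test time the weights are scaled by p instead. The layer sizes, the sigmoid choice of f and the function names are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dropout_layer_forward(y_prev, W, b, p, train=True):
    """One layer of the dropout feed-forward pass sketched in Section 4.

    y_prev : outputs of the previous layer, shape (n_prev,)
    W, b   : weights (n_units, n_prev) and biases (n_units,)
    p      : probability of retaining a unit of the incoming layer y_prev
    """
    if train:
        r = (rng.random(y_prev.shape) < p).astype(y_prev.dtype)  # r ~ Bernoulli(p)
        y_thin = r * y_prev                                      # element-wise thinning
        z = W @ y_thin + b
    else:
        # Test time: no sampling; the weights coming out of the dropped layer
        # are scaled by p (equivalently, scale the incoming activations by p).
        z = (p * W) @ y_prev + b
    return sigmoid(z)

# Tiny usage example with made-up sizes.
x = rng.random(784)
W1, b1 = rng.standard_normal((1024, 784)) * 0.01, np.zeros(1024)
h_train = dropout_layer_forward(x, W1, b1, p=0.5, train=True)
h_test = dropout_layer_forward(x, W1, b1, p=0.5, train=False)
```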

5. Learning Dropout Nets

This section describes a procedure for training dropout neural nets.

5.1 Backpropagation

Dropout neural networks can be trained using stochastic gradient descent in a manner similar to standard neural nets. The only difference is that for each training case in a mini-batch, we sample a thinned network by dropping out units. Forward and backpropagation for that training case are done only on this thinned network. The gradients for each parameter are averaged over the training cases in each mini-batch. Any training case which does not use a parameter contributes a gradient of zero for that parameter. Many methods have been used to improve stochastic gradient descent such as momentum, annealed learning rates and L2 weight decay. Those were found to be useful for dropout neural networks as well.

One particular form of regularization was found to be especially useful for dropout: constraining the norm of the incoming weight vector at each hidden unit to be upper bounded by a fixed constant c. In other words, if w represents the vector of weights incident on any hidden unit, the neural network was optimized under the constraint ||w||_2 ≤ c. This constraint was imposed during optimization by projecting w onto the surface of a ball of radius c, whenever w went out of it. This is also called max-norm regularization since it implies that the maximum value that the norm of any weight can take is c. The constant c is a tunable hyperparameter, which is determined using a validation set. Max-norm regularization has been previously used in the context of collaborative filtering (Srebro and Shraibman, 2005). It typically improves the performance of stochastic gradient descent training of deep neural nets, even when no dropout is used.

Although dropout alone gives significant improvements, using dropout along with max-norm regularization, large decaying learning rates and high momentum provides a significant boost over just using dropout. A possible justification is that constraining weight vectors to lie inside a ball of fixed radius makes it possible to use a huge learning rate without the possibility of weights blowing up. The noise provided by dropout then allows the optimization process to explore different regions of the weight space that would have otherwise been difficult to reach. As the learning rate decays, the optimization takes shorter steps, thereby doing less exploration and eventually settles into a minimum.
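A minimal sketch of the max-norm constraint described above, assuming a weight matrix W whose rows hold the incoming weight vector of each hidden unit: after every gradient step, any row whose L2 norm exceeds c is projected back onto the ball of radius c. The names, shapes, learning rate and value of c are assumptions.

```python
import numpy as np

def max_norm_project(W, c):
    """Project each incoming weight vector (one row per hidden unit)
    onto the L2 ball of radius c, enforcing ||w||_2 <= c."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.minimum(1.0, c / np.maximum(norms, 1e-12))
    return W * scale

# Usage after an SGD update (gradient, learning rate and shapes are illustrative):
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 784))
grad_W = rng.standard_normal((1024, 784))
W = W - 0.01 * grad_W           # ordinary SGD step
W = max_norm_project(W, c=3.0)  # enforce the max-norm constraint
```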

5.2 Unsupervised Pretraining

Neural networks can be pretrained using stacks of RBMs (Hinton and Salakhutdinov, 2006), autoencoders (Vincent et al., 2010) or Deep Boltzmann Machines (Salakhutdinov and Hinton, 2009). Pretraining is an effective way of making use of unlabeled data. Pretraining followed by finetuning with backpropagation has been shown to give significant performance boosts over finetuning from random initializations in certain cases.

Dropout can be applied to finetune nets that have been pretrained using these techniques. The pretraining procedure stays the same. The weights obtained from pretraining should be scaled up by a factor of 1/p. This makes sure that for each unit, the expected output from it under random dropout will be the same as the output during pretraining. We were initially concerned that the stochastic nature of dropout might wipe out the information in the pretrained weights. This did happen when the learning rates used during finetuning were comparable to the best learning rates for randomly initialized nets. However, when the learning rates were chosen to be smaller, the information in the pretrained weights seemed to be retained and we were able to get improvements in terms of the final generalization error compared to not using dropout when finetuning.
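A small sketch of the rescaling described above for dropout finetuning of a pretrained net: multiplying the pretrained weights by 1/p makes the expected input to each unit under a Bernoulli(p) dropout mask match what the unit saw during pretraining. The layer dictionary, sizes and value of p are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.5  # retention probability to be used during dropout finetuning (assumed)

# Hypothetical pretrained weight matrices, one per layer.
pretrained = {
    "W1": rng.standard_normal((1024, 784)) * 0.01,
    "W2": rng.standard_normal((1024, 1024)) * 0.01,
}

# Scale up by 1/p before finetuning with dropout: under a Bernoulli(p) mask,
# E[mask * (W / p) @ x] = W @ x, the pretraining-time input to each unit.
finetune_init = {name: W / p for name, W in pretrained.items()}
```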

6. Experimental Results

We trained dropout neural networks for classification problems on data sets in different domains. We found that dropout improved generalization performance on all data sets compared to neural networks that did not use dropout. Table 1 gives a brief description of the data sets. The data sets are:

- MNIST: A standard toy data set of handwritten digits.
- TIMIT: A standard speech benchmark for clean speech recognition.
- CIFAR-10 and CIFAR-100: Tiny natural images (Krizhevsky, 2009).
- Street View House Numbers data set (SVHN): Images of house numbers collected by Google Street View (Netzer et al., 2011).
- ImageNet: A large collection of natural images.
- Reuters-RCV1: A collection of Reuters newswire articles.
- Alternative Splicing data set: RNA features for predicting alternative gene splicing (Xiong et al., 2011).

We chose a diverse set of data sets to demonstrate that dropout is a general technique for improving neural nets and is not specific to any particular application domain. In this section, we present some key results that show the effectiveness of dropout. A more detailed description of all the experiments and data sets is provided in Appendix B.

Table 1: Overview of the data sets used in this paper.

Data Set | Domain | Dimensionality | Training Set | Test Set
MNIST | Vision | 784 (28 x 28 grayscale) | 60K | 10K
SVHN | Vision | 3072 (32 x 32 color) | 600K | 26K
CIFAR-10/100 | Vision | 3072 (32 x 32 color) | 60K | 10K
ImageNet (ILSVRC-2012) | Vision | (color) | 1.2M | 150K
TIMIT | Speech | 2520 (120-dim, 21 frames) | 1.1M frames | 58K frames
Reuters-RCV1 | Text | - | - | 200K
Alternative Splicing | Genetics | - | - | -

6.1 Results on Image Data Sets

We used five image data sets to evaluate dropout: MNIST, SVHN, CIFAR-10, CIFAR-100 and ImageNet. These data sets include different image types and training set sizes. Models which achieve state-of-the-art results on all of these data sets use dropout.

6.1.1 MNIST

The MNIST data set consists of 28 x 28 pixel handwritten digit images. The task is to classify the images into 10 digit classes. Table 2 compares the performance of dropout with other techniques.

Table 2: Comparison of different models on MNIST.

Method | Unit Type | Architecture | Error %
Standard Neural Net (Simard et al., 2003) | Logistic | 2 layers, 800 units | 1.60
SVM Gaussian kernel | NA | NA | 1.40
Dropout NN | Logistic | 3 layers, 1024 units | 1.35
Dropout NN | ReLU | 3 layers, 1024 units | 1.25
Dropout NN + max-norm constraint | ReLU | 3 layers, 1024 units | 1.06
Dropout NN + max-norm constraint | ReLU | 3 layers, 2048 units | 1.04
Dropout NN + max-norm constraint | ReLU | 2 layers, 4096 units | 1.01
Dropout NN + max-norm constraint | ReLU | 2 layers, 8192 units | 0.95
Dropout NN + max-norm constraint (Goodfellow et al., 2013) | Maxout | 2 layers, (5 x 240) units | 0.94
DBN + finetuning (Hinton and Salakhutdinov, 2006) | Logistic | - | -
DBM + finetuning (Salakhutdinov and Hinton, 2009) | Logistic | - | -
DBN + dropout finetuning | Logistic | - | -
DBM + dropout finetuning | Logistic | - | 0.79

The best performing neural networks for the permutation invariant setting that do not use dropout or unsupervised pretraining achieve an error of about 1.60% (Simard et al., 2003).

With dropout the error reduces to 1.35%. Replacing logistic units with rectified linear units (ReLUs) (Jarrett et al., 2009) further reduces the error to 1.25%. Adding max-norm regularization again reduces it to 1.06%. Increasing the size of the network leads to better results. A neural net with 2 layers and 8192 units per layer gets down to 0.95% error. Note that this network has more than 65 million parameters and is being trained on a data set of size 60,000. Training a network of this size to give good generalization error is very hard with standard regularization methods and early stopping. Dropout, on the other hand, prevents overfitting, even in this case. It does not even need early stopping. Goodfellow et al. (2013) showed that results can be further improved to 0.94% by replacing ReLU units with maxout units. All dropout nets use p = 0.5 for hidden units and p = 0.8 for input units. More experimental details can be found in Appendix B.1.

Dropout nets pretrained with stacks of RBMs and Deep Boltzmann Machines also give improvements as shown in Table 2. DBM-pretrained dropout nets achieve a test error of 0.79% which is the best performance ever reported for the permutation invariant setting. We note that it is possible to obtain better results by using 2-D spatial information and augmenting the training set with distorted versions of images from the standard training set. We demonstrate the effectiveness of dropout in that setting on more interesting data sets.

In order to test the robustness of dropout, classification experiments were done with networks of many different architectures keeping all hyperparameters, including p, fixed. Figure 4 shows the test error rates obtained for these different architectures as training progresses. The same architectures trained with and without dropout have drastically different test errors, as seen by the two separate clusters of trajectories. Dropout gives a huge improvement across all architectures, without using hyperparameters that were tuned specifically for each architecture.

Figure 4: Test error for different architectures with and without dropout. The networks have 2 to 4 hidden layers each with 1024 to 2048 units.

6.1.2 Street View House Numbers

The Street View House Numbers (SVHN) Data Set (Netzer et al., 2011) consists of color images of house numbers collected by Google Street View. Figure 5a shows some examples of images from this data set. The part of the data set that we use in our experiments consists of 32 x 32 color images roughly centered on a digit in a house number. The task is to identify that digit.

For this data set, we applied dropout to convolutional neural networks (LeCun et al., 1989). The best architecture that we found has three convolutional layers followed by 2 fully connected hidden layers. All hidden units were ReLUs. Each convolutional layer was followed by a max-pooling layer.

Appendix B.2 describes the architecture in more detail. Dropout was applied to all the layers of the network with the probability of retaining a hidden unit being p = (0.9, 0.75, 0.75, 0.5, 0.5, 0.5) for the different layers of the network (going from input to convolutional layers to fully connected layers). Max-norm regularization was used for weights in both convolutional and fully connected layers.

Table 3: Results on the Street View House Numbers data set.

Method | Error %
Binary Features (WDCH) (Netzer et al., 2011) | 36.7
HOG (Netzer et al., 2011) | 15.0
Stacked Sparse Autoencoders (Netzer et al., 2011) | 10.3
KMeans (Netzer et al., 2011) | 9.4
Multi-stage Conv Net with average pooling (Sermanet et al., 2012) | 9.06
Multi-stage Conv Net + L2 pooling (Sermanet et al., 2012) | 5.36
Multi-stage Conv Net + L4 pooling + padding (Sermanet et al., 2012) | 4.90
Conv Net + max-pooling | 3.95
Conv Net + max pooling + dropout in fully connected layers | 3.02
Conv Net + stochastic pooling (Zeiler and Fergus, 2013) | 2.80
Conv Net + max pooling + dropout in all layers | 2.55
Conv Net + maxout (Goodfellow et al., 2013) | 2.47
Human Performance | 2.0

Table 3 compares the results obtained by different methods. We find that convolutional nets outperform other methods. The best performing convolutional nets that do not use dropout achieve an error rate of 3.95%. Adding dropout only to the fully connected layers reduces the error to 3.02%. Adding dropout to the convolutional layers as well further reduces the error to 2.55%. Even more gains can be obtained by using maxout units.

The additional gain in performance obtained by adding dropout in the convolutional layers (3.02% to 2.55%) is worth noting. One may have presumed that since the convolutional layers don't have a lot of parameters, overfitting is not a problem and therefore dropout would not have much effect. However, dropout in the lower layers still helps because it provides noisy inputs for the higher fully connected layers which prevents them from overfitting.

6.1.3 CIFAR-10 and CIFAR-100

The CIFAR-10 and CIFAR-100 data sets consist of 32 x 32 color images drawn from 10 and 100 categories respectively. Figure 5b shows some examples of images from this data set. A detailed description of the data sets, input preprocessing, network architectures and other experimental details is given in Appendix B.3. Table 4 shows the error rate obtained by different methods on these data sets. Without any data augmentation, Snoek et al. (2012) used Bayesian hyperparameter optimization to obtain an error rate of 14.98% on CIFAR-10. Using dropout in the fully connected layers reduces that to 14.32% and adding dropout in every layer further reduces the error to 12.61%. Goodfellow et al. (2013) showed that the error is further reduced to 11.68% by replacing ReLU units with maxout units. On CIFAR-100, dropout reduces the error from 43.48% to 37.20% which is a huge improvement. No data augmentation was used for either data set (apart from the input dropout).

Figure 5: Samples from image data sets. Each row corresponds to a different category. (a) Street View House Numbers (SVHN). (b) CIFAR-10.

Table 4: Error rates on CIFAR-10 and CIFAR-100 for: Conv Net + max pooling (hand tuned); Conv Net + stochastic pooling (Zeiler and Fergus, 2013); Conv Net + max pooling (Snoek et al., 2012); Conv Net + max pooling + dropout in fully connected layers; Conv Net + max pooling + dropout in all layers; Conv Net + maxout (Goodfellow et al., 2013).

6.1.4 ImageNet

ImageNet is a data set of over 15 million labeled high-resolution images belonging to roughly 22,000 categories. Starting in 2010, as part of the Pascal Visual Object Challenge, an annual competition called the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has been held. A subset of ImageNet with roughly 1000 images in each of 1000 categories is used in this challenge. Since the number of categories is rather large, it is conventional to report two error rates: top-1 and top-5, where the top-5 error rate is the fraction of test images for which the correct label is not among the five labels considered most probable by the model. Figure 6 shows some predictions made by our model on a few test images.

ILSVRC-2010 is the only version of ILSVRC for which the test set labels are available, so most of our experiments were performed on this data set. Table 5 compares the performance of different methods. Convolutional nets with dropout outperform other methods by a large margin. The architecture and implementation details are described in detail in Krizhevsky et al. (2012).

Figure 6: Some ImageNet test cases with the 4 most probable labels as predicted by our model. The length of the horizontal bars is proportional to the probability assigned to the labels by the model. Pink indicates ground truth.

Table 5: Results on the ILSVRC-2010 test set (top-1 and top-5 error) for: Sparse Coding (Lin et al., 2010); SIFT + Fisher Vectors (Sanchez and Perronnin, 2011); Conv Net + dropout (Krizhevsky et al., 2012).

Table 6: Results on the ILSVRC-2012 validation/test set (top-1 (val), top-5 (val), top-5 (test)) for: SVM on Fisher Vectors of Dense SIFT and Color Statistics; Avg of classifiers over FVs of SIFT, LBP, GIST and CSIFT; Conv Net + dropout (Krizhevsky et al., 2012); Avg of 5 Conv Nets + dropout (Krizhevsky et al., 2012).

Our model based on convolutional nets and dropout won the ILSVRC-2012 competition. Since the labels for the test set are not available, we report our results on the test set for the final submission and include the validation set results for different variations of our model. Table 6 shows the results from the competition. While the best methods based on standard vision features achieve a top-5 error rate of about 26%, convolutional nets with dropout achieve a test error of about 16% which is a staggering difference. Figure 6 shows some examples of predictions made by our model. We can see that the model makes very reasonable predictions, even when its best guess is not correct.

6.2 Results on TIMIT

Next, we applied dropout to a speech recognition task. We use the TIMIT data set which consists of recordings from 680 speakers covering 8 major dialects of American English reading ten phonetically-rich sentences in a controlled noise-free environment. Dropout neural networks were trained on windows of 21 log-filter bank frames to predict the label of the central frame. No speaker dependent operations were performed. Appendix B.4 describes the data preprocessing and training details. Table 7 compares dropout neural nets with other models.

A 6-layer net gives a phone error rate of 23.4%. Dropout further improves it to 21.8%. We also trained dropout nets starting from pretrained weights. A 4-layer net pretrained with a stack of RBMs gets a phone error rate of 22.7%. With dropout, this reduces to 19.7%. Similarly, for an 8-layer net the error reduces from 20.5% to 19.7%.

Table 7: Phone error rate on the TIMIT core test set.

Method | Phone Error Rate %
NN (6 layers) (Mohamed et al., 2010) | 23.4
Dropout NN (6 layers) | 21.8
DBN-pretrained NN (4 layers) | 22.7
DBN-pretrained NN (6 layers) (Mohamed et al., 2010) | 22.4
DBN-pretrained NN (8 layers) (Mohamed et al., 2010) | 20.7
mcRBM-DBN-pretrained NN (5 layers) (Dahl et al., 2010) | 20.5
DBN-pretrained NN (4 layers) + dropout | 19.7
DBN-pretrained NN (8 layers) + dropout | 19.7

6.3 Results on a Text Data Set

To test the usefulness of dropout in the text domain, we used dropout networks to train a document classifier. We used a subset of the Reuters-RCV1 data set which is a collection of over 800,000 newswire articles from Reuters. These articles cover a variety of topics. The task is to take a bag of words representation of a document and classify it into 50 disjoint topics. Appendix B.5 describes the setup in more detail. Our best neural net which did not use dropout obtained an error rate of 31.05%. Adding dropout reduced the error to 29.62%. We found that the improvement was much smaller compared to that for the vision and speech data sets.

6.4 Comparison with Bayesian Neural Networks

Dropout can be seen as a way of doing an equally-weighted averaging of exponentially many models with shared weights. On the other hand, Bayesian neural networks (Neal, 1996) are the proper way of doing model averaging over the space of neural network structures and parameters. In dropout, each model is weighted equally, whereas in a Bayesian neural network each model is weighted taking into account the prior and how well the model fits the data, which is the more correct approach. Bayesian neural nets are extremely useful for solving problems in domains where data is scarce such as medical diagnosis, genetics, drug discovery and other computational biology applications. However, Bayesian neural nets are slow to train and difficult to scale to very large network sizes. Besides, it is expensive to get predictions from many large nets at test time. On the other hand, dropout neural nets are much faster to train and use at test time. In this section, we report experiments that compare Bayesian neural nets with dropout neural nets on a small data set where Bayesian neural networks are known to perform well and obtain state-of-the-art results. The aim is to analyze how much dropout loses compared to Bayesian neural nets.

The data set that we use (Xiong et al., 2011) comes from the domain of genetics. The task is to predict the occurrence of alternative splicing based on RNA features. Alternative splicing is a significant cause of cellular diversity in mammalian tissues.

Predicting the occurrence of alternative splicing in certain tissues under different conditions is important for understanding many human diseases. Given the RNA features, the task is to predict the probability of three splicing related events that biologists care about. The evaluation metric is Code Quality, which is a measure of the negative KL divergence between the target and the predicted probability distributions (higher is better). Appendix B.6 includes a detailed description of the data set and this performance metric.

Table 8: Results on the Alternative Splicing Data Set.

Method | Code Quality (bits)
Neural Network (early stopping) (Xiong et al., 2011) | 440
Regression, PCA (Xiong et al., 2011) | 463
SVM, PCA (Xiong et al., 2011) | 487
Neural Network with dropout | 567
Bayesian Neural Network (Xiong et al., 2011) | 623

Table 8 summarizes the performance of different models on this data set. Xiong et al. (2011) used Bayesian neural nets for this task. As expected, we found that Bayesian neural nets perform better than dropout. However, we see that dropout improves significantly upon the performance of standard neural nets and outperforms all other methods. The challenge in this data set is to prevent overfitting since the size of the training set is small. One way to prevent overfitting is to reduce the input dimensionality using PCA. Thereafter, standard techniques such as SVMs or logistic regression can be used. However, with dropout we were able to prevent overfitting without the need to do dimensionality reduction. The dropout nets are very large (1000s of hidden units) compared to a few tens of units in the Bayesian network. This shows that dropout has a strong regularizing effect.

6.5 Comparison with Standard Regularizers

Several regularization methods have been proposed for preventing overfitting in neural networks. These include L2 weight decay (more generally Tikhonov regularization (Tikhonov, 1943)), lasso (Tibshirani, 1996), KL-sparsity and max-norm regularization. Dropout can be seen as another way of regularizing neural networks. In this section we compare dropout with some of these regularization methods using the MNIST data set. The same network architecture with ReLUs was trained using stochastic gradient descent with different regularizations. Table 9 shows the results. The values of different hyperparameters associated with each kind of regularization (decay constants, target sparsity, dropout rate, max-norm upper bound) were obtained using a validation set. We found that dropout combined with max-norm regularization gives the lowest generalization error.

Table 9: Comparison of different regularization methods on MNIST.

Method | Test Classification error %
L2 | -
L2 + L1 applied towards the end of training | 1.60
L2 + KL-sparsity | 1.55
Max-norm | 1.35
Dropout + L2 | -
Dropout + Max-norm | 1.05

7. Salient Features

The experiments described in the previous section provide strong evidence that dropout is a useful technique for improving neural networks. In this section, we closely examine how dropout affects a neural network. We analyze the effect of dropout on the quality of features produced. We see how dropout affects the sparsity of hidden unit activations. We also see how the advantages obtained from dropout vary with the probability of retaining units, size of the network and the size of the training set. These observations give some insight into why dropout works so well.

7.1 Effect on Features

In a standard neural network, the derivative received by each parameter tells it how it should change so the final loss function is reduced, given what all other units are doing. Therefore, units may change in a way that they fix up the mistakes of the other units. This may lead to complex co-adaptations. This in turn leads to overfitting because these co-adaptations do not generalize to unseen data. We hypothesize that for each hidden unit, dropout prevents co-adaptation by making the presence of other hidden units unreliable. Therefore, a hidden unit cannot rely on other specific units to correct its mistakes. It must perform well in a wide variety of different contexts provided by the other hidden units. To observe this effect directly, we look at the first level features learned by neural networks trained on visual tasks with and without dropout.

Figure 7: Features learned on MNIST with one hidden layer autoencoders having 256 rectified linear units. (a) Without dropout. (b) Dropout with p = 0.5.

Figure 7a shows features learned by an autoencoder on MNIST with a single hidden layer of 256 rectified linear units without dropout. Figure 7b shows the features learned by an identical autoencoder which used dropout in the hidden layer with p = 0.5. Both autoencoders had similar test reconstruction errors. However, it is apparent that the features shown in Figure 7a have co-adapted in order to produce good reconstructions. Each hidden unit on its own does not seem to be detecting a meaningful feature. On the other hand, in Figure 7b, the hidden units seem to detect edges, strokes and spots in different parts of the image. This shows that dropout does break up co-adaptations, which is probably the main reason why it leads to lower generalization errors.

7.2 Effect on Sparsity

We found that as a side-effect of doing dropout, the activations of the hidden units become sparse, even when no sparsity inducing regularizers are present. Thus, dropout automatically leads to sparse representations. To observe this effect, we take the autoencoders trained in the previous section and look at the sparsity of hidden unit activations on a random mini-batch taken from the test set. Figure 8a and Figure 8b compare the sparsity for the two models.

Figure 8: Effect of dropout on sparsity. ReLUs were used for both models. (a) Without dropout: the histogram of mean activations shows that most units have a mean activation of about 2.0, and the histogram of activations shows a huge mode away from zero. Clearly, a large fraction of units have high activation. (b) Dropout with p = 0.5: the histogram of mean activations shows that most units have a smaller mean activation of about 0.7, and the histogram of activations shows a sharp peak at zero. Very few units have high activation.

In a good sparse model, there should only be a few highly activated units for any data case. Moreover, the average activation of any unit across data cases should be low. To assess both of these qualities, we plot two histograms for each model. For each model, the histogram on the left shows the distribution of mean activations of hidden units across the mini-batch. The histogram on the right shows the distribution of activations of the hidden units.

Comparing the histograms of activations we can see that fewer hidden units have high activations in Figure 8b compared to Figure 8a, as seen by the significant mass away from zero for the net that does not use dropout. The mean activations are also smaller for the dropout net. The overall mean activation of hidden units is close to 2.0 for the autoencoder without dropout but drops to around 0.7 when dropout is used.
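The comparison described above can be reproduced with a short fragment like the one below, which, given a matrix of hidden activations on a random mini-batch (an assumed random array here, not data from the paper), computes the two histograms: mean activation per hidden unit, and the pooled distribution of all activation values.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ReLU activations: rows are test cases in a mini-batch,
# columns are hidden units.
activations = np.maximum(0.0, rng.standard_normal((100, 256)))

# Histogram 1: mean activation of each hidden unit across the mini-batch.
mean_per_unit = activations.mean(axis=0)
mean_hist, mean_edges = np.histogram(mean_per_unit, bins=20)

# Histogram 2: distribution of all individual activation values.
act_hist, act_edges = np.histogram(activations.ravel(), bins=20)

# A sparse representation shows a sharp peak near zero in act_hist and
# small values in mean_per_unit; a dense one has a large mode away from zero.
```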

7.3 Effect of Dropout Rate

Dropout has a tunable hyperparameter p (the probability of retaining a unit in the network). In this section, we explore the effect of varying this hyperparameter. The comparison is done in two situations:

1. The number of hidden units is held constant.
2. The number of hidden units is changed so that the expected number of hidden units that will be retained after dropout is held constant.

In the first case, we train the same network architecture with different amounts of dropout. No input dropout was used. Figure 9a shows the test error obtained as a function of p. If the architecture is held constant, having a small p means very few units will turn on during training. It can be seen that this has led to underfitting since the training error is also high. We see that as p increases, the error goes down. It becomes flat when 0.4 ≤ p ≤ 0.8 and then increases as p becomes close to 1.

Figure 9: Effect of changing dropout rates on MNIST. (a) Keeping n fixed. (b) Keeping pn fixed.

Another interesting setting is the second case in which the quantity pn is held constant, where n is the number of hidden units in any particular layer. This means that networks that have small p will have a large number of hidden units. Therefore, after applying dropout, the expected number of units that are present will be the same across different architectures. However, the test networks will be of different sizes. In our experiments, we set pn = 256 for the first two hidden layers and pn = 512 for the last hidden layer. Figure 9b shows the test error obtained as a function of p. We notice that the magnitude of errors for small values of p has reduced by a lot compared to Figure 9a (for p = 0.1 it fell from 2.7% to 1.7%). Values of p that are close to 0.6 seem to perform best for this choice of pn but our usual default value of 0.5 is close to optimal.
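For the second setting above, the layer widths can be chosen so that pn stays constant; a tiny sketch of that bookkeeping follows. The pn targets come from the text, everything else is illustrative.

```python
# Keep the expected number of retained units, p * n, fixed across architectures.
pn_targets = [256, 256, 512]  # first two hidden layers and the last one, as in the text

def layer_sizes(p, targets=pn_targets):
    """Number of hidden units per layer so that p * n equals each target."""
    return [round(t / p) for t in targets]

print(layer_sizes(0.5))  # [512, 512, 1024]
print(layer_sizes(0.1))  # [2560, 2560, 5120]
```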

7.4 Effect of Data Set Size

One test of a good regularizer is that it should make it possible to get good generalization error from models with a large number of parameters trained on small data sets. This section explores the effect of changing the data set size when dropout is used with feed-forward networks. Huge neural networks trained in the standard way overfit massively on small data sets. To see if dropout can help, we run classification experiments on MNIST and vary the amount of data given to the network. The results of these experiments are shown in Figure 10. The network was given data sets of size 100, 500, 1K, 5K, 10K and 50K chosen randomly from the MNIST training set. The same network architecture was used for all data sets. Dropout with p = 0.5 was performed at all the hidden layers and p = 0.8 at the input layer.

Figure 10: Effect of varying data set size.

It can be observed that for extremely small data sets (100, 500) dropout does not give any improvements. The model has enough parameters that it can overfit on the training data, even with all the noise coming from dropout. As the size of the data set is increased, the gain from doing dropout increases up to a point and then declines. This suggests that for any given architecture and dropout rate, there is a "sweet spot" corresponding to some amount of data that is large enough to not be memorized in spite of the noise but not so large that overfitting is not a problem anyways.

7.5 Monte-Carlo Model Averaging vs. Weight Scaling

The efficient test time procedure that we propose is to do an approximate model combination by scaling down the weights of the trained neural network. An expensive but more correct way of averaging the models is to sample k neural nets using dropout for each test case and average their predictions. As k → ∞, this Monte-Carlo model average gets close to the true model average. It is interesting to see empirically how many samples k are needed to match the performance of the approximate averaging method. By computing the error for different values of k we can see how quickly the error rate of the finite-sample average approaches the error rate of the true model average.

Figure 11: Monte-Carlo model averaging vs. weight scaling.

We again use the MNIST data set and do classification by averaging the predictions of k randomly sampled neural networks. Figure 11 shows the test error rate obtained for different values of k. This is compared with the error obtained using the weight scaling method (shown as a horizontal line). It can be seen that around k = 50, the Monte-Carlo method becomes as good as the approximate method. Thereafter, the Monte-Carlo method is slightly better than the approximate method but well within one standard deviation of it. This suggests that the weight scaling method is a fairly good approximation of the true model average.
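The two test-time procedures compared above can be sketched as follows for a single dropout layer: draw k Bernoulli masks and average the k prediction vectors, versus one pass through the network with scaled weights. This is an illustrative NumPy fragment with made-up sizes and parameters, not the experimental code.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical two-layer net: x -> hidden (ReLU, dropout with retention p) -> softmax.
p = 0.5
x = rng.random(784)
W1, b1 = rng.standard_normal((1024, 784)) * 0.01, np.zeros(1024)
W2, b2 = rng.standard_normal((10, 1024)) * 0.01, np.zeros(10)

def predict_with_mask(mask):
    h = np.maximum(0.0, W1 @ x + b1) * mask
    return softmax(W2 @ h + b2)

# Monte-Carlo model averaging: sample k thinned networks and average predictions.
k = 50
mc_pred = np.mean([predict_with_mask(rng.random(1024) < p) for _ in range(k)], axis=0)

# Weight scaling: a single unthinned pass with outgoing weights scaled by p.
h = np.maximum(0.0, W1 @ x + b1)
scaled_pred = softmax((p * W2) @ h + b2)
```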

8. Dropout Restricted Boltzmann Machines

Besides feed-forward neural networks, dropout can also be applied to Restricted Boltzmann Machines (RBM). In this section, we formally describe this model and show some results to illustrate its key properties.

8.1 Model Description

Consider an RBM with visible units v ∈ {0, 1}^D and hidden units h ∈ {0, 1}^F. It defines the following probability distribution

P(h, v; θ) = (1/Z(θ)) exp(v⊤Wh + a⊤h + b⊤v),

where θ = {W, a, b} represents the model parameters and Z is the partition function.

Dropout RBMs are RBMs augmented with a vector of binary random variables r ∈ {0, 1}^F. Each random variable r_j takes the value 1 with probability p, independent of others. If r_j takes the value 1, the hidden unit h_j is retained, otherwise it is dropped from the model. The joint distribution defined by a Dropout RBM can be expressed as

P(r, h, v; p, θ) = P(r; p) P(h, v | r; θ),
P(r; p) = ∏_{j=1}^{F} p^{r_j} (1 - p)^{1 - r_j},
P(h, v | r; θ) = (1/Z'(θ, r)) exp(v⊤Wh + a⊤h + b⊤v) ∏_{j=1}^{F} g(h_j, r_j),
g(h_j, r_j) = 1(r_j = 1) + 1(r_j = 0) 1(h_j = 0).

Z'(θ, r) is the normalization constant. g(h_j, r_j) imposes the constraint that if r_j = 0, h_j must be 0. The distribution over h, conditioned on v and r, is factorial

P(h | r, v) = ∏_{j=1}^{F} P(h_j | r_j, v),
P(h_j = 1 | r_j, v) = 1(r_j = 1) σ(b_j + Σ_i W_ij v_i).

The distribution over v conditioned on h is the same as that of an RBM

P(v | h) = ∏_{i=1}^{D} P(v_i | h),
P(v_i = 1 | h) = σ(a_i + Σ_j W_ij h_j).

Conditioned on r, the distribution over {v, h} is the same as the distribution that an RBM would impose, except that the units for which r_j = 0 are dropped from h. Therefore, the Dropout RBM model can be seen as a mixture of exponentially many RBMs with shared weights, each using a different subset of h.
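A minimal sketch of the conditionals above for a binary dropout RBM: given the mask r, hidden units with r_j = 0 are clamped to 0, the remaining hidden units are sampled from the usual logistic conditional, and the visible conditional is unchanged. Shapes, the parameter initialization and the function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

D, F, p = 784, 256, 0.5            # visible units, hidden units, retention probability
W = rng.standard_normal((D, F)) * 0.01
bias_h = np.zeros(F)               # biases into the hidden units
bias_v = np.zeros(D)               # biases into the visible units

def sample_h_given_v(v, r):
    """Factorial conditional over h: a dropped unit (r_j = 0) is clamped to 0,
    a retained unit is sampled from sigmoid(bias + (W^T v)_j), as in Section 8.1."""
    prob = sigmoid(bias_h + v @ W) * r
    return (rng.random(F) < prob).astype(float), prob

def sample_v_given_h(h):
    """Conditional over v is the same as in a standard RBM."""
    prob = sigmoid(bias_v + W @ h)
    return (rng.random(D) < prob).astype(float), prob

# One Gibbs step under a freshly sampled dropout mask r.
v0 = (rng.random(D) < 0.5).astype(float)
r = (rng.random(F) < p).astype(float)
h0, _ = sample_h_given_v(v0, r)
v1, _ = sample_v_given_h(h0)
```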

8.2 Learning Dropout RBMs

Learning algorithms developed for RBMs such as Contrastive Divergence (Hinton et al., 2006) can be directly applied for learning Dropout RBMs. The only difference is that r is first sampled and only the hidden units that are retained are used for training. Similar to dropout neural networks, a different r is sampled for each training case in every mini-batch. In our experiments, we use CD-1 for training dropout RBMs.

8.3 Effect on Features

Dropout in feed-forward networks improved the quality of features by reducing co-adaptations. This section explores whether this effect transfers to Dropout RBMs as well. Figure 12a shows features learned by a binary RBM with 256 hidden units. Figure 12b shows features learned by a dropout RBM with the same number of hidden units.

Figure 12: Features learned on MNIST by 256 hidden unit RBMs. The features are ordered by L2 norm. (a) Without dropout. (b) Dropout with p = 0.5.
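As a companion to the CD-1 procedure described in Section 8.2, the sketch below shows one contrastive divergence update for a dropout RBM: a fresh mask r is sampled for the training case, the positive and negative statistics are computed with dropped hidden units clamped to zero, and the parameters are updated from their difference. Shapes, initialization and the learning rate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

D, F, p, lr = 784, 256, 0.5, 0.01            # sizes, retention prob., learning rate (assumed)
W = rng.standard_normal((D, F)) * 0.01
bias_h, bias_v = np.zeros(F), np.zeros(D)

def cd1_step(v0):
    """One CD-1 update for a single training case of a dropout RBM."""
    global W, bias_h, bias_v
    r = (rng.random(F) < p).astype(float)    # fresh dropout mask for this case

    # Positive phase: hidden probabilities with dropped units clamped to 0.
    ph0 = sigmoid(bias_h + v0 @ W) * r
    h0 = (rng.random(F) < ph0).astype(float)

    # Negative phase: one step of Gibbs sampling (reconstruction).
    pv1 = sigmoid(bias_v + W @ h0)
    v1 = (rng.random(D) < pv1).astype(float)
    ph1 = sigmoid(bias_h + v1 @ W) * r

    # Contrastive divergence update using only the retained hidden units.
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    bias_h += lr * (ph0 - ph1)
    bias_v += lr * (v0 - v1)

cd1_step((rng.random(D) < 0.5).astype(float))
```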

Boosting as a Regularized Path to a Maximum Margin Classifier

Boosting as a Regularized Path to a Maximum Margin Classifier Journal of Machne Learnng Research 5 (2004) 941 973 Submtted 5/03; Revsed 10/03; Publshed 8/04 Boostng as a Regularzed Path to a Maxmum Margn Classfer Saharon Rosset Data Analytcs Research Group IBM T.J.

More information

Sequential DOE via dynamic programming

Sequential DOE via dynamic programming IIE Transactons (00) 34, 1087 1100 Sequental DOE va dynamc programmng IRAD BEN-GAL 1 and MICHAEL CARAMANIS 1 Department of Industral Engneerng, Tel Avv Unversty, Ramat Avv, Tel Avv 69978, Israel E-mal:

More information

Who are you with and Where are you going?

Who are you with and Where are you going? Who are you wth and Where are you gong? Kota Yamaguch Alexander C. Berg Lus E. Ortz Tamara L. Berg Stony Brook Unversty Stony Brook Unversty, NY 11794, USA {kyamagu, aberg, leortz, tlberg}@cs.stonybrook.edu

More information

Ensembling Neural Networks: Many Could Be Better Than All

Ensembling Neural Networks: Many Could Be Better Than All Artfcal Intellgence, 22, vol.37, no.-2, pp.239-263. @Elsever Ensemblng eural etworks: Many Could Be Better Than All Zh-Hua Zhou*, Janxn Wu, We Tang atonal Laboratory for ovel Software Technology, anng

More information

Algebraic Point Set Surfaces

Algebraic Point Set Surfaces Algebrac Pont Set Surfaces Gae l Guennebaud Markus Gross ETH Zurch Fgure : Illustraton of the central features of our algebrac MLS framework From left to rght: effcent handlng of very complex pont sets,

More information

MANY of the problems that arise in early vision can be

MANY of the problems that arise in early vision can be IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 26, NO. 2, FEBRUARY 2004 147 What Energy Functons Can Be Mnmzed va Graph Cuts? Vladmr Kolmogorov, Member, IEEE, and Ramn Zabh, Member,

More information

(Almost) No Label No Cry

(Almost) No Label No Cry (Almost) No Label No Cry Gorgo Patrn,, Rchard Nock,, Paul Rvera,, Tbero Caetano,3,4 Australan Natonal Unversty, NICTA, Unversty of New South Wales 3, Ambata 4 Sydney, NSW, Australa {namesurname}@anueduau

More information

Support vector domain description

Support vector domain description Pattern Recognton Letters 20 (1999) 1191±1199 www.elsever.nl/locate/patrec Support vector doman descrpton Davd M.J. Tax *,1, Robert P.W. Dun Pattern Recognton Group, Faculty of Appled Scence, Delft Unversty

More information

Do Firms Maximize? Evidence from Professional Football

Do Firms Maximize? Evidence from Professional Football Do Frms Maxmze? Evdence from Professonal Football Davd Romer Unversty of Calforna, Berkeley and Natonal Bureau of Economc Research Ths paper examnes a sngle, narrow decson the choce on fourth down n the

More information

Face Alignment through Subspace Constrained Mean-Shifts

Face Alignment through Subspace Constrained Mean-Shifts Face Algnment through Subspace Constraned Mean-Shfts Jason M. Saragh, Smon Lucey, Jeffrey F. Cohn The Robotcs Insttute, Carnege Mellon Unversty Pttsburgh, PA 15213, USA {jsaragh,slucey,jeffcohn}@cs.cmu.edu

More information

Complete Fairness in Secure Two-Party Computation

Complete Fairness in Secure Two-Party Computation Complete Farness n Secure Two-Party Computaton S. Dov Gordon Carmt Hazay Jonathan Katz Yehuda Lndell Abstract In the settng of secure two-party computaton, two mutually dstrustng partes wsh to compute

More information

The Relationship between Exchange Rates and Stock Prices: Studied in a Multivariate Model Desislava Dimitrova, The College of Wooster

The Relationship between Exchange Rates and Stock Prices: Studied in a Multivariate Model Desislava Dimitrova, The College of Wooster Issues n Poltcal Economy, Vol. 4, August 005 The Relatonshp between Exchange Rates and Stock Prces: Studed n a Multvarate Model Desslava Dmtrova, The College of Wooster In the perod November 00 to February

More information

The Developing World Is Poorer Than We Thought, But No Less Successful in the Fight against Poverty

The Developing World Is Poorer Than We Thought, But No Less Successful in the Fight against Poverty Publc Dsclosure Authorzed Pol c y Re s e a rc h Wo r k n g Pa p e r 4703 WPS4703 Publc Dsclosure Authorzed Publc Dsclosure Authorzed The Developng World Is Poorer Than We Thought, But No Less Successful

More information

As-Rigid-As-Possible Image Registration for Hand-drawn Cartoon Animations

As-Rigid-As-Possible Image Registration for Hand-drawn Cartoon Animations As-Rgd-As-Possble Image Regstraton for Hand-drawn Cartoon Anmatons Danel Sýkora Trnty College Dubln John Dnglana Trnty College Dubln Steven Collns Trnty College Dubln source target our approach [Papenberg

More information

Why Don t We See Poverty Convergence?

Why Don t We See Poverty Convergence? Why Don t We See Poverty Convergence? Martn Ravallon 1 Development Research Group, World Bank 1818 H Street NW, Washngton DC, 20433, USA Abstract: We see sgns of convergence n average lvng standards amongst

More information

As-Rigid-As-Possible Shape Manipulation

As-Rigid-As-Possible Shape Manipulation As-Rgd-As-Possble Shape Manpulaton akeo Igarash 1, 3 omer Moscovch John F. Hughes 1 he Unversty of okyo Brown Unversty 3 PRESO, JS Abstract We present an nteractve system that lets a user move and deform

More information

4.3.3 Some Studies in Machine Learning Using the Game of Checkers

4.3.3 Some Studies in Machine Learning Using the Game of Checkers 4.3.3 Some Studes n Machne Learnng Usng the Game of Checkers 535 Some Studes n Machne Learnng Usng the Game of Checkers Arthur L. Samuel Abstract: Two machne-learnng procedures have been nvestgated n some

More information



More information

Assessing health efficiency across countries with a two-step and bootstrap analysis *

Assessing health efficiency across countries with a two-step and bootstrap analysis * Assessng health effcency across countres wth a two-step and bootstrap analyss * Antóno Afonso # $ and Mguel St. Aubyn # February 2007 Abstract We estmate a sem-parametrc model of health producton process

More information

The Global Macroeconomic Costs of Raising Bank Capital Adequacy Requirements

The Global Macroeconomic Costs of Raising Bank Capital Adequacy Requirements W/1/44 The Global Macroeconomc Costs of Rasng Bank Captal Adequacy Requrements Scott Roger and Francs Vtek 01 Internatonal Monetary Fund W/1/44 IMF Workng aper IMF Offces n Europe Monetary and Captal Markets

More information

Stable Distributions, Pseudorandom Generators, Embeddings, and Data Stream Computation

Stable Distributions, Pseudorandom Generators, Embeddings, and Data Stream Computation Stable Dstrbutons, Pseudorandom Generators, Embeddngs, and Data Stream Computaton PIOTR INDYK MIT, Cambrdge, Massachusetts Abstract. In ths artcle, we show several results obtaned by combnng the use of

More information


EVERY GOOD REGULATOR OF A SYSTEM MUST BE A MODEL OF THAT SYSTEM 1 Int. J. Systems Sc., 1970, vol. 1, No. 2, 89-97 EVERY GOOD REGULATOR OF A SYSTEM MUST BE A MODEL OF THAT SYSTEM 1 Roger C. Conant Department of Informaton Engneerng, Unversty of Illnos, Box 4348, Chcago,

More information

Ciphers with Arbitrary Finite Domains

Ciphers with Arbitrary Finite Domains Cphers wth Arbtrary Fnte Domans John Black 1 and Phllp Rogaway 2 1 Dept. of Computer Scence, Unversty of Nevada, Reno NV 89557, USA, jrb@cs.unr.edu, WWW home page: http://www.cs.unr.edu/~jrb 2 Dept. of

More information

can basic entrepreneurship transform the economic lives of the poor?

can basic entrepreneurship transform the economic lives of the poor? can basc entrepreneurshp transform the economc lves of the poor? Orana Bandera, Robn Burgess, Narayan Das, Selm Gulesc, Imran Rasul, Munsh Sulaman Aprl 2013 Abstract The world s poorest people lack captal

More information

Turbulence Models and Their Application to Complex Flows R. H. Nichols University of Alabama at Birmingham

Turbulence Models and Their Application to Complex Flows R. H. Nichols University of Alabama at Birmingham Turbulence Models and Ther Applcaton to Complex Flows R. H. Nchols Unversty of Alabama at Brmngham Revson 4.01 CONTENTS Page 1.0 Introducton 1.1 An Introducton to Turbulent Flow 1-1 1. Transton to Turbulent

More information

TrueSkill Through Time: Revisiting the History of Chess

TrueSkill Through Time: Revisiting the History of Chess TrueSkll Through Tme: Revstng the Hstory of Chess Perre Dangauther INRIA Rhone Alpes Grenoble, France perre.dangauther@mag.fr Ralf Herbrch Mcrosoft Research Ltd. Cambrdge, UK rherb@mcrosoft.com Tom Mnka

More information



More information

DISCUSSION PAPER. Should Urban Transit Subsidies Be Reduced? Ian W.H. Parry and Kenneth A. Small

DISCUSSION PAPER. Should Urban Transit Subsidies Be Reduced? Ian W.H. Parry and Kenneth A. Small DISCUSSION PAPER JULY 2007 RFF DP 07-38 Should Urban Transt Subsdes Be Reduced? Ian W.H. Parry and Kenneth A. Small 1616 P St. NW Washngton, DC 20036 202-328-5000 www.rff.org Should Urban Transt Subsdes

More information

Income per natural: Measuring development as if people mattered more than places

Income per natural: Measuring development as if people mattered more than places Income per natural: Measurng development as f people mattered more than places Mchael A. Clemens Center for Global Development Lant Prtchett Kennedy School of Government Harvard Unversty, and Center for

More information

Finance and Economics Discussion Series Divisions of Research & Statistics and Monetary Affairs Federal Reserve Board, Washington, D.C.

Finance and Economics Discussion Series Divisions of Research & Statistics and Monetary Affairs Federal Reserve Board, Washington, D.C. Fnance and Economcs Dscusson Seres Dvsons of Research & Statstcs and Monetary Affars Federal Reserve Board, Washngton, D.C. Banks as Patent Fxed Income Investors Samuel G. Hanson, Andre Shlefer, Jeremy

More information