Flexible, High Performance Convolutional Neural Networks for Image Classification
Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence

Dan C. Cireşan, Ueli Meier, Jonathan Masci, Luca M. Gambardella, Jürgen Schmidhuber
IDSIA, USI and SUPSI
Galleria 2, 6928 Manno-Lugano, Switzerland

Abstract

We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51% and 0.35%, respectively. Deep nets trained by simple back-propagation perform better than shallower ones. Learning is surprisingly rapid: NORB is completely trained within five epochs, and test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.

1 Introduction

The human visual system efficiently recognizes and localizes objects within cluttered scenes. For artificial systems this is still difficult, owing to viewpoint-dependent object variability and the high in-class variability of many object types. Deep hierarchical neural models roughly mimic the nature of mammalian visual cortex, and by community consensus are among the most promising architectures for such tasks. The most successful hierarchical object recognition systems all extract localized features from input images, convolving image patches with filters. Filter responses are then repeatedly sub-sampled and re-filtered, resulting in a deep feed-forward network architecture whose output feature vectors are eventually classified. One of the first hierarchical neural systems was the Neocognitron [Fukushima, 1980], which inspired many of the more recent variants.
Unsupervised learning methods applied to patches of natural images tend to produce localized filters that resemble off-center-on-surround filters, orientation-sensitive bar detectors and Gabor filters [Schmidhuber et al., 1996; Olshausen and Field, 1997; Hoyer and Hyvärinen, 2000]. These findings, in conjunction with experimental studies of the visual cortex, justify the use of such filters in the so-called standard model for object recognition [Riesenhuber and Poggio, 1999; Serre et al., 2007; Mutch and Lowe, 2008], whose filters are fixed, in contrast to those of Convolutional Neural Networks (CNNs) [LeCun et al., 1998; Behnke, 2003; Simard et al., 2003], whose weights (filters) are randomly initialized and changed in a supervised way using back-propagation (BP).

Despite the hardware progress of the past decades, computational speed is still a limiting factor for CNN architectures characterized by many building blocks typically set by trial and error. To systematically test the impact of various architectures on classification performance, we present a fast CNN implementation on Graphics Processing Units (GPUs). Previous GPU implementations of CNNs [Chellapilla et al., 2006; Uetz and Behnke, 2009; Strigl et al., 2010] were hard-coded to satisfy GPU hardware constraints or used general-purpose libraries, whereas our implementation is flexible and fully online (i.e., weight updates after each image). A notable exception is [Jarrett et al., 2009], who performed a thorough analysis of the influence of all building blocks of a multistage architecture on recognition performance. Our implementation allows for training large CNNs within days instead of months, such that we can investigate the influence of various structural parameters by exploring large parameter spaces [Pinto et al., 2009] and performing error analysis on repeated experiments.
We evaluate various networks on the handwritten digit benchmark MNIST [LeCun et al., 1998] and two image classification benchmarks: NORB [LeCun et al., 2004] and CIFAR10 [Krizhevsky, 2009].

2 Convolutional neural networks

CNNs are hierarchical neural networks whose convolutional layers alternate with subsampling layers, reminiscent of simple and complex cells in the primary visual cortex [Wiesel and Hubel, 1959]. CNNs vary in how convolutional and subsampling layers are realized and in how the nets are trained.

2.1 Image processing layer

The image processing layer is an optional pre-processing layer of predefined filters that are kept fixed during training. It can provide the network with additional information besides the raw input image, such as edges and gradients. In particular, we find that a contrast-extracting layer [Fukushima, 2003] helps to improve the recognition rate for NORB.
2.2 Convolutional layer

A convolutional layer is parametrized by the size and the number of the maps, kernel sizes, skipping factors, and the connection table. Each layer has M maps of equal size (M_x, M_y). A kernel (blue rectangle in Fig. 1) of size (K_x, K_y) is shifted over the valid region of the input image (i.e., the kernel has to be completely inside the image). The skipping factors S_x and S_y define how many pixels the filter/kernel skips in x- and y-direction between subsequent convolutions. The size of the output map is then defined as:

    M_x^n = (M_x^{n-1} - K_x^n) / (S_x^n + 1) + 1;    M_y^n = (M_y^{n-1} - K_y^n) / (S_y^n + 1) + 1,    (1)

where index n indicates the layer. Each map in layer L^n is connected to at most M^{n-1} maps in layer L^{n-1}. Neurons of a given map share their weights but have different receptive fields.

2.3 Max-pooling layer

The biggest architectural difference between our implementation and the CNN of [LeCun et al., 1998] is the use of a max-pooling layer instead of a sub-sampling layer. No such layer is used by [Simard et al., 2003], who simply skip nearby pixels prior to convolution instead of pooling or averaging. [Scherer et al., 2010] found that max-pooling can lead to faster convergence, select superior invariant features, and improve generalization. A theoretical analysis of feature pooling in general and max-pooling in particular is given by [Boureau et al., 2010]. The output of the max-pooling layer is given by the maximum activation over non-overlapping rectangular regions of size (K_x, K_y). Max-pooling enables position invariance over larger local regions and downsamples the input image by a factor of K_x and K_y along each direction.
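As an illustration, Eq. (1) and the max-pooling downsampling can be checked with a small helper (a hypothetical Python sketch, not part of the paper's CUDA implementation):

```python
def conv_output_size(m_prev, k, s):
    """Output map size along one axis, following Eq. (1):
    M^n = (M^{n-1} - K^n) / (S^n + 1) + 1."""
    size, rem = divmod(m_prev - k, s + 1)
    if rem != 0:
        raise ValueError("kernel and skipping factor do not tile the input")
    return size + 1

def pool_output_size(m_prev, k):
    """Non-overlapping max-pooling downsamples by a factor of K."""
    if m_prev % k != 0:
        raise ValueError("pooling region does not tile the map")
    return m_prev // k

# A 29-wide map, 5-wide kernel and skipping factor 1 give a 13-wide output,
# matching the example of Fig. 2.
print(conv_output_size(29, 5, 1))  # 13
```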
2.4 Classification layer

Kernel sizes of convolutional filters and max-pooling rectangles as well as skipping factors are chosen such that either the output maps of the last convolutional layer are downsampled to 1 pixel per map, or a fully connected layer combines the outputs of the topmost convolutional layer into a 1D feature vector. The top layer is always fully connected, with one output unit per class label.

3 GPU implementation

The latest generation of NVIDIA GPUs, the 400 and 500 series (we use GTX 480 & GTX 580), has many advantages over older GPUs, most notably the presence of a R/W L2 global cache for device memory. This permits faster programs and simplifies writing the code; the corresponding transfer of complexity into hardware alleviates many software and optimization problems. Our experiments show that the CNN program becomes 2-3 times faster just by switching from GTX 285 to GTX 480.

Manual optimization of CUDA code is very time-consuming and error-prone. We optimize for the new architecture, relying on the L2 cache for many of the device memory accesses instead of manually writing code that uses textures and shared memory. Code obtained by this pragmatic strategy is fast enough. We use the following types of optimization: pre-computed expressions, unrolled loops within template kernels, strided matrices to obtain coalesced memory accesses, and registers wherever possible. Additional manual optimizations are possible in case future image classification problems require even more computing power.

Figure 1: Architecture of a convolutional neural network with fully connected layers, kernel sizes of 5×5 and skipping factors of 1.

3.1 Data structures

Both outputs y and deltas δ of layer L^n are 2D strided. Their original size is M_x × M_y, but they are horizontally strided with a pitch of 32 floats (we use this stride for all 2D data), resulting in coalesced memory accesses.
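The 32-float pitch can be sketched as a rounding helper (hypothetical names; the constant 32 is the stride from the text, and rounding each row up to a multiple of it is our reading of "horizontally strided"):

```python
PITCH = 32  # horizontal stride, in floats, used for all 2D data

def padded_width(m_x, pitch=PITCH):
    """Round a row of M_x floats up to the next multiple of `pitch`,
    so every row of a map starts on a coalescing-friendly boundary."""
    return ((m_x + pitch - 1) // pitch) * pitch

print(padded_width(29))  # 32
print(padded_width(33))  # 64
```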
The vertical stride avoids additional bounding tests in CUDA kernels. All connections between maps of consecutive layers L^{n-1} and L^n are stored in matrix C^n. Each row of C^n contains all connections that feed into a particular map in layer L^n. Because we aim for a flexible architecture with partially connected layers, in the first column we store the number of previous connections. This index is useful for the Forward Propagation (FP) and Adjusting Weights (AW) CUDA kernels. The second column stores the number of connections, followed by corresponding indices of maps in L^{n-1} connected to the current map. For BP and FP, analogous information about connections is needed; we therefore store backward connections in C_BP. AW requires a list of all map connections (see Subsection 3.4), stored as an array of map index pairs. Dealing with biases in the BP kernel requires knowing where the weights of particular connections start; this information is stored in a 2D array WIDX_BP of size M^n × M^{n-1}.

3.2 Forward propagation

A straightforward way of parallelizing FP is to assign a thread block to each map that has to be computed. For maps with more than 1024 neurons, the job is further split into smaller
blocks by assigning a block to each line of the map, because the number of threads per block is limited (1024 for GTX 480). A one-to-one correspondence between threads and the map's neurons is assumed. Because of weight sharing, threads inside a block can access data in parallel, in particular the same weights and inputs from the previous layer. Each thread starts by initializing its sum with the bias, then loops over all map connections, convolving the appropriate patch of the input map with the corresponding kernel. The output is obtained by passing the sum through a scaled tanh activation function, and is then written to device memory.

3.3 Backward propagation

BP of deltas can be done in two ways: by pushing or by pulling. Pushing deltas means taking each delta from the current layer and computing the corresponding deltas for the previous layer. For an architecture with shared weights this has the disadvantage of being hard to code: each delta from the current layer contributes to many deltas in the previous layer, which translates into a lot of programming. There are two ways of avoiding this: either writing partial deltas to a separate block of memory and then putting everything together by calling another kernel (slow because of a tremendous increase in the number of memory accesses, and the need for another kernel), or using atomic writes (to avoid data hazards) to update deltas (very slow because many writes are serialized). We instead implement pulling deltas, which has almost none of these speed-limiting drawbacks but is a bit more complicated. The (uni- or bi-dimensional) thread grid assigns a (bi- or uni-dimensional) thread block to each map in the previous layer and a thread to each neuron in every map. Similar to FP, for maps with more than 1024 neurons the 2D grid is further split into smaller 1D blocks by assigning a 2D block to each row of the map.
Each thread computes the delta of its corresponding neuron by pulling deltas from the current layer. For every neuron in the previous layer we have to determine the list of neurons in the current layer which are connected to it. Let us consider neuron (i, j) from a map in layer L^{n-1}, and let (x, y) be the coordinates of neurons in maps of L^n that contribute to the delta of neuron (i, j). The (x, y) neuron is connected to K_x × K_y neurons from each connected map in the previous layer. The indices in L^{n-1} of the neurons connected through a kernel to the (x, y) neuron are:

    x(S_x + 1) ≤ i ≤ x(S_x + 1) + K_x − 1,
    y(S_y + 1) ≤ j ≤ y(S_y + 1) + K_y − 1.

We can now compute the inequalities for (x, y):

    (i − K_x + 1)/(S_x + 1) ≤ x ≤ i/(S_x + 1),
    (j − K_y + 1)/(S_y + 1) ≤ y ≤ j/(S_y + 1).

Because (x, y) has to be inside the map, the final inequalities are:

    max(⌈(i − K_x + 1)/(S_x + 1)⌉, 0) ≤ x ≤ min(⌊i/(S_x + 1)⌋, M_x − 1),
    max(⌈(j − K_y + 1)/(S_y + 1)⌉, 0) ≤ y ≤ min(⌊j/(S_y + 1)⌋, M_y − 1).

These inequalities state that the delta of neuron (i, j) from L^{n-1} is computed from deltas of neurons in a rectangular area in maps of L^n (Fig. 2). After summing up the deltas, each thread multiplies the result by the derivative of the activation function.

Figure 2: Back-propagating deltas. A connection between two maps from two consecutive layers is displayed. The map in L^{n-1} has 29×29 neurons; the map in L^n has 13×13 neurons. They are linked through a 5×5 kernel K. Skipping factors of S_x = 1 and S_y = 1 are assumed. Arrows and colors depict the correspondence between neurons in L^{n-1} and their sources in L^n.

3.4 Adjusting weights

FP and BP have a grid over the list of maps, but the AW thread grid is over the list of kernels (filters) between maps of two consecutive layers. The 1D grid has a block for each connection between two maps. Thread blocks are 2D, with a corresponding thread for every kernel weight.
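Returning to back-propagation for a moment: per axis, the final inequalities of Section 3.3 reduce to a clamped integer range, which can be sketched as follows (a hypothetical Python helper; the paper's CUDA kernels compute this per thread):

```python
import math

def delta_source_range(i, k, s, m):
    """Inclusive range [lo, hi] of current-layer coordinates x whose kernels
    cover previous-layer index i along one axis, for kernel width k,
    skipping factor s and current map width m."""
    lo = max(math.ceil((i - k + 1) / (s + 1)), 0)
    hi = min(i // (s + 1), m - 1)
    return lo, hi

# Fig. 2 setting: 29-neuron axis in the previous map, 13 in the current one,
# kernel width 5, skipping factor 1.
print(delta_source_range(0, 5, 1, 13))   # (0, 0)  border neuron, one source
print(delta_source_range(10, 5, 1, 13))  # (3, 5)  interior neuron, three sources
print(delta_source_range(28, 5, 1, 13))  # (12, 12)
```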
The bias weight is included as an entire row of threads, thus requiring thread blocks to have (K_x + 1) × K_y threads. Most of the time these additional K_y threads will wait, thread (0,0) being activated only for blocks that have to process the bias.

4 Experiments

We use a system with a Core i7-920 (2.66 GHz), 12 GB DDR3 and four graphics cards: 2× GTX 480 and 2× GTX 580. The correctness of the CPU version is checked by comparing the analytical gradient with its finite difference approximation. On GPU this is not possible because all computations are performed with single precision floating point numbers. Hence the GPU implementation's correctness is checked by comparing its results to those of a randomly initialized net after training it for several epochs on the more accurate CPU version. Obtaining identical results after trillions of operations is a strong indication of correctness.

The implemented CNN's plain feed-forward architecture is trained using on-line gradient descent. All images from the training set are used for training and also for validation. If deformations are enabled, only the images from the training set will be deformed. Weights are initialized according to a uniform random distribution in the range [−0.05, 0.05]. Each
neuron's activation function is a scaled hyperbolic tangent: y(a) = 1.7159 tanh(0.6666a) [LeCun et al., 1998]. We pick the trained CNN with the lowest validation error, and evaluate it on the test set (Test for best Validation - TfbV). The best test error (bt) is also listed for all experiments. The reported computation times per epoch include training, validation and testing as well as all data transfers.

4.1 Experiments on MNIST

For the MNIST dataset the networks are trained on deformed images, continually generated in on-line fashion. Affine (translation, rotation, scaling, horizontal shearing) and elastic deformations [Simard et al., 2003] are combined. We use a variable learning rate that shrinks by a multiplicative constant after each epoch, starting from 10^-3 and decaying over 500 epochs.

Table 1: Error rates on MNIST test set for randomly connected CNNs with 2 to 6 convolutional layers with M maps and an optional fully connected layer with N neurons; various kernel sizes and skipping factors were used.

    #M, #N in Hidden Layers           TfbV [%]
    20M-60M                           –
    20M-60M-150N                      –
    20M-60M-100M-150N                 –
    20M-40M-60M-80M-100M-120M-150N    0.35

Fully connected convolutional layers lead to an exploding number of network connections and weights, making training of big and deep CNNs for hundreds of epochs impractical even on GPUs. Partial connectivity alleviates this problem and is also biologically more plausible. We reduce the number of connections between convolutional layers in a random way [LeCun et al., 1998; Jarrett et al., 2009]. Table 1 lists results of various networks with 2 to 7 hidden layers with random connections. Additional layers result in better networks, the best one achieving a test error of 0.35% for best validation and a best test error of 0.27%. The best previous CNN result on MNIST is 0.40% [Simard et al., 2003]. A 0.35% error rate was recently also obtained by a big, deep MLP [Ciresan et al., 2010] with many more free parameters.
Deeper nets require more computation time to complete an epoch, but we observe that they also need fewer epochs to achieve good test errors. The deepest CNN from Table 1 reaches 2.42%, 0.97% and 0.48% after one, three and seventeen epochs, respectively. On the other hand, the network with 4 instead of 7 hidden layers reaches 4.71%, 1.58%, 0.68% after one, three and seventeen epochs, achieving a test error below 0.50% after only 34 epochs. This shows once more that deep networks, contrary to common belief, can be trained successfully by back-propagation. Despite the numerous free parameters, deep networks seem to learn faster (better recognition rates after fewer epochs) than shallow ones.

4.2 Experiments on NORB

NORB contains stereo images of 3D objects, hence there are two maps on the input layer. Rotation, scaling, shearing and elastic distortions seem to have a negative impact on generalization. These deformations improve recognition rates for digits that are intrinsically 2D [Ciresan et al., 2010], but seem inadequate for 3D objects.

Initial experiments on NORB show that, unlike with MNIST where we use deformations, the CNN needs only 3 to 6 epochs to reach zero validation error. This allows us to quickly run numerous repetitive experiments with huge networks with hundreds of maps per layer. We decided to use a CNN with five hidden layers: layer1, a convolutional layer with 300 maps, kernel size 6×6 and skipping factors 1×1; layer2, a max-pooling layer over a 2×2 region; layer3, a convolutional layer with 500 maps, kernel size 4×4 and skipping factors 0×0; layer4, a max-pooling layer over a 4×4 region; layer5, a fully connected layer with 500 neurons. The learning rate is multiplied by 0.95 after every epoch. Table 2 summarizes the results of four different experiments by switching on/off translation as well as the fixed image processing layer.
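Propagating map sizes through this architecture with Eq. (1) confirms that the layer parameters fit together (a hypothetical Python check; the 96×96 NORB input size is our assumption from the standard dataset, not stated above):

```python
def conv(m, k, s):   # Eq. (1) along one axis
    return (m - k) // (s + 1) + 1

def pool(m, k):      # non-overlapping max-pooling
    return m // k

m = 96               # assumed NORB input side length
m = conv(m, 6, 1)    # layer1: 300 maps, 6x6 kernels, skipping factors 1x1 -> 46
m = pool(m, 2)       # layer2: 2x2 max-pooling                             -> 23
m = conv(m, 4, 0)    # layer3: 500 maps, 4x4 kernels, skipping factors 0x0 -> 20
m = pool(m, 4)       # layer4: 4x4 max-pooling                             -> 5
print(m)             # 5: the 5x5 maps feed the fully connected layer5
```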
We report the average error rate as well as the standard deviation of N independent runs with identical architectures but different weight initializations. For the first experiment, without translation and without the image processing (IP) layer, an average test error rate of 7.86% is obtained. With additional translations of at most 5%, the average error rate drops to 4.71%, contradicting the common belief that CNNs are translation invariant. These results are on par with or better than others in the literature: 5.90% error rate for a combination of CNNs and SVMs [LeCun et al., 2004] and 5.20% error rate for restricted Boltzmann machines [Nair and Hinton, 2009].

Table 2: Average error rates and standard deviations of N runs for a five hidden layer CNN on the NORB test set (see text for details).

    trans. [%]   IP    TfbV [%]
    0            no    7.86
    5            no    4.71
    0            yes   3.94
    5            yes   2.53

The best previously published result on NORB (2.87%) was obtained by a hierarchical neural network which provides every convolutional layer with a subsampled version plus edge information of the original image [Uetz and Behnke, 2009]. This motivates us to implement a pre-processing layer with fixed filters. We try simple edge masks (Sobel, Scharr), but find that a contrast-extracting layer [Fukushima, 2003] realized by Mexican-hat-shaped filters works best. We use two filters, one with a concentric on-center receptive field and one with a concentric off-center receptive field. The first filter extracts positive contrast in brightness, whereas the latter extracts negative contrast. Each image from the original NORB is filtered; consequently the input of the CNN has six maps: the original image plus the positive and negative contrast for each of the two stereo channels. Using such a pre-processing layer results in lower average error rates, 3.94% without translation and 2.53% with translation.
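A contrast-extracting filter of this kind can be sketched as a difference of Gaussians (the size and widths below are illustrative only; the paper's exact filter parameters are not reproduced here):

```python
import math

def mexican_hat(size, sigma_center, sigma_surround, on_center=True):
    """Difference-of-Gaussians approximation of a Mexican-hat receptive
    field. on_center=True extracts positive contrast (on-center,
    off-surround); on_center=False gives the off-center variant."""
    c = size // 2
    rows = []
    for y in range(size):
        row = []
        for x in range(size):
            r2 = (x - c) ** 2 + (y - c) ** 2
            center = math.exp(-r2 / (2 * sigma_center ** 2)) / (2 * math.pi * sigma_center ** 2)
            surround = math.exp(-r2 / (2 * sigma_surround ** 2)) / (2 * math.pi * sigma_surround ** 2)
            v = center - surround
            row.append(v if on_center else -v)
        rows.append(row)
    return rows

f = mexican_hat(9, 1.0, 2.0)
# Positive at the center, negative in the surround ring.
```

Convolving an image with the on-center and off-center variants and keeping the (rectified) responses yields the positive- and negative-contrast maps described above.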
This result improves the previous state of the art on NORB [Uetz and Behnke, 2009]. Experience with other image datasets tells us that NORB is unusual. The training set has only five instances per class. The resulting poor training set variability makes the nets learn quickly but generalize badly. NORB is the only dataset that profits from a fixed pre-processing layer in a substantial way. For MNIST and CIFAR10 such pre-processing has little or no effect. It is also worth noting that NORB's standard error rate deviation is bigger than CIFAR10's (see Tables 2 and 3): identical nets with different initializations do not produce very consistent results. The best net had an error rate of 1.72%, the worst 3.69%.

4.3 Experiments on CIFAR10

CIFAR10 is a collection of natural color images of 32×32 pixels. It contains 10 classes, each of them with 5000 samples in the training set and 1000 in the test set. The images greatly vary inside each class. They are not necessarily centered, may contain only parts of the object, and have varying backgrounds. All of this makes CIFAR10 the hardest problem addressed in this paper. The CNN has three input maps, one for each color channel (RGB). The CIFAR10 images are relatively small in comparison to NORB's, and force us to use small kernels. The tested CNNs differ only in the number of maps per convolutional and max-pooling layer. All have eight hidden layers: layer1, a convolutional layer with 3×3 kernels and skipping factors of 0×0; layer2, a max-pooling layer over a 3×3 region; layer3, a convolutional layer with 3×3 kernels and skipping factors of 0×0; layer4, a max-pooling layer over a 2×2 region; layer5, a convolutional layer with 3×3 kernels and skipping factors of 0×0; layer6, a max-pooling layer over a 2×2 region; layer7, a fully connected layer with 300 neurons; layer8, a fully connected layer with 100 neurons. As for MNIST, the learning rate shrinks by a multiplicative constant after every epoch.
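With Eq. (1), the layer sequence above reduces the 32×32 input to 1 pixel per map before the fully connected layers, consistent with Section 2.4 (a hypothetical Python check):

```python
def conv(m, k, s):  # Eq. (1) along one axis
    return (m - k) // (s + 1) + 1

m = 32              # CIFAR10 image side length
m = conv(m, 3, 0)   # layer1: 3x3 convolution -> 30
m = m // 3          # layer2: 3x3 max-pooling -> 10
m = conv(m, 3, 0)   # layer3: 3x3 convolution -> 8
m = m // 2          # layer4: 2x2 max-pooling -> 4
m = conv(m, 3, 0)   # layer5: 3x3 convolution -> 2
m = m // 2          # layer6: 2x2 max-pooling -> 1
print(m)            # 1
```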
Results in Table 3 show that without translation the error rate does not drop below 28%; adding edge information does not help at all. Translations have a very positive effect, decreasing the error rate to almost 20%. Contrast extraction filters are better than the Sobel/Scharr filters but still worse than no pre-processing layer at all. Despite some CNN-inherent translation invariance, additional training image translations cause better generalization; additional image processing proved useless though. To see if bigger nets are better, we increase the number of maps per layer from 100 to 200, 300 and 400, respectively (last three rows in Tab. 3). Training time increases exponentially, but the test error decreases, reaching a minimum for nets with 300 maps per layer. Our 19.51% error rate is better than the previous state of the art for this dataset, 20.40% [Coates et al., 2010] and 25.50% [Yu and Zhang, 2010]. Unlike [Coates et al., 2010], however, we use the original images without any particular input normalization. Note that the error rate standard deviations are smaller than those obtained on NORB, that is, different initializations yield consistent results.

Table 3: Average error rates and standard deviations for N runs of an eight hidden layer CNN on the CIFAR10 test set (see text for details). The first five nets have 100 maps per convolutional and max-pooling layer, whereas the sixth, seventh and eighth have 200, 300 and 400 maps per hidden layer, respectively. IP - image processing layer: edge - Sobel and Scharr filters; hat - positive and negative contrast extraction filters.

    trans. [%]; maps   IP     TfbV [%]
    0; 100M            no     –
    0; 100M            edge   –
    5; 100M            no     –
    5; 100M            edge   –
    5; 100M            hat    –
    5; 200M            no     –
    5; 300M            no     19.51
    5; 400M            no     –

4.4 Speedup factor of GPU code

The GPU code scales well with network size. For small nets the speedup is small (but still over 10) since they fit better inside the CPU cache, and GPU resources are underutilized.
For huge nets (e.g., Table 2) the GPU implementation is more than 60 times faster than a compiler-optimized CPU version. Given the flexibility of our GPU version, this is a significant speedup: one epoch takes 35 GPU minutes but more than 35 CPU hours.

5 Conclusion

We presented high-performance GPU-based CNN variants trained by on-line gradient descent. Principal advantages include state-of-the-art generalization capabilities, great flexibility and speed. All structural CNN parameters, such as input image size, number of hidden layers, number of maps per layer, kernel sizes, skipping factors and connection tables, are adaptable to any particular application. We applied our networks to benchmark datasets for digit recognition (MNIST), 3D object recognition (NORB), and natural images (CIFAR10). On MNIST the best network achieved a recognition test error rate of 0.35%, on NORB 2.53% and on CIFAR10 19.51%. Our results raise the bar for all three benchmarks. Currently the particular CNN types discussed in this paper seem to be the best adaptive image recognizers, provided there is a labeled dataset of sufficient size. No unsupervised pretraining is required. Good results require big and deep but sparsely connected CNNs, computationally prohibitive on CPUs but feasible on current GPUs, where our implementation is 10 to 60 times faster than a compiler-optimized CPU version.

Acknowledgment

This work was partially funded by the Swiss Commission for Technology and Innovation (CTI), Project n. IFF: Intelligent Fill in Form.
References

[Behnke, 2003] Sven Behnke. Hierarchical Neural Networks for Image Interpretation, volume 2766 of Lecture Notes in Computer Science. Springer, 2003.
[Boureau et al., 2010] Y-Lan Boureau, Jean Ponce, and Yann LeCun. A theoretical analysis of feature pooling in visual recognition. In International Conference on Machine Learning, 2010.
[Chellapilla et al., 2006] Kumar Chellapilla, Sidd Puri, and Patrice Simard. High performance convolutional neural networks for document processing. In International Workshop on Frontiers in Handwriting Recognition, 2006.
[Ciresan et al., 2010] Dan C. Ciresan, Ueli Meier, Luca M. Gambardella, and Jürgen Schmidhuber. Deep big simple neural nets for handwritten digit recognition. Neural Computation, 22(12), 2010.
[Coates et al., 2010] Adam Coates, Honglak Lee, and Andrew Ng. An analysis of single-layer networks in unsupervised feature learning. In Advances in Neural Information Processing Systems, 2010.
[Fukushima, 1980] Kunihiko Fukushima. Neocognitron: A self-organizing neural network for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4), 1980.
[Fukushima, 2003] Kunihiko Fukushima. Neocognitron for handwritten digit recognition. Neurocomputing, 51, 2003.
[Hoyer and Hyvärinen, 2000] Patrik O. Hoyer and Aapo Hyvärinen. Independent component analysis applied to feature extraction from colour and stereo images. Network: Computation in Neural Systems, 11(3), 2000.
[Jarrett et al., 2009] Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multi-stage architecture for object recognition? In Proc. International Conference on Computer Vision, 2009.
[Krizhevsky, 2009] Alex Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, Computer Science Department, University of Toronto, 2009.
[LeCun et al., 1998] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), November 1998.
[LeCun et al., 2004] Yann LeCun, Fu-Jie Huang, and Leon Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proc. of Computer Vision and Pattern Recognition Conference, 2004.
[Mutch and Lowe, 2008] Jim Mutch and David G. Lowe. Object class recognition and localization using sparse features with limited receptive fields. Int. J. Comput. Vision, 56(6), 2008.
[Nair and Hinton, 2009] Vinod Nair and Geoffrey E. Hinton. 3D object recognition with deep belief nets. In Advances in Neural Information Processing Systems, 2009.
[Olshausen and Field, 1997] Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23), December 1997.
[Pinto et al., 2009] Nicolas Pinto, David Doukhan, James J. DiCarlo, and David D. Cox. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS Computational Biology, 5(11), November 2009.
[Riesenhuber and Poggio, 1999] Maximilian Riesenhuber and Tomaso Poggio. Hierarchical models of object recognition in cortex. Nat. Neurosci., 2(11), 1999.
[Scherer et al., 2010] Dominik Scherer, Andreas Müller, and Sven Behnke. Evaluation of pooling operations in convolutional architectures for object recognition. In International Conference on Artificial Neural Networks, 2010.
[Schmidhuber et al., 1996] J. Schmidhuber, M. Eldracher, and B. Foltin. Semilinear predictability minimization produces well-known feature detectors. Neural Computation, 8(4), 1996.
[Serre et al., 2007] Thomas Serre, Lior Wolf, and Tomaso Poggio. Object recognition with features inspired by visual cortex. In Proc. of Computer Vision and Pattern Recognition Conference, 2007.
[Simard et al., 2003] P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Seventh International Conference on Document Analysis and Recognition, 2003.
[Strigl et al., 2010] Daniel Strigl, Klaus Kofler, and Stefan Podlipnig. Performance and scalability of GPU-based convolutional neural networks. In 18th Euromicro Conference on Parallel, Distributed, and Network-Based Processing, 2010.
[Uetz and Behnke, 2009] Rafael Uetz and Sven Behnke. Large-scale object recognition with CUDA-accelerated hierarchical neural networks. In IEEE International Conference on Intelligent Computing and Intelligent Systems (ICIS), 2009.
[Wiesel and Hubel, 1959] D. H. Wiesel and T. N. Hubel. Receptive fields of single neurones in the cat's striate cortex. J. Physiol., 148, 1959.
[Yu and Zhang, 2010] Kai Yu and Tong Zhang. Improved local coordinate coding using local tangents. In Proceedings of the International Conference on Machine Learning, 2010.
An Analysis of Single-Layer Networks in Unsupervised Feature Learning
An Analysis of Single-Layer Networks in Unsupervised Feature Learning Adam Coates 1, Honglak Lee 2, Andrew Y. Ng 1 1 Computer Science Department, Stanford University {acoates,ang}@cs.stanford.edu 2 Computer
Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition
Unsupervised Learning of Invariant Feature Hierarchies with Applications to Object Recognition Marc Aurelio Ranzato, Fu-Jie Huang, Y-Lan Boureau, Yann LeCun Courant Institute of Mathematical Sciences,
Efficient online learning of a non-negative sparse autoencoder
and Machine Learning. Bruges (Belgium), 28-30 April 2010, d-side publi., ISBN 2-93030-10-2. Efficient online learning of a non-negative sparse autoencoder Andre Lemme, R. Felix Reinhart and Jochen J. Steil
Novelty Detection in image recognition using IRF Neural Networks properties
Novelty Detection in image recognition using IRF Neural Networks properties Philippe Smagghe, Jean-Luc Buessler, Jean-Philippe Urban Université de Haute-Alsace MIPS 4, rue des Frères Lumière, 68093 Mulhouse,
Administrivia. Traditional Recognition Approach. Overview. CMPSCI 370: Intro. to Computer Vision Deep learning
: Intro. to Computer Vision Deep learning University of Massachusetts, Amherst April 19/21, 2016 Instructor: Subhransu Maji Finals (everyone) Thursday, May 5, 1-3pm, Hasbrouck 113 Final exam Tuesday, May
ImageNet Classification with Deep Convolutional Neural Networks
ImageNet Classification with Deep Convolutional Neural Networks Alex Krizhevsky University of Toronto [email protected] Ilya Sutskever University of Toronto [email protected] Geoffrey E. Hinton University
The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization
The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization Adam Coates Andrew Y. Ng Stanford University, 353 Serra Mall, Stanford, CA 94305 [email protected] [email protected]
TRAFFIC sign recognition has direct real-world applications
Traffic Sign Recognition with Multi-Scale Convolutional Networks Pierre Sermanet and Yann LeCun Courant Institute of Mathematical Sciences, New York University {sermanet,yann}@cs.nyu.edu Abstract We apply
Steven C.H. Hoi School of Information Systems Singapore Management University Email: [email protected]
Steven C.H. Hoi School of Information Systems Singapore Management University Email: [email protected] Introduction http://stevenhoi.org/ Finance Recommender Systems Cyber Security Machine Learning Visual
Learning Invariant Features through Topographic Filter Maps
Learning Invariant Features through Topographic Filter Maps Koray Kavukcuoglu Marc Aurelio Ranzato Rob Fergus Yann LeCun Courant Institute of Mathematical Sciences New York University {koray,ranzato,fergus,yann}@cs.nyu.edu
Supporting Online Material for
www.sciencemag.org/cgi/content/full/313/5786/504/dc1 Supporting Online Material for Reducing the Dimensionality of Data with Neural Networks G. E. Hinton* and R. R. Salakhutdinov *To whom correspondence
Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks
Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks Dan C. Cireşan, Alessandro Giusti, Luca M. Gambardella, Jürgen Schmidhuber IDSIA, Dalle Molle Institute for Artificial Intelligence,
Factored 3-Way Restricted Boltzmann Machines For Modeling Natural Images
For Modeling Natural Images Marc Aurelio Ranzato Alex Krizhevsky Geoffrey E. Hinton Department of Computer Science - University of Toronto Toronto, ON M5S 3G4, CANADA Abstract Deep belief nets have been
Using Convolutional Neural Networks for Image Recognition
Using Convolutional Neural Networks for Image Recognition By Samer Hijazi, Rishi Kumar, and Chris Rowen, IP Group, Cadence Convolutional neural networks (CNNs) are widely used in pattern- and image-recognition
Implementation of Neural Networks with Theano. http://deeplearning.net/tutorial/
Implementation of Neural Networks with Theano http://deeplearning.net/tutorial/ Feed Forward Neural Network (MLP) Hidden Layer Object Hidden Layer Object Hidden Layer Object Logistic Regression Object
Efficient Learning of Sparse Representations with an Energy-Based Model
Efficient Learning of Sparse Representations with an Energy-Based Model Marc Aurelio Ranzato Christopher Poultney Sumit Chopra Yann LeCun Courant Institute of Mathematical Sciences New York University,
Pedestrian Detection with RCNN
Pedestrian Detection with RCNN Matthew Chen Department of Computer Science Stanford University [email protected] Abstract In this paper we evaluate the effectiveness of using a Region-based Convolutional
Classifying Manipulation Primitives from Visual Data
Classifying Manipulation Primitives from Visual Data Sandy Huang and Dylan Hadfield-Menell Abstract One approach to learning from demonstrations in robotics is to make use of a classifier to predict if
Università degli Studi di Bologna
Università degli Studi di Bologna DEIS Biometric System Laboratory Incremental Learning by Message Passing in Hierarchical Temporal Memory Davide Maltoni Biometric System Laboratory DEIS - University of
Benchmark Hadoop and Mars: MapReduce on cluster versus on GPU
Benchmark Hadoop and Mars: MapReduce on cluster versus on GPU Heshan Li, Shaopeng Wang The Johns Hopkins University 3400 N. Charles Street Baltimore, Maryland 21218 {heshanli, shaopeng}@cs.jhu.edu 1 Overview
What is the Best Multi-Stage Architecture for Object Recognition?
What is the Best Multi-Stage Architecture for Object Recognition? Kevin Jarrett, Koray Kavukcuoglu, Marc Aurelio Ranzato and Yann LeCun The Courant Institute of Mathematical Sciences New York University,
Neural Networks and Back Propagation Algorithm
Neural Networks and Back Propagation Algorithm Mirza Cilimkovic Institute of Technology Blanchardstown Blanchardstown Road North Dublin 15 Ireland [email protected] Abstract Neural Networks (NN) are important
Lecture 6: Classification & Localization. boris. [email protected]
Lecture 6: Classification & Localization boris. [email protected] 1 Agenda ILSVRC 2014 Overfeat: integrated classification, localization, and detection Classification with Localization Detection. 2 ILSVRC-2014
Introduction to Machine Learning and Data Mining. Prof. Dr. Igor Trajkovski [email protected]
Introduction to Machine Learning and Data Mining Prof. Dr. Igor Trakovski [email protected] Neural Networks 2 Neural Networks Analogy to biological neural systems, the most robust learning systems
Programming Exercise 3: Multi-class Classification and Neural Networks
Programming Exercise 3: Multi-class Classification and Neural Networks Machine Learning November 4, 2011 Introduction In this exercise, you will implement one-vs-all logistic regression and neural networks
Image and Video Understanding
Image and Video Understanding 2VO 710.095 WS Christoph Feichtenhofer, Axel Pinz Slide credits: Many thanks to all the great computer vision researchers on which this presentation relies on. Most material
Handwritten Digit Recognition with a Back-Propagation Network
396 Le Cun, Boser, Denker, Henderson, Howard, Hubbard and Jackel Handwritten Digit Recognition with a Back-Propagation Network Y. Le Cun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard,
6.2.8 Neural networks for data mining
6.2.8 Neural networks for data mining Walter Kosters 1 In many application areas neural networks are known to be valuable tools. This also holds for data mining. In this chapter we discuss the use of neural
Learning to Process Natural Language in Big Data Environment
CCF ADL 2015 Nanchang Oct 11, 2015 Learning to Process Natural Language in Big Data Environment Hang Li Noah s Ark Lab Huawei Technologies Part 1: Deep Learning - Present and Future Talk Outline Overview
Recurrent Neural Networks
Recurrent Neural Networks Neural Computation : Lecture 12 John A. Bullinaria, 2015 1. Recurrent Neural Network Architectures 2. State Space Models and Dynamical Systems 3. Backpropagation Through Time
On Optimization Methods for Deep Learning
Quoc V. Le [email protected] Jiquan Ngiam [email protected] Adam Coates [email protected] Abhik Lahiri [email protected] Bobby Prochnow [email protected] Andrew Y. Ng [email protected]
Visualization of 2D Domains
Visualization of 2D Domains This part of the visualization package is intended to supply a simple graphical interface for 2- dimensional finite element data structures. Furthermore, it is used as the low
Learning and transferring mid-level image representions using convolutional neural networks
Willow project-team Learning and transferring mid-level image representions using convolutional neural networks Maxime Oquab, Léon Bottou, Ivan Laptev, Josef Sivic 1 Image classification (easy) Is there
Binary search tree with SIMD bandwidth optimization using SSE
Binary search tree with SIMD bandwidth optimization using SSE Bowen Zhang, Xinwei Li 1.ABSTRACT In-memory tree structured index search is a fundamental database operation. Modern processors provide tremendous
Texture Cache Approximation on GPUs
Texture Cache Approximation on GPUs Mark Sutherland Joshua San Miguel Natalie Enright Jerger {suther68,enright}@ece.utoronto.ca, [email protected] 1 Our Contribution GPU Core Cache Cache
Neural network software tool development: exploring programming language options
INEB- PSI Technical Report 2006-1 Neural network software tool development: exploring programming language options Alexandra Oliveira [email protected] Supervisor: Professor Joaquim Marques de Sá June 2006
CHAPTER 1 INTRODUCTION
1 CHAPTER 1 INTRODUCTION 1.1 MOTIVATION OF RESEARCH Multicore processors have two or more execution cores (processors) implemented on a single chip having their own set of execution and architectural recourses.
Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 269 Class Project Report
Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 69 Class Project Report Junhua Mao and Lunbo Xu University of California, Los Angeles [email protected] and lunbo
Visualizing Higher-Layer Features of a Deep Network
Visualizing Higher-Layer Features of a Deep Network Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent Dept. IRO, Université de Montréal P.O. Box 6128, Downtown Branch, Montreal, H3C 3J7,
Implementation of Canny Edge Detector of color images on CELL/B.E. Architecture.
Implementation of Canny Edge Detector of color images on CELL/B.E. Architecture. Chirag Gupta,Sumod Mohan K [email protected], [email protected] Abstract In this project we propose a method to improve
EFFICIENT DATA PRE-PROCESSING FOR DATA MINING
EFFICIENT DATA PRE-PROCESSING FOR DATA MINING USING NEURAL NETWORKS JothiKumar.R 1, Sivabalan.R.V 2 1 Research scholar, Noorul Islam University, Nagercoil, India Assistant Professor, Adhiparasakthi College
ultra fast SOM using CUDA
ultra fast SOM using CUDA SOM (Self-Organizing Map) is one of the most popular artificial neural network algorithms in the unsupervised learning category. Sijo Mathew Preetha Joy Sibi Rajendra Manoj A
Intelligent Heuristic Construction with Active Learning
Intelligent Heuristic Construction with Active Learning William F. Ogilvie, Pavlos Petoumenos, Zheng Wang, Hugh Leather E H U N I V E R S I T Y T O H F G R E D I N B U Space is BIG! Hubble Ultra-Deep Field
Class-specific Sparse Coding for Learning of Object Representations
Class-specific Sparse Coding for Learning of Object Representations Stephan Hasler, Heiko Wersing, and Edgar Körner Honda Research Institute Europe GmbH Carl-Legien-Str. 30, 63073 Offenbach am Main, Germany
QCD as a Video Game?
QCD as a Video Game? Sándor D. Katz Eötvös University Budapest in collaboration with Győző Egri, Zoltán Fodor, Christian Hoelbling Dániel Nógrádi, Kálmán Szabó Outline 1. Introduction 2. GPU architecture
Applying Deep Learning to Car Data Logging (CDL) and Driver Assessor (DA) October 22-Oct-15
Applying Deep Learning to Car Data Logging (CDL) and Driver Assessor (DA) October 22-Oct-15 GENIVI is a registered trademark of the GENIVI Alliance in the USA and other countries Copyright GENIVI Alliance
Tattoo Detection for Soft Biometric De-Identification Based on Convolutional NeuralNetworks
1 Tattoo Detection for Soft Biometric De-Identification Based on Convolutional NeuralNetworks Tomislav Hrkać, Karla Brkić, Zoran Kalafatić Faculty of Electrical Engineering and Computing University of
THE HUMAN BRAIN. observations and foundations
THE HUMAN BRAIN observations and foundations brains versus computers a typical brain contains something like 100 billion miniscule cells called neurons estimates go from about 50 billion to as many as
Applying Deep Learning to Enhance Momentum Trading Strategies in Stocks
This version: December 12, 2013 Applying Deep Learning to Enhance Momentum Trading Strategies in Stocks Lawrence Takeuchi * Yu-Ying (Albert) Lee [email protected] [email protected] Abstract We
Making Sense of the Mayhem: Machine Learning and March Madness
Making Sense of the Mayhem: Machine Learning and March Madness Alex Tran and Adam Ginzberg Stanford University [email protected] [email protected] I. Introduction III. Model The goal of our research
Face Recognition For Remote Database Backup System
Face Recognition For Remote Database Backup System Aniza Mohamed Din, Faudziah Ahmad, Mohamad Farhan Mohamad Mohsin, Ku Ruhana Ku-Mahamud, Mustafa Mufawak Theab 2 Graduate Department of Computer Science,UUM
Deep Learning For Text Processing
Deep Learning For Text Processing Jeffrey A. Bilmes Professor Departments of Electrical Engineering & Computer Science and Engineering University of Washington, Seattle http://melodi.ee.washington.edu/~bilmes
Sense Making in an IOT World: Sensor Data Analysis with Deep Learning
Sense Making in an IOT World: Sensor Data Analysis with Deep Learning Natalia Vassilieva, PhD Senior Research Manager GTC 2016 Deep learning proof points as of today Vision Speech Text Other Search & information
Applications of Deep Learning to the GEOINT mission. June 2015
Applications of Deep Learning to the GEOINT mission June 2015 Overview Motivation Deep Learning Recap GEOINT applications: Imagery exploitation OSINT exploitation Geospatial and activity based analytics
Lecture 6. Artificial Neural Networks
Lecture 6 Artificial Neural Networks 1 1 Artificial Neural Networks In this note we provide an overview of the key concepts that have led to the emergence of Artificial Neural Networks as a major paradigm
How To Use Neural Networks In Data Mining
International Journal of Electronics and Computer Science Engineering 1449 Available Online at www.ijecse.org ISSN- 2277-1956 Neural Networks in Data Mining Priyanka Gaur Department of Information and
Real-time Visual Tracker by Stream Processing
Real-time Visual Tracker by Stream Processing Simultaneous and Fast 3D Tracking of Multiple Faces in Video Sequences by Using a Particle Filter Oscar Mateo Lozano & Kuzahiro Otsuka presented by Piotr Rudol
III. SEGMENTATION. A. Origin Segmentation
2012 International Conference on Frontiers in Handwriting Recognition Handwritten English Word Recognition based on Convolutional Neural Networks Aiquan Yuan, Gang Bai, Po Yang, Yanni Guo, Xinting Zhao
Advanced analytics at your hands
2.3 Advanced analytics at your hands Neural Designer is the most powerful predictive analytics software. It uses innovative neural networks techniques to provide data scientists with results in a way previously
Machine Learning. 01 - Introduction
Machine Learning 01 - Introduction Machine learning course One lecture (Wednesday, 9:30, 346) and one exercise (Monday, 17:15, 203). Oral exam, 20 minutes, 5 credit points. Some basic mathematical knowledge
Comparison of K-means and Backpropagation Data Mining Algorithms
Comparison of K-means and Backpropagation Data Mining Algorithms Nitu Mathuriya, Dr. Ashish Bansal Abstract Data mining has got more and more mature as a field of basic research in computer science and
A Genetic Algorithm-Evolved 3D Point Cloud Descriptor
A Genetic Algorithm-Evolved 3D Point Cloud Descriptor Dominik Wȩgrzyn and Luís A. Alexandre IT - Instituto de Telecomunicações Dept. of Computer Science, Univ. Beira Interior, 6200-001 Covilhã, Portugal
GeoImaging Accelerator Pansharp Test Results
GeoImaging Accelerator Pansharp Test Results Executive Summary After demonstrating the exceptional performance improvement in the orthorectification module (approximately fourteen-fold see GXL Ortho Performance
Modeling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines
Modeling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines Marc Aurelio Ranzato Geoffrey E. Hinton Department of Computer Science - University of Toronto 10 King s College Road,
Silverlight for Windows Embedded Graphics and Rendering Pipeline 1
Silverlight for Windows Embedded Graphics and Rendering Pipeline 1 Silverlight for Windows Embedded Graphics and Rendering Pipeline Windows Embedded Compact 7 Technical Article Writers: David Franklin,
Stream Processing on GPUs Using Distributed Multimedia Middleware
Stream Processing on GPUs Using Distributed Multimedia Middleware Michael Repplinger 1,2, and Philipp Slusallek 1,2 1 Computer Graphics Lab, Saarland University, Saarbrücken, Germany 2 German Research
COMBINED NEURAL NETWORKS FOR TIME SERIES ANALYSIS
COMBINED NEURAL NETWORKS FOR TIME SERIES ANALYSIS Iris Ginzburg and David Horn School of Physics and Astronomy Raymond and Beverly Sackler Faculty of Exact Science Tel-Aviv University Tel-A viv 96678,
An Introduction to Deep Learning
Thought Leadership Paper Predictive Analytics An Introduction to Deep Learning Examining the Advantages of Hierarchical Learning Table of Contents 4 The Emergence of Deep Learning 7 Applying Deep-Learning
Artificial Neural Networks and Support Vector Machines. CS 486/686: Introduction to Artificial Intelligence
Artificial Neural Networks and Support Vector Machines CS 486/686: Introduction to Artificial Intelligence 1 Outline What is a Neural Network? - Perceptron learners - Multi-layer networks What is a Support
The Delicate Art of Flower Classification
The Delicate Art of Flower Classification Paul Vicol Simon Fraser University University Burnaby, BC [email protected] Note: The following is my contribution to a group project for a graduate machine learning
Next Generation GPU Architecture Code-named Fermi
Next Generation GPU Architecture Code-named Fermi The Soul of a Supercomputer in the Body of a GPU Why is NVIDIA at Super Computing? Graphics is a throughput problem paint every pixel within frame time
Data Mining Algorithms Part 1. Dejan Sarka
Data Mining Algorithms Part 1 Dejan Sarka Join the conversation on Twitter: @DevWeek #DW2015 Instructor Bio Dejan Sarka ([email protected]) 30 years of experience SQL Server MVP, MCT, 13 books 7+ courses
Image denoising: Can plain Neural Networks compete with BM3D?
Image denoising: Can plain Neural Networks compete with BM3D? Harold C. Burger, Christian J. Schuler, and Stefan Harmeling Max Planck Institute for Intelligent Systems, Tübingen, Germany http://people.tuebingen.mpg.de/burger/neural_denoising/
A 240 G-ops/s Mobile Coprocessor for Deep Neural Networks
A 240 G-ops/s Mobile Coprocessor for Deep Neural Networks (Invited Paper) Vinayak Gokhale, Jonghoon Jin, Aysegul Dundar, Berin Martini and Eugenio Culurciello Electrical and Computer Engineering, Purdue
Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations
Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations Honglak Lee Roger Grosse Rajesh Ranganath Andrew Y. Ng Computer Science Department, Stanford University,
A Content based Spam Filtering Using Optical Back Propagation Technique
A Content based Spam Filtering Using Optical Back Propagation Technique Sarab M. Hameed 1, Noor Alhuda J. Mohammed 2 Department of Computer Science, College of Science, University of Baghdad - Iraq ABSTRACT
Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches
Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches PhD Thesis by Payam Birjandi Director: Prof. Mihai Datcu Problematic
Medical Image Processing on the GPU. Past, Present and Future. Anders Eklund, PhD Virginia Tech Carilion Research Institute [email protected].
Medical Image Processing on the GPU Past, Present and Future Anders Eklund, PhD Virginia Tech Carilion Research Institute [email protected] Outline Motivation why do we need GPUs? Past - how was GPU programming
Deep Learning using Linear Support Vector Machines
Yichuan Tang [email protected] Department of Computer Science, University of Toronto. Toronto, Ontario, Canada. Abstract Recently, fully-connected and convolutional neural networks have been trained
Big Data Deep Learning: Challenges and Perspectives
Big Data Deep Learning: Challenges and Perspectives D.saraswathy Department of computer science and engineering IFET college of engineering Villupuram [email protected] Abstract Deep
American International Journal of Research in Science, Technology, Engineering & Mathematics
American International Journal of Research in Science, Technology, Engineering & Mathematics Available online at http://www.iasir.net ISSN (Print): 2328-349, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629
How To Fix Out Of Focus And Blur Images With A Dynamic Template Matching Algorithm
IJSTE - International Journal of Science Technology & Engineering Volume 1 Issue 10 April 2015 ISSN (online): 2349-784X Image Estimation Algorithm for Out of Focus and Blur Images to Retrieve the Barcode
Neural Networks for Machine Learning. Lecture 13a The ups and downs of backpropagation
Neural Networks for Machine Learning Lecture 13a The ups and downs of backpropagation Geoffrey Hinton Nitish Srivastava, Kevin Swersky Tijmen Tieleman Abdel-rahman Mohamed A brief history of backpropagation
