1 SIGNAL INTERPRETATION
Lecture 6: ConvNets
February 11, 2016
Heikki Huttunen
Department of Signal Processing, Tampere University of Technology
2 CONVNETS
Continued from the previous slide set.
3 Convolutional Network: Example
- Let's train a convnet with the famous MNIST dataset.
- MNIST consists of 60,000 training and 10,000 test images representing handwritten numbers from US mail.
- Each image is 28x28 pixels and there are 10 categories.
- Generally considered an easy problem: logistic regression gives over 90% accuracy, and a convnet can reach (almost) 100%.
- However, 10 years ago the state-of-the-art error was still over 1%.
4 Convolutional Network: Example

# Training code (modified from mnist_cnn.py at Keras examples)
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import np_utils

# We use the handwritten digit database "MNIST":
# 60000 training and 10000 test images of size 28x28.
(X_train, y_train), (X_test, y_test) = mnist.load_data()

num_featmaps = 32  # This many filters per layer
num_classes = 10   # Digits 0,1,...,9
num_epochs = 50    # Show all samples 50 times
w, h = 5, 5        # Conv window size

# Reshape to (N, 1, 28, 28) and one-hot encode the labels.
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype('float32') / 255.0
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype('float32') / 255.0
Y_train = np_utils.to_categorical(y_train, num_classes)
Y_test = np_utils.to_categorical(y_test, num_classes)

model = Sequential()

# Layer 1: needs input_shape as well.
model.add(Convolution2D(num_featmaps, w, h,
                        input_shape=(1, 28, 28), activation='relu'))

# Layer 2:
model.add(Convolution2D(num_featmaps, w, h, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

# Layer 3: dense layer with 128 nodes.
# Flatten() vectorizes the data: 32x10x10 -> 3200
# (10x10 instead of 14x14 due to border effect).
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))

# Layer 4: last layer producing 10 outputs.
model.add(Dense(num_classes, activation='softmax'))

# Compile and train; the validation data and accuracy metric
# match the training log on the next slide.
model.compile(loss='categorical_crossentropy', optimizer='adadelta',
              metrics=['accuracy'])
model.fit(X_train, Y_train, nb_epoch=num_epochs,
          validation_data=(X_test, Y_test))
5 Convolutional Network: Training Log
- The code runs for about 5-10 minutes on a GPU.
- On a CPU, this would take 1-2 hours (1 epoch ~ 500 s).

Using gpu device 0: Tesla K40m
Using Theano backend.
Compiling model...
Model compilation took 0.1 minutes.
Training...
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [================] 31s loss: ... acc: ... val_loss: ... val_acc: ...
...
Epoch 10/10
60000/60000 [================] 31s loss: ... acc: ... val_loss: ... val_acc: ...
Training (10 epochs) took 5.8 minutes.
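After training, the test-set performance can be checked with model.evaluate. A minimal sketch, assuming the model was compiled with metrics=['accuracy'] as above:

# Evaluate on the held-out test set: returns [loss, accuracy].
loss, acc = model.evaluate(X_test, Y_test, verbose=0)
print('Test loss: %.4f, test accuracy: %.4f' % (loss, acc))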
6 Save and Load the Net
- The network can be saved to disk in two parts:
  - Network topology as JSON or YAML: model.to_json() or model.to_yaml(). The resulting string can be written to disk using .write() of a file object.
  - Coefficients are saved in HDF5 format using model.save_weights(). HDF5 is a serialization format similar to .mat or .pkl.
- Alternatively, the net can be pickled, although this is not recommended.
- Read back to memory using model_from_json and load_weights.

Part of the network definition in YAML format.
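A minimal sketch of the save/load round trip (the file names are arbitrary placeholders):

# Save: topology as JSON, coefficients as HDF5.
json_string = model.to_json()
open('mnist_cnn.json', 'w').write(json_string)
model.save_weights('mnist_cnn_weights.h5')

# Load: rebuild the model and restore the coefficients.
from keras.models import model_from_json
model = model_from_json(open('mnist_cnn.json').read())
model.load_weights('mnist_cnn_weights.h5')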
7 Network Structure
- It is possible to look into the filters on the convolutional layers.

# First layer weights (shown on the right):
weights = model.layers[0].get_weights()[0]

- The second layer is difficult to visualize, because the input is 32-dimensional:

# Zeroth layer weights:
>>> model.layers[0].get_weights()[0].shape
(32, 1, 5, 5)
# First layer weights:
>>> model.layers[1].get_weights()[0].shape
(32, 32, 5, 5)

- The dense layer is the 5th (conv, conv, maxpool, dropout, flatten, dense).

# Fifth layer weights map 3200 inputs to 128 outputs.
# This is actually a matrix multiplication.
>>> model.layers[5].get_weights()[0].shape
(3200, 128)
8 Network Activations
- The layer outputs are usually more interesting than the filters.
- These can be visualized as well; for details, see the Keras FAQ and the sketch below.
- Note: the outputs are actually grayscale; they are shown here in color only for better visualization.
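One way to extract intermediate outputs, along the lines described in the Keras FAQ (a sketch; the layer index and input batch are placeholders):

from keras import backend as K

# Function from the input of layer 0 to the output of layer 1.
# K.learning_phase() is needed because the model contains Dropout.
get_activations = K.function([model.layers[0].input, K.learning_phase()],
                             [model.layers[1].output])

# 0 = test phase (dropout disabled); feed one input image.
activations = get_activations([X_train[:1], 0])[0]
print(activations.shape)  # e.g., (1, 32, 20, 20) for the second conv layer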
9 Second Layer Activations
- On the next layer, the figures are downsampled to 12x12.
- This provides spatial invariance: the same activation results even if the input is slightly displaced.
10 DEEP LEARNING HIGHLIGHTS
2015 was Full of Breakthroughs: Let's See Some of Them
11 Image Recognition
- ImageNet is the standard benchmark set for image recognition.
- Classify 256x256 images into 1000 categories, such as person, bike, cheetah, etc.
- Total 1.2M images.
- Many error metrics, including top-5 error: the error rate with 5 guesses.

Picture from Alex Krizhevsky et al., ImageNet Classification with Deep Convolutional Neural Networks, 2012.
12 ILSVRC2012
- ILSVRC¹ 2012 was a game changer: ConvNets dropped the top-5 error from 26.2% to 15.3%.
- The network is now called AlexNet, named after the first author (see previous slide).
- The network contains 8 layers (5 convolutional followed by 3 dense); altogether 60M parameters.

¹ ImageNet Large Scale Visual Recognition Challenge
13 The AlexNet
- The architecture is illustrated in the figure.
- The pipeline is divided into two paths (upper and lower) to fit into the 3 GB of GPU memory available at the time (running on 2 GPUs).
- Introduced many tricks for data augmentation:
  - Left-right flip
  - Crop many subimages (224x224)

Picture from Alex Krizhevsky et al., ImageNet Classification with Deep Convolutional Neural Networks, 2012.
14 ILSVRC2014
- Since 2012, ConvNets have dominated.
- In 2014 there were two almost equal teams:
  - GoogLeNet team with 6.66% top-5 error
  - VGG team with 7.33% top-5 error
- In some subchallenges VGG was the winner.
- GoogLeNet: 22 layers, only 7M parameters due to a fully convolutional structure and the clever inception architecture.
- VGG: 19 layers, 144M parameters.
15 ILSVRC2015
- Winner: MSRA (Microsoft Research) with 3.57% top-5 error.
- 152 layers! 51M parameters.
- Built from residual blocks (which include the inception trick from the previous year).
- The key idea is to add identity shortcuts, which make training easier.

Pictures from MSRA ICCV2015 slides.
16 ILSVRC2016?
- ILSVRC2016 will again be full of surprises.
- Most likely extremely deep nets will dominate.
- MSRA has already experimented with a 1202-layer net: still too slow, too prone to overfitting, and slightly worse than the 152-layer model.
- These obstacles may be circumvented with new regularizers, multi-GPU training, and new tools.
17 Back to Earth
- TUT has studied shallow convolutional architectures for fast/real-time detection tasks.
- For example, automatic car type detection from a picture: van, bus, truck or normal vehicle.
- The network recognizes the car type (4 classes) with 98% accuracy.
18 Components of the Network
- Convolution: 5x5 window
- Maxpooling: 2x2 downsampling with the maximum
- Relu: max(x, 0)
- Matrix multiplication
- Softmax: softmax(z)_j = exp(z_j) / sum_k exp(z_k)

(Each component is sketched in NumPy below.)
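A plain-NumPy illustration of these components (a sketch, not the library implementations; the 2x2 maxpooling assumes even input dimensions):

import numpy as np

def relu(x):
    # Elementwise max(x, 0).
    return np.maximum(x, 0)

def maxpool2x2(x):
    # 2x2 downsampling with the maximum; x has shape (H, W), H and W even.
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def softmax(z):
    # Numerically stable softmax: exp(z_j) / sum_k exp(z_k).
    e = np.exp(z - z.max())
    return e / e.sum()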
19 Object Localization
- Object localization attempts to find objects in the image together with their location.
- The most successful approaches are again deep-net based.
- The key question is speed: it is not possible to run the net at all positions and all scales.
- Instead, the framework relies on object proposals: a few hundred bounding boxes that may contain an object.
- Additionally, the process is sped up by computing all conv layers only once.
20 R-CNN
- The tool for ConvNet-based object localization is R-CNN (Regions with CNN).
- History: CVPR2014: R-CNN; ICCV2015: Fast R-CNN; NIPS2015: Faster R-CNN.
- Python implementation of the latter here:
21 Real-Time Applications
- Pedestrian Detection (Google)
- Machine Translation (Google)
- CamFind App (CloudSight)
22 Recurrent Networks
- Recurrent networks process sequences of arbitrary length; e.g.,
  - sequence -> sequence
  - image -> sequence
  - sequence -> class ID
23 Recurrent Networks
- A recurrent net consists of special nodes that remember past states.
- Each node receives 2 inputs: the data and the previous state.
- The most popular recurrent node type is the Long Short-Term Memory (LSTM) node.
- LSTM also includes gates, which can turn the history on/off, and a few additional inputs.

Picture from G. Parascandolo, M.Sc. thesis
24 Recurrent Networks
- An example of LSTM use is from our recent paper.
- We detect acoustic events within 61 categories.
- LSTM is particularly effective because it remembers the past events (i.e., the context).
- In this case we used a bidirectional LSTM, which also remembers the future.
- BLSTM gives a slight improvement over LSTM.

Picture from Parascandolo et al., ICASSP 2016.
25 LSTM in Keras
- LSTM layers can be added to the model like any other layer type; see the sketch below.
- This is an example of natural language modeling: can the network predict the next symbol from the previous ones?
- Accuracy is greatly improved compared to N-grams etc.
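A minimal two-layer LSTM character model in the spirit of the Keras examples (a sketch; maxlen and num_symbols are assumed to come from the data preparation):

from keras.models import Sequential
from keras.layers.core import Dense
from keras.layers.recurrent import LSTM

model = Sequential()
# Input: sequences of maxlen one-hot vectors of dimension num_symbols.
model.add(LSTM(512, return_sequences=True,
               input_shape=(maxlen, num_symbols)))
model.add(LSTM(512, return_sequences=False))
# Output: a probability distribution over the next symbol.
model.add(Dense(num_symbols, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')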
26 Text Modeling
- The input to LSTM should be a sequence of vectors.
- For text modeling, we represent the characters as binary (one-hot) vectors.
- For example, the text "hello_world" has the alphabet {_, d, e, h, l, o, r, w}, and each character maps to a length-8 binary vector with a single 1 at its own position; a sketch of this encoding follows below.
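A sketch of the one-hot encoding and the shifted prediction targets (names such as one_hot are illustrative, not from the slides):

import numpy as np

text = "hello_world"
alphabet = sorted(set(text))  # ['_', 'd', 'e', 'h', 'l', 'o', 'r', 'w']
char_to_idx = {c: i for i, c in enumerate(alphabet)}

def one_hot(s):
    # One row per character, with a single 1 at the character's index.
    X = np.zeros((len(s), len(alphabet)))
    for t, c in enumerate(s):
        X[t, char_to_idx[c]] = 1
    return X

X = one_hot(text[:-1])  # input:  "hello_worl"
Y = one_hot(text[1:])   # target: "ello_world" (input shifted by one step)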
27 Text Modeling
- The prediction target for the net is simply the input shifted by one step.
- For example: we have shown the net the symbols h, e, l, l, o, _, w. Then the network should predict o.

Input:  H E L L O _ W
Target: E L L O _ W O
28 Text Modeling
- A trained LSTM can be used as a text generator; a generation loop is sketched below.
- Show the first character, and set the predicted symbol as the next input.
- Randomize among the top-scoring symbols to avoid static loops.

Input:     H E L L O _ W
Predicted: E L L O _ W O
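A sketch of such a generation loop (model, one_hot, alphabet and maxlen as above; the seed string is a placeholder):

# Generate characters by repeatedly predicting the next symbol
# from the last maxlen characters and feeding it back in.
seed = "hello_worl"  # any string of length maxlen
generated = seed
for _ in range(100):
    x = one_hot(generated[-maxlen:])[np.newaxis, :, :]
    probs = model.predict(x, verbose=0)[0]
    probs = probs / probs.sum()  # guard against floating-point drift
    # Sample the next symbol instead of always taking the argmax.
    nxt = np.random.choice(len(alphabet), p=probs)
    generated += alphabet[nxt]
print(generated)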
29 Many LSTM Layers
- A straightforward extension of LSTM is to use it in multiple layers (typically fewer than 5).
- Below is an example of a two-layered LSTM.
- Note: each blue block is exactly the same LSTM with, e.g., 512 nodes. So is each red block.
30 Training
- An LSTM net can be viewed as a very deep nonrecurrent network.
- The net can be unfolded in time over a sequence of time steps.
- After unfolding, the normal gradient-based learning rules apply.

Picture from G. Parascandolo, M.Sc. thesis
31 Text Modeling Experiment
- Keras includes an example script (lstm_text_generation.py).
- Train a 2-layer LSTM (512 nodes each) by showing it Nietzsche texts.
- A sequence of characters consisting of 59 symbols (uppercase, lowercase, special characters).

Sample of training data
32 Text Modeling Experiment
- The training runs for a few hours on a high-end Nvidia GPU (Tesla K40m).
- At the start, the net knows only a few words, but picks up the vocabulary rather soon.

Generated samples at Epoch 1, Epoch 3 and Epoch 25
33 Text Modeling Experiment
- Let's do the same thing for Finnish text: all discussions from the Suomi24 forum have been released for public use.
- The message is nonsense, but the syntax is close to correct: a foreigner cannot tell the difference.

Generated samples at Epoch 1, Epoch 4 and Epoch 44
34 A Few Samples More