Fast Accurate Fish Detection and Recognition of Underwater Images with Fast R-CNN
Xiu Li 1,2, Min Shang 1,2, Hongwei Qin 1,2, Liansheng Chen 1,2
1. Department of Automation, Tsinghua University, Beijing
2. Graduate School at Shenzhen, Tsinghua University, Shenzhen
[email protected], [email protected], [email protected], [email protected]

Abstract

This paper aims at detecting and recognizing fish species in underwater images by means of Fast R-CNN (Fast Region-based Convolutional Network) features. Encouraged by the powerful recognition results achieved by Convolutional Neural Networks (CNNs) on the generic VOC and ImageNet datasets, we apply this popular deep ConvNet architecture to the domain-specific underwater environment, which is more complicated than overland scenes, using a new dataset of ImageCLEF fish images belonging to 12 classes. The experimental results demonstrate the promising performance of our networks: Fast R-CNN improves mean average precision (mAP) by 11.2% relative to the Deformable Parts Model (DPM) baseline, achieving an mAP of 81.4%, and detects 80x faster than the previous R-CNN on a single fish image.

Keywords: fish detection and recognition, Fast R-CNN, underwater images, deep ConvNets

I. INTRODUCTION

Deep-sea observation systems such as the NEPTUNE and VENUS observatories [6] have been widely used in recent years for marine surveillance, producing a large amount of information-rich underwater videos and images. Estimating fish existence and quantity from these videos and images can help marine biologists understand the natural underwater environment, promote its preservation, and study the behaviors and interactions of the marine animals that are part of it [1]. However, manually analyzing the massive quantity of underwater video and image data generated daily is a tedious and time-consuming job for humans.
Therefore an automatic fish detection and recognition system is of crucial practical importance, to reduce the time-consuming and expensive input required from human observers. So far, however, only limited pattern recognition methods have been used to detect and recognize fish species in underwater images. Mehdi et al. [2] used a Haar classifier to classify shape features modelled by Principal Component Analysis (PCA). Concetto et al. [10] used a moving-average algorithm to balance processing time against accuracy over static scenarios for underwater fish detection. However, these methods tend to involve complicated feature extraction and scale poorly to large amounts of underwater images.

Encouraged by the significant results achieved by deep convolutional neural networks on the generic ImageNet [5] and VOC [16] datasets, we introduce this popular architecture to marine-domain fish detection and recognition for its high computing capability and scalability. Interest in CNNs rose widely after Krizhevsky et al. [3] showed substantially higher image classification accuracy on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [4, 5]; since then, results on a variety of computer vision tasks such as detection, recognition, semantic segmentation, scene classification, and domain adaptation have been improved using CNNs. In 2014, Girshick et al. [7] proposed a simple and scalable detection algorithm called R-CNN that improves mAP by more than 30% relative to the previous best result on VOC 2012, achieving a mean average precision (mAP) of 53.3%. Concurrent with R-CNN, K. He et al. [8] put forward SPP-net, with comparable accuracy and a considerable speedup on the VOC 2007 dataset compared to R-CNN. Inspired by K. He's work, Girshick [9] put forward Fast R-CNN, an improvement of R-CNN, achieving higher mAP and faster speed on the VOC datasets.
Since deep learning has achieved tremendous improvements in visual detection and recognition over thousands of categories [5], we study in this paper the performance of Fast R-CNN in the underwater environment, given large-scale training data and high-performance computational infrastructure. Unlike the high-resolution images of the ImageNet and VOC datasets, oceanic images may be of poor quality because of light scattering, color change, and device limitations. In this paper, we apply Fast R-CNN, with its high accuracy and detection speed, to the complex underwater environment for fish detection and recognition. Figure 1 presents an overview of our method. The networks take as input an RGB image and its roughly 2000 regions of interest (RoIs) collected by selective search [11], and produce a distribution over fish classes as well as related bounding boxes. In summary, the contributions of this paper are three-fold: first, we bring deep learning into the complex underwater environment; second, we build an automatic fish detection and recognition system with Fast R-CNN, achieving convincing performance in detection precision and speed; third, we construct a clean fish dataset with images spanning 12 classes, a subset of the ImageCLEF [1] training and test datasets, to evaluate our system at scale.
Figure 1: Overall architecture of the automatic fish detection and recognition system using Fast R-CNN. The networks take as input an RGB fish image and its roughly 2000 regions of interest (RoIs) collected by selective search, and produce a distribution over fish classes as well as related bounding-box values.

Table 1: ImageCLEF_Fish_TS dataset. Per-species image counts for the train, val, test, and trainval splits over the 12 fish classes (Acanthurus Nigrofuscus, Amphiprion Clarkii, Chaetodon Lunulatus, Chromis Chrysura, Dascyllus Aruanus, Dascyllus Reticulatus, Hemigymnus Fasciatus, Lutjanus Fulvus, Myripristis Kuntee, Neoniphon Sammara, Pempheris Vanicolensis, Plectrogly-Phidodon Dickii).

II. DATASET ACQUISITION

To evaluate our automatic fish detection and recognition system, we use the training and test video datasets of the LifeCLEF Fish task in ImageCLEF [1]. These video datasets are derived from the Fish4Knowledge [12] video repository, which contains about 700k 10-minute video clips taken over the past five years to monitor Taiwan's coral reefs. Each video has one of two fixed resolutions, at 5 to 8 fps. The videos, recorded from sunrise to sunset, show several phenomena, e.g. murky water and algae on the camera lens, which make the fish detection task more complex and make the datasets suitable for proving the robustness and practicality of our system. However, because the fish categories of these video datasets are not balanced in quantity and the ground truth is noisy in its coordinates, we had to manually filter proper fish frames out of the videos for training and testing.

A. Save video frames as images

We collected fish images at the two video resolutions. These images cover 18 fish categories, and some of them are of poor quality: for example, some fish have colors similar to the background, and there can be occlusion between fish and background.
We retained these inferior images because capturing unsatisfying frames is common in the underwater environment, with its equipment faults, and we aim to train a strong, robust network for complicated oceanic conditions.

B. Balance the new fish dataset

As mentioned above, the 18 fish categories are not balanced in quantity, so we discarded the fish classes that occur fewer than 600 times. The final dataset includes images belonging to 12 fish classes, as shown in Table 1 and Figure 2. We set nearly equal proportions for the training, validation, and testing sets.

C. Modify the ground truth of fish

We transformed the per-video annotations of ImageCLEF into per-image annotations for our new dataset. There are some noisy annotations in the fish classes and bounding-box values: for example, the same fish class may have two different spellings, and some bounding-box values may be negative or outside the resolution range. We manually corrected these improper annotations in order to apply Fast R-CNN to the new fish dataset, which we name ImageCLEF_Fish_TS here.
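The coordinate clean-up in step C can be sketched programmatically. This is an illustrative helper of our own (the paper corrected the annotations manually); the function name and the rule of dropping a box that collapses to zero area are assumptions:

```python
def clean_box(x1, y1, x2, y2, width, height):
    """Clip a possibly noisy bounding box to the image bounds.

    Hypothetical helper: negative or out-of-range coordinates, like those
    found in the raw ImageCLEF ground truth, are clamped to the image;
    a box that collapses to zero area is dropped.
    """
    x1 = min(max(x1, 0), width - 1)
    y1 = min(max(y1, 0), height - 1)
    x2 = min(max(x2, 0), width - 1)
    y2 = min(max(y2, 0), height - 1)
    if x2 <= x1 or y2 <= y1:
        return None  # degenerate box: discard this annotation
    return x1, y1, x2, y2
```

For example, a noisy box (-5, 10, 700, 200) in a 640x480 frame would be clamped to (0, 10, 639, 200).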
Figure 2: 12 fish species in the ImageCLEF_Fish_TS dataset. (a) Acanthurus Nigrofuscus. (b) Amphiprion Clarkii. (c) Chaetodon Lunulatus. (d) Chromis Chrysura. (e) Dascyllus Aruanus. (f) Dascyllus Reticulatus. (g) Hemigymnus Fasciatus. (h) Lutjanus Fulvus. (i) Myripristis Kuntee. (j) Neoniphon Sammara. (k) Pempheris Vanicolensis. (l) Plectrogly-Phidodon Dickii.

III. OVERALL ARCHITECTURE

As depicted in Figure 1, the networks take as input an RGB image and its roughly 2000 regions of interest (RoIs) collected by selective search, and produce a distribution over fish classes as well as related bounding-box values. The networks contain five convolutional layers, a RoI pooling layer, two fully connected layers, and two sibling output layers: a fully connected layer with a softmax over the 12 fish classes plus a background class, and bounding-box regressors. Response normalization and max pooling layers follow the first and second convolutional layers, and the ReLU non-linearity is applied to the output of every convolutional and fully connected layer.

A. Pre-training and modifying

We pre-trained an AlexNet [3] with five convolutional layers and three fully connected layers on a large auxiliary dataset (ILSVRC2012), using the open-source Caffe CNN library [13]. Before initializing Fast R-CNN with the pre-trained networks, we modified the AlexNet in three respects. The networks take two data inputs: a batch of N images and a list of R RoIs; we use selective search [11] to generate about 2000 category-independent region proposals and refer to these candidates as regions of interest (RoIs). The last max pooling layer is replaced by a RoI pooling layer, which pools RoIs of arbitrary sizes into fixed-size feature maps, similar to the spatial pyramid pooling layer used in SPPnet [8].
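The RoI pooling operation described above can be sketched in NumPy. This is an illustrative single-channel max-pool into a fixed grid, not the Caffe implementation; the function name and the grid-partition scheme are our own:

```python
import numpy as np

def roi_pool(feature_map, roi, out_h=6, out_w=6):
    """Max-pool an arbitrary-sized RoI into a fixed out_h x out_w grid.

    feature_map: (H, W) array for one channel; roi: (x1, y1, x2, y2)
    in feature-map coordinates. Illustrative sketch only.
    """
    x1, y1, x2, y2 = roi
    region = feature_map[y1:y2 + 1, x1:x2 + 1]
    h, w = region.shape
    out = np.empty((out_h, out_w), dtype=feature_map.dtype)
    # Partition the region into a roughly equal grid and take the max of
    # each cell, so every RoI yields the same fixed-size output.
    ys = np.linspace(0, h, out_h + 1).astype(int)
    xs = np.linspace(0, w, out_w + 1).astype(int)
    for i in range(out_h):
        for j in range(out_w):
            cell = region[ys[i]:max(ys[i + 1], ys[i] + 1),
                          xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = cell.max()
    return out
```

Because the output size is fixed regardless of the RoI's shape, the pooled features can be fed straight into the fully connected layers.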
The final fully connected layer is replaced with two sibling layers: one outputs softmax probabilities over the 12 fish classes plus a background class, and the other outputs related bounding-box values for the 12 fish classes.

B. Fine-tuning for fish detection

To adapt Fast R-CNN to the new domain of fish detection and recognition, we modify AlexNet as described above and use stochastic gradient descent (SGD) to train the Fast R-CNN parameters.

Multi-task loss. We use a multi-task loss L to train the networks jointly for softmax classification and bounding-box regression:

L(p, u, t^u, v) = L_cls(p, u) + lambda [u >= 1] L_loc(t^u, v),   (1)

in which p = (p_0, ..., p_K) is a discrete probability distribution (per RoI) over K+1 categories, u is the true class label, and L_cls(p, u) = -log p_u is the standard cross-entropy/log loss. The second loss is defined over a tuple of true bounding-box regression targets v = (v_x, v_y, v_w, v_h) for class u and a predicted tuple t^u = (t^u_x, t^u_y, t^u_w, t^u_h) for class u:

L_loc(t^u, v) = sum_{i in {x,y,w,h}} smooth_L1(t^u_i - v_i),   (2)

in which

smooth_L1(x) = 0.5 x^2 if |x| < 1, and |x| - 0.5 otherwise.   (3)

The hyper-parameter lambda controls the balance between the two task losses, and we used lambda = 1 as in [9].

Mini-batch sampling. We define object proposals that have intersection-over-union (IoU) overlap with a ground-truth bounding box of at least 0.5 as positive, and proposals whose maximum IoU with ground truth lies in the interval [0, 0.3] as negative (background). We uniformly sample 32 positive RoIs and 96 negative RoIs from 2 images to construct a mini-batch of size 128. We omit the horizontal-flip trick adopted in [9] and other data augmentation, because our fish dataset is extracted from underwater videos and there is little difference between adjacent frames.
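The per-RoI loss of Eqs. (1)-(3) can be sketched numerically. This is a minimal NumPy sketch with hypothetical function names, shown for a single RoI:

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1 as in Eq. (3): 0.5*x**2 if |x| < 1, else |x| - 0.5."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)

def multitask_loss(p, u, t_u, v, lam=1.0):
    """Multi-task loss of Eq. (1) for one RoI.

    p: softmax probabilities over K+1 classes; u: true class label;
    t_u: predicted (tx, ty, tw, th) for class u; v: regression targets.
    The localization term counts only for foreground RoIs (u >= 1).
    """
    l_cls = -np.log(p[u])  # standard cross-entropy/log loss
    l_loc = smooth_l1(np.asarray(t_u) - np.asarray(v)).sum()  # Eq. (2)
    return l_cls + lam * (u >= 1) * l_loc
```

For a background RoI (u = 0) the regression term vanishes and the loss reduces to the classification term alone.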
Table 2: Fish detection average precision (%) per fish species and mAP, for the methods DPM, DPM (BP), RCNN, RCNN (BB), FRCN, and FRCN (SVD). Here DPM (BP) means DPM detection using bounding-box prediction, RCNN (BB) means R-CNN detection using bounding-box regression, and FRCN (SVD) means Fast R-CNN with truncated singular value decomposition.

Training details. We trained our networks using stochastic gradient descent with a batch size of 128 RoIs, momentum of 0.9, and weight decay. The fully connected layers for softmax classification and bounding-box regression are initialized randomly from zero-mean Gaussian distributions with standard deviation 0.01, following [9]. We used a global learning rate for 30k mini-batch iterations, lowering it by a factor of ten every 10k iterations. The update rule for a weight w is

v_{i+1} = mu * v_i - alpha * grad_w L(w_i; D_i),   (4)
w_{i+1} = w_i + v_{i+1},   (5)

in which i is the iteration index, v is the momentum variable, mu is the momentum coefficient, alpha is the learning rate, and D_i is the i-th mini-batch.

Truncated Singular Value Decomposition. We use truncated Singular Value Decomposition (SVD) to compress the fully connected layers [14], because the computation time spent in fully connected layers is nearly half of the forward-pass time, following [9]. This simple compression method requires no additional fine-tuning and yields a further speedup for fish detection and recognition.

IV. EXPERIMENTS AND ANALYSIS

We apply our fish detection and recognition system based on Fast R-CNN to the ImageCLEF_Fish_TS dataset, and compare our method with two recent detection approaches built on Deformable Part Models (DPM) and Regions with CNN features (R-CNN), respectively. HOG-based DPM is defined by filters that score subwindows of a feature pyramid after principal component analysis on HOG features [15].
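The truncated-SVD compression step can be sketched as follows. This is a minimal NumPy sketch (the function name and the example rank are illustrative): factoring a u x v fully connected weight matrix W at rank t replaces uv multiplies per forward pass with t(u + v):

```python
import numpy as np

def compress_fc(W, t):
    """Factor a fully connected weight matrix W (u x v) by truncated SVD.

    Returns (W2, W1) with W approximately equal to W2 @ W1, i.e. one
    u x v layer is replaced by a t x v layer followed by a u x t layer.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    W1 = Vt[:t, :]          # first new layer: t x v
    W2 = U[:, :t] * S[:t]   # second new layer: u x t, columns scaled by singular values
    return W2, W1
```

For instance, compressing a hypothetical 4096 x 4096 fc layer at rank 256 would cut its multiply count from 4096*4096 to 256*(4096 + 4096), roughly an 8x reduction for that layer, at the cost of a small approximation error.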
HOG-based DPM achieved state-of-the-art precision on the VOC dataset before deep learning was widely applied to object detection, and we take DPM as our baseline method. Like Fast R-CNN, R-CNN detects objects by combining region proposals with CNNs; but unlike the single-stage Fast R-CNN, R-CNN is a multi-stage method comprising proposal generation, feature extraction, and SVM classification [7].

A. Mean Average Precision (mAP)

We use mean Average Precision (mAP) to evaluate the three detection methods mentioned above on the new fish dataset ImageCLEF_Fish_TS. For K classes, mAP is defined from recall (R) and precision (P) as

mAP = (1/K) sum_{k=1}^{K} integral_0^1 P_k(R) dR,   (6)

i.e. the mean over the K classes of the area under each class's precision-recall curve. Table 2 shows the detection mAP of the three approaches on the new dataset. Our fish detection and recognition method achieves an mAP of 81.4%, improving mAP by 11.2% over the DPM baseline and slightly outperforming R-CNN with bounding-box regression. Compared to systems based on traditional HOG-like features, deep ConvNets can learn high-level representations that combine class-tuned features with shape, texture, color, and material properties, which leads to dramatically higher fish detection performance on ImageCLEF_Fish_TS.

B. Training and testing time

Along with their high accuracy and capacity, deep ConvNets carry large time and space costs. Table 3 compares training time (hours), testing time (seconds per image), and mAP on ImageCLEF_Fish_TS between R-CNN and Fast R-CNN. On a computer equipped with an NVIDIA Tesla K20 GPU, Fast R-CNN tests 80 times faster than R-CNN, taking 0.311 s per image. Moreover, Fast R-CNN trains more than 16 times faster than R-CNN: 4 hours versus 67 hours. Notably, truncated SVD achieves a further testing speedup, running 91 times faster than R-CNN on the same image despite a slight mAP decline.
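Per-class average precision as used in Eq. (6) can be sketched as follows, using the common VOC-style interpolation of the precision-recall curve (the function name is ours; mAP is then the mean of this value over the 12 fish classes):

```python
import numpy as np

def average_precision(recall, precision):
    """Area under the precision-recall curve for one class.

    recall, precision: arrays over ranked detections, recall nondecreasing.
    Applies the usual VOC-style interpolation before summing over the
    recall steps; averaging this over K classes gives mAP (Eq. 6).
    """
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Make precision monotonically nonincreasing from the right.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum interpolated precision over the recall increments.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

A detector that ranks every ground-truth fish above all false positives reaches an AP of 1.0 for that class; missed fish or high-ranked false alarms pull the value down.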
The reason for this outstanding speedup is that the single-stage Fast R-CNN uses a RoI pooling layer to extract RoI feature vectors, which cuts down much of the feature extraction time and space cost of the multi-stage R-CNN. Therefore, faced with masses of underwater images, an automatic fish identification system built on Fast R-CNN tends to be more practical.
Table 3: Runtime comparison between Fast R-CNN and R-CNN: training time (h), training speedup, testing time (s/im), testing speedup, and mAP (%). Here FRCN (SVD) means Fast R-CNN with truncated singular value decomposition.

V. CONCLUSIONS

Encouraged by the outstanding detection precision and speed achieved by Fast R-CNN, we apply this promising network to an automatic fish identification system that can help marine biologists estimate fish existence and quantity and better understand oceanic geographical and biological environments. The experimental results demonstrate the promising performance of our detection system, with a higher mAP than DPM and a decisive speedup over R-CNN. Faced with the large amounts of underwater images and videos collected day by day, further study of deep ConvNets with high precision and computation speed will be explored in this field.

REFERENCES
[1] C. Spampinato, R. B. Fisher, and B. Boom. Image retrieval in the LifeCLEF Fish task.
[2] R. Mehdi, et al. Automated fish detection in underwater images using shape-based level sets. The Photogrammetric Record, 2015.
[3] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS.
[4] J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Competition 2012 (ILSVRC2012).
[5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR.
[6] H. Qin, Y. Peng, and X. Li. Foreground extraction of underwater videos via sparse and low-rank matrix decomposition. In Computer Vision for Analysis of Underwater Imagery (CVAUI), 2014 ICPR Workshop. IEEE.
[7] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV.
[9] R. Girshick. Fast R-CNN. arXiv preprint arXiv:1504.08083, 2015.
[10] C. Spampinato, Y.-H. Chen-Burger, G. Nadarajan, and R. Fisher. Detecting, tracking and counting fish in low quality unconstrained underwater videos. In Proc. 3rd Int. Conf. on Computer Vision Theory and Applications (VISAPP).
[11] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. IJCV.
[12] Fish4Knowledge.
[13] Y. Jia. Caffe: An open source convolutional architecture for fast feature embedding.
[14] E. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS.
[15] R. Girshick, P. Felzenszwalb, and D. McAllester. Discriminatively trained deformable part models, release 5. berkeley.edu/rbg/latent-v5/.
[16] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes (VOC) Challenge. IJCV, 2010.
Visual search: what's next? Cees Snoek University of Amsterdam The Netherlands Euvision Technologies The Netherlands Problem statement US flag Tree Aircraft Humans Dog Smoking Building Basketball Table
Recognition. Sanja Fidler CSC420: Intro to Image Understanding 1 / 28
Recognition Topics that we will try to cover: Indexing for fast retrieval (we still owe this one) History of recognition techniques Object classification Bag-of-words Spatial pyramids Neural Networks Object
SEMANTIC CONTEXT AND DEPTH-AWARE OBJECT PROPOSAL GENERATION
SEMANTIC TEXT AND DEPTH-AWARE OBJECT PROPOSAL GENERATION Haoyang Zhang,, Xuming He,, Fatih Porikli,, Laurent Kneip NICTA, Canberra; Australian National University, Canberra ABSTRACT This paper presents
3D Object Recognition using Convolutional Neural Networks with Transfer Learning between Input Channels
3D Object Recognition using Convolutional Neural Networks with Transfer Learning between Input Channels Luís A. Alexandre Department of Informatics and Instituto de Telecomunicações Univ. Beira Interior,
Introduction to Machine Learning CMU-10701
Introduction to Machine Learning CMU-10701 Deep Learning Barnabás Póczos & Aarti Singh Credits Many of the pictures, results, and other materials are taken from: Ruslan Salakhutdinov Joshua Bengio Geoffrey
Getting Started with Caffe Julien Demouth, Senior Engineer
Getting Started with Caffe Julien Demouth, Senior Engineer What is Caffe? Open Source Framework for Deep Learning http://github.com/bvlc/caffe Developed by the Berkeley Vision and Learning Center (BVLC)
Do Convnets Learn Correspondence?
Do Convnets Learn Correspondence? Jonathan Long Ning Zhang Trevor Darrell University of California Berkeley {jonlong, nzhang, trevor}@cs.berkeley.edu Abstract Convolutional neural nets (convnets) trained
Denoising Convolutional Autoencoders for Noisy Speech Recognition
Denoising Convolutional Autoencoders for Noisy Speech Recognition Mike Kayser Stanford University [email protected] Victor Zhong Stanford University [email protected] Abstract We propose the use of
The Scientific Data Mining Process
Chapter 4 The Scientific Data Mining Process When I use a word, Humpty Dumpty said, in rather a scornful tone, it means just what I choose it to mean neither more nor less. Lewis Carroll [87, p. 214] In
Real-Time Grasp Detection Using Convolutional Neural Networks
Real-Time Grasp Detection Using Convolutional Neural Networks Joseph Redmon 1, Anelia Angelova 2 Abstract We present an accurate, real-time approach to robotic grasp detection based on convolutional neural
SIGNAL INTERPRETATION
SIGNAL INTERPRETATION Lecture 6: ConvNets February 11, 2016 Heikki Huttunen [email protected] Department of Signal Processing Tampere University of Technology CONVNETS Continued from previous slideset
Tracking in flussi video 3D. Ing. Samuele Salti
Seminari XXIII ciclo Tracking in flussi video 3D Ing. Tutors: Prof. Tullio Salmon Cinotti Prof. Luigi Di Stefano The Tracking problem Detection Object model, Track initiation, Track termination, Tracking
Novelty Detection in image recognition using IRF Neural Networks properties
Novelty Detection in image recognition using IRF Neural Networks properties Philippe Smagghe, Jean-Luc Buessler, Jean-Philippe Urban Université de Haute-Alsace MIPS 4, rue des Frères Lumière, 68093 Mulhouse,
Naive-Deep Face Recognition: Touching the Limit of LFW Benchmark or Not?
Naive-Deep Face Recognition: Touching the Limit of LFW Benchmark or Not? Erjin Zhou [email protected] Zhimin Cao [email protected] Qi Yin [email protected] Abstract Face recognition performance improves rapidly
Analecta Vol. 8, No. 2 ISSN 2064-7964
EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,
CNN Based Object Detection in Large Video Images. WangTao, [email protected] IQIYI ltd. 2016.4
CNN Based Object Detection in Large Video Images WangTao, [email protected] IQIYI ltd. 2016.4 Outline Introduction Background Challenge Our approach System framework Object detection Scene recognition Body
LIBSVX and Video Segmentation Evaluation
CVPR 14 Tutorial! 1! LIBSVX and Video Segmentation Evaluation Chenliang Xu and Jason J. Corso!! Computer Science and Engineering! SUNY at Buffalo!! Electrical Engineering and Computer Science! University
Behavior Analysis in Crowded Environments. XiaogangWang Department of Electronic Engineering The Chinese University of Hong Kong June 25, 2011
Behavior Analysis in Crowded Environments XiaogangWang Department of Electronic Engineering The Chinese University of Hong Kong June 25, 2011 Behavior Analysis in Sparse Scenes Zelnik-Manor & Irani CVPR
COVARIANCE BASED FISH TRACKING IN REAL-LIFE UNDERWATER ENVIRONMENT
COVARIANCE BASED FISH TRACKING IN REAL-LIFE UNDERWATER ENVIRONMENT Concetto Spampinato 1, Simone Palazzo 1, Daniela Giordano 1, Isaak Kavasidis 1 and Fang-Pang Lin 2 and Yun-Te Lin 2 1 Department of Electrical,
Deep Residual Networks
Deep Residual Networks Deep Learning Gets Way Deeper 8:30-10:30am, June 19 ICML 2016 tutorial Kaiming He Facebook AI Research* *as of July 2016. Formerly affiliated with Microsoft Research Asia 7x7 conv,
Applying Deep Learning to Car Data Logging (CDL) and Driver Assessor (DA) October 22-Oct-15
Applying Deep Learning to Car Data Logging (CDL) and Driver Assessor (DA) October 22-Oct-15 GENIVI is a registered trademark of the GENIVI Alliance in the USA and other countries Copyright GENIVI Alliance
The Role of Size Normalization on the Recognition Rate of Handwritten Numerals
The Role of Size Normalization on the Recognition Rate of Handwritten Numerals Chun Lei He, Ping Zhang, Jianxiong Dong, Ching Y. Suen, Tien D. Bui Centre for Pattern Recognition and Machine Intelligence,
Classifying Manipulation Primitives from Visual Data
Classifying Manipulation Primitives from Visual Data Sandy Huang and Dylan Hadfield-Menell Abstract One approach to learning from demonstrations in robotics is to make use of a classifier to predict if
Return of the Devil in the Details: Delving Deep into Convolutional Nets
CHATFIELD ET AL.: RETURN OF THE DEVIL 1 Return of the Devil in the Details: Delving Deep into Convolutional Nets Ken Chatfield [email protected] Karen Simonyan [email protected] Andrea Vedaldi [email protected]
Deep learning applications and challenges in big data analytics
Najafabadi et al. Journal of Big Data (2015) 2:1 DOI 10.1186/s40537-014-0007-7 RESEARCH Open Access Deep learning applications and challenges in big data analytics Maryam M Najafabadi 1, Flavio Villanustre
Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches
Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches PhD Thesis by Payam Birjandi Director: Prof. Mihai Datcu Problematic
Local features and matching. Image classification & object localization
Overview Instance level search Local features and matching Efficient visual recognition Image classification & object localization Category recognition Image classification: assigning a class label to
Studying Relationships Between Human Gaze, Description, and Computer Vision
Studying Relationships Between Human Gaze, Description, and Computer Vision Kiwon Yun1, Yifan Peng1, Dimitris Samaras1, Gregory J. Zelinsky2, Tamara L. Berg1 1 Department of Computer Science, 2 Department
Distributed forests for MapReduce-based machine learning
Distributed forests for MapReduce-based machine learning Ryoji Wakayama, Ryuei Murata, Akisato Kimura, Takayoshi Yamashita, Yuji Yamauchi, Hironobu Fujiyoshi Chubu University, Japan. NTT Communication
Big Data: Image & Video Analytics
Big Data: Image & Video Analytics How it could support Archiving & Indexing & Searching Dieter Haas, IBM Deutschland GmbH The Big Data Wave 60% of internet traffic is multimedia content (images and videos)
arxiv:1502.01852v1 [cs.cv] 6 Feb 2015
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification Kaiming He Xiangyu Zhang Shaoqing Ren Jian Sun arxiv:1502.01852v1 [cs.cv] 6 Feb 2015 Abstract Rectified activation
The Delicate Art of Flower Classification
The Delicate Art of Flower Classification Paul Vicol Simon Fraser University University Burnaby, BC [email protected] Note: The following is my contribution to a group project for a graduate machine learning
An Introduction to Deep Learning
Thought Leadership Paper Predictive Analytics An Introduction to Deep Learning Examining the Advantages of Hierarchical Learning Table of Contents 4 The Emergence of Deep Learning 7 Applying Deep-Learning
An Early Attempt at Applying Deep Reinforcement Learning to the Game 2048
An Early Attempt at Applying Deep Reinforcement Learning to the Game 2048 Hong Gui, Tinghan Wei, Ching-Bo Huang, I-Chen Wu 1 1 Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
FACE RECOGNITION BASED ATTENDANCE MARKING SYSTEM
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 2, February 2014,
arxiv:1505.04597v1 [cs.cv] 18 May 2015
U-Net: Convolutional Networks for Biomedical Image Segmentation Olaf Ronneberger, Philipp Fischer, and Thomas Brox arxiv:1505.04597v1 [cs.cv] 18 May 2015 Computer Science Department and BIOSS Centre for
Fast, Accurate Detection of 100,000 Object Classes on a Single Machine
213 IEEE Conference on Computer Vision and Pattern Recognition Fast, Accurate Detection of 1, Object Classes on a Single Machine Thomas Dean Mark A. Ruzon Mark Segal Jonathon Shlens Sudheendra Vijayanarasimhan
large-scale machine learning revisited Léon Bottou Microsoft Research (NYC)
large-scale machine learning revisited Léon Bottou Microsoft Research (NYC) 1 three frequent ideas in machine learning. independent and identically distributed data This experimental paradigm has driven
Semantic Video Annotation by Mining Association Patterns from Visual and Speech Features
Semantic Video Annotation by Mining Association Patterns from and Speech Features Vincent. S. Tseng, Ja-Hwung Su, Jhih-Hong Huang and Chih-Jen Chen Department of Computer Science and Information Engineering
An Energy-Based Vehicle Tracking System using Principal Component Analysis and Unsupervised ART Network
Proceedings of the 8th WSEAS Int. Conf. on ARTIFICIAL INTELLIGENCE, KNOWLEDGE ENGINEERING & DATA BASES (AIKED '9) ISSN: 179-519 435 ISBN: 978-96-474-51-2 An Energy-Based Vehicle Tracking System using Principal
Simultaneous Gamma Correction and Registration in the Frequency Domain
Simultaneous Gamma Correction and Registration in the Frequency Domain Alexander Wong [email protected] William Bishop [email protected] Department of Electrical and Computer Engineering University
