Towards Embedded Waste Sorting using Constellations of Visual Words


Abstract. In this paper, we present a method for fast and robust object recognition, developed especially for implementation on an embedded platform. As an example, the method is applied to the automatic sorting of consumer waste: out of a stream of different thrown-away food packages, specific items, in this case beverage cartons, can be visually recognised and sorted out. To facilitate and optimise the implementation of this algorithm on an embedded platform containing parallel hardware, we developed a voting scheme for constellations of visual words, i.e. clustered local features (SURF in this case). On top of easy implementation and robust, fast performance even with large databases, an extra advantage is that this method can handle multiple identical visual features in one model.

1 Introduction

We do not live in a world with unlimited resources; the guiding principle of the Tetra Pak company is therefore that a package should save more than it costs. One key issue in their recycling process is sorting the beverage carton fraction out of the consumer waste stream. Although beverage cartons are sometimes collected separately, in most places a mixed recyclable fraction is collected, which has to be sorted afterwards. Sorting out some subfractions is easy, e.g. by using magnets for ferrometals. Other subfractions are harder to automate and have to be sorted manually. This is also the case for beverage cartons: in waste processing plants, people have to pick the beverage cartons out of a stinking, never-ending stream of waste on conveyor belts.

Although techniques such as the measurement of UV light reflection can help to automate the sorting process, we present in this work a reliable visual method. The system's input consists of images from a camera placed above the conveyor belt. These images are rapidly matched against a database of beverage carton photos. In real time, a large fraction of all beverage cartons can be identified and picked out. Items missing from the database can be quickly added on the basis of a photograph of the carton.

The remainder of this text is organised as follows. Section 2 gives an overview of relevant related work. In section 3, our algorithm is described. Some real-waste experiments are presented in section 4. The paper ends with a conclusion in section 5.

2 Related Work

General object recognition has long been one of the core research subjects in computer vision. Numerous techniques have been proposed, traditionally based mainly on template matching [9]. A few years ago, a major revolution in the field was the introduction of local image features [14, 6]. Indeed, looking at local parts instead of the entire pattern to be recognised has the inherent advantage of robustness to partial occlusions. In both the template and the query image, local regions are extracted around interest points, and each region is described by a descriptor vector for comparison.

The development of robust local feature descriptors, such as Mindru's generalised colour moment based ones [8], added robustness to changes in illumination and viewpoint. Many researchers have proposed algorithms for local region matching; the approaches differ in the way interest points, local image regions, and descriptor vectors are extracted. An early example is the work of Schmid and Mohr [10], where geometric invariance was still limited to image rotations, with scaling handled by using circular regions of several sizes. Lowe et al. [6] extended these ideas to real scale invariance. More general affine invariance has been achieved in the work of Baumberg [2], which uses an iterative scheme and a combination of multiple scales, and in the more direct, constructive methods of Tuytelaars & Van Gool [14, 13], Matas et al. [7], and Mikolajczyk & Schmid [11]. Although these methods are capable of finding high-quality correspondences, most of them are too slow for a real-time application such as the one we envision here. Moreover, none of them is especially suited for implementation on an embedded computing system, where both memory use and computing power must be kept as low as possible to ensure reliable operation at the lowest possible cost.

The classic recognition scheme with local features, presented in [6, 13] and used in many applications such as our previous work on robot navigation [17, 16], is based on finding one-to-one matches: between the query image and a model image of the object to be recognised, bijective matches are found by selecting, for each local feature of one image, the most similar feature in the other. This scheme has a fundamental drawback, namely its inability to detect matches when multiple identical features are present in an image. In that case, there is no guarantee that the most similar feature is the correct correspondence. Such pattern repetitions are quite common in the real world, though, especially in man-made environments. To reduce the number of incorrect matches due to this phenomenon, classic matching techniques use a criterion such as comparing the distances to the most and the second most similar feature [6]. Of course, this practice throws away many good matches in the presence of pattern repetitions.

In this paper, we present a possible solution to this problem by making use of the visual word concept. Visual words were introduced [12, 5, 15] in the context of object classification. Local features are grouped into a large number of clusters, with features having similar descriptors assigned to the same cluster. By treating each cluster as a visual word that represents the specific local pattern shared by the keypoints in that cluster, we obtain a visual word vocabulary describing all kinds of such local image patterns. With its local features mapped to visual words, an image can be represented as a bag of visual words, i.e. as a vector containing the (weighted) count of each visual word in that image, which is used as the feature vector in the classification task. In contrast to the bag-of-words concept often used in categorisation, in this paper we present a constellation-of-words model. The main difference is that not only the presence of a number of visual words is tested, but also their relative positions.

3 Algorithm

Figure 1 gives an overview of the algorithm. It consists of two phases: the model construction phase (top row) and the matching phase (bottom row). First, local features are extracted (b) from a model photograph (a). Then, a vocabulary of visual words is formed by clustering these features based on their descriptors. The corresponding visual words in the image (c) are used to form the model description: the relative location of the image centre (the anchor) is stored for each visual word instance (d). The bottom row depicts the matching procedure. In a query image, local features are extracted (e). Matching with the vocabulary yields a set of visual words (f). For each visual word in the model description, a vote is cast at the relative location of the anchor (g). The location of the object can then be found from these votes as a local maximum in a voting Hough space (h). Each of the following subsections describes one step of this algorithm in detail.

Fig. 1. Overview of the algorithm. Top row (model building): (a) model photo, (b) extracted local features, (c) features expressed as visual words from the vocabulary, (d) model description with relative anchor positions for each visual word. Bottom row (matching): (e) query image with extracted features, (f) visual words from the vocabulary, (g) anchor position voting based on relative anchor position, (h) Hough voting space.

Local Feature Extraction

We chose SURF as the local feature detector, instead of the often-used SIFT detector. SURF [3, 4] was developed to be substantially faster than, but at least as performant as, SIFT. In contrast to SIFT [6], which approximates the Laplacian of Gaussian (LoG) with a Difference of Gaussians (DoG), SURF approximates second-order Gaussian derivatives with box filters. Image convolutions with these box filters can be computed rapidly using integral images. More details about SURF can be found in [3] and [4].
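To make this step concrete, the following is a minimal Python sketch of SURF extraction, assuming an OpenCV build that includes the patented SURF implementation from the contrib modules; the function name and the Hessian threshold are illustrative choices, not part of the original implementation:

```python
import cv2

def extract_surf(image_path, hessian_threshold=400):
    """Extract SURF keypoints and 64-D descriptors from a greyscale image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # SURF lives in the opencv-contrib xfeatures2d module; the Hessian
    # threshold controls how many interest points are detected.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    keypoints, descriptors = surf.detectAndCompute(img, None)
    # Each keypoint carries a position (pt), a scale (size) and an
    # orientation (angle, in degrees), which the model construction
    # step below relies on.
    return keypoints, descriptors
```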

Visual Words

As explained before, the next step is to form a vocabulary of visual words. This is accomplished by clustering a large set of extracted SURF features. It is important to build this vocabulary from a large number of features, so that it is representative for all images to be processed. The clustering itself is easily carried out with the k-means algorithm, with distances between features computed as the Euclidean distance between the corresponding SURF descriptors. Keep in mind that this vocabulary-building phase can be carried out offline; real-time behaviour is only needed in the matching step. In the fictitious ladybug example of figure 1, each visual word is symbolically represented by a letter. The vocabulary consists of a file linking each visual word symbol to the mean descriptor vector of the corresponding cluster.

3.1 Model Construction

All features found in a model image are matched with the visual word vocabulary, as shown in fig. 1 (c). In contrast to popular bag-of-words models, which consist only of a set of visual words, we add the relative constellation of all visual words to the model description. Each line in the model description file consists of the symbolic name of a visual word and the coordinates (r_rel, θ_rel) of the anchor point of the model item; as anchor point, we chose the centre of the model picture. These coordinates are expressed as polar coordinates relative to the individual axis frame of the visual word. Indeed, each visual word in the model photograph has a scale and an orientation, because it was extracted as a SURF feature. Figure 2 illustrates this.

Fig. 2. The position of the anchor point is stored in the model as polar coordinates relative to the visual word scale and orientation.

The resulting model is a very compact description of the appearance of the model photo. Many of these models, based on the same visual word vocabulary, can be saved in a compact database. In our beverage carton sorting application, we build a database of all the different carton prints to be recognised.
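As a hedged illustration of the vocabulary building and model construction just described, the sketch below clusters a descriptor pool with k-means and then stores, for every model feature, its visual word together with the anchor offset in feature-relative polar coordinates. The vocabulary size k and the exact scale/orientation normalisation are assumptions, not values from the paper:

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def build_vocabulary(descriptor_pool, k=1000):
    """Cluster a large pool of SURF descriptors into k visual words.

    The returned centroids are the mean descriptor vectors of the clusters.
    """
    centroids, _ = kmeans2(descriptor_pool.astype(np.float64), k, minit='++')
    return centroids

def build_model(keypoints, descriptors, vocabulary, anchor):
    """Store each feature as (visual word, r_rel, theta_rel) w.r.t. the anchor."""
    words, _ = vq(descriptors.astype(np.float64), vocabulary)
    ax, ay = anchor  # e.g. the centre of the model photo
    model = []
    for kp, w in zip(keypoints, words):
        dx, dy = ax - kp.pt[0], ay - kp.pt[1]
        # Polar coordinates of the anchor in the feature's own axis frame:
        # radius normalised by the feature scale, angle taken relative to
        # the feature orientation (assumed normalisation convention).
        r_rel = np.hypot(dx, dy) / kp.size
        theta_rel = np.arctan2(dy, dx) - np.deg2rad(kp.angle)
        model.append((int(w), r_rel, theta_rel))
    return model
```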

3.2 Matching

Once a database of objects to be recognised has been built, these objects can be detected in a query image. In our application, a camera overviews a section of the conveyor belt, and the object detection algorithm described here gives cues about where beverage cartons are located. With this information, a mechanical device can sort out the beverage cartons. This part of the algorithm is time-critical; we put considerable effort into speeding up the matching procedure, in order to be able to implement it on an embedded system.

The first operation carried out on incoming images is SURF feature extraction, exactly as described in section 3. After local feature extraction, the features are matched with the visual words in the vocabulary. We used Mount's ANN (Approximate Nearest Neighbour) algorithm [1] for this, which is very efficient. As seen in fig. 1 (f), some of the visual words of the object are recognised, amidst other visual words.

Anchor Location Voting

Because each SURF feature has a certain scale and rotation, we can reconstruct the anchor pixel location using the feature-relative polar coordinates of the object anchor. Each instance in the object model description thus yields a vote for a certain anchor location. In figure 1 (g), this is depicted by the black lines ending with a black dot at the computed anchor location. Ideally, all these locations would coincide at the correct object centre. Unfortunately, this is not the case, due to mismatches and noise. Moreover, if the model description of an object contains the same visual word more than once (as is the case in the ladybug example for words A, C and D), each detected visual word of that kind in the query image will cast several votes for different anchor locations, of which at most one can be correct.

Object Detection

For all the different models in the database, anchor location votes can be computed quickly. The next task is to decide where an object is detected. Because an object can be present more than once in the query image, a simple average of the anchor position votes is not sufficient, even if robust estimators like RANSAC are used to eliminate outliers. Therefore, we construct a Hough space: a matrix which is initialised to zero and incremented at each anchor location vote, fig. 1 (h). The local maxima of the resulting Hough matrix are computed and interpreted as detected object positions.

4 Experiments

For preliminary experiments, we implemented this algorithm using Octave and an executable of the SURF extractor. Figure 3 shows some typical results of different phases of the algorithm. The test images were made by pouring out a recyclable-fraction garbage bag and taking 640 × 480 photographs of it from a distance of about 1 metre. In fig. 3, first two model photographs are shown, for two types of beverage cartons. Each of these images, with a resolution of about 100 × 150 pixels, yielded a thorough description of the carton print in a model description containing on average 65 features, which boils down to a model file size of only 3.5 KB.
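As an illustration of what such a prototype involves, here is a hedged Python sketch of the matching and anchor-voting steps of section 3.2. A SciPy kd-tree stands in for Mount's ANN library [1], the model layout follows the (assumed) conventions of the model construction sketch above rather than the actual Octave code, and the Hough cell size is an illustrative choice:

```python
import numpy as np
from scipy.spatial import cKDTree

def detect(model, vocabulary, keypoints, descriptors, img_shape, cell=8):
    """Cast anchor location votes into a Hough accumulator and return it."""
    # Assign every query descriptor to its nearest visual word
    # (kd-tree as a stand-in for the ANN library used in the paper).
    tree = cKDTree(vocabulary)
    _, words = tree.query(descriptors.astype(np.float64))
    # Group model entries by visual word for fast lookup.
    by_word = {}
    for word_id, r_rel, theta_rel in model:
        by_word.setdefault(word_id, []).append((r_rel, theta_rel))
    h, w = img_shape
    hough = np.zeros((h // cell + 1, w // cell + 1))
    for kp, word_id in zip(keypoints, words):
        for r_rel, theta_rel in by_word.get(int(word_id), []):
            # Undo the scale/orientation normalisation of the model step
            # to recover an absolute anchor location hypothesis.
            theta = theta_rel + np.deg2rad(kp.angle)
            x = kp.pt[0] + r_rel * kp.size * np.cos(theta)
            y = kp.pt[1] + r_rel * kp.size * np.sin(theta)
            if 0 <= x < w and 0 <= y < h:
                hough[int(y) // cell, int(x) // cell] += 1
    # Local maxima of the accumulator are the detected object anchors.
    return hough
```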

In the middle of the top row of fig. 3, the anchor position voting output is shown for the milk carton detection step. From the matched visual words, black lines are drawn towards the anchor position. It is clearly visible that many lines point at the centres of both milk cartons. In the Hough voting space next to it, this leads to two black spots at the positions of the milk cartons. The bottom row shows comparable experimental results for other query and model images.

Fig. 3. Some experimental results. Top row: model photos of milk and juice cartons, query image with matching visual words (white) and relative anchor locations (black) for the milk carton, Hough space. Bottom row: two query images with detected milk cartons, one with a detected juice carton. The cartons were detected by finding local maxima in the Hough space.

We performed experiments on 25 query images, containing 189 milk cartons in total, and were able to detect 84% of the trained types. Detection failures were mostly due to a large occlusion of one carton by another object.

5 Conclusions and Future Work

In this paper, we presented an algorithm for object detection based on the concept of visual word constellation voting. Preliminary experiments demonstrated the performance of this approach. The method has the advantages that it is computing-power and memory efficient and that it can handle pattern repetitions in the models. We applied the method to the vision-based sorting of consumer waste, detecting beverage cartons based on a database of previously trained beverage carton prints.

As stated before, our aim in this work is an embedded implementation of this algorithm. The Octave implementation presented here is only a first step towards that.

But we believe the proposed approach has many advantages. The SURF extraction phase can mostly be migrated to a parallel hardware implementation on an FPGA. Visual word matching is sped up using the ANN library, which makes use of kd-trees. Of course, a large part of the memory is used by the (mostly sparse) Hough space; a better representation of the voting space would lead to a large memory improvement of the algorithm.

References

1. S. Arya, D. Mount, N. Netanyahu, R. Silverman, and A. Wu, An optimal algorithm for approximate nearest neighbor searching, Journal of the ACM, vol. 45, pp. 891-923, 1998, http://www.cs.umd.edu/~mount/ANN/.
2. A. Baumberg, Reliable feature matching across widely separated views, Computer Vision and Pattern Recognition, Hilton Head, South Carolina, pp. 774-781, 2000.
3. H. Bay, T. Tuytelaars, and L. Van Gool, SURF: Speeded Up Robust Features, ECCV, 2006.
4. B. Fasel and L. Van Gool, Interactive Museum Guide: Accurate Retrieval of Object Descriptions, in Adaptive Multimedia Retrieval: User, Context, and Feedback, Lecture Notes in Computer Science, vol. 4398, Springer, 2007.
5. F.-F. Li and P. Perona, A Bayesian hierarchical model for learning natural scene categories, in Proc. of the 2005 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, pp. 524-531, 2005.
6. D. Lowe, Object Recognition from Local Scale-Invariant Features, International Conference on Computer Vision, pp. 1150-1157, 1999.
7. J. Matas, O. Chum, M. Urban, and T. Pajdla, Robust wide baseline stereo from maximally stable extremal regions, British Machine Vision Conference, Cardiff, Wales, pp. 384-396, 2002.
8. F. Mindru, T. Moons, and L. Van Gool, Recognizing color patterns irrespective of viewpoint and illumination, Computer Vision and Pattern Recognition, vol. 1, pp. 368-373, 1999.
9. A. Rosenfeld and A. C. Kak, Digital Picture Processing, Computer Science and Applied Mathematics, Academic Press, New York, 1976.
10. C. Schmid, R. Mohr, and C. Bauckhage, Local Grey-value Invariants for Image Retrieval, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 5, pp. 872-877, 1997.
11. K. Mikolajczyk and C. Schmid, An affine invariant interest point detector, ECCV, vol. 1, pp. 128-142, 2002.
12. J. Sivic and A. Zisserman, Video Google: A text retrieval approach to object matching in videos, in Proc. of the 9th IEEE Int'l Conf. on Computer Vision, vol. 2, 2003.
13. T. Tuytelaars and L. Van Gool, Wide baseline stereo based on local, affinely invariant regions, British Machine Vision Conference, Bristol, UK, pp. 412-422, 2000.
14. T. Tuytelaars, L. Van Gool, L. D'haene, and R. Koch, Matching of Affinely Invariant Regions for Visual Servoing, Int'l Conf. on Robotics and Automation, pp. 1601-1606, 1999.
15. J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid, Local features and kernels for classification of texture and object categories: An in-depth study, Technical report, INRIA, 2005.
16. anonymous
17. anonymous