Natural Feature Tracking on a Mobile Handheld Tablet

Madjid Maidi #1, Marius Preda #2, Matthew N. Dailey *3, Sirisilp Kongsilp #*4

# Departement ARTEMIS, Telecom SudParis / Institut Mines-Telecom, 9 Rue Charles Fourier, 91011 Évry Cedex, France
1 madjid.maidi@telecom-sudparis.eu  2 marius.preda@telecom-sudparis.eu  4 sirisilp.kongsilp@telecom-sudparis.eu

* AIT Vision and Graphics Lab., Asian Institute of Technology, P.O. Box 4, Klong Luang, Pathumthani 12120, Thailand
3 mdailey@ait.ac.th

Abstract - This paper presents a natural feature tracking system for object recognition in real-life environments. The system is based on a local keypoint descriptor method optimized and adapted to extract salient regions within the image. Each object in the gallery is characterized by keypoints and corresponding local descriptors. The method first identifies gallery object features in new images using nearest neighbor classification. It then estimates camera pose and augments the image with registered synthetic graphics. We describe the optimizations necessary to enable real-time performance on a mobile tablet. An experimental evaluation of the system in real environments demonstrates that the method is accurate and robust.

I. INTRODUCTION

Mobile applications are mostly classified either as entertainment or visualization applications. Rather than displaying information statically, one can combine virtual graphics with the physical world using the mobile device's sensor data (gyroscope, GPS, and so on). Using virtual graphics, we can display extra information, creating a virtual world on top of the real one. This process of localization, registration, and synthesis is often called Augmented Reality (AR). While the most successful mobile AR systems use GPS or marker tracking approaches, only limited effort has focused on using image data and natural object tracking concepts [1]. Markerless tracking is a very complex task, as it uses image processing operators to detect natural features in the video stream [2].

Several markerless tracking approaches have been developed in recent years. Bleser and Stricker [3] developed a markerless tracking approach based on a 3D model of the scene. The system predicts the appearances of the features by rendering the model using the prediction data of a visual-inertial fusion filter. Yoon et al. [4] present a model-based object tracking method to compute the camera's 3D pose. Their algorithm uses an Extended Kalman Filter (EKF) to provide an incremental pose-update scheme in a prediction-verification framework. Other approaches, such as [5], combine different features such as edge, point, and texture information to compute the camera pose. Other authors have focused on robustness; Stricker et al. [6] presented an interactive AR application addressing the occlusion problem. Occlusions are managed by locating the user's hand and subtracting the background. This approach is feasible in the case of a homogeneous background under the assumption of a static camera. Naimark and Foxlin [7] implemented a hybrid vision-inertial self-tracker system which operates in various real-world lighting conditions. The aim is to extract coded fiducials in the presence of very non-uniform lighting. In [8], the authors presented a technique based on active targets using amplitude modulation codes instead of binary codes. Such a system provides high precision with compact targets and operates over a wide range of viewing angles under various luminosity conditions. Maidi et al.
[9] [10] [11] presented a robust fiducials tracking method for AR systems. A generic algorithm for object detection and feature point extraction was developed to identify targets in real time. The authors proposed a tracking method based on the RANSAC algorithm to deal with partial target occlusion. Moreover, a hybrid tracking architecture based on an inertial-vision system was presented to keep tracking under the worst environment conditions, such as motion blur and total occlusion of the markers [12]. Bleser and Stricker [3] presented a visual-inertial tracking device for augmented and virtual reality applications. The authors provided an evaluation of several markerless tracking approaches. Their solution relied on a 3D model of the scene to predict the appearances of the features by rendering the model using the prediction data of the sensor fusion filter. High stability and accuracy were demonstrated using the developed system.

Recently, marker tracking based on keypoint descriptors such as SIFT [13] or SURF [14] has been proposed. Keypoint descriptor methods not only automatically detect points of interest, but also create invariant descriptors characterizing the local neighborhood of the keypoint. Such descriptors can be used to uniquely identify points of interest and to match them even under a variety of disturbing conditions such as noise and changes in scale, rotation, illumination, and viewpoint.

This invariance is an important criterion for mobile systems, for which environmental conditions are neither stable nor repeatable. Wagner et al. [15] report on an optimized marker detection and tracking method based on SIFT, Ferns [16], and image patch tracking. The method achieves impressive tracking performance (up to 30 fps) on a 2008-era smartphone. However, the method assumes only a single gallery image, and the performance evaluation was performed on raw images stored on the Windows Mobile file system, not including image capture and application overhead.

The work described in this paper focuses on real-time processing of images acquired directly from the camera device on an Android tablet in an OpenCV environment. At the time our experiments were performed, OpenCV for Android (Honeycomb) was in beta and camera capture overhead was very high, with maximum frame rates for the most trivial applications ranging from 5 to 10 fps. Despite this limitation, we are able to achieve real-time tracking, pose estimation, and graphics synthesis for a gallery of up to 12 objects.

II. NATURAL 2D FEATURE DETECTION AND TRACKING

In recent years, there has been growing interest in developing effective methods for searching images in large databases. Most database image retrieval approaches focus on search-by-query. These methods require the user to provide a query image; images similar to the query are then retrieved from the image database. However, it is often difficult to find a good match between the query and the trained images. Another approach to image retrieval is based on browsing the dataset, which offers an alternative to conventional search-by-query, but here also many issues have to be addressed. Indeed, the database needs to be organized logically and predictably so that the image query can be matched successfully. These approaches define a clustering system that classifies image features into groups, using quantitative and qualitative information from measurements and characteristics of a training set in which clusters are already established (color segmentation, morphological parameters, geometrical primitives). Usually, the methodology consists of clustering the features into groups using techniques such as k-means or k-NN; the classifiers are then trained with learning algorithms (SVM, naive Bayes, boosting, etc.), and finally a histogram of responses for each image is assigned to label the training dataset. Commonly, the methods developed for visual content analysis rely upon a multi-dimensional approach to large-database template matching and data retrieval, demanding high computational resources.

Thus, in our work, we propose an effective search-by-query algorithm for descriptor matching, based on retrieving the query descriptor array from a training database and adapted to mobile application constraints. Moreover, the application is optimized to operate in an environment where memory storage and processing capabilities are particularly limited. The aim of this system is to enable robust feature-based recognition and real-time tracking in the unreliable, performance-degrading conditions that occur in real-life tracking situations. Our markerless tracking method relies upon features already available in the scene, without adding specific patches or markers.
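To ground this search-by-query pipeline, the following is a minimal sketch, under our own assumptions rather than the paper's actual code, of building the gallery database offline: SURF keypoints and descriptors (detailed next) are extracted from each object image and indexed in a FLANN-based matcher for later nearest-neighbor retrieval. The Hessian threshold value is illustrative.

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>   // SURF lives in the opencv_contrib xfeatures2d module
#include <vector>

// Offline gallery construction: extract SURF keypoints/descriptors for each
// object image and index the descriptor sets in a FLANN-based matcher so that
// live-frame descriptors can later be retrieved by nearest-neighbor search.
static void buildGalleryDatabase(const std::vector<cv::Mat>& galleryImages,
                                 cv::FlannBasedMatcher& matcher) {
    // 400.0 is an illustrative Hessian threshold, not the paper's value.
    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400.0);

    for (const cv::Mat& img : galleryImages) {
        std::vector<cv::KeyPoint> keypoints;
        cv::Mat descriptors;
        surf->detectAndCompute(img, cv::noArray(), keypoints, descriptors);
        matcher.add(std::vector<cv::Mat>{descriptors});  // one descriptor set per gallery object
    }
    matcher.train();  // build the FLANN index over the whole gallery
}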
Given a new image, the method computes SURF keypoints and keypoint descriptors and matches the descriptors to the stored descriptors for each gallery image using nearest neighbor search (Figure 1).

Fig. 1. The overall identification and tracking system diagram: pre-processing, keypoint extraction, descriptor computation, and identification against the image database.

First, the algorithm detects image keypoints; then a descriptor array is computed for each keypoint in order to characterize the feature point and its surrounding pixel area. The keypoint extraction relies upon a Hessian detector that identifies the maximally salient regions. The detection is based on the Hessian matrix (Eq. 1), which operates over multiple scales and uses the spatial localization of invariant points [14]:

H(x, \sigma) = \begin{pmatrix} L_{xx}(x, \sigma) & L_{xy}(x, \sigma) \\ L_{xy}(x, \sigma) & L_{yy}(x, \sigma) \end{pmatrix}    (1)

Once a keypoint is detected, the next step is to construct a circular region around the point and to compute the Haar wavelet responses in the X and Y directions. This region is split into square sub-regions, for which features at spaced sample points are computed. Hence, the descriptor array is built from the Haar wavelet response in the horizontal direction w_x and the vertical direction w_y [14]. Consequently, for each sub-region the descriptor array is represented by the following equation:

D = \left( \sum w_x, \; \sum w_y, \; \sum |w_x|, \; \sum |w_y| \right)    (2)
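As a concrete illustration of this detection and description step (a sketch with illustrative parameter values, not the authors' implementation), OpenCV's SURF can be used directly; in recent OpenCV releases it lives in the opencv_contrib xfeatures2d module. Capping the number of keypoints, as discussed in Section IV, is approximated here with KeyPointsFilter::retainBest.

#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>

// Detect SURF keypoints via the Hessian-based detector of Eq. 1 and compute
// their descriptors (Eq. 2). hessianThreshold and maxKeypoints are illustrative.
static void extractSurfFeatures(const cv::Mat& gray,
                                std::vector<cv::KeyPoint>& keypoints,
                                cv::Mat& descriptors,
                                double hessianThreshold = 400.0,
                                int maxKeypoints = 200) {
    cv::Ptr<cv::xfeatures2d::SURF> surf =
        cv::xfeatures2d::SURF::create(hessianThreshold);
    surf->detect(gray, keypoints);
    // The paper modifies the extractor itself to limit the keypoint count;
    // keeping only the strongest responses is a simple stand-in for that change.
    cv::KeyPointsFilter::retainBest(keypoints, maxKeypoints);
    surf->compute(gray, keypoints, descriptors);  // one 64-dimensional row per keypoint
}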

Finally, the feature points, represented by their descriptors, are matched to the reference data using a search-by-query strategy that retrieves the training descriptors closest to the query set within a nearest neighbor ratio matching. The SURF-based algorithm presents several optimizations, the most significant of which is the reduction in computing time brought by the integral image. Indeed, the integral image allows fast computation of the filter responses, using a quick and efficient algorithm to sum intensity values over rectangular sub-grids of the image [14].

However, the difficulty in this technique is the effectiveness of the detection in recovering relevant feature pairs, keeping only good matches while rejecting outliers. Indeed, these outliers raise the probability that the system incorrectly matches the input pattern to a non-matching template in the database. The decision-making process of the matching algorithm is based on a threshold which determines how close to a template the input needs to be in order to be considered a match. If the threshold is lower, there will be fewer false matches but more false rejects; correspondingly, a higher threshold will increase the false accept rate. In practice, we consider a match correct if the distance between two keypoints (the distance between their corresponding descriptors) is less than 70% of the distance to the second-closest candidate.

III. POSE ESTIMATION FOR AR APPLICATION

Pose estimation is the determination of the transformation that relates the object reference frame and the camera coordinate frame (Figure 2). Before computing the pose parameters, we have to determine the 2D-3D matching pairs to establish a geometric constraint between the image and the 3D object reference systems [17]. To estimate camera pose, we first perform a camera calibration offline and store the camera's intrinsic parameters. At run time, when an object is detected, we use the set of 2D-3D correspondences with the analytical pose estimation method proposed by Zhang [18]. Pose estimation is an error minimization process defining the function that relates 3D points to 2D points by [19]:

F(p, P, I_x, R, T) = 0    (3)

where P = (X, Y, Z)^T is a 3D point defined in the object coordinate frame and p = (u, v)^T is the projection of that point in the image. The camera model is defined by:

s\,p = I_x \, (R \; T) \, P    (4)

where s represents a scale factor, (R T) is the pose matrix (the rotation and the translation of the object coordinate frame with respect to the camera coordinate frame), and I_x is the intrinsic matrix of the camera.

Fig. 2. Reference frames (camera and object) used in the pose estimation process.

Assuming that the object is planar (Z = 0), from Eq. 4 we have:

s\,p = I_x \, (T_1 \; T_2 \; T) \, (X, Y, 1)^T    (5)

where T_i = (T_{1i}, T_{2i}, T_{3i})^T is a column vector of the rotation matrix. The projective matrix which relates the object coordinate frame to the camera reference frame can be estimated using at least 4 pairs of 2D-3D points. We denote this perspective transformation by M = (m_1 \; m_2 \; m_3), and based on Eq. 4 and Eq. 5 we have:

s\,p = M \, (X, Y, 1)^T    (6)

Finally, the rotation and translation parameters are derived from the following equation system:

T_1 = \lambda I_x^{-1} m_1, \quad T_2 = \lambda I_x^{-1} m_2, \quad T_3 = T_1 \times T_2, \quad T = \lambda I_x^{-1} m_3    (7)

IV. IMPLEMENTATION ON MOBILE TABLET

We implemented a prototype AR application on an Android tablet running Honeycomb revision 7 and Java V6 update 31. We implemented the main image processing functions in native code with C++, OpenCV, and the Android NDK, release 6.
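To illustrate the nearest-neighbor ratio matching of Section II (again a sketch under our own assumptions, not the paper's code), live-frame descriptors can be matched against the trained gallery collection and filtered with the 70% ratio criterion; the gallery image that collects the most surviving matches identifies the object.

#include <opencv2/opencv.hpp>
#include <map>
#include <vector>

// Match live-frame descriptors against a matcher trained on the gallery (see
// the database-building sketch in Section II) and keep, per gallery image
// index, only the matches that pass the nearest-neighbor ratio test.
static std::map<int, std::vector<cv::DMatch>> matchToGallery(
        const cv::Mat& queryDescriptors,
        cv::FlannBasedMatcher& galleryMatcher,
        float ratio = 0.7f) {                    // the 70% criterion discussed above
    std::vector<std::vector<cv::DMatch>> knn;
    galleryMatcher.knnMatch(queryDescriptors, knn, 2);   // two nearest neighbors per query

    std::map<int, std::vector<cv::DMatch>> perImage;
    for (const std::vector<cv::DMatch>& m : knn) {
        if (m.size() == 2 && m[0].distance < ratio * m[1].distance)
            perImage[m[0].imgIdx].push_back(m[0]);       // vote for that gallery object
    }
    return perImage;   // the identified object is the index with the most votes
}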
We modified the OpenCV SURF keypoint extractor to limit the number of feature points extracted. This simple change enabled a dramatic speedup in the keypoint detection and descriptor computation processes. The tracking system extracts local SURF features and classifies keypoints using k-nearest neighbors to find the best matching database image. The application enables continuous recognition of natural objects in live video captured by the tablet's camera. The user points the camera at a target and the system identifies the name of the object in the viewfinder in less than 250 ms. The boundary of the object is displayed and accurately tracked in real time, and the object's geometry is quickly retrieved from a database of object models and augmented onto the view.
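To make the pose step of Section III concrete, the sketch below recovers the rotation columns and the translation from a plane-to-image homography in the spirit of Eq. 6 and Eq. 7. It assumes a calibrated intrinsic matrix K and at least four 2D-3D correspondences on the planar target; it is an illustration of the decomposition, not the authors' implementation.

#include <opencv2/opencv.hpp>
#include <vector>

// Recover the pose of a planar target from 2D-3D correspondences, following
// the decomposition of Eq. 7. K is the camera intrinsic matrix (CV_64F) and
// planePts holds the target's (X, Y) coordinates on its Z = 0 plane.
static void poseFromPlanarTarget(const std::vector<cv::Point2f>& planePts,
                                 const std::vector<cv::Point2f>& imagePts,
                                 const cv::Mat& K,
                                 cv::Mat& R, cv::Mat& T) {
    // Perspective transformation M of Eq. 6, estimated from >= 4 point pairs;
    // RANSAC rejects outlier correspondences.
    cv::Mat M = cv::findHomography(planePts, imagePts, cv::RANSAC, 3.0);

    cv::Mat Kinv = K.inv();
    cv::Mat v1 = Kinv * M.col(0);
    cv::Mat v2 = Kinv * M.col(1);
    cv::Mat v3 = Kinv * M.col(2);

    double lambda = 1.0 / cv::norm(v1);   // scale factor of Eq. 7
    cv::Mat T1 = lambda * v1;             // first rotation column
    cv::Mat T2 = lambda * v2;             // second rotation column
    cv::Mat T3 = T1.cross(T2);            // third column: T1 x T2
    T = lambda * v3;                      // translation

    cv::hconcat(std::vector<cv::Mat>{T1, T2, T3}, R);
    // In practice R should be re-orthonormalized (e.g. via SVD), since the
    // estimated homography is noisy.
}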

V. EXPERIMENTS AND RESULTS

A. Natural feature tracking results

The objective of the application is to determine whether the scene contains any pre-trained features. Initially, the system loads all objects and builds a database of images with keypoints and keypoint descriptors.

1) Object recognition: We start by showing the results obtained from the object recognition test. Figure 3 shows how the object and its keypoints are characterized. SURF enables us to identify objects with very few keypoints. However, we find that object texture affects the quality of the results; textured objects have more feature points and enable better recognition.

Fig. 3. Feature detection and description.

2) Computational requirements: To characterize the method's resource requirements, we varied the number of keypoints and observed the impact of the number of feature points on the frame rate. Figure 4 shows the results obtained from this test. We observe that the frame rate is inversely proportional to the number of detected points. This result is expected, since keypoint detection and descriptor extraction require computational resources (memory and CPU).

Fig. 4. Variation of FPS according to the number of detected keypoints.

3) Matching pairs: In order to characterize the relationship between the number of feature points and the number of matched points, we ran a third test with a single object. Figure 5 shows the results. The number of matches tends to increase as the number of feature points increases. As pose estimation improves (up to a limit) as the number of correspondences increases, it is important to find a sufficient number of matches.

Fig. 5. Evolution of the number of matched pairs according to the number of detected keypoints.

B. Pose estimator evaluation

1) Execution time: To evaluate the efficiency of the analytical pose algorithm in terms of computational resources, we compared the execution times of three pose estimators to identify the method best suited to the mobile real-time requirement. We computed 327 poses with the three pose estimators (Figure 6). The mean time for determining a single pose was 0.1 ms using the analytical algorithm, compared to about 0.6453 ms for the ICP algorithm (Iterative Closest Point [20]); the ICP and Orthogonal Iteration [21] methods are, respectively, about 6 and 16 times slower than the analytical algorithm. The analytical algorithm clearly presents the best performance, although the computation time required for pose estimation is negligible compared to that required for the keypoint detection and description steps.

Fig. 6. Comparison of computation time: the analytical pose estimator versus the iterative methods (ICP and Orthogonal Iteration).

2) Reprojection error: To characterize pose estimation error, for each pose computation we re-projected the target model onto the image and measured the deviation between the detected target corners and the reprojected corners. From Figure 7, we notice that the algorithm is stable and accurate, with a mean error of 0.5 pixels.
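The reprojection-error measurement used here can be sketched as follows, assuming the estimated pose has been converted to OpenCV's rotation-vector form and that lens distortion is negligible; variable names are illustrative.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Mean reprojection error: project the 3D target corners with the estimated
// pose (R, T) and intrinsics K, then compare with the corners detected in the image.
static double meanReprojectionError(const std::vector<cv::Point3f>& modelCorners,
                                    const std::vector<cv::Point2f>& detectedCorners,
                                    const cv::Mat& R, const cv::Mat& T,
                                    const cv::Mat& K) {
    cv::Mat rvec;
    cv::Rodrigues(R, rvec);   // rotation matrix -> rotation vector expected by projectPoints

    std::vector<cv::Point2f> projected;
    cv::projectPoints(modelCorners, rvec, T, K, cv::noArray(), projected);

    double err = 0.0;
    for (size_t i = 0; i < projected.size(); ++i) {
        cv::Point2f d = projected[i] - detectedCorners[i];
        err += std::sqrt(static_cast<double>(d.x) * d.x + static_cast<double>(d.y) * d.y);
    }
    return err / static_cast<double>(projected.size());   // mean pixel deviation
}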

Fig. 7. Reprojection error.

3) Generalization error: This error measures the deviation obtained when reprojecting other objects onto the image using the pose computed from only one fiducial object. To determine the generalization error, we use 2D-3D template pairs to estimate a reference pose. The difference between the projections obtained with the reference pose and with the real pose constitutes the generalization error. The results obtained for the generalization error are shown in Figure 8. From the curve we notice that the overall error behavior of the algorithm presents some jittering, but this does not affect the overall accuracy of the pose, since the mean error remains within a few pixels.

Fig. 8. Generalization error.

4) Real distance estimation: To evaluate the method's real-world pose estimation accuracy, we placed two targets in the environment and measured the real and estimated distance between them. The real distance between the two markers is T = (-1, 25, 5) mm; this vector is the translation of the coordinate system of the second marker with respect to the coordinate system of the first marker. The results are illustrated in Figure 9. The mean errors in the X and Y components of the translation vector are 3.884 mm and 3.719 mm, respectively. These errors are low except for the relatively large error in the Z translation direction, due to the use of a single camera.

Fig. 9. Real distance estimation.

From Table I, we notice that the pose estimator presents a relatively large mean error in the Z direction, with correspondingly large variance and standard deviation values. These errors occur due to the lack of information resulting from the use of a single camera, which cannot produce a depth function representing the scene. Indeed, accurate 3D localization approaches generally rely upon stereo-vision techniques which evaluate distance using the spatial disparity of a fiducial object in two images [1] [19]. Such a measurement system requires a pair of calibrated, spatially oriented cameras pointing at the object of interest in order to define a disparity function relating corresponding features in the two views. Alternatively, a single camera can be used to estimate accurate 3D positions by capturing images from multiple viewpoints; in our test, however, a single-view monocular device was used, which explains the inaccuracies in the Z distance estimation.

5) AR tracking: The aim of this experiment is to informally evaluate the pose and virtual overlay accuracy. Figure 10 shows the obtained results: the camera is moved freely around the target object. The algorithm detects and tracks the keypoints in each frame, and the pose estimator determines the position and orientation of the camera. We notice that the virtual object is accurately superimposed on the real image over different camera poses.
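Returning to the real-distance experiment, the inter-marker translation can be derived from two estimated poses by composing the camera-from-marker transforms; the short sketch below (our own illustration, with hypothetical names) shows the computation.

#include <opencv2/opencv.hpp>

// Given the poses of two markers expressed in the camera frame, (R1, T1) and
// (R2, T2), return the translation of marker 2's origin expressed in marker 1's
// frame. Comparing this vector with the measured inter-marker translation gives
// the real-distance estimation error reported above.
static cv::Mat relativeTranslation(const cv::Mat& R1, const cv::Mat& T1,
                                   const cv::Mat& R2, const cv::Mat& T2) {
    // p_cam = R1 * p_m1 + T1 and p_cam = R2 * p_m2 + T2, so the origin of
    // marker 2 (p_m2 = 0) seen from marker 1 is R1^T * (T2 - T1).
    return R1.t() * (T2 - T1);
}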

TABLE I. STATISTICS RELATED TO 3D LOCALIZATION USING THE POSE ESTIMATOR: mean error (mm), variance (mm²), and standard deviation (mm) of the translation components.

Fig. 10. Pose estimation and virtual overlay results.

VI. CONCLUSION

In this paper, we present a markerless tracking system for mobile AR applications. We address the problem of tracking and augmenting the scene using only natural features present in the environment. The method extracts keypoints and local descriptors, matches keypoints in the probe image with those in the gallery, then performs pose estimation and overlays 3D graphics on the real image. We evaluate the performance of the application in terms of resources such as processing time and memory utilization (number of keypoints and matched pairs). We analyze localization performance and find that the method is efficient and robust. Despite the limited support for image processing provided by the device's hardware and operating system, our experiments demonstrate robust real-time performance and prove the validity of vision-based tracking on mobile platforms using only image data. In future work, we aim to further improve the application's frame rate and image gallery size while retaining robust tracking.

REFERENCES

[1] M. Maidi, F. Ababsa, and M. Mallem, "Handling occlusions for augmented reality systems," Eurasip Journal on Image and Video Processing, vol. 2010, pp. 1-12, 2010.
[2] M. Maidi, M. Preda, and V. H. Le, "Markerless tracking for mobile augmented reality," in IEEE International Conference on Signal and Image Processing Applications (ICSIPA 2011), Kuala Lumpur, Malaysia: IEEE Signal Processing Society, November 2011.
[3] G. Bleser and D. Stricker, "Advanced tracking through efficient image processing and visual-inertial sensor fusion," in IEEE Virtual Reality (VR), 2008.
[4] Y. Yoon, A. Kosaka, J. Park, and A. Kak, "A new approach to the use of edge extremities for model-based object tracking," in Int. Conf. on Robotics and Automation (ICRA), 2005.
[5] L. Vacchetti, V. Lepetit, and P. Fua, "Combining edge and texture information for real-time accurate 3D camera tracking," in Int. Symp. on Mixed and Augmented Reality (ISMAR), 2004.
[6] D. Stricker, G. Klinker, and D. Reiners, "A fast and robust line-based optical tracker for augmented reality applications," in Proc. First International Workshop on Augmented Reality (IWAR'98), San Francisco, USA, 1998.
[7] L. Naimark and E. Foxlin, "Circular data matrix fiducial system and robust image processing for a wearable vision-inertial self-tracker," in IEEE International Symposium on Mixed and Augmented Reality (ISMAR'02), Darmstadt, Germany, 2002.
[8] --, "Encoded LED system for optical trackers," in ACM and IEEE International Symposium on Mixed and Augmented Reality (ISMAR'05), Vienna, Austria, October 2005.
[9] M. Maidi, F. Ababsa, and M. Mallem, "Robust fiducials tracking in augmented reality," in The 13th International Conference on Systems, Signals and Image Processing (IWSSIP 2006), Budapest, Hungary, 2006.
[10] --, "Robust augmented reality tracking based visual pose estimation," in 3rd International Conference on Informatics in Control, Automation and Robotics (ICINCO'06), Setúbal, Portugal, 2006.
[11] --, "Active contours motion based on optical flow for tracking in augmented reality," in 8th International Conference on Virtual Reality (VRIC'06), Laval, France, 2006.
[12] --, "Vision-inertial tracking system for robust fiducials registration in augmented reality," in IEEE Symposium on Computational Intelligence for Multimedia Signal and Vision Processing (CIMSVP 2009), Nashville, USA, March 30 - April 2, 2009.
[13] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Computer Vision, vol. 60, no. 2, 2004.
[14] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, "Speeded-up robust features (SURF)," Computer Vision and Image Understanding, vol. 110, no. 3, 2008.
[15] D. Wagner, G. Reitmayr, A. Mulloni, T. Drummond, and D. Schmalstieg, "Real-time detection and tracking for augmented reality on mobile phones," IEEE Trans. on Visualization and Computer Graphics, vol. 16, no. 3, 2010.
[16] M. Ozuysal, P. Fua, and V. Lepetit, "Fast keypoint recognition in ten lines of code," in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2007.
[17] M. Maidi, J.-Y. Didier, F. Ababsa, and M. Mallem, "A performance study for camera pose estimation using visual marker based tracking," Machine Vision and Applications, IAPR International Journal, Springer, vol. 21, no. 3, 2010.
[18] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, 2000.
[19] M. Maidi, M. Mallem, L. Benchikh, and S. Otmane, An Evaluation of Camera Pose Methods for an Augmented Reality System: Application to Teaching Industrial Robots, ser. Lecture Notes in Computer Science, vol. 7420: Transactions on Computational Science XVII. Springer Berlin Heidelberg, 2013.
[20] P. Besl and N. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, 1992.
[21] C. P. Lu, G. D. Hager, and E. Mjolsness, "Fast and globally convergent pose estimation from video images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 6, 2000.
