MOBILE LOCALIZATION TECHNIQUES. Shyam Sunder Kumar & Sairam Sundaresan

1 MOBILE LOCALIZATION TECHNIQUES Shyam Sunder Kumar & Sairam Sundaresan

2 LOCALISATION The process of identifying the location and pose of a client based on sensor inputs. The common approach uses GPS and orientation sensors, but very accurate GPS and orientation sensors are not cheap, so there is significant interest in fast and accurate image-based localisation. Useful for AR purposes.

3 From Structure-from-Motion Point Clouds to Fast Location Recognition

4 INTUITION 3D models impose stronger geometric constraints on scene views and, in particular, yield the camera pose directly. 3D models can be built efficiently from large image collections, and image-based scene recognition and retrieval is also possible in near real time.

5 PROPOSED APPROACH Offline: build representative 3D models for the given scene; index features from the images using vocabulary trees for fast retrieval. Online: feature matching followed by geometric verification.

6 3D SCENE REPRESENTATION Use SIFT as the primary tool to represent point features. For stable points, i.e. points which appear in many images and are matchable, the descriptor list shows redundancy. Hence, the descriptor set can be compressed without loss in registration performance. Mean shift clustering is applied to quantize the SIFT descriptors belonging to each point.
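
A minimal sketch of this compression step, assuming scikit-learn's MeanShift and synthetic stand-in descriptors; the bandwidth value and the unit re-normalisation are illustrative choices, not the paper's:

```python
# Sketch (not the authors' code): compress the SIFT descriptors observed for
# one 3D point with mean shift, keeping only the cluster centres.
import numpy as np
from sklearn.cluster import MeanShift

def compress_descriptors(descriptors, bandwidth=0.4):
    """descriptors: (N, 128) L2-normalised SIFT vectors for a single 3D point.
    Returns the mean-shift cluster centres, re-normalised to unit length."""
    ms = MeanShift(bandwidth=bandwidth).fit(descriptors)
    centres = ms.cluster_centers_
    return centres / np.linalg.norm(centres, axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake redundant observations: two underlying appearances plus small noise.
    base = rng.random((2, 128))
    obs = np.repeat(base, 20, axis=0) + 0.02 * rng.standard_normal((40, 128))
    obs /= np.linalg.norm(obs, axis=1, keepdims=True)
    print("compressed", len(obs), "descriptors to", len(compress_descriptors(obs)))
```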

7 3D SCENE RECONSTRUCTION Using a fronto-parallel assumption, the scale found in the image can be extrapolated to a 3D scale. This scale is later used to estimate the size of a 3D feature in synthetic views, thereby affecting patch visibility. Each descriptor also carries a directional component pointing towards the camera in which the descriptor was extracted.
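
A small worked sketch of the fronto-parallel scale reasoning: a feature of s pixels detected at depth z with focal length f (in pixels) corresponds to a 3D patch of roughly s·z/f world units, which can then be re-projected into any other view to predict its apparent size. The exact formulation in the paper may differ; the numbers below are illustrative.

```python
# Hedged sketch of the scale reasoning (the paper's exact formulation may
# differ): back-project an image-space scale to a metric patch size, then
# predict its projected size in another view.
def image_scale_to_3d(s_px, depth, focal_px):
    """A feature of s_px pixels at the given depth spans ~s_px * depth / focal."""
    return s_px * depth / focal_px

def project_3d_scale(size_3d, depth, focal_px):
    """Predicted size in pixels of a 3D patch seen at the given depth."""
    return focal_px * size_3d / depth

# Example: an 8 px feature at 5 m with f = 1000 px spans ~0.04 m; seen from
# 20 m with f = 800 px it would project to 1.6 px.
size_3d = image_scale_to_3d(s_px=8.0, depth=5.0, focal_px=1000.0)
print(size_3d, project_3d_scale(size_3d, depth=20.0, focal_px=800.0))
```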

8 3D SCENE RECONSTRUCTION

9 SYNTHETIC VIEWS The reconstructed model is represented as a 3D point cloud with associated scale values and feature descriptors. In addition, the set of images used to build the model with known orientation is available. This information allows registration of new views sufficiently close to the original ones. But in order to be able to compute the poses for images taken far from the originally provided set of views, the authors propose the creation of synthetic views located at additional positions not covered by the original images.

10 SYNTHETIC VIEWS Synthetic cameras are placed uniformly on the horizontal plane. Under the assumption of dominant horizontal viewing directions, 12 synthetic views are used with a 30° rotation between the cameras. Not all generated synthetic views are really useful: given the 3D position and the respective scale of each triangulated point in the sparse model, the projected feature size in the synthetic images can be estimated, and therefore the visibility of each 3D point can be inferred, which in turn determines whether a synthetic view is useful.
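
A tiny sketch of the viewing-direction layout, assuming 12 directions spaced 30° apart in the ground plane; the grid of synthetic camera positions itself is not generated here:

```python
# Minimal sketch: 12 horizontal viewing directions, 30 degrees apart.
import numpy as np

def synthetic_viewing_directions(n_views=12):
    """Unit direction vectors in the ground plane, 360/n_views degrees apart."""
    angles = np.deg2rad(np.arange(n_views) * 360.0 / n_views)
    return np.stack([np.cos(angles), np.sin(angles), np.zeros_like(angles)], axis=1)

print(synthetic_viewing_directions().shape)   # (12, 3)
```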

11 CONDITIONS FOR VISIBILITY OF A 3D POINT A 3D point is potentially visible in a synthetic view if the following criteria are met: the projected feature must be in front of the camera and lie within the viewing frustum; the scale of the projected 3D feature must be larger than or equal to one pixel to ensure detectability; and one of the associated descriptors must have been extracted from an original image with a sufficiently similar viewing direction, due to the limited repeatability of SIFT across viewpoint changes.
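
A hedged sketch of the three visibility tests, assuming a pinhole camera model; the angular threshold and helper names are illustrative, not the paper's values:

```python
# Hedged sketch of the three tests (pinhole model; the 30-degree threshold and
# names are illustrative assumptions, not the paper's values).
import numpy as np

def is_point_visible(X, size_3d, R, t, K, width, height,
                     descriptor_dirs, max_angle_deg=30.0):
    """X: 3D point (world frame), size_3d: its metric patch size,
    R, t: world-to-camera rotation/translation, K: 3x3 intrinsics,
    descriptor_dirs: unit vectors (point -> original camera) per descriptor."""
    Xc = R @ X + t
    if Xc[2] <= 0:                                  # behind the camera
        return False
    u, v, w = K @ Xc
    u, v = u / w, v / w
    if not (0 <= u < width and 0 <= v < height):    # outside the frustum
        return False
    if K[0, 0] * size_3d / Xc[2] < 1.0:             # projects to less than 1 px
        return False
    view_dir = -(R.T @ Xc)                          # point -> synthetic camera
    view_dir /= np.linalg.norm(view_dir)
    cos_max = np.cos(np.deg2rad(max_angle_deg))
    # at least one source descriptor was seen from a similar direction
    return any(d @ view_dir >= cos_max for d in descriptor_dirs)
```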

12 COMPRESSED SCENE REPRESENTATION A reduced set of documents has two major advantages over utilizing the full set of real and synthetic views: the signal-to-noise ratio for the vocabulary tree is increased, since a reduced document set is expected to be more discriminative for its respective scene content, and the smaller database size has a positive impact on run-time efficiency. The overall goal of the compression strategy is to keep a minimal number of documents while still ensuring a high probability of successful registration of new images.

13 COMPRESSED SCENE REPRESENTATION A view V can be successfully registered by a set of 3D points P if a certain number of 3D points from P are visible in V and have a good spatial distribution in the image. For given sets of 3D documents and views, a binary matrix can be constructed which has an entry equal to one if the respective document covers the particular view, and zero otherwise. In order to have every view covered by at least one document, a document covers its corresponding view by default.

14 COMPRESSED SCENE REPRESENTATION The objective is to determine a subset of the documents, such that every view is still covered by at least one 3D document. A straightforward greedy approach is used to determine a reduced but representative subset of documents with low time complexity.
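
A minimal sketch of such a greedy selection over the binary coverage matrix described above; the matrix and names are illustrative, not the authors' implementation:

```python
# Greedy set-cover style compression: repeatedly keep the document that
# covers the most still-uncovered views.
import numpy as np

def greedy_document_selection(coverage):
    """coverage[d, v] == 1 iff document d covers view v.
    Returns indices of a reduced document set covering every coverable view."""
    uncovered = np.ones(coverage.shape[1], dtype=bool)
    selected = []
    while uncovered.any():
        gains = (coverage[:, uncovered] > 0).sum(axis=1)
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break                      # remaining views are not coverable
        selected.append(best)
        uncovered &= coverage[best] == 0
    return selected

if __name__ == "__main__":
    cov = np.array([[1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 0, 1, 1]])
    print(greedy_document_selection(cov))   # [0, 2]
```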

15 VIEW REGISTRATION Steps: find potentially relevant matching feature sets (using the vocabulary tree); then perform geometric verification (expensive): check the possible matches and determine the pose w.r.t. the 3D model. The number of verifications is reduced for a performance gain.

16 VOCAB TREE 3 levels with 50 children per node. Leaves contain quantised feature descriptors. An approximate solution is obtained with K·D comparisons (branching factor times depth) instead of an exhaustive search. Novel scoring scheme. Optimisation: a CUDA-based approach is used for feature comparisons.

17 BUILDING A VOCABULARY TREE Example with branching factor k=3 and depth L=2. (Slide credit: T. Tommasi)
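
A small illustrative sketch of building such a tree by hierarchical k-means with branching factor k and depth L, using scikit-learn's KMeans; this is a toy version, not the authors' code:

```python
# Toy vocabulary tree: recursive k-means, then quantisation by descending the
# tree with k comparisons per level.
import numpy as np
from sklearn.cluster import KMeans

def build_vocab_tree(descriptors, k=3, levels=2, seed=0):
    """Recursively split descriptors; returns a nested dict of cluster centres."""
    if levels == 0 or len(descriptors) < k:
        return {"centres": None, "children": []}
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(descriptors)
    children = [build_vocab_tree(descriptors[km.labels_ == i], k, levels - 1, seed)
                for i in range(k)]
    return {"centres": km.cluster_centers_, "children": children}

def quantize(tree, d):
    """Return the leaf path (visual word) for descriptor d."""
    path = []
    while tree["centres"] is not None:
        i = int(np.argmin(np.linalg.norm(tree["centres"] - d, axis=1)))
        path.append(i)
        tree = tree["children"][i]
    return tuple(path)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    descs = rng.random((300, 128))
    tree = build_vocab_tree(descs, k=3, levels=2)
    print(quantize(tree, descs[0]))
```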

18 VOCAB TREE NOVEL SCORING SCHEME Assumptions: if the query Q contains fewer features than the database document D, then corresponding words in D and Q are the same with high probability; the probability of a mismatch is uniform for every leaf of the vocabulary tree. The score is then obtained as the ratio of true positives to false positives.

19 POSE EXTRACTION If camera parameters are available: use fast RANSAC on 3-point correspondences. Otherwise: use a 4-point perspective pose approach to estimate pose and focal length simultaneously.
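
A hedged sketch of the calibrated branch, with OpenCV's solvePnPRansac standing in for the paper's 3-point minimal solver inside RANSAC; the uncalibrated 4-point pose-plus-focal-length estimation has no direct OpenCV equivalent and is not shown. Threshold values are illustrative:

```python
# Robust pose from 2D-3D matches with RANSAC (stand-in for the 3-point solver).
import numpy as np
import cv2

def estimate_pose(points_3d, points_2d, K):
    """points_3d: (N, 3), points_2d: (N, 2), K: 3x3 intrinsics.
    Returns (R, t, inlier_indices) or None on failure."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64),
        points_2d.astype(np.float64),
        K.astype(np.float64),
        None,                        # no distortion assumed
        reprojectionError=4.0,       # pixel threshold, illustrative value
        iterationsCount=1000,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec, inliers.ravel()
```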

20 RESULTS

21 RESULTS

22 RESULTS

23 WIDE AREA LOCALISATION ON MOBILE PHONES

24 OVERVIEW Localise a mobile user's 6DOF pose. Approach: localisation using a sparse 3D reconstruction.

25 SYSTEM ORGANISATION Offline: generate sparse reconstructions, feature extraction. Online: localisation via feature extraction, matching and pose estimation.

26 OFFLINE Structure from Motion (SfM): image acquisition, reconstruction, feature extraction and triangulation, global registration. Potentially Visible Sets (PVS).

27 SFM IMAGE ACQUISITION 8-megapixel SLR with a large FOV (~90°). Pre-calibrated camera.

28 SFM - RECONSTRUCTION Extract SIFT features. Coarsely match images using vocabulary trees; the tree is trained using ~2 million features from 2500 images. Each segment is reconstructed separately, with 50-300 images per segment.

29 FEATURE EXTRACTION AND TRIANGULATION Extract some more features (more on this later). Triangulate features across matched image pairs. Suppress as outliers any features whose reprojection error is greater than some threshold.
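
A minimal sketch of the triangulation and outlier-suppression step, assuming known projection matrices and an illustrative pixel threshold:

```python
# Triangulate matched points and drop those with a large reprojection error.
import numpy as np
import cv2

def triangulate_and_filter(P1, P2, pts1, pts2, max_err_px=2.0):
    """P1, P2: 3x4 projection matrices; pts1, pts2: (N, 2) matched points."""
    P1, P2 = np.asarray(P1, float), np.asarray(P2, float)
    Xh = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
    X = (Xh[:3] / Xh[3]).T                                    # (N, 3)

    def reproj_err(P, pts):
        proj = P @ np.hstack([X, np.ones((len(X), 1))]).T     # 3 x N
        proj = (proj[:2] / proj[2]).T
        return np.linalg.norm(proj - pts, axis=1)

    keep = (reproj_err(P1, pts1) < max_err_px) & (reproj_err(P2, pts2) < max_err_px)
    return X[keep], keep
```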

30 RECONSTRUCTION (EXAMPLE)

31 GLOBAL REGISTRATION Combine reconstructed segments Align manually to a 2D floor plan

32 POTENTIALLY VISIBLE SETS Idea: discretise the environment into viewing cells and pre-compute cell-by-cell visibility. Why it is needed: it reduces the data that must be loaded at run time.

33 PVS - ORGANISATION Every cell contains a number of PVS, and at least one PVS points to other cells. Each cell contains its visible features, and each feature remembers all images it occurs in.
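
An illustrative sketch of how such a PVS record could be organised; the field names are assumptions, not the authors' on-disk format:

```python
# Each viewing cell stores the ids of its potentially visible 3D features and
# references to neighbouring cells, so only one PVS needs to be loaded at a time.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PotentiallyVisibleSet:
    cell_id: int
    feature_ids: List[int]                 # 3D features visible from this cell
    neighbour_cells: List[int]             # cells to fall back to / pre-fetch
    feature_to_images: Dict[int, List[int]] = field(default_factory=dict)
    # each feature remembers the source images it was observed in

def select_pvs(pvs_by_cell: Dict[int, PotentiallyVisibleSet], cell_id: int):
    """Look up the PVS for the user's current viewing cell."""
    return pvs_by_cell[cell_id]
```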

34 PVS EXAMPLE

35 ONLINE LOCALISATION Feature extraction, PVS selection, localisation by feature matching. They also use their own online memory management (~5 MB).

36 EXTRACT DESCRIPTORS Proprietary, SURF-based descriptor (~80 bytes/feature). Faster than SURF and GPU-SIFT (tested on a 2.5 GHz Intel Core2 Quad and an NVIDIA GeForce GTX 280). Describing a 640x480 image takes about 120 ms on the phone and about 20 ms on the 2.5 GHz Core2 Quad, which is roughly 80% of the total localisation time!

37 PVS SELECTION Subselect the PVS based on orientation/scene. Outdoors: GPS. Indoors: WiFi triangulation, Bluetooth, infrared beacons (not sure how). No sensors: user interface. Perform incremental tracking and reinitialise if required (using a cue from the previous PVS).

38 POINT MATCHING Two methods: directly matching PVS features with camera-image features, or a vocabulary tree voting scheme. Neither method is robust to outliers, so RANSAC is used with a 3-point pose hypothesis and up to 50 correspondences.

39 EXPERIMENTS

40 EXPERIMENTS

41 EXPERIMENTS

62 LOCATION BASED AUGMENTED REALITY ON MOBILE PHONES

63 IMPLEMENTING AR ON PHONES Typically, object detection and recognition are used to provide information about the recognizable objects in the scene. Using markers on objects is invasive. SLAM approaches are more suitable for mapping unfamiliar areas, but the maps so generated are not precise enough for AR and localization.

64 SYSTEM OVERVIEW A local database containing several images of the environment is created for use by the AR system. The images are taken at different locations and are used by the algorithm to find the best match to the live cell phone image. Once the best match is found, point correspondences between the two images are established after feature extraction. From these correspondences the pose between the two images can be computed, and finally the position and orientation of the cell phone camera can be found.

65 SYSTEM OVERVIEW

66 BUILDING THE DATABASE A stereo camera is used to take images of the environment. For each image, the pose of the camera as well as its intrinsic parameters are stored. For each image, SURF features are extracted, and the positions of these features as well as the descriptors are stored. Also, the 3D position of the image center is stored. The stereo camera provides metric information, which is later used in user localization.

67 SENSORS AND POSE ESTIMATION A Nokia N97 is used for the experiments. It has an accelerometer, a magnetometer and a rotation sensor. The accelerometer provides the second derivative of the position; however, data from this sensor is too noisy. It is instead used to estimate tilt from the projected gravitational components on the phone when the user is still. The magnetometer is used to measure the rotation around the vertical axis.
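
A hedged sketch of the tilt estimate from gravity: when the phone is held still, the accelerometer measures only gravity, so pitch and roll follow from its projected components. The axis convention below is an assumption, not necessarily the N97's:

```python
# Tilt from the accelerometer at rest; axis convention is an assumption.
import math

def tilt_from_gravity(ax, ay, az):
    """ax, ay, az: accelerometer reading (m/s^2) while the user holds still.
    Returns (pitch, roll) in degrees under an assumed x-right, y-up, z-out
    device convention."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(tilt_from_gravity(0.0, 0.0, 9.81))   # device lying flat -> (0.0, 0.0)
```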

68 REDUCING THE SEARCH SPACE Based on the computed pose, images in the database that are not likely to be seen by the user are discarded, e.g. images behind the user. To further reduce the search space, images whose centers are not within the camera field of view are discarded. This reduces the chances of poorly matching configurations. Image descriptors are loaded on demand. Grouping images which belong to the same room further reduces complexity.

69 IMAGE RETRIEVAL SURF features are matched between the user image and the images in the database. For each feature, the nearest neighbor in the database of image features is picked. Only matches that have a low enough distance, or whose ratio between the second-best distance and the best distance is high enough, are selected. The image with the highest number of matches is then selected from the database for further computation.
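
A small sketch of this selection rule using OpenCV's brute-force matcher; the distance and ratio thresholds are illustrative, not the paper's values:

```python
# Keep a match if its distance is small, or if the second-best candidate is
# sufficiently worse (Lowe-style ratio test). Assumes float32 descriptors and
# at least two database descriptors.
import cv2

def select_matches(query_desc, db_desc, max_dist=0.25, min_ratio=1.5):
    """Return the accepted best matches from query descriptors to the database."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    selected = []
    for best, second in matcher.knnMatch(query_desc, db_desc, k=2):
        if best.distance < max_dist or second.distance > min_ratio * best.distance:
            selected.append(best)
    return selected
```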

70 OUTLIER REMOVAL First fit a homography between the two sets of points. The points from one image are projected onto the other image using the computed homography. Points whose projection error is large are discarded as outliers. The remaining points are further refined using RANSAC to remove any outliers which may have passed the homography test. This step is done prior to the pose computation.
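
A minimal sketch of this two-stage filtering, here using OpenCV's RANSAC-based findHomography followed by an explicit transfer-error check; the threshold is an illustrative assumption:

```python
# Fit a homography, drop points whose transfer error is large, and keep the
# survivors for the subsequent pose computation.
import numpy as np
import cv2

def remove_outliers(pts_query, pts_db, max_err_px=5.0):
    """pts_query, pts_db: (N, 2) matched points. Returns a boolean keep mask."""
    H, _ = cv2.findHomography(pts_db, pts_query, cv2.RANSAC, max_err_px)
    if H is None:
        return np.zeros(len(pts_query), dtype=bool)
    ones = np.ones((len(pts_db), 1))
    proj = H @ np.hstack([pts_db, ones]).T          # project database points
    proj = (proj[:2] / proj[2]).T
    err = np.linalg.norm(proj - pts_query, axis=1)
    return err < max_err_px
```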

71 POSE COMPUTATION The goal here is to find the rotation and translation between the two images. Notation: Kc and Kd are the calibrated camera matrices of the phone and the stereo camera respectively; ci and di are 2D points in the cell phone and database images respectively; Xi is a 3D point in the database coordinate system.

72 REPROJECTION MINIMIZATION In order to ensure that the projected virtual objects match the image content as closely as possible, the reprojection error is minimized. The minimization is done using the Levenberg-Marquardt algorithm over 6 parameters (3 rotation + 3 translation). The initialization of R and T is done in two different ways, described next.
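
A hedged sketch of this refinement, using SciPy's Levenberg-Marquardt (least_squares with method="lm") and OpenCV's projectPoints; the parameterisation (Rodrigues rotation vector plus translation) matches the 6 parameters above, but the authors' implementation details may differ:

```python
# Refine the 6-parameter pose by minimising the 2D reprojection error with
# Levenberg-Marquardt; needs >= 3 correspondences so there are at least as
# many residuals as parameters.
import numpy as np
import cv2
from scipy.optimize import least_squares

def refine_pose(rvec0, tvec0, X, c, Kc):
    """X: (N, 3) 3D points in the database frame, c: (N, 2) phone-image points,
    Kc: 3x3 phone intrinsics, rvec0/tvec0: initial rotation (Rodrigues) and
    translation. Returns the refined (rvec, tvec)."""
    X = np.asarray(X, float)
    Kc = np.asarray(Kc, float)

    def residuals(p):
        proj, _ = cv2.projectPoints(X, p[:3], p[3:], Kc, None)
        return (proj.reshape(-1, 2) - c).ravel()

    p0 = np.hstack([np.ravel(rvec0), np.ravel(tvec0)]).astype(float)
    result = least_squares(residuals, p0, method="lm")
    return result.x[:3], result.x[3:]
```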

73 POSE INITIALIZATION In order to augment the scene accurately, the pose has to be initialized prior to the final minimization mentioned before. In the first method, the estimated rotation from the sensors is used to estimate the translation up to scale. This is done using SVD. The estimated pose is then refined by minimizing the Sampson criterion.

74 POSE INITIALIZATION In the second method, a linearized version of the reprojection error criterion is used. Once again, the rotation estimate obtained from the sensors is used here. It has the advantage that it can be quickly minimized. However, it is less meaningful, because it gives greater weight to points that are farther away from the image center and points which have high depth.

75 EXPERIMENTS The virtual objects used in the experiments were planar rectangles. The cell phone can also be localized in the environment using the proposed method; the pose error is between 10 and 15 cm. A Nokia N97 is used. SURF features take around 8 seconds to compute, while the rest of the processing takes less than 450 ms.

76 RESULTS

77 CONCLUSIONS Is this system feasible for practical use? It takes 8+ seconds to spit out results. What are the applications? Museum tours, art gallery guides. Can this be done more efficiently and more SIMPLY?
