3D Object Recognition in Clutter with the Point Cloud Library




3D Object Recognition in Clutter with the Point Cloud Library
Federico Tombari, Ph.D. (federico.tombari@unibo.it)
University of Bologna / Open Perception

Data representations in PCL
- PCL can deal with both organized (e.g. range maps) and unorganized point clouds. If the underlying 2D structure is available, efficient schemes can be used (e.g. integral images instead of a kd-tree for nearest-neighbor search).
- Both are handled by the same data structure (pcl::PointCloud, templated and thus highly customizable). Points can be XYZ, XYZ + normals, XYZI, XYZRGB, ...
- Support for RGB-D data.
- Voxelized representations are implemented by pcl::PointCloud + voxelization functions (e.g. voxel sampling); there are no specific types for voxelized maps.
- Currently rather limited support for 3D meshes.
[Figure: unorganized cloud, range map, RGB map, voxel map, 3D mesh]

Object Recognition and data representations
Usually, Object Recognition in clutter is done on 2.5D data (model views matched against scene views). It can also be done 3D vs. 3D, although scenes are usually 2.5D (and 3D vs. 2.5D does not work well). When models are 3D, we can render 2.5D views simulating the input of a depth sensor:

pcl::apps::RenderViewsTesselatedSphere render_views;
render_views.setResolution (resolution_);
render_views.setTesselationLevel (1); // 80 views
render_views.addModelFromPolyData (model); // VTK model
render_views.generateViews ();
std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr> views;
std::vector<Eigen::Matrix4f> poses;
render_views.getViews (views);
render_views.getPoses (poses);

Descriptor Matching
Typical paradigm for finding similarities between two point clouds:
1. Extract compact and descriptive representations (3D descriptors) on each cloud (possibly over a subset of salient points).
2. Match these representations to yield (point-to-point) correspondences.
Applications: 3D object recognition, cloud registration, 3D SLAM, object retrieval, ...
[Figure: keypoint detection -> description -> matching between the descriptor array of cloud 1 and that of cloud 2]

pcl::keypoints
3D keypoints are:
- Distinctive, i.e. suitable for effective description and matching (globally definable)
- Repeatable with respect to point-of-view variations, noise, etc. (locally definable)
The pcl::keypoints module includes:
- Detectors specifically proposed for 3D point clouds and range maps:
  - Intrinsic Shape Signatures (ISS) [Zhong 09]
  - NARF [Steder 11]
  - (Uniform Sampling, i.e. voxelization)
- Several detectors "derived" from 2D interest-point detectors:
  - Harris (2D, 3D, 6D) [Harris 88]
  - SIFT [Lowe 04]
  - SUSAN [Smith 95]
  - AGAST [Mair 10]
  - ...
[Figure: results from [Tombari 13] for ISS, HARRIS3D, TOMASI]

Global vs. local descriptors
pcl::features: compact representations aimed at detecting similarities between surfaces (surface matching). Based on the support size:
- Pointwise descriptors: simple and efficient, but not robust to noise and often not descriptive enough (e.g. normals, curvatures, ...)
- Local/regional descriptors:
  - Well suited to handle clutter and occlusions
  - Can be vector-quantized into codebooks
  - Used for segmentation, registration, recognition in clutter, 3D SLAM
- Global descriptors:
  - Complete information concerning the surface is needed (no occlusions and clutter, unless pre-processed)
  - Higher invariance, well suited for retrieval and categorization
  - More descriptive on objects with poor geometric structure (household objects, ...)

Summing up: 3D descriptors in PCL

Method                       | Category  | Unique LRF | Texture
-----------------------------|-----------|------------|--------
Struct. Indexing [Stein 92]  | Signature | No         | No
PS [Chua 97]                 | Signature | No         | No
3DPF [Sun 01]                | Signature | No         | No
3DGSS [Novatnack 08]         | Signature | No         | No
KPQ [Mian 10]                | Signature | No         | No
3D-SURF [Knopp 10]           | Signature | Yes        | No
SI [Johnson 99]              | Histogram | RA         | No
LSP [Chen 07]                | Histogram | RA         | No
3DSC [Frome 04]              | Histogram | No         | No
ISS [Zhong 09]               | Histogram | No         | No
USC [Tombari 10]             | Histogram | Yes        | No
PFH [Rusu 08]                | Histogram | RA         | No
FPFH [Rusu 09]               | Histogram | RA         | No
Tensor [Mian 06]             | Histogram | No         | No
RSD [Marton 11]              | Histogram | RA         | No
HKS [Sun 09]                 | Other     | -          | No
MeshHoG [Zaharescu 09]       | Hybrid    | Yes        | Yes
SHOT [Tombari 10]            | Hybrid    | Yes        | Yes
PFHRGB                       | Histogram | Yes        | Yes

(RA: reference axis only)

3D Object Recognition in clutter
Definition (typical setting):
A. a set of 3D models (often in the form of views);
B. one scene (at a time) including one or more models, possibly (partially) occluded, plus clutter. Models can be present in multiple instances in the same scene.
Goal(s):
- determine which models are present in the current scene
- (often) estimate the 6DoF pose of each model w.r.t. the scene
Applications: industrial robotics, quality control, service robotics, autonomous navigation, ...

Pipelines
LOCAL PIPELINE: Keypoint Extraction -> Description -> Matching -> Correspondence Grouping -> Absolute Orientation
GLOBAL PIPELINE: Segmentation -> Description -> Matching -> Alignment
Both pipelines end with ICP Refinement -> Hypothesis Verification.

Correspondence Grouping (local pipeline stage)
Problem: given a set of point-to-point correspondences C = {c_1, c_2, ..., c_n}, where c_i = {p_i,s, p_i,m}:
- divide C into groups (or clusters), each holding consensus for a specific 6DoF transformation;
- non-grouped correspondences are considered outliers.
General approach: RANSAC [Fischler 81], where the model is the 6DoF transformation obtained via Absolute Orientation, its parameters being a 3D rotation and a 3D translation.
Other approaches include specific geometric constraints deployed in 3D space.

Geometric consistency
Enforcing geometric consistency between pairs of correspondences [Chen 07]:
- Choose a seed correspondence c_i.
- Test the following pairwise geometric constraint between c_i and every other correspondence c_j:
  | ||p_i,m - p_j,m|| - ||p_i,s - p_j,s|| | < eps, for all j
- Add each correspondence satisfying the constraint to the group seeded by c_i (and remove it from the list).
- Eliminate groups having a small consensus set.
[Figure: correspondences c_i = (p_i,m, p_i,s) and c_j = (p_j,m, p_j,s) between MODEL and SCENE]

Geometric consistency (2)
GC enforces a 1D constraint over a transformation with 6 DoF -> a weak constraint, with a high number of ambiguities (it might fail if the number of correspondences is low!).
Pro: extremely simple and efficient. Use it when there are many correspondences and the data is noisy.

pcl::CorrespondencesPtr m_s_corrs; // fill it
std::vector<pcl::Correspondences> clusters; // output
pcl::GeometricConsistencyGrouping<PT, PT> gc_clusterer;
gc_clusterer.setGCSize (cg_size); // 1st param
gc_clusterer.setGCThreshold (cg_thres); // 2nd param
gc_clusterer.setInputCloud (m_keypoints);
gc_clusterer.setSceneCloud (s_keypoints);
gc_clusterer.setModelSceneCorrespondences (m_s_corrs);
gc_clusterer.cluster (clusters);

(m_s_corrs are correspondences with indices into m_keypoints and s_keypoints)

3D Hough Transform
3D Hough Transform [Vosselman 04], [Rabbani 05]:
- Extension of the classic 2D Hough Transform
- Can handle only a small number of parameters: simple surfaces (cylinders, spheres, cones, ...), no generic free-form objects
3D Generalized Hough Transform [Khoshelham 07]:
- Normals instead of gradient directions
- 6D Hough space: sparsity, high memory requirements
- O(M*N^3) complexity of the voting stage (M: number of 3D points, N: quantization intervals)
- Can hardly be used in practice [Khoshelham 07]

PoV-independent vote casting
Using the definition of a local RF, global-to-local and local-to-global transformations of 3D vectors can be defined:
  v_i^L = R_{GL_i} * v_i^G        v_i^G = R_{L_i G} * v_i^L
where the rows of R_{GL_i} are the local RF axes expressed in global coordinates, so that R_{L_i G} is its transpose:
  R_{GL_i} = [L_i,x  L_i,y  L_i,z]^T        R_{L_i G} = [L_i,x  L_i,y  L_i,z]
[Figure: a vector v_i at point p_i expressed in the local RF and in the global RF]

3D Hough Voting [Tombari 10]
Training stage:
- A unique reference point (e.g. the centroid b_m) is used to cast a vote from each feature point p_i,m.
- These votes are transformed into the local RF of each feature so as to be PoV-independent:
  v_i,m^L = R_{GL_i} * (b_m - p_i,m)
Testing stage:
- For each correspondence c_i, its scene point p_i,s casts a PoV-independent vote:
  v_i,s^G = R_{L_i G} * v_i,s^L + p_i,s
[Figure: model points p_1,m, p_2,m, p_3,m voting for b_m; scene points p_1,s, p_2,s, p_3,s voting for b_s in the global RF]

3D Hough Voting (2)
- Correspondence votes are accumulated in a 3D Hough space.
- Coherent votes point to the same position as the reference point b_s.
- Local maxima in the Hough space identify object instances (this handles the presence of multiple instances of the same model).
- The LRF allows reducing the voting space from 6D to 3D (translation only).
- Votes can be interpolated to handle the approximation introduced by the quantization of the voting space.
[Figure: model and scene votes accumulating in the Hough space]

3D Hough example in PCL

typedef pcl::ReferenceFrame RFType;
pcl::PointCloud<RFType>::Ptr model_rf; // fill with RFs
pcl::PointCloud<RFType>::Ptr scene_rf; // fill with RFs
pcl::CorrespondencesPtr m_s_corrs; // fill it
std::vector<pcl::Correspondences> clusters;
pcl::Hough3DGrouping<PT, PT, RFType, RFType> hc;
hc.setHoughBinSize (cg_size);
hc.setHoughThreshold (cg_thres);
hc.setUseInterpolation (true);
hc.setInputCloud (m_keypoints);
hc.setInputRf (model_rf);
hc.setSceneCloud (s_keypoints);
hc.setSceneRf (scene_rf);
hc.setModelSceneCorrespondences (m_s_corrs);
hc.cluster (clusters);

Absolute Orientation (local pipeline stage)
Given a set of coherent correspondences, determine the 6DoF transformation between the model and the scene (a 3x3 rotation matrix R - or equivalently a quaternion - and a 3D translation vector T), under the assumption that no outlier is present (conversely to, e.g., ICP). Given this assumption, the problem can be solved in closed form via Absolute Orientation [Horn 87] [Arun 87]: given a set of n exact correspondences c_1 = {p_1,m, p_1,s}, ..., c_n = {p_n,m, p_n,s}, R and T are obtained as
  argmin_{R,T} sum_{i=1..n} || p_i,s - R * p_i,m - T ||_2^2
This is simply a least-squares estimation problem over 3D vectors.
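For reference, the standard SVD-based closed-form solution (the Arun et al. formulation; Horn's quaternion method is an equivalent alternative) can be worked out as:

```latex
\begin{aligned}
&\text{Centroids:}\quad \bar p_m = \tfrac{1}{n}\sum_{i=1}^{n} p_{i,m},\qquad
 \bar p_s = \tfrac{1}{n}\sum_{i=1}^{n} p_{i,s}\\[2pt]
&\text{Cross-covariance:}\quad H = \sum_{i=1}^{n} (p_{i,m}-\bar p_m)\,(p_{i,s}-\bar p_s)^{T}\\[2pt]
&\text{SVD:}\quad H = U\Sigma V^{T},\qquad
 R = V\,\mathrm{diag}\!\left(1,\,1,\,\det(VU^{T})\right)U^{T}\\[2pt]
&\text{Translation:}\quad T = \bar p_s - R\,\bar p_m
\end{aligned}
```

The diag(1, 1, det(VU^T)) factor guards against the degenerate reflection case; with exact, outlier-free correspondences this R and T attain the minimum of the least-squares cost above.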

Segmentation for global pipelines (global pipeline stage)
With cluttered scenes, global descriptors require data pre-segmentation.
General approach: smooth-region segmentation via region growing [Rabbani 06]:
- Starting from a seed point p_s, add to its segment all points p_i in its neighborhood that satisfy:
  || p_s - p_i ||_2 < tau_d    and    n_s . n_i > tau_n
- Iterate over all the newly added points, considered as new seeds.
Fails with high object density and non-smooth objects.

MPS Segmentation
MPS (Multi-Plane Segmentation) [Trevor 13]:
- Fast, general approach focused on RGB-D data (depth + color)
- Works on organized point clouds (neighboring pixels can be accessed in constant time)
- Computes connected components on the organized cloud (exploiting 3D distances, but also normals and differences in RGB space)
- PCL class: pcl::OrganizedMultiPlaneSegmentation
- See PCL's Organized Segmentation Demo for more: pcl_organized_segmentation_demo
Once planes are detected, objects can be found via Euclidean clustering; e.g., the class pcl::OrganizedConnectedComponentSegmentation can follow, using the extracted planes as a mask.

Hypothesis Verification
Both the local and the global pipeline end with Hypothesis Verification: they provide a set of object hypotheses H = {h_1, h_2, ..., h_n}, where h_i = {M_i, R_i, t_i}. Several hypotheses are false positives: how do you discard them without compromising recall?
Typical geometric cues being enforced [Papazov 10] [Mian 06] [Bariya 10] [Johnson 99]:
- % of (visible) model points explained by the scene (i.e. having one close correspondent), a.k.a. inliers
- number of unexplained model points, a.k.a. outliers
These cues are applied sequentially, one hypothesis at a time. Two such methods in PCL: pcl::GreedyVerification and pcl::PapazovHV.

Global Hypothesis Verification [Aldoma 12]
Simultaneous geometric verification of all object hypotheses. Consider the two possible states of a single hypothesis: x_i in {0, 1} (inactive/active). By switching the state of a hypothesis, we can evaluate a global cost function that estimates how good the current solution chi = {x_1, x_2, ..., x_n} is. A global cost function integrating four geometrical cues is minimized:
  I(chi): B^n -> R,    chi = (x_1, ..., x_n),    x_i in B = {0, 1}
I(chi) considers the whole set of hypotheses as a global scene model instead of considering each model hypothesis separately.

Global Hypothesis Verification (2)
  I(chi) = sum_{p in S} ( Lambda_chi(p) + Upsilon_chi(p) - Omega_chi(p) ) + lambda * sum_{i=1..n} Phi_{h_i} * x_i
where:
- Omega_chi: scene inliers - maximize the number of scene points explained (orange)
- Lambda_chi: multiple assignment - minimize the number of scene points explained multiple times (black)
- Upsilon_chi: clutter - minimize the number of unexplained scene points close to active hypotheses (yellow, purple)
- Phi_{h_i}: number of outliers for hypothesis h_i - minimize the number of model outliers (green)
The optimization is solved using Simulated Annealing or other metaheuristics (via the METSlib library).

Using GHV in PCL

pcl::GlobalHypothesesVerification<pcl::PointXYZ, pcl::PointXYZ> gohv;
gohv.setSceneCloud (scene);
gohv.addModels (aligned_hypotheses, true);
gohv.setResolution (0.005f);
gohv.setInlierThreshold (0.005f);
gohv.setRadiusClutter (0.04f);
gohv.setRegularizer (3.f); // model outlier weight
gohv.setClutterRegularizer (5.f); // clutter point weight
gohv.setDetectClutter (true);
gohv.verify ();
std::vector<bool> mask_hv;
gohv.getMask (mask_hv);

Multi-pipeline HV [Aldoma 13]
- Typical scenario: objects in clutter lying on a dominant plane, RGB-D data.
- Exploits multiple pipelines over RGB-D data to handle different object characteristics (low texture, non-distinctive 3D shape, occlusions, ...).
- The Hypothesis Verification stage, based on GO [Aldoma ECCV12], fits this scenario well.
- RGB information is injected along the different modules.
[Diagram: a 2D local pipeline (SIFT detection and description, 2D descriptor matching, 3D back-projection), a 3D local pipeline (uniform 3D keypoint sampling, SHOT description, 3D descriptor matching) and a 3D global pipeline (scene segmentation, 3D "global" description, 3D descriptor matching + 6DoF pose estimation) feed into Correspondence Grouping, Absolute Orientation, ICP Refinement and GO Hypothesis Verification]

Multi-pipeline HV (2)
[Figure: results on the "Willow" and "Challenge" datasets]

THE END (of the theory part... let's try out the hands-on tutorial!)