Epipolar Geometry and Visual Servoing




Epipolar Geometry and Visual Servoing. Domenico Prattichizzo, joint work with Gian Luca Mariottini and Jacopo Piazzi. www.dii.unisi.it/prattichizzo, Robotics & Systems Lab, University of Siena, Italy. Scuola di Dottorato CIRA "Controllo di Sistemi Robotici per la Manipolazione e la Cooperazione", Bertinoro (FC), 14-16 July 2003. Slides version: June 9, 2003. This is a preliminary version of my talk; an updated version of the slides will soon be available at the School web site or at my home page.

Visual servoing problem. Unknown desired position; available current and desired views. Image-based VS [Hutchinson et al. 96]. Fully actuated camera-robot.

Contribution. Two-view epipolar geometry: the fundamental matrix F [Faugeras 1993, Zisserman 2000], relating the current view and the desired view. A natural setting for contour-based VS; epipolar tangency for surface reconstruction applications [Cipolla 2000]. Contours are very important in unstructured environments and in outdoor navigation, and no contour parameterization is required.

Initial and final positions. Current position and current image; target position and target image. The fundamental matrix F is estimated from the correspondences between the current and target images, yielding the current epipole and the desired epipole.

Given a set of corresponding points u_a and u_d in homogeneous coordinates, there exists a matrix F, called the fundamental matrix [Faugeras 1993, Zisserman 2000], such that u_d^T F u_a = 0. Exploiting the kinematics of epipoles (target epipole, main plane, current epipole). Main idea: perform a sequence of translations and rotations, depending on the epipole coordinates, in order to reach the target.
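The constraint above can be checked numerically. The sketch below assumes calibrated cameras, so that F reduces to the essential matrix E = [t]_x R; the motion and the 3D point are illustrative numbers, not values from the talk.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def essential(R, t):
    """Essential matrix E = [t]_x R, with X_d = R X_a + t between frames."""
    return skew(t) @ R

# Illustrative pure-translation motion between current and desired frames
R = np.eye(3)
t = np.array([-1.0, 0.0, 0.0])
E = essential(R, t)

# One 3D point seen in both views (normalized homogeneous projections)
X_a = np.array([0.5, 0.2, 4.0])
X_d = R @ X_a + t
u_a = X_a / X_a[2]
u_d = X_d / X_d[2]

print(abs(u_d @ E @ u_a) < 1e-12)   # the epipolar constraint holds
```

For an uncalibrated camera with intrinsic matrix K, the same check applies with F = K^-T E K^-1 and pixel coordinates instead of normalized ones.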

3-Phase Algorithm (the planar case, or planar homing): reaching the target plane; getting coplanarity; 2D homing. A general rotation R and translation t exist between the current and target frames.

Essential and Fundamental Matrices. Two planar homing methods: the Epipole Symmetry Algorithm [ICRA 2001], for a partially calibrated camera (known principal point (u_0, v_0)), and the Auto-Epipole Algorithm, for a fully uncalibrated camera. Rotate to get pure translation!

Fundamental matrix under pure translation (the theory is not only for planar cases!): the auto-epipole. In the left and right images, the current and desired epipoles are equal, and corresponding epipolar lines have the same angular coefficient (slope).

Overlapped images. If the camera motion is a pure translation, a bi-tangent to the corresponding apparent contours coincides with an epipolar line, and the epipole is the intersection of the bi-tangents. Pure translation: four bi-tangents obtained from four corresponding points in the overlapped images.

Non-pure translation: four bi-tangents obtained from four corresponding points in the overlapped images. Same orientation (pure translation)? Then all the bi-tangents intersect in the same point (the autoepipole).
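The intersection test can be sketched in a few lines. In this hypothetical setup, each bi-tangent is represented as the homogeneous line l = u_a x u_d joining a pair of corresponding points; under pure translation every l_i satisfies l_i . e = 0, so the autoepipole e is the least-squares null vector of the stacked line matrix. All point values below are synthetic.

```python
import numpy as np

def bitangent_lines(pts_a, pts_d):
    """Homogeneous lines joining each pair of corresponding points."""
    return np.array([np.cross(ua, ud) for ua, ud in zip(pts_a, pts_d)])

def autoepipole(lines):
    """Least-squares common point of a pencil of homogeneous lines."""
    _, _, Vt = np.linalg.svd(np.asarray(lines))
    e = Vt[-1]
    return e / e[2]          # normalize the third coordinate to 1

# Synthetic pure translation: each desired point lies on the line
# through its current point and the (known) epipole e_true.
e_true = np.array([2.0, 1.0, 1.0])
pts_a = np.array([[0.1, 0.3, 1.0], [0.4, -0.2, 1.0],
                  [-0.3, 0.5, 1.0], [0.7, 0.1, 1.0]])
pts_d = np.array([0.5 * ua + 0.5 * e_true for ua in pts_a])

lines = bitangent_lines(pts_a, pts_d)
print(np.allclose(autoepipole(lines), e_true))
```

With noisy lines the same SVD gives the point minimizing the squared algebraic distances, which is one plausible way to implement the intersection check of the next slide.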

Theorem. Given 8 point matches (u_a, u_d) in the two views, and letting the data matrix A (the 8 x 9 matrix whose i-th row collects the coordinate products of the i-th match, as in the eight-point algorithm) have rank 8, then, when all the bi-tangents intersect in the same point, the two cameras have the same orientation and the intersection point is the epipole. Visual servoing (planar case): Step 1, find the right orientation by exploiting the auto-epipole property; Step 2, follow the baseline until the target is reached.
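The data matrix A of the theorem can be built explicitly. Below is a sketch of the standard eight-point estimate of F from such a matrix, on synthetic noiseless matches generated from an assumed motion; Hartley-style coordinate normalization is omitted for brevity, so this is illustrative rather than robust.

```python
import numpy as np

def eight_point(pts_a, pts_d):
    """Estimate F from >= 8 matches: row i of A is kron(u_d, u_a), so
    that A @ vec(F) = 0; F is the rank-2-enforced null vector of A."""
    A = np.array([np.kron(ud, ua) for ua, ud in zip(pts_a, pts_d)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt2 = np.linalg.svd(F)       # enforce the rank-2 constraint
    S[2] = 0.0
    return U @ np.diag(S) @ Vt2

# Synthetic noiseless matches from a known rotation + translation
rng = np.random.default_rng(0)
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
t = np.array([1.0, 0.2, 0.1])
E = np.array([[0.0, -t[2], t[1]],
              [t[2], 0.0, -t[0]],
              [-t[1], t[0], 0.0]]) @ R  # ground-truth (essential) matrix

X = rng.uniform([-1.0, -1.0, 3.0], [1.0, 1.0, 6.0], size=(10, 3))
pts_a = X / X[:, 2:3]
Xd = X @ R.T + t
pts_d = Xd / Xd[:, 2:3]

F = eight_point(pts_a, pts_d)
residuals = [abs(ud @ F @ ua) for ua, ud in zip(pts_a, pts_d)]
print(max(residuals) < 1e-8)
```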

Getting a pure translation (Step 1): control based on the relative distances of the intersection points.

Keeping features in the field of view!? 2D homing (Step 2): follow the baseline until the target is reached! Following the baseline direction, the epipole does not change [P. Rives, INRIA].

2D homing, to do: compensate the orientation; find the baseline direction; drive the camera to the target (one more dof).

The auto-epipole experiment. 3-Phase Algorithm: reaching the target plane; getting coplanarity (the planar case).

Reaching the target plane. Property 1: the epipole e_d has zero ordinate, e_{d,v} = 0, iff the baseline belongs to the target plane. [Control variable: e_{d,v}.] When the baseline is on the plane, the epipole e_d of the desired camera has ordinate equal to 0, as observed from the actual camera. The experiment: Phase 1 epipole plots.
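Using e_{d,v} as the control variable can be sketched as follows: extract the desired epipole as the null vector of F and command a motion proportional to its ordinate. The gain, the proportional law, and the example F are assumptions for illustration, not the controller of the talk.

```python
import numpy as np

def right_epipole(F):
    """Right epipole e with F @ e = 0, normalized so that e[2] == 1."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]
    return e / e[2]

def phase1_command(F, gain=0.5):
    """Proportional law on the control variable e_{d,v} (drives it to 0)."""
    return -gain * right_epipole(F)[1]

# Illustrative F with known epipole (2, 3, 1): F = [e]_x satisfies F e = 0
e = np.array([2.0, 3.0, 1.0])
F = np.array([[0.0, -e[2], e[1]],
              [e[2], 0.0, -e[0]],
              [-e[1], e[0], 0.0]])
print(np.allclose(right_epipole(F), e))
```

As the camera moves and the baseline approaches the target plane, the re-estimated epipole ordinate shrinks and so does the commanded motion, which matches the Phase 1 epipole plots described above.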

3-Phase Algorithm. Phase 2: Getting coplanarity.

3-Phase Algorithm. Phase 3: 2D homing (R, t).

Conclusions. The VS algorithm is based on the epipole kinematics; epipole estimation is obtained from scene point features (or contours), and the auto-epipole approach works with a completely uncalibrated camera. Next: epipole estimation must be robustly designed against image noise, with a trade-off between accuracy and computational cost. Work is in progress on contour-based visual servoing and on comparison with existing methods. Extend the auto-epipole to fully actuated 3D camera motion (in synergy with surface reconstruction using apparent contours and profiles [Cipolla and Giblin, 2000], [Chesi, Malis and Cipolla, 2000]). Enforce the estimation of epipoles for the contour case via multiple-view geometry [Hartley and Zisserman, 2000]. Trajectory optimization (path planning) and phase fusion. How to extend to omnidirectional cameras [Daniilidis]? MORE SLIDES...

3-Phase Algorithm. Phase 2: Getting coplanarity. Phase 3: 2D homing (R, t).

Phase 2: Getting coplanarity. Observation: if x_c is not on the plane, translating along it makes the ordinate of the target epipole change (the epipole's ordinate is not zero). Observation: if x_c is on the plane, translating along it keeps the ordinate of the target epipole at zero.

Phase 2: Getting coplanarity. Idea: perform a test translation along x_c and observe the target epipole; then rotate around z_c and repeat the process.