3D Scanner using Line Laser


Di Lu
Electrical, Computer, and Systems Engineering
Rensselaer Polytechnic Institute

1. Introduction

The goal of 3D reconstruction is to recover the 3D properties of a geometric entity from its 2D images. We implemented the passive stereo method in one of the previous projects; passive stereo needs images from two cameras. Active stereo vision, in contrast, is a technique for reconstructing a 3D description of a scene from images observed by one camera and one light projector. In this project, I used a line laser as the light projector to implement the active stereo method and recover 3D objects. Reconstruction results for four different objects, together with the reconstruction accuracy, are shown.

2. Theory

1) Active Stereo Vision

Active stereo vision is a technique that actively uses light, such as a laser or structured light, to simplify the stereo matching problem. The equation relating a pixel (c, r) in an image to its corresponding 3D coordinates (x, y, z) is

    λ [c r 1]^T = W [R | T] [x y z 1]^T

Let (x, y, z) be expressed in the camera frame; then R = I and T = 0, and the equation simplifies to

    λ [c r 1]^T = W [I | 0] [x y z 1]^T = W [x y z]^T

where W is the intrinsic matrix of the camera. Multiplying by the inverse,

    W^{-1} [c r 1]^T = (1/λ) [x y z]^T = [x/λ y/λ z/λ]^T = [x/z y/z 1]^T

with λ = z. So with one camera we can only compute x/z and y/z; it is impossible to get the depth from one image without additional information.

Fig 1: Triangulation

However, when we project a laser line onto an object, it produces a curve in the picture. We can then compute the 3D coordinates of the object surface illuminated by the laser line, provided we know the equation of the light plane. Assume the laser line projected onto the surface of the object lies in the light plane

    Ax + By + Cz + D = 0

Let one image point on the line projected by the laser be (x_1, y_1), with pixel (c, r) (Fig 1). Let

    W^{-1} [c r 1]^T = [x/z y/z 1]^T = P

where P is a vector and P(i) is the i-th component of P. Then

    x = z P(1),   y = z P(2)

Substituting into the equation of the light plane,

    A z P(1) + B z P(2) + C z + D = 0

we can solve for the depth z from this equation and thus obtain the 3D coordinates

    (x, y, z) = (z P(1), z P(2), z P(3))

with P(3) = 1. To obtain the 3D properties of the whole object, we just need to scan the laser line over the object while shooting a video, then apply the method above to every frame of the video to reconstruct the 3D coordinates of the surface of the object.

To implement this method, we need to know:

1. The intrinsic parameters of the camera. I used a calibration toolbox to obtain the intrinsic parameters of my camera; the details are given in the Approach section.
2. The location of the curve produced by the laser in every image. I implemented an edge detection method to extract it in every image; more details are given below.
3. The equation of the light plane in every frame of the video. I set up a system with two boards to help compute the light plane.

2) Compute the light plane

Fig 2: Two white boards with 8 fiducials (P1, P2, P3, P4 on each board)

I set up a system for 3D scanning with two white boards (Fig 2). When the laser scans over the two boards, there are two intersection lines. If we know the 3D coordinates of any three points on those two intersection lines, we can determine the equation of the light plane.

Fig 3: The image with the laser line scanning over the object

To compute the 3D coordinates of the points on the two intersection lines, we first need the equations of the two boards. So I attached four fiducials (P1, P2, P3, P4) to each board (Fig 2), composing a rectangle. The length (P1–P2) and width (P1–P3) are known: l and w respectively. With a known intrinsic matrix, the relation between an image pixel (c, r) and its corresponding 3D point (x, y, z) is

    [c r 1]^T = W [x/z y/z 1]^T,   W = [ fs 0 c_0 ; 0 fs r_0 ; 0 0 1 ]

that is,

    fs x/z + c_0 = c,   fs y/z + r_0 = r

so

    x = (z/fs)(c - c_0),   y = (z/fs)(r - r_0)

The 3D coordinates of the four fiducials are therefore

    Pi = ( (z_i/fs)(c_i - c_0), (z_i/fs)(r_i - r_0), z_i ),   i = 1, ..., 4

Because P1, P2, P3 and P4 compose a rectangle,

    P2 - P1 = P4 - P3

Writing this out component-wise and dividing by z_1, the three unknowns are the depth ratios X = (z_2/z_1, z_3/z_1, z_4/z_1). With v_i = ( (c_i - c_0)/fs, (r_i - r_0)/fs, 1 ) denoting the viewing ray of fiducial i, the three equations rearrange into

    A X = B,   A = [ v_2 | v_3 | -v_4 ],   B = v_1

So we can obtain the value of the vector X by least squares:

    X = (A^T A)^{-1} A^T B

Thus z_2 = X(1) z_1, z_3 = X(2) z_1, z_4 = X(3) z_1, and

    P1 = z_1 v_1,   P2 = X(1) z_1 v_2

Because the distance between P1 and P2 is known and z_1 is the only remaining unknown, we can obtain z_1 through a simple computation, and with it the 3D coordinates of all four fiducials. With three points on a plane we can get the equation of that plane; applying the same method to the other board gives the equations of both white boards. Once we have the equations of the two boards, we can choose two points on the intersection line on each board and easily compute their 3D coordinates. With these four points, we can compute the equation of the light plane.
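The rectangle-constraint solve above can be sketched in a few lines of Python with NumPy (a minimal sketch; the function name, the synthetic test rectangle, and the use of `lstsq` for the pseudo-inverse (A^T A)^{-1} A^T are my own, not from the report):

```python
import numpy as np

def fiducial_depths(pixels, W, length):
    """Depths z1..z4 of the four rectangle fiducials P1..P4 from their
    pixel coordinates, using the rectangle constraint P2 - P1 = P4 - P3
    and the known 3D distance |P2 - P1| = length."""
    # Viewing rays v_i = W^-1 [c_i, r_i, 1]^T, so that P_i = z_i * v_i.
    v = [np.linalg.solve(W, np.array([c, r, 1.0])) for c, r in pixels]
    # Divide z2*v2 - z1*v1 = z4*v4 - z3*v3 by z1 and solve for
    # X = (z2/z1, z3/z1, z4/z1):   [v2 | v3 | -v4] X = v1
    A = np.column_stack([v[1], v[2], -v[3]])
    X = np.linalg.lstsq(A, v[0], rcond=None)[0]
    # Fix the overall scale z1 from the known length |P2 - P1|.
    z1 = length / np.linalg.norm(X[0] * v[1] - v[0])
    return z1, z1 * X[0], z1 * X[1], z1 * X[2]
```

For a fronto-parallel rectangle the solver returns equal depths for all four fiducials, which matches the geometry.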

To easily choose points on the intersection lines, I left the top and bottom parts of the image blank, not blocked by the scanned object. When the laser line scans over the object, it always projects lines onto the areas close to the top and bottom of the picture (Fig 3), so in every frame we can easily find four such points to obtain the equation of the light plane.

Assume the equation of the vertical board is

    A_1 x + B_1 y + C_1 z + D_1 = 0

and the equation of the horizontal board is

    A_2 x + B_2 y + C_2 z + D_2 = 0

Choose two points (p1, p2) on the intersection line of the light plane and the vertical board, and two points (p3, p4) on the intersection line of the light plane and the horizontal board. The 3D point corresponding to an image pixel (c, r) is

    P = ( (z/fs)(c - c_0), (z/fs)(r - r_0), z )

Substituting into the equation of the vertical board,

    A_1 (z/fs)(c - c_0) + B_1 (z/fs)(r - r_0) + C_1 z + D_1 = 0

we can easily compute the only unknown z and obtain the 3D coordinates of the point. Repeating the computation for all four points gives their 3D coordinates; we can then find the plane that fits them best, which is the light plane we need. The same process is repeated for every frame.

3) Edge detection

To simplify the procedure, I scanned the object in the dark, with no ambient light.
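The plane computations in the light-plane section above — fitting a plane to recovered 3D points, and intersecting a pixel's viewing ray with a known plane to get its depth — can be sketched as follows (a minimal sketch in Python with NumPy; the helper names are mine, not from the report):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane A*x + B*y + C*z + D = 0 through 3D points."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the singular vector of the centered points
    # with the smallest singular value.
    normal = np.linalg.svd(pts - centroid)[2][-1]
    return np.append(normal, -normal @ centroid)   # (A, B, C, D)

def depth_on_plane(c, r, W, plane):
    """3D point for pixel (c, r) known to lie on the given plane."""
    A, B, C, D = plane
    P = np.linalg.solve(W, np.array([c, r, 1.0]))  # ray (x/z, y/z, 1)
    z = -D / (A * P[0] + B * P[1] + C)             # plane eq. in z only
    return z * P
```

The same `depth_on_plane` step serves both purposes in the method: recovering the four points on the board intersection lines, and recovering the object points on the laser curve once the light plane is known.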

I used a simple first-order filter to enhance the edges. Let I(c, r) be the image intensity at pixel (c, r). The first-order image gradient can be approximated by

    ∇I(c, r) = ( ∂I(c, r)/∂c, ∂I(c, r)/∂r )

where the partial derivatives are numerically approximated by

    ∂I(c, r)/∂c ≈ I(c + 1, r) - I(c, r)
    ∂I(c, r)/∂r ≈ I(c, r + 1) - I(c, r)

The gradient magnitude is defined as

    g(c, r) = sqrt( (∂I(c, r)/∂c)^2 + (∂I(c, r)/∂r)^2 )

This leads to the two first-order image derivative filters

    h_c = [-1 1],   h_r = [-1 1]^T

A full edge detection procedure includes edge enhancement, non-maximal suppression, and thresholding. In this project the dark ambient light makes the edges easy to detect, so I implemented only the edge enhancement (Fig 4).
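The edge enhancement step can be sketched with NumPy (a minimal sketch assuming a single-channel image already extracted from the red channel; the function name is mine):

```python
import numpy as np

def edge_enhance(image):
    """Gradient-magnitude edge enhancement with the first-order
    difference filters h = [-1, 1] along columns (c) and rows (r)."""
    I = np.asarray(image, dtype=float)
    dc = np.zeros_like(I)
    dr = np.zeros_like(I)
    dc[:, :-1] = I[:, 1:] - I[:, :-1]   # dI/dc = I(c+1, r) - I(c, r)
    dr[:-1, :] = I[1:, :] - I[:-1, :]   # dI/dr = I(c, r+1) - I(c, r)
    return np.sqrt(dc**2 + dr**2)
```

On a dark image crossed by a bright laser curve, the response peaks on the two sides of the curve, which is enough to localize it after thresholding.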

Fig 4: Edge detection. (a) Original image. (b) Red channel extracted and converted to gray scale. (c) After edge enhancement.

3. Approach

3D Scanning System

(a) Camera: the built-in web camera of a MacBook (Fig 5). Because the photos and videos from that camera are flipped horizontally, I flipped them back at the beginning of processing.

(b) Line laser with battery box (Fig 5): bought online for less than 30 dollars.

Fig 5: 3D scanning system. (a) System. (b) Line laser with battery box.

Calibration

I used the calibration toolbox from http://www.vision.caltech.edu/bouguetj/calib_doc/. I printed out a checkerboard pattern and attached it to a hardcover book, placed it in different positions, and took twelve pictures (Fig 6). Then I used the calibration toolbox to obtain the intrinsic matrix of the web camera; the calibration procedure is described on the website above. The intrinsic matrix I obtained after calibration is

    W = [ 962   0   540
            0  962   360
            0    0     1 ]
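As a sanity check, the calibrated matrix can be used to map pixels to normalized viewing rays (a minimal sketch; reading the principal point (540, 360) as the center of a 1080x720 image is my inference, not stated in the report):

```python
import numpy as np

# Intrinsic matrix from the calibration: f*s = 962 pixels, principal
# point (c0, r0) = (540, 360), presumably the center of a 1080x720 frame.
W = np.array([[962.0,   0.0, 540.0],
              [  0.0, 962.0, 360.0],
              [  0.0,   0.0,   1.0]])

# The principal point should map to the optical axis (0, 0, 1).
ray = np.linalg.solve(W, np.array([540.0, 360.0, 1.0]))
```

A pixel one focal length (962 px) to the right of the principal point maps to the 45-degree ray (1, 0, 1), which is a quick way to confirm the matrix was entered correctly.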

Fig 6: Twelve pictures for camera calibration

Scanning

During scanning, I scanned the object from different angles to cover as much of the object as possible.

Image processing

- Extract the red color from the images first, because the laser line is red.
- Apply edge enhancement.

3D reconstruction

- Determine the light plane and recover every point illuminated by the line laser.
- Repeat the step above for every image.

4. Experimental results

I scanned four objects to check the performance of my 3D scanner.

Object 1: Stitch

Fig 7: Image from camera

Fig 8: 3D images from different views

Object 2: Bobblehead Pedroia

Fig 9: Image from camera

Fig 10: 3D images from different views

Object 3: Figure of Puerto Rico

Fig 11: Image from camera

Fig 12: 3D images from different views

Object 4: Box

Fig 13: Image from camera

Fig 14: 3D images from different views

Accuracy

To check the accuracy I used two methods:

1. Geometry: compare the real length and width of the box with the readings from the recovered 3D model.

                 Real value    Value read from model
       Length    12.1 cm       12.4 cm
       Width     7.6 cm        7.1 cm

   Table 1: Comparison of the geometry parameters

   The accuracy is acceptable if extreme precision is not required.

2. Matching: project the 3D coordinates back onto the 2D image and see how well they match the image taken from the camera.
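The matching check is just the forward camera equation applied to the recovered points; a minimal sketch (the `project` helper is mine, not from the report):

```python
import numpy as np

def project(points, W):
    """Project 3D points (camera frame) back to pixel coordinates (c, r)."""
    pts = np.atleast_2d(np.asarray(points, dtype=float))
    hom = pts @ W.T               # rows are (z*c, z*r, z)
    return hom[:, :2] / hom[:, 2:]  # divide by depth z
```

Overlaying the projected pixels on the original frame makes reconstruction errors directly visible, as in the figure pairs below.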

Fig 15: Original image from camera

Fig 16: 2D image projected from 3D model

Fig 17: Original image from camera

Fig 18: 2D image projected from 3D model

Fig 19: Original image from camera

Fig 20: 2D image projected from 3D model

Fig 21: Original image from camera

Fig 22: 2D image projected from 3D model

5. Conclusion

With cheap equipment costing less than 30 dollars, I successfully implemented 3D scanning using a line laser as the light source. There is some noise that affects the accuracy of the reconstruction. I think it results from:

- The limited precision of the light plane.
- The limited precision of the coordinates of the fiducials on the two white boards.
- The laser line not being thin enough.

6. Future Work

The performance of the system could be improved with a brighter and thinner laser line. If the object were put on a rotating table with a known rotation speed, the 3D properties of the whole object could be recovered.

References

Course notes: http://mesh.brown.edu/byo3d/notes/byo3d.pdf
Calibration toolbox: http://www.vision.caltech.edu/bouguetj/calib_doc/