3D Scanner using Line Laser


1. Introduction

Di Lu
Electrical, Computer, and Systems Engineering
Rensselaer Polytechnic Institute

The goal of 3D reconstruction is to recover the 3D properties of a geometric entity from its 2D images. We implemented the passive stereo method in one of the previous projects. Passive stereo needs images from two cameras. Active stereo vision, in contrast, reconstructs a 3D description of a scene from images observed by one camera and one light projector. In this project, I used a line laser as the light projector to implement the active stereo method and recover 3D objects. Reconstruction results for four different objects, along with the reconstruction accuracy, are shown.

2. Theory

1). Active Stereo Vision

Active stereo vision is a technique that actively uses a light source, such as a laser or structured light, to simplify the stereo matching problem. The equation relating a pixel (c, r) in an image to its corresponding 3D coordinates (x, y, z) is:

lambda [c, r, 1]^T = W [R T] [x, y, z, 1]^T

Let (x, y, z) be in the camera frame; then R = I and T = 0, and the equation simplifies to

lambda [c, r, 1]^T = W [I 0] [x, y, z, 1]^T = W [x, y, z]^T

where W is the intrinsic matrix of the camera. Then

W^{-1} [c, r, 1]^T = (1/lambda) [x, y, z]^T = [x/lambda, y/lambda, z/lambda]^T = [x/z, y/z, 1]^T

since lambda = z. So with one camera we can only compute x/z and y/z. It is impossible to get the depth from one image without additional information.

Fig 1: Triangulation

However, when we project a laser line onto an object, it results in a curve in the picture. We can then compute the 3D coordinates of the surface points illuminated by the laser line, provided we know the equation of the light plane. Assume the laser line projected onto the surface of the object lies on the light plane:

Ax + By + Cz + D = 0

Let an image point on the line projected by the laser be (x_1, y_1), with pixel (c, r) (Fig 1). We have:

W^{-1} [c, r, 1]^T = [x/z, y/z, 1]^T = P

where P is a vector and P_i is its ith component. Then

x = z P_1,  y = z P_2

Substituting into the equation of the light plane:

A z P_1 + B z P_2 + C z + D = 0

We can get the depth z from this equation, and thus obtain the 3D coordinates

(x, y, z) = (z P_1, z P_2, z P_3)

(note P_3 = 1). To obtain the 3D properties of the whole object, we just need to sweep the laser line over the object and shoot a video at the same time, then apply the method above to every frame of the video to reconstruct the 3D coordinates of the surface of the object.

In order to implement this method, we need to know:

1. The intrinsic parameters of the camera. I used a calibration toolbox to obtain the intrinsic parameters of my camera; the details are in the Approach section.
2. The location of the curve produced by the laser in every image. I implemented an edge detection method to extract it from every image; more details follow below.
3. The equation of the light plane in every frame of the video. I set up a system with two boards to help compute the light plane.

2). Compute the light plane

Fig 2: Two white boards with 8 fiducials (P1, P2, P3, P4 on each board)

I set up a system for 3D scanning with two white boards (Fig 2). There are two intersection lines when the laser scans over the two boards. If we know the 3D coordinates of any three points from those two intersection lines, we can determine the equation of the light plane.

Fig 3: The image with the laser line scanning over the object

To compute the 3D coordinates of the points on the two intersection

lines, we need to know the equations of the two boards first. So I attached four fiducials (P1, P2, P3, P4) to each board (Fig 2), composing a rectangle. The length (P1-P2) and width (P1-P3) are known: l and w respectively. With a known intrinsic matrix

W = [[fs, 0, c_0], [0, fs, r_0], [0, 0, 1]]

the relation between an image pixel (c, r) and its corresponding 3D point (x, y, z) is:

[c, r, 1]^T = W [x/z, y/z, 1]^T

which gives

c = fs x/z + c_0
r = fs y/z + r_0

and therefore

x = (z/fs)(c - c_0)
y = (z/fs)(r - r_0)

So the 3D coordinates of the first three fiducials are:

P1 = ((z_1/fs)(c_1 - c_0), (z_1/fs)(r_1 - r_0), z_1)
P2 = ((z_2/fs)(c_2 - c_0), (z_2/fs)(r_2 - r_0), z_2)
P3 = ((z_3/fs)(c_3 - c_0), (z_3/fs)(r_3 - r_0), z_3)

6 (π π! )! (π π! )! π4 = Because P, P2, P3 and P4 compose a rectangle, π2 π = π4 π3 (π! π! ) (π π! )! (π! π! ) (π π! )! (π! π! ) (π! π! ) (π! π! ) (π! π! ) = (π π! ) (π! π! )! (π! π! ) (π π! )! = (π! π! ) (π! π! ) (π! π! ) (π! π! ) Rearrange the three equations (π! π! ) (π! π! ) (π! π! ) (π! π! ) (π! π! ) (π! π! ) = (π! π! ) (π! π! ) That is π΄π = π΅ π΄= (π! π! ) (π! π! ) (π! π! ) (π! π! ) (π! π! ) (π! π! ) 6

7 π= π΅= (π! π! ) (π! π! ) So we can obtain the value of vector X π = π΄! π΄!!! π΄π΅ Thus, = π ; = π 2 ; = π (3) π = (π π! )! (π π! )! π2 = (π π! )! (π π! )! = π (π! π! ) π (π! π! ) π = π 2 (π! π! ) π 2 (π! π! ) π 2 Because the distance between P and P2 are known, and is the only unknown, so we can obtain through simple computation. Then we can have the 3D coordinates of the all four fiducials. With three points on a plane, we can get the equation for that plane. Apply the same method to the other board. Then we can obtain the equations for the two white boards. When we have the equations of the two boards, we can choose two points from the intersection line on each board and easily compute their 3D coordinates. With these four points, we can compute the equation for the light plane. 7

To easily choose points on the intersection lines, I left the top and bottom parts of the image blank, not blocked by the scanned object. When the laser line scans over the object, it always projects line segments onto the areas near the top and bottom of the picture (Fig 3), so we can find such four points in every frame to obtain the equation of the light plane.

Assume the equation of the vertical board is

A_1 x + B_1 y + C_1 z + D_1 = 0

and the equation of the horizontal board is

A_2 x + B_2 y + C_2 z + D_2 = 0

Choose two points (p1, p2) on the intersection line of the light plane and the vertical board, and two points (p3, p4) on the intersection line of the light plane and the horizontal board. The 3D point corresponding to an image pixel (c, r) is

P = ((z/fs)(c - c_0), (z/fs)(r - r_0), z)

Substituting into the equation of the vertical board:

A_1 (z/fs)(c - c_0) + B_1 (z/fs)(r - r_0) + C_1 z + D_1 = 0

We can easily compute the only unknown z, and thus obtain the 3D coordinates of the point. Repeating the computation for all four points gives their 3D coordinates. Then we find the plane that fits them best, which is the light plane we need. The same process is repeated for every frame.

3). Edge detection

To simplify the procedure, I scanned the object in the dark, with no ambient light.

I used a simple first-order filter to enhance the edges. Let I(c, r) be the image intensity at pixel (c, r). The first-order image gradient can be approximated by

grad I(c, r) = [dI/dc, dI/dr]^T

where the partial derivatives are numerically approximated by

dI/dc = I(c + 1, r) - I(c, r)
dI/dr = I(c, r + 1) - I(c, r)

The gradient magnitude is defined as

g(c, r) = sqrt( (dI/dc)^2 + (dI/dr)^2 )

This leads to two first-order image derivative filters:

h_c = [-1, 1],  h_r = [-1, 1]^T

The whole edge detection procedure includes edge enhancement, non-maximal suppression and thresholding. In this project, the dark environment makes the edges easy to detect, so I implemented only the edge enhancement (Fig 4).
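The edge-enhancement step amounts to applying the two forward-difference filters and taking the magnitude. A minimal sketch, assuming the grayscale image is stored as a list of rows indexed as img[r][c] (the function and variable names are illustrative):

```python
import math

def gradient_magnitude(img):
    """Gradient magnitude g(c, r) from the forward differences
    dI/dc = I(c+1, r) - I(c, r) and dI/dr = I(c, r+1) - I(c, r)."""
    H, W = len(img), len(img[0])
    out = [[0.0] * W for _ in range(H)]
    for r in range(H - 1):
        for c in range(W - 1):
            dc = img[r][c + 1] - img[r][c]   # filter h_c = [-1, 1] along a row
            dr = img[r + 1][c] - img[r][c]   # filter h_r = [-1, 1]^T down a column
            out[r][c] = math.sqrt(dc * dc + dr * dr)
    return out
```

On the red-channel image this response peaks along the laser curve, which is enough to localize the line without the non-maximal suppression and thresholding steps.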


Fig 4: Edge detection. (a) Original image. (b) Red color extracted and converted to grayscale. (c) After edge enhancement.

3. Approach

3D Scanning System

(a) The camera is the built-in web camera of a MacBook (Fig 5). Because the photos and videos from that camera are flipped horizontally, I flipped them back at the beginning of processing.
(b) The line laser with battery box (Fig 5) was bought online for less than 30 dollars.

(a) System (b) Line laser with battery box
Fig 5: 3D scanning system

Calibration

I used a publicly available camera calibration toolbox. I printed out a checkerboard pattern, attached it to a hardcover book, placed it in different positions, and took twelve pictures (Fig 6). Then I used the toolbox to obtain the intrinsic matrix of the web camera; the calibration procedure is described on the toolbox's website. The intrinsic matrix I obtained after calibration has a focal length fs of 962 pixels.

Fig 6: Twelve pictures for camera calibration

Scanning

During scanning, I scanned the object from different angles to cover it as much as possible.

Image processing

I extracted the red color from the images first, because the laser line is red, and then implemented the edge enhancement.

3D reconstruction

Determine the light plane and recover every point illuminated by the line laser; repeat this step for every image.

4. Experimental results

I scanned four objects to check the performance of my 3D scanner.

Object 1: Stitch

Fig 7: Image from camera

Fig 8: 3D images from different views

Object 2: Bobblehead Pedroia

Fig 9: Image from camera

Fig 10: 3D images from different views

Object 3: Figure of Puerto Rico

Fig 11: Image from camera

Fig 12: 3D images from different views

Object 4: Box

Fig 13: Image from camera

Fig 14: 3D images from different views

Accuracy

To check the accuracy I used two methods:

1. Geometry: compare the real length and width of the box with the readings from the recovered 3D model.

         Real value   Value read from image
Length   12.1 cm      12.4 cm
Width    7.6 cm       7.1 cm

Table 1: Comparison of the geometry parameters

The accuracy is acceptable if we do not require extreme precision.

2. Matching: project the 3D coordinates back to the 2D image and see how well it matches the image taken from the camera.
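The matching check can be sketched as a round trip under the pinhole model from the Theory section: backproject a pixel onto the recovered light plane, then reproject the 3D point and compare with the original pixel. All parameter values below (fs, c0, r0, and the plane coefficients) are illustrative, not the report's calibration.

```python
def backproject_on_light_plane(c, r, fs, c0, r0, plane):
    """3D point for a pixel (c, r) lying on the light plane A*x+B*y+C*z+D=0."""
    A, B, C, D = plane
    P1 = (c - c0) / fs                 # x/z
    P2 = (r - r0) / fs                 # y/z
    # Substituting x = z*P1, y = z*P2 into the plane equation and solving for z:
    z = -D / (A * P1 + B * P2 + C)
    return (z * P1, z * P2, z)

def project(point, fs, c0, r0):
    """Project a 3D point in the camera frame back to its pixel (c, r)."""
    x, y, z = point
    return (fs * x / z + c0, fs * y / z + r0)

# Round trip: a pixel reconstructed on the plane z = 500 reprojects onto itself.
p = backproject_on_light_plane(448.0, 240.0, 512.0, 320.0, 240.0, (0.0, 0.0, 1.0, -500.0))
print(project(p, 512.0, 320.0, 240.0))   # -> (448.0, 240.0)
```

Any residual between the reprojected and original pixels comes from the estimated light plane and the edge localization, which is exactly what the matching figures below visualize.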

Fig 15: Original image from camera

Fig 16: 2D image projected from 3D model

Fig 17: Original image from camera

Fig 18: 2D image projected from 3D model

Fig 19: Original image from camera

Fig 20: 2D image projected from 3D model

Fig 21: Original image from camera

Fig 22: 2D image projected from 3D model

5. Conclusion

With cheap equipment, costing less than 30 dollars, I successfully implemented 3D scanning using a line laser as the light source. There is some noise that affects the accuracy of the reconstruction. I think it results from:

- The precision of the light plane
- The precision of the coordinates of the fiducials on the two white boards
- The laser line is not thin enough

6. Future Work

The performance of the system can be improved with a brighter and thinner laser line. If the object is put on a rotating table with a known rotation speed, then the 3D properties of the whole object can be recovered.

References

1. Course notes
2. Camera calibration toolbox


CCD or CIS: The Technology Decision

White Paper This White Paper will explain the two scanning technologies and compare them with respect to quality, usability, price and environmental aspects. The two technologies used in all document scanners

ALEXA Studio Electronic and Mirror Shutter

ALEXA Studio Electronic and Mirror Shutter White Paper, January 19, 2012 Introduction... 2 Mirror Shutter Basics... 3 VIEW and GATE Phases... 3 The Open Shutter Angle... 3 Advantages of Optical Viewfinders...

(a) We have x = 3 + 2t, y = 2 t, z = 6 so solving for t we get the symmetric equations. x 3 2. = 2 y, z = 6. t 2 2t + 1 = 0,

Name: Solutions to Practice Final. Consider the line r(t) = 3 + t, t, 6. (a) Find symmetric equations for this line. (b) Find the point where the first line r(t) intersects the surface z = x + y. (a) We

CONFOCAL LASER SCANNING MICROSCOPY TUTORIAL

CONFOCAL LASER SCANNING MICROSCOPY TUTORIAL Robert Bagnell 2006 This tutorial covers the following CLSM topics: 1) What is the optical principal behind CLSM? 2) What is the spatial resolution in X, Y,

A technical overview of the Fuel3D system.

A technical overview of the Fuel3D system. Contents Introduction 3 How does Fuel3D actually work? 4 Photometric imaging for high-resolution surface detail 4 Optical localization to track movement during

Introduction Epipolar Geometry Calibration Methods Further Readings. Stereo Camera Calibration

Stereo Camera Calibration Stereo Camera Calibration Stereo Camera Calibration Stereo Camera Calibration 12.10.2004 Overview Introduction Summary / Motivation Depth Perception Ambiguity of Correspondence

An Energy-Based Vehicle Tracking System using Principal Component Analysis and Unsupervised ART Network

Proceedings of the 8th WSEAS Int. Conf. on ARTIFICIAL INTELLIGENCE, KNOWLEDGE ENGINEERING & DATA BASES (AIKED '9) ISSN: 179-519 435 ISBN: 978-96-474-51-2 An Energy-Based Vehicle Tracking System using Principal

THREE DIMENSIONAL GEOMETRY

Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

Equations Involving Lines and Planes Standard equations for lines in space

Equations Involving Lines and Planes In this section we will collect various important formulas regarding equations of lines and planes in three dimensional space Reminder regarding notation: any quantity

WHITE PAPER DECEMBER 2010 CREATING QUALITY BAR CODES FOR YOUR MOBILE APPLICATION

DECEMBER 2010 CREATING QUALITY BAR CODES FOR YOUR MOBILE APPLICATION TABLE OF CONTENTS 1 Introduction...3 2 Printed bar codes vs. mobile bar codes...3 3 What can go wrong?...5 3.1 Bar code Quiet Zones...5

Copyright 2011 Casa Software Ltd. www.casaxps.com. Centre of Mass

Centre of Mass A central theme in mathematical modelling is that of reducing complex problems to simpler, and hopefully, equivalent problems for which mathematical analysis is possible. The concept of

Exercise #8. GIMP - editing, retouching, paths and their application

dr inΕΌ. Jacek Jarnicki, dr inΕΌ. Marek Woda Institute of Computer Engineering, Control and Robotics Wroclaw University of Technology {jacek.jarnicki, marek.woda}@pwr.wroc.pl Exercise #8 GIMP - editing,