FPGA-based Rectification of Stereo Images
João Rodrigues 1, João Canas Ferreira 2
1 PhD Student, FEUP
2 Assistant Professor, DEEC, FEUP
nijoao@gmail.com, jcf@fe.up.pt

Abstract. In order to obtain depth perception in computer vision, pairs of stereo images must be processed. Carrying out this process in real time is computationally challenging, as it requires searching for matches between objects in both images. The search is significantly simplified if the images are first rectified, so that corresponding objects are horizontally aligned. Stereo image rectification comprises several steps with very different computational requirements, which are therefore not usually implemented in the same system: 2D searches for high-fidelity matches, precise matrix calculations, and fast pixel-coordinate transformations and interpolations. In this project, the complete process is implemented on a single Spartan-3 FPGA, using a MicroBlaze soft core for the slow but precise calculations and fast dedicated hardware for the real-time requirements. The implemented system successfully performs real-time rectification of the images from two video cameras, with a resolution of 640 x 480 pixels at 25 frames per second, and is easily configured for higher-resolution video. The results are quite satisfactory: the output images have a maximum vertical disparity of 2 pixels, showing that stereo image rectification can be achieved efficiently on an FPGA with few resources (64 KB of instruction memory).

1. Introduction

Depth information about the objects in an image is essential in applications such as video surveillance, military systems, cinematography, robotics, and even some medical fields. This information is commonly recovered through stereo vision, which uses two images and triangulation to determine the depth of the represented objects.
The triangulation between two cameras can be accomplished by finding, in the image of one camera, a point or object seen by the other. In most camera configurations, finding these correspondences requires a two-dimensional search, but it becomes a one-dimensional search if the images are rectified first. In rectified images, objects have the same vertical position in both images, so we only need to look along a horizontal line to find the same point in a pair of images. Even when the rectification is not perfect, it is still very useful: the more precise the rectification, the smaller the search area for correspondences can be. The rectification process is normally divided into two main phases: calculation of the required transformation matrices and application of those transformations to the images. Several authors, such as Richard Hartley [Hartley 1999] and Andrea Fusiello [Fusiello 2000], have proposed methods and provide good mathematical background for the
calculation of the necessary transformations. The transformations needed to rectify the images are represented as two 3 x 3 matrices, one for each camera. Some image rectification systems have been proposed and implemented before, such as the MSVM-III from Jia, Y. [Jia et al. 2004] or the auto-rectification system based on an IC3D smart camera from Xinting Gao [Gao et al. 2008]. However, these systems have specific restrictions that make them unusable in many situations: the MSVM-III requires the transformation matrix to be given, and the IC3D-based system performs rectification only with image translations, without rotation or scaling. Our main goal is to implement both phases in an FPGA-based system that outputs the rectified video streams of two cameras in real time. The next section describes the implementation and the methods chosen. Sections 3 and 4 present the results, along with brief conclusions about the proposed system.

2. Implementation

Every method and algorithm described was implemented on an FPGA, either on the MicroBlaze in C or directly in Verilog. The whole system therefore fits on a single board, making it more practical and portable. The video streams to be rectified are obtained from a stereo kit with two CMOS sensors with a resolution of 640 x 480 pixels. Several methods to compute the required image transformations are described in the literature [Hartley 1999] [Fusiello 2000]. We implemented the most general method, in which every image parameter except lens distortion is rectified. This method is based on epipolar geometry and is recommended for arbitrary camera placement, with non-coplanar objects in view relatively near the cameras and thus with high horizontal disparity. This requirement is important because it gives the images valuable spatial information, making it possible to accurately estimate the rotations needed for both images.
An example of a setting where this method should not be applied is satellite photography (e.g. Google Maps). The objects (e.g. houses) are practically coplanar, forming the plane of the Earth's surface. In such cases, where almost no 3D information exists, a simpler rectification method should be used, like the one described and implemented by Jia [Jia et al. 2004]. The chosen method consists of finding enough correspondences between the images and then iteratively minimizing the error of the estimated matrices with a least-mean-squares procedure. These steps do not have real-time requirements, but they do require high precision, and are therefore implemented in C on the MicroBlaze. The calculated matrices are then applied to both video streams in real time: an FPGA-based bilinear interpolation, implemented in Verilog, estimates and reconstructs the rectified video streams. The full system is shown in Figure 1: the auxiliary modules are represented in white, the supporting hardware modules in red, and the three main phases in blue and green. The three steps needed to implement a complete rectification process are described next.
Figure 1. Global description of the system

2.1. Correspondence Problem

The correspondence problem consists in finding points in one image and the corresponding points in the other image. Because of the limited instruction memory available on the FPGA (64 KB), an advanced correspondence method was impossible to implement. The problem was instead made simple by dividing it into several small weighting functions, whose actions are listed here in chronological order:

1. Every 5 x 5 pixel block is analysed with a new non-linear algorithm and the best candidates are chosen. The algorithm computes the block's quality as the larger of two values: the sum over the pixels darker than the central one, and the sum over the lighter pixels. Pixels with a luminosity similar to the central pixel's are ignored, to suppress the effect of noise. This algorithm proved to be very efficient at detecting blocks with interesting characteristics, such as corners.

2. The images are divided into zones (80 x 60 pixels), and the best candidates from each zone are selected. This division is important to obtain correspondence matches throughout the entire image, improving the precision of the transformation matrix.

3. For each candidate, a search for a match is performed in the other image. The search area is a rectangular block around the same coordinates and is iteratively reduced around the epipolar line. The similarity of candidate blocks between the two images is calculated from weighted simple functions:
- a linear comparator, sensitive to luminosity but susceptible to noise;
- a non-linear comparator, similar to the one used previously, sensitive to changes in image characteristics and insensitive to noise;
- the distance to the estimated epipolar line;
- as a tie-breaker, the block quality calculated previously.

4. The matches for each candidate are refined, using the same algorithms but with a block size of 15 x 15 pixels.

5.
The unicity of each candidate is calculated; it represents the confidence in the correspondence. This eliminates errors caused by patterns and repetitive textures, giving more importance to unique characteristics. It is calculated for each candidate as the difference between the confidence of the first match and that of the second, resulting in a high unicity when there is only one clear match.

6. For each zone of the reference image, only the two best (highest-unicity) correspondences are kept, producing the final correspondence pairs.
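Steps 1 and 5 above can be sketched in C, the language used on the MicroBlaze. The noise threshold, the use of absolute differences, and the score representation are assumptions, since the paper does not give exact values:

```c
/* Step 1 (sketch): the quality of a 5x5 block is the larger of two sums
 * of absolute differences to the central pixel -- one over the clearly
 * darker pixels, one over the clearly lighter ones. The noise threshold
 * is an assumed value. */
#define NOISE_THR 8

int block_quality(const unsigned char b[5][5])
{
    int center = b[2][2];
    int darker = 0, lighter = 0;
    for (int r = 0; r < 5; r++)
        for (int c = 0; c < 5; c++) {
            int d = b[r][c] - center;
            if (d <= -NOISE_THR)     darker  += -d; /* clearly darker  */
            else if (d >= NOISE_THR) lighter +=  d; /* clearly lighter */
        }
    return darker > lighter ? darker : lighter;
}

/* Step 5 (sketch): unicity as the margin between the best and
 * second-best match confidences; a large margin means one clear match. */
int unicity(const int *scores, int n)
{
    int best = 0, second = 0;       /* non-negative scores assumed */
    for (int i = 0; i < n; i++) {
        if (scores[i] > best)        { second = best; best = scores[i]; }
        else if (scores[i] > second) { second = scores[i]; }
    }
    return best - second;
}
```

A flat block scores zero quality, while a block with a strong vertical edge or corner accumulates a large sum on one side of the centre.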
2.2. Transformation Matrix

The transformation required to rectify the video is given as a 3 x 3 matrix for each camera. The coordinates of at least eight correspondence pairs are needed to estimate the epipolar geometry and the transformation matrices. The epipolar geometry is estimated using the well-known 8-point algorithm, and the result is used to refine the correspondences until they stop changing for a few iterations. The algorithm iteratively repeats from step 3 until a minimum number of unique, high-quality matching blocks is found. If the images carry little spatial information, as explained before, and the algorithm cannot find enough matching blocks, the system acquires new images from the cameras and restarts from step 1. In this project we want to know, for each coordinate of the final rectified images, the coordinate to interpolate from the captured (unrectified) images. For this we calculate a different matrix for each camera, using equation (1):

H = D [C T G R]^-1 N    (1)

The matrix H represents the final transformation applied to the video stream. N and D are the normalizing and denormalizing matrices; they map the coordinates into the [-1, 1] range, improving the precision of the 8-point algorithm, as described by Hartley [Hartley 1999]. R and G are the matrices of the same names described by Hartley [Hartley 1999]: they send the epipole of the image to the point at infinity on the horizontal axis, making the epipolar lines horizontal and parallel to each other. They are calculated using the 8-point algorithm and a C implementation of a homogeneous-equation solver based on Singular Value Decomposition (SVD). Performing the SVD on the coordinates of the correspondence pairs yields the fundamental matrix, which describes the epipolar geometry of the images. Another SVD, on this matrix and on its transpose, gives the coordinates of the epipoles of both images.
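On the MicroBlaze side, composing H from equation (1) comes down to a few 3 x 3 matrix products and one inversion, plus small fits such as the one used for the T matrix. A minimal sketch in C with plain doubles (the paper's actual numeric representation is not specified, and the closed-form least-squares fit below stands in for the SVD solve the text describes):

```c
#include <math.h>

/* Product of two 3x3 matrices, used when chaining C, T, G, R, N and D. */
void mat3_mul(const double a[3][3], const double b[3][3], double out[3][3])
{
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            out[r][c] = a[r][0]*b[0][c] + a[r][1]*b[1][c] + a[r][2]*b[2][c];
}

/* 3x3 inverse by the adjugate; returns 0 if the matrix is singular. */
int mat3_inv(const double m[3][3], double out[3][3])
{
    double det = m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
               - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
               + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
    if (fabs(det) < 1e-12) return 0;
    out[0][0] =  (m[1][1]*m[2][2] - m[1][2]*m[2][1]) / det;
    out[0][1] = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]) / det;
    out[0][2] =  (m[0][1]*m[1][2] - m[0][2]*m[1][1]) / det;
    out[1][0] = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]) / det;
    out[1][1] =  (m[0][0]*m[2][2] - m[0][2]*m[2][0]) / det;
    out[1][2] = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]) / det;
    out[2][0] =  (m[1][0]*m[2][1] - m[1][1]*m[2][0]) / det;
    out[2][1] = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]) / det;
    out[2][2] =  (m[0][0]*m[1][1] - m[0][1]*m[1][0]) / det;
    return 1;
}

/* Least-squares fit of Y*k + d = Y' over n correspondence pairs,
 * via the two-unknown normal equations (the paper solves the
 * equivalent homogeneous system by SVD). Returns 0 when degenerate. */
int fit_scale_shift(const double *y, const double *yp, int n,
                    double *k, double *d)
{
    double sy = 0, syp = 0, syy = 0, syyp = 0;
    for (int i = 0; i < n; i++) {
        sy += y[i]; syp += yp[i];
        syy += y[i]*y[i]; syyp += y[i]*yp[i];
    }
    double den = n*syy - sy*sy;
    if (den == 0) return 0;
    *k = (n*syyp - sy*syp) / den;
    *d = (syp - *k * sy) / n;
    return 1;
}
```

With these helpers, H is obtained by multiplying C, T, G and R, inverting the product, and wrapping the result in D and N.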
These matrices have the form:

R = [  cos(θ)  sin(θ)  0 ]        G = [   1    0   0 ]
    [ -sin(θ)  cos(θ)  0 ]            [   0    1   0 ]
    [    0       0     1 ]            [ -1/f   0   1 ]

where f is the distance of the epipole to the origin, and θ is the angle of the line passing through the epipole and the origin. T is a scaling and vertical-translation matrix that makes the epipolar lines coincide between the two images. To calculate it, we apply the previous matrices to the original coordinates and then find k and d such that Y·k + d = Y′, where Y and Y′ are the vertical coordinates of the candidate and of its match, k is the scaling factor, and d the vertical translation. This is the same as solving Y·k − Y′ + d = 0, a homogeneous system solvable by SVD. C is a matrix that maximizes the visibility of the common area between the images of both videos. It is very useful for stereoscopy, since only the common area can be
analyzed. This matrix is the same for both cameras and consists only of a scaling and a translation factor along both axes. The calculation of these matrices was simulated and proved to be very reliable. In the simulations, a list of random pairs of variable size was created and then gently distorted with a given random matrix. Using the described methods, that matrix was successfully recovered from the distorted pairs alone. To improve the realism of the simulations, the distorted coordinates were rounded to the nearest integer; this step alone introduced the errors in the recovered coordinates reported in Table 1. As can be seen, the rectification improves when the correspondences found are dispersed over the image and at various depths.

Table 1. Simulated precision using different numbers of pairs

Number of pairs | Maximum error (pixels): almost coplanar image | depth-rich image | dispersed points
5-8             | 2.0-2.9                                       | 1.5-2.6          | 0.5-0.7

Since the algorithms developed for the correspondence problem already address these issues, the method was implemented on the FPGA. To obtain the best possible precision without taking too much time, the more complex mathematical functions are performed in an auxiliary support module.

2.3. Rectification

Unlike the previous steps, the system must apply the calculated transformation matrices to both videos in real time, at 25 frames per second of 640 x 480 pixels. In this project a bilinear interpolation method was chosen to reconstruct the rectified images, but other methods can easily be used; this interpolation produces a much better-looking video than no interpolation at all. The implementation of this process on the FPGA consists of the following steps:
- For each coordinate in [0-639; 0-479], multiply it by the transformation matrix. The result is the homogeneous coordinate of the point to interpolate from the source images.
- Transform the homogeneous coordinates into Cartesian coordinates. This requires a division.
- Read the four pixels surrounding the calculated coordinates and perform the bilinear interpolation.
- Send the rectified images to the monitor, memory, or another output.

3. Results

The FPGA used for the implementation was a Xilinx XC3S1500, but the system is adaptable and easily ported to other FPGAs or to cameras with different characteristics. The complete process was successfully implemented, meeting the timing requirements thanks to the parallelism of the FPGA. The proposed correspondence algorithm was thoroughly tested on the development system. Its cameras had significant blur and lens distortion and, even so, the algorithm was capable of detecting enough correspondences with good quality: this
Figure 2. Images taken and displayed using the new method: (a) unrectified, (b) rectified

algorithm showed less than 15% error in the correspondence pairs, a figure that usually dropped much lower after a few iterations. A new way to evaluate pairs of images was used to analyse the results. It is based on a bi-colour image in which each image fills a different colour channel, red or blue. This makes it easy to compare the positions of objects in both images, and thus the precision of the rectification method; it also makes it possible to view the images with coloured 3D glasses and confirm the increase in quality after rectification. Although the cameras are mounted in a stereoscopy kit and visually aligned, the original images are clearly unrectified, as can be seen in Figure 2. The calculation of the matrix achieved the precision expected from the simulations, and the interpolation produced a visually lossless rectified image. When about 50 pairs of good points were found, the system rectified the videos with a maximum error of 1 pixel. In general there were always between 30 and 60 pairs, with 1 or 2 bad correspondences, and the resulting error was less than 2 pixels, as seen in Figure 2.

4. Conclusion

The implemented algorithms are capable of rectifying the videos in real time with good precision. A maximum error of 2 pixels is good enough to reduce the correspondence search area practically to a line, and is thus very useful for stereoscopy. We showed that a good, reliable stereo image rectification process can be implemented on an FPGA using only 64 KB of memory. This means, for example, that a cheap personal 3D camcorder could easily be built, saving the rectified 3D video in real time.

References

Fusiello, A. (2000). Epipolar rectification. fusiello/rectif_cvol/rectif_cvol.html.

Gao, X., Kleihorst, R., and Schueler, B. (2008). Implementation of auto-rectification and depth estimation of stereo video in a real-time smart camera system.
In Computer Vision and Pattern Recognition Workshops, pages 1-7, Anchorage, AK.

Hartley, R. I. (1999). Theory and practice of projective rectification. International Journal of Computer Vision, 35(2).

Jia, Y., Zhang, X., Li, M., and An, L. (2004). A miniature stereo vision machine (MSVM-III) for dense disparity mapping. In ICPR '04: Proceedings of the International Conference on Pattern Recognition.
Academic Content Standards Grade Eight Ohio Pre-Algebra 2008 STANDARDS Number, Number Sense and Operations Standard Number and Number Systems 1. Use scientific notation to express large numbers and small
More informationThe Image Deblurring Problem
page 1 Chapter 1 The Image Deblurring Problem You cannot depend on your eyes when your imagination is out of focus. Mark Twain When we use a camera, we want the recorded image to be a faithful representation
More informationC# Implementation of SLAM Using the Microsoft Kinect
C# Implementation of SLAM Using the Microsoft Kinect Richard Marron Advisor: Dr. Jason Janet 4/18/2012 Abstract A SLAM algorithm was developed in C# using the Microsoft Kinect and irobot Create. Important
More informationProduct specifications
Vehicle Driving Recorder Korean No. 1 Vehicle Driving Recorder ( Car Black Box) Product specifications ITB-100HD FULL HD (1920x1080) Resolution 1 1. Technical specifications Item specifications remarks
More informationHead-Coupled Perspective
Head-Coupled Perspective Introduction Head-Coupled Perspective (HCP) refers to a technique of rendering a scene that takes into account the position of the viewer relative to the display. As a viewer moves
More informationA Reliability Point and Kalman Filter-based Vehicle Tracking Technique
A Reliability Point and Kalman Filter-based Vehicle Tracing Technique Soo Siang Teoh and Thomas Bräunl Abstract This paper introduces a technique for tracing the movement of vehicles in consecutive video
More informationHigh-accuracy ultrasound target localization for hand-eye calibration between optical tracking systems and three-dimensional ultrasound
High-accuracy ultrasound target localization for hand-eye calibration between optical tracking systems and three-dimensional ultrasound Ralf Bruder 1, Florian Griese 2, Floris Ernst 1, Achim Schweikard
More informationIntelligent Flexible Automation
Intelligent Flexible Automation David Peters Chief Executive Officer Universal Robotics February 20-22, 2013 Orlando World Marriott Center Orlando, Florida USA Trends in AI and Computing Power Convergence
More informationPredict the Popularity of YouTube Videos Using Early View Data
000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050
More informationBildverarbeitung und Mustererkennung Image Processing and Pattern Recognition
Bildverarbeitung und Mustererkennung Image Processing and Pattern Recognition 1. Image Pre-Processing - Pixel Brightness Transformation - Geometric Transformation - Image Denoising 1 1. Image Pre-Processing
More informationComparison of different image compression formats. ECE 533 Project Report Paula Aguilera
Comparison of different image compression formats ECE 533 Project Report Paula Aguilera Introduction: Images are very important documents nowadays; to work with them in some applications they need to be
More informationFiles Used in this Tutorial
Generate Point Clouds Tutorial This tutorial shows how to generate point clouds from IKONOS satellite stereo imagery. You will view the point clouds in the ENVI LiDAR Viewer. The estimated time to complete
More information2. Spin Chemistry and the Vector Model
2. Spin Chemistry and the Vector Model The story of magnetic resonance spectroscopy and intersystem crossing is essentially a choreography of the twisting motion which causes reorientation or rephasing
More information3D MODEL DRIVEN DISTANT ASSEMBLY
3D MODEL DRIVEN DISTANT ASSEMBLY Final report Bachelor Degree Project in Automation Spring term 2012 Carlos Gil Camacho Juan Cana Quijada Supervisor: Abdullah Mohammed Examiner: Lihui Wang 1 Executive
More informationAutomotive Applications of 3D Laser Scanning Introduction
Automotive Applications of 3D Laser Scanning Kyle Johnston, Ph.D., Metron Systems, Inc. 34935 SE Douglas Street, Suite 110, Snoqualmie, WA 98065 425-396-5577, www.metronsys.com 2002 Metron Systems, Inc
More informationThe Visualization Simulation of Remote-Sensing Satellite System
The Visualization Simulation of Remote-Sensing Satellite System Deng Fei, Chu YanLai, Zhang Peng, Feng Chen, Liang JingYong School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Road, Wuhan 430079,
More informationBasler. Line Scan Cameras
Basler Line Scan Cameras High-quality line scan technology meets a cost-effective GigE interface Real color support in a compact housing size Shading correction compensates for difficult lighting conditions
More informationRodenstock Photo Optics
Rogonar Rogonar-S Rodagon Apo-Rodagon N Rodagon-WA Apo-Rodagon-D Accessories: Modular-Focus Lenses for Enlarging, CCD Photos and Video To reproduce analog photographs as pictures on paper requires two
More informationUSING THE XBOX KINECT TO DETECT FEATURES OF THE FLOOR SURFACE
USING THE XBOX KINECT TO DETECT FEATURES OF THE FLOOR SURFACE By STEPHANIE COCKRELL Submitted in partial fulfillment of the requirements For the degree of Master of Science Thesis Advisor: Gregory Lee
More information1 Review of Least Squares Solutions to Overdetermined Systems
cs4: introduction to numerical analysis /9/0 Lecture 7: Rectangular Systems and Numerical Integration Instructor: Professor Amos Ron Scribes: Mark Cowlishaw, Nathanael Fillmore Review of Least Squares
More information3D Interactive Information Visualization: Guidelines from experience and analysis of applications
3D Interactive Information Visualization: Guidelines from experience and analysis of applications Richard Brath Visible Decisions Inc., 200 Front St. W. #2203, Toronto, Canada, rbrath@vdi.com 1. EXPERT
More informationScanners and How to Use Them
Written by Jonathan Sachs Copyright 1996-1999 Digital Light & Color Introduction A scanner is a device that converts images to a digital file you can use with your computer. There are many different types
More informationLinear Programming. Solving LP Models Using MS Excel, 18
SUPPLEMENT TO CHAPTER SIX Linear Programming SUPPLEMENT OUTLINE Introduction, 2 Linear Programming Models, 2 Model Formulation, 4 Graphical Linear Programming, 5 Outline of Graphical Procedure, 5 Plotting
More informationDigitization of Old Maps Using Deskan Express 5.0
Dražen Tutić *, Miljenko Lapaine ** Digitization of Old Maps Using Deskan Express 5.0 Keywords: digitization; scanner; scanning; old maps; Deskan Express 5.0. Summary The Faculty of Geodesy, University
More informationVRSPATIAL: DESIGNING SPATIAL MECHANISMS USING VIRTUAL REALITY
Proceedings of DETC 02 ASME 2002 Design Technical Conferences and Computers and Information in Conference Montreal, Canada, September 29-October 2, 2002 DETC2002/ MECH-34377 VRSPATIAL: DESIGNING SPATIAL
More informationTutorial for Tracker and Supporting Software By David Chandler
Tutorial for Tracker and Supporting Software By David Chandler I use a number of free, open source programs to do video analysis. 1. Avidemux, to exerpt the video clip, read the video properties, and save
More informationResolution Enhancement of Photogrammetric Digital Images
DICTA2002: Digital Image Computing Techniques and Applications, 21--22 January 2002, Melbourne, Australia 1 Resolution Enhancement of Photogrammetric Digital Images John G. FRYER and Gabriele SCARMANA
More informationCELLULAR AUTOMATA AND APPLICATIONS. 1. Introduction. This paper is a study of cellular automata as computational programs
CELLULAR AUTOMATA AND APPLICATIONS GAVIN ANDREWS 1. Introduction This paper is a study of cellular automata as computational programs and their remarkable ability to create complex behavior from simple
More informationA Computer Vision System on a Chip: a case study from the automotive domain
A Computer Vision System on a Chip: a case study from the automotive domain Gideon P. Stein Elchanan Rushinek Gaby Hayun Amnon Shashua Mobileye Vision Technologies Ltd. Hebrew University Jerusalem, Israel
More informationVision based Vehicle Tracking using a high angle camera
Vision based Vehicle Tracking using a high angle camera Raúl Ignacio Ramos García Dule Shu gramos@clemson.edu dshu@clemson.edu Abstract A vehicle tracking and grouping algorithm is presented in this work
More informationSMARTSCAN hardware test results for smart optoelectronic image correction for pushbroom cameras
SMARTSCAN hardware test results for smart optoelectronic image correction for pushbroom cameras Valerij Tchernykh a, Sergei Dyblenko a, Klaus Janschek a, Wolfgang Göhler b, Bernd Harnisch c a Technsiche
More informationStudy of the Human Eye Working Principle: An impressive high angular resolution system with simple array detectors
Study of the Human Eye Working Principle: An impressive high angular resolution system with simple array detectors Diego Betancourt and Carlos del Río Antenna Group, Public University of Navarra, Campus
More informationSolve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem.
Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem. Solve word problems that call for addition of three whole numbers
More informationVideo stabilization for high resolution images reconstruction
Advanced Project S9 Video stabilization for high resolution images reconstruction HIMMICH Youssef, KEROUANTON Thomas, PATIES Rémi, VILCHES José. Abstract Super-resolution reconstruction produces one or
More informationjorge s. marques image processing
image processing images images: what are they? what is shown in this image? What is this? what is an image images describe the evolution of physical variables (intensity, color, reflectance, condutivity)
More informationVideo Camera Image Quality in Physical Electronic Security Systems
Video Camera Image Quality in Physical Electronic Security Systems Video Camera Image Quality in Physical Electronic Security Systems In the second decade of the 21st century, annual revenue for the global
More informationAnalecta Vol. 8, No. 2 ISSN 2064-7964
EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,
More informationMouse Control using a Web Camera based on Colour Detection
Mouse Control using a Web Camera based on Colour Detection Abhik Banerjee 1, Abhirup Ghosh 2, Koustuvmoni Bharadwaj 3, Hemanta Saikia 4 1, 2, 3, 4 Department of Electronics & Communication Engineering,
More informationSelf-Calibrated Structured Light 3D Scanner Using Color Edge Pattern
Self-Calibrated Structured Light 3D Scanner Using Color Edge Pattern Samuel Kosolapov Department of Electrical Engineering Braude Academic College of Engineering Karmiel 21982, Israel e-mail: ksamuel@braude.ac.il
More information