Localization of mobile robots using machine vision and QR codes
INFOTEH-JAHORINA Vol. 15, March 2016.

Localization of mobile robots using machine vision and QR codes

Svetislav Ćirić, Nemanja Janković, Nenad Jovičić
Department of Electronics, School of Electrical Engineering, University of Belgrade, Belgrade, Serbia

Abstract: Due to the growing number of applications of mobile robots, there is an increasing need to support their autonomy with good localization. As the camera takes its place as one of the most important sensors in robotics, a great number of image-processing algorithms are developed every day. This paper focuses on detecting multiple robots on a field using three cameras placed at pre-defined positions. Detection and identification are performed using QR codes via the OpenCV function library. The designed localization provides a robust algorithm for detecting and following mobile robots, which enhances their autonomy with the ability to decide dynamically on a movement strategy, and paves the way for future applications combining QR codes and machine vision.

Keywords: localization; robots; machine vision; QR codes; OpenCV

I. INTRODUCTION

Today, robotic systems have many applications in industry, medicine, the military and other fields, and they are finding more and more uses in everyday life. With the development of technology there is a growing need for autonomy of robotic systems, especially mobile robots. Mobile robots are capable of moving through their environment and are not fixed to one physical location. They find many applications in commerce and industry; hospitals and warehouses, for example, use them for transferring materials and goods. They are being developed at a great number of universities around the world, almost every one of which has one or more laboratories doing research in this field. Autonomy of movement is provided by knowledge of the environment itself and by correct localization.
Localization of mobile robots can be carried out in various ways, for example using encoders, machine vision, lasers or GPS [1]. In this paper, localization of a mobile robot is realized using machine vision and QR codes. The primary idea is to detect a predefined shape in the image and, based on it, to calculate the distance of the object from a camera, which yields the position of the object on the field. As the use of QR codes grows every day, many algorithms for their detection have been developed. Typical QR code detection algorithms need the QR code to occupy a significant portion of the image. Because of that, an algorithm is required that can recognize a QR code at greater distances from a camera. As more than one camera can be used for detection, an algorithm is also needed to decide on the position of the robot when it is detected by multiple cameras.

In the second section of this paper, the localization setup and the hardware used are presented. An overview of the algorithms used and developed is given in section three. Section four provides information about the QR codes that were used and gives an insight into the software implementation. The way the localization was tested, and the results, are presented in the fifth section. The localization described in this paper was developed for Eurobot, an international robotics competition.

II. SETUP AND HARDWARE USED

The field on which the robots move is 3 m long and 2 m wide. The outline of the field and the setup of the system are schematically presented in Fig. 1.

Fig. 1. Sensor positions along the edges of the field

Three cameras placed on the edges of the field detect QR codes placed on beacons on top of the robots. The acquired images are transferred to a computer that processes them and decides on the position. Three Logitech HD Webcam C270 cameras were used for this localization.
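As a quick check of the geometry above: the 3 m by 2 m field fixes the worst-case camera-to-beacon distance, the field diagonal, which the paper later treats as the maximum range at which a FIP must still be detectable. A minimal sketch, plain arithmetic rather than code from the paper:

```python
# Worst-case detection distance on the 3 m x 2 m field: the diagonal.
# Plain arithmetic illustrating the setup; not code from the paper.
from math import hypot

FIELD_LENGTH_M = 3.0
FIELD_WIDTH_M = 2.0

diagonal_m = hypot(FIELD_LENGTH_M, FIELD_WIDTH_M)
print(round(diagonal_m, 2))  # 3.61, the distance quoted in Section IV
```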
The specifications of these cameras that are most important for this purpose are [2]:

Focal length: 4 mm
Resolution: 1280 x 720 pixels
Field of view (FOV): 60°
Pixel size: 2.8 µm

As the FOV is 60°, the cameras need to be directed so that they cover the whole field, as shown in Fig. 2.

Fig. 2. Direction of the cameras

The coordinate system of each camera is placed so that the camera lens represents the point (0, 0), the x axis points in the direction the camera faces, and the y axis points to the right of the camera, as shown in Fig. 4. The coordinate systems of the cameras and their relation to the field coordinate system are shown in Fig. 5.

Fig. 4. Camera coordinate system

III. OVERVIEW OF THE USED ALGORITHMS

A. Algorithm for QR code detection

A QR code (Quick Response code) is a type of two-dimensional matrix barcode [3]. A barcode is an optical, machine-readable label that contains information about the object to which it is attached. The QR code standard defines four coding modes for storing data; the amount of data that can be stored depends on the mode used. For detecting the position of a QR code in an image, a special insignia called the FIP (Finder Pattern), shown in Fig. 3, is used. There are three FIPs on a QR code, located in its corners.

Fig. 3. Finder pattern (FIP)

The algorithm used for detection of the QR codes is based on the face detection algorithm with cascade classifiers over Haar features, developed by Paul Viola and Michael Jones [4]. In this approach a cascade function is trained by machine learning on a large number of positive and negative images; the trained cascade function is later used for detection in other images. Training of the Haar cascade classifier was done according to [5].

B. Algorithm for detecting position

The algorithm for detecting position is based on calculating the distance of the detected object from the lens of the camera and transforming the acquired data into the unique coordinate system of Fig. 5.
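The position computation of Section III-B, distance from the lens relation d_o = f·h_o·h_sp / (h_ip·h_s) followed by a transform into the field frame, can be sketched as follows. The camera constants (4 mm focal length, 720-pixel sensor height) come from the specifications in Section II; the 2.8 µm pixel pitch used to derive the physical sensor height, the 1000 mm half-width of the field, and every function name are my assumptions for illustration:

```python
# Sketch of the position computation from Section III-B.
# Camera constants follow Section II; the pixel pitch, the field
# half-width and all names here are illustrative assumptions.

FOCAL_MM = 4.0                 # focal length f
SENSOR_H_PX = 720              # sensor height in pixels, h_sp
PIXEL_PITCH_MM = 0.0028        # assumed 2.8 um pixel size
SENSOR_H_MM = SENSOR_H_PX * PIXEL_PITCH_MM  # sensor height h_s

def distance_mm(object_h_mm, object_h_px):
    """Biconvex-lens relation d_o = f * h_o * h_sp / (h_ip * h_s)."""
    return FOCAL_MM * object_h_mm * SENSOR_H_PX / (object_h_px * SENSOR_H_MM)

def camera1_to_field(x_cam, y_cam, half_width_mm=1000.0):
    """First camera only: x = x', y = y' + a, with a = half the field width."""
    return (x_cam, y_cam + half_width_mm)

# A 50 mm FIP imaged 20 px tall would sit about 3.57 m from the lens:
d = distance_mm(50.0, 20.0)
```

For the other two cameras the paper uses the full relation x = ax' + by' + c, y = bx' − ay' + d, with coefficients determined from two known coordinate pairs per camera.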
Fig. 5. Coordinate system orientations

The equation used to determine the distance of the object from the lens is derived from the equation of the biconvex lens shown in Fig. 6:

d_o = (f · h_o · h_sp) / (h_ip · h_s)

Fig. 6. Biconvex lens

where d_o is the distance of the object from the lens, f is the focal length, h_o is the height of the object, h_sp is the height of the sensor in pixels, h_ip is the height of the object in pixels and h_s is the height of the sensor. This determines the x coordinate of the object. The y coordinate is calculated similarly: in that case the distance of the object from the lens is known, the unknown is the y coordinate itself, and the height of the object in pixels is replaced by the distance of the middle of the object from the center of the image.

Now it is necessary to transform the coordinates from the cameras' coordinate systems into the coordinate system of the field. The first camera's coordinate system can be transformed into the field coordinate system with the following equations:

x = x'
y = y' + a

where the coefficient a is equal to half of the field's width. For the other two cameras this is not the case; the relation between the coordinate systems can be represented by the following equations [7]:

x = a·x' + b·y' + c
y = b·x' − a·y' + d

Starting from these equations, the unknown coefficients for each of the three cameras are easily determined from two known pairs of coordinates; the resulting offsets c and d are set by the cameras' positions on the field edges.

C. Decision algorithm

As the possibility of false detection exists, and with it multiple detections of an object by one or more cameras, a decision algorithm is needed. The decision algorithm must either decide on the position of the object or report that the object was not found or that there was an error in detection. The algorithm steps are the following.

First, decision pairs are formed by considering all combinations of positions from two different cameras; for each pair the arithmetic mean of the x and y coordinates is kept. Once the pairs are formed, it is checked whether consensus between the three cameras has been reached, by going through the decision pairs looking for two pairs whose difference in position is less than a pre-defined threshold; a new value is then set by taking the mean of the x and y coordinates of those two pairs. If there was no consensus, the pair with the smallest difference in detected positions is sought; this smallest difference must also be less than the pre-defined threshold.
If a pair is found that satisfies these conditions, its position is taken as the position of the object. If no pair satisfies the conditions of the previous step, the next step is a decision based on information from a single camera. First, the regions covered by only one camera are checked: if a camera detected exactly one object and that object lies in the region covered only by that camera, that position is taken as the position of the object. If not, the next step is to find the position closest to the previously known position; if it satisfies a pre-defined threshold, it is taken as the object's position. Otherwise, the object was not detected. This last step can be taken at most two times in a row, as the uncertainty of the robot's position then becomes too high to trust the detected position.

IV. REALIZATION OF THE LOCALIZATION

The first task is to determine the look of the QR codes and their placement on the beacon. As a QR code needs to be detected from a distance of 3.61 m (the field diagonal), the minimal dimensions of the FIPs had to be determined. It turns out that, because of the quality of the cameras, the minimal height of a FIP is 50 mm. This proves to be a problem, since there is no room to place three of them on the beacon. As only one FIP is needed for detection of the robot, the problem is easily overcome by placing a single FIP. Since this is not enough to implement a whole QR code, a different approach had to be taken to keep the information about the robot on the beacon. As there can be only four robots on the field at one time, it is enough to represent them with two bits. This is done by placing black and white areas on both sides of the FIP, where a white area represents bit 0 and a black area bit 1.
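The decision procedure of Section III-C above can be sketched as below; the threshold value, the input layout and all names are illustrative assumptions rather than the paper's implementation, and the single-camera and previous-position fallbacks are only stubbed:

```python
# Sketch of the decision algorithm from Section III-C. The threshold,
# the data layout and all names are assumptions for illustration.
from itertools import combinations
from math import dist

THRESHOLD_MM = 150.0  # the paper determines its thresholds empirically

def mean_pos(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def decide(detections):
    """detections: one list of (x, y) field positions per camera."""
    flat = [(cam, p) for cam, ps in enumerate(detections) for p in ps]
    # 1. Decision pairs: all combinations of positions from two different
    #    cameras, keeping the mean position and the pair's disagreement.
    pairs = [(mean_pos(a, b), dist(a, b))
             for (i, a), (j, b) in combinations(flat, 2) if i != j]
    # 2. Three-camera consensus: two decision pairs whose mean positions
    #    differ by less than the threshold; return the mean of their means.
    for (p, _), (q, _) in combinations(pairs, 2):
        if dist(p, q) < THRESHOLD_MM:
            return mean_pos(p, q)
    # 3. Otherwise take the pair with the smallest disagreement, provided
    #    it is still under the threshold.
    if pairs:
        pos, gap = min(pairs, key=lambda pg: pg[1])
        if gap < THRESHOLD_MM:
            return pos
    # 4. Single-camera regions and the previous-position fallback from
    #    the paper are omitted in this sketch.
    return None
```

With three cameras agreeing to within the threshold, decide() returns the consensus mean; with no usable pair it returns None, corresponding to "object not detected".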
By placing the same combination of bits on both sides of the FIP, it is ensured that no false identification occurs when the identification from another beacon overlaps the beacon on which the FIP is detected, or when bits are misread because of bad lighting or a shadow. The identification codes used are 10 mm wide, which leaves 5 mm of space between the FIP and the identification so that the detection algorithm can run properly. The final look of the beacons is shown in Fig. 7.

Fig. 7. FIPs with identification

The localization is realized in the C++ programming language with the OpenCV library; for this application the imgproc, highgui and objdetect modules were used. The imgproc module provides the basic functions for manipulating and processing images. The highgui module provides an easy interface for displaying images, loading and saving them from disk or a camera, and reading input from a mouse or keyboard. The objdetect module contains the functions necessary for object detection; it implements the face detection algorithm explained earlier, which is used here for FIP detection. The function in which the algorithm is implemented is called detectMultiScale; its input is the trained cascade classifier, and its output is a structure containing the positions and sizes of the detected objects [7].

Because the detection process takes a lot of processing time, all images taken from the cameras are cut in half, so that an image 1280 pixels wide and 360 pixels high is processed; this halves the detection time and allows a higher frame rate. The cutting is done by removing the upper and lower quarters of the image. The acquisition of the images must be synchronized as well: to avoid combining images from different points in time, all three images are taken at the same moment and then processed one by one.

Fig. 8. Calibration of the cameras

As the operating system automatically assigns an ID to each camera, a scheme had to be devised to identify a camera by its position, and the direction of each camera has to be set properly to avoid errors. For that purpose an initialization process was introduced. During this process the cameras are turned on one by one and identified by the number of FIPs placed in front of them, using the same detection algorithm. After that, a screen is shown with a blue line in the middle of the image; this line has to match the calibration marker placed on the edge of the field where the y axis of the camera intersects it. The calibration process is shown in Fig. 8. After this the system is initialized and the localization can start working.

V. TESTING AND RESULTS

A. Illustrating results

Two methods were used for illustrating the results. The first is the image from the cameras: the images from all three cameras are merged and displayed as one, and the detected FIPs are marked with circles whose color depends on the identification. In the upper left corner, the coordinates are displayed in the same color as the circle around the FIP. This method is shown in Fig. 9.

Fig. 9. First method of displaying results

The other method is drawing a path on the field. The path is drawn on an image whose width-to-height ratio is the same as the field's, by connecting the detected positions with straight lines. Every robot has its own color, as in the previous example. This method is shown in Fig. 10.

Fig. 10. Paths of two robots

B. Testing

To demonstrate the robustness of the localization, all four beacons were placed on the field at different heights; the result is shown in Fig. 9. This proves useful, as the height of the beacon has no effect on the detection algorithm.

A second test was devised in which a robot moves along a predefined path, to measure the difference between the detected and the actual path. The result is shown in Fig. 11.

Fig. 11. Results of drawing the path from the detected positions

Next to the path, the outlines of the robots are drawn to relate the detection error visually to the size of the robot being localized. From the results, the following conclusions can be drawn. In the worst case, the localization makes an error of 100 mm. This error stems from several factors: the first is the imperfection of the field, which primarily affects the transformation from one coordinate system to another; the second is the size of the beacon, because the positions detected by cameras on opposite sides of it differ by 80 mm. The average time of one iteration is approximately 80 ms, which is roughly 12.5 frames per second, and it varies with the computer being used. Judging by the number of detected FIPs per frame, the number of false positive detections is relatively small, and it does not affect the decision process thanks to the robust identification.

VI. CONCLUSION

In this paper a localization was presented that exploits the advantages of the camera as a sensor in robotics. Using the OpenCV library, which provides access to heavily optimized and sophisticated image-processing algorithms, satisfying results were achieved in both the speed and the precision of position detection. Taking into account that commercial web cameras were used, this paper demonstrates the potential of computer vision as one of the leading methods in mobile robot localization. The next step in improving the results is to refine the decision algorithm by using machine learning to calculate the thresholds, which in this paper were determined empirically. In addition, the intensive development of the OpenCV library opens the possibility of using GPGPU in the detection process and speeding up the whole system.
As this localization was developed primarily for the Eurobot competition, future work may find further applications of QR code detection using machine vision.

SPECIAL THANKS

Special thanks to all the members of the Eurobot team Enika for their support.

REFERENCES

[1] Siegwart, Roland, Illah Reza Nourbakhsh, and Davide Scaramuzza. Introduction to Autonomous Mobile Robots. MIT Press, 2011.
[2]
[3]
[4] Viola, Paul, and Michael Jones. "Rapid object detection using a boosted cascade of simple features." Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Vol. 1, IEEE, 2001.
[5] Belussi, Luiz F. F., and Nina S. T. Hirata. "Fast QR code detection in arbitrarily acquired images." 2011 24th SIBGRAPI Conference on Graphics, Patterns and Images, IEEE, 2011.
[6]
[7]
Wii Remote Calibration Using the Sensor Bar
Wii Remote Calibration Using the Sensor Bar Alparslan Yildiz Abdullah Akay Yusuf Sinan Akgul GIT Vision Lab - http://vision.gyte.edu.tr Gebze Institute of Technology Kocaeli, Turkey {yildiz, akay, akgul}@bilmuh.gyte.edu.tr
A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA
A PHOTOGRAMMETRIC APPRAOCH FOR AUTOMATIC TRAFFIC ASSESSMENT USING CONVENTIONAL CCTV CAMERA N. Zarrinpanjeh a, F. Dadrassjavan b, H. Fattahi c * a Islamic Azad University of Qazvin - [email protected]
Neural Network based Vehicle Classification for Intelligent Traffic Control
Neural Network based Vehicle Classification for Intelligent Traffic Control Saeid Fazli 1, Shahram Mohammadi 2, Morteza Rahmani 3 1,2,3 Electrical Engineering Department, Zanjan University, Zanjan, IRAN
OPERATION MANUAL. MV-410RGB Layout Editor. Version 2.1- higher
OPERATION MANUAL MV-410RGB Layout Editor Version 2.1- higher Table of Contents 1. Setup... 1 1-1. Overview... 1 1-2. System Requirements... 1 1-3. Operation Flow... 1 1-4. Installing MV-410RGB Layout
Epipolar Geometry. Readings: See Sections 10.1 and 15.6 of Forsyth and Ponce. Right Image. Left Image. e(p ) Epipolar Lines. e(q ) q R.
Epipolar Geometry We consider two perspective images of a scene as taken from a stereo pair of cameras (or equivalently, assume the scene is rigid and imaged with a single camera from two different locations).
Freehand Sketching. Sections
3 Freehand Sketching Sections 3.1 Why Freehand Sketches? 3.2 Freehand Sketching Fundamentals 3.3 Basic Freehand Sketching 3.4 Advanced Freehand Sketching Key Terms Objectives Explain why freehand sketching
Augmented Reality Tic-Tac-Toe
Augmented Reality Tic-Tac-Toe Joe Maguire, David Saltzman Department of Electrical Engineering [email protected], [email protected] Abstract: This project implements an augmented reality version
Vision-Based Blind Spot Detection Using Optical Flow
Vision-Based Blind Spot Detection Using Optical Flow M.A. Sotelo 1, J. Barriga 1, D. Fernández 1, I. Parra 1, J.E. Naranjo 2, M. Marrón 1, S. Alvarez 1, and M. Gavilán 1 1 Department of Electronics, University
The Trade-off between Image Resolution and Field of View: the Influence of Lens Selection
The Trade-off between Image Resolution and Field of View: the Influence of Lens Selection I want a lens that can cover the whole parking lot and I want to be able to read a license plate. Sound familiar?
E190Q Lecture 5 Autonomous Robot Navigation
E190Q Lecture 5 Autonomous Robot Navigation Instructor: Chris Clark Semester: Spring 2014 1 Figures courtesy of Siegwart & Nourbakhsh Control Structures Planning Based Control Prior Knowledge Operator
A Reliability Point and Kalman Filter-based Vehicle Tracking Technique
A Reliability Point and Kalman Filter-based Vehicle Tracing Technique Soo Siang Teoh and Thomas Bräunl Abstract This paper introduces a technique for tracing the movement of vehicles in consecutive video
LESSON PLAN. Katie Dow 1
LESSON PLAN Katie Dow 1 Title of Lesson: Op Art Inquiry Lesson (Patters & Repetition) Estimated Class Time: 2 x 1 hour Course: Visual Arts for Elementary Teachers Grades 3-5 Curriculum Objectives 1. Explore
Using Microsoft Picture Manager
Using Microsoft Picture Manager Storing Your Photos It is suggested that a county store all photos for use in the County CMS program in the same folder for easy access. For the County CMS Web Project it
Cloud tracking with optical flow for short-term solar forecasting
Cloud tracking with optical flow for short-term solar forecasting Philip Wood-Bradley, José Zapata, John Pye Solar Thermal Group, Australian National University, Canberra, Australia Corresponding author:
PLAY VIDEO. Close- Closes the file you are working on and takes you back to MicroStation V8i Open File dialog.
Chapter Five Menus PLAY VIDEO INTRODUCTION To be able to utilize the many different menus and tools MicroStation V8i offers throughout the program and this guide, you must first be able to locate and understand
Shape Measurement of a Sewer Pipe. Using a Mobile Robot with Computer Vision
International Journal of Advanced Robotic Systems ARTICLE Shape Measurement of a Sewer Pipe Using a Mobile Robot with Computer Vision Regular Paper Kikuhito Kawasue 1,* and Takayuki Komatsu 1 1 Department
Excel 2007 Basic knowledge
Ribbon menu The Ribbon menu system with tabs for various Excel commands. This Ribbon system replaces the traditional menus used with Excel 2003. Above the Ribbon in the upper-left corner is the Microsoft
Digital Image Requirements for New Online US Visa Application
Digital Image Requirements for New Online US Visa Application As part of the electronic submission of your DS-160 application, you will be asked to provide an electronic copy of your photo. The photo must
Mouse Control using a Web Camera based on Colour Detection
Mouse Control using a Web Camera based on Colour Detection Abhik Banerjee 1, Abhirup Ghosh 2, Koustuvmoni Bharadwaj 3, Hemanta Saikia 4 1, 2, 3, 4 Department of Electronics & Communication Engineering,
Measuring large areas by white light interferometry at the nanopositioning and nanomeasuring machine (NPMM)
Image Processing, Image Analysis and Computer Vision Measuring large areas by white light interferometry at the nanopositioning and nanomeasuring machine (NPMM) Authors: Daniel Kapusi 1 Torsten Machleidt
FRC WPI Robotics Library Overview
FRC WPI Robotics Library Overview Contents 1.1 Introduction 1.2 RobotDrive 1.3 Sensors 1.4 Actuators 1.5 I/O 1.6 Driver Station 1.7 Compressor 1.8 Camera 1.9 Utilities 1.10 Conclusion Introduction In this
Tutorial for Tracker and Supporting Software By David Chandler
Tutorial for Tracker and Supporting Software By David Chandler I use a number of free, open source programs to do video analysis. 1. Avidemux, to exerpt the video clip, read the video properties, and save
Context-aware Library Management System using Augmented Reality
International Journal of Electronic and Electrical Engineering. ISSN 0974-2174 Volume 7, Number 9 (2014), pp. 923-929 International Research Publication House http://www.irphouse.com Context-aware Library
Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 269 Class Project Report
Automatic 3D Reconstruction via Object Detection and 3D Transformable Model Matching CS 69 Class Project Report Junhua Mao and Lunbo Xu University of California, Los Angeles [email protected] and lunbo
Limitations of Human Vision. What is computer vision? What is computer vision (cont d)?
What is computer vision? Limitations of Human Vision Slide 1 Computer vision (image understanding) is a discipline that studies how to reconstruct, interpret and understand a 3D scene from its 2D images
SYNTHESIZING FREE-VIEWPOINT IMAGES FROM MULTIPLE VIEW VIDEOS IN SOCCER STADIUM
SYNTHESIZING FREE-VIEWPOINT IMAGES FROM MULTIPLE VIEW VIDEOS IN SOCCER STADIUM Kunihiko Hayashi, Hideo Saito Department of Information and Computer Science, Keio University {hayashi,saito}@ozawa.ics.keio.ac.jp
Auto Head-Up Displays: View-Through for Drivers
Auto Head-Up Displays: View-Through for Drivers Head-up displays (HUD) are featuring more and more frequently in both current and future generation automobiles. One primary reason for this upward trend
GelAnalyzer 2010 User s manual. Contents
GelAnalyzer 2010 User s manual Contents 1. Starting GelAnalyzer... 2 2. The main window... 2 3. Create a new analysis... 2 4. The image window... 3 5. Lanes... 3 5.1 Detect lanes automatically... 3 5.2
9. Force Measurement with a Strain gage Bridge
9. Force Measurement with a Strain gage Bridge Task 1. Measure the transfer characteristics of strain-gage based force sensor in a range 0 10 kg 1. 2. Determine the mass of an unknown weight. 3. Check
Automatic Labeling of Lane Markings for Autonomous Vehicles
Automatic Labeling of Lane Markings for Autonomous Vehicles Jeffrey Kiske Stanford University 450 Serra Mall, Stanford, CA 94305 [email protected] 1. Introduction As autonomous vehicles become more popular,
RiMONITOR. Monitoring Software. for RIEGL VZ-Line Laser Scanners. Ri Software. visit our website www.riegl.com. Preliminary Data Sheet
Monitoring Software RiMONITOR for RIEGL VZ-Line Laser Scanners for stand-alone monitoring applications by autonomous operation of all RIEGL VZ-Line Laser Scanners adaptable configuration of data acquisition
Adobe Lens Profile Creator User Guide. Version 1.0 Wednesday, April 14, 2010 Adobe Systems Inc
Adobe Lens Profile Creator User Guide Version 1.0 Wednesday, April 14, 2010 Adobe Systems Inc ii Table of Contents INTRODUCTION:... 1 TERMINOLOGY:... 2 PROCEDURES:... 4 OPENING AND RUNNING FILES THROUGH
Alpy guiding User Guide. Olivier Thizy ([email protected]) François Cochard ([email protected])
Alpy guiding User Guide Olivier Thizy ([email protected]) François Cochard ([email protected]) DC0017A : april 2013 Alpy guiding module User Guide Olivier Thizy ([email protected])
Microsoft Office Excel 2007 Key Features. Office of Enterprise Development and Support Applications Support Group
Microsoft Office Excel 2007 Key Features Office of Enterprise Development and Support Applications Support Group 2011 TABLE OF CONTENTS Office of Enterprise Development & Support Acknowledgment. 3 Introduction.
