Omnidirectional sensors for mobile robots
Sebastian Blumenthal 1, David Droeschel 1, Dirk Holz 1, Thorsten Linder 1, Peter Molitor 2, Hartmut Surmann 2

1 Department of Computer Science, University of Applied Sciences Bonn-Rhein-Sieg, St. Augustin, Germany
2 Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), St. Augustin, Germany

Abstract. This paper presents the different omnidirectional sensors developed at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) during the past decade. It covers cheap USB / Firewire webcams as well as expensive high-end cameras, continuously rotating 3D laser scanners, rotating infrared cameras and a new panoramic camera, the Spherecam. This new spherical camera is presented in detail and compared to the other sensors. Besides the camera hardware, we also categorize the omni cameras according to their price, connection and operating-platform dependences, and present the associated camera drivers and possible applications. Consider this paper a hardware description of vision systems; algorithms for omnidirectional vision systems are not discussed.

1 Introduction

Traditional imaging devices (photographic and video) are limited in their field of view. Existing devices typically use a photographic or video camera in conjunction with built-in off-the-shelf lenses. The omnidirectional camera configuration allows viewing the world from the center of projection of the lens in all directions (360° field of view). Such sensors have direct applications in a variety of fields, including video surveillance [1], teleconferencing, computer vision, Simultaneous Localization And Mapping (SLAM) [2, 3] and virtual reality. This paper presents the different omnidirectional sensors and the experiences made at our institute during the past decade, with a special focus on a newly developed panoramic vision system, the Spherecam. The paper is organized as follows.
Section 2 reviews the current state and know-how of the different omnidirectional systems with a special focus on catadioptric systems. We categorize the systems and their associated applications according to price (low, medium, high), connection (USB / Firewire) and supported operating system (Linux / Windows). Section 3 presents the newly developed spherical camera, and Section 4 concludes the paper. Links to videos which show the behavior of the systems are referenced in the footnotes of the respective sections.

2 Omnidirectional vision system

Our basic catadioptric vision system consists of one camera mounted in a framework and aiming towards a hyperbolic mirror. This concept allows the use of a varied
range of different cameras (e.g. QuickCam Pro 5000 / 9000, Apple iSight, AVT Marlin) and is successfully in use in a variety of applications [4, 5, 6]. Without the camera, the complete system weighs about 300 g and has a dimension of mm (L × W × H). The main component is the hyperbolic mirror (Fig. 2).

2.1 The hyperbolic mirror

The hyperbolic mirror is made of aluminum and is optionally coated with glass (see Fig. 2(a)). The standard hyperbolic geometry version has a diameter of 70 mm, a pitch from 0° to 30° and a mirror function of z²/a² − (x² + y²)/b² = 1. Since the mirror is produced in our laboratory, we are able to implement different mirror functions; the mirror function of the catadioptric vision system can therefore be optimized for the benefit of the respective application.

2.2 A cheap Firewire camera

An interesting alternative to USB web cameras is the Apple iSight Firewire camera (about 200 US$). It has a manual / auto focus and delivers pictures with 30 fps at VGA resolution. Firewire cameras are supported by both operating systems: Linux uses the libdc1394 library [7], and for Windows the CMU 1394 driver can be used [8]. Figure 1 shows an extension of an iSight omni camera: the iSight is combined with a pan and tilt unit mounted underneath the mirror.

Fig. 1. Different views of the iSight Firewire camera with a pan and tilt unit under the mirror

Image data for searching and mapping applications are obtained with 30 fps (VGA) while the camera is aiming towards the mirror. The focus is set manually to the mirror. After detecting Points Of Interest (POIs), the servos move the camera to that position to acquire undistorted pictures, for example for object classification or recognition, with a higher resolution than the omni picture. During pan and tilt movements the auto focus is helpful. A video showing the iSight and the pan and tilt unit can be found at SPTIpnQw0eA.
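Returning to the mirror geometry of Sec. 2.1: since the mirror function can be chosen freely, a quick numeric sketch helps when designing a new profile. The following minimal Python sketch samples the height of a hyperboloid-of-revolution mirror surface; the constants a and b are illustrative assumptions, not the parameters of the actual IAIS mirror.

```python
import math

def mirror_height(rho, a=28.0, b=23.0):
    """Height z of the mirror surface at radial distance rho from the
    optical axis, for the upper sheet of the hyperboloid
    z^2/a^2 - rho^2/b^2 = 1.  a and b are illustrative values only."""
    return a * math.sqrt(1.0 + (rho / b) ** 2)

# Sample the profile across a 70 mm diameter mirror (rho in mm)
profile = [(rho, round(mirror_height(rho), 2)) for rho in range(0, 36, 5)]
```

For a real design, the constants would additionally be chosen so that the camera's center of projection sits in the mirror's outer focal point, which preserves the single-viewpoint property of catadioptric systems.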
2.3 A high resolution omnidirectional vision system

This system consists of a high resolution IEEE 1394 Firewire camera aiming towards a hyperbolic mirror. The AVT Marlin F-145-C camera is equipped with a Fujinon DF6HA-1B lens. This fixed focal lens has a focal length of 6 mm and an aperture between f/1.2 and f/16. The minimum object distance is about 0.1 m, and the maximum field of view is 56° horizontally and 43° vertically. The camera is able to deliver up to 10 fps in high resolution mode ( pixels) if DMA transmission and Format 7 are used. Several image formats like RGB, RAW and YUV are built in. Drivers for Windows are available (e.g. the CMU 1394 library [8]); in Linux, the libdc1394 library delivers the full functionality [7]. If mounted in the IAIS-Vision system, we obtain circularly shaped cut-out images from the mirror with a diameter of about 1000 pixels. This system is capable of capturing image data under varying illumination conditions, because the camera auto-adjusts the shutter speed and other parameters like white balance and gain control. Additionally, the aperture can be adjusted manually between the values given above. In comparison to the cheap iSight camera system, the images from the Marlin yield advantages for feature detection and for visual bearing-only SLAM due to their higher resolution, but the computational costs increase proportionally with the number of pixels. Depending on the specific application, these properties must be taken into account.

2.4 Consumer digital cameras

Besides the webcams, traditional consumer cameras are interesting for robotic applications. They have a high quality and are cheap compared with other robotic vision systems. High resolution images with 5-12 million pixels allow applications like environmental OCR, e.g. reading door plates. Figure 3(a) shows an omnidirectional picture taken with a Canon A620. The camera is connected via USB 2.0 to the robot.
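Omnidirectional pictures such as Fig. 3(a) occupy only a circular disc of the raw frame, so a common first processing step is to mask out everything outside that disc. A minimal NumPy sketch; the disc center and the 500-pixel radius are assumed to come from a one-time calibration and are illustrative values, not values from this paper:

```python
import numpy as np

def crop_omni_disc(image, center, radius):
    """Keep only the circular mirror region of an omnidirectional frame;
    pixels outside the disc are zeroed.  center=(cx, cy) and radius are
    assumed to come from a prior calibration step."""
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2
    out = np.zeros_like(image)
    out[mask] = image[mask]
    return out

frame = np.full((1200, 1400), 255, dtype=np.uint8)  # stand-in for a captured frame
disc = crop_omni_disc(frame, center=(700, 600), radius=500)
```

The same masking applies to the Marlin images of Sec. 2.3; only the calibrated center and radius differ per camera setup.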
The operating systems Linux and Windows are supported: the libptp library is used for Linux, and the Remote Capture software from Canon is used for Windows [9, 10]. The fast viewfinder mode supports a frame size of pixels with 30 fps, which is similar to the camera display resolution. When a mobile robot reaches a POI, for example a door, high resolution images are taken. A high resolution image with 7.2 million pixels takes 5 seconds to be transferred to the robot. Figure 3 shows a Canon IXUS 400 together with an infrared camera on a pan and tilt unit. The recently published software extension CHDK (Canon Hacker's Development Kit) further increases the usefulness of standard digital cameras [11]. The CHDK is a firmware enhancement that operates on various models of Canon cameras. It provides additional functionality, for example for experienced photographers, beyond that provided by the native camera firmware. Besides the enhanced ways of recording images and additional photographic settings, the CHDK can run program scripts written in a micro-version of the BASIC language, for example for motion detection or interval self-triggering. A great benefit of very high resolution images is their use for text recognition, in particular for extracting information from door plates in office environments. This enables maps to be enriched with semantic information.

Fig. 2. (a) The hyperbolic mirror in standard geometry. (b) The high resolution IEEE 1394 camera AVT Marlin with an omnidirectional mirror and (c) an example image taken with the AVT in the foyer of the Fraunhofer IAIS at campus Schloss Birlinghoven.

Fig. 3. (a) Omni picture taken with the consumer digital camera Canon A620. (b) Combined infrared camera A10 (FLIR) and a consumer digital camera (IXUS 400) on a continuously turning pan and tilt unit.

2.5 Continuously rotating 3D laser scanner

For building metric maps and omnidirectional pictures, a continuously rotating 3D laser scanner (e.g. the 3DLS-K2) is appropriate (Fig. 4(a)) [12]. Besides the 3D point cloud (omnidirectional distance values), the laser scanner delivers the amount of reflected light (remission values, Fig. 4(b)). This leads to black/white textured maps.

Fig. 4. (a) The continuously rotating laser scanner 3DLS-K2 consists of a rotation unit with two SICK LMS laser scanners. (b) Remission image of Schloss Birlinghoven.

The 3DLS-K2 consists of a rotation unit to reach the third dimension and of two 2D time-of-flight SICK laser scanners (scanning range 180°). They provide distance measurements in a range of up to 80 m with centimeter accuracy (see Fig. 5). The continuously yawing scan scheme [13] of the unit generates two 360-degree dense point clouds every 1.2 seconds. The hundreds of 3D point clouds acquired during a robot run are registered into a common coordinate system with the well-known Iterative Closest Point (ICP) algorithm described in several previously published papers (e.g. [14, 15]). According to our categories, the laser scanner is a very expensive omnidirectional sensor with an update period of 1.2 seconds. The scanner is operating system independent and needs 3 USB connections. The remission images in combination with the depth information are useful for building metric maps; in particular, it is possible to place the remission images as textures in a virtual 3D map.
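The core of each ICP iteration — finding the rigid transform that best aligns two corresponded point sets — has a closed-form solution via the SVD. The sketch below shows only this alignment step with known correspondences; a full ICP as used in [14, 15] additionally iterates nearest-neighbour matching until convergence. Function and variable names are our own.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping point set P onto Q
    (rows are 3D points, correspondences assumed known) -- the closed-form
    alignment step at the core of each ICP iteration."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Toy check: recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
P = rng.random((50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([0.5, -0.2, 1.0])
R, t = best_rigid_transform(P, Q)
```

Registering hundreds of scans then amounts to chaining such pairwise transforms (or optimizing them jointly, as in the cited 6D SLAM work).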
Fig. 5. (a) Top view of a point cloud acquired with the IAIS 3DLS-K2 (points are colorized according to the measured remission value), (b) a detailed view and (c) a photo of the scene.

Fig. 6. Panoramic vision system. (a) Scheme of a penta-dodecahedron [16]. (b) The dodecahedron-shaped camera system with eleven cameras, each aiming in another direction. (c) An impression from panoramic images of the dodecahedron-shaped camera.
3 Panoramic vision system Spherecam

For visual inspection and SLAM applications, the new vision system Spherecam is presented here. It is built up from eleven off-the-shelf USB Logitech QuickCam Pro 9000 web cameras aiming in different directions. They are mounted in a penta-dodecahedron shaped chassis with a size of mm (L × W × H) (see Fig. 6(a) and 6(b)). Each camera delivers up to 15 frames per second at a resolution of pixels; at a lower frame rate (5 fps), pictures with a resolution of pixels can be acquired. All eleven cameras are connected to between one and four computers (e.g. Mac Minis) via USB 2.0, according to the desired computational power. Complete scenes of eleven images at VGA resolution can be acquired with a frame rate of approx. 3 fps due to the serial readout of the cameras. More computers speed up the frame rate but require a precise synchronization. The main advantages of the Spherecam are its inherent robustness, since it has no mechanically moving parts, and its high resolution undistorted images, which allow direct feature detection in the single images pointing in an interesting direction. Depending on the field of view of the cameras used, a near-panoramic overview of the environment can be obtained. Fig. 6(c) shows a panoramic (fisheye) image obtained from all cameras.

The USB video device class (UVC) is a special USB class which defines functionality for video streaming devices like webcams, camcorders or television tuners over the Universal Serial Bus. The Logitech QuickCam Pro 9000 is a UVC-compliant camera device. Drivers for the Windows operating systems are shipped with the camera; a Linux implementation of the UVC standard can be found at [17].

The proposed vision system has a fixed numbering strategy. One camera from the lower row is chosen as the starting point and is assigned the first index. The next four clockwise cameras (seen from a top-view perspective) are labeled in ascending order.
The first camera of the upper row, which is directly left of the starting camera, is assigned the sixth index. The other cameras of the upper row are assigned ascending numbers clockwise. The camera which is aligned directly to the top gets the last index. The remaining bottom surface is reserved for the mounting parts. Fig. 7 sketches the numbering.

Besides the camera numbering, the exact position and orientation of each camera must be known. The camera lenses are located in the middle of each pentagon. The height h of a pentagon is characterized by

h = (a/2) · sqrt(5 + 2·sqrt(5)) [18] (1)

where a is the length of one edge. The coordinate frame for the whole Spherecam camera system has its origin in the center of the inscribed sphere of the dodecahedron.
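The pentagon height in Eq. (1) is the apothem plus the circumradius of a regular pentagon; one closed form is h = (a/2)·sqrt(5 + 2·sqrt(5)) ≈ 1.539 a. A quick numeric cross-check:

```python
import math

def pentagon_height(a):
    """Height of a regular pentagon with edge length a:
    h = (a/2) * sqrt(5 + 2*sqrt(5))."""
    return (a / 2.0) * math.sqrt(5.0 + 2.0 * math.sqrt(5.0))

def pentagon_height_trig(a):
    """Same quantity derived as apothem + circumradius."""
    apothem = a / (2.0 * math.tan(math.pi / 5))
    circumradius = a / (2.0 * math.sin(math.pi / 5))
    return apothem + circumradius
```

Both forms agree to machine precision, which confirms the closed-form expression used for the camera placement.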
Fig. 7. Camera numbering for the panoramic vision system. (a) Scheme of the numbering on the penta-dodecahedron. (b) The same scheme displayed on an unfolded dodecahedron [16].

The inner radius r of this sphere can be determined by

r = (a/4) · sqrt((50 + 22·sqrt(5)) / 5) [19] (2)

For each camera alignment, pitch, yaw and roll values are needed. The pitch value α_pitch can be deduced as follows. A slice plane is positioned as shown in Fig. 8(a); the resulting figure is depicted in Fig. 8(b). It consists of four pentagon heights which touch the inscribed sphere and two edges. The dihedral angle (the angle between two surfaces), i.e. the angle γ between two heights, is given by

γ = arccos(−(1/5)·sqrt(5)) [20] (3)

In Fig. 8(b) the angle ε is exactly half of the dihedral angle,

ε = (1/2) γ (4)

where the mirror axis x is the bisecting line of γ. The angle β, which lies opposite to ε in the right-angled triangle, can easily be calculated:

β = 90° − ε (5)

The horizontal dotted line in Fig. 8(b) indicates a base line which is parallel to the mounting surface of the vision system. r is perpendicular to this base line b and reaches from the center to the middle point of the height h of the upper pentagon surface. Due to the axis of symmetry x, the segment r also reaches from the center to the middle of the adjacent pentagon surface, exactly where the camera lens is located. r and b are perpendicular, and this right angle consists of two times β plus α_pitch:

α_pitch = 90° − 2β (6)
α_pitch = 90° − 2·(90° − ε) = −90° + 2ε    using (5) (7)

α_pitch = −90° + 2·(1/2)γ = −90° + γ    using (4) (8)

α_pitch = −90° + arccos(−(1/5)·sqrt(5)) ≈ 26.6°    using (3) (9)

For the yaw angles the symmetry of the dodecahedron is used. Seen from the top view, the five cameras of one horizontal row are equidistantly distributed (see Fig. 8(c)), and the angle between two adjacent cameras is

α_yaw_adjacent = 360° / 5 = 72° (10)

The shift between the upper and the lower row is half this angle:

α_yaw_offset = α_yaw_adjacent / 2 = 72° / 2 = 36° (11)

The roll angle for each camera is equal to zero, because the rectangular fields of view of the cameras are aligned parallel to the base line:

α_roll = 0° (12)

Fig. 8. Geometry of a penta-dodecahedron. (a) Slice plane through the dodecahedron [18]. (b) Dodecahedron with angles α, β and ε; x denotes the axis of symmetry. (c) Yaw angles seen from the top perspective.

With the above considerations it is possible to set up the pitch, yaw and roll angles for all cameras in the vision system. With respect to the numbering strategy, camera number one is defined with a yaw value of zero, α_yaw,start = 0°. It belongs to the lower row, and therefore its pitch angle is α_pitch = −26.6°. Roll is equal to zero.
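The derivation in Eqs. (3)–(12), together with the numbering scheme, can be checked numerically. The sketch below computes the dihedral angle and the resulting pitch/yaw assignment per camera index; the sign convention (lower row pitched downwards) and the top camera's experimentally determined yaw of 152° follow the text, and the helper names are our own.

```python
import math

# Dihedral angle of the regular dodecahedron, Eq. (3): ~116.57 degrees
gamma = math.degrees(math.acos(-math.sqrt(5) / 5))
alpha_pitch = gamma - 90.0          # Eqs. (7)-(9): ~26.57 degrees
yaw_adjacent = 360.0 / 5            # Eq. (10): 72 degrees
yaw_offset = yaw_adjacent / 2       # Eq. (11): 36 degrees

def spherecam_angles(i):
    """(pitch, yaw, roll) in degrees for camera i (1..11):
    cameras 1-5 form the lower row, 6-10 the upper row, 11 looks up.
    The sign convention (lower row pitched downwards) is an assumption."""
    if i == 11:
        return (90.0, 152.0, 0.0)   # top camera; yaw found experimentally
    if 1 <= i <= 5:
        return (-alpha_pitch, (i - 1) * yaw_adjacent, 0.0)
    if 6 <= i <= 10:
        return (alpha_pitch, yaw_offset + (i - 6) * yaw_adjacent, 0.0)
    raise ValueError("camera index must be in 1..11")

angles = [spherecam_angles(i) for i in range(1, 12)]
```

Evaluating the list reproduces the camera-by-camera assignment described in the text.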
The second camera has the same roll and pitch, but its yaw is increased by α_yaw_adjacent and is therefore 72°. For the third camera the yaw is again increased by α_yaw_adjacent, and so on until camera five. The sixth camera belongs to the upper row and has a pitch of α_pitch = 26.6°; its yaw differs from that of the first camera by α_yaw_offset = 36°. The following cameras up to the tenth again each add α_yaw_adjacent to the yaw. The last camera, which is mounted on top of the vision system, is heading directly upwards; therefore its pitch angle is perpendicular to the base line, α_pitch,#11 = 90°. Its yaw angle was determined experimentally and is 152°. Table 1 summarizes all pitch, yaw and roll angles for each camera.

Number  Pitch   Yaw   Roll
1       -26.6     0     0
2       -26.6    72     0
3       -26.6   144     0
4       -26.6   216     0
5       -26.6   288     0
6        26.6    36     0
7        26.6   108     0
8        26.6   180     0
9        26.6   252     0
10       26.6   324     0
11       90.0   152     0

Table 1. Pitch, yaw and roll angles for each camera in the Spherecam vision system. All angles are given in degrees.

The displacement of each camera is described by the radius of the inscribed sphere. The edge length a of the vision system used here is 75 mm; substituted into Equation (2), r is equal to mm.

With the pitch, yaw and roll angles and the inner radius it is possible to deduce homogeneous transformation matrices [21] for each camera with respect to the center of the vision system. With those matrices it is easily possible to transform coordinates from a camera coordinate system into the global system of the Spherecam. The used coordinate system follows the left-hand system (LHS) rule: x and y denote the width and height, whereas z is the depth relative to the x-y plane. A transformation matrix to a specific camera coordinate frame is created by first setting up an auxiliary transformation matrix with respect to pitch, yaw and roll in the origin of the global vision system frame (no displacement). Then the distance of the inner radius is traveled along the z axis to get the point P_aux.
The auxiliary transformation matrix T_aux is inverted to transform the point P_aux from the auxiliary camera coordinate frame into the global frame: P_G = T_aux⁻¹ · P_aux. P_G describes the displacement vector to the origin of the camera coordinate frame. The values for the displacement in
the auxiliary matrix are replaced by the values of the calculated displacement vector P_G to form the homogeneous transformation matrix for one camera frame. A plot of the calculated displacement vectors is depicted in Fig. 9.

Fig. 9. Plotted displacement vectors. (a) Displacement vectors connected with lines in ascending order (ending in the origin). (b) The same plot seen from the top perspective.

This panoramic vision system is adequate for visual inspection, surveillance, exploration and visual SLAM applications. It is still under development, and a comparison of its performance regarding visual SLAM is left to future work.

4 Conclusion

We have presented a variety of vision and range sensors especially designed for use on mobile robot platforms and for a wide variety of tasks and application areas: catadioptric omnidirectional camera systems for obtaining a hemispherical view, continuously rotating 3D laser scanners for digitizing and reconstructing real-world scenes and buildings, and the Spherecam for getting a near-panoramic view of a robot's surroundings. The drivers for all of the sensors are available for Unix and Windows machines. Cheap solutions with webcams as well as high-end solutions with expensive cameras have been realized. Especially the new Spherecam provides interesting new possibilities because it delivers undistorted high resolution images as well as omni images at the same time, and it is more robust than solutions with pan and tilt or rotation units.

Acknowledgments

The authors would like to thank our Fraunhofer colleagues for their support, namely Viatcheslav Tretyakov, Reiner Frings, Jochen Winzer, Stefan May, Paul Plöger and Kai Pervölz. This work was partly supported by the DFG grant CH 74/9-1 (SPP 1125).
References

[1] Murphy, R.: Trial by fire [rescue robots]. IEEE Robotics & Automation Magazine 11(3) (Sept. 2004)
[2] Durrant-Whyte, H., Bailey, T.: Simultaneous localization and mapping: part I. IEEE Robotics & Automation Magazine 13 (2006)
[3] Bailey, T., Durrant-Whyte, H.: Simultaneous localization and mapping (SLAM): part II. IEEE Robotics & Automation Magazine 13 (2006)
[4] Bredenfeld, A., Wisspeintner, T.: Rapid Prototyping für mobile Roboter. Elektronik-Informationen 12 (2004)
[5] Calabrese, F., Indiveri, G.: An Omni-Vision Triangulation-Like Approach to Mobile Robot Localization. In: Proceedings of the 2005 IEEE International Symposium on Intelligent Control / Mediterranean Conference on Control and Automation. IEEE (June 2005)
[6] Wisspeintner, T., Bose, A.: The VolksBot Concept - Rapid Prototyping for real-life Applications in Mobile Robotics. it 5 (May 2005)
[7] Douxchamps, D., Peters, G., Urmson, C., Dennedy, D., Moore, D.C., Lamorie, J., Moos, P., Ronneberger, O., Antoniac, D.P., Evers, T.: 1394-based dc control library. (September 2008)
[8] Baker, C.: CMU 1394 digital camera driver. (September 2008)
[9] Danecek, P.: Canon capture. (September 2008)
[10] Canon Europe Ltd: Remote Capture. (September 2008)
[11] CHDK project group: CHDK Wiki.
[12] Fraunhofer-Institut für Intelligente Analyse- und Informationssysteme (IAIS): VolksBot. (2008)
[13] Wulf, O., Wagner, B.: Fast 3D-Scanning Methods for Laser Measurement Systems. In: International Conference on Control Systems and Computer Science (CSCS14). (2003)
[14] Nüchter, A., Lingemann, K., Surmann, H., Hertzberg, J.: 6D SLAM for 3D Mapping Outdoor Environments. Journal of Field Robotics (JFR), Special Issue on Quantitative Performance Evaluation of Robotic and Intelligent Systems 24 (2007)
[15] Surmann, H., Nüchter, A., Lingemann, K., Hertzberg, J.: 6D SLAM - Preliminary Report on Closing the Loop in Six Dimensions. In: Proc. of the 5th IFAC Symposium on Intelligent Autonomous Vehicles. (June 2004)
[16] Wikimedia Commons: Dodecahedra. (September 2008)
[17] Linux-UVC project: Linux UVC driver and tools. (August 2008)
[18] Fendt, W.: Das Dodekaeder. (February 2005)
[19] Gullberg, J.: Mathematics: From the Birth of Numbers. W. W. Norton & Co (April 1997)
[20] Köller, J.: Pentagondodekaeder. (September 2008)
[21] Craig, J.J.: Introduction to Robotics - Mechanics and Control. Addison-Wesley (1989)
Removing Moving Objects from Point Cloud Scenes
1 Removing Moving Objects from Point Cloud Scenes Krystof Litomisky [email protected] Abstract. Three-dimensional simultaneous localization and mapping is a topic of significant interest in the research
How To Use Bodescan For 3D Imaging Of The Human Body
«Bodescan» THE ULTIMATE 3D SCANNER FOR DATA ACQUISITION OF HUMAN BODY A COMPREHENSIVE SYSTEM INCLUDING HARDWARE AND SOFTWARE TOOLS Page 2 of 9 Bodescan HUMAN BODY3D SCANNING HARDWARE AND SOFTWARE INCLUDED
Color holographic 3D display unit with aperture field division
Color holographic 3D display unit with aperture field division Weronika Zaperty, Tomasz Kozacki, Malgorzata Kujawinska, Grzegorz Finke Photonics Engineering Division, Faculty of Mechatronics Warsaw University
Industrial Robotics. Training Objective
Training Objective After watching the program and reviewing this printed material, the viewer will learn the basics of industrial robot technology and how robots are used in a variety of manufacturing
LIST OF CONTENTS CHAPTER CONTENT PAGE DECLARATION DEDICATION ACKNOWLEDGEMENTS ABSTRACT ABSTRAK
vii LIST OF CONTENTS CHAPTER CONTENT PAGE DECLARATION DEDICATION ACKNOWLEDGEMENTS ABSTRACT ABSTRAK LIST OF CONTENTS LIST OF TABLES LIST OF FIGURES LIST OF NOTATIONS LIST OF ABBREVIATIONS LIST OF APPENDICES
7 MEGAPIXEL 180 DEGREE IP VIDEO CAMERA
Scallop Imaging is focused on developing, marketing and manufacturing its proprietary video imaging technology. All our activities are still proudly accomplished in Boston. We do product development, marketing
Robotics. Lecture 3: Sensors. See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information.
Robotics Lecture 3: Sensors See course website http://www.doc.ic.ac.uk/~ajd/robotics/ for up to date information. Andrew Davison Department of Computing Imperial College London Review: Locomotion Practical
Template-based Eye and Mouth Detection for 3D Video Conferencing
Template-based Eye and Mouth Detection for 3D Video Conferencing Jürgen Rurainsky and Peter Eisert Fraunhofer Institute for Telecommunications - Heinrich-Hertz-Institute, Image Processing Department, Einsteinufer
3D Vision An enabling Technology for Advanced Driver Assistance and Autonomous Offroad Driving
3D Vision An enabling Technology for Advanced Driver Assistance and Autonomous Offroad Driving AIT Austrian Institute of Technology Safety & Security Department Christian Zinner Safe and Autonomous Systems
Basler dart AREA SCAN CAMERAS. Board level cameras with bare board, S- and CS-mount options
Basler dart AREA SCAN CAMERAS Board level cameras with bare board, S- and CS-mount options Designed to meet smallest space as well as lowest weight and power requirements Plug and play with future-proof
WHITE PAPER. Are More Pixels Better? www.basler-ipcam.com. Resolution Does it Really Matter?
WHITE PAPER www.basler-ipcam.com Are More Pixels Better? The most frequently asked question when buying a new digital security camera is, What resolution does the camera provide? The resolution is indeed
Tube Control Measurement, Sorting Modular System for Glass Tube
Tube Control Measurement, Sorting Modular System for Glass Tube Tube Control is a modular designed system of settled instruments and modules. It comprises measuring instruments for the tube dimensions,
Realization of a UV fisheye hyperspectral camera
Realization of a UV fisheye hyperspectral camera Valentina Caricato, Andrea Egidi, Marco Pisani and Massimo Zucco, INRIM Outline Purpose of the instrument Required specs Hyperspectral technique Optical
MACHINE VISION MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com
MACHINE VISION by MNEMONICS, INC. 102 Gaither Drive, Suite 4 Mount Laurel, NJ 08054 USA 856-234-0970 www.mnemonicsinc.com Overview A visual information processing company with over 25 years experience
Mouse Control using a Web Camera based on Colour Detection
Mouse Control using a Web Camera based on Colour Detection Abhik Banerjee 1, Abhirup Ghosh 2, Koustuvmoni Bharadwaj 3, Hemanta Saikia 4 1, 2, 3, 4 Department of Electronics & Communication Engineering,
Terrain Traversability Analysis using Organized Point Cloud, Superpixel Surface Normals-based segmentation and PCA-based Classification
Terrain Traversability Analysis using Organized Point Cloud, Superpixel Surface Normals-based segmentation and PCA-based Classification Aras Dargazany 1 and Karsten Berns 2 Abstract In this paper, an stereo-based
Optimao. In control since 1995. Machine Vision: The key considerations for successful visual inspection
Optimao In control since 1995 Machine Vision: The key considerations for successful visual inspection There is no such a thing as an off-the-shelf vision system. Just like a drive- or a PLCbased control
Experimental Evaluation of State of the Art 3D-Sensors for Mobile Robot Navigation 1)
Experimental Evaluation of State of the Art 3D-Sensors for Mobile Robot Navigation 1) Peter Einramhof, Sven Olufs and Markus Vincze Technical University of Vienna, Automation and Control Institute {einramhof,
Wireless Day / Night Cloud Camera TV-IP751WIC (v1.0r)
(v1.0r) TRENDnet s Wireless Day / Night Cloud Camera, model, takes the work out of viewing video over the internet. Previously to view video remotely, users needed to perform many complicated and time
Interactive Motion Simulators
motionsimulator motionsimulator About the Company Since its founding in 2003, the company Buck Engineering & Consulting GmbH (BEC), with registered offices in Reutlingen (Germany) has undergone a continuous
Basler beat AREA SCAN CAMERAS. High-resolution 12 MP cameras with global shutter
Basler beat AREA SCAN CAMERAS High-resolution 12 MP cameras with global shutter Outstanding price / performance ratio High speed through Camera Link interface Flexible and easy integration Overview Convincing
Ampere's Law. Introduction. times the current enclosed in that loop: Ampere's Law states that the line integral of B and dl over a closed path is 0
1 Ampere's Law Purpose: To investigate Ampere's Law by measuring how magnetic field varies over a closed path; to examine how magnetic field depends upon current. Apparatus: Solenoid and path integral
ACCURACY ASSESSMENT OF BUILDING POINT CLOUDS AUTOMATICALLY GENERATED FROM IPHONE IMAGES
ACCURACY ASSESSMENT OF BUILDING POINT CLOUDS AUTOMATICALLY GENERATED FROM IPHONE IMAGES B. Sirmacek, R. Lindenbergh Delft University of Technology, Department of Geoscience and Remote Sensing, Stevinweg
Project 4: Camera as a Sensor, Life-Cycle Analysis, and Employee Training Program
Project 4: Camera as a Sensor, Life-Cycle Analysis, and Employee Training Program Team 7: Nathaniel Hunt, Chase Burbage, Siddhant Malani, Aubrey Faircloth, and Kurt Nolte March 15, 2013 Executive Summary:
Relating Vanishing Points to Catadioptric Camera Calibration
Relating Vanishing Points to Catadioptric Camera Calibration Wenting Duan* a, Hui Zhang b, Nigel M. Allinson a a Laboratory of Vision Engineering, University of Lincoln, Brayford Pool, Lincoln, U.K. LN6
How To Use Trackeye
Product information Image Systems AB Main office: Ågatan 40, SE-582 22 Linköping Phone +46 13 200 100, fax +46 13 200 150 [email protected], Introduction TrackEye is the world leading system for motion
Classifying Manipulation Primitives from Visual Data
Classifying Manipulation Primitives from Visual Data Sandy Huang and Dylan Hadfield-Menell Abstract One approach to learning from demonstrations in robotics is to make use of a classifier to predict if
A. OPENING POINT CLOUDS. (Notepad++ Text editor) (Cloud Compare Point cloud and mesh editor) (MeshLab Point cloud and mesh editor)
MeshLAB tutorial 1 A. OPENING POINT CLOUDS (Notepad++ Text editor) (Cloud Compare Point cloud and mesh editor) (MeshLab Point cloud and mesh editor) 2 OPENING POINT CLOUDS IN NOTEPAD ++ Let us understand
The process components and related data characteristics addressed in this document are:
TM Tech Notes Certainty 3D November 1, 2012 To: General Release From: Ted Knaak Certainty 3D, LLC Re: Structural Wall Monitoring (#1017) rev: A Introduction TopoDOT offers several tools designed specifically
Chapter 22: Electric Flux and Gauss s Law
22.1 ntroduction We have seen in chapter 21 that determining the electric field of a continuous charge distribution can become very complicated for some charge distributions. t would be desirable if we
Design of a six Degree-of-Freedom Articulated Robotic Arm for Manufacturing Electrochromic Nanofilms
Abstract Design of a six Degree-of-Freedom Articulated Robotic Arm for Manufacturing Electrochromic Nanofilms by Maxine Emerich Advisor: Dr. Scott Pierce The subject of this report is the development of
USB 3.0 Camera User s Guide
Rev 1.2 Leopard Imaging Inc. Mar, 2014 Preface Congratulations on your purchase of this product. Read this manual carefully and keep it in a safe place for any future reference. About this manual This
Face detection is a process of localizing and extracting the face region from the
Chapter 4 FACE NORMALIZATION 4.1 INTRODUCTION Face detection is a process of localizing and extracting the face region from the background. The detected face varies in rotation, brightness, size, etc.
Grazing incidence wavefront sensing and verification of X-ray optics performance
Grazing incidence wavefront sensing and verification of X-ray optics performance Timo T. Saha, Scott Rohrbach, and William W. Zhang, NASA Goddard Space Flight Center, Greenbelt, Md 20771 Evaluation of
VRSPATIAL: DESIGNING SPATIAL MECHANISMS USING VIRTUAL REALITY
Proceedings of DETC 02 ASME 2002 Design Technical Conferences and Computers and Information in Conference Montreal, Canada, September 29-October 2, 2002 DETC2002/ MECH-34377 VRSPATIAL: DESIGNING SPATIAL
Automatic Calibration of an In-vehicle Gaze Tracking System Using Driver s Typical Gaze Behavior
Automatic Calibration of an In-vehicle Gaze Tracking System Using Driver s Typical Gaze Behavior Kenji Yamashiro, Daisuke Deguchi, Tomokazu Takahashi,2, Ichiro Ide, Hiroshi Murase, Kazunori Higuchi 3,
A method of generating free-route walk-through animation using vehicle-borne video image
A method of generating free-route walk-through animation using vehicle-borne video image Jun KUMAGAI* Ryosuke SHIBASAKI* *Graduate School of Frontier Sciences, Shibasaki lab. University of Tokyo 4-6-1
ZEISS T-SCAN Automated / COMET Automated 3D Digitizing - Laserscanning / Fringe Projection Automated solutions for efficient 3D data capture
ZEISS T-SCAN Automated / COMET Automated 3D Digitizing - Laserscanning / Fringe Projection Automated solutions for efficient 3D data capture ZEISS 3D Digitizing Automated solutions for efficient 3D data
Geometric Camera Parameters
Geometric Camera Parameters What assumptions have we made so far? -All equations we have derived for far are written in the camera reference frames. -These equations are valid only when: () all distances
Low-resolution Character Recognition by Video-based Super-resolution
2009 10th International Conference on Document Analysis and Recognition Low-resolution Character Recognition by Video-based Super-resolution Ataru Ohkura 1, Daisuke Deguchi 1, Tomokazu Takahashi 2, Ichiro
INTRODUCTION TO RENDERING TECHNIQUES
INTRODUCTION TO RENDERING TECHNIQUES 22 Mar. 212 Yanir Kleiman What is 3D Graphics? Why 3D? Draw one frame at a time Model only once X 24 frames per second Color / texture only once 15, frames for a feature
1.3. DOT PRODUCT 19. 6. If θ is the angle (between 0 and π) between two non-zero vectors u and v,
1.3. DOT PRODUCT 19 1.3 Dot Product 1.3.1 Definitions and Properties The dot product is the first way to multiply two vectors. The definition we will give below may appear arbitrary. But it is not. It
Subspace Analysis and Optimization for AAM Based Face Alignment
Subspace Analysis and Optimization for AAM Based Face Alignment Ming Zhao Chun Chen College of Computer Science Zhejiang University Hangzhou, 310027, P.R.China [email protected] Stan Z. Li Microsoft
D/N VARI-FOCAL IR CAMERA
D/N VARI-FOCAL IR CAMERA PIH-0364X/0368X/0384X/0388X W/S INSTRUCTION MANUAL IMPORTANT SAFEGUARDS 1. Read Instructions All the safety and operating instructions should be read before the unit is operated.
Fraunhofer Diffraction
Physics 334 Spring 1 Purpose Fraunhofer Diffraction The experiment will test the theory of Fraunhofer diffraction at a single slit by comparing a careful measurement of the angular dependence of intensity
Making Better Medical Devices with Multisensor Metrology
Making Better Medical Devices with Multisensor Metrology by Nate J. Rose, Chief Applications Engineer, Optical Gaging Products (OGP) Multisensor metrology is becoming a preferred quality control technology
11.1. Objectives. Component Form of a Vector. Component Form of a Vector. Component Form of a Vector. Vectors and the Geometry of Space
11 Vectors and the Geometry of Space 11.1 Vectors in the Plane Copyright Cengage Learning. All rights reserved. Copyright Cengage Learning. All rights reserved. 2 Objectives! Write the component form of
Introduction to Computer Graphics
Introduction to Computer Graphics Torsten Möller TASC 8021 778-782-2215 [email protected] www.cs.sfu.ca/~torsten Today What is computer graphics? Contents of this course Syllabus Overview of course topics
GeoGebra. 10 lessons. Gerrit Stols
GeoGebra in 10 lessons Gerrit Stols Acknowledgements GeoGebra is dynamic mathematics open source (free) software for learning and teaching mathematics in schools. It was developed by Markus Hohenwarter
TRIMBLE TX5 3D LASER SCANNER QUICK START GUIDE
TRIMBLE TX5 3D LASER SCANNER QUICK START GUIDE Equipment 1 8 9 5 6 7 4 3 2 The TX5 laser scanner ships with the following equipment: 1 Scanner transport and carry case 6 USB memory card reader 2 AC power
Mechanics lecture 7 Moment of a force, torque, equilibrium of a body
G.1 EE1.el3 (EEE1023): Electronics III Mechanics lecture 7 Moment of a force, torque, equilibrium of a body Dr Philip Jackson http://www.ee.surrey.ac.uk/teaching/courses/ee1.el3/ G.2 Moments, torque and
Autonomous Mobile Robot-I
Autonomous Mobile Robot-I Sabastian, S.E and Ang, M. H. Jr. Department of Mechanical Engineering National University of Singapore 21 Lower Kent Ridge Road, Singapore 119077 ABSTRACT This report illustrates
