Vision meets Robotics: The KITTI Dataset
Geiger, Andreas; Lenz, Philip; Stiller, Christoph; Urtasun, Raquel: Vision meets Robotics: The KITTI Dataset. International Journal of Robotics Research (IJRR) 32 (2013).

Vision meets Robotics: The KITTI Dataset
Andreas Geiger, Philip Lenz, Christoph Stiller and Raquel Urtasun

Abstract: We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS/IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.

Index Terms: dataset, autonomous driving, mobile robotics, field robotics, computer vision, cameras, laser, GPS, benchmarks, stereo, optical flow, SLAM, object detection, tracking, KITTI

I. INTRODUCTION

The KITTI dataset has been recorded from a moving platform (Fig. 1) while driving in and around Karlsruhe, Germany (Fig. 2). It includes camera images, laser scans, high-precision GPS measurements and IMU accelerations from a combined GPS/IMU system. The main purpose of this dataset is to push forward the development of computer vision and robotic algorithms targeted to autonomous driving [1]-[7]. While our introductory paper [8] mainly focuses on the benchmarks, their creation and their use for evaluating state-of-the-art computer vision methods, here we complement this information by providing technical details on the raw data itself. We give precise instructions on how to access the data and comment on sensor limitations and common pitfalls. The dataset can be downloaded from http://www.cvlibs.net/datasets/kitti. For a review of related work, we refer the reader to [8].

II. SENSOR SETUP

Our sensor setup is illustrated in Fig. 3:
- 2 x PointGray Flea2 grayscale cameras (FL2-14S3M-C), 1.4 Megapixels, 1/2" Sony ICX267 CCD, global shutter
- 2 x PointGray Flea2 color cameras (FL2-14S3C-C), 1.4 Megapixels, 1/2" Sony ICX267 CCD, global shutter
- 4 x Edmund Optics lenses, 4 mm, opening angle ~90°, vertical opening angle of region of interest (ROI) ~35°
- 1 x Velodyne HDL-64E rotating 3D laser scanner, 10 Hz, 64 beams, 0.09 degree angular resolution, 2 cm distance accuracy, collecting ~1.3 million points/second, field of view: 360° horizontal, 26.8° vertical, range: 120 m
- 1 x OXTS RT3003 inertial and GPS navigation system, 6 axis, 100 Hz, L1/L2 RTK, resolution: 0.02 m / 0.1°

A. Geiger, P. Lenz and C. Stiller are with the Department of Measurement and Control Systems, Karlsruhe Institute of Technology, Germany. {geiger,lenz,stiller}@kit.edu. R. Urtasun is with the Toyota Technological Institute at Chicago, USA. [email protected]

Fig. 1. Recording Platform. Our VW Passat station wagon is equipped with four video cameras (two color and two grayscale cameras), a rotating 3D laser scanner and a combined GPS/IMU inertial navigation system.

Note that the color cameras lack in terms of resolution due to the Bayer pattern interpolation process and are less sensitive to light.
This is the reason why we use two stereo camera rigs, one for grayscale and one for color. The baseline of both stereo camera rigs is approximately 54 cm. The trunk of our vehicle houses a PC with two six-core Intel XEON X5650 processors and shock-absorbed RAID 5 hard disk storage with a capacity of 4 Terabytes. Our computer runs Ubuntu Linux (64 bit) and a real-time database [9] to store the incoming data streams.

III. DATASET

The raw data described in this paper can be accessed from http://www.cvlibs.net/datasets/kitti/raw_data.php and contains about 25% of our overall recordings. The reason for this is that primarily data with 3D tracklet annotations has been put online, though we will make more data available upon request. Furthermore, we have removed all sequences which are part of our benchmark test sets. The raw data set is divided into the categories Road, City, Residential, Campus and Person. Example frames are illustrated in Fig. 5. For each sequence, we provide the raw data, object annotations in the form of 3D bounding box tracklets and a calibration file, as illustrated in Fig. 4. Our recordings took place on the 26th, 28th, 29th and 30th of September and on the 3rd of October 2011 during daytime. The total size of the provided data is 180 GB.
Fig. 2. Recording Zone. This figure shows the GPS traces of our recordings in the metropolitan area of Karlsruhe, Germany. Colors encode the GPS signal quality: red tracks have been recorded with highest precision using RTK corrections, blue denotes the absence of correction signals. The black runs have been excluded from our data set as no GPS signal was available.

A. Data Description

All sensor readings of a sequence are zipped into a single file named date_drive.zip, where date and drive are placeholders for the recording date and the sequence number. The directory structure is shown in Fig. 4. Besides the raw recordings ("raw data"), we also provide post-processed data ("synced data"), i.e., rectified and synchronized video streams, on the dataset website.

Timestamps are stored in timestamps.txt and per-frame sensor readings are provided in the corresponding data sub-folders. Each line in timestamps.txt is composed of the date and time in hours, minutes and seconds. As the Velodyne laser scanner has a rolling shutter, three timestamp files are provided for this sensor: one for the start position of a spin (timestamps_start.txt), one for the end position of a spin (timestamps_end.txt), and one for the time at which the laser scanner is facing forward and triggering the cameras (timestamps.txt). The data format in which each sensor stream is stored is as follows:

a) Images: Both color and grayscale images are stored with loss-less compression using 8-bit PNG files. The engine hood and the sky region have been cropped. To simplify working with the data, we also provide rectified images. The size of the images after rectification depends on the calibration parameters and is about 0.5 Mpx on average. The original images before rectification are available as well.

Fig. 3. Sensor Setup. This figure illustrates the dimensions and mounting positions of the sensors (red) with respect to the vehicle body. Heights above ground are marked in green and measured with respect to the road surface. Transformations between sensors are shown in blue.

date/
  date_drive/
    date_drive.zip
      image_x/          (x = {0,..,3})
        data/
          frame_number.png
        timestamps.txt
      oxts/
        data/
          frame_number.txt
        dataformat.txt
        timestamps.txt
      velodyne_points/
        data/
          frame_number.bin
        timestamps.txt
        timestamps_start.txt
        timestamps_end.txt
    date_drive_tracklets.zip
      tracklet_labels.xml
  date_calib.zip
    calib_cam_to_cam.txt
    calib_imu_to_velo.txt
    calib_velo_to_cam.txt

Fig. 4. Structure of the provided zip files and their location within a global file structure that stores all KITTI sequences. Here, date and drive are placeholders, and image_x refers to the 4 video camera streams.

b) OXTS (GPS/IMU): For each frame, we store 30 different GPS/IMU values in a text file: the geographic coordinates including altitude, global orientation, velocities, accelerations, angular rates, accuracies and satellite information. Accelerations and angular rates are both specified using two coordinate systems, one which is attached to the vehicle body (x, y, z) and one that is mapped to the tangent plane of the earth surface at that location (f, l, u). From time to time we encountered short (~1 second) communication outages with the OXTS device, for which we interpolated all values linearly and set the last 3 entries to -1 to indicate the missing information. More details are provided in dataformat.txt. Conversion utilities are provided in the development kit.
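To illustrate the per-frame OXTS text format described above, the following minimal MATLAB sketch reads a single frame. The date, drive and frame names are placeholders, and the exact meaning of each of the 30 entries should be taken from dataformat.txt rather than from this sketch.

    % Minimal MATLAB sketch (not part of the official development kit):
    % read one OXTS frame; the date, drive and frame names are placeholders.
    fname = fullfile('2011_09_26', '2011_09_26_drive_0001', 'oxts', 'data', '0000000000.txt');
    oxts  = load(fname);                            % 1 x 30 row vector of GPS/IMU values
    lat   = oxts(1); lon = oxts(2); alt = oxts(3);  % geographic coordinates (see dataformat.txt)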
c) Velodyne: For efficiency, the Velodyne scans are stored as floating point binaries that are easy to parse using the C++ or MATLAB code provided. Each point is stored with its (x, y, z) coordinate and an additional reflectance value (r). While the number of points per scan is not constant, on average each file/frame has a size of 1.9 MB, which corresponds to ~120,000 3D points and reflectance values. Note that the Velodyne laser scanner rotates continuously around its vertical axis (counter-clockwise), which can be taken into account using the timestamp files.
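The binary layout described above can be parsed in a few lines; the following minimal MATLAB sketch is one way to read a scan, with the file name used only as a placeholder.

    % Minimal MATLAB sketch for reading one Velodyne scan in the float32 binary
    % format described above; the file name is a placeholder.
    fid  = fopen('velodyne_points/data/0000000000.bin', 'rb');
    velo = fread(fid, [4 inf], 'single')';   % N x 4 matrix: x, y, z, reflectance
    fclose(fid);
    pts  = velo(:, 1:3);                     % 3D point coordinates in Velodyne frame
    refl = velo(:, 4);                       % reflectance values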
B. Annotations

For each dynamic object within the reference camera's field of view, we provide annotations in the form of 3D bounding box tracklets, represented in Velodyne coordinates. We define the classes 'Car', 'Van', 'Truck', 'Pedestrian', 'Person (sitting)', 'Cyclist', 'Tram' and 'Misc' (e.g., Trailers, Segways). The tracklets are stored in date_drive_tracklets.xml.

Fig. 7. Object Coordinates. This figure illustrates the coordinate system of the annotated 3D bounding boxes with respect to the coordinate system of the 3D Velodyne laser scanner. In z-direction, the object coordinate system is located at the bottom of the object (contact point with the supporting surface).

Fig. 6. Development Kit. Working with tracklets (top), Velodyne point clouds (bottom) and their projections onto the image plane is demonstrated in the MATLAB development kit which is available from the KITTI website.

Each object is assigned a class and its 3D size (height, width, length). For each frame, we provide the object's translation and rotation in 3D, as illustrated in Fig. 7. Note that we only provide the yaw angle, while the other two angles are assumed to be close to zero. Furthermore, the level of occlusion and truncation is specified. The development kit contains C++/MATLAB code for reading and writing tracklets using the boost::serialization library.

To give further insights into the properties of our dataset, we provide statistics for all sequences that contain annotated objects. The total number of objects and the object orientations for the two predominant classes, cars and pedestrians, are shown in Fig. 8. For each object class, the number of object labels per image and the length of the captured sequences are shown in Fig. 9. The egomotion of our platform recorded by the GPS/IMU system as well as statistics about the sequence length and the number of objects are shown in Fig. 10 for the whole dataset and in Fig. 11 per street category.

C. Development Kit

The raw data development kit provided on the KITTI website contains MATLAB demonstration code with C++ wrappers and a readme.txt file which gives further details. Here, we briefly discuss the most important features. Before running the scripts, the mex wrapper readTrackletsMex.cpp for reading tracklets into MATLAB structures and cell arrays needs to be built using the script make.m. It wraps the file tracklets.h from the cpp folder, which holds the tracklet object for serialization. This file can also be directly interfaced with when working in a C++ environment.

The script run_demoTracklets.m demonstrates how 3D bounding box tracklets can be read from the XML files and projected onto the image plane of the cameras. The projection of 3D Velodyne point clouds into the image plane is demonstrated in run_demoVelodyne.m. See Fig. 6 for an illustration.

The script run_demoVehiclePath.m shows how to read and display the 3D vehicle trajectory using the GPS/IMU data. It makes use of convertOxtsToPose(), which takes as input GPS/IMU measurements and outputs the 6D pose of the vehicle in Euclidean space. For this conversion we make use of the Mercator projection [10]

  x = s r (π lon / 180)                      (1)
  y = s r log( tan( π (90 + lat) / 360 ) )   (2)

with earth radius r ≈ 6378137 meters, scale s = cos(lat_0 π / 180), and (lat, lon) the geographic coordinates. Here, lat_0 denotes the latitude of the first frame's coordinates and uniquely determines the Mercator scale.

The function loadCalibrationCamToCam() can be used to read the intrinsic and extrinsic calibration parameters of the four video sensors. The other 3D rigid body transformations can be parsed with loadCalibrationRigid().
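As a concrete illustration of Eqs. (1) and (2), the following minimal MATLAB sketch converts geographic coordinates to metric Mercator coordinates. The function name is illustrative (it is not the development kit API), and the numeric earth radius is the WGS84 equatorial value assumed above.

    % Minimal MATLAB sketch of the Mercator conversion in Eqs. (1)-(2); the
    % function name is illustrative and not part of the development kit API.
    function [x, y] = latlonToMercator(lat, lon, scale)
      r = 6378137;                                       % earth radius in meters (assumed WGS84 value)
      x = scale * r * (pi * lon / 180);                  % Eq. (1)
      y = scale * r * log(tan(pi * (90 + lat) / 360));   % Eq. (2)
    end
    % The scale is fixed by the latitude of the first frame of a sequence:
    %   scale = cos(lat_0 * pi / 180)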
D. Benchmarks

In addition to the raw data, our KITTI website hosts evaluation benchmarks for several computer vision and robotic tasks such as stereo, optical flow, visual odometry, SLAM, 3D object detection and 3D object tracking. For details about the benchmarks and evaluation metrics we refer the reader to [8].

IV. SENSOR CALIBRATION

We took care that all sensors are carefully synchronized and calibrated. To avoid drift over time, we calibrated the sensors every day after our recordings. Note that even though the sensor setup has not been altered in between, numerical differences are possible. The coordinate systems are defined as illustrated in Fig. 1 and Fig. 3, i.e.:

- Camera: x = right, y = down, z = forward
- Velodyne: x = forward, y = left, z = up
Fig. 5. Examples from the KITTI dataset (City, Residential, Road, Campus, Person). This figure demonstrates the diversity in our dataset. The left color camera image is shown.

- GPS/IMU: x = forward, y = left, z = up

Notation: In the following, we write scalars in lower-case letters (a), vectors in bold lower-case (a) and matrices using bold-face capitals (A). 3D rigid-body transformations which take points from coordinate system a to coordinate system b will be denoted by T_a^b, with T for "transformation".

A. Synchronization

In order to synchronize the sensors, we use the timestamps of the Velodyne 3D laser scanner as a reference and consider each spin as a frame. We mounted a reed contact at the bottom of the continuously rotating scanner, triggering the cameras when facing forward. This minimizes the differences in the range and image observations caused by dynamic objects. Unfortunately, the GPS/IMU system cannot be synchronized that way. Instead, as it provides updates at 100 Hz, we collect the information with the closest timestamp to the laser scanner timestamp for a particular frame, resulting in a worst-case time difference of 5 ms between a GPS/IMU and a camera/Velodyne data package. Note that all timestamps are provided such that positioning information at any time can be easily obtained via interpolation. All timestamps have been recorded on our host computer using the system clock.

B. Camera Calibration

For calibrating the cameras intrinsically and extrinsically, we use the approach proposed in [11]. Note that all camera centers are aligned, i.e., they lie on the same x/y-plane. This is important as it allows us to rectify all images jointly. The calibration parameters for each day are stored in row-major order in calib_cam_to_cam.txt using the following notation:

  s^(i) ∈ N^2:        original image size (1392 x 512 pixels)
  K^(i) ∈ R^3x3:      calibration matrices (unrectified)
  d^(i) ∈ R^5:        distortion coefficients (unrectified)
  R^(i) ∈ R^3x3:      rotation from camera 0 to camera i
  t^(i) ∈ R^1x3:      translation from camera 0 to camera i
  s_rect^(i) ∈ N^2:   image size after rectification
  R_rect^(i) ∈ R^3x3: rectifying rotation matrix
  P_rect^(i) ∈ R^3x4: projection matrix after rectification

Here, i ∈ {0, 1, 2, 3} is the camera index, where 0 represents the left grayscale, 1 the right grayscale, 2 the left color and 3 the right color camera. Note that the variable definitions are compliant with the OpenCV library, which we used for warping the images. When working with the synchronized and rectified datasets only the variables with rect-subscript are relevant. Note that due to the pincushion distortion effect the images have been cropped such that the size of the rectified images is smaller than the original size of 1392 x 512 pixels.
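A minimal MATLAB sketch for pulling one of these entries out of calib_cam_to_cam.txt is given below; the key name P_rect_02 (the rectified projection matrix of the left color camera) is an assumption based on the released calibration files, not something specified in this paper.

    % Minimal MATLAB sketch (not the development kit's loadCalibrationCamToCam):
    % read the rectified projection matrix of camera 2 from calib_cam_to_cam.txt.
    % The key name 'P_rect_02' is an assumption based on the released files.
    txt    = fileread('calib_cam_to_cam.txt');
    tok    = regexp(txt, 'P_rect_02: ([^\n]+)', 'tokens', 'once');
    Prect2 = reshape(sscanf(tok{1}, '%f'), [4 3])';   % 12 row-major values -> 3 x 4 matrix
    % R_rect_00 and the remaining entries listed above can be parsed the same way.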
The projection of a 3D point x = (x, y, z, 1)^T in rectified (rotated) camera coordinates to a point y = (u, v, 1)^T in the i-th camera image is given as

  y = P_rect^(i) x                                              (3)

with the i-th projection matrix

  P_rect^(i) = [ f_u^(i)   0         c_u^(i)   -f_u^(i) b_x^(i)
                 0         f_v^(i)   c_v^(i)    0
                 0         0         1          0 ]             (4)

Here, b_x^(i) denotes the baseline (in meters) with respect to reference camera 0. Note that in order to project a 3D point x in reference camera coordinates to a point y on the i-th image plane, the rectifying rotation matrix of the reference camera R_rect^(0) must be considered as well:

  y = P_rect^(i) R_rect^(0) x                                   (5)

Here, R_rect^(0) has been expanded into a 4 x 4 matrix by appending a fourth zero-row and column, and setting R_rect^(0)(4,4) = 1.

C. Velodyne and IMU Calibration

We have registered the Velodyne laser scanner with respect to the reference camera coordinate system (camera 0) by initializing the rigid body transformation using [11]. Next, we optimized an error criterion based on the Euclidean distance of 50 manually selected correspondences and a robust measure on the disparity error with respect to the 3 top performing stereo methods in the KITTI stereo benchmark [8]. The optimization was carried out using Metropolis-Hastings sampling.

The rigid body transformation from Velodyne coordinates to camera coordinates is given in calib_velo_to_cam.txt:

  R_velo^cam ∈ R^3x3: rotation matrix (Velodyne to camera)
  t_velo^cam ∈ R^1x3: translation vector (Velodyne to camera)

Using

  T_velo^cam = [ R_velo^cam  t_velo^cam
                 0           1          ]                       (6)

a 3D point x in Velodyne coordinates gets projected to a point y in the i-th camera image as

  y = P_rect^(i) R_rect^(0) T_velo^cam x                        (7)

For registering the IMU/GPS with respect to the Velodyne laser scanner, we first recorded a sequence with an '∞'-loop and registered the (untwisted) point clouds using the Point-to-Plane ICP algorithm. Given two trajectories, this problem corresponds to the well-known hand-eye calibration problem, which can be solved using standard tools [12]. The rotation matrix R_imu^velo and the translation vector t_imu^velo are stored in calib_imu_to_velo.txt. A 3D point x in IMU/GPS coordinates gets projected to a point y in the i-th image as

  y = P_rect^(i) R_rect^(0) T_velo^cam T_imu^velo x             (8)
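To make the projection chain of Eqs. (6) and (7) concrete, the following minimal MATLAB sketch projects a Velodyne scan into the image of camera 2. The variable names are illustrative, and the calibration matrices are assumed to have been read from the files described above.

    % Minimal MATLAB sketch of Eq. (7); variable names are illustrative.
    % Assumes Prect2 (3x4) and Rrect0 (3x3) from calib_cam_to_cam.txt, Rcv (3x3)
    % and tcv (3x1) from calib_velo_to_cam.txt, and an N x 4 Velodyne scan 'velo'.
    Tcv = [Rcv tcv; 0 0 0 1];                    % T_velo^cam as a 4 x 4 rigid transform, Eq. (6)
    R0  = eye(4); R0(1:3,1:3) = Rrect0;          % R_rect^(0) expanded to 4 x 4
    X   = [velo(:,1:3) ones(size(velo,1),1)]';   % homogeneous 3D points, 4 x N
    Y   = Prect2 * R0 * Tcv * X;                 % Eq. (7): project into the image of camera 2
    u   = Y(1,:) ./ Y(3,:);                      % pixel coordinates
    v   = Y(2,:) ./ Y(3,:);
    infront = Y(3,:) > 0;                        % keep only points in front of the camera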
Fig. 8. Object Occurrence and Orientation Statistics of our Dataset. This figure shows the different types of objects occurring in our sequences (top) and the orientation histograms (bottom) for the two most predominant categories, cars and pedestrians.

V. SUMMARY AND FUTURE WORK

In this paper, we have presented a calibrated, synchronized and rectified autonomous driving dataset capturing a wide range of interesting scenarios. We believe that this dataset will be highly useful in many areas of robotics and computer vision. In the future we plan on expanding the set of available sequences by adding additional 3D object labels for currently unlabeled sequences and by recording new sequences, for example in difficult lighting situations such as at night, in tunnels, or in the presence of fog or rain. Furthermore, we plan on extending our benchmark suite by novel challenges. In particular, we will provide pixel-accurate semantic labels for many of the sequences.

REFERENCES

[1] G. Singh and J. Kosecka, "Acquiring semantics induced topology in urban environments," in ICRA, 2012.
[2] R. Paul and P. Newman, "FAB-MAP 3D: Topological mapping with spatial and visual appearance," in ICRA, 2010.
[3] C. Wojek, S. Walk, S. Roth, K. Schindler, and B. Schiele, "Monocular visual scene understanding: Understanding multi-object traffic scenes," PAMI, 2012.
[4] D. Pfeiffer and U. Franke, "Efficient representation of traffic scenes by means of dynamic stixels," in IV, 2010.
[5] A. Geiger, M. Lauer, and R. Urtasun, "A generative model for 3d urban scene understanding from movable platforms," in CVPR, 2011.
[6] A. Geiger, C. Wojek, and R. Urtasun, "Joint 3d estimation of objects and scene layout," in NIPS, 2011.
[7] M. A. Brubaker, A. Geiger, and R. Urtasun, "Lost! Leveraging the crowd for probabilistic visual self-localization," in CVPR, 2013.
[8] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in CVPR, 2012.
[9] M. Goebl and G. Faerber, "A real-time-capable hard- and software architecture for joint image and knowledge processing in cognitive automobiles," in IV, 2007.
[10] P. Osborne, "The Mercator projections," 2008. [Online].
[11] A. Geiger, F. Moosmann, Ö. Car, and B. Schuster, "A toolbox for automatic calibration of range and camera sensors using a single shot," in ICRA, 2012.
[12] R. Horaud and F. Dornaika, "Hand-eye calibration," IJRR, vol. 14, no. 3, 1995.
Fig. 9. Number of Object Labels per Class and Image. This figure shows how often an object occurs in an image. Since our labeling efforts focused on cars and pedestrians, these are the most predominant classes here.

Fig. 10. Egomotion, Sequence Count and Length. This figure shows (from left to right) the egomotion (velocity and acceleration) of our recording platform for the whole dataset. Note that we excluded sequences with a purely static observer from these statistics. The length of the available sequences is shown as a histogram counting the number of frames per sequence. The rightmost figure shows the number of frames/images per scene category.

Fig. 11. Velocities, Accelerations and Objects per Scene Category. For each scene category we show the acceleration and velocity of the mobile platform as well as the number of labels and objects per class. Note that sequences with a purely static observer have been excluded from the velocity and acceleration histograms as they do not represent natural driving behavior. The category Person has been recorded from a static observer.
