REAL TIME 3D FUSION OF IMAGERY AND MOBILE LIDAR

Paul Mrstik, Vice President Technology
Kresimir Kusevic, R&D Engineer
Terrapoint Inc.
140-1 Antares Dr.
Ottawa, Ontario K2E 8C4 Canada
paul.mrstik@terrapoint.com
kresimir.kusevic@terrapoint.com

ABSTRACT

Acquiring LiDAR data and processing it to a point cloud can be a challenge in some environments, where the scan itself can be complicated by shadowing and reflections and positioning reliability might be degraded by obstructions and multipath. Adding attribute information to each point in the cloud increases the value of the data by enabling the extraction of more information from it, but at the same time increases the complexity of acquisition and processing. A LiDAR point cloud acquired by a mobile ground-based acquisition system is enhanced when fused with airborne LiDAR point clouds, static point clouds, RGB color and other information such as hyperspectral data. This article discusses the fusion of these data types collected in challenging environments, focusing on RGB color.

INTRODUCTION

Terrapoint deploys a mobile terrestrial laser scanning system, TITAN, which can be mounted on a passenger vehicle or small watercraft. The TITAN platform includes LiDAR (Light Detection And Ranging) scanners, digital video and a high-accuracy GPS/INS system. Data acquisition can occur at normal traffic speeds. TITAN has been in production for over a year, successfully completing over 60 projects to date. It has proven accurate enough to model paved surfaces to predict water pooling that could cause hydroplaning, and it is an effective tool for mapping highway corridors and infrastructure for engineering purposes. Terrapoint's development of TITAN grew out of an experimental project started early in 2002, when the company (then called Mosaic Mapping Systems Inc.) successfully converted an airborne LiDAR system it had developed into a platform strapped to a van.
Then in mid-2003 Terrapoint was approached to perform a helicopter LiDAR survey of Highway 1 in Afghanistan between Herat and Kandahar. When a suitable helicopter could not be found in Afghanistan, the truck-mounted system was resurrected (Figure 1). Details of the Afghanistan survey can be found in [1]. Based on the success of the system in Afghanistan, and on successful demonstrations for customers in North America, Terrapoint decided to build a next-generation kinematic terrestrial LiDAR system, TITAN [2]. GPS, INS and four laser scanners are configured and spatially oriented to scan the surroundings in an effective 360-degree swath with one pass of the survey vehicle at traffic speed. Geo-referenced digital frame video imagery and/or line-scan RGB imagery is collected at the same time. The sensor pod is ideally mounted on an elevator lift operated from inside the survey vehicle.

Figure 1. On the road in Afghanistan.

THE CASE FOR FUSION

The use of LiDAR intensity as an attribute added to the position of a point in the point cloud is generally taken for granted with airborne LiDAR data. Intensity data is even more useful with data acquired by terrestrial mobile LiDAR, because so much of that data is collected over featureless surfaces such as the road around the vehicle (Figure 1), or over very detailed surfaces such as building walls on either side of the vehicle. Intensity data has also proved useful in visually presenting 3-D point clouds to people not used to them, because the intensity adds a sense of realism to the picture presented by the cloud.

Figure 2. Point cloud shaded by intensity.

The value of intensity data as a point attribute makes one wonder what other data could be added or fused with the cloud. The first candidate that comes to mind is airborne LiDAR data. When this data is combined or fused with mobile terrestrial LiDAR data, a whole new dimension is added: buildings acquire roofs, shadowing is reduced and coverage is increased (Figure 3). Static terrestrial LiDAR is another obvious candidate for fusion when one wants to place minute detail from a small area in the context of a larger, less detailed data set. One example would be a rock-fall scan acquired by a terrestrial static LiDAR system superimposed on a road scan done by a terrestrial mobile LiDAR system.

Figure 3. Mobile terrestrial data (left). Mobile terrestrial data fused with airborne data (right).

Although laser intensity data is useful, it was apparent that adding color to each point in the cloud, whether airborne or terrestrial, would make the point cloud display more realistic as well as add valuable information to the point data, a feature which could help in the identification and extraction of object and vector information.
This is more obvious for terrestrial scans than for airborne scans, because terrestrial scans tend to capture complex, detailed textured surfaces such as building facades, signs, etc. Extracting color from digital frame imagery and assigning it to 3-D coordinates in the cloud can be problematic, partly because of the processing required and partly because of the difficulty of accurately assigning a color pixel from the image to the correct point in the cloud. For our purposes it seemed best to use a line-scan camera, so that each color scan could be aligned with a laser scan and each laser point would receive the correct color attribute. In this way the color attribute could be associated with each point in the cloud in real time.
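Conceptually, once a laser scan line is paired with its synchronized camera scan line, each laser point's scan angle maps to a pixel column in that line. The sketch below illustrates the idea with a simple linear angle-to-column model; the function, field names and geometry are assumptions for illustration, not Terrapoint's actual implementation:

```python
import numpy as np

def colorize_scan_line(scan_angles_deg, cam_line, cam_fov_deg):
    """Map each laser point's scan angle to a pixel in one camera scan line.

    scan_angles_deg : (N,) laser scan angles, 0 at the swath centre
    cam_line        : (W, 3) uint8 RGB pixels of the matching camera line
    cam_fov_deg     : camera field of view (slightly wider than the swath)
    """
    width = cam_line.shape[0]
    # Linear angle-to-column mapping; a calibrated camera model would
    # replace this in a real system.
    cols = (scan_angles_deg / cam_fov_deg + 0.5) * (width - 1)
    cols = np.clip(np.round(cols).astype(int), 0, width - 1)
    return cam_line[cols]

# Toy data: 5 laser points across a 90-degree swath, 100-pixel camera line.
angles = np.array([-45.0, -20.0, 0.0, 20.0, 45.0])
line = np.zeros((100, 3), dtype=np.uint8)
line[:, 0] = np.arange(100)          # red ramp so columns are identifiable
rgb = colorize_scan_line(angles, line, cam_fov_deg=100.0)
print(rgb[:, 0])                     # red values picked for each point
```

Because the camera field of view is slightly wider than the laser swath, every scan angle falls inside the image line, as the paper's optics choice ensures.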

LIDAR AND LINE SCAN CAMERA SUBSYSTEM

TITAN was originally deployed with digital frame video cameras. Following through on the idea that adding attributes to each point in the cloud would make for richer information extraction, Terrapoint decided to add RGB color as an attribute. To accomplish this, a line-scan color camera was attached to each laser and then synchronized and aligned with respect to the laser so that each laser scan line was covered by a corresponding color scan line. Camera optics were chosen so that each line-scan camera would cover a field of view slightly wider than the nominal swath width of the scanner it was mounted on. To make the RGB referencing algorithm simpler and faster, the camera mounting location was chosen to be in the laser scanning plane and as close as possible to the LiDAR coordinate reference center. This eliminated the distance-dependent up (z) parallax between the two sensors, leaving only a side (x) parallax to be removed by software. A special mount was designed to house each camera while still allowing for system calibration adjustment and an RGB-to-laser point referencing calculation. The same mounts could accommodate digital frame cameras mounted for optimal video acquisition, so that line-scan or frame imagery could be selected or even combined through the addition of cameras and mounts. A microcontroller-controlled network was designed and built to control the various cameras and configurations as well as to interface with the logging computer and GPS/INS systems. As a result, LiDAR scanners and cameras are all synchronized to GPS time, enabling us to uniquely reference the RGB attribute from the camera to the laser points in the cloud. Once mounted in the line-scan configuration, the camera's relative exterior orientation with respect to each laser was rectified using the four degrees of freedom in the mounting bracket.
A distance-to-target-dependent chip-shift correction of the line-scan camera was compensated for using software correction functions.

Body Frames

Two local Cartesian body frames are defined: the laser body frame and the line-scan camera body frame. The laser body frame origin is at the laser's center of scanning L, with the Y-axis Ly pointing straight forward in the direction of a zero scan angle (Figure 4, plan view). The Z-axis Lz is perpendicular to the scanning plane, and the X-axis Lx is perpendicular to the other two. The line-scan camera body frame has its origin at the camera's perspective center C, with the Y-axis Cy coincident with the camera's optical axis. The Z-axis Cz is perpendicular to the scanning plane (Figure 4, side view), and the X-axis Cx completes the Cartesian axis triplet.

Figure 4. Views (plan, side and front) representing the relationship between the camera body frame and the laser body frame.
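The mounting geometry explains why only the side (x) parallax survives: with the camera in the laser scanning plane, the residual offset is a small baseline along Lx, and the resulting angular parallax shrinks with target distance. A short illustrative calculation (the 10 cm baseline is an assumed value, not TITAN's actual offset):

```python
import math

# Assumed residual baseline between the camera perspective centre and the
# laser scanning centre, along the Lx axis (metres). Illustrative only.
baseline = 0.10

# Angular x-parallax seen at various target distances: alpha = atan(b / d).
# The distance dependence is why a software correction is needed, while the
# eliminated z-offset would otherwise have produced a second such term.
for d in (2.0, 5.0, 10.0, 20.0, 50.0):
    alpha_mrad = math.atan2(baseline, d) * 1000.0
    print(f"{d:5.1f} m -> {alpha_mrad:6.2f} mrad")
```

The parallax falls roughly as 1/distance, which is the behaviour the chip-shift correction discussed below has to model.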

Mounting Bracket and Degrees of Freedom

The exterior orientation parameters of the camera with respect to the IMU are three linear offsets (X, Y, Z) and three rotations (Omega, Phi, Kappa). Linear offsets can be calculated by combining a GPS/INS solution with the known linear offsets (lever arms) between the IMU and the camera. Ideally, the rotation parameters to the IMU should be made the same for both laser and camera. To be able to rectify and then fix the parameters of the camera's exterior orientation, a camera mounting bracket with at least four degrees of freedom was installed. For exterior orientation rectification, the mount permits all three rotations around the camera body coordinate axes as well as a linear translation along the z-axis. This ensures that after adjustment of the mount both scanning planes will coincide, eliminating any distance-to-target-dependent z-parallax and leaving only x-parallax, which can be more easily modeled and accounted for in post-processing.

LINE SCAN CAMERA CALIBRATION

The LiDAR/line-scan camera subsystem calibration is done by aligning the laser scanning plane with the line-scan camera scanning plane and synchronizing the camera frames and laser scans to the same time frame.

Scan Line Rectification

The laser scan plane is defined by the center of the pulse-reflecting rotating mirror and the scanned points in a single scan line. The line-scan camera's scanning plane is defined by the points scanned in object space and the camera's perspective center. To rectify the system, both planes must be made to coincide. This is done by rotating the camera around its three body axes and adjusting the z linear offset of the mounting bracket while scanning a flat wall with easily identifiable targets set up along a straight line. The heading angle has to be adjusted by rotating the camera about the Z-axis so that the entire region of interest of the camera's field of view covers the laser field of view.
This can be verified by sighting the target points on the wall with both sensors simultaneously, first from a minimum scanning distance and then from an optimum scanning distance. Once the heading angle has been adjusted, a roll adjustment can be made by rotating the camera around its Y-axis such that camera and laser scans are parallel when the sensor is located at an optimum scanning distance from the target wall. Roll and pitch are iteratively adjusted until the targets sighted by the laser appear in the camera scan, satisfying the parallelism condition. Lastly, the pitch and z-axis offset are adjusted iteratively until the camera and laser scanning planes are coplanar.

Chip-shift Correction

Although the laser and camera are aligned so that both scanning planes are coplanar, some x-parallax remains due to the horizontal linear offset between the camera perspective center and the laser center. This parallax changes the correspondence between line-scan camera pixels and laser points in a scan line as a function of distance to the target. By observing several sets of data at different distances to the target wall, it is possible to model a distance-dependent chip-shift correction.

APPLICATION IN AN URBAN ENVIRONMENT

Terrapoint has carried out numerous projects in rural and corridor environments, as well as several projects undertaken in downtown urban environments where obstructions and traffic add to the challenge of data acquisition. In 2007, and again in 2008, Terrapoint was contracted to scan a downtown core using TITAN as well as airborne LiDAR. LiDAR data with intensity and RGB color was required in point cloud form as one of the deliverables. Using the system already described, Terrapoint was able to acquire the color information and the LiDAR point cloud, fusing the two together with a minimum of post-processing. A sample result is shown in Figure 5.
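The distance-dependent chip-shift correction described in the calibration discussion above amounts to fitting a curve to pixel-offset observations taken at several wall distances. A minimal sketch with synthetic numbers (the paper does not give Terrapoint's actual correction function or measured values):

```python
import numpy as np

# Synthetic observations: x-parallax pixel shift measured against a target
# wall at several distances. With a fixed baseline between camera and laser
# centres, the shift behaves roughly like (constant / distance) in pixels.
distances = np.array([2.0, 4.0, 6.0, 10.0, 15.0, 25.0])   # metres
shift_px = 60.0 / distances                                # toy measurements

# Fit the shift as a polynomial in 1/distance, which captures the
# parallax form with a low-order model.
coeffs = np.polyfit(1.0 / distances, shift_px, deg=1)

def chip_shift(d):
    """Predicted pixel shift to apply at target distance d (metres)."""
    return np.polyval(coeffs, 1.0 / d)

print(chip_shift(8.0))   # correction at an unobserved distance
```

Once fitted, the correction can be applied per point using the laser-measured range, shifting the pixel-column lookup before the RGB attribute is assigned.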
The accuracy of the coloring process is at the 2-pixel level in the imagery, which translates into about 2 cm at a 10-meter distance from the sensor. Accuracy checks were done by comparing the color data against LiDAR intensity data in the point cloud.
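The quoted figure can be checked with a one-line calculation: 2 pixels corresponding to about 2 cm at 10 m implies roughly 1 milliradian of angular footprint per pixel, so the coloring error grows linearly with range. A sketch of the arithmetic (derived from the paper's numbers, not a stated sensor specification):

```python
# Back out the per-pixel angular footprint from the quoted accuracy:
# a 2-pixel error corresponds to ~2 cm at 10 m range.
err_px = 2
err_m = 0.02
rng_m = 10.0
ifov_rad = err_m / (rng_m * err_px)        # angular size of one pixel
print(ifov_rad)                            # ~0.001 rad (1 mrad) per pixel

# The same 2-pixel error projected to other ranges:
for r in (5.0, 10.0, 20.0):
    print(r, err_px * ifov_rad * r)        # lateral error in metres
```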

Figure 5. TITAN colored point cloud.

One advantage of the line-scan camera approach is the color redundancy available due to the relative scan rates of the LiDAR and the camera. For each scan that the LiDAR makes as the vehicle moves, the camera scans many lines of color. After making a mesh out of the LiDAR data, the redundant color can then be mapped onto the mesh. The end effect is that the color adds to the texture information in the point cloud. Examples are shown in Figures 6 and 7.

Figure 6. Colored point cloud (left). All color information fused to LiDAR mesh (right).
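The redundancy is simply the ratio of the camera's line rate to the LiDAR's scan rate. With illustrative rates (assumed values; the paper does not quote TITAN's actual figures):

```python
# Illustrative sensor rates - assumed values, not TITAN's actual specs.
camera_line_rate_hz = 10_000.0   # line-scan camera lines per second
lidar_scan_rate_hz = 100.0       # full laser scan lines per second

# Colour lines captured per laser scan line: the redundancy available
# for texturing a mesh built from the LiDAR points.
redundancy = camera_line_rate_hz / lidar_scan_rate_hz
print(redundancy)   # -> 100.0 colour lines per laser scan
```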

Figure 7. Sign represented as a colored LiDAR point cloud (left). True color fusion of the same sign (right).

CONCLUSIONS

The usual output from most commercial off-the-shelf LiDAR scanners does not include an integrated color attribute. Color is often provided by using a digital frame camera and photogrammetric techniques to colorize the LiDAR data. For mobile LiDAR applications this method can introduce problems, including shadowing and occlusions where the color data and the LiDAR point cloud data are not aligned. Terrapoint developed a mount system that adds line-scan color capability to a 2D laser scanner. A boresight and calibration technique was also developed so that the color information from the camera line scans could be assigned to laser points in corresponding laser scan lines. The techniques developed were used on several production projects in an urban environment with excellent results.

REFERENCES

[1] Newby, S., and P. Mrstik, 2005. Lidar on the Level in Afghanistan, GPS World, Vol. 16, No. 7, pp. 16-22.
[2] Glennie, C., K. Kusevic, and P. Mrstik, 2006. Performance Analysis of a Kinematic Terrestrial Lidar Scanning System, MAPPS/ASPRS Fall Conference, November 6-10, San Antonio, TX.
[3] Glennie, C., 2007. Rigorous 3D error analysis of kinematic scanning LIDAR systems, Journal of Applied Geodesy, Vol. 1, pp. 147-157.