Geometry of vertical and oblique image combinations

K. Jacobsen
Leibniz University Hannover, Germany

Keywords: Pictometry, MultiVision, calibration, block adjustment, Virtual Earth

ABSTRACT: For city planning and security services the combination of vertical and oblique images, as provided by Pictometry and MultiVision, has recently become very popular. For larger cities a high number of such image combinations are even included in Microsoft Virtual Earth. The geo-reference of the image combinations is usually based on direct sensor orientation, the combination of GPS and inertial measurement units. The images are taken by camera systems such as the Track Air MIDAS, equipped with 5 Canon EOS cameras, each with 4992 x 3328 pixels. The Canon EOS is not really a metric camera; its inner orientation and the system orientation are only stable during one photo flight. The images are also used for simple measurement purposes as well as for the generation of 3D city models. If object information is required without a disturbing loss of accuracy compared with the direct sensor orientation, a system calibration has to be carried out on the day of the photo flight. The method of calibration and the achieved accuracy are described, as well as the characteristics and potential of the image combinations.

1 INTRODUCTION

The interpretation and understanding of vertical aerial or space images requires experience. For untrained persons the interpretation of terrestrial or oblique images is easier, because such views are closer to the everyday experience of looking at objects. In addition, facades are usually not, or only barely, visible in vertical images. The combination of vertical and oblique images therefore opened up completely new applications for photogrammetry. Such images, like those from the Pictometry system with one vertical and 4 oblique cameras, are widely used by Microsoft Virtual Earth as Bird's Eye views (Figure 1). In Western Europe the Blom Group is imaging all cities with a population larger than 50 000, that is approximately 900 cities covering together approximately 100 000 km²; 12 Pictometry camera systems are in use for this. The competitor MultiVision uses a similar system. The commercial application of the combination of vertical and oblique images, as generated by the Pictometry and MultiVision systems, is dominated by visual inspection and simple measurement of distances for public safety and planning purposes. The use together with geoinformation systems requires the knowledge of the exterior orientation of every image together with a digital elevation model (DEM). Block adjustments are too time-consuming and complex, so the exterior orientations are based on direct sensor orientation, the combined use of inertial measurement units (IMU) and relative kinematic GPS positioning.

In several countries networks of permanent GPS reference stations are available; in addition, satellite-based reference services such as OMNISTAR and various off-line reference solutions, such as JPL Final, provide the GPS reference with sub-meter accuracy. Especially for planning purposes not only the visualization is important; some metric information is also needed. This requires a calibration of the imaging system and of its relation to the IMU and the GPS antenna.

Figure 1: Main building of Leibniz University Hannover in Microsoft Virtual Earth, based on the Pictometry system, taken by the Blom Group; oblique view, 13 cm GSD

The use of oblique aerial images or of the combination of vertical and oblique images is not a new invention. Prior to 1938, single and multiple lens cameras were produced in the USA, UK, Germany, France, Italy and Switzerland for oblique or combined configurations (Manual of Photogrammetry, 2nd edition; Moffit 1967) (figure 2). Scheimpflug invented an eight-lens camera in 1900, viewing obliquely into 8 directions; Aschenbrenner developed in 1920 a 9-lens camera with 8 oblique views and a nadir view, similar to the USGS 9-lens camera (figure 2). In military reconnaissance oblique images have been in use for a long time. They combine the advantage of a view onto facades and other vertical objects with imaging from a distance. The orientation and calibration of such systems, partially with extremely long focal lengths, has been known for a long time (Jacobsen 1988). The combination of a nadir, 2 low and 2 high oblique images taken by the Z/I Imaging KS-153 can be seen in figure 3. Today oblique images can even be taken by very high resolution satellites like WorldView-1 (fig. 4).

Figure 2: historic multiple lens and multiple camera arrangements for oblique aerial images (above: Aschenbrenner's 9-lens camera, 1920; centre and below: USGS 9-lens camera; Trimetrogon arrangement; Scheimpflug's 8-lens camera, 1900; Zeiss oblique camera arrangements)

Figure 3: nadir, low and high oblique images of the reconnaissance camera Z/I Imaging KS-153, viewing from horizon to horizon

Figure 4: Dolmabahce palace, Istanbul, in a WorldView-1 image with 29.3° incidence angle

Today the combination of nadir and oblique images for civil applications is dominated by digital medium format cameras, allowing smaller systems which can fit into any standard aerial camera cone. Different camera combinations are in use. A typical camera system for such an application is the MIDAS camera system of Track Air (The Netherlands). It combines 5 off-the-shelf Canon EOS small format cameras with a focal length of 23.8 mm for the nadir view and 51 mm for the 4 oblique views (figure 5). The vertical camera has a field of view of 71.92° x 51.69° and the oblique cameras 38.8° x 26.4°, viewing at approximately 45° nadir angle and covering nadir angles from 32° up to 58° (for the footprint arrangement see figures 7 and 8). All 5 cameras have a CCD array of 4992 x 3328 pixels with 7.2 µm pixel size. The Canon EOS cameras of the MIDAS system have a limited geometric stability, requiring more frequent checks of the calibration.

Figure 5: Track Air MIDAS camera system, a combination of 5 Canon EOS cameras

2 BUNDLE ORIENTATION OF VERTICAL AND OBLIQUE IMAGES

A data set combining standard aerial wide angle photos (153 mm focal length) and digital medium format images has been oriented by bundle block adjustment. The 3 different digital cameras have the same CCD array as the Canon EOS described above, and focal lengths of 84.3 mm, 169.5 mm and 169.8 mm. The oblique images have nadir angles in the range of 50°. The image scale for the vertical images is approximately 1:4600 (9 cm ground sampling distance (GSD), based on 20 µm pixel size), and for the centre of the oblique images 1:9400 (7 cm GSD), 1:9900 (7 cm GSD) and 1:26000 (19 cm GSD).
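The field-of-view and GSD figures quoted above follow directly from the sensor and lens parameters. The following minimal Python sketch (not part of the original paper) illustrates the two relations used: the field of view follows from the CCD size and the focal length, and the GSD is the pixel size multiplied by the image scale number; the example values are taken from the text.

    # Minimal sketch (not from the original paper) of the relations between
    # focal length, CCD size, image scale and ground sampling distance (GSD).
    import math

    def field_of_view_deg(ccd_size_mm, focal_length_mm):
        """Full field of view for a given CCD dimension and focal length."""
        return 2.0 * math.degrees(math.atan(ccd_size_mm / (2.0 * focal_length_mm)))

    def gsd_m(pixel_size_um, scale_number):
        """GSD = pixel size multiplied by the image scale number."""
        return pixel_size_um * 1e-6 * scale_number

    # MIDAS oblique sub-camera: 4992 x 3328 pixels of 7.2 um, focal length about 51 mm
    ccd_x = 4992 * 7.2e-3   # CCD size in mm (long side)
    ccd_y = 3328 * 7.2e-3   # CCD size in mm (short side)
    print(field_of_view_deg(ccd_x, 51.0), field_of_view_deg(ccd_y, 51.0))  # close to the quoted 38.8 x 26.4 degrees

    # Image scale 1:4600 with the 20 um pixel size of the scanned wide angle photos
    print(gsd_m(20.0, 4600))   # about 0.09 m, i.e. the quoted 9 cm GSD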

Because of the unusual arrangement and the different types of images, no automatic block adjustment was possible with standard commercial software for automatic image matching, so the tie points had to be measured manually. Only 13 ground control points and no direct sensor orientation were used, but for such a block with an extreme number of ties, with up to 16 images per ground point, this is sufficient. The bundle block adjustment did not cause any problems; only the number of blunders was higher than usual, which was expected because of the quite different image scales, the different view directions and the combination of digitized analogue photos with digital images. The sigma0 value of 20 µm was satisfying for the required purpose. At the ground control points, root mean square errors of 2 cm for the horizontal components and 7 cm for the height have been reached. The orientation by traditional bundle adjustment is therefore possible, but the manual measurement of tie points is time-consuming. That means for such an image configuration the direct sensor orientation (use of GPS + inertial measurement system (IMU)) is a must. Only for the calibration is a manual measurement of the tie and control points of such a block justified, and after the first initial calibration, the preceding calibration can be used to support the measurement.

Figure 6: footprints of a combination of vertical and oblique images; colour of footprint = camera

3 CALIBRATION OF TRACK AIR MIDAS CAMERA SYSTEM

A larger area has been flown with the Track Air MIDAS camera system, organized by MultiVision. The camera with the nadir view has a focal length of 23.8 mm and the oblique cameras approximately 51 mm. This corresponds for the vertical view to 17 cm GSD and for the oblique images to 10 cm x 11 cm up to 15 cm x 29 cm. For oblique images the GSD in the view direction is the GSD across the view direction divided by the cosine of the nadir angle, so the ground pixel is not square. 3 flight lines, each with 4 vertical images, and the corresponding oblique views have been used for the bundle block adjustment. This should lead to a configuration as shown in figure 8c, but some of the oblique images are not well connected with the block and therefore do not support the calibration significantly; for this reason some of these images have not been used (figure 7a). The centre flight line has been flown from north to south, while the other two have been flown in the opposite direction, improving the calibration configuration. The data acquisition was made with LPS. LPS had no problems matching the vertical images (figure 8a) and overlapping images in image space, but for the oblique images the matching is limited to the connection of 2 neighbouring images. An automatic image matching of the whole test block failed, even when supported by the initial image orientations of the first bundle block adjustment. LPS has not been developed for automatic aerial triangulation of such a configuration, starting with the problem of different focal lengths.
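The GSD relation quoted at the beginning of this section for the oblique views can be made explicit with a short sketch. The flying height used below is not given in the paper; it is back-computed here from the 17 cm vertical GSD and the 23.8 mm focal length, so the resulting numbers are approximations only.

    # Sketch of the oblique GSD geometry described above. The flying height is an
    # assumption, back-computed from the quoted 17 cm vertical GSD, not a value
    # stated in the paper.
    import math

    PIXEL = 7.2e-6      # pixel size [m]
    F_OBLIQUE = 0.051   # focal length of the oblique sub-cameras [m]
    F_NADIR = 0.0238    # focal length of the nadir sub-camera [m]

    flying_height = 0.17 * F_NADIR / PIXEL   # roughly 560 m above ground (assumption)

    def oblique_gsd(nadir_angle_deg, h=flying_height):
        """GSD across and along the view direction at the given nadir angle."""
        nadir = math.radians(nadir_angle_deg)
        slant_range = h / math.cos(nadir)             # camera-to-ground distance along the view axis
        gsd_across = PIXEL * slant_range / F_OBLIQUE  # across the view direction
        gsd_along = gsd_across / math.cos(nadir)      # stretched by 1/cos(nadir angle)
        return gsd_across, gsd_along

    print(oblique_gsd(32.0))   # roughly 0.09 m x 0.11 m at the near edge of the oblique image
    print(oblique_gsd(58.0))   # roughly 0.15 m x 0.28 m at the far edge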

A combination of manual pointing and matching of 2 images was therefore used. The automatic matching of the different overlapping image combinations often leads to the extraction of corresponding object points under different point names. The Hannover program system BLUH used for the bundle block adjustment is able to identify such points based on their similar object coordinates and can rename corresponding points to the same point name, leading to a better block tie. Some extensions of the program system BLUH were necessary, starting with the automatic exclusion of object points that are located only in images taken from the same projection centre; such points do not allow the computation of object coordinates. As control information an orthoimage with 1 m GSD and a DEM were used. For the vertical images this led to a sigma0 of 11 µm and root mean square discrepancies at the control points of 16 cm for X and Y and 1.6 m for Z (table 1).

Figure 7a: footprints of the calibration block, 3 flight lines, colour = camera
Figure 7b: footprints together with tie points

By the adjustment of all images of the test block (figure 7) the system geometry and the camera geometry have been determined. The dominating systematic image errors are the radial symmetric components with values up to 100 µm, which is usual for the lenses used. The radial symmetric distortions, as well as the overall systematic image errors, are similar for the oblique sub-cameras, which use the same type of lens system; the vertical sub-camera, equipped with a different lens system, deviates from them. The systematic image errors, in other words the difference between the mathematical model of perspective geometry and the real image geometry, can either be respected in the application software for handling the vertical and oblique images, or be used for resampling the images to strict perspective geometry. The sigma0 (accuracy of the image coordinates) of the vertical images of 11 µm corresponds to 1.4 pixels. With the average image scale of 1:11 000 it corresponds to 12 cm on the ground, not far from the root mean square discrepancies reached at the horizontal components of the control points. The larger discrepancies at the control point heights can only be explained by the limited accuracy of the DEM used for the determination of the control point heights. In an adjustment of all images together the sigma0 is larger, 33 µm, caused by the different view directions, but partially also by the limited accuracy of the control point heights (table 1). Supported by relative kinematic GPS positions of the projection centres, the inner orientation has also been improved. Of course there is a strong correlation between a shift of the GPS positions, especially in the Z-direction, and the inner orientation, but the block configuration, together with the constraint fixing the corresponding projection centres together, supports the determination of the inner orientation.
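The point renaming performed by BLUH, mentioned earlier in this section, can be illustrated with a simplified sketch: points whose adjusted object coordinates agree within a tolerance are treated as the same tie point. This is only an illustration of the principle, not the actual BLUH implementation; the tolerance and the point names are illustrative assumptions.

    # Simplified illustration (not the actual BLUH code) of merging tie points that
    # refer to the same object point: points whose preliminary object coordinates
    # agree within a tolerance are renamed to a common point name.
    def merge_tie_points(points, tol=0.5):
        """points: dict point_name -> (X, Y, Z); returns dict old_name -> merged_name.
        tol is an assumed tolerance in object units (here metres)."""
        merged = {}
        representatives = []   # (name, (X, Y, Z)) kept as cluster representatives
        for name, xyz in points.items():
            for rep_name, rep_xyz in representatives:
                if all(abs(a - b) <= tol for a, b in zip(xyz, rep_xyz)):
                    merged[name] = rep_name   # same object point: rename to the representative
                    break
            else:
                representatives.append((name, xyz))
                merged[name] = name
        return merged

    # Two automatically matched points with nearly identical object coordinates are merged:
    print(merge_tie_points({"A_12": (1000.2, 2000.1, 50.3),
                            "B_07": (1000.4, 2000.0, 50.5),
                            "C_99": (1200.0, 2100.0, 48.0)}))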

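The radial symmetric distortion discussed above, whether respected in the evaluation software or used for resampling to strict perspective geometry, amounts to shifting each image coordinate along its radius from the principal point. The paper does not state the distortion model or coefficients determined in the calibration, so the polynomial and values below are generic, purely illustrative assumptions.

    # Generic sketch of a radial symmetric distortion correction. The model and the
    # coefficients k1, k2 are illustrative assumptions; the calibrated values of the
    # MIDAS sub-cameras are not given in the paper.
    def correct_radial(x_mm, y_mm, k1=1.2e-5, k2=2.0e-9, x0=0.0, y0=0.0):
        """Correct image coordinates (in mm, relative to the principal point x0, y0)
        for a radial symmetric distortion dr = k1*r**3 + k2*r**5."""
        dx, dy = x_mm - x0, y_mm - y0
        r = (dx * dx + dy * dy) ** 0.5
        if r == 0.0:
            return x_mm, y_mm
        dr = k1 * r**3 + k2 * r**5            # radial shift in mm
        scale = dr / r
        return x_mm - dx * scale, y_mm - dy * scale   # shift the point along its radius

    # A point near the image corner (CCD about 35.9 mm x 24.0 mm) is shifted by roughly
    # 0.1 mm with these coefficients, i.e. the order of magnitude of the reported 100 um:
    print(correct_radial(17.0, 11.0))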
Figure 8a: only vertical images with tie points, colour coded according to the number of images per point
Figure 8b: footprints of one image combination
Figure 8c: maximal number of images for the test block

configuration              sigma0   RMSX / RMSY control points   RMSZ control points
only 12 vertical images    11 µm    15 cm                        160 cm
all images                 33 µm    38 cm                        122 cm
Table 1: accuracy of the block adjustment of the control block, 32 control points

Based on the inner orientation improved by the combined block adjustment with GPS coordinates of the projection centres, joining also the corresponding projection centres of the vertical and oblique images, the relation of the oblique cameras to the vertical camera has to be determined (table 2). The image orientations taken from each projection centre have to be rotated by multiplying their rotation matrices by the inverse rotation matrix of the vertical image; this leads to rotation values of 0 for all 3 rotations of the nadir image. The averaged relative orientations of the oblique images in relation to the nadir view are identical to the internal system calibration. Based on the orientations of the combined adjustment, the boresight calibration can be computed by comparing the orientations from the controlled bundle block adjustment with the inertial orientations. The boresight values are related to the roll, pitch and yaw system; the required transformations are made within the Hannover program GPSCOR. The boresight calibration values computed in this way can be used in the same or in a separate run of GPSCOR for the correction of the inertial data. The corrected inertial data then correspond to the orientation of the vertical camera. With the values of the internal system calibration (rotations of the oblique cameras in relation to the vertical camera, see table 2), the exterior orientations of all images of a project can be computed with the program ROTOR. Of course the orientations computed in this way should, within the calibration sub-block, be close to the orientations from the controlled bundle block adjustment. The kappa values of the orientation of the sub-cameras in relation to the vertical reference camera (table 2) show that the sub-cameras are always oriented into the oblique view direction. The oblique angles vary from 49.1 grads up to 51.3 grads (44.2° to 46.2°). With the orientation of the sub-cameras in relation to the nadir images, the orientations have been computed by ROTOR. With these orientations, object coordinates have been computed by combined intersection, resulting at the control points in RMSX = 0.62 m, RMSY = 0.60 m and RMSZ = 1.63 m, satisfying the expectations.
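The relative rotations listed in table 2 are obtained exactly as described above: each sub-camera rotation matrix is multiplied by the inverse (transpose) of the rotation matrix of the vertical image of the same exposure station, and the results are averaged over all stations. The sketch below illustrates this step for one station; the omega-phi-kappa parameterization is one common photogrammetric convention and not necessarily the one used in BLUH, and the example angles are arbitrary.

    # Sketch of the relative orientation step described above: rotate each sub-camera
    # orientation by the inverse rotation of the vertical image of the same exposure
    # station. The omega-phi-kappa convention below is an assumption (conventions
    # differ between software packages); the example angles are arbitrary.
    import numpy as np

    def rotation_matrix(omega, phi, kappa):
        """Rotation matrix from omega, phi, kappa in radians, R = Rz(kappa) @ Ry(phi) @ Rx(omega)."""
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def relative_rotation(R_sub, R_vertical):
        """Rotation of a sub-camera relative to the vertical camera of the same station."""
        return R_sub @ R_vertical.T   # the inverse of a rotation matrix is its transpose

    R_nadir = rotation_matrix(0.02, -0.01, 1.57)   # absolute orientation of the nadir image
    R_oblique = rotation_matrix(0.03, 0.76, 1.60)  # absolute orientation of one oblique image
    print(relative_rotation(R_nadir, R_nadir))     # identity: 0 for all 3 rotations of the nadir image
    print(relative_rotation(R_oblique, R_nadir))   # relative rotation entering the system calibration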

                 phi [grads]   omega [grads]   kappa [grads]
sub-camera 5        -0.0261        49.9609         -0.3585
sub-camera 7       -49.1002        -0.5065        -99.8977
sub-camera 8         1.7375       -51.1537        199.2017
sub-camera 19       51.3505         0.4023         99.6800
Table 2: orientation of the sub-cameras against the nadir reference camera

4 CONCLUSION

The combined use of vertical and oblique cameras, as in the Track Air MIDAS camera system, requires an internal system calibration. The orientation of an image block taken with such a combination leads to a strong overlap of images, with up to 12 images per object point. Standard commercial programs are not able to handle such a block by automatic image matching, and a manual measurement is very time-consuming. Therefore a direct sensor orientation with a combination of relative kinematic GPS positioning and an inertial measurement system is required. The direct sensor orientation together with the system calibration leads to the orientation of all sub-cameras. With a sub-block of 3 flight lines and 4 vertical images in every flight line, with well connected oblique images, a complete system calibration is possible. The Canon EOS cameras used have systematic image errors, dominated by the radial symmetric distortion, in the range of up to 100 µm or 14 pixels. The influence of the systematic image errors can be respected in a geo-coded interpretation and measurement system, such as those from Pictometry or MultiVision, by the application software, or by generating perfect perspective images based on a resampling of the images using the systematic image errors. The reached absolute accuracy within the calibration block of approximately 0.6 m in X and Y is sufficient for the purposes of MultiVision applications. The relative accuracy is better than this. Of course it depends upon the direct sensor orientation and the stability of the sub-cameras and of the camera system including the misalignment. The camera geometry under usual conditions is stable within the block, and the direct sensor orientation is dominated by the hardware components used. Images taken by MultiVision or Pictometry are usually only used as single images, allowing a geo-coding only by means of digital elevation models. These DEMs are an important limitation: because of the inclined views, a height error of the DEM displaces the geocoded point horizontally by approximately the height error multiplied by the tangent of the nadir angle, i.e. for nadir angles close to 45° by approximately the same amount as the height error itself.

REFERENCES

Höhle, J., 2008: Photogrammetric Measurements in Oblique Aerial Images, Photogrammetrie Fernerkundung Geoinformation, 2008, issue 1, pp. 7-14
Jacobsen, K., 1988: Handling of Panoramic and Extreme High Oblique Photographs in Analytical Plotters, ISPRS Kyoto 1988, IntArchPhRS, vol. XXVII, B2, pp. 232-237
Manual of Photogrammetry, 2nd edition, chapter II, part 1, Aerial cameras and accessories, ASPRS, 1952
Moffit, F.H., 1967: Photogrammetry, 2nd edition, chapter 2, Aerial Cameras, International Textbook Company