A method of generating free-route walk-through animation using vehicle-borne video image




Jun KUMAGAI*, Ryosuke SHIBASAKI*
*Graduate School of Frontier Sciences, Shibasaki Lab., University of Tokyo
4-6-1 Komaba, Meguro-ku, Tokyo, 153-8505, JAPAN
Tel: (81)-3-5452-6417  Fax: (81)-3-5452-6417  E-mail: jun@iis.u-tokyo.ac.jp

KEYWORDS: vehicle-borne video image, free-route walk-through animation

ABSTRACT
Large three-dimensional virtual spaces and walk-through animations are used in navigation systems, games, movies, and other applications. In three-dimensional spaces built from CG data, the sense of reality depends on the texture quality of the buildings. However, high-quality texture data is expensive to produce, which makes it very difficult to reproduce the reality of the real world completely. In this research, a laser range scanner and eight video cameras are mounted on a vehicle. The depth of objects and panoramic video images covering all directions (360 degrees) are captured by this system. From the depth data and the panoramic video images, this research aims at generating free-route walk-through animations, such as right-turn and left-turn animations at a crossing, movement of the viewpoint, and changes of the heading direction.

1 Introduction
Recently, walk-through animation has become popular in various fields, such as navigation, sightseeing, museums, e-commerce, and real estate brokerage. There are basically two techniques for creating three-dimensional virtual spaces and walk-through animations.

One of these methods is to generate surface models of objects. VRML, CAD, games, and many other applications produce three-dimensional space with this technique. It requires skills and powerful computing facilities. Furthermore, acquiring the textures of building surfaces demands a great deal of time and labor. Even with cutting-edge technology, the reality of the real world cannot be reproduced completely.

The other technique, which creates three-dimensional space from real images, is called "image-based rendering". This method uses only photographs of the real world. In order to build three-dimensional space by image processing, it requires a large number of real images. Furthermore, in order to compose images from arbitrary viewpoints, the position and direction of the camera and the depth from the camera position to the objects in three-dimensional space are necessary. Hirose et al. detected corresponding points by matching under the epipolar constraint and computed the depth of the corresponding points from two images whose position and attitude were acquired by GPS and other sensors. Ikeuchi et al. used the inclination of edges in an EPI (epipolar-plane image) to estimate the distance between the camera and a building, and thereby the depth of objects.

In most existing research, GPS provides the position and direction of each image, and image processing estimates the depth information. In urban areas, however, such as in the canyon between tall buildings, GPS positioning data is not available. Acquiring accurate three-dimensional information by image processing is also difficult because of vehicle vibration, noise, and changing lighting conditions. Moreover, the vehicle track must follow a straight line at constant velocity, conditions that are quite difficult to meet in the ever-changing real environment.
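To illustrate the requirement stated above, that composing an image from an arbitrary viewpoint needs both the camera pose and the depth to the object, the following minimal sketch lifts one pixel to 3D and reprojects it into a new viewpoint. The function name, the pinhole intrinsics, and the numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def reproject_pixel(u, v, depth, K, R, t):
    """Reproject one pixel into a new viewpoint (illustrative sketch).

    u, v  : pixel coordinates in the source image
    depth : distance along the viewing ray from the source camera (assumed)
    K     : 3x3 pinhole intrinsic matrix (assumed model)
    R, t  : rotation and translation taking source-frame points into the
            new camera's frame
    """
    # Lift the pixel to a 3D point in the source camera frame.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    point_src = depth * ray / np.linalg.norm(ray)

    # Express the point in the new camera frame and project it.
    point_dst = R @ point_src + t
    uvw = K @ point_dst
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Example: a point 15 m ahead, seen from a camera shifted 1 m to the right.
K = np.array([[300.0, 0.0, 160.0],
              [0.0, 300.0, 120.0],
              [0.0, 0.0, 1.0]])      # assumed intrinsics for a 320x240 image
R = np.eye(3)                         # same orientation
t = np.array([-1.0, 0.0, 0.0])        # viewpoint moved 1 m to the right
print(reproject_pixel(160, 120, 15.0, K, R, t))   # point shifts to the left
```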

2 Purpose
The objective of this research is to create walk-through animation from real images. The walk-through animation is first created on a fixed route, such as a road network. Next, the walk-through animation is created on a free route.

3 Methodology
3-1 Outline
In order to create walk-through animation from real images, it is necessary to compose images from arbitrary viewpoints, which requires depth information of the objects and wide-range real images. We use a laser range scanner to acquire depth information, and eight video cameras to acquire real images in all directions. From these data, walk-through animations, such as images for turning right and left at a crossing, movement of the viewpoint, panorama images at an arbitrary viewpoint, and changes of the heading direction, are composed from the all-direction images.

3-2 Positioning using Horizontal Range Profiles
3-2-1 Laser range scanner
The laser range scanner shown in Figure 1 is mounted on a car and measures the distance from the sensor to the objects. The specification of the laser range scanner is given below:
Scanning resolution: 1200 points per 300-degree scan
Scanning frequency: 10 Hz
Scanning range: 70 m
Range error: 3 cm at 1 sigma
Scanning type: one-dimensional profiling
Horizontal range profiles are captured for every 25 cm of forward vehicle travel.

3-2-2 Registration of Horizontal Range Profiles
An image such as the one in Figure 3 can be created from the laser range data. Zhao et al. proposed a method to recover the sensor position and the building surfaces: line segments are extracted from the successive range images, and common segments in successive images are matched and piled up to determine the changes of position and attitude between neighboring images. The sensor position, depth, road width, and building surfaces along the vehicle route can be computed by piling up the successive images. A sketch of this piling-up step is given after this section.

(Fig. 1: Laser range scanner. Fig. 2: Sensor prototype. Fig. 3: Horizontal range profile (sensor position and laser range points). Fig. 4: A pair of laser range profiles. Fig. 5: Results of line segment extraction. Fig. 6: Integrated profiles after matching. Fig. 7: Vehicle trajectory and depth information of objects.)
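The following minimal sketch shows how successive horizontal range profiles might be piled up into a common frame, assuming the per-profile pose changes have already been estimated, for example by the line-segment matching described above. The scan parameters and function names are illustrative assumptions, not taken from the actual system.

```python
import numpy as np

def profile_to_points(ranges, fov_deg=300.0):
    """Convert one horizontal range profile to 2D points in the sensor frame.

    ranges  : array of range readings (metres), evenly spaced over the scan fan
    fov_deg : angular coverage of the scan (assumed value, for illustration)
    """
    angles = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, len(ranges)))
    return np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))

def pile_up_profiles(profiles, poses):
    """Transform each profile into a common global frame.

    profiles : list of (N_i x 2) point arrays in the sensor frame
    poses    : list of (x, y, heading) per profile, e.g. obtained by matching
               common line segments between neighbouring profiles
    """
    global_points = []
    for points, (x, y, heading) in zip(profiles, poses):
        c, s = np.cos(heading), np.sin(heading)
        R = np.array([[c, -s], [s, c]])
        global_points.append(points @ R.T + np.array([x, y]))
    return np.vstack(global_points)
```

Accumulating the profiles in this way yields the integrated map of building surfaces and the vehicle trajectory sketched in Figures 6 and 7.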

3-3 Composition of walk-through animation
3-3-1 Problems
Walk-through animation does not look natural if the viewing image changes direction suddenly at a crossing. A natural sequence of images of turning left and right at a crossing is therefore required. Actual image sequences acquired while turning right or left would look the most natural. In order to acquire images of right and left turns at a crossing in this way, however, twelve runs per crossing would be necessary, as shown in Figure 8: right and left turns from each of the four directions, plus straight runs along the north-south and east-west directions.

(Fig. 8: The problem of acquiring turning images at a crossing.)

3-3-2 Image composition for turning at a crossing
A. Outline
An image-capturing device that consists of eight video cameras is used to acquire images from all directions (360 degrees). The video cameras are arranged at intervals of 45 degrees and are numbered from one to eight, as shown in Figure 9: camera 1 covers the front view, camera 2 the right 45-degree view, camera 3 the right 90-degree view, and so on. Each camera has a field of view of about 61 degrees. The image of turning at the crossing is obtained by compounding the images taken along the south-north and east-west directions.

(Fig. 9: Conceptual figure and camera positions on the vehicle.)

B. Making of a panorama image
A panorama image is created by stitching the images acquired by the video cameras. We have used computer graphics (CG) to simulate the algorithms developed so far. In this simulation, the road is 10 m wide and the camera system is placed 1.5 m above the road level. The CG-generated images (320 x 240 pixels) used for the simulation are shown in Figure 10. The image set consists of simulated images for camera 1, camera 7, and camera 8. These three images are used to prepare the panorama image shown in Figure 11.

(Fig. 10: Images from each camera (camera 7, camera 8, camera 1). Fig. 11: A panorama image.)

C. Movement of a panorama image
First, a display screen is set up, as marked by the red point in Figure 12. Next, as the vehicle nears the crossing, the panorama image is gradually moved towards the right (movement of the panorama image on the screen), and a left-turn image is composed. For example, when the vehicle moves from position P1 towards P3 (near the crossing), at P1 only the camera 1 image is needed. As the vehicle approaches the crossing, images from different cameras must be compounded for realistic three-dimensional viewing. At P2, the panorama images from camera 1 and camera 8 are compounded. At P3, only the image from camera 7 is needed for the left 90-degree view. Between P2 and P3, the panorama images from camera 7 and camera 8 are compounded.
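The geometry behind this composition can be sketched as follows, assuming camera 1 faces forward and camera numbers increase clockwise in 45-degree steps, each camera covering about 61 degrees as stated above. The function names and angle convention are illustrative assumptions; the paper does not specify how camera selection is implemented.

```python
# Illustrative sketch: which of the eight cameras cover a given viewing
# direction, measured in degrees clockwise from the vehicle's front.
CAMERA_SPACING_DEG = 45.0
CAMERA_FOV_DEG = 61.0

def camera_heading(camera_no):
    """Heading of a camera (degrees, clockwise from the front view)."""
    return (camera_no - 1) * CAMERA_SPACING_DEG

def cameras_covering(view_deg):
    """Return the camera numbers whose field of view contains view_deg."""
    covering = []
    for camera_no in range(1, 9):
        # Signed angular difference wrapped to [-180, 180).
        diff = (view_deg - camera_heading(camera_no) + 180.0) % 360.0 - 180.0
        if abs(diff) <= CAMERA_FOV_DEG / 2:
            covering.append(camera_no)
    return covering

# Looking straight ahead only camera 1 is needed; 30 degrees to the left,
# cameras 1 and 8 overlap (where images are blended as the panorama shifts
# near the crossing); 90 degrees to the left is covered by camera 7 alone.
print(cameras_covering(0))      # [1]
print(cameras_covering(-30))    # [1, 8]
print(cameras_covering(-90))    # [7]
```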

(Fig. 12: Movement of the panorama image on the screen as the vehicle moves from P1 through P2 to P3 near the crossing.)

D. The change of panorama images
The crossings are identified using the horizontal profile of the range data. When the vehicle drives through a crossing, intersecting lines appear in the horizontal profile of the range image, and these intersecting lines provide the information about the crossing. If the vehicle is moving along the south-north direction and makes a left turn at the intersection, the south-north panorama has to be replaced with the east-west panorama. Similarly, for a right turn, the south-north panorama has to be replaced with the west-east panorama. This gives the viewer the feeling of a real three-dimensional space.

(Fig. 13: The intersection of the vehicle trajectory detected by the laser scanner. Fig. 14: The change of panorama images, from the south-to-north image to the east-to-west image.)

4 Results
The result of running the simulation with the parameters in Table 1 is shown below. The parameters are the values by which the panorama image is moved towards the right in each frame. Figure 15 shows the simulated left-turn panorama generated with these parameters.

Table 1. Parameters (movement of the panorama image towards the right, in pixels)
Frame number:       1    2    3    4    5    6    7    8     9     10     11
Movement (pixels):  0   +20  +40  +40  +40  +60  +80  +110  +130  +69.5  +29.5
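A minimal sketch of how such per-frame shift values could be applied to a panorama to render the left-turn sequence is given below. Only the shift values come from Table 1; the screen size, the function name, and the assumption that the values are per-frame increments (rather than absolute offsets) are illustrative, since the paper does not state this explicitly.

```python
import numpy as np

# Per-frame horizontal shifts of the panorama (pixels), from Table 1.
SHIFTS = [0, 20, 40, 40, 40, 60, 80, 110, 130, 69.5, 29.5]

def render_turn_sequence(panorama, screen_width=320):
    """Cut a fixed-size screen window out of the panorama, moving the window
    right by the accumulated shift in each frame (illustrative sketch).

    panorama : H x W (or H x W x 3) image array,
               with W >= screen_width + sum(SHIFTS)
    """
    frames = []
    offset = 0.0
    for shift in SHIFTS:
        offset += shift
        left = int(round(offset))
        frames.append(panorama[:, left:left + screen_width])
    return frames

# Example with a dummy panorama wide enough for the accumulated shift.
panorama = np.zeros((240, 1000, 3), dtype=np.uint8)
sequence = render_turn_sequence(panorama)
print(len(sequence), sequence[0].shape)   # 11 frames of 240 x 320 pixels
```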

(Fig. 15: The image sequence of turning left at a crossing, frames 1 to 11.)

5 Conclusions and Future Works
As shown above, it is possible to create walk-through animations that turn at crossings even though the vehicle drives straight forward. Although this was a simulation using CG images, we will use real images acquired by the system in the future. Crossings with more or fewer than four intersecting roads will also be analyzed, and suitable algorithms will be developed. Furthermore, movement of the viewpoint, changes of heading, and so on can be computed by combining the depth information from the laser range scanner with the panorama images. Thus, by accumulating the various animations created with this technique and loading them according to the demands of the user, natural walk-through animation resembling the real world can be created.

6 References
M. Hirose, S. Watanabe, T. Tanikawa, "Generation of Wide-Range Virtual Environment by Using Photo-realistic Images", Human Interface News and Report, Vol. 13, No. 2, pp. 173-178, 1998.
K. Ikeuchi, M. Sakauchi, H. Kawasaki, T. Takahashi, M. Murao, I. Sato, F. Kai, "Panoramic Image Analysis for Constructing Virtual Cities", Vol. 42, No. SIG 13 (CVIM 3), pp. 49-57, 2001.
T. Takahashi, H. Kawasaki, K. Ikeuchi, M. Sakauchi, "Arbitrary View Rendering System Using Omni Directional Image", Vol. 42, No. SIG 13 (CVIM 3), pp. 99-109, 2001.
H. Zhao, R. Shibasaki, "High Accurate Positioning and Mapping in Urban Area using Laser Range Scanner", Proc. of IEEE Intelligent Vehicles Symposium, Tokyo, May 2001.