Mesh-based Integration of Range and Color Images

Yiyong Sun*, Christophe Dumont, Mongi A. Abidi
Imaging, Robotics, and Intelligent Systems (IRIS) Laboratory
Department of Electrical and Computer Engineering
University of Tennessee, Knoxville
* Email: yiyong@iristown.engr.utk.edu

ABSTRACT

This paper discusses the construction of photo-realistic 3D models from multisensor data. The data typically comprise multiple views of range and color images to be integrated into a unified 3D model. The integration process uses a mesh-based representation of the range data, and the advantages of the mesh-based approach over a volumetric approach are discussed. First, two meshes, corresponding to range images taken from two different viewpoints, are registered to the same world coordinate system and then integrated. This process is repeated until all views have been integrated. The integration is straightforward unless the two triangle meshes overlap. Overlapping measurements are detected, and the less confident triangles are removed based on their distance from, and orientation relative to, the camera viewpoint. After removing the overlapping patches, the meshes are seamed together to build a single 3D model. The model is incrementally updated after each new viewpoint is integrated. The color images are used as texture in the finished scene model. The results show that the approach is efficient for the integration of large, multimodal data sets.

Keywords: range images, surface mesh, view integration, multisensor fusion

1. INTRODUCTION

Three-dimensional object and environment reconstruction is an increasingly important topic in the field of computer vision. Although various types of image acquisition equipment can be employed for this task, laser range scanners are among the most popular. In recent years, both the accuracy and the acquisition speed of laser range scanners have improved significantly, leading to increased activity in the area of 3D reconstruction from range images. As a range scanner can only acquire data from the surfaces the laser strikes, reconstructing entire objects and/or scenes generally involves the integration of several range images from different viewpoints. The problem of interest is then how to construct a unique surface representation from these multiple views. The registration of each view into a global reference system is not considered in this paper.

Methods for integrating range images can be categorized into two general approaches. Methods in the first category rely upon a triangular mesh surface representation 4-7. Turk et al. 7 remove the overlapped regions until they touch only along the boundary and then zipper them together. Soucy et al. 6 employ the concept of canonic views: the integrated surface model is piecewise estimated by a set of triangulations modeling each canonical subset of the Venn diagram of the set of range views, and these triangulations are subsequently connected to yield a global surface. Rutishauser et al. 5 retriangulate the overlapped mesh by growing the mesh at its contour. Pito 4 defines the concept of co-measurements based on the position and orientation of the range scanner; only the most confidently acquired measurements are kept, the redundant triangles are removed, and the patches of triangle meshes are then seamed together. The second category of integration methods comprises the volumetric approaches, in which new measurements update the status of voxels in the scene space and, once all views have been processed, a polygonisation algorithm is used to obtain the surface mesh representation.

Some researchers 1-3, 11 have employed the concept of implicit surfaces. The voxels near the surface are assigned values representing the distance to the surface, and these values are updated as new measurements are obtained; the surface mesh is created where the voxel values are zero. Hoppe et al. 3 reconstruct surfaces from a cloud of unorganized 3D points. This algorithm requires no knowledge of the origin of the points - points from any view are treated in the same way - and it is therefore more general. Other approaches 1, 2, however, make prior assumptions about the connectivity of points, implying that measurements from a single view must have some connection. All of the techniques mentioned here have been shown to provide good results.
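
To illustrate the implicit-surface idea, here is a minimal sketch of a signed-distance voxel update in the spirit of the volumetric methods 1-3, 11; the truncation distance and weighted-average scheme are illustrative assumptions, not details of any cited method.

```python
import numpy as np

def update_signed_distance(grid_d, grid_w, voxel_z, measured_z, trunc=0.05):
    """One implicit-surface update: each voxel that projects into the new
    range image stores a running average of (measured depth - voxel depth)
    along its viewing ray. The zero crossing of grid_d is the estimated
    surface, from which a mesh can later be polygonised.

    grid_d, grid_w : per-voxel distance and weight accumulators
    voxel_z        : depth of each voxel along its viewing ray
    measured_z     : range measurement on the ray through each voxel
    """
    d = measured_z - voxel_z              # signed: positive in front of the surface
    near = np.abs(d) < trunc              # only update voxels near the surface
    w = grid_w[near]
    grid_d[near] = (grid_d[near] * w + d[near]) / (w + 1.0)
    grid_w[near] = w + 1.0
```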

Most of these experiments, however, were performed with relatively small data sets. Our goal, on the other hand, is to reconstruct very large scenes, whose data sets often contain prohibitively large amounts of data. For practical reasons, the amount of data must be reduced either during or after the integration process. Although voxel-based approaches have been shown to perform better than mesh-based integration in some instances 1, mesh-based techniques are more amenable to data reduction. Additionally, due to practical visualization constraints, most voxel-based approaches must employ a polygonisation algorithm such as marching cubes 1 or marching triangles 2, 8 to obtain a mesh representation. For these reasons, we have adopted a mesh-based technique for range image integration, similar to Pito's work 4.

Although the various mesh integration methods differ in their details, they all share several common steps, beginning with the triangulation of a single view. This step is relatively easy and fast, since most laser scanners provide measurements on a rectangular grid. Next, when two different views are considered, each algorithm must handle the overlapping regions appropriately. Finally, each patch must be seamed with the others to create a global mesh. Identification and re-meshing of the overlapping regions are generally considered the primary problems to solve in mesh-based integration.

One additional aspect that must be considered in photo-realistic scene reconstruction is texture. In addition to providing visual information, texture can significantly affect the perception of scene geometry. In our application, a color image is captured with each range image. The general idea of our work is illustrated in Fig. 1. Presently, the algorithm works only with regular meshes (i.e., those created from range data sampled over rectangular grids); integration of arbitrary meshes, such as those produced by a mesh reduction algorithm, is work in progress.

Figure 1. Framework for photorealistic scene reconstruction.

The remainder of this paper is organized as follows. In Section 2, the mesh integration algorithm is described: Section 2.1 describes the generation of a mesh from a single range image, Section 2.2 describes the process for detecting overlapping regions and removing the redundant triangles, and Section 2.3 describes the linking of the gaps between the mesh patches. In Section 3, the method of fusing the range and color images is discussed. Example results are presented in Section 4, and concluding remarks are made in Section 5.

2. MESH INTEGRATION ALGORITHM

2.1. Create a Triangle Mesh from a Single Range Image

Most laser range scanners employ a polar coordinate system, with the viewing volume restricted by maximal horizontal and vertical angles. The range measurements are stored as a 2D grayscale image, from which the 3D coordinates can be recovered when the calibration parameters are known. The initial triangulation considers four neighboring points and the six possible connections 5 shown in Fig. 2.

Figure 2. Six possible configurations for the creation of triangles from four neighboring points.
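
To make the recovery of 3D coordinates concrete, the sketch below assumes an idealized spherical-coordinate scanner swept uniformly over given horizontal and vertical fields of view; a real scanner's calibration model will differ.

```python
import numpy as np

def range_image_to_points(rng, h_fov, v_fov):
    """Convert a 2D range image to a grid of 3D points, assuming an
    idealized scanner that samples uniformly over the given horizontal
    and vertical angular extents (a stand-in for the real calibration).

    rng : (rows, cols) array of range measurements
    h_fov, v_fov : total horizontal/vertical field of view in radians
    Returns a (rows, cols, 3) array of Cartesian coordinates.
    """
    rows, cols = rng.shape
    az = np.linspace(-h_fov / 2, h_fov / 2, cols)   # azimuth per column
    el = np.linspace(-v_fov / 2, v_fov / 2, rows)   # elevation per row
    az, el = np.meshgrid(az, el)
    x = rng * np.cos(el) * np.sin(az)
    y = rng * np.sin(el)
    z = rng * np.cos(el) * np.cos(az)
    return np.stack([x, y, z], axis=-1)
```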

When two neighboring range measurements differ by more than some threshold, there is a step discontinuity; the threshold is determined by the range value and the sample resolution. If a discontinuity is present, a triangle should not be created. Triangles created across step discontinuities generally have very small internal angles, which hinders both the search for neighboring triangles and the identification of overlapping regions. Therefore, among each set of four neighboring points, we only consider points that do not lie along discontinuities. If three of them satisfy this condition, a triangle is created in one of the last four styles of Fig. 2. If none of the four lies along a discontinuity, two triangles are created, with the common edge being the diagonal of shortest 3D length, as shown in the first two styles of Fig. 2.

2.2. Remove Triangles in Overlapping Regions

We now suppose that two meshes have been created from two range images as described in Section 2.1, and we must detect the overlapping regions. Fig. 3 shows two registered meshes from two simulated range shots of a sphere; note the overlapping, redundant triangles in the center.

Figure 3. Two registered meshes.

Overlapping region detection is based on back projection: knowing the calibration model, the 3D points can be projected back to a 2D reference frame. Given a new triangle mesh, we project each triangle of the old mesh onto the new 2D reference frame (i.e., the image plane of the new range image). If the projection falls outside this reference frame, the triangle is not in the view port of the new range shot and is left unchanged. If the projection is inside the new reference frame, we check for overlap. First, we compute the bounding box of the triangle projection, as shown in Fig. 4. Then, we check whether the triangle is facing the new position of the range scanner: if the dot product of the triangle normal with one of the three measurement rays (i.e., the rays from the viewpoint to each of the triangle vertices) is positive, we call the triangle front facing. For each front-facing triangle, we check all the triangles that come from the new range image and lie in the bounding box. Fig. 5 shows the conditions under which two 2D triangles intersect. Though we could determine whether any edges intersect by checking each pair of edges, this is computationally expensive. In most cases of intersection, one point of one triangle lies inside the other triangle, and this can be tested cheaply; we therefore check this case first and, only if it is not satisfied, check for edge intersection. Note that efficient algorithms exist for checking 2D line intersection 13.

Figure 4. Bounding box. Figure 5. Intersecting triangles. Figure 6. Removing circle.

When checking whether a point is inside a triangle, we employ a removing circle: the circumscribed circle of the triangle (Fig. 6). If a point is positioned slightly outside of a triangle, the triangle that would be created from it would be ill-formed, as shown by the dashed lines in Fig. 6. Therefore, we check whether that point is inside the removing circle; if it is, we classify the two triangles as overlapping. This is the same principle used to create 2D Delaunay triangulations.
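
The removing-circle test is the classical in-circumcircle predicate used in Delaunay triangulation; a minimal 2D version is sketched below (one standard determinant formulation, not necessarily the authors' implementation).

```python
import numpy as np

def in_removing_circle(a, b, c, p):
    """True if 2D point p lies inside the circumscribed circle of
    triangle (a, b, c) -- the 'removing circle' test of Fig. 6.
    Uses the standard in-circle determinant; (a, b, c) must be in
    counter-clockwise order for the sign convention below.
    """
    m = np.array([
        [a[0] - p[0], a[1] - p[1], (a[0] - p[0])**2 + (a[1] - p[1])**2],
        [b[0] - p[0], b[1] - p[1], (b[0] - p[0])**2 + (b[1] - p[1])**2],
        [c[0] - p[0], c[1] - p[1], (c[0] - p[0])**2 + (c[1] - p[1])**2],
    ])
    return np.linalg.det(m) > 0.0
```
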
When all the triangles in the bounding box have been checked and overlap has been found, we must delete either the triangle in the old mesh or all the overlapping triangles in the new mesh. To keep the best measurements, we compute a confidence for each triangle. The confidence is defined as the dot product of the triangle normal and the measurement ray, both normalized, and therefore takes values in the range [-1, 1]. This definition matches the range scanner's working principle: measurement accuracy depends on the incident angle. We compute the average confidence of all the overlapping triangles in the bounding box. If this average is larger than the confidence of the triangle from the old mesh, we delete the triangle in the old mesh; otherwise, we delete all the overlapping triangles in the bounding box.
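
A direct transcription of this confidence measure is sketched below. The sign depends on the normal orientation convention, and the dist_weight parameter is a placeholder for the distance-compensation factor introduced later in this subsection, whose exact form the paper does not specify.

```python
import numpy as np

def triangle_confidence(v0, v1, v2, viewpoint, dist_weight=1.0):
    """Confidence of a triangle as defined in the text: the dot product
    of the (unit) triangle normal with the (unit) measurement ray,
    giving a value in [-1, 1]. Here the ray points from the surface back
    to the scanner, so an outward-facing normal yields a positive value.
    """
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    centroid = (v0 + v1 + v2) / 3.0
    ray = viewpoint - centroid           # measurement ray, surface -> scanner
    ray = ray / np.linalg.norm(ray)
    return float(np.dot(n, ray)) * dist_weight
```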

Note that overlapping in 2D does not imply overlapping in 3D: two surface patches that overlap in their 2D projection may come from different areas of the object if, for instance, the object has self-occlusions. We therefore set a threshold to determine whether two patches overlapping in 2D come from the same area of the object. If the distance between the two triangles is smaller than the threshold, we assume that they are representations of the same surface patch. The threshold is set according to the accuracy of the range scanner and the measured distance, and experimental evidence indicates that it works well for both large and small objects.

Since there is always some registration error and noise in the range data, registered surface patches are seldom aligned perfectly. From one view the triangles may not overlap, while from another view they do, as shown in Fig. 7. Because overlap detection is view dependent, we must check not only in the new view port but also in the old view port(s). Since the triangles in the old mesh may come from many different, previously integrated views, back projecting each triangle in the new mesh onto every previous view port would be computationally expensive. Instead, we project only the triangles in the bounding box onto the 2D reference frame of the triangle in the old mesh. Again, deletion is based on the average measurement confidence.

Figure 7. View-dependent overlapping: from View 1 there is overlap, but not from View 2.

In general, triangles in the old mesh that are not front facing do not need to be checked for overlapping. There is, however, a special case that must be considered, as shown in Fig. 8. When a step discontinuity is smaller than the threshold, the two points along the discontinuity are connected; but once the real surface is measured, such a surface patch may need to be removed. For example, Fig. 8(a) shows two different viewpoints. The dashed line in Fig. 8(b) is created from view 2 but, as seen from view 1, it is not correct; the triangle indicated by the dashed line should therefore be removed.

Figure 8. A triangle that is not front facing but must still be checked for overlapping: (a) two views; (b) sample result of the two views.

In Fig. 9, a two-view integration of a head model is shown after deletion of the overlapping triangles. The two views were taken from either side of the model. All the overlapping parts have been removed, leaving gaps in the center of the face and at the eyes, and the most confident measurements are kept. In our implementation, the confidence computation also considers the distance between the object and the scanner: closer measurements tend to be more accurate even when the ray does not strike the surface perpendicularly. We therefore add a factor to the confidence computation to compensate for the distance difference between a pair of views. This modification is very important in 3D reconstruction of rooms, since the range finder may move around within the room.

Figure 9. Two views after deletion of overlapping triangles.

2.3. Link the Mesh Patches

To link the gaps between the mesh patches, we must label the candidate triangles that can be combined with other points to build new triangles. These candidate triangles are called active triangles, and they must lie on the mesh boundaries. Note that not all boundary triangles are active triangles, as some may have nothing to do with the other mesh. If any of a triangle's neighbors has been deleted, we mark it as an active triangle, whether it is in the old or the new mesh. When we build the mesh for a single view, each triangle's neighbor information is stored: for each point of a triangle, we store a pointer to the opposite neighbor triangle. If the pointer is null, the triangle is at the boundary of the mesh. The pointer is updated when a neighbor triangle is deleted or a new triangle is created to bridge the gap.
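
A minimal sketch of this neighbor bookkeeping is given below; the structure and field names (vertices, neighbors, active) are hypothetical and only illustrate the pointer-per-vertex scheme described above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Triangle:
    """Triangle with per-vertex opposite-neighbor pointers: neighbors[i]
    is the triangle sharing the edge opposite vertex i, or None at a
    mesh boundary."""
    vertices: List[int]                   # indices into a point array
    neighbors: List[Optional["Triangle"]] = field(
        default_factory=lambda: [None, None, None])
    active: bool = False                  # set when a neighbor is deleted

def mark_active_after_deletion(deleted: Triangle):
    """When a triangle is removed, its surviving neighbors become
    'active triangles', i.e. candidates for gap linking."""
    for nb in deleted.neighbors:
        if nb is not None:
            nb.active = True
            # detach the back-pointer to the deleted triangle
            nb.neighbors = [None if t is deleted else t for t in nb.neighbors]
```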

An active triangle may have one, two, or even three active edges, and each active edge must find a point with which to build a new triangle. For each active edge, we first find some neighboring points as candidates. If the active edge and the nearest points are both in the old mesh or both in the new mesh, the nearest points are not necessarily on the active edge. We then check the validity of each candidate - whether the new triangle would intersect the existing triangles in the region - and, among all valid candidate points, the one that faces the active edge with the largest angle is considered the best. We use this point to create a new triangle. If the new triangle has a common edge with an existing triangle, both triangles update their neighbor information. After all the gaps have been linked, a global mesh representation of the surface is obtained. We employ a KD-tree 9 for candidate point searching in our implementation. Though searching the KD-tree takes some time, the number of active triangles is very small, so finding the nearest neighbors is quite fast.
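
The candidate search and largest-angle selection might be sketched as follows. Here SciPy's cKDTree stands in for the ANN library cited as ref. 9, the neighborhood size k is an assumed parameter, and the intersection-based validity check is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_linking_point(edge_a, edge_b, points, k=8):
    """For one active edge (edge_a, edge_b), query a KD-tree of candidate
    boundary points and return the index of the candidate subtending the
    largest angle with the edge. Validity checks (no intersection with
    existing triangles) are left out of this sketch.

    edge_a, edge_b : (3,) arrays, the endpoints of the active edge
    points         : (N, 3) array of candidate boundary points
    """
    tree = cKDTree(points)                # in practice, built once and reused
    mid = (edge_a + edge_b) / 2.0
    _, idx = tree.query(mid, k=k)         # k nearest candidates to the edge
    best, best_angle = None, -1.0
    for i in np.atleast_1d(idx):
        p = points[i]
        u, v = edge_a - p, edge_b - p
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        if denom < 1e-12:                 # p coincides with an edge endpoint
            continue
        angle = np.arccos(np.clip(np.dot(u, v) / denom, -1.0, 1.0))
        if angle > best_angle:
            best, best_angle = i, angle
    return best
```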

3. TEXTURE MAP

To produce a realistic scene, the color images must be fused with the range images as a texture map. As we are currently using simulated data, the registration parameters are known; handling registration errors is being addressed in ongoing research. In general, the texture map can be of any type (color, thermal, etc.). In our simulations, both range images and color images are captured from exactly the same view and are therefore automatically registered. Each triangle in the complete mesh is associated with the texture image corresponding to the range image from which it was generated; a linking triangle is associated with the texture image corresponding to the range image in which two of its three vertices lie. We project each triangle onto its 2D reference frame, find the 2D coordinates of each point, and assign each point the corresponding 2D texture coordinate. The end result is a 3D, textured scene.

4. EXAMPLE RESULTS

We have performed experiments on various 3D models. Example results for some frequently used small objects are shown in Fig. 10. The algorithm was also applied to reconstruct a room model; results are shown in Fig. 11, where (a) shows the result of integrating two views (without texture) and (b) shows the result of integrating four views with texture mapping. It may be evident in Fig. 11(b) that the three file cabinets under the painting have uneven texture. This is a result of color inconsistency - the cabinet looks much brighter when viewed perpendicularly. We intend to address the color consistency problem in future work.

Figure 10. 3D reconstruction results for some commonly used models: (a) sphere, 6 views; (b) head, 11 views, 72,778 triangles; (c) bunny, 12 views, 93,358 triangles.

Figure 11. Integration results for a room model: (a) two-view integration; (b) four-view integration with texture.

Fig. 12 (at the end of the paper) shows each step of a four-view reconstruction of a room with texture. Fig. 12(a) is a single-view range image; note the occluded regions behind the computer and cabinet. These regions are covered by the second view. The third view adds a large new area, but one side of the cabinet still needs measurement; the fourth view covers it, as well as the rear part of the table. The reconstructed model is quite large, consisting of 424,495 triangles.

The algorithm presented here is quite fast: integrating the simple objects in Fig. 10 can be performed in almost real time, with no serious code optimization. For the large model in Fig. 11, the integration takes about two minutes on an ONYX. Though the tested scenes are quite different, we did not need to change the thresholds used in the algorithm - they are computed automatically. Although the algorithm requires many 3D-to-2D projections when checking the overlapped regions and the validity of newly created triangles, most 3D graphics libraries provide very fast functions for this. Still, most of the computation time is spent checking for overlapping regions. The current implementation also requires a significant amount of memory, mostly to store the neighbor information; integrating the model in Fig. 11 requires more than 100 MB. One feasible approach to reducing both computation time and memory requirements is to employ mesh reduction during the integration process. A reduced mesh for a real range image is shown in Fig. 13; generally the data can be reduced by a factor of five while still maintaining reasonable accuracy. Incorporating mesh reduction into the integration algorithm is the subject of ongoing research.

Figure 13. A reduced mesh.

5. CONCLUSION

In this paper, we have described a mesh-based method to integrate multiple-view range images and fuse color images with the global mesh to produce a photorealistic scene. A mesh-based approach was selected over a voxel approach so that future work may incorporate mesh reduction; as the scenes we are interested in are quite large, data reduction will be required. The integration algorithm consists of several steps: single-view meshing, detection of overlapped regions, deletion of redundant triangles, and linking of the gaps between mesh patches.

In the overlapping regions, the most confident triangles are kept, where the confidence is determined by the sensor orientation, the surface patch normal, and the measurement distance. In the linking stage, a KD-tree is built to efficiently find the nearest neighboring points; of these, the one that sees the active edge with the largest angle is selected to combine with the active edge into a new triangle. The process is repeated until a global mesh is generated. Experiments were performed using simulated range data, and the results indicate that the algorithm works well in reconstructing different types of models and that the computation is relatively fast. All thresholds used in the algorithm are computed automatically.

ACKNOWLEDGEMENT

This work was supported by the U.S. Department of Energy (DOE) through the University Research Program in Robotics (URPR), grant number DOE-DE-FG02-86NE37968.

REFERENCES

1. B. Curless and M. Levoy, "A Volumetric Method for Building Complex Models from Range Images," Proceedings of SIGGRAPH, pp. 303-312, New Orleans, LA, 1996.
2. A. Hilton, A. Stoddart, J. Illingworth, and T. Windeatt, "Implicit Surface-Based Geometric Fusion," Computer Vision and Image Understanding, vol. 69, pp. 273-291, 1998.
3. H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle, "Surface Reconstruction from Unorganized Points," Proceedings of SIGGRAPH, vol. 26, pp. 71-78, 1992.
4. R. Pito, "Mesh Integration Based on Co-measurements," Proceedings of ICIP, vol. 2, pp. 397-400, Lausanne, Switzerland, 1996.
5. M. Rutishauser, M. Stricker, and M. Trobina, "Merging Range Images of Arbitrarily Shaped Objects," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 573-580, Seattle, WA, 1994.
6. M. Soucy and D. Laurendeau, "A General Surface Approach to the Integration of a Set of Range Views," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, 1995.
7. G. Turk and M. Levoy, "Zippered Polygon Meshes from Range Images," Proceedings of SIGGRAPH, pp. 311-318, Orlando, FL, 1994.
8. A. Hilton, A. J. Stoddart, J. Illingworth, and T. Windeatt, "Marching Triangles: Range Image Fusion for Complex Object Modeling," International Conference on Image Processing, pp. 381-384, Lausanne, 1996.
9. D. M. Mount, ANN Programming Manual, Department of Computer Science and Institute for Advanced Computer Studies, University of Maryland, College Park, MD.
10. J. D. Boissonnat, "Geometric Structures for Three-dimensional Shape Representation," ACM Transactions on Graphics, vol. 3, no. 4, pp. 266-286, 1984.
11. R. T. Whitaker, "A Level-Set Approach to 3D Reconstruction from Range Data," International Journal of Computer Vision, vol. 29, no. 3, October 1998.
12. K. Pulli, T. Duchamp, H. Hoppe, J. McDonald, L. Shapiro, and W. Stuetzle, "Robust Meshes from Multiple Range Maps," Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling, pp. 205-211, Ottawa, Canada, 1997.
13. J. O'Rourke, Computational Geometry in C (Second Edition), Cambridge University Press, 1998.
14. J. Wernecke, The Inventor Mentor, Addison-Wesley, 1994.

Figure 12. Four steps of the reconstruction of a room model: (a) 1 view, 126,517 triangles; (b) 2 views, 218,283 triangles; (c) 3 views, 327,816 triangles; (d) 4 views, 424,495 triangles.