Autonomous Systems Lab
Prof. Roland Siegwart

Bachelor Thesis

Surface Reconstruction from Point Clouds

Spring Term 2010

Supervised by: Andreas Breitenmoser, François Pomerleau

Author: Inna Tishchenko
Abstract

Robots very often use laser sensors to perceive their environment. Because of noise, these sensors only provide a cloud of points lying on or near the surface. The challenge is to reconstruct a smooth surface and to visualize it while at the same time saving space on the data processor. In the first part, during my work for the course Studies on Mechatronics, I researched papers on this topic and classified them. After comparing the methods I chose the one described in the work of Hugues Hoppe [1] to apply to the problem. In the second part, which is my Bachelor Thesis, I implemented the first two phases of the method described by Hoppe [1] in MATLAB, applied them to real point clouds using different parameters and analyzed the results. For several user-defined parameters the results were very good, but they required a lot of space. I therefore developed a method similar to Marching Cubes that has the advantage of saving space, but still has some drawbacks. Future work is to automate the choice of good parameters, to improve the speed using assumptions and simplifications, and to combine my method with Marching Cubes to save space while obtaining a good triangulated surface after the first phase, to be optimized in the second phase.
Contents

1 Introduction . . . 1
  1.1 Magnebike inspection robot . . . 1
  1.2 Robot Sensor . . . 2
  1.3 Scan matching and localization . . . 4
  1.4 Starting Point . . . 5

2 Literature Research . . . 7
  2.1 Available Papers . . . 7
    2.1.1 Problem Definition . . . 7
    2.1.2 Extension of the Search Field . . . 8
    2.1.3 Reading Papers . . . 8
    2.1.4 Arranging Papers . . . 8
    2.1.5 Classifying Papers . . . 9
  2.2 Phase 1: Initial Surface Estimation . . . 12
    2.2.1 Data Points Without Noise . . . 12
    2.2.2 Data Points With Noise . . . 17
  2.3 Phase 2: Mesh Optimization . . . 24
    2.3.1 Data Points Without Noise . . . 24
    2.3.2 Data Points With Noise . . . 25
  2.4 Choice of a Method . . . 28

3 Surface Reconstruction from Point Clouds . . . 29
  3.1 Problem Definition . . . 29
  3.2 Phase 1: Initial Surface Estimation . . . 30
    3.2.1 Neighborhood . . . 30
    3.2.2 Average Points and Corresponding Planes . . . 31
    3.2.3 Orient Planes Outside . . . 32
    3.2.4 Divide Volume into Cubes . . . 33
    3.2.5 Signed Distance Vectors . . . 34
    3.2.6 Marching Cubes . . . 35
    3.2.7 Results . . . 35
    3.2.8 Discussion . . . 38
  3.3 Phase 2: Mesh Optimization . . . 43
    3.3.1 Energy Function . . . 43
    3.3.2 Inner Minimization Problem . . . 44
    3.3.3 Outer Minimization Problem . . . 47
    3.3.4 Choice of the Edge . . . 49
    3.3.5 Loop of Inner and Outer Problems . . . 49
    3.3.6 Results . . . 51
    3.3.7 Discussion . . . 52
  3.4 Method of Cubes . . . 56
    3.4.1 Cubes Including Projected Points . . . 56
    3.4.2 Average Points and Planes in Cubes . . . 56
    3.4.3 Intersection Points . . . 57
    3.4.4 Build Triangles . . . 58
    3.4.5 Connect Triangles . . . 59
    3.4.6 Fill Gaps . . . 59
    3.4.7 Results . . . 60
    3.4.8 Discussion . . . 62

4 Conclusion and Future Work . . . 63
  4.1 Conclusion . . . 63
  4.2 Future Work . . . 63

A Symbols . . . 65
Chapter 1

Introduction

There are many locations that are hardly accessible, or not accessible at all, by humans. These could be industrial plants that are difficult or impossible to disassemble, such as pipes. The idea is therefore to design a robot that is able to explore such an environment and provide a 3D representation of it to the operator. With this motivation a project named Magnebike was launched at ETH Zurich in 2006 [2]. Its goal is to design a mobile robot able to climb into a specific environment, to create a 3D representation map of it, to detect the location of the robot on this map and finally to adapt the low-level control electronics and sensors. My work is a part of this project.

In the following sections 1.1, 1.2 and 1.3 the main ideas and realizations made so far are described according to the papers [2], [3] and [4]. In section 1.4 the starting point of my work is discussed.

1.1 Magnebike inspection robot

The Magnebike inspection robot (Figure 1.1) is a climbing robot moving on 3D pipe structures of complex shape and providing a 3D visualization of the environment [1]. It also detects its actual location in the map. The robot has to fulfill two main challenges [2], [5]:

It has to move on complex ferromagnetic surfaces within pipes with diameters from 200 mm up to 700 mm. 90° convex or concave obstacles with local abrupt changes in diameter of up to 50 mm are also possible. The robot should be able to follow a circumferential path.

The appropriate sensors should be able to detect the points of a metallic surface, and a program should process them into a surface representation. Furthermore, the robot has to identify its own location on this map.

The first challenge affects the locomotion technique of the robot on the surface. This problem was solved by equipping the robot with magnetic wheels. One of them can rotate and therefore guarantees the circumferential path. This can also be seen in figure 1.2. The second challenge concerns the choice of suitable sensors and programs for 3D representation and localization. Since the localization was chosen to be a combination of 3D scan registration and 3D odometry, a 3-axis accelerometer is necessary. The general data of the Magnebike inspection robot can be taken from table 1.1.
Figure 1.1: Magnebike inspection robot [5]

Table 1.1: Data of the Magnebike inspection robot [5]

Characteristics        Data
Size (L x W x H)       180 x 130 x 120 mm
Wheel diameter         60 mm
Mass                   3.5 kg
Magnetic wheel force   250 N
Max. speed             2.7 m/min

1.2 Robot Sensor

The main laser sensor has to be small and lightweight and needs to have a good performance in the environment described above. This implies that the sensor should have a high precision and provide high densities of points on metallic surfaces, as, for example, laser range finders do. The paper [4] describes exactly why the Hokuyo URG-04LX 2D laser range scanner was chosen. It is light and quite small in comparison to other sensors, and although its accuracy depends strongly on the target's properties such as color, brightness and material, it is competitive enough to remain the favorite. As the name indicates, the sensor is 2D. To extend it to a 3D sensor, it is additionally rotated about a third axis. To determine the target distance the sensor uses amplitude-modulated laser light.
Figure 1.2: Robot on a flat and curved surface [5]

The sensor sends the light to the target and detects the reflected light wave; hence it obtains the phase shift and from it the distance to the target. The exact data of the sensor is listed in table 1.2.

Table 1.2: Data of the Hokuyo URG-04LX 2D laser range scanner [3]

Characteristics of URG-04LX     Data
Weight                          approx. 160 g
Dimension (W x D x H)           50 x 50 x 70 mm
Measuring area                  distance 20 mm to 5,600 mm; scan angle 240°
Accuracy                        distance 20 to 1000 mm: ±10 mm;
                                distance 1000 to 4000 mm: ±1% of measurement
Angular resolution              step angle approx. 0.36° (360°/1,024 steps)
Scanning time                   100 ms/scan
Ambient temperature/humidity    -10 to +50°C, 85% or less

It takes measurements at distances between 20 mm and 5.6 m and over a scan angle of 240°. The accuracy depends on the distance between the sensor and the target. If the target is located at a distance between 20 mm and 1 m, the accuracy equals ±10 mm. If the distance is more than 1 m and less than 4 m, the accuracy is between ±10 mm and ±40 mm. For my work this means that the points near the robot will yield a fine surface with a better accuracy than the points far away from the robot, which carry a lot of noise and probably lead to a surface that only approximates the real one. Since the step angle is constant, there are more points near the sensor and fewer far away from it. An important conclusion of this fact for my work is that the points are not uniformly distributed. The scanning time is a kind of reference for the running time of the 3D representation program: if 50 scans are taken for a 3D representation, it takes 5 s. Ambient temperature and humidity determine the conditions under which the sensor can be used.
1.3 Scan matching and localization

The first step towards the localization of the robot is the matching of scans into a bigger representation map of the environment. Two scans having overlaps can be matched to a new scan with more points using the common ICP (Iterative Closest Point) algorithm proposed by P.J. Besl and N.D. McKay [6]. For this robot some modifications to the algorithm were made [2]. On the other hand it is also possible to use odometry to match the scans more efficiently or more quickly. The intention is to use these two operations together: odometry provides approximate information about the location, and the scan matching provides the exact or refined location of the robot. An example of scan matching is demonstrated in figures 1.3 and 1.4. The left picture in figure 1.3 shows a point cloud taken from a point of view A. On the right side the point cloud is taken from another point of view B. In figure 1.4 both of them are matched to a new point set containing points from the first and from the second point cloud (red and green respectively).

Figure 1.3: Point clouds from different points of view [5]

Figure 1.4: Matched point clouds [5]
1.4 Starting Point

Problem definition:

Given: noisy 3D point cloud detected from pipes with different topology
To find: reconstructed and visualized surface built from the data points
Using: the software MATLAB, papers on the topic

The goal of my work is to reconstruct a surface from noisy point clouds in 3D space. The data points are provided by the sensors before or after scan matching. In the first part, Studies on Mechatronics, I have to research the papers and classify them. Then I have to choose a method to apply to the given noisy points. In the next part, the Bachelor Thesis, I have to implement the method in MATLAB, apply it to real point clouds provided by the Magnebike and analyze the results.
Chapter 2

Literature Research

This chapter forms part of the work for Studies on Mechatronics. The main points of this subject are:

- searching for papers about the topic, starting with some selected papers
- classifying these papers by methods or other important criteria
- figuring out the actual state and the main research directions

The first section describes how to find papers on a certain subject in general. The classification of these papers on the topic of surface reconstruction, or of the methods applied to this problem, is specified in section 2.1.5. In the last section, 2.4, I decide which method to choose.

2.1 Available Papers

For my research I used the following services with additional access provided by ETH Zurich:

- Google Scholar
- CiteSeer, Scientific Literature Digital Library and Search Engine
- IEEE Xplore, Digital Library

The chosen search words and their extensions are described in the following subsections.

2.1.1 Problem Definition

The problem to solve in this work is defined as follows: Building 3D representations from laser scans, with the description: "The goal of this project is to generate environment representation from point clouds of laser scans. Data from single scans are used as local maps, whereas consecutive scans are merged to build a global 3D representation of the environment." [7]. The chosen keywords are: 3D representation, 3D mapping, surface reconstruction, point cloud, mesh.
2.1.2 Extension of the Search Field

While searching for papers I noticed the following six important names associated with the problem. Their publications can be found online:

- Nina Amenta
- Hugues Hoppe
- Tamal Krishna Dey
- Jean-Daniel Boissonnat
- C.-K. Tang and G. Medioni

There were also many other names of people who provided important parts of algorithms or solutions to some mathematical problems. It can be helpful to look for the latest papers on a topic or algorithm of a certain author, because they are often improvements of the first ones. To find the basic ideas, or to follow the history or changes of a method, one can take a look at older versions.

2.1.3 Reading Papers

Papers often have a defined structure:

1. Abstract
2. Introduction
3. Main content
4. Conclusion
5. Future work
6. References

Since papers on the problem of this work have technical content, it is sufficient to read the abstract, conclusion or future work and to look at the figures to follow the main ideas of an article. If it seems interesting and important for the topic, the whole article can be read. While searching for papers on 3D reconstruction from noisy point clouds it was easy to tell from the figures whether an article was useful or not: there should be point clouds, reconstructed surfaces or meshes depicted. When an article did not contain such figures, it often indicated that the article dealt only with a specific solution or part of an algorithm, with another method, or was not useful at all. The words "surface reconstruction from a point cloud" or synonyms in the abstract were also good signs for finding more information on the topic.

2.1.4 Arranging Papers

After filtering the useful ones from all papers found, the next step was to distinguish between important and less important papers. I defined papers with the following qualities as important:

- general description of a method
- references or names of used algorithms
- the method is applicable to a concrete problem

With the chosen important papers containing different methods I created a table with 11 columns: Author, Name, Year, Abstract, Works for, Doesn't Work for, Ideas, Remarks, Advantages, Disadvantages and Related Work. I sorted the papers by date because I was interested in the history and updates. Altogether I had 18 papers in the table (Figure 2.1).

Figure 2.1: Table with papers on the topic: 18 rows x 11 columns

2.1.5 Classifying Papers

According to this table I classified the papers and, respectively, the methods described within them. I distinguish between two phases. In the first phase the mesh is built from a point cloud, but the surface is not necessarily smooth. In the second phase the mesh obtained in the first is improved with the purpose of getting a smooth surface and reducing the data volume. The representation of the two stages is shown in the surface reconstruction of a bunny in figure 2.2.

Due to the fact that methods applicable to noisefree data sets often cannot be applied to data sets with noise, it is also important to distinguish between noisefree data sets and those with noise. It is possible to use filters on noisy data points and thus get a set of noisefree data points. But it is difficult to develop a good and robust filter which is able to differentiate between points of the real surface and noisy points without making mistakes. For this reason there are methods for data points with noise using functions similar to filters, as well as methods very different from those for noisefree data. The action of Phases 1 and 2 is shown in figures 2.3 and 2.4. The violet boxes describe multiple methods described in various papers, the yellow ones only one method.
Figure 2.2: a) The original object: bunny; b) bunny representation after Phase 1; c) bunny representation after Phase 2 [8]

Figure 2.3: Phase 1: Mesh built from a point cloud
Figure 2.4: Phase 2: Optimization of Phase 1
2.2 Phase 1: Initial Surface Estimation

As already mentioned, in Phase 1 the first approximation of the surface from a point cloud is made. An overview of Phase 1 is shown in figure 2.3. It is important to distinguish between methods only applicable to point clouds without noise and those that handle noise. If there is no noise, methods based on Delaunay Triangulation and Neural Networks can be used. Otherwise, if noise is present, data points can be represented using methods based on eigenvalue and eigenvector decomposition, Hoppe's idea with the integrated Marching Cubes method, Neural Meshes or approaches including interpolation. The methods used for data points with noise can also be applied to noisefree problems, but this is not recommended due to the higher computing time.

2.2.1 Data Points Without Noise

Data points without noise are points perfectly fitting the surface of the object. Since no sensors are perfect, these points are often generated artificially. The surface reconstruction of such point clouds can be of use to start solving the problems with point clouds in general, to provide the surfaces of unknown objects with exactly known points, and to quickly test a method for a specific problem. There are two types of methods only applicable to noisefree point clouds: methods based on Delaunay Triangulation and Neural Networks.

Figure 2.5: Phase 1, overview of the methods applicable to noisefree data points

An overview of the procedures based on the Delaunay Triangulation is shown in figure 2.5. The first section includes methods of surface reconstruction from noisefree data points based on algorithms describing how to find and connect the neighboring points. Power Crust is an interpolating method that is deduced from the Voronoi Diagrams and that uses 3D Delaunay Triangulation to build the Power Shape. In the third section the parameterization of the point cloud is utilized to triangulate it to a mesh. The last one corresponds to a method using a Neural Network.

Neighborhood

The main idea of this method is to reconstruct a smooth surface from unorganized sample points using next neighbors. That is only possible because there is no noise
and the points lie in the same layer of the surface.

Triangulation algorithm of Oblonsek and Guid [9]: The method consists of two stages: Phases 1 and 2. In the first phase the base approximation of the object surface is achieved using 2D Delaunay Triangulation. The main idea is to find the neighborhood (neighbor points) of each point of the data set and connect them to triangles. The triangulation starts with one triangle and is expanded by adding others satisfying the conditions demonstrated in figure 2.6. The conditions relate to the boundary edges and vertices of the latest triangulated mesh. This means that for every new triangle added, the triangulated mesh and the boundaries must be updated. The advantage of this approach is its linear running time complexity (the running time is directly proportional to the number of points). The drawbacks are the requirements on the noisefree point set: the maximal distance between the points has to be smaller than half of the radius of the maximal curvature, the point set must be isotropic (the closest points to each point p_i from the data set must be on both sides of the normal plane through p_i), and geometrically close points should be topologically close. It also does not yield good results for surfaces with gaps.

Figure 2.6: Cases in which a new triangle is added into the triangulation. The boundaries are the main components in this procedure [9].

Natural Neighbor Interpolation of Distance Functions [8]: This method uses Delaunay Triangulation and Voronoi Diagrams to define the natural neighbors (the natural neighbors of a point x are the neighbors of x in the Delaunay triangulation). The triangulation part then uses distance functions to define the distances between the point set and the triangles; therefore this method is an interpolation (the data points do not necessarily build the vertices of the mesh). If the distance is zero (the zero-set approach), the triangulated mesh fits the data points perfectly. The advantages of this procedure are that it handles point sets with a nonuniform distribution as well as sparse sets, does not use any user-tuned parameters, is theoretically guaranteed and has an integrated Phase 2. It can also be adapted for different errors (zero-set values), so that the number of triangles in the mesh or the smoothness of the mesh can be changed, respectively. The limitations are: the normals to the surface must be known or computed by linearizing the surface by planes, and the surface is assumed to be smooth (without sharp edges) and without boundaries (like a sphere). The results of surface reconstruction with this method are illustrated in figure 2.2.

3D Delaunay Triangulation [10]: The surface representation is built by the sides of tetrahedrons filling the whole volume of the object, using Delaunay Triangulation in 3D.
Figure 2.7: Triangulation algorithm of Oblonsek and Guid: a) point cloud not satisfying the conditions; b) point cloud satisfying the conditions; c) surface reconstruction of the point cloud in b) [9].

The advantage is the direct implementation in MATLAB. The drawback of this method is its applicability to objects with closed surfaces without leaks or smooth branches only. In figure 2.8 this method is applied to the vertices of a cube.

Figure 2.8: 3D Delaunay Triangulation of a cube [10].

Voronoi-based Surface Reconstruction [11],[12]: In this method 3D Delaunay Triangulation, 3D Voronoi Diagrams (figure 2.9) and Medial Axes are used to reconstruct the surface from unorganized sample points. After application of this method all sample points are connected to triangles and a surface is built. Advantages of Voronoi-based surface reconstruction are a short running time (dominated by the computing time of the 3D Delaunay Triangulation), that it does not need any user-defined parameters, and that it is simpler and more direct than
the zero-set approach. Problems appear with objects that have sharp edges and boundaries. Furthermore, one needs special filters to get a hollow surface of an object: since 3D Delaunay builds tetrahedrons (see also [10]), the triangles within the object must be cancelled, as well as all small triangles normal to the surface building another layer. Then this approach can also represent surfaces with boundaries as hollow objects.

Figure 2.9: Comparison of a Voronoi Diagram in 2D (a) and in 3D (b) [12]. Black points are data points and red lines build the Voronoi Diagrams.

Figure 2.10: Voronoi-based surface reconstruction [12].

Power Crust and Power Shape [13],[14]: This is a piecewise linear approximation of the surface over the points using the Medial Axes Transform (MAT, which represents an object as an infinite union of balls; consider figure 2.12), deduced from the weighted Voronoi Diagrams. The Power Crust is built by the MAT and therefore its faces are not triangles (the faces are built by points of intersection between the balls in 3D space). Since this method is an interpolating method, its vertices are not necessarily sample points, just as not all sample points are necessarily vertices of the reconstructed surface. The Power Shape can be deduced from the Power Crust using 3D Delaunay Triangulation and is therefore a triangulation mesh. The steps of this procedure in 2D are demonstrated in figure 2.11. Special about this approach is the avoidance of the polygonization (the subdivision of a plane or surface into polygons), hole-filling (used to obtain a closed mesh) and manifold extraction
steps (an optimization of Phase 1). The results for objects with smooth surfaces are very good, as can be observed in figure 2.12. The drawback of Power Crust and Power Shape is the expensive cost of computing the Medial Axes Transform. Since the Power Shape uses 3D Delaunay Triangulation, the volume of the object is filled; to get a hollow object representation the inner components must be eliminated.

Figure 2.11: Power Crust: a) an object with its Medial Axes; b) Voronoi Diagrams; c) inner and outer polar balls centered on the Voronoi lines; d) Power Diagram (Medial Axes Transform); e) Power Crust [13]

Figure 2.12: Power Crust: a) inner polar balls; b) Power Crust built by the inner polar balls (balls coming out of the object are cut off, leaving a hole) [14]

Meshless Parameterization [15]: In this method a 3D point set is first parameterized (mapped into a planar parameter domain [15]) by solving a sparse linear system problem. Then the mapped points are triangulated using Delaunay Triangulation.
The triangles of the mapped points correspond to the mesh triangulation of the initial point set. The data points can be unorganized, but they must derive from a single surface patch. The advantages of this approach are its independence of any given topological structure and its good results; the drawback is the long computing time. Some examples of this approach are shown in figure 2.13. The picture c) on the right side is the result if noise is present in the data points: the features are rough and the person is quite difficult to recognize.

Figure 2.13: Meshless Parameterization: a) parameterized point set; b) mesh for data points without noise; c) mesh for data points with noise [15]

Neural Network [16]: With this method the surface can be reconstructed from a dense unorganized collection of scanned point data using neural networks. The main idea is to collect more detailed information about an object's shape using randomized selection. That means the shape reconstruction from an unorganized point cloud is done without arranging the point elements. For this approach there are three important specific local features to be analyzed: pixel depth, surface normals and curvatures. The advantages of the Neural Network are its simplicity, efficiency, uniformity and accurate results. It also has an integrated Phase 2. The drawback is the use of many specific functions and methods (Sigmoid Function, Zernike Moments, Learning, Neural Network).

Figure 2.14: Overview of the method Neural Network [16]

2.2.2 Data Points With Noise

Data points with noise arise from real scans, for the reason that real systems are never perfect. There are point clouds with a lot of noise and those with less
noise. If there is only a little bit of noise present, the point cloud can be assumed to be noisefree, but in the majority of cases this is not possible. The environment influences the sensors and it is difficult to predict the resulting point cloud. Therefore the methods for noisy data points have to be very robust. To verify the results it is possible to generate an artificial data set of points coming from an object, with noise added using randomizing functions. There are many methods applicable to point clouds with noise. Some of them are extensions of the methods used for noisefree data points. As mentioned above, all these methods can also be applied to point clouds without noise, but this is not recommended due to the high costs. An overview of these methods is shown in figure 2.15.

Figure 2.15: Overview of Phase 1 for noisy data points

Methods including Interpolation

RBF (Radial Basis Functions) [17]: In this method Radial Basis Functions (which can be interpreted as a simple kind of Neural Network) and low-pass filtering (implicit smoothing) are used to reconstruct a surface from range data. The computations also involve convolutions, the Fourier Transform and discrete smoothing. The advantages of the method are its independence of the degree of smoothing; it is very effective and provides visually good results. The main drawback of this procedure is the interpolation over large and irregular holes, which means that it is not recommended for objects with smooth branches and gaps. Other problems appear with the discretization (aliasing) and the usage of interpolating and filtering functions.

BPA (Ball-Pivoting Algorithm) [18]: The BPA computes a triangle mesh interpolating a given point cloud and is related to Alpha-Shapes (a method used to generate a possible hull of the object). The main idea is to take a ball of a user-defined radius r and let it roll along the data points. If it touches three points at the same time without containing other points, these build a triangle and are accepted as points of the triangle mesh. This idea in 2D (building lines instead of triangles) is illustrated in figure 2.17. The handling of data points with noise in 2D is represented in figure 2.18. The advantages are its robustness, efficiency and flexibility (setting the radius parameter r).
Figure 2.16: Radial Basis Function applied to a point cloud a) with the result b); c) visualization of the RBF describing the distance to the surface [17]

The drawbacks are the use of the user-defined parameter r (radius of the balls), expensive costs for objects with different degrees of smoothness and an inconsistent avoidance of noise. Since only one layer of points is accepted (see figure 2.18), it is not guaranteed that this layer is the best approximation of the real surface, for the reason that noise can occur on both sides of the real surface. Therefore, for data points with a lot of noise, an average surface approximation would be a better approach.

Figure 2.17: Ball-Pivoting Algorithm in 2D; a) a good choice of r; b) with the same r as in a), a sparse data set causes holes in the mesh; c) when the curvature is larger than 1/r, some features cannot be represented and remain missing [18]
Figure 2.18: Ball-Pivoting Algorithm in 2D for noisy data points; a) surface samples lying below are not touched by the balls and remain isolated; b) an outlier is isolated if its surface orientation (in 2D, line orientation) is not consistent with the surface orientation of the object; c) the choice of the radius can affect the results by creating a double layer out of a surface [18]

Figure 2.19: Ball-Pivoting Algorithm applied to real data points [18]

Power Crust [19],[14]: This method for noisefree data sets was already introduced in section 2.2.1. Its extension using extra limitations for the polar balls (see figure 2.11 c)) leads to a method applicable to noisy point clouds. This method, using interpolation, obviously conforms to the shape of the object better than polygonal models using triangulation. The results are very good for data points with noise as well as for objects with smooth or sharp features. It also works for objects with holes and gaps. The main drawbacks of this method are a large number of faces compared with other triangulating methods and the expensive cost of computing the MAT.

Hoppe's idea with integrated Marching Cubes [20],[21],[1]: The main idea of this method is to approximate the surface of an object using a zero set and local linearization. That means the surface can be assumed to be built out of tangent planes through surface points. If the number of points is big, the surface is better approximated, but the amount of time needed for computation also rises. It is the opposite way round if the number is small: the surface is rough, but it can be computed quickly. To create a triangulation mesh the Marching Cubes algorithm [21] is used. In this algorithm the whole volume is subdivided into cubes. Then only the cubes cut by the surface of the object remain. From these interfaces, special weights and an MC table, the triangles of the triangulation are built. This method has many advantages regarding the topology of the object: the presence of boundaries, gaps and holes, as well as a big amount of noise, are well handled. The disadvantage is that the smoothness of the mesh, respectively the number of
triangles in the mesh after Phase 1, is determined by the smoothest part of the object. But there also exists a paper from the same authors with methods for Phase 2 [22], which can take a rough mesh after Phase 1 and the data points as input and output a new refined mesh. Another drawback of this method is its user-defined parameters.

Figure 2.20: Power Crust results: a) point cloud; b) Power Crust; c) transparent Power Crust with its simplified Medial Axes in 3D; d) Power Crust of a noisy point cloud [14],[19]

Figure 2.21: The result of Phase 1 using the method of H. Hoppe: a) original object; b) point cloud; c) surface reconstruction [22]

Neural Meshes [23]: The Neural Meshes are based on a Neural Network extended by a Learning Algorithm. The topology of the object surface is learned through special operators using statistics. The algorithm contains sampling, smoothing, connectivity changes and topology learning.
The results of the algorithm look quite good and all user-tuned parameters can be set intuitively. The drawbacks of the algorithm are the number of user-tuned parameters (nine), the absence of any guarantee on the results and its low speed.

Figure 2.22: The results of the Neural Meshes for a noisy point cloud using different parameters [23]

Methods based on Eigenvalues and Eigenvectors

Tensor Voting [24]: The main idea is to use Tensor Voting and Tensor Decomposition to reconstruct the surface. In the very first step every point is represented as an isotropic tensor (a ball) of unit radius. In the next step the points communicate with each other in their neighborhood and get new information about their orientation and curvature (with respect to the surface). The Tensor Decomposition is then used to get a 3D ball, 3D plate and 3D stick for each point (see figure 2.23). With this information a saliency tensor field can be built. Special about this approach is the surface representation from curves, normals and points. Feature extraction is then applied to get a smooth surface. The advantages of this method are good results for different topologies of surfaces and its robustness. The drawback is the usage of a user-defined parameter.

Anisotropic Basis Functions [25]: In this method a Tensor Field and Anisotropic Basis Functions (ABF) are used to reconstruct the surface. The main idea of the ABF is to reconstruct objects with asymmetry on the edges and sharp features using tensors. There also exist Isotropic Basis Functions (IBF) that even out the edges, i.e. do the converse procedure (see figure 2.24). After the first reconstruction, filtering is used to avoid the noise (see figure 2.25). Since the procedure uses interpolation, the mesh does not consist of triangles. The advantage of this procedure is its very good results for sharp edges. The drawbacks are the uneven surfaces (especially if noise is present) caused by the ABF.
Figure 2.23: Tensor Decomposition and Tensor Voting for a point cloud with noise [24]

Figure 2.24: Comparison between Isotropic (a) and Anisotropic (b) Basis Functions for sharp edges [25]

Figure 2.25: The results: a) using IBF; b) using ABF; c) using ABF and filters; d) final textured reconstruction [25]
2.3 Phase 2: Mesh Optimization

When Phase 1 is completed, Phase 2 can be applied. As already mentioned in section 2.2, there are some methods in Phase 1 with an integrated Phase 2. Some meshes can be refined with Phase 2, but not all: most triangulation meshes can be refined, whereas most other meshes without polygonization already include Phase 2. The main idea is to refine the surface in the parts where it is smooth and to reduce the number of vertices in the mesh. Since these two procedures interfere with each other, a compromise must be found. As in Phase 1, there is also a distinction between data points with and without noise. It is possible to try to apply a Phase 2 method for noisefree data points to data points with noise, but there is no guarantee that it works.

2.3.1 Data Points Without Noise

A triangulation mesh can be improved by reducing the number of triangles in flat regions and increasing their number in areas of high curvature. The representation of sharp edges is also important, because after Phase 1 the edges are often smoothed. It is also possible to interpolate the surface with a continuous surface. In theory this amounts to computing a surface with an infinite number of triangles, which is not possible in practice. For this reason a continuity rate is defined to set the stop criterion.

Figure 2.26: Overview of Phase 2 for noisefree points

Sharp Edges [9]: The points of the triangulation mesh are classified into simple vertices and vertices lying on a sharp edge. The distance vector from the vertex to the plane (for simple vertices) or to the sharp edge (for the others) is computed and added to the vertex, so that it lies in the plane or on the sharp edge respectively.

Figure 2.27: Classification of the vertices: a) simple vertex; b) vertex lying on a sharp edge [9]
Change the Number of Triangles [9]: In the regions with high curvature of the surface a division of triangles is carried out to get a smooth surface. Merging the triangles in areas with low curvature saves space and computing time in the next steps.

Figure 2.28: Division of a triangle into 4 new smaller triangles to get a smooth surface [9]

Interpolation [9]: It is possible to approximate a curve by a continuous function. For example, the Nielson side-vertex method can provide a surface with divided triangles until the stop criterion is reached. Without this criterion the division would occur without end, because the best approximation consists of an infinite number of triangles. Another approach would be to interpolate the surface by a 3D parameterized function. The problems here arise with the coordinates: a function is defined as a unique assignment of a value to each input of a specified type. So given, for example, a pipe winding like a snake around the z-axis, it is impossible to reconstruct it with the help of one continuous function. It must first be split into parts with unique z-values, which can prove to be very difficult, before it can be interpolated. Then the parts are put together again.

2.3.2 Data Points With Noise

The handling of noisy data points in Phase 2 also differs from that of noisefree data. Some methods were described by H. Hoppe [22],[1]. All of these methods are only applicable to triangulation meshes.

Hoppe's Energy Function

The Energy Function of Hoppe is defined in the following way [22]:

    E(K, V) = E_{dist}(K, V) + E_{rep}(K) + E_{spring}(K, V)    (2.1)

with the definitions:

    E_{dist}(K, V) = \sum_{i=1}^{n} d^2(x_i, \phi_V(|K|))    (2.2)

    E_{rep}(K) = c_{rep} m    (2.3)

    E_{spring}(K, V) = \sum_{\{j,k\} \in K} \kappa \|v_j - v_k\|^2    (2.4)

E_dist represents the distance from the mesh to the sample points, E_rep penalizes meshes with a large number of vertices m, and E_spring represents the importance of the distances between neighboring vertices v_j and v_k. Hence the function E represents a compromise between a smooth surface and a small number of triangles.
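To make the role of the three terms concrete, the following minimal MATLAB sketch evaluates them for a candidate mesh. The names X, V, EDG, c_rep and kappa, as well as the helper dist2mesh standing for the projection phi_V(|K|), are illustrative assumptions, not code from [22]:

    % Evaluate Hoppe's energy (2.1)-(2.4) for a candidate mesh.
    % X: n-by-3 data points, V: m-by-3 mesh vertices, EDG: e-by-2 edge list.
    % dist2mesh is a hypothetical helper returning, for every data point,
    % the distance to its closest point on the mesh (phi_V(|K|)).
    Edist   = sum(dist2mesh(X, V).^2);                                 % (2.2)
    Erep    = c_rep * size(V, 1);                                      % (2.3)
    Espring = kappa * sum(sum((V(EDG(:,1),:) - V(EDG(:,2),:)).^2, 2)); % (2.4)
    E       = Edist + Erep + Espring;                                  % (2.1)

During optimization, moves that lower E trade closeness to the data off against the vertex count and the edge lengths.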
Figure 2.29: Overview of Phase 2 for noisy data points

Edge Split [1]: The two triangles corresponding to an edge are split into four new triangles.

Edge Collapse [1]: Two triangles disappear.

Edge Swap [1]: The connection between two triangles changes.

Figure 2.30: Edge collapse, edge split, edge swap [22]

Piecewise Smooth Subdivision Surface Optimization

Subdivision Matrix [1]: This matrix was introduced by Loop. Using it allows one to produce tangent-plane continuous surfaces of arbitrary topological type. Since implementing this exactly would require an infinitely long time (analogous to Interpolation), a stop criterion or rate of continuity has to be chosen.
Figure 2.31: The results after Phase 2, applied to the Phase 1 output of Hoppe's idea introduced in section 2.2.2 [22]

Edge Tag [1]: This is an additional function to the ones already introduced: edge split, edge collapse and edge swap. This function decides whether an edge is sharp or not and adds this property to the edge.
2.4 Choice of a Method

Since there are many different methods to reconstruct the surface, I had to find important criteria to decide. The first one is that the point cloud is noisy, so all methods only applicable to noisefree data are not considered further. The other criteria are: kind of mesh (triangles or not); speed; topology (whether the topology of the pipe where the robot moves satisfies the conditions of the method); result (how the meshes look visually); information (whether there is enough information in the papers on the topic); and noise (robustness of the method). The green fields in the table show good results. I decided to mark the triangulation meshes as good, because they are practical to use and easier to modify manually. Triangles can easily be taken out of the mesh, they can be modified, and it is possible to find boundaries using their connection properties. Furthermore, it is not a problem to add new triangles produced from another part of the pipe to the existing mesh. Another point is that there are many methods to improve the mesh in Phase 2. The columns Result and Information were evaluated by me and are more subjective. Table 2.1 shows that the method of H. Hoppe appears to be the best for the problem defined by the environment and sensors of the Magnebike robot.

Table 2.1: Table used to decide which method to apply to reconstruct the point clouds provided by the Magnebike robot.
Chapter 3

Surface Reconstruction from Point Clouds

This chapter describes the main part of my work, namely the Bachelor Thesis. In the previous chapter the methods of surface reconstruction from point clouds were discussed and a method applicable to the actual problem was chosen. Extended research of further papers on the topic led me to the work of H. Hoppe [1] with a detailed description. Equipped with this information I implemented Phases 1 and 2 of this work. In section 3.4 an alternative method similar to Marching Cubes, developed by me, is described. A more precise explanation of Marching Cubes is given in section 3.2.6.

3.1 Problem Definition

Figure 3.1: Noisy point cloud

Given:
- a set X containing 3D noisy points x_i, i ∈ {1,...,n}, with n the number of points
- the unknown original surface U, from which the set X arises, is of arbitrary topology including boundaries and discontinuities

Software: MATLAB

To find: the triangulation mesh best approximating U
Since the real point set contains a huge number of points and the topology is not trivial, an artificially generated point cloud with noise, building a cylinder, was used (Figure 3.1) to visually verify the steps. The visualizations in the following sections 3.2 and 3.3 are based on this point cloud. In section 3.2.7 the results of applying the method to real data sets are shown.

3.2 Phase 1: Initial Surface Estimation

The main steps in Phase 1 are:

1. find the neighborhood (neighbor points) of each point, lying in the ball of radius ρ centered at this point
2. compute the average point of each neighborhood and the corresponding plane representing the linearized surface at this point
3. orient all planes to point outside
4. divide the whole volume into cubes of size ρ
5. build the signed distances from the vertices of the cubes to the plane corresponding to the nearest average point
6. apply the method of Marching Cubes to get a triangulation surface

3.2.1 Neighborhood

The given noisy data points x_i can be described as x_i = y_i + e_i, with y_i ∈ R^3 a point on U without noise and e_i ∈ R^3 an error vector representing the noise. If ‖e_i‖ ≤ δ for all i, then the set X is called δ-noisy. The density of the points is defined by the noisefree point density, i.e. the density of the y_i. Let Y = {y_1,...,y_n}; then Y is ρ-dense if any sphere of radius ρ with center on U contains at least one point of Y. This holds if the minimal distance of each point of Y to the other points of Y is smaller than ρ. Summarizing these definitions, the following conclusions can be drawn:

- If the surface U has holes of radius r, they can only be represented in the surface reconstruction if r is larger than ρ + δ.
- A point x_i is an outlier if the minimal distance of this point to all other points of X, d(x_i, X), is larger than ρ + δ.
- Two sheets of the surface U are assumed to have at least the distance ρ + 3δ between each other to be represented correctly. The term 3δ comes from taking sampling noise into account. (Since there are often too many points in a laser scan, the points are sampled: only some of them remain for the surface representation. The sampling problem is discussed in section 3.2.8.)

The very first step is then to find ρ and δ and to set the radius of a neighborhood equal to ρ + δ. For artificially generated points they are known; their choice in general is discussed in section 3.2.8. To find the neighborhood Nbhd(x_i) of each point x_i, a KD-tree can be used. Embedding special KD-tree packages for MATLAB, a function can be built with the following input and output:

    [neighbors x_j of a point x_i, data set] = function(data set, point x_i, ρ, δ)    (3.1)

This function is used two times, as follows:

1. to find outliers and eliminate them, defining a new data set:

    [data set] = function1(data set, point x_i, ρ, δ)    (3.2)

2. to find the neighbor points for each point of the new data set:

    [neighbors x_j of a point x_i] = function2(data set, point x_i, ρ, δ)    (3.3)

The elimination of the outliers uses the conclusions above. After the elimination the data set is the same or smaller.
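A minimal MATLAB sketch of these two uses, built on the rangesearch function of the Statistics Toolbox (which internally uses a KD-tree for low-dimensional data), could look as follows; the variable names are illustrative, not the thesis code:

    % Outlier removal and neighborhood search (section 3.2.1).
    % X: n-by-3 matrix of data points; rho, delta as defined above.
    r   = rho + delta;            % radius of the neighborhood ball
    nbr = rangesearch(X, X, r);   % nbr{i}: indices of points within r of x_i
    % 1) eliminate outliers: points whose only neighbor within r is themselves
    keep = cellfun(@numel, nbr) > 1;
    X    = X(keep, :);
    % 2) recompute the neighborhoods for the cleaned data set
    nbr  = rangesearch(X, X, r);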
3.2.2 Average Points and Corresponding Planes

Having the points of a neighborhood, the next step is to find their orientation, i.e. the linearized plane of U in each neighborhood. To do so, an average point and a covariance matrix must be computed for each neighborhood. The procedure consists of 5 steps:

1. compute the centroid o_i of each Nbhd(x_i)
2. compute the covariance matrix CV_i for each neighborhood Nbhd(x_i)
3. compute the eigenvalues of CV_i, so that λ_{i,1} < λ_{i,2} < λ_{i,3}
4. compute the eigenvector v_i^1 associated with λ_{i,1}
5. set the normal of the tangent plane going through the point o_i equal to v_i^1

To compute the centroid o_i of the neighborhood Nbhd(x_i), the mean value of the neighborhood points is computed for each coordinate; the centroid is thus the average point of Nbhd(x_i). The covariance matrix CV_i is computed using the centroid o_i and the neighborhood Nbhd(x_i):

    CV_i = \sum_{j=1}^{m} \begin{pmatrix} (p_{j,1}-o_{i,1})^2 & cv_{12} & cv_{13} \\ cv_{12} & (p_{j,2}-o_{i,2})^2 & cv_{23} \\ cv_{13} & cv_{23} & (p_{j,3}-o_{i,3})^2 \end{pmatrix}    (3.4)

with cv_{12} = (p_{j,2}-o_{i,2})(p_{j,1}-o_{i,1}), cv_{13} = (p_{j,3}-o_{i,3})(p_{j,1}-o_{i,1}), cv_{23} = (p_{j,2}-o_{i,2})(p_{j,3}-o_{i,3}), p_j ∈ Nbhd(x_i), m the number of points in Nbhd(x_i), p_j = (p_{j,1}, p_{j,2}, p_{j,3}) and o_i = (o_{i,1}, o_{i,2}, o_{i,3}). The matrix represents, so to say, the oriented distances between the points in Nbhd(x_i). The unit eigenvector associated with the smallest eigenvalue of the covariance matrix CV_i is the normal vector to these oriented distances, i.e. the normal vector of the tangent plane of the surface. The results are shown in figure 3.3.
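The following minimal MATLAB sketch carries out steps 1 to 5 for one neighborhood; P is assumed to be an m-by-3 matrix holding the points of Nbhd(x_i), and all names are illustrative:

    % Centroid and tangent-plane normal of one neighborhood Nbhd(x_i).
    oi     = mean(P, 1);      % step 1: centroid o_i
    Q      = P - oi;          % centered neighborhood points
    CV     = Q' * Q;          % step 2: covariance matrix, eq. (3.4)
    [V, D] = eig(CV);         % step 3: eigenvalues and eigenvectors
    [~, k] = min(diag(D));    % smallest eigenvalue lambda_{i,1}
    ni     = V(:, k)';        % steps 4 and 5: unit normal of the tangent plane

P - oi relies on implicit expansion (MATLAB R2016b or newer); on older versions bsxfun(@minus, P, oi) does the same.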
Figure 3.2: Data points in blue and centroids in green

3.2.3 Orient Planes Outside

The procedure of orienting the normals so that they all point in the same direction requires 4 steps:

1. compute the neighborhood of each centroid o_i
2. compute the weights of the orientation of two vectors associated with neighboring centroids and build a weighting matrix
3. orient all normals pointing outside or inside using the Minimal Spanning Tree
4. orient all normals pointing outside

Neighborhood of centroids

In the first step the neighborhood of each centroid in the data set of centroids is found using the function:

    [neighbors o_j of a point o_i] = function2(data set of centroids, point o_i, ρ, δ)    (3.5)

Weighting matrix

Next the weights must be defined: if two vectors are parallel, their weight should be 0, and if they are orthogonal to each other, their weight should be 1 (the vectors v_i and v_j are unit vectors and therefore |v_i · v_j| ∈ [0, 1]). Other combinations therefore have weights in (0, 1). This correlation arises from the definition of the Minimal Spanning Tree, which connects first the points with the lowest costs. In this case, when orienting the normals, preferably parallel neighboring vectors (normals) are wanted, because they better guarantee the consistency of the approach. Therefore the weights are defined as:

    w_{i,j} = 1 - |v_i \cdot v_j|    (3.6)

The weighting matrix W is built from the w_{i,j} in the corresponding rows and columns. If two vectors do not lie in a common neighborhood, their weight is set to ∞, and therefore these vectors cannot be connected after the application of the MST.
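A minimal MATLAB sketch of this weighting, together with the MST-based orientation of the next step, assuming the graph functions of a recent MATLAB release (all names illustrative), could be:

    % Consistent normal orientation via a minimal spanning tree.
    % C: n-by-3 centroids, N: n-by-3 unit normals, k: number of neighbors.
    [idx, ~] = knnsearch(C, C, 'K', k + 1);   % k nearest centroids (plus self)
    n = size(C, 1);
    I = repelem((1:n)', k);
    J = reshape(idx(:, 2:end)', [], 1);
    w = 1 - abs(sum(N(I,:) .* N(J,:), 2));    % weights of eq. (3.6)
    G = simplify(graph(I, J, w, n), 'min');   % undirected weighted graph
    T = minspantree(G);                       % MST of the component of node 1
    E = bfsearch(T, 1, 'edgetonew');          % tree edges in visiting order
    for e = 1:size(E, 1)                      % flip normals inconsistent
        if dot(N(E(e,1),:), N(E(e,2),:)) < 0  % with their parent in the tree
            N(E(e,2),:) = -N(E(e,2),:);
        end
    end

If the neighborhood graph is disconnected, minspantree(G, 'Type', 'forest') and one traversal per component would be needed.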
Figure 3.3: Normals and middle points

Minimal spanning tree

In MATLAB there already exists a function creating the Minimal Spanning Tree from a weighted matrix. It connects all nodes with the weighted relations defined in the weighting matrix W, in order of lowest costs. Embedding this function, a new function can be created:

    [oriented normals] = function3(normals, centroids, weighting matrix, ρ, δ)    (3.7)

Orient normals pointing outside

Marching Cubes (see section 3.2.6) requires normals oriented to the outside. Since after the application of function3 the normals are oriented either all outside or all inside, an extra step is necessary. The main idea for orienting all normals to the outside is to take the centroid with the minimal value in the x coordinate and analyze the sign of the x coordinate of the associated normal: if it is negative, the orientation of all normals must be changed; if it is positive, the oriented normals already point outside. This relation comes from the multiplication of the vector v_i corresponding to the centroid with the minimal entry in the x coordinate with the unit vector (1, 0, 0)^T: if they are oriented in the same direction, the normals are oriented as wanted. The results are shown in figure 3.4.

3.2.4 Divide Volume into Cubes

The division of the whole volume into cubes of size ρ + δ is done by defining the vertices of the cubes in 3D space. This step is necessary for the method of Marching Cubes.
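A regular grid of cube vertices over the bounding box of the data can be generated, for example, with ndgrid (a sketch with illustrative names):

    % Vertices of a regular grid of cubes with edge length rho + delta.
    s = rho + delta;
    [gx, gy, gz] = ndgrid(min(X(:,1)):s:max(X(:,1)) + s, ...
                          min(X(:,2)):s:max(X(:,2)) + s, ...
                          min(X(:,3)):s:max(X(:,3)) + s);
    verts = [gx(:), gy(:), gz(:)];            % cube vertices in 3D space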
Figure 3.4: Oriented normals (red vectors) on the centroids (green points)

3.2.5 Signed Distance Vectors

For the use of Marching Cubes at the end, the distances between the vertices of the cubes and the planes must be computed. A short overview of this method:

1. compute the distances between the cube vertices and the centroids
2. take the minimum distance
3. compute the distance to the corresponding plane
4. if the projected point z lies in the neighborhood Nbhd(o_i), then this distance is accepted, otherwise not

The distance between a vertex of a cube and a centroid is simply computed using the formula d(a, b) = \sqrt{(a_1-b_1)^2 + (a_2-b_2)^2 + (a_3-b_3)^2}. The minimum can be found directly in MATLAB using the function min(). 3D geometry can be used to compute the distance between the plane and a vertex p, in order to find the projection point z (Figure 3.5).

Figure 3.5: Visualization of the projection problem [1]

    d(p, z) = (p - o_i) \cdot n_i    (3.8)

    z = p - d(p, z) \, n_i    (3.9)

The projected point z is used to find out whether the point p was projected into the neighborhood Nbhd(o_i) or not. If not, the distance is not accepted and is set equal to ∞.
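For one cube vertex p this test can be sketched in MATLAB as follows; oi and ni are the nearest centroid and its unit normal, NB the m-by-3 matrix of the points of Nbhd(o_i), all illustrative names:

    % Signed distance of a cube vertex p, eqs. (3.8) and (3.9).
    d = dot(p - oi, ni);        % signed distance to the tangent plane (3.8)
    z = p - d * ni;             % projection of p onto the plane       (3.9)
    if min(vecnorm(NB - z, 2, 2)) > rho + delta
        d = Inf;                % z outside Nbhd(o_i): reject the distance
    end

vecnorm requires MATLAB R2017b or newer; sqrt(sum((NB - z).^2, 2)) is the equivalent for older versions.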
3.2.6 Marching Cubes

Marching Cubes (MC) [21] is a method used for surface reconstruction. It is based on the signed distances computed in the last section. The main idea is to find the cubes that are cut by the object surface into "cold" and "hot" parts, lying on different sides of the surface (hot and cold are just names; the main idea is to divide these cubes into different parts, as mentioned). The weighted cold and hot vertices are then used to triangulate the planes in the right order. This method uses several big tables with listed vertices, triangulation rules and their connections to a mesh. A visualized table of the triangulation inside a cube using hot vertices is illustrated in figure 3.6. Special about Marching Cubes is its sensitivity to the cube size for noisy data points. In general, Marching Cubes approximates a surface well for small cube sizes, but it then needs a lot of space and much computing time in Phase 2; for big cube sizes the surface reconstruction deviates a lot from the initial surface, but it needs less space and computing time.

Figure 3.6: Table used in the Marching Cubes method to connect the surfaces to triangles. The green points are hot vertices [26]

The result of applying Marching Cubes to the point cloud is shown in figure 3.7.
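The core lookup step of Marching Cubes can be sketched in a few lines of MATLAB; the 8-element distance vector d_cube and the cell array mc_table standing for the MC triangulation table are illustrative assumptions:

    % Classify the eight vertices of one cube by the sign of the signed
    % distance; the resulting 8-bit pattern indexes the MC lookup table.
    hot  = d_cube > 0;                    % vertices on the positive side
    code = sum(hot(:)' .* 2.^(0:7)) + 1;  % 1..256 index into the table
    tris = mc_table{code};                % triangle pattern for this cube

Each of the 256 patterns states between which cube edges the triangle vertices have to be interpolated, with the signed distances acting as weights.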
Figure 3.7: The triangulated surface after the implementation of Marching Cubes. The blue points arise from the point cloud.

3.2.7 Results

In this section Phase 1, initially implemented on an artificially generated noisy point cloud, is applied to real problems. The running time was evaluated on my notebook and varies with the power of the computer. These times should give a feeling for the order of magnitude of the computational time and show the differences in time for different parameters.

Figure 3.8: Phase 1: triangulated surface containing 1505 triangles, reconstructed from 3289 data points in 178 s with the radius 0.11 m
Figure 3.9: Phase 1: triangulated surface containing 1741 triangles, reconstructed from 3462 data points in 214 s with the radius 0.06 m

Figure 3.10: Phase 1: triangulated surface containing 805 triangles, reconstructed from 2558 data points in 55 s with the radius 0.11 m
3.2.8 Discussion

Sampling Rate and Size of Neighborhood

Since point clouds provided by a sensor often contain a large number of points, they must be sampled. During the sampling process the points can be chosen randomly or according to specific criteria. A randomized choice of points can influence the neighborhood size: for example, if the sampling rate is 50%, then the radius of the neighborhood should be set bigger than 1/0.5 times the radius used for the initial point cloud (sampling noise should also be respected). For scans with different point densities the randomized choice does not change the density ratios in the point cloud. It is also possible to filter only points from a certain region (specific choice); these regions can be limited by coordinates or by the density of points.

Figure 3.11: Effect of the sampling rate on running time and results with a constant radius (ρ+δ): a) sampling rate 0.003 with running time 9 s; b) sampling rate 0.006 with running time 15 s; c) sampling rate 0.012 with running time 156 s

In figure 3.11 the effect of the sampling rate on the results is shown. The first representation has holes and gaps, so it is a bad representation. The pictures b) and c) look good, but the second representation still has a hole; the third representation has no holes and looks good. So the ideal sampling rate for this example with this radius lies between the second and third values (between 0.006 and 0.012). Choosing a good value saves space and reduces the running time.

A good sampling rate represents the point cloud well enough to bring out the smooth details to a desirable degree. For the cylinder this means, in 2D, that the smoothness of the circle depends on the length of the lines representing it: the smaller the length, the bigger the number of lines and the smoother the surface (Figure 3.13). An ideal circle is represented by an infinite number of lines.

Figure 3.13: A circle represented by lines of different length, determining the smoothness of this circle
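The randomized sampling itself is a one-liner in MATLAB (X and rate are illustrative names):

    % Keep a random fraction 'rate' of the n points in X, e.g. rate = 0.01.
    n   = size(X, 1);
    X_s = X(randperm(n, round(rate * n)), :);

Because randperm draws uniformly, the relative density differences of the scan are preserved, which matches the behavior described above.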
Figure 3.12: The triangulated surface after Phase 1: a) data set of 2549 points; b) with the radius ρ + δ = 0.05 m after 51 s running time; c) with the radius ρ + δ = 0.1 m after 60 s running time; d) with the radius ρ + δ = 0.2 m after 271 s running time

The radius of the neighborhood is also very important. The relation between radius and results is shown in figure 3.12. To analyze the relation between the running time and the radius, I took a noisefree point cloud of a cylinder with uniform distribution and a known distance between the points, and ran Phase 1 for different neighborhood sizes. The results are represented in figure 3.14. They show that there exists an optimum of the running time, which lies not at the distance between the points but at 1.6 times that distance: for small radii the running time is determined by the computational time for a high number of cubes and the distances and triangles associated with them; for big radii the time for the computation of the average points and normals in a neighborhood dominates.

Figure 3.14: The relation between the running time in s (y axis) and the ratio of ρ to the distance between the points (x axis)
Using the properties of the Magnebike robot sensor and the average radius of a pipe of 0.25 m, the radius for a neighborhood without sampling can be computed. The average distance between the points is then ca. $0.25 \sin(0.36 \cdot \pi/180) = 1.571 \cdot 10^{-3}$ m. For a sampling rate of 1% the average radius is then equal to 0.157 m. The noise in this distance is ca. $\pm 10^{-3}$ m, which means for the radius with respect to sampling noise ca. 0.16 m.

Since the laser sensor used for Magnebike is rotated around an axis with a certain velocity, the resolution of 0.36° can not be taken as the resolution of the scan in 3D. But it is possible to compute the resolution coming from this angular velocity; if it is very slow, the resolution is determined by this rotation around the axis. In an optimal case for surface reconstruction the sensor is rotated stepwise with the same angular distance as the resolution of the sensor of 0.36°. The resolution of 1.57 mm for a noise of ±1 mm is good enough. Since the neighborhood radius of 0.16 m for a sampling rate of 1% was too big, and the distances even after sampling were around 0.05 m, the resolution around the axis was bigger than the sensor resolution or there was more than one scan in the data set. Without knowing this information it is not possible to compute the neighborhood radius exactly.

Figure 3.15: Cumulative distribution of sparse data with big differences in radius (small deviation)

For this case, or if the pipe size is not available, I implemented a function using the cumulative distribution of the distances between some randomized neighbor data points to set the radius ρ + δ (a sketch of this estimation follows at the end of this discussion). If it is known that the distribution is uniform, the radius can be set equal to the maximal distance. If it is not known and the density of points is not constant, the radius can be varied. Values of 0.85 to 0.95, meaning 85% respectively 95% of the computed distances lie under the chosen ρ + δ, provided good results.

Summarizing this discussion, the following observations can be made:

- the sampling rate determines the possible smoothness of the represented surface
- the smaller the sampling rate, the faster the running time
- the running time is not linearly proportional to the number of points in the data set
- there exists an optimal radius near 1.6 times the distance between points in the data set, for which the running time is minimal
- the visualization is best for neighborhood sizes near the distance between the points in the data set
- the bigger the differences in distances between points in the data set, the more difficult it is to find good sampling rates, sampling regions and neighborhood sizes

Neighborhood of centroids

In section 3.2.3 the neighborhood for each centroid was computed. To save computation time it is also possible, for small neighborhood radii, to assume that the neighborhood of a point of the set X is the same as the neighborhood of the corresponding centroid of the points x_i.

Marching Cubes

The surface representations in section 3.2.7 show the sensitivity of phase 1 to the neighborhood size. In the regions with a density near the set radius ρ + δ the surface representation is good, in other regions rather not. Since there is a second phase in which the triangulation mesh is treated, it would be best to get a mesh with a small number of triangles and a good approximation of the surface. If the computing power were infinitely high, a smooth triangulation mesh with a big number of triangles would be no problem. But since power and space are limited, a compromise must be found. A good radius is then defined as the radius of compromise between a large number of triangles and a smooth surface on the one side and a small number of triangles and a rough approximation of the surface on the other side.

Analysing the reconstructed surfaces, differences in the size of the triangles can be observed. There are many small triangles using space and increasing the computational time. A possibility to improve this phase would be to eliminate these small triangles or to obtain a mesh with a smaller number of triangles of the same size in another way.

The effect of the neighborhood radius (respectively the cube size) on the surface reconstruction and the distance between the surface and the data set is shown in figure 3.16. Setting the radius smaller than ρ + δ led to error messages (neighborhoods could not be built) and bigger than 6(ρ + δ) led to triangulation meshes looking like a small sphere inside the point cloud.

Changing the cube size in Marching Cubes led to interesting results (Figure 3.17). Setting the cube size bigger than ρ + δ reduces the running time, but the results are worse. Conversely, setting the size smaller than the neighborhood radius leads to a huge number of triangles in the mesh and expensive costs. Interesting about this behaviour is the non-linearity in running time (the running time increases explosively with decreasing cube size) and the quadratic growth in the number of triangles (halving the cube size causes roughly four times more triangles). Setting the cube size smaller than ρ + δ appears to provide smoother results, but in reality they are not and can not be smoother than the resolution (ρ + δ). So a good choice of the cube size is about 1.5(ρ + δ): a compromise between short computing time and a good result. In general, the running time is very sensitive to the parameters (ρ + δ), cube size and sampling rate. The results might look similar for several combinations of parameters, but the running time can be different.
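Returning to the radius estimation via the cumulative distribution: a minimal sketch of how such a function could look (the helper name, the quantile parameter q and the choice of 200 sample points are assumptions, not the thesis code):

    function radius = estimate_radius(X, q)
    % Estimate rho+delta from the empirical cumulative distribution of
    % nearest-neighbor distances of randomly chosen points; q is the
    % quantile, e.g. q = 0.9 means 90% of the distances lie below radius.
    n = size(X, 1);
    idx = randperm(n);
    idx = idx(1:min(200, n));                 % a few random sample points
    dmin = zeros(numel(idx), 1);
    for k = 1:numel(idx)
        d = sqrt(sum(bsxfun(@minus, X, X(idx(k),:)).^2, 2));
        d(idx(k)) = inf;                      % ignore the zero self-distance
        dmin(k) = min(d);                     % nearest-neighbor distance
    end
    dmin = sort(dmin);                        % empirical cumulative distribution
    radius = dmin(ceil(q * numel(dmin)));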
Figure 3.16: Changing the radius of neighborhood in phase 1: a) ρ + δ; b) 2(ρ + δ); c) 3(ρ + δ)

Figure 3.17: Changing the size of cubes in Marching Cubes: a) 2(ρ + δ), running time 1 s, 76 triangles in mesh; b) ρ + δ, 3 s, 400 triangles; c) 0.5(ρ + δ), 5 s, 1815 triangles; d) 0.25(ρ + δ), 47 s, 7467 triangles
3.3 Phase 2: Mesh Optimization

Phase 2 improves the results of phase 1 and reduces the data volume. The Energy Function defined by H. Hoppe leads to a smooth surface in areas with high curvature and to a reduction of the number of triangles in flat regions. This phase consists of the following steps:

1. define the Energy Function
2. solve the inner minimization problem
3. solve the outer minimization problem
4. build a loop of inner and outer minimization problems

3.3.1 Energy Function

Let V be the set of vertex positions $V = \{v_1, \dots, v_m\}$ with m the number of vertices. With K defining the simplicial complex, respectively the connections between the vertices, $M = (K, V)$ is assumed to represent the triangulation mesh. With these definitions the Energy Function is defined as follows:

$$E(K, V) = E_{dist}(K, V) + E_{rep}(K) + E_{spring}(K, V) \quad (3.10)$$

$$E_{dist}(K, V) = \sum_{i=1}^{n} d^2\bigl(x_i, \pi_V(|K|)\bigr) \quad (3.11)$$

$$E_{rep}(K) = c_{rep}\, m \quad (3.12)$$

$$E_{spring}(K, V) = \sum_{\{j,k\} \in K} \kappa \, \|v_j - v_k\|^2 \quad (3.13)$$

with $c_{rep}$ and $\kappa$ user-defined parameters. $\pi_V(|K|)$ is the projection of the point $x_i$ onto the triangulation mesh M. The qualitative meanings of the functions are:

$E_{dist}(K, V)$: is big if the distances between the point cloud and the triangulated mesh are big, i.e. if the mesh approximates the point cloud badly.

$E_{rep}(K)$: penalizes meshes with a lot of vertices, i.e. it is big if there are a lot of vertices in the mesh.

$E_{spring}(K, V)$: penalizes meshes with huge distances between the vertices. It is a kind of spring energy holding the triangulation mesh together.

To find a good triangulation mesh, the Energy Function E must be minimized. Since the function depends on the two variables V and K, the problem can be divided into two new problems: an inner minimization over V for fixed K and an outer minimization over K. The first one only changes the positions of the vertices and the second one changes the vertex positions and their connections. The outer and inner problems are repeated till the mesh is optimized:

    while (number of changes in outer minimization problem) > 0
        solve outer minimization problem including inner minimization problem
    end
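For illustration, a MATLAB sketch of how the three terms can be evaluated for a given mesh; the variable layout is an assumption, not the thesis code, and dist2 holds the squared distances $d^2(x_i, \pi_V(|K|))$ computed in the projection step of section 3.3.2:

    function E = mesh_energy(dist2, V, edges, c_rep, kappa)
    % Evaluate the Energy Function (3.10) for a mesh with vertex
    % positions V (m-by-3) and an e-by-2 list of edges taken from K.
    E_dist   = sum(dist2);                         % (3.11)
    E_rep    = c_rep * size(V, 1);                 % (3.12)
    dv       = V(edges(:,1),:) - V(edges(:,2),:);
    E_spring = kappa * sum(sum(dv.^2, 2));         % (3.13)
    E        = E_dist + E_rep + E_spring;          % (3.10)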
3.3.2 Inner Minimization Problem

To solve the inner minimization problem 4 steps are necessary:

1. find the normals and middle points of each triangle in the mesh
2. project the points of the point cloud onto the triangulated mesh
3. compute the barycentric coordinates of the projected points
4. solve the inner linear least squares problem

Normals and middle points

The middle point of a triangle can be found by computing the average point of its three vertices. The normal to the triangle $t$ is found by solving the system of equations

$$\begin{pmatrix} v_{1,1} & v_{1,2} & v_{1,3} \\ v_{2,1} & v_{2,2} & v_{2,3} \\ v_{3,1} & v_{3,2} & v_{3,3} \end{pmatrix} \hat{t} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$

with $v_1 = (v_{1,1}\ v_{1,2}\ v_{1,3})$ and $v_1, v_2, v_3$ the vertices of the triangle. Since unit vectors are more practical,

$$t = \frac{\hat{t}}{\|\hat{t}\|}. \quad (3.14)$$

Figure 3.18: Middle points of triangles with their normals

Point projection

To project the data points onto the planes of the triangulation mesh it would be possible to project all points onto all planes and take the minimal distance. Since this costs much, needs space and leads to defects by projecting through holes, an assumption is made: the point $x_i$ with the minimal distance to a triangle with the middle point $s_i$ lies near this middle point $s_i$. So the first step is to find Nbhd($s_i$) within the data set X. The number of neighbors is chosen so that the middle point $s_i$ builds with its neighbors all triangles including $s_i$.
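As a sketch, the normal from (3.14) can be combined with the projection of one data point onto the plane of one neighbor triangle; T holds the three vertices as rows, x is a 1-by-3 point, and the function name is an assumption:

    function [p, t] = project_on_plane(x, T)
    % Note: (3.14) assumes the triangle's plane does not pass through the origin.
    t_hat = T \ [1; 1; 1];        % solve (3.14): <v_i, t_hat> = 1 for each vertex
    t = (t_hat / norm(t_hat)).';  % unit normal of the triangle plane, as a row
    s = mean(T, 1);               % middle point of the triangle
    p = x - dot(x - s, t) * t;    % orthogonal projection of x onto the plane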
A point $x_i$ is projected onto all neighbor planes using the equations 3.8 and 3.9. To verify that a point is really projected onto the surface of the mesh, it is necessary to check whether the projection point lies inside a triangle. In MATLAB there exists a function testing whether a point lies inside a 2D triangle. Since in this case the triangles lie in 3D space, each of them must first be transformed into a 2D triangle with one vertex in the origin (0, 0). The main idea is shown in figure 3.19.

Figure 3.19: Transformation of a triangle and projected points in 3D into 2D

The new axis x is defined by the vertices 3 and 1: $e_x = \frac{v_3 - v_1}{\|v_3 - v_1\|}$. The new axis y is found as the unit cross product of $e_x$ with the normal of the triangle $t$: $e_y = \frac{e_x \times t}{\|e_x \times t\|}$. To compute the transformed vertices $\hat{v}_2$ and $\hat{v}_3$ in 2D, the property of the origin is used: $\hat{v}_2 = (\langle v_2 - v_1, e_x \rangle\ \ \langle v_2 - v_1, e_y \rangle)$ and $\hat{v}_3 = (\|v_3 - v_1\|\ \ 0)$. The transformed projected point is then equal to $\hat{p} = (\langle p, e_x \rangle\ \ \langle p, e_y \rangle)$. The new vertices and projected points in 2D can now be used to find out whether a projected point lies inside a triangle or not. If a point lies inside a triangle, it is accepted, otherwise not.

Figure 3.20: Projected points on the triangulation mesh
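Reusing T, the unit row normal t and the projected point p from the previous sketch, the transformation and point-in-triangle test could look as follows (coordinates taken relative to $v_1$; MATLAB's inpolygon does the 2D test):

    v1 = T(1,:);  v2 = T(2,:);  v3 = T(3,:);      % triangle vertices as rows
    ex = (v3 - v1) / norm(v3 - v1);               % new x axis
    ey = cross(ex, t);  ey = ey / norm(ey);       % new y axis
    V2 = [0 0; ...
          dot(v2-v1, ex) dot(v2-v1, ey); ...
          norm(v3-v1) 0];                         % transformed 2D vertices
    p2 = [dot(p-v1, ex), dot(p-v1, ey)];          % transformed projected point
    inside = inpolygon(p2(1), p2(2), V2(:,1), V2(:,2));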
Barycentric coordinates

Barycentric coordinates are based on the vertices of a triangle. A point within a triangle can be represented as $p = \alpha v_1 + \beta v_2 + \gamma v_3$ with the weights $\alpha$, $\beta$ and $\gamma$. To find the weights, a system of equations must be solved:

$$\begin{pmatrix} v_{1,1} & v_{2,1} & v_{3,1} \\ v_{1,2} & v_{2,2} & v_{3,2} \\ v_{1,3} & v_{2,3} & v_{3,3} \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \\ \gamma \end{pmatrix} = \begin{pmatrix} p_1 \\ p_2 \\ p_3 \end{pmatrix} \quad (3.15)$$

To verify that the results are correct, the following equation must be satisfied: $\alpha + \beta + \gamma = 1$.

Inner minimization problem

In this section the problem must be solved for V over fixed K. The Energy Function can then be represented as

$$E(K, V) = \sum_{i=1}^{n} d^2\bigl(x_i, \pi_V(|K|)\bigr) + \sum_{\{j,k\} \in K} \kappa \, \|v_j - v_k\|^2 \quad (3.16)$$

because $E_{rep}$ does not depend on V. $\pi_V(|K|)$ stands for the projected points of the data set on the triangulation mesh. The inner minimization problem can be represented as an LLSP (linear least squares problem) and it can be solved using QR or SV (singular value) decompositions. Since QR appeared to be faster in MATLAB, this method was preferred. The problem is defined as follows:

$$\|A v - d\|^2 = r \quad (3.17)$$

with given A and d. For a matrix $A: n \times n$ with full rank the residual r is zero. Since here the matrix A is an $n \times m$ matrix with $n > m$, this problem can not be solved exactly. Therefore the problem is defined as: find v so that r is minimal. The vector v in the inner minimization problem is the vector including all vertices of the mesh:

$$v_x = \begin{pmatrix} v_{1,x} \\ \vdots \\ v_{n,x} \end{pmatrix} \quad (3.18)$$

with x representing the associated coordinate x; the same holds for the other two coordinates y and z. The matrix A and the vector d consist of two parts. In the first part the barycentric coordinates are used to find new vertices so that the distance between projected points and data points is minimal. Therefore the first part $A_1$ contains the weights of the barycentric coordinates corresponding to the projected points on a triangle, and $d_{1,x}$ the x coordinates of the data points corresponding to these projected points; the same holds for the other coordinates. In the second part the coherence of the vertices is respected: each vertex of an edge of the triangulation mesh is weighted with $\kappa$ and $-\kappa$ respectively, and the second part $d_2$ is set to zero. The matrix A is set equal to $\begin{pmatrix} A_1 \\ A_2 \end{pmatrix}$ and d equal to $\begin{pmatrix} d_1 \\ d_2 \end{pmatrix}$. The problem must be solved for v for each coordinate x, y and z separately.
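A sketch of the solve step, assuming $A_1$, $A_2$ and the data-point coordinates d1x have been assembled as described (MATLAB's backslash uses a QR factorization for such over-determined systems):

    A  = [A1; A2];                       % barycentric part on top, spring part below
    d  = [d1x; zeros(size(A2, 1), 1)];   % d2 is zero, as described above
    vx = A \ d;                          % least squares solution: minimizes ||A*vx - d||^2
    % equivalently, with an explicit economy-size QR: [Q, R] = qr(A, 0); vx = R \ (Q' * d);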
For a better understanding of this procedure an example can be given: a mesh consisting of 4 vertices ($v_1$, $v_2$, $v_3$ and $v_4$) and 2 triangles ($v_1, v_2, v_3$ and $v_1, v_2, v_4$). Two points $p_1$ and $p_2$ were projected, the first one onto the first triangle and the second one onto the second triangle, with the weights $\alpha_1, \beta_1, \gamma_1$ and $\alpha_2, \beta_2, \gamma_2$ respectively. So the following system can be built:

$$A = \begin{pmatrix} \alpha_1 & \beta_1 & \gamma_1 & 0 \\ \alpha_2 & \beta_2 & 0 & \gamma_2 \\ \kappa & -\kappa & 0 & 0 \\ \kappa & 0 & -\kappa & 0 \\ 0 & \kappa & -\kappa & 0 \\ 0 & \kappa & 0 & -\kappa \\ \kappa & 0 & 0 & -\kappa \end{pmatrix} \qquad d_x = \begin{pmatrix} p_{1,x} \\ p_{2,x} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} \quad (3.19)$$

$$\|A v_x - d_x\|^2 = r_x \quad (3.20)$$

so that $r_x$ is minimal. This equation can be solved for $v_x$, i.e. for the vertex coordinates of the triangulation mesh. The choice of $\kappa$ is not intuitive and depends on the geometry; it can be found by trying different values. This parameter is discussed in section 3.3.7.

3.3.3 Outer Minimization Problem

The inner minimization problem is used to solve the outer minimization problem, which includes the changes in the geometry of the surface. The main idea is to apply edge collapse, edge split or edge swap (Figure 3.21) on triangles of the mesh and compute their Energy Functions. If the value of the new function is smaller than the value of the old function, the change is accepted.

Figure 3.21: Possible moves [1]

Since there are legal and illegal moves within a mesh, and they are based on boundary conditions, the boundary edges and vertices must be computed. The procedure consists of the following steps:

1. find boundary edges and boundary vertices
2. define edge collapse and its legal moves
3. define edge split and its legal moves
4. define edge swap and its legal moves
5. compare the energies of a move
6. choose an edge of the mesh to move
Boundaries

The boundaries are necessary to decide whether a move is legal or not. To find the boundaries, the property of boundary edges is used: they belong to only one triangle, whereas all other edges are part of two triangles (a sketch follows after the move definitions below). The boundary vertices are then all vertices of the boundary edges.

Figure 3.22: Boundary edges and boundary vertices

Edge collapse

After an edge collapse two triangles disappear, as shown in figure 3.21. Since this move is very attractive for boundary triangles, letting them disappear and hence minimizing the energy of the mesh, it must be restricted: the edge collapse is a legal move only if the vertices $v_i$ and $v_j$ are not boundary vertices, which is stronger than the condition that the edge $(v_i, v_j)$ is not a boundary edge. There are three possibilities for the move: collapsing into the vertex $v_i$, into $v_j$ or into $\frac{v_i + v_j}{2}$. The energy is computed for all of these cases and the smallest of them is then compared with the energy before the collapse. If the energy after the edge collapse is smaller than the energy before, the change is accepted and three edges, two triangles and one vertex are removed. Thus the triangulation mesh needs less space.

Edge split

An edge split makes the surface smoother. After this move the triangulation mesh contains three edges, two triangles and one vertex more. The move is legal if the vertices $v_k$ and $v_l$ exist. The additional vertex is positioned at $\frac{v_i + v_j}{2}$.

Edge swap

An edge swap changes the geometry of two triangles, but the size of the mesh remains the same. The move is legal if the vertices $v_k$ and $v_l$ exist.
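Returning to step 1, a minimal sketch of the boundary search, with Tri the k-by-3 list of vertex indices per triangle (variable names are assumptions):

    Ed = [Tri(:,[1 2]); Tri(:,[2 3]); Tri(:,[3 1])];  % all edges of all triangles
    Ed = sort(Ed, 2);                                 % make the edges undirected
    [Eu, ~, ic] = unique(Ed, 'rows');                 % distinct edges
    cnt = accumarray(ic, 1);                          % number of triangles per edge
    boundary_edges    = Eu(cnt == 1, :);              % edges in exactly one triangle
    boundary_vertices = unique(boundary_edges(:));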
Figure 3.23: Visualization of edge collapse (b), edge split (c) and edge swap (d)

Energy of moves

The Energy Function was already introduced in section 3.3.1:

$$E(K, V) = \sum_{i=1}^{n} d^2\bigl(x_i, \pi_V(|K|)\bigr) + c_{rep}\, m + \sum_{\{j,k\} \in K} \kappa \, \|v_j - v_k\|^2 \quad (3.21)$$

For each of the three moves the energy is computed before and after the move. If the energy after a move is smaller than before, the move is accepted, otherwise not. With the definition of the Energy Function above, the following holds for the edge collapse: the energy before the move is equal to the sum of the distances between the projected points on the mesh surface and their associated data points, plus the weighted sum of all distances between the vertices of all corresponding triangles, plus a term weighting the importance of the number of vertices in the mesh. For the edge collapse, m can be set equal to 2 before and equal to 1 after; for the edge split conversely 1 before and 2 after. For the computation of the energy after an edge collapse or edge swap, the data points must first be projected onto the new corresponding triangles. Special about the edge split is that this move is performed only if the squared distances between the vertices are more important than the number of vertices.

3.3.4 Choice of the Edge

An edge can be chosen randomly. In the implementation I took 50% of all edges and analyzed a randomly chosen move for each: edge collapse, edge swap or edge split.
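One pass over the randomly chosen edges could then look like this sketch; try_move and energy_of are assumed helpers, not the thesis code, and edges is the e-by-2 edge list (for example Eu from the boundary sketch above):

    moves = {'collapse', 'split', 'swap'};
    idx = randperm(size(edges, 1));
    idx = idx(1:round(0.5 * numel(idx)));      % analyze about 50% of all edges
    for e = idx
        kind = moves{randi(3)};                % randomly chosen move
        [Kn, Vn, legal] = try_move(K, V, edges(e,:), kind);
        if legal && energy_of(Kn, Vn, X) < energy_of(K, V, X)
            K = Kn;  V = Vn;                   % accept: the energy decreased
        end
    end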
3.3.5 Loop of Inner and Outer Problems

In this section a loop is built from the inner and outer problems, whose solutions were discussed in the previous sections. The outer problem includes the inner problem: first the geometry is changed by applying edge collapse, swap or split, then the new positions of the vertices are computed in the inner problem. The whole procedure is called the outer problem. The overview of the loop:

    while (number of achieved moves) > 0
        1. choose edges, compute the energy for each edge and perform the good moves
        2. solve the linear least squares problem for the new geometry of the mesh
    end

By good moves we mean those with a smaller energy than before the move. The resulting surface reconstruction for the point cloud of a cylinder after three iterations of phase 2 is shown in figure 3.24.

Figure 3.24: Resulting surface reconstruction of 87 triangles b) after 12 iterations in 185 s from a mesh of 327 triangles a)
3.3.6 Results

The results of the implementation of phase 2 on real problems are shown in figures 3.25, 3.26 and 3.27. The running time and results depend, among other things, on the number of edges: if there are a lot of them, a lot of moves are not accepted and the Energy Function in these cases is not computed.

Figure 3.25: Surface optimization of a mesh with 508 triangles (b) after 1 iteration in 175 s to a mesh of 305 triangles (a)

Figure 3.26: Surface optimization of a mesh with 608 triangles (b) after 1 iteration in 89 s to a mesh of 528 triangles (a)
Figure 3.27: Surface optimization of a mesh with 459 triangles (b) after 1 iteration in 96 s to a mesh of 397 triangles (a)

3.3.7 Discussion

The implementation of phase 2 is more difficult than that of phase 1, because it depends on the results of the first phase, which means it depends on all parameters of that procedure, respectively on the number of triangles in the mesh. The biggest problem is the Energy Function handling the triangles: it highly depends on their number, and experiments on regular computers could not afford more than 500 triangles in a mesh for more than one iteration. A workaround could be to save the results, restart the program, load the last results and do the next iteration, but this solution is not a good one. Another solution is to take only meshes with a small number of triangles from the first phase. The advantage of this phase is that it can treat rough meshes and smooth them if necessary, so a mesh of about 800 triangles can also be accepted for this phase.

Point projection

The method described in section 3.3.2 has a drawback: it only handles points projected onto a triangle. Problems then appear in the cases illustrated in figure 3.28. In the first case a point can not be projected because it lies right above the peak of a pyramid, and in the second case it is not projected because of the parameter κ. The second problem can be avoided by setting κ more precisely, but the first one remains.

Figure 3.28: The two critical cases of point projection: a) a point lying above a peak; b) a point after the inner minimization problem lying outside and not being projected anymore
I tried to avoid the first problem by projecting the points onto all planes built by the triangles of a neighbor vertex and taking the one with the smallest distance. The projected points then do not lie on the surface anymore, but near the surface. The advantage is that all points are respected in the solution of the inner minimization problem, but there are also drawbacks. The first is a problem with the barycentric coordinates: the relation $\alpha + \beta + \gamma = 1$ does not hold anymore and the magnitudes of these weights can vary strongly. So α could be around $10^{12}$ for several coordinates, which leads to problems with the other weights caused by the numerical accuracy and the finite number of digits in the mantissa. The results of this method were not proper: the approximation of the surface was good only for some iterations, and after about 10 iterations the reconstructed surface exfoliated from the data set. I also tried to cancel all values bigger than $10^4$, but even this improvement did not lead to good results.

Edge Collapse

The edge collapse works well, but not always: in several cases it causes defects in the mesh. These cases are illustrated in figure 3.29: when there are two additional triangles (in orange) inside the neighbor triangles (light green), they build after the edge collapse a new triangle (in orange) standing out. This important problem should be solved in future work.

Figure 3.29: Edge collapse causing defects in a mesh: a) the black point is the collapsing point, grey triangles are the triangles to be removed, the orange triangles will build the wrong triangle; b) after the edge collapse the orange triangle sticks out of the mesh causing defects

Edge split

The move edge split smoothes the surface. Unfortunately the application on real problems did not show good results. To improve it, the new vertex should be positioned differently: it should not lie on the edge itself, but above or beneath it.

Moves

Instead of a randomized choice of moves, it is possible to enforce a special kind of mesh: setting the order (edge collapse, edge swap, edge split) leads to a mesh with a smaller number of triangles and a rougher surface than the order (edge split, edge swap, edge collapse). Choosing too many edges for moves leads to a very long computational time, therefore I took about 50% to 80% of all edges to be analyzed.
Figure 3.30: Edge split: the black point is the actual new vertex, the green points are possible new vertex positions to improve this move

Parameter κ

The parameter κ is used in the inner minimization problem and must be set by the user. It stands for the connections between the edges in a mesh: a kind of spring holding the triangulation mesh together. Figure 3.31 shows what exactly this parameter affects (these pictures are associated with the second proposed projection of points onto all planes; however, it shows what κ does, with the difference that the projection onto the triangles of the surface is more robust). If there is a lot of noise present, the effect of the parameter κ is bigger.

Figure 3.31: After 4 iterations of the inner minimization problem only: a) κ = 0; b) κ = 10^-6; c) κ = 10^-5; d) κ = 10^-4; e) κ = 10^-3 with QRD; f) κ = 10^-3 with SVD; g) κ = 10^-2 with QRD; h) κ = 10^-2 with SVD; i) κ = 10^-1 with QRD; j) κ = 10^-1 with SVD; k) κ = 1 with QRD; l) κ = 1 with SVD

For big κ the distances between the vertices do not matter; important are only projected points lying at the same coordinates as the data points. So the triangulation mesh
goes off and does not have the form of the object anymore. For small κ the distance between the vertices becomes too important, so that the mesh collapses, because the coordinates of the data points are then too small relative to κ.

The parameter κ depends on the object geometry and must be set anew for each problem. It would be possible to build a function for an automatic setting of this parameter: it would compute the mesh for different κ and take the one with the minimum distance to the data points after some iterations (a sketch follows at the end of this section). But this function would be very expensive. Nevertheless, there is an advantage: once set, this parameter remains constant for a certain kind of problems (similar coordinates and type of mesh, the same or nearly the same ρ, δ, cube size and density of the data points).

Parameter $c_{rep}$

This parameter also depends on the geometry of the object. Since it is used in the Energy Function together with κ, it can be set relative to it. Knowing ρ + δ, respectively having information about the geometry, a first estimation for $c_{rep}$ could be $10(\rho + \delta)\kappa$. This relation is deduced by setting $c_{rep}$ and the other parameters in the Energy Function with an estimation of a good choice of ρ + δ. Like κ, the parameter $c_{rep}$ influences the final geometry of the mesh: if it is big, the triangles are big and their number is small; if it is small, the triangulation mesh has a big number of triangles.
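The automatic κ search mentioned above could be sketched as follows; run_inner and mean_dist are hypothetical helpers, not the thesis code, and the candidate values follow figure 3.31:

    kappas = 10.^(-6:0);                        % candidates from figure 3.31
    best = inf;
    for kappa = kappas
        Vk = run_inner(K, V, X, kappa, 4);      % a few inner iterations only
        d  = mean_dist(X, K, Vk);               % mean distance point cloud to mesh
        if d < best
            best = d;  kappa_opt = kappa;       % keep the kappa with minimal distance
        end
    end

As noted, each candidate requires several inner minimization runs, which is why this search is expensive.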
3.4 Method of Cubes

As already discussed in sections 3.2.8 and 3.3.7, it would be good to have a triangulation mesh containing a small number of triangles of nearly the same size approximating the surface well. I tried to implement a kind of Marching Cubes using some assumptions and simplifications; I call this method the Method of Cubes. The starting point is after section 3.2.4. The procedure contains the following steps:

1. find the cubes with projected points of the data set inside
2. compute the average point and plane orientation inside each cube
3. compute the intersection points of the planes with the cubes
4. connect the points to triangles in the right order
5. merge the points corresponding to a vertex together
6. fill the gaps

3.4.1 Cubes Including Projected Points

How to project points onto a plane was described in section 3.2.5. Using this procedure it is possible to compute the projections of the data points, instead of the vertices of the cubes, onto the planes representing the linearized surface at the centroids. To avoid the computation of the distances between the data points and all centroids, the already computed neighborhood of X can be used, with the assumption that the centroids lie near the associated data points $x_i$.

Figure 3.32: Data points (blue) and their projection on linearized planes of the surface (red). This step is a kind of noise filtering.

The cubes including projected points are illustrated in figure 3.33.

3.4.2 Average Points and Planes in Cubes

To compute the average point, all points are added and then divided by the number of points. The average plane orientation is computed analogously: the normal orientations are added and then divided by their number. This is only possible because the vectors in a neighborhood are oriented to look outward; if this were not the case, it would not provide the desirable results. For example, the two vectors $(0.9\ 0.2\ 0.2)^T$ and $(-1.1\ 0.2\ 0.2)^T$ would provide the vector $(-0.1\ 0.2\ 0.2)^T$ as result instead of the wanted $(1\ 0.2\ 0.2)^T$ if the second vector were not oriented.
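A sketch of this averaging over all occupied cubes at once; P are the projected points and N their oriented normals, both n-by-3, and accumarray sums the entries per cube (variable names are assumptions):

    cube = floor(P / cubesize);                        % integer cube index per point
    [cu, ~, ic] = unique(cube, 'rows');                % cu lists the occupied cubes
    cnt  = accumarray(ic, 1);                          % points per cube
    avgP = zeros(numel(cnt), 3);  avgN = zeros(numel(cnt), 3);
    for c = 1:3
        avgP(:,c) = accumarray(ic, P(:,c)) ./ cnt;     % average point per cube
        avgN(:,c) = accumarray(ic, N(:,c)) ./ cnt;     % averaged oriented normal
    end
    avgN = avgN ./ repmat(sqrt(sum(avgN.^2, 2)), 1, 3);% renormalize to unit length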
Figure 3.34: Visualization of the average point and plane within a cube

3.4.3 Intersection Points

The next step is to find the intersection points of the planes with the edges of the cubes. To represent them, the edges of each cube are enumerated from 1 to 12; intersection points then carry the additional information about the edge they cut. For this, 3D geometry can be used again. A line is represented as

$$g = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} + \alpha \begin{pmatrix} t_1 \\ t_2 \\ t_3 \end{pmatrix} \quad (3.22)$$

with $a = (a_1\ a_2\ a_3)$ a point of this line and t the direction of this line. The plane is represented by the equation

$$\begin{pmatrix} n_1 & n_2 & n_3 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} - D = 0 \quad (3.23)$$
with $n = (n_1\ n_2\ n_3)^T$ the normal to the plane and $D = \langle n, p \rangle$ with p a point of the plane; in this case $D = n_1 p_1 + n_2 p_2 + n_3 p_3$. Inserting the line equation into the equation of the plane provides a new equation for α:

$$\begin{pmatrix} n_1 & n_2 & n_3 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} + \alpha \begin{pmatrix} n_1 & n_2 & n_3 \end{pmatrix} \begin{pmatrix} t_1 \\ t_2 \\ t_3 \end{pmatrix} - D = 0 \quad (3.24)$$

A point of the line and its direction are defined by each cube and its edge. A point of the plane is the average point, and the normal is the average normal computed in the last section.

Figure 3.35: Visualization of the intersection point r of the plane with the normal n and a point p with the line through the point a with the direction t
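A sketch of solving (3.24) for one cube edge; a is one endpoint of the edge, t the edge vector, n and p the averaged normal and point, and the function name is an assumption:

    function [r, hit] = edge_plane_cut(a, t, n, p)
    % Intersection of the line g = a + alpha*t with the plane <n, x> = D.
    % (dot(n, t) == 0 means the edge is parallel to the plane.)
    D = dot(n, p);                            % plane offset, D = <n, p>
    alpha = (D - dot(n, a)) / dot(n, t);      % solve (3.24) for alpha
    r = a + alpha * t;                        % intersection point
    hit = alpha >= 0 && alpha <= 1;           % the plane actually cuts the edge segment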
3.4.4 Build Triangles

With the information about the intersected edges of the cubes and the intersection coordinates it is possible to divide the planes inside a cube into triangles. To do so I used the table originally developed for the Marching Cubes method (Figure 3.6). I enumerated each edge and vertex of a cube and listed all possible connections to triangles, using the simplification that there is only one plane inside a cube. This gave 43 different cases to build triangles (5 cases from figure 3.6 plus possible modifications inside a cube). The resulting triangles are shown in figure 3.36. The mesh is not closed yet; the triangles still build planes inside each cube representing the linearized surface through the average points.

Figure 3.36: Triangles built inside each cube

3.4.5 Connect Triangles

To connect the triangles a trick is used: merging all points near a vertex of a cube into one point. For each vertex, the points lying within ±0.5 cubesize are merged into one point. In the direction normal to the surface the distance is chosen to be ±1 cubesize, to prevent multiple sheets of a surface caused by noise (a simplified sketch is given below). The results are illustrated in figure 3.38. Some defects in the surface appear after this step; this problem is discussed in section 3.4.6.

Figure 3.37: The visualization of the idea of merging points into one: the red vector is the normal vector to the surface, all blue points are accepted and merged into the new yellow point. The violet points are not accepted.

Figure 3.38: Triangulation mesh after connecting the triangles

3.4.6 Fill Gaps

As already mentioned in the last section, the Method of Cubes has some defects. Trying to avoid them, I found the boundary edges and vertices in the mesh and marked them; nearly all defects lay on these boundaries. As a first step to eliminate the defects I fill the missing triangles⁴ built by three boundary edges. This can be done because the assumption holds that there are no gaps with a radius smaller than ρ + δ.

⁴ To fill a triangle means to add it into the mesh.
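A simplified sketch of the merging step of section 3.4.5, snapping every triangle corner to its nearest cube vertex; note this ignores the larger ±1 cubesize tolerance along the normal used in the thesis, and the variable names are assumptions:

    % Pts is 3k-by-3: three consecutive rows per triangle corner.
    snapped = round(Pts / cubesize) * cubesize;     % nearest cube vertex per corner
    [newV, ~, map] = unique(snapped, 'rows');       % merged vertex list
    Tri = reshape(map, 3, []).';                    % k-by-3 triangle index list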
Nearly all other defects lie on boundary edges and can be classified into three types: an additional triangle building a part of another triangle, a triangle not folded down, and a gap with 4 vertices. Other defects are rare.

Figure 3.39: Triangulation mesh with filled triangles and boundaries

3.4.7 Results

The results after phase 1 using the Method of Cubes are represented in figures 3.40, 3.41 and 3.42.

Figure 3.40: Triangulation mesh after the Method of Cubes in 50 s including 178 triangles (c), built from the data set of 1929 points (a), compared with the mesh after Marching Cubes with 805 triangles with the same cube size
Figure 3.41: Triangulation mesh after the Method of Cubes in 290 s including 779 triangles, built from the data set of 3425 points

Figure 3.42: Triangulation mesh after the Method of Cubes in 25 s including 446 triangles, built from the data set of 1716 points
Figure 3.43: Triangulation mesh after the Method of Cubes and 7 iterations of phase 2 in 66 s, including 310 triangles after phase 1 and 63 after phase 2, built from a noisefree data set of 468 points

3.4.8 Discussion

The Method of Cubes is not a finished method, because it still has defects. Nevertheless it showed good results considering the running time for an optimal radius, and its output can be handled well in phase 2 since the mesh after this method has in general a small number of triangles. Summarizing the method:

Advantages:

- a small number of triangles in the mesh and therefore better applicable for phase 2
- the computational time is in the same range as for Marching Cubes
- the computational time increases for a small radius and decreases for a big radius (contrary to Marching Cubes)

Drawbacks:

- there are defects in the mesh
- the reconstruction can only be used together with phase 2; without it the smoothness of the surface is not guaranteed
Chapter 4

Conclusion and Future Work

4.1 Conclusion

The surface reconstruction from a noisy point cloud is not trivial. Reconstructing a surface without noise is possible by connecting the points of the data set, but this can not be done for noisy data sets: many steps are necessary to reduce or filter the noise. The method of H. Hoppe implemented in this work was explained step by step and provided good results. For noisefree data the method worked very well; for noisy data it cost a long computing time. There were many parameters that could be set in a way that reduces these costs. Since the surface reconstruction and its running time were very sensitive to these parameters, they had to be tuned by the user to get a good visualization in a relatively short time. The combination of the parameters is not intuitive and finding it is therefore expensive.

The Method of Cubes I developed and also explained in this documentation is related to the Marching Cubes used by H. Hoppe in the first phase of the surface reconstruction, but it uses more simplifications and is specifically adapted for further use in the second phase. It still has some defects, but the idea can be used to modify Marching Cubes towards a more efficient triangulation mesh as input for phase 2.

4.2 Future Work

The algorithm I wrote for the surface reconstruction following H. Hoppe [1] provided the results shown in the last chapter. This program still has some drawbacks that should be addressed in future work. My proposals for the future work on this MATLAB program are:

- improve the code in general by replacing some for- and while-loops with more efficient algorithms or by adding additional stop criteria
- find a formula for good parameters ρ, δ, cube size, sampling rate, κ, $c_{rep}$ and the number of iterations in phase 2
- improve the projection problem in phase 2 (section 3.3.7)
- improve the edge collapse in phase 2 (section 3.3.7)
- improve the edge split in phase 2 (section 3.3.7)
- optimize and apply the step of section 3.4.5 on Marching Cubes to get a smaller number of triangles of nearly the same size in the triangulation mesh
- define a function searching for data points near edges to better define the boundaries and real gaps in the mesh

It would also be interesting to set the parameters differently in the two phases, for example to take only 30% of the data points in the first phase, build a triangulation mesh, and then in the second phase use a data set consisting of 100% of the points. To do so, the move edge split must be optimized.
Appendix A

Symbols

$E$    Energy Function defined by H. Hoppe
$a$    a value
$a = (a_1\ a_2\ a_3) \in \mathbb{R}^3$    a point or a vector
$d(a, b)$    distance between two points $a \in \mathbb{R}^3$ and $b \in \mathbb{R}^3$
$\|a\|$    norm of a vector, $\|a\| = \sqrt{a_1^2 + a_2^2 + a_3^2}$
$\langle a, b \rangle$    scalar product $a^T b$
$Y = \{y_1, \dots, y_n\}$    set $Y$ containing elements $y_i \in \mathbb{R}^3$
$[output] = function(input)$    a function producing an output from the input
$Nbhd(x_i)$    neighbor points of the point $x_i$

Indices

1    x axis
2    y axis
3    z axis

Acronyms and Abbreviations

ABF    Anisotropic Basis Functions
BPA    Ball-Pivoting Algorithm
IBF    Isotropic Basis Functions
MAT    Medial Axis Transform
MC    Marching Cubes
MST    Minimal Spanning Tree
QRD    QR Decomposition
RBF    Radial Basis Functions
SVD    Singular Value Decomposition
Bibliography

[1] H. Hoppe. Surface Reconstruction from Unorganized Points. PhD thesis, University of Washington, 1994.
[2] F. Tâche, F. Pomerleau, G. Caprari, R. Siegwart, M. Bosse, and R. Moser. 3D localization for the Magnebike inspection robot. 2010.
[3] Hokuyo Automatic Co. Hokuyo URG-04LX datasheet.
[4] L. Kneip, F. Tâche, G. Caprari, and R. Siegwart. Characterization of the compact Hokuyo URG-04LX 2D laser range scanner. 2009.
[5] Magnebike: Compact magnetic wheeled inspection robot with high mobility. 2010.
[6] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. 1992.
[7] A. Breitenmoser and F. Pomerleau. Building 3D-representation from laser scans, Bachelor's thesis description. 2010.
[8] J.-D. Boissonnat and F. Cazals. Smooth surface reconstruction via natural neighbor interpolation of distance functions. 2000.
[9] C. Oblonsek and N. Guid. A fast surface-based procedure for object reconstruction from 3D scattered points. 1995.
[10] MATLAB Documentation. 3D Delaunay triangulation.
[11] N. Amenta and M. Bern. Surface reconstruction by Voronoi filtering. 1998.
[12] N. Amenta, M. Bern, and M. Kamvysselis. A new Voronoi-based surface reconstruction algorithm. 1998.
[13] N. Amenta, S. Choi, and R. K. Kolluri. The power crust, unions of balls, and the medial axis transform. 2000.
[14] N. Amenta, S. Choi, and R. K. Kolluri. The power crust. 2001.
[15] M. S. Floater and M. Reimers. Meshless parameterization and surface reconstruction. 2001.
[16] L.-M. Yan, X.-H. Zeng, and Y.-W. Yuan. Adaptive 3D mesh reconstruction from dense unorganized weighted points using neural network. 2004.
[17] J. C. Carr, R. K. Beatson, B. C. McCallum, et al. Smooth surface reconstruction from noisy range data. 2003.
[18] F. Bernardini, J. Mittleman, H. Rushmeier, et al. The ball-pivoting algorithm for surface reconstruction. 1999.
[19] B. Mederos, N. Amenta, L. Velho, et al. Surface reconstruction from noisy point clouds. 2005.
[20] H. Hoppe, T. DeRose, T. Duchamp, et al. Surface reconstruction from unorganized points. 1992.
[21] G. Wyvill, C. McPheeters, and B. Wyvill. Data structure for soft objects. 1986.
[22] H. Hoppe, T. DeRose, T. Duchamp, et al. Mesh optimization. 1993.
[23] M. Alexa and S. Rusinkiewicz. Neural meshes: surface reconstruction with a learning algorithm. 2004.
[24] G. Medioni, M.-S. Lee, and C.-K. Tang. Tensor voting: theory and applications. 2000.
[25] H. Dinh, G. Turk, and G. Slabaugh. Reconstructing surfaces using anisotropic basis functions. 2000.
[26] M. Buh. Marching cubes. Homepage, 2000.
List of Figures

1.1 Magnebike inspection robot [5]
1.2 Robot on a flat and curved surface [5]
1.3 Point clouds from different points of view [5]
1.4 Matched point clouds [5]
2.1 Table with papers to the topic: 18 rows x 11 columns
2.2 a) The original object: bunny; b) bunny representation after Phase 1; c) bunny representation after Phase 2 [8]
2.3 Phase 1: Mesh built from a point cloud
2.4 Phase 2: Optimization of Phase 1
2.5 Phase 1, overview of the methods applicable for noisefree data points
2.6 Cases in which a new triangle is added into the triangulation. The boundaries are the main components in this procedure [9]
2.7 Triangulation algorithm of Oblonsek and Guid: a) point cloud not satisfying the conditions; b) point cloud satisfying the conditions; c) surface reconstruction of the point cloud b) [9]
2.8 3D-Delaunay Triangulation of a cube [10]
2.9 Comparison of Voronoi Diagram in 2D (a) and in 3D (b) [12]. Black points are data points and red lines build the Voronoi Diagrams
2.10 Voronoi-based surface reconstruction [12]
2.11 Power Crust: a) an object with its Medial Axes; b) Voronoi Diagrams; c) inner and outer polar balls centered on the Voronoi lines; d) Power Diagram (Medial Axes Transform); e) Power Crust [13]
2.12 Power Crust: a) inner polar balls; b) Power Crust built by the inner polar balls (balls coming out of the object are cut off leaving a hole) [14]
2.13 Meshless Parameterization: a) Parameterized point set; b) Mesh for data points without noise; c) Mesh for data points with noise [15]
2.14 Overview of the method Neural Network [16]
2.15 Overview of Phase 1 for noisy data points
2.16 Radial Basis Function applied on a point cloud a) with the result b); c) Visualization of the RBF describing the distance to the surface [17]
2.17 Ball-Pivoting Algorithm in 2D: a) a good choice of r; b) with the same r as in a) a sparse data set causes holes in the mesh; c) when the curvature is larger than 1/r some features cannot be represented and remain missing [18]
2.18 Ball-Pivoting Algorithm in 2D for noisy data points: a) surface samples lying below are not touched by the balls and remain isolated; b) an outlier is isolated if its surface orientation (in 2D line orientation) is not consistent with the surface orientation of the object; c) the choice of the radius can affect the results by creating a double layer out of a surface [18]
2.19 Ball-Pivoting Algorithm applied on real data points [18]
2.20 Power Crust results: a) point cloud; b) Power Crust; c) transparent Power Crust with its simplified Medial Axes in 3D; d) Power Crust of a noisy point cloud [14],[19]
2.21 The result of Phase 1 using the method of H. Hoppe: a) original object; b) point cloud; c) surface reconstruction [22]
2.22 The results of the Neural Meshes for a noisy point cloud using different parameters [23]
2.23 Tensor Decomposition and Tensor Voting for a point cloud with noise [24]
2.24 Comparison between Isotropic (a) and Anisotropic (b) Basis Functions for sharp edges [25]
2.25 The results: a) using IBF; b) using ABF; c) using ABF and filters; d) final textured reconstruction [25]
2.26 Overview of Phase 2 for noisefree points
2.27 Classification of the vertices: a) simple vertex; b) vertex lying on a sharp edge [9]
2.28 Division of a triangle into 4 new smaller triangles to get a smooth surface [9]
2.29 Overview of Phase 2 for noisy data points
2.30 Edge collapse, edge split, edge swap [22]
2.31 The results after Phase 2 applied on Phase 1 from Hoppe's idea introduced in section 2.3.2 [22]
3.1 Noisy Point Cloud
3.2 Data points in blue and centroids in green
3.3 Normals and middle points
3.4 Oriented Normals (red vectors) on the centroids (green points)
3.5 Visualization of the projection problem [1]
3.6 Table used for the Marching Cubes method to connect the surfaces to triangles. The green points are hot vertices [26]
3.7 The triangulated surface after the implementation of Marching Cubes. The blue points arise from the point cloud
3.8 Phase 1: triangulated surface containing 1505 triangles reconstructed from 3289 data points in 178 s with the radius 0.11 m
3.9 Phase 1: triangulated surface containing 1741 triangles reconstructed from 3462 data points in 214 s with the radius 0.06 m
3.10 Phase 1: triangulated surface containing 805 triangles reconstructed from 2558 data points in 55 s with the radius 0.11 m
3.11 Effect of sampling rate on running time and results with a constant radius (ρ + δ): a) sampling rate 0.003 with running time 9 s; b) sampling rate 0.006 with running time 15 s; c) sampling rate 0.012 with running time 156 s
3.13 A circle represented by lines of different length determining the smoothness of this circle
3.12 The triangulated surface after phase 1: a) data set of 2549 points; b) with the radius ρ + δ = 0.05 m after 51 s running time; c) with the radius ρ + δ = 0.1 m after 60 s running time; d) with the radius ρ + δ = 0.2 m after 271 s running time
3.14 The relation between the ratio ρ/(distance between points) (x axis) and the time in s (y axis)
3.15 Cumulative distribution of sparse data with big differences in radius (small deviation)
3.16 Changing the radius of neighborhood in phase 1: a) ρ + δ; b) 2(ρ + δ); c) 3(ρ + δ)
3.17 Changing the size of cubes in Marching Cubes: a) 2(ρ + δ), running time 1 s, 76 triangles in mesh; b) ρ + δ, 3 s, 400 triangles; c) 0.5(ρ + δ), 5 s, 1815 triangles; d) 0.25(ρ + δ), 47 s, 7467 triangles
3.18 Middle points of triangles with their normals
3.19 Transformation of a triangle and projected points in 3D into 2D
3.20 Projected points on the triangulation mesh
3.21 Possible moves [1]
3.22 Boundary edges and boundary vertices
3.23 Visualization of edge collapse (b), edge split (c) and edge swap (d)
3.24 Resulting surface reconstruction of 87 triangles b) after 12 iterations in 185 s from a mesh of 327 triangles a)
3.25 Surface optimization of a mesh with 508 triangles (b) after 1 iteration in 175 s to a mesh of 305 triangles (a)
3.26 Surface optimization of a mesh with 608 triangles (b) after 1 iteration in 89 s to a mesh of 528 triangles (a)
3.27 Surface optimization of a mesh with 459 triangles (b) after 1 iteration in 96 s to a mesh of 397 triangles (a)
3.28 The two critical cases of point projection: a) a point lying above a peak; b) a point after the inner minimization problem lying outside and not being projected anymore
3.29 Edge collapse causing defects in a mesh: a) the black point is the collapsing point, grey triangles are the triangles to be removed, the orange triangles will build the wrong triangle; b) after the edge collapse the orange triangle sticks out of the mesh causing defects
3.30 Edge split: the black point is the actual new vertex, the green points are possible new vertex positions to improve this move
3.31 After 4 iterations of the inner minimization problem only: a) κ = 0; b) κ = 10^-6; c) κ = 10^-5; d) κ = 10^-4; e) κ = 10^-3 with QRD; f) κ = 10^-3 with SVD; g) κ = 10^-2 with QRD; h) κ = 10^-2 with SVD; i) κ = 10^-1 with QRD; j) κ = 10^-1 with SVD; k) κ = 1 with QRD; l) κ = 1 with SVD
3.32 Data points (blue) and their projection on linearized planes of the surface (red). This step is a kind of noise filtering
3.33 Cubes with projected points inside
3.34 Visualization of the average point and plane within a cube
3.35 Visualization of the intersection point r of the plane with the normal n and a point p with the line through the point a with the direction t
3.36 Triangles built inside each cube
3.37 The visualization of the idea of merging points into one: the red vector is the normal vector to the surface, all blue points are accepted and merged into the new yellow point. The violet points are not accepted
3.38 Triangulation mesh after connecting the triangles
3.39 Triangulation mesh with filled triangles and boundaries
3.40 Triangulation mesh after the Method of Cubes in 50 s including 178 triangles (c), built from the data set of 1929 points (a), compared with the mesh after Marching Cubes with 805 triangles with the same cube size
3.41 Triangulation mesh after the Method of Cubes in 290 s including 779 triangles, built from the data set of 3425 points
3.42 Triangulation mesh after the Method of Cubes in 25 s including 446 triangles, built from the data set of 1716 points
3.43 Triangulation mesh after the Method of Cubes and 7 iterations of phase 2 in 66 s, including 310 triangles after phase 1 and 63 after phase 2, built from a noisefree data set of 468 points
List of Tables

1.1 Data of the Magnebike inspection robot [5]
1.2 Data of the Hokuyo URG-04LX 2D laser range scanner [3]
2.1 Table to decide which method to apply to reconstruct the point clouds provided by the Magnebike robot