Surface Reconstruction from Point Clouds


Autonomous Systems Lab, Prof. Roland Siegwart

Bachelor Thesis
Surface Reconstruction from Point Clouds
Spring Term 2010

Supervised by: Andreas Breitenmoser, François Pomerleau
Author: Inna Tishchenko
Abstract

Robots often use laser sensors to perceive their environment. Because of noise, these sensors only provide a cloud of points lying on or near the surface. The challenge is to reconstruct a smooth surface and visualize it while at the same time saving space on the data processor. In the first part, during my work for the course Studies on Mechatronics, I researched papers on this topic and classified them. After comparing the methods I chose the one described in the work of Hugues Hoppe [1] to apply to the problem. In the second part, which is my Bachelor Thesis, I implemented the first two Phases of the method described by Hoppe [1] in MATLAB, applied them to real point clouds using different parameters and analyzed the results. For several user-defined parameters the results were very good, but they required a lot of space. I therefore developed a method similar to Marching Cubes which has the advantage of saving space, but still has some drawbacks. Future work is to automate the choice of good parameters, to improve the speed using assumptions and simplifications, and to combine my method with Marching Cubes in order to save space while obtaining a good triangulated surface after the first Phase, to be optimized in the second Phase.
Contents

1 Introduction
   1.1 Magnebike inspection robot
   1.2 Robot Sensor
   1.3 Scan matching and localization
   1.4 Starting Point
2 Literature Research
   2.1 Available Papers
      Problem Definition
      Extension of the search field
      Reading Papers
      Arrange Papers
      Classify Papers
   2.2 Phase 1: Initial Surface Estimation
      Data Points Without Noise
      Data Points With Noise
   2.3 Phase 2: Mesh Optimization
      Data Points Without Noise
      Data Points With Noise
   2.4 Choice of a Method
3 Surface Reconstruction from Point Clouds
   3.1 Problem Definition
   3.2 Phase 1: Initial Surface Estimation
      Neighborhood
      Average Points and Corresponding Planes
      Orient Planes Outside
      Divide Volume in Cubes
      Signed Distance Vectors
      Marching Cubes
      Results
      Discussion
   3.3 Phase 2: Mesh Optimization
      Energy Function
      Inner Minimization Problem
      Outer Minimization Problem
      Choice of the Edge
      Loop of Inner and Outer Problems
      Results
      Discussion
   3.4 Method of Cubes
      3.4.1 Cubes Including Projected Points
      3.4.2 Average Points and Planes in Cubes
      Intersection Points
      Build Triangles
      Connect Triangles
      Fill Gaps
      Results
      Discussion
4 Conclusion and Future Work
   4.1 Conclusion
   4.2 Future Work
A Symbols
Chapter 1: Introduction

There are many locations that are hardly accessible or not accessible at all by humans. These could be industrial plants that are difficult or impossible to disassemble, like pipes. The idea is therefore to design a robot that is able to explore such environments and provide a 3D representation of them to the operator. With this motivation a project named Magnebike was launched at ETH Zurich in 2006 [2]. Its goal is to design a mobile robot able to climb into a specific environment, to create a 3D representation map of it, to detect the location of the robot on this map and finally to adapt the low-level control electronics and sensors. My work is a part of this project. In the following sections 1.1, 1.2 and 1.3 the main ideas and realizations made so far are described according to the papers [2], [3] and [4]. In section 1.4 the starting point of my work is discussed.

1.1 Magnebike inspection robot

The Magnebike inspection robot (figure 1.1) is a climbing robot moving on complex-shaped 3D pipe structures and providing a 3D visualization of the environment [1]. It also detects its actual location in the map. The robot has to fulfill two main challenges [2], [5]: It has to move on complex ferromagnetic surfaces within pipes of a diameter of 200 mm up to 700 mm. 90° convex or concave obstacles with local abrupt changes in diameter of up to 50 mm are also possible. The robot should be able to follow a circumferential path. Furthermore, appropriate sensors should detect the points of a metallic surface and a program should process them into a surface representation, on which the robot has to identify its own location. The first challenge concerns the locomotion technique of the robot on the surface. This problem was solved by equipping the robot with magnetic wheels. One of them can rotate and therefore guarantees the circumferential path. This can also be seen in figure 1.2.
The second challenge concerns the choice of suitable sensors and programs for 3D representation and localization. Since the localization was chosen to be a combination of 3D scan registration and 3D odometry, a 3-axis accelerometer is necessary. The general data of the Magnebike inspection robot can be taken from table 1.1.
Figure 1.1: Magnebike inspection robot [5]

Table 1.1: Data of the Magnebike inspection robot [5]
Characteristics          Data
Size (L x W x H)         180 x 130 x 120 mm
Wheel diameter           60 mm
Mass                     3.5 kg
Magnetic wheel force     250 N
Max. speed               2.7 m/min

1.2 Robot Sensor

The main laser sensor has to be small and lightweight and needs to have a good performance in the environment described above. This implies that the sensor should have a high precision and provide high densities of points on metallic surfaces, like, for example, laser range finders. The paper [4] describes exactly why the Hokuyo URG-04LX 2D laser range scanner was chosen. It is light and quite small in comparison to other sensors, and although its accuracy depends strongly on the target's properties such as color, brightness and material, it is competitive enough to remain the favorite. As the name indicates, the sensor is 2D. To extend it to a 3D sensor, it is additionally rotated on a third axis. To determine the target distance the sensor uses amplitude-modulated laser light. It
sends it to the target and detects the reflected light wave; hence it obtains the phase shift and thereby the distance to the target. The exact data of the sensor is listed in table 1.2.

Figure 1.2: Robot on a flat and curved surface [5]

Table 1.2: Data of the Hokuyo URG-04LX 2D laser range scanner [3]
Characteristics of URG-04LX      Data
Weight                           approx. 160 g
Dimension (W x D x H)            50 x 50 x 70 mm
Measuring area                   distance 20 mm to 5,600 mm; scan angle 240°
Accuracy                         distance 20 to 1,000 mm: ±10 mm; distance 1,000 to 4,000 mm: ±1 % of measurement
Angular resolution               step angle approx. 0.36° (360°/1,024 steps)
Scanning time                    100 ms/scan
Ambient temperature/humidity     -10 to +50 °C, 85 % or less

The sensor takes measurements at distances between 20 mm and 5.6 m and over a scan angle of 240°. The accuracy depends on the distance between the target and the light emitter. If the target is located at a distance between 20 mm and 1 m, the accuracy is ±10 mm. If the distance is more than 1 m and less than 4 m, the accuracy is between 10 mm and 40 mm. For my work this means that the points near the robot will yield a fine surface with better accuracy than the points far away from the robot, which carry a lot of noise and probably lead to a surface that only approximates the real one. Since the step angle is constant, there are more points near the sensor and fewer far away from it. An important conclusion of this fact for my work is that the points are not uniformly distributed. The scanning time is a reference for the running time of the 3D representation program: if 50 scans are taken for a 3D representation, it takes 5 s. Ambient temperature and humidity determine the conditions under which the sensor can be used.
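As an illustration of how tilted 2D scans become a 3D point cloud, and why the constant step angle makes the density fall off with distance, here is a minimal sketch. The function name, the tilt-about-the-x-axis convention and the equally spaced beam model are assumptions of this example, not part of the thesis pipeline (which is implemented in MATLAB):

```python
import numpy as np

def scan_to_points(ranges, tilt, fov_deg=240.0):
    """Convert one 2D range scan into 3D points.

    ranges: measured distances [m] at equally spaced beam angles
            spanning the 240-degree field of view.
    tilt:   rotation of the scan plane about the sensor's x-axis [rad],
            produced by the extra rotation axis that turns the 2D
            scanner into a 3D sensor.
    """
    half = np.deg2rad(fov_deg) / 2.0
    beam = np.linspace(-half, half, len(ranges))
    # Point coordinates in the (untilted) scan plane.
    x = ranges * np.cos(beam)
    y = ranges * np.sin(beam)
    # Tilt the scan plane to obtain the third coordinate.
    return np.stack([x, y * np.cos(tilt), y * np.sin(tilt)], axis=1)

# Four beams: the constant step angle means neighbouring points are
# twice as far apart at twice the range, so the sampling is non-uniform.
pts = scan_to_points(np.array([1.0, 1.0, 2.0, 2.0]), tilt=0.0)
```

Accumulating such scans over many tilt angles yields exactly the kind of non-uniformly distributed point cloud discussed above.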
1.3 Scan matching and localization

The first step towards the localization of the robot is the matching of scans into a bigger representation map of the environment. Two scans having overlaps can be matched into a new scan with more points using the common ICP algorithm (Iterative Closest Point, proposed by P. J. Besl and N. D. McKay [6]). For this robot some modifications to the algorithm were made [2]. On the other hand it is also possible to use odometry to match the scans more efficiently. The purpose is to use these two operations together: odometry provides approximate information about the location, and the scan matching provides the exact or refined location of the robot. An example of scan matching is demonstrated in figures 1.3 and 1.4. The left picture in figure 1.3 shows a point cloud taken from a point of view A. On the right side the point cloud is taken from another point of view B. In figure 1.4 both of them are matched into a new point set containing points from the first and from the second point cloud (red and green respectively).

Figure 1.3: Point clouds from different points of view [5]
Figure 1.4: Matched point clouds [5]
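To make the matching step concrete, the following is a minimal, brute-force sketch of one ICP iteration: nearest-neighbour correspondences followed by the SVD (Kabsch) best-fit rigid transform. It is illustrative only and is not the modified ICP variant actually used on the Magnebike [2]:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: pair each source point with its nearest
    destination point, then compute the rigid transform (R, t) that
    best aligns the pairs (SVD / Kabsch solution)."""
    # Nearest-neighbour correspondences (brute force, O(n*m)).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    matched = dst[d2.argmin(axis=1)]
    # Best-fit rotation from the cross-covariance of the centred pairs.
    cs, cm = src.mean(axis=0), matched.mean(axis=0)
    H = (src - cs).T @ (matched - cm)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = cm - R @ cs
    return src @ R.T + t, R, t
```

In a full ICP loop this step is iterated until the alignment error stops decreasing; an odometry estimate, as used on the robot, supplies the initial guess so the nearest-neighbour matching starts close to the true pose.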
1.4 Starting Point

Problem Definition:
Given: noisy 3D point cloud detected from pipes with different topologies
To find: reconstructed and visualized surface built from the data points
Using: MATLAB, papers on the topic

The goal of my work is to reconstruct a surface from noisy point clouds in 3D space. The data points are provided by the sensors before or after scan matching. In the first part, Studies on Mechatronics, I have to research the papers and classify them. Then I have to choose a method to apply to the given noisy points. In the next part, the Bachelor Thesis, I have to implement the method in MATLAB, apply it to real point clouds provided by the Magnebike and analyze the results.
Chapter 2: Literature Research

This chapter forms a part of the work for Studies on Mechatronics. The main points of this subject are:
- searching for papers about the topic, starting with some selected papers
- classifying these papers by methods or other important criteria
- figuring out the actual state of the art and the main research directions

Section 2.1 describes how to find papers on a certain subject in general. The classification of these papers on the topic of surface reconstruction, and of the methods applied to this problem, is specified in the following sections. In the last section, 2.4, I make a decision which method to choose.

2.1 Available Papers

For my research I used the following services with additional access provided by ETH Zurich:
- Google Scholar
- CiteSeer, Scientific Literature Digital Library and Search Engine
- IEEE Xplore, Digital Library

The chosen search terms and their extensions are described in the following subsections.

Problem Definition

The problem to solve in this work is defined as follows: Building 3D representations from laser scans, with the description: The goal of this project is to generate environment representations from point clouds of laser scans. Data from single scans are used as local maps, whereas consecutive scans are merged to build a global 3D representation of the environment. [7]. The chosen keywords are: 3D representation, 3D mapping, surface reconstruction, point cloud, mesh.
Extension of the search field

While searching for papers I noticed the following six important names associated with the problem. Their publications can be found online:
- Nina Amenta
- Hugues Hoppe
- Tamal Krishna Dey
- Jean-Daniel Boissonnat
- C.-K. Tang and G. Medioni

There were also many other names of people who provided important parts of algorithms or solutions to some mathematical problems. It can be helpful to look for the latest papers of a certain author on a topic or algorithm, because they are often improvements of the first ones. To find the basic ideas, or to follow the history or changes of a method, one can take a look at older versions.

Reading Papers

Papers often have a defined structure:
1. Abstract
2. Introduction
3. Main content
4. Conclusion
5. Future work
6. References

Since papers on the problem in this work have technical content, it is sufficient to read the abstract, conclusion or future work and to look at the figures to follow the main ideas of the articles. If they seem interesting and important for a topic, the whole article can be read. While searching for papers on 3D reconstruction from noisy point clouds it was easy to tell from the figures whether an article was useful or not: there should be point clouds, reconstructed surfaces or meshes imaged. When an article did not contain such figures, this often indicated that the article dealt only with a specific solution or part of an algorithm, with another method, or that it was not useful at all. Also the words surface reconstruction from a point cloud, or synonyms, in the abstract were good signs for finding more information on the topic.

Arrange Papers

After filtering the useful ones from all papers found, the next step was to distinguish between important and less important papers. As important I defined papers having the following qualities:
- general description of a method
- references or names of used algorithms
- the method is applicable to a concrete problem

With the chosen important papers containing different methods I created a table with 11 columns: Author, Name, Year, Abstract, Works for, Doesn't Work for, Ideas, Remarks, Advantages, Disadvantages and Related Work. I sorted them by date because I was interested in the history and updates. Altogether I had 18 papers in the table (figure 2.1).

Figure 2.1: Table with papers on the topic: 18 rows x 11 columns

Classify Papers

According to this table I classified the papers and the methods described within them. I distinguish between two Phases. In the first Phase the mesh is built from a point cloud, but the surface is not necessarily smooth. In the second Phase the mesh obtained in the first is improved with the purpose of getting a smooth surface and reducing the data volume. The representation of the two stages is shown in the surface reconstruction of a bunny in figure 2.2. Due to the fact that the methods applicable to noise-free data sets often cannot be applied to data sets with noise, it is also important to distinguish between noise-free data sets and those with noise. It is possible to use filters on noisy data points and thus get a set of noise-free data points. But it is difficult to develop a good and robust filter which is able to differentiate between points of the real surface and noisy points without making mistakes. For this reason there are methods for data points with noise using functions similar to filters, as well as methods very different from those for noise-free data. The action of Phases 1 and 2 is shown in figures 2.3 and 2.4. The violet boxes describe multiple methods described in various papers, the yellow ones only one method.
Figure 2.2: a) the original object: bunny; b) bunny representation after Phase 1; c) bunny representation after Phase 2 [8]
Figure 2.3: Phase 1: mesh built from a point cloud
Figure 2.4: Phase 2: optimization of the Phase-1 mesh
2.2 Phase 1: Initial Surface Estimation

As already mentioned, in Phase 1 the first approximation of the surface from a point cloud is made. An overview of Phase 1 is shown in figure 2.3. It is important to distinguish between methods only applicable to point clouds without noise and those applicable with noise. If there is no noise, methods based on Delaunay Triangulation and Neural Networks can be used. Otherwise, if noise is present, data points can be represented using methods based on eigenvalue and eigenvector decomposition, Hoppe's idea with the integrated Marching Cubes method, Neural Meshes or approaches including interpolation. The methods used for data points with noise can also be applied to noise-free problems, but this is not advisable due to the higher computing time.

Data Points Without Noise

Data points without noise are points perfectly fitting the surface of the object. Since no sensors are perfect, these points are often generated artificially. The surface reconstruction of such point clouds can be of use to start solving point-cloud problems in general, to provide the surfaces of unknown objects with exactly known points, and to quickly test a method on a specific problem. There are two types of methods only applicable to noise-free point clouds: methods based on Delaunay Triangulation and Neural Networks.

Figure 2.5: Phase 1, overview of the methods applicable to noise-free data points

An overview of the procedures based on the Delaunay Triangulation is shown in figure 2.5. The first section includes methods of surface reconstruction from noise-free data points based on algorithms with a description of how to find and connect the neighboring points. Power Crust is an interpolating method that is deduced from the Voronoi Diagrams and that uses 3D Delaunay Triangulation to build the Power Shape. In the third section the parametrization of the point cloud is utilized to triangulate the points into a mesh.
The last section corresponds to a method using Neural Networks.

Neighborhood: The main idea of this method is to reconstruct a smooth surface from unorganized sample points using nearest neighbors. That is only possible because there is no noise
and the points lie in the same layer of the surface.

Triangulation algorithm of Oblonsek and Guid [9]: The method consists of two stages: Phases 1 and 2. In the first Phase the base approximation of the object surface is achieved using 2D Delaunay Triangulation. The main idea is to find the neighborhood (neighbor points) of each point from the data set and connect them into triangles. The triangulation starts with one triangle and is expanded by adding others satisfying the conditions demonstrated in figure 2.6. The conditions relate to the boundary edges and vertices of the latest triangulated mesh. This means that for every new triangle added, the triangulated mesh and the boundaries must be updated. The advantage of this approach is the linear running time complexity^1. The drawbacks are the requirements on the noise-free point set: the maximal distance between each pair of points has to be smaller than half the radius of the maximal curvature, the point set must be isotropic^2, and geometrically close points should be topologically close. It also does not yield good results for surfaces with gaps.

Figure 2.6: Cases in which a new triangle is added into the triangulation. The boundaries are the main components in this procedure [9].

Natural Neighbor Interpolation of Distance Functions [8]: This method uses Delaunay Triangulation and Voronoi Diagrams to define the natural neighbors^3. The triangulation part then uses distance functions to define the distances between the point set and the triangles; therefore this method is an interpolation (data points do not necessarily build the vertices of the mesh). If the distance is zero^4, the triangulated mesh fits the data points perfectly. The advantages of this procedure are that it handles point sets with a non-uniform distribution as well as sparse sets, does not use any user-tuned parameters, is theoretically guaranteed and has an integrated Phase 2.
It can also be adapted to different errors (zero-set values) so that the number of triangles in the mesh or the smoothness of the mesh can be changed, respectively. The limitations are: the normals to the surface must be known or computed by linearizing the surface with planes, and the surface is assumed to be smooth (without sharp edges) and without boundaries (like a sphere). The results of the surface reconstruction with this method are illustrated in [8].

3D Delaunay Triangulation [10]: The surface representation is built by the sides of tetrahedra filling the whole volume of the object, using Delaunay Triangulation in 3D.

^1 The running time is directly proportional to the number of points.
^2 The closest points to each point p_i from the data set must lie on both sides of the normal plane through p_i.
^3 Natural neighbors of a point x are the neighbors of x in the Delaunay triangulation.
^4 Zero-set approach.
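The 3D Delaunay construction just described can be sketched in a few lines; as an illustration only, here it is in Python with SciPy (assuming SciPy is available; the thesis itself works in MATLAB):

```python
import numpy as np
from scipy.spatial import Delaunay

# Vertices of a unit cube, as in the cube example of this section.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                dtype=float)

# 3D Delaunay triangulation: tetrahedra filling the cube's volume.
tri = Delaunay(cube)
```

Each row of `tri.simplices` is one tetrahedron (four vertex indices); the union of all tetrahedra fills the volume, which is exactly why a hollow surface representation needs the inner faces to be filtered out afterwards.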
Figure 2.7: Triangulation algorithm of Oblonsek and Guid: a) point cloud not satisfying the conditions; b) point cloud satisfying the conditions; c) surface reconstruction of the point cloud in b) [9].

The advantage is the direct implementation in MATLAB. The drawback of this method is its applicability only to objects with closed surfaces without leaks or smooth branches. In figure 2.8 this method is applied to the vertices of a cube.

Figure 2.8: 3D Delaunay Triangulation of a cube [10].

Voronoi-based Surface Reconstruction [11], [12]: In this method 3D Delaunay Triangulation, 3D Voronoi Diagrams (figure 2.9) and Medial Axes are used to reconstruct the surface from unorganized sample points. After application of this method all sample points are connected into triangles and a surface is built. Advantages of Voronoi-based surface reconstruction are a short running time (dominated by the computing time of the 3D Delaunay Triangulation), that it does not need any user-defined parameters and that it is simpler and more direct than
the zero-set approach. Problems appear with objects that have sharp edges and boundaries. Furthermore one needs special filters to get a hollow surface of an object: since 3D Delaunay builds tetrahedra (see also [10]), the triangles within the object must be cancelled, as well as all small triangles normal to the surface that build another layer. Then this approach can also represent surfaces with boundaries as hollow objects.

Figure 2.9: Comparison of the Voronoi Diagram in 2D (a) and in 3D (b) [12]. Black points are data points and red lines build the Voronoi Diagrams.
Figure 2.10: Voronoi-based surface reconstruction [12].

Power Crust and Power Shape [13], [14]: This is a piecewise linear approximation of the surface over the points using the MAT^5, deduced from the weighted Voronoi Diagrams. The Power Crust is built by the MAT and therefore its faces are not triangles (the faces are built by points of intersection between the balls in 3D space). Since this method is an interpolating method, its vertices are not necessarily sample points, just as not all sample points are necessarily vertices of the reconstructed surface. The Power Shape can be deduced from the Power Crust using 3D Delaunay Triangulation and is therefore a triangulation mesh. The steps of this procedure in 2D are demonstrated in figure 2.11. Special about this approach is the avoidance of the polygonization^6, hole-filling^7 and manifold extraction

^5 The Medial Axes Transform represents an object as an infinite union of balls; consider figure 2.11.
^6 The subdivision of a plane or surface into polygons.
^7 Hole-filling is used to obtain a closed mesh.
steps^8. The results for objects with smooth surfaces are very good, as can be observed in figure 2.12. The drawbacks of Power Crust and Power Shape are the expensive costs of computing the Medial Axes Transform. Since Power Shape uses 3D Delaunay Triangulation, the volume of objects is filled; to get a hollow object representation the inner components must be eliminated.

Figure 2.11: Power Crust: a) an object with its Medial Axes; b) Voronoi Diagrams; c) inner and outer polar balls centered on the Voronoi lines; d) Power Diagram (Medial Axes Transform); e) Power Crust [13]
Figure 2.12: Power Crust: a) inner polar balls; b) Power Crust built by the inner polar balls (balls coming out of the object are cut off, leaving a hole) [14]

Meshless Parameterization [15]: In this method a 3D point set is first parameterized^9 by solving a sparse linear system. Then the mapped points are triangulated using Delaunay Triangulation.

^8 Optimization of the Phase 1.
^9 Mapped into a planar parameter domain [15].
The triangles of the mapped points correspond to the mesh triangulation of the initial point set. These data points can be unorganized, but they must derive from a single surface patch. The advantages of this approach are its independence from any given topological structure and its good results; the drawback is the long computing time. Some examples of this approach are shown in figure 2.13. The picture c) on the right side is the result if noise is present in the data points: the features are rough and the person is quite difficult to recognize.

Figure 2.13: Meshless Parameterization: a) parameterized point set; b) mesh for data points without noise; c) mesh for data points with noise [15]

Neural Network [16]: With this method the surface can be reconstructed from a dense unorganized collection of scanned point data using neural networks. The main idea is to collect more detailed information about an object's shape using randomized selection. That means the shape reconstruction from an unorganized point cloud is done without arranging the point elements. For this approach there are three important specific local features to be analyzed: pixel depth, surface normals and curvatures. The advantages of the Neural Network are its simplicity, efficiency and uniformity, and its accurate results. It also has an integrated Phase 2. The drawback is the use of many specific functions and methods (Sigmoid Function, Zernike Moments, Learning, Neural Network).

Figure 2.14: Overview of the method Neural Network [16]

Data Points With Noise

Data points with noise arise from real scans because real systems are never perfect. There are point clouds with a lot of noise and those with less
noise. If there is only a little noise present, the point cloud can be assumed to be noise-free, but in the majority of cases this is not possible. The environment influences the sensors and it is difficult to predict the resulting point cloud. Therefore the methods for noisy data points have to be very robust. To validate the results it is possible to generate an artificial data set of points coming from an object, with noise added using randomizing functions. There are many methods applicable to point clouds with noise. Some of them are extensions of the methods used for noise-free data points. As mentioned above, all these methods can also be applied to point clouds without noise, but this is not recommended due to the high costs. An overview of these methods is shown in figure 2.15.

Figure 2.15: Overview of Phase 1 for noisy data points

Methods including Interpolation

RBF (Radial Basis Functions) [17]: In this method Radial Basis Functions^10 and low-pass filtering (implicit smoothing) are used to reconstruct a surface from range data. The computations also involve convolutions, the Fourier Transform and discrete smoothing. The advantages of the method are its independence from the degree of smoothing, its effectiveness and its visually good results. The main drawback of this procedure is the interpolation over large and irregular holes, which means that it is not recommended for objects with smooth branches and gaps. Other problems appear with the discretization (aliasing) and the usage of interpolating and filtering functions.

BPA (Ball-Pivoting Algorithm) [18]: BPA computes a triangle mesh interpolating a given point cloud and is related to Alpha Shapes^11. The main idea is to take a ball of a user-defined radius r and let it roll along the data points. If it touches three points at the same time without containing other points, these build a triangle and are accepted as points of the triangle mesh.
This idea in 2D (building lines instead of triangles) is illustrated in figure 2.17. The handling of data points with noise in 2D is represented in figure 2.18. The advantages are its robustness, efficiency and flexibility (setting the radius parameter r).

^10 A Radial Basis Function can be interpreted as a simple kind of Neural Network.
^11 Alpha Shapes is a method used to generate a possible hull of the object.
Figure 2.16: Radial Basis Function applied to a point cloud a) with the result b); c) visualization of the RBF describing the distance to the surface [17]

The drawbacks are the use of the user-defined parameter r (radius of the balls), expensive costs for objects with different degrees of smoothness, and inconsistent avoidance of noise. Since only one layer of points is accepted (see figure 2.18), it is not guaranteed that this layer is the best approximation of the real surface, because the noise can occur on both sides of the real surface. Therefore, for data points with a lot of noise, an average surface approximation would be a better approach.

Figure 2.17: Ball-Pivoting Algorithm in 2D; a) a good choice of r; b) with the same r as in a), a sparse data set causes holes in the mesh; c) when the curvature is larger than 1/r some features cannot be represented and remain missing [18]
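The pivoting test itself can be sketched: a triangle is accepted when a ball of the user-defined radius r can rest on its three vertices without containing any other data point. The following is a minimal NumPy illustration of that acceptance condition only; it checks just one side of the triangle and is not the full BPA with its pivoting front (the function names are mine):

```python
import numpy as np

def pivot_ball_center(p0, p1, p2, r):
    """Center of a ball of radius r resting on the three triangle
    vertices, or None if r is smaller than the circumradius."""
    a, b = p1 - p0, p2 - p0
    n = np.cross(a, b)
    n2 = float(n @ n)
    if n2 < 1e-12:
        return None                                  # degenerate triangle
    # Circumcenter of the triangle (lies in the triangle's plane).
    c = p0 + np.cross((a @ a) * b - (b @ b) * a, n) / (2.0 * n2)
    h2 = r * r - float((c - p0) @ (c - p0))          # squared lift height
    if h2 < 0:
        return None                                  # ball too small
    return c + n / np.sqrt(n2) * np.sqrt(h2)

def accepts(points, i, j, k, r):
    """BPA-style test: triangle (i, j, k) is accepted if the r-ball
    touching its vertices contains no other data point."""
    c = pivot_ball_center(points[i], points[j], points[k], r)
    if c is None:
        return False
    d = np.linalg.norm(points - c, axis=1)
    mask = np.ones(len(points), dtype=bool)
    mask[[i, j, k]] = False
    return bool((d[mask] >= r - 1e-9).all())
```

The sketch makes the drawbacks discussed above tangible: the result depends entirely on the user-chosen r, and a noisy point close to the surface is enough to make the test reject a perfectly good triangle.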
Figure 2.18: Ball-Pivoting Algorithm in 2D for noisy data points; a) surface samples lying below are not touched by the balls and remain isolated; b) an outlier is isolated if its surface orientation (in 2D: line orientation) is not consistent with the surface orientation of the object; c) the choice of the radius can affect the results by creating a double layer out of a surface [18]
Figure 2.19: Ball-Pivoting Algorithm applied to real data points [18]

Power Crust [19], [14]: This method for noise-free data sets was already introduced above. Its extension using extra limitations on the polar balls (see figure 2.11 c)) leads to a method applicable to noisy point clouds. This method, using interpolation, obviously conforms to the shape of the object better than polygonal models using triangulation. The results are very good for data points with noise as well as for objects with smooth or sharp features. It also works for objects with holes and gaps. The main drawbacks of this method are the big number of faces compared with other triangulating methods and the expensive cost of computing the MAT.

Hoppe's idea with integrated Marching Cubes [20], [21], [1]: The main idea of this method is to approximate the surface of an object using a zero set and local linearization. That means the surface can be assumed to be built out of tangent planes through surface points. If the number of points is big, the surface is better approximated, but the amount of time needed for computation also rises. It is the other way round if the number is small: the surface is rough, but it can be computed quickly. To create a triangulation mesh the Marching Cubes algorithm [21] is used. In this algorithm the whole volume is subdivided into cubes. Then only the cubes cut by the surface of the object remain. With these interfaces, special weights and an MC table, the triangles of the triangulation are built.
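The tangent-plane and signed-distance construction at the heart of this method can be sketched in a few lines. This is a simplified illustration of Hoppe's Phase 1 only: tangent planes from PCA over the k nearest neighbours, and the signed distance to the nearest plane. The consistent orientation of the normals (which Hoppe obtains by propagation over a graph of neighbouring planes) and the Marching Cubes extraction are omitted:

```python
import numpy as np

def tangent_planes(points, k=8):
    """Fit a tangent plane (centroid o_i, unit normal n_i) at every
    point from its k nearest neighbours via PCA.  NOTE: the normals
    are not consistently oriented here; Hoppe's orientation step is
    omitted in this sketch."""
    centers, normals = [], []
    for p in points:
        idx = np.argsort(((points - p) ** 2).sum(axis=1))[:k]
        nb = points[idx]
        o = nb.mean(axis=0)
        # Normal = eigenvector of the smallest covariance eigenvalue.
        w, v = np.linalg.eigh(np.cov((nb - o).T))
        centers.append(o)
        normals.append(v[:, 0])
    return np.array(centers), np.array(normals)

def signed_distance(x, centers, normals):
    """Signed distance of x to the surface: distance to the tangent
    plane whose center is nearest to x (the zero set of this function
    is the estimated surface)."""
    i = ((centers - x) ** 2).sum(axis=1).argmin()
    return float((x - centers[i]) @ normals[i])
```

Sampling this signed distance at the corners of the cube grid is exactly the input Marching Cubes needs: a cube is kept when the function changes sign across it, and the zero crossings along its edges give the triangle vertices.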
This method has many advantages regarding the topology of the object: the presence of boundaries, gaps and holes as well as a big amount of noise are handled well. The disadvantage is that the smoothness of the mesh, and hence the number of triangles in the mesh after Phase 1, is determined by the smoothest part of the object. However, there also exists a paper by the same authors with methods for Phase 2 [22], which can take a rough mesh after Phase 1 and the data points as input and output a new, refined mesh. The other drawback of this method is its user-defined parameters.

Figure 2.20: Power Crust results: a) point cloud; b) Power Crust; c) transparent Power Crust with its simplified Medial Axes in 3D; d) Power Crust of a noisy point cloud [14], [19]
Figure 2.21: The result of Phase 1 using the method of H. Hoppe: a) original object; b) point cloud; c) surface reconstruction [22]

Neural Meshes [23]: The Neural Meshes are based on a Neural Network extended by a Learning Algorithm. The topology of the object surface is learned through special operators using statistics. The algorithm contains sampling, smoothing, connectivity changes and topology learning.
The results of the algorithm look quite good and all user-tuned parameters can be set intuitively. The drawbacks of the algorithm are the number of user-tuned parameters (9 parameters), the absence of any guarantee for the results and its low speed.

Figure 2.22: The results of the Neural Meshes for a noisy point cloud using different parameters [23]

Methods based on Eigenvalues and Eigenvectors

Tensor Voting [24]: The main idea is to use Tensor Voting and Tensor Decomposition to reconstruct the surface. In the very first step every point is represented as an isotropic tensor (a ball) of unit radius. In the next step the points communicate with each other in their neighborhood and obtain new information about their orientation and curvature (with respect to the surface). The Tensor Decomposition is then used to get a 3D ball, 3D plate and 3D stick component for each point (see figure 2.23). With this information a saliency tensor field can be built. Special about this approach is the surface representation from curves, normals and points. Feature extraction is then applied to get a smooth surface. The advantages of this method are good results for different topologies of surfaces and its robustness. The drawback is the usage of a user-defined parameter.

Anisotropic Basis Functions [25]: In this method a Tensor Field and Anisotropic Basis Functions (ABF) are used to reconstruct the surface. The main idea of ABF is to reconstruct objects with asymmetry on the edges and sharp features using tensors. There also exist Isotropic Basis Functions (IBF) that smooth out the edges, i.e. do the converse procedure (see figure 2.24). After the first reconstruction, filtering is used to remove the noise (see figure 2.25). Since the procedure uses interpolation, the mesh does not consist of triangles. The advantage of this procedure is its very good results for sharp edges. The drawback is the uneven surfaces (especially if noise is present) caused by ABF.
Figure 2.23: Tensor Decomposition and Tensor Voting for a point cloud with noise [24]

Figure 2.24: Comparison between Isotropic (a) and Anisotropic (b) Basis Functions for sharp edges [25]

Figure 2.25: The results: a) using IBF; b) using ABF; c) using ABF and filters; d) final textured reconstruction [25]
Phase 2: Mesh Optimization

Once Phase 1 is completed, Phase 2 can be applied. As already mentioned in section 2.2, some methods in Phase 1 have an integrated Phase 2. Some of them can additionally be refined with Phase 2, but not all. Phase 2 can be applied to most triangulation meshes, whereas most other mesh representations without polygonization already include it. The main idea is to refine the surface in the parts where it is smooth and to reduce the number of vertices in the mesh. Since these two procedures interfere with each other, a compromise must be found. As in Phase 1, there is also a distinction between data points with and without noise. It is possible to try to apply a method of Phase 2 for noise-free data points to data points with noise, but there is no guarantee that it works.

Data Points Without Noise

A triangulation mesh can be improved by reducing the number of triangles in flat regions and increasing their number in areas with high curvature. The representation of sharp edges is also important, because after Phase 1 the edges are often smoothed. It is also possible to interpolate the data with a continuous surface. In theory this means computing a surface with an infinite number of triangles, which is not possible in practice. For this reason a continuity rate is defined to set the stop criterion.

Figure 2.26: Overview of Phase 2 for noise-free points

Sharp Edges [9]: The points of the triangulation mesh are classified into simple vertices and vertices lying on a sharp edge. The distance vector from the vertex to the plane for simple vertices, and to the sharp edge for the others, is computed and added to the vertex, so that it lies in the plane or on the sharp edge respectively.

Figure 2.27: Classification of the vertices: a) simple vertex; b) vertex lying on a sharp edge [9]
Change number of triangles [9]: In the regions with high curvature of the surface, a division of triangles is carried out to get a smooth surface. Merging the triangles in areas with low curvature saves space and computing time in the next steps.

Figure 2.28: Division of a triangle into 4 new smaller triangles to get a smooth surface [9]

Interpolation [9]: It is possible to approximate a curve by a continuous function. For example the Nielson side-vertex method can provide a surface with divided triangles until the stop criterion is reached. Without this criterion the division would occur without end, because the best approximation consists of an infinite number of triangles. Another approach would be to interpolate the surface by a 3D parameterized function. The problems here arise with the coordinates: a function is defined as a unique assignment of a value to each input of a specified type. So given, for example, a pipe winding like a snake around the z-axis, it is impossible to reconstruct it with the help of a single continuous function. It must first be split into parts with unique z-values, which can prove to be very difficult, before it can be interpolated. Then the parts are put together again.

Data Points With Noise

The handling of noisy data points in Phase 2 is also different from that of noise-free data. Some methods were described by H. Hoppe [22],[1]. All of these methods are only applicable to triangulation meshes.

Hoppe's Energy Function

The Energy Function of Hoppe is defined in the following way [22]:

E(K, V) = E_{dist}(K, V) + E_{rep}(K) + E_{spring}(K, V)   (2.1)

with the definitions:

E_{dist}(K, V) = \sum_{i=1}^{n} d^2(x_i, \phi_V(|K|))   (2.2)

E_{rep}(K) = c_{rep} \, m   (2.3)

E_{spring}(K, V) = \kappa \sum_{\{j,k\}} \| v_j - v_k \|^2   (2.4)

E_dist represents the distance from the mesh to the sample points, E_rep penalizes meshes with a large number of vertices and E_spring represents the importance of the distances between the vertices.
Hence the function E represents a compromise between a smooth surface and a small number of triangles.
Figure 2.29: Overview of Phase 2 for noisy data points

Edge Split [1]: Two triangles corresponding to an edge are split into four new triangles.

Edge Collapse [1]: Two triangles disappear.

Edge Swap [1]: The connections between two triangles change.

Figure 2.30: Edge collapse, edge split, edge swap [22]

Piecewise Smooth Subdivision Surface Optimization

Subdivision Matrix [1]: The matrix was introduced by Loop. Using it allows us to produce tangent plane continuous surfaces of arbitrary topological type. Since evaluating it exactly would require an infinitely long time (analogous to Interpolation), a stop criterion or rate of continuity has to be chosen.
Figure 2.31: The results after Phase 2 applied on Phase 1 from Hoppe's idea introduced in section [22]

Edge Tag [1]: is an additional function to the ones already introduced: edge split, edge collapse and edge swap. This function defines whether an edge is sharp or not and adds this property to the edge.
Choice of a Method

Since there are many different methods to reconstruct the surface, I had to find important criteria to decide. The first one is that the point cloud is noisy. So all methods only applicable to noise-free data are not further considered. The other criteria are: kind of mesh (triangles or not); speed; topology (whether the topology of the pipe where the robot moves satisfies the conditions of the methods); result (how the meshes visually look); information (whether there is enough information in the papers on a topic) and noise (robustness of the method). The green fields show good results. I decided to mark the triangulation meshes as good, because they are practical to use and easier to modify manually. The triangles can easily be taken out of the mesh, they can be modified, and it is possible to find boundaries using their connection properties. Furthermore it is not a problem to add new triangles produced from another part of the pipe to the existing mesh. Another point is that there are many methods to improve the mesh in Phase 2. The columns Result and Information were evaluated by me and are more subjective. Table 2.1 shows that the method of H. Hoppe appears to be the best for the problem defined by the environment and sensors of the Magnebike robot.

Table 2.1: Table to decide which method to apply to reconstruct the point clouds provided by the Magnebike robot.
Chapter 3 Surface Reconstruction from Point Clouds

This chapter describes the main part of my work, namely the Bachelor Thesis. In the previous chapter the methods of surface reconstruction from point clouds were discussed and a method applicable to the actual problem was chosen. Extended research of further papers on the topic led me to the script of H. Hoppe [1] with a detailed description. Equipped with this information I implemented the phases 1 and 2 of the script. In section 3.4 an alternative method similar to Marching Cubes, developed by me, is described. A more precise explanation of Marching Cubes is given in section 3.2.6.

Problem Definition

Figure 3.1: Noisy Point Cloud

Given: a set X containing 3D noisy points x_i, i = {1,..., n}, with n the number of points. The unknown original surface U, from which the set X arises, is of arbitrary topology, including boundaries and discontinuities.
Software: MATLAB
To find: a triangulation mesh best approximating U
Since the real point set contains a huge number of points and the topology is not trivial, an artificially generated point cloud with noise forming a cylinder was used (Figure 3.1) to visually verify the steps. The visualizations in the following sections 3.2 and 3.3 are based on this point cloud. In section 3.2.7 the results are shown applied on real data sets.

3.2 Phase 1: Initial Surface Estimation

The main steps in phase 1 are:

1. find the neighborhood (neighbor points) of each point lying in the ball of radius ρ with the center in this point
2. compute the average point of each neighborhood and corresponding planes representing the linearized surface at this point
3. orient all planes looking outside
4. divide the whole volume in cubes of size ρ
5. build signed distances from the vertices of the cubes to the plane corresponding to the next average point
6. apply the method of Marching Cubes to get a triangulation surface

Neighborhood

Given noisy data points x_i can be described as x_i = y_i + e_i, with y_i ∈ R³ a point on U without noise and e_i ∈ R³ an error vector representing the noise. If ‖e_i‖ ≤ δ for all i, then the set X is called δ-noisy. The density of the points is defined by the noise-free point density, i.e. the density of the y_i. Let Y = {y_1,..., y_n}; then Y is ρ-dense if every sphere of radius ρ with center on U contains at least one point of Y. It means the minimal distance of each point of Y to the other points of Y is smaller than ρ. Summarizing these definitions, the following conclusions can be drawn:

If the surface U has holes of radius r, they can only be represented in the surface reconstruction if they are bigger than δ + ρ.

A point x_i is an outlier if the minimal distance of this point to all other points of X, d(x_i, X), is bigger than δ + ρ.

Two sheets of the surface U are assumed to have at least the distance ρ + 3δ between each other to be represented correctly.
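The ρ-density and outlier conclusions above translate directly into a radius search. A minimal sketch in Python, assuming scipy's `cKDTree` as a stand-in for the MATLAB KD-tree packages used in the thesis (the function name is illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def build_neighborhoods(points, rho, delta):
    """Radius search with outlier removal, following the conclusions above.

    Pass 1 drops outliers: points with no other point within rho + delta.
    Pass 2 recomputes the neighborhoods on the filtered set. This is a
    sketch, not the thesis's MATLAB implementation.
    """
    r = rho + delta
    tree = cKDTree(points)
    nbhds = tree.query_ball_point(points, r=r)
    # A point's own index is always in its ball, so an outlier has len == 1.
    keep = [i for i, nb in enumerate(nbhds) if len(nb) > 1]
    filtered = points[keep]
    tree2 = cKDTree(filtered)
    return filtered, tree2.query_ball_point(filtered, r=r)
```

This mirrors the two-pass use of the KD-tree function described in the next subsection: first eliminate outliers, then recompute neighborhoods on the reduced data set.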
The assumption 3δ comes from taking sampling noise into account¹. The very first step is then to find ρ and δ and to set the radius of a neighborhood equal to ρ + δ. For artificially generated points they are known. Their choice in general is discussed in section 3.2.8. To find the neighborhood Nbhd(x_i) of each point x_i, a KD-tree can be used. Embedding special KD-tree packages in MATLAB, a function can be built with the following input and output:

[neighbors x_j of a point x_i, data set] = function (data set, point x_i, ρ, δ)   (3.1)

This function is used two times as follows:

¹ Since there are often too many points in a laser scan, the points are sampled: only some of them remain for the surface representation. The sampling problem is discussed in section 3.2.8.
1. to find outliers and eliminate them, defining a new data set:

[data set] = function1 (data set, point x_i, ρ, δ)   (3.2)

2. to find neighbor points for each point of the new data set:

[neighbors x_j of a point x_i] = function2 (data set, point x_i, ρ, δ)   (3.3)

The elimination of the outliers uses the conclusions above. After the elimination the data set is the same or smaller.

Average Points and Corresponding Planes

Having the points of a neighborhood, the next step is to find their orientation with respect to the linearized plane of U in each neighborhood. To do so, the average point and a covariance matrix must be computed for each neighborhood. The procedure consists of 5 steps:

1. compute the centroid o_i of each Nbhd(x_i)
2. compute the covariance matrix CV_i for each neighborhood Nbhd(x_i)
3. compute the eigenvalues of CV_i, so that λ_{i,1} < λ_{i,2} < λ_{i,3}
4. compute the eigenvector v¹_i associated with λ_{i,1}
5. set the normal of the tangent plane going through the point o_i equal to v¹_i

To compute the centroid o_i of the neighborhood Nbhd(x_i), the mean value of the neighborhood points for each coordinate is computed. It means the centroid is the average point of Nbhd(x_i). The covariance matrix CV_i is computed using the centroid o_i and the neighborhood Nbhd(x_i):

CV_i = \sum_{j=1}^{m} \begin{pmatrix} (p_{j,1} - o_{i,1})^2 & cv_{12} & cv_{13} \\ cv_{12} & (p_{j,2} - o_{i,2})^2 & cv_{23} \\ cv_{13} & cv_{23} & (p_{j,3} - o_{i,3})^2 \end{pmatrix}   (3.4)

with cv_{12} = (p_{j,1} - o_{i,1})(p_{j,2} - o_{i,2}), cv_{13} = (p_{j,1} - o_{i,1})(p_{j,3} - o_{i,3}), cv_{23} = (p_{j,2} - o_{i,2})(p_{j,3} - o_{i,3}), p_j ∈ Nbhd(x_i), m the number of points in Nbhd(x_i), p_j = (p_{j,1}, p_{j,2}, p_{j,3}) and o_i = (o_{i,1}, o_{i,2}, o_{i,3}). The matrix represents, so to say, the oriented distances between the points x_j in a Nbhd(x_i). The unit eigenvector associated with the smallest eigenvalue of the covariance matrix CV_i is the normal vector to these oriented distances, or respectively the normal vector of the tangent plane of the surface. The results are shown in figure 3.3.
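The five steps above amount to a principal component analysis of each neighborhood. A sketch using numpy's eigendecomposition in place of the MATLAB implementation (`eigh` returns eigenvalues in ascending order, so the first eigenvector belongs to the smallest eigenvalue):

```python
import numpy as np

def tangent_plane(nbhd):
    """Centroid and unit normal of the least-squares plane of a neighborhood.

    The normal is the eigenvector of the covariance matrix (equation 3.4)
    associated with the smallest eigenvalue, as in steps 1-5 above.
    A sketch, not the thesis's MATLAB code.
    """
    o = nbhd.mean(axis=0)        # step 1: centroid = average point
    d = nbhd - o
    cv = d.T @ d                 # step 2: covariance matrix CV_i
    w, v = np.linalg.eigh(cv)    # steps 3-4: eigenvalues ascending
    return o, v[:, 0]            # step 5: normal = smallest-eigenvalue eigenvector
```

For a neighborhood that lies almost in a plane, the two large eigenvalues span the plane and the small one points along the normal, which is exactly the "oriented distances" interpretation given above.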
Figure 3.2: Data points in blue and centroids in green

Orient Planes Outside

The procedure of orienting the normals so that they all point in the same direction requires 4 steps:

1. compute the neighborhood for each centroid o_i
2. compute the weights of the orientation of two vectors associated with neighbor centroids and build a weighting matrix
3. orient all normals looking outside or inside using the Minimal Spanning Tree
4. orient all normals looking outside

Neighborhood of centroid

In the first step the neighborhood of each centroid in the data set of centroids is found using the function:

[neighbors o_j of a point o_i] = function2 (data set, point o_i, ρ, δ)   (3.5)

Weighting matrix

Next the weights must be defined: if two vectors are parallel their weight should be 0, and if they are orthogonal to each other their weight should be 1². Other combinations therefore have weights in (0, 1). This correlation arises from the definition of the Minimal Spanning Tree, which connects first the points with the lowest costs. In this case, when orienting the normals, preferably parallel neighbor vectors (normals) are wanted, because they better guarantee the consistency of the approach. Therefore the weights are defined as:

w_{i,j} = 1 - |v_i · v_j|   (3.6)

The weighting matrix W is built from the w_{i,j} in the corresponding rows and columns. If two vectors do not lie in a neighborhood, their weight is set to ∞, and therefore these vectors cannot be connected after the application of the MST.

² the vectors v_i and v_j are unit vectors and therefore |v_i · v_j| ∈ [0, 1]
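The weighting scheme, together with the minimal-spanning-tree propagation it is built for, can be sketched as follows. This uses scipy's graph routines instead of MATLAB's; the small epsilon added to the weights is an implementation detail that keeps zero-weight edges (perfectly parallel normals) from vanishing in the sparse representation, and the final global flip follows the minimal-x rule described in the next subsection:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def orient_normals(centroids, normals, neighbor_lists):
    """Propagate a consistent orientation over the MST of the weight graph.

    Edge weight 1 - |v_i . v_j| (equation 3.6) is small for near-parallel
    normals, so the MST prefers flip decisions between well-aligned
    neighbors. Sketch code, not the thesis's MATLAB implementation.
    """
    n = len(centroids)
    rows, cols, w = [], [], []
    for i, nbs in enumerate(neighbor_lists):
        for j in nbs:
            if j != i:
                rows.append(i)
                cols.append(j)
                # epsilon keeps exactly-zero weights as explicit sparse entries
                w.append(1.0 - abs(np.dot(normals[i], normals[j])) + 1e-9)
    mst = minimum_spanning_tree(csr_matrix((w, (rows, cols)), shape=(n, n)))
    order, pred = breadth_first_order(mst, i_start=0, directed=False)
    oriented = normals.copy()
    for i in order[1:]:
        if np.dot(oriented[pred[i]], oriented[i]) < 0:
            oriented[i] = -oriented[i]       # flip to agree with the MST parent
    # Global flip: check the normal at the centroid with minimal x coordinate.
    k = int(np.argmin(centroids[:, 0]))
    if oriented[k, 0] < 0:
        oriented = -oriented
    return oriented
```

Missing neighborhood relations simply have no edge in the sparse matrix, which plays the role of the ∞ entries in W.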
Figure 3.3: Normals and middle points

Minimal spanning tree

In MATLAB there already exists a function creating the Minimal Spanning Tree from a weighting matrix. It connects all nodes with the weighted relations defined in the weighting matrix W in order of lowest costs. Embedding this function, a new function can be created:

[oriented normals] = function3 (normals, centroids, weighting matrix, ρ, δ)   (3.7)

Orient normals looking outside

Marching Cubes (see section 3.2.6) requires normals oriented outside. Since after the application of function3 the normals are oriented either all outside or all inside, an extra step is necessary. The main idea for orienting all normals outside is to take the centroid with the minimal value in coordinate x and analyze the sign of the x coordinate of the associated normal: if it is negative, the orientation of all normals must be changed, and if it is positive, the oriented normals already look outside. This relation comes from the multiplication of the vector v_i, corresponding to the centroid with the minimal entry in x coordinate, with the unit vector (1, 0, 0)^T. If they are oriented in the same direction, the normals are oriented as wanted. The results are shown in figure 3.4.

Divide Volume in Cubes

The division of the whole volume in cubes of the size ρ + δ is done by defining the edges of the cubes in 3D space. This step is necessary for the method of Marching Cubes.
Figure 3.4: Oriented Normals (red vectors) on the centroids (green points)

Signed Distance Vectors

For using Marching Cubes at the end, the distances between the vertices of the cubes and the planes must be computed. A short overview of this method:

1. compute the distances between vertices and centroids
2. take the minimum distance
3. compute the distance to the corresponding plane
4. if the projected point z lies in the neighborhood Nbhd(o_i), then this distance is accepted, otherwise not

The distance between a vertex of a cube and a centroid is simply computed using the formula d(a, b) = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + (a_3 - b_3)^2}. The minimum can be found directly in MATLAB using the function min(). 3D geometry can be used to compute the distance between the plane and a vertex p, in order to find the projection point z (Figure 3.5).

Figure 3.5: Visualization of the projection problem [1]

d(p, z) = (p - o_i) · n_i   (3.8)

z = p - d(p, z) \, n_i   (3.9)
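Equations 3.8 and 3.9, together with the acceptance test in step 4, can be sketched as follows. The distance-to-centroid check is an assumed stand-in for the exact Nbhd(o_i) membership test, and the function name is illustrative:

```python
import numpy as np

def signed_distance_and_projection(p, o_i, n_i, radius=None):
    """Signed distance from a cube vertex p to the tangent plane (o_i, n_i)
    and the projection point z, per equations 3.8 and 3.9. The projection
    is rejected when it falls farther than `radius` from the centroid
    (an approximation of the Nbhd(o_i) test)."""
    d = float(np.dot(p - o_i, n_i))   # eq. 3.8: signed distance
    z = p - d * n_i                   # eq. 3.9: projection onto the plane
    accepted = radius is None or np.linalg.norm(z - o_i) <= radius
    return d, z, accepted
```

The sign of d tells on which side of the locally linearized surface the cube vertex lies, which is exactly what Marching Cubes needs in the next step.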
The projected point z is used to find out if the point p was projected into the neighborhood Nbhd(o_i) or not. If not, the distance is not accepted and it is set equal to ∞.

Marching Cubes

Marching Cubes (MC) [21] is a method used for surface reconstruction. It is based on the signed distances computed in the last section. The main idea is to find the cubes which the object surface cuts into cold and hot parts, lying on the different sides of the surface. The weighted cold and hot³ vertices are then used to triangulate the planes in the right order. This method uses several big tables with listed vertices, triangulation rules and their connections to a mesh. A visualized table of triangulation inside a cube using hot vertices is illustrated in figure 3.6. Special about Marching Cubes is its sensitivity to the cube size for noisy data points. In general, Marching Cubes approximates a surface well for small cube sizes, but it needs a lot of space and much computing time in phase 2; for big cube sizes, the surface reconstruction deviates a lot from the initial surface, but it needs less space and computing time.

Figure 3.6: Table used for the Marching Cubes method to connect the surfaces to triangles. The green points are hot vertices [26]

The result after the implementation of Marching Cubes on the point cloud is shown in figure 3.7.

Results

In this section phase 1, initially implemented on an artificially generated noisy point cloud, is applied to real problems. The running time was evaluated on my notebook and varies with the power of the computer. These times should give a feeling for the order of magnitude of the computation time and show the differences in times for different parameters.

³ hot and cold are just names; the main idea is to divide these cubes into different parts, just as mentioned
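The hot/cold classification and the weighted vertices on the cube edges described above can be sketched as follows; the 256-entry triangulation table itself (the big lookup table mentioned in the text) is omitted:

```python
import numpy as np

def cube_case(signed_dists):
    """Case index into the 256-entry Marching Cubes table: one bit per
    cube vertex, set when that vertex is 'hot' (negative signed distance).
    signed_dists holds the 8 signed distances at the cube corners."""
    return sum(1 << i for i, d in enumerate(signed_dists) if d < 0)

def edge_crossing(p1, p2, d1, d2):
    """Weighted vertex on a cube edge: linear interpolation of the zero
    crossing of the signed distance between the edge endpoints (d1 and d2
    must have opposite signs)."""
    t = d1 / (d1 - d2)
    return np.asarray(p1, dtype=float) + t * (np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float))
```

A full implementation would look up `cube_case` in the triangulation table and emit triangles whose corners are `edge_crossing` points, exactly as illustrated in figure 3.6.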
Figure 3.7: The triangulated surface after the implementation of Marching Cubes. The blue points arise from the point cloud.

Figure 3.8: Phase 1: triangulated surface containing 1505 triangles reconstructed from 3289 data points in 178 s with the radius 0.11 m
Figure 3.9: Phase 1: triangulated surface containing 1741 triangles reconstructed from 3462 data points in 214 s with the radius 0.06 m

Figure 3.10: Phase 1: triangulated surface containing 805 triangles reconstructed from 2558 data points in 55 s with the radius 0.11 m
Discussion

Sampling Rate and Size of Neighborhood

Since point clouds provided by a sensor often contain a large number of points, they must be sampled. During the sampling process the points can be chosen randomly or in a specified way. A randomized choice of points can influence the neighborhood size: for example, if the sampling rate is 50%, then the radius of the neighborhood should be set correspondingly bigger than the radius for the initial point cloud (sampling noise should also be respected). For scans with different point densities the randomized choice does not change the density ratios in the point cloud. It is also possible to filter only points from a certain region (specific choice). These regions can be limited by coordinates or by the density of points.

Figure 3.11: Effect of the sampling rate on running time and results with a constant radius (ρ + δ): a) sampling rate with running time 9 s; b) sampling rate with running time 15 s; c) sampling rate with running time 156 s

In figure 3.11 the effect of the sampling rate on the results is shown. The first representation has holes and gaps, accordingly it is a bad representation. The pictures b) and c) look good, but the second representation still has a hole. The third representation has no holes and looks good. So the ideal sampling rate for this example with a certain radius lies between the second and the third value. Choosing a good value saves space and reduces running time.

Figure 3.13: A circle represented by lines of different length determining the smoothness of this circle

A good sampling rate represents the point cloud well enough to bring out the smooth details to a desirable degree. For example, for the cylinder in 2D this means the smoothness of the circle depends on the length of the line segments representing it. The
Figure 3.12: The triangulated surface after phase 1: a) data set of 2549 points; b) with the radius ρ + δ = 0.05 m after 51 s running time; c) with the radius ρ + δ = 0.1 m after 60 s running time; d) with the radius ρ + δ = 0.2 m after 271 s running time

smaller the size, the bigger the number of lines and the smoother the surface (Figure 3.13). An ideal circle is represented by an infinite number of lines.

Figure 3.14: The relation between the time in s (y axis) and the ratio ρ / (distance between points) (x axis)

The radius of the neighborhood is also very important. The relation between radius and results is shown in figure 3.12. To analyze the relation between the running time and the radius, I took a noise-free point cloud of a cylinder with uniform distribution and a known distance between the points, and ran phase 1 for different neighborhood sizes. The results are represented in figure 3.14. It shows that there exists an optimum for the running time which is not the distance between the points, but 1.6 times the distance. For a small radius the running time is dominated by the computation of a high number of cubes and the distances and triangles associated with them. For big radii the time for the computation of average points and normals in a neighborhood dominates.
Using the properties of the Magnebike robot sensor and the average radius of a pipe of 0.25 m, the radius for a neighborhood without sampling can be computed. The average distance between the points is then ca. 0.25 · sin(0.36 · π/180) = 0.00157 m. It means that for a sampling rate of 1% the average radius is equal to 0.157 m. The noise in this distance is ca. ±10⁻³ m, which means for the radius, with respect to sampling noise, ca. 0.16 m. Since the laser sensor used for Magnebike is rotated around an axis with a certain velocity, the resolution of 0.36° cannot be accepted as the resolution of the scan in 3D. But it is possible to compute the resolution coming from this angular velocity. If it is very slow, the resolution is determined by this rotation around the axis. In an optimal case for surface reconstruction the sensor is rotated stepwise with the same angular distance as the resolution of the sensor of 0.36°. The resolution of 1.57 mm for a noise of ±10 mm is good enough. Since the neighborhood radius of 0.16 m for a sampling rate of 1% was too big, and the distances even after sampling were around 0.05 m, the resolution around the axis was bigger than the sensor resolution, or there was more than one scan in the data set. Without knowing this information it is not possible to compute the neighborhood radius exactly.

Figure 3.15: Cumulative distribution of sparse data with big differences in radius (small deviation)

For this case, or if the pipe size is not available, I implemented a function using the cumulative distribution of the distances between some randomized neighbor data points to set the radius ρ + δ. If it is known that the distribution is uniform, the radius can be set equal to the maximal distance. If it is not known and the density of points is not constant, the radius can be varied. The values of 0.8-0.9, meaning that 80% respectively 90% of the computed distances lie under the chosen ρ + δ, provided good results.
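The quantile-based choice of ρ + δ described above can be sketched as follows; the 0.85 quantile is an assumed midpoint of the reported 0.8-0.9 range, and the sample size is illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_from_distances(points, quantile=0.85, sample=500, seed=0):
    """Set rho + delta from the cumulative distribution of nearest-neighbor
    distances of randomly chosen points. Sketch code; the thesis's MATLAB
    function is not reproduced here."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=min(sample, len(points)), replace=False)
    # k=2: column 0 is the point itself (distance 0), column 1 its nearest neighbor
    d, _ = cKDTree(points).query(points[idx], k=2)
    return float(np.quantile(d[:, 1], quantile))
```

For a uniform distribution the quantile can simply be pushed toward 1.0 (the maximal distance), matching the rule stated above.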
Summarizing this discussion, the following observations can be made:

- the sampling rate determines the possible smoothness of the represented surface
- the smaller the sampling rate, the faster the running time
- the running time is not linearly proportional to the number of points in the data set
- there exists an optimal radius near 1.6 times the distance between points in the data set, for which the running time is minimal
- the visualization is best for neighborhood sizes near the distance between the points in the data set
- the bigger the differences in distances between points in the data set, the more difficult it is to find a good sampling rate, sampling region and neighborhood size

Neighborhood of centroids

In section 3.2.3 the neighborhood for each centroid was computed. To save computation time, it is also possible to assume that, for a small neighborhood radius, the neighborhood of the set X is the same as the neighborhood of the corresponding centroids of the points x_i.

Marching Cubes

The surface representations in section 3.2.7 show the sensitivity of phase 1 to the neighborhood size. In the regions with a density near the set radius ρ + δ the surface representation is good, in other regions rather not. Since there is a second phase in which the triangulation mesh is treated, it would be best to get a mesh with a small number of triangles and a good approximation of the surface. If the power of the computer were infinitely high, a smooth triangulation mesh with a big number of triangles would be no problem. But since power and space are limited, a compromise must be found. A good radius is then defined as the radius of compromise between a large number of triangles and a smooth surface on the one side, and a small number of triangles and a rough approximation of the surface on the other side. Analyzing the reconstructed surfaces, differences in the size of the triangles can be observed. There are many small triangles using space and increasing the computational time. A possibility to improve this phase would be to eliminate these small triangles, or to get a mesh with a smaller number of triangles of the same size in another way.
The effect of the neighborhood radius (and respectively the cube size) on the surface reconstruction and the distance between the surface and the data set is shown in figure 3.16. Setting the radius smaller than ρ + δ led to error messages (neighborhoods could not be built), and setting it bigger than 6(ρ + δ) led to triangulation meshes looking like a small sphere inside the point cloud. Changing the cube size in Marching Cubes led to interesting results (Figure 3.17). Setting the cube size bigger than ρ + δ reduces the running time, but the results are worse. Conversely, setting the size smaller than the neighborhood radius leads to a huge number of triangles in the mesh and expensive costs. Interesting about this behaviour is its nonlinearity in running time (the running time increases explosively with decreasing cube size) and the quadratic scaling in the number of triangles (halving the cube size causes about 4 times more triangles). Setting the cube size smaller than ρ + δ appears to provide smoother results, but in reality they are not and cannot be smoother than the resolution (ρ + δ). So a good choice of the cube size is about 1.5(ρ + δ): a compromise between short computing time and a good result. In general, the running time is very sensitive to the set parameters (ρ + δ), cube size and sampling rate. The results might look similar for several combinations of parameters, but the running time can be different.
Figure 3.16: Changing the radius of the neighborhood in phase 1: a) ρ + δ; b) 2(ρ + δ); c) 3(ρ + δ)

Figure 3.17: Changing the size of cubes in Marching Cubes: a) 2(ρ + δ), running time 1 s, 76 triangles in the mesh; b) ρ + δ, 3 s, 400 triangles; c) 0.5(ρ + δ), 5 s, 1815 triangles; d) 0.25(ρ + δ), 47 s, 7467 triangles
3.3 Phase 2: Mesh Optimization

Phase 2 improves the results of phase 1 and reduces the data volume. The Energy Function defined by H. Hoppe leads to a smooth surface in the areas with high curvature and to a reduction of the number of triangles in flat regions. This phase consists of the following steps:

1. define the Energy Function
2. solve the inner minimization problem
3. solve the outer minimization problem
4. build a loop of inner and outer minimization problems

Energy Function

Let V be the set of vertex positions V = {v_1,..., v_m}, with m the number of vertices. With K defining the simplicial complex, i.e. the connections between the vertices, M = (K, V) is assumed to represent the triangulation mesh. With these definitions the Energy Function is described as follows:

E(K, V) = E_{dist}(K, V) + E_{rep}(K) + E_{spring}(K, V)   (3.10)

E_{dist}(K, V) = \sum_{i=1}^{n} d^2(x_i, \pi_V(|K|))   (3.11)

E_{rep}(K) = c_{rep} \, m   (3.12)

E_{spring}(K, V) = \kappa \sum_{\{j,k\} \in K} \| v_j - v_k \|^2   (3.13)

with c_{rep} and κ user-defined parameters. π_V(|K|) is the projection of the point x_i on the triangulation mesh M. The qualitative meanings of the terms are:

E_dist(K, V): is big if the distances between the point cloud and the triangulated mesh are big, i.e. if the mesh approximates the point cloud badly.

E_rep(K): penalizes meshes with a lot of vertices; it is big if there are a lot of vertices in the mesh.

E_spring(K, V): penalizes meshes with huge distances between the vertices. It is a kind of spring energy holding the triangulation mesh together.

To find a good triangulation mesh, the Energy Function E must be minimized. Since the function depends on the two variables V and K, the problem can be divided into two new problems: an inner minimization over V for fixed K and an outer minimization over K. The first one only changes the positions of the vertices, and the second one changes the vertices' positions and their connections.
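For a fixed mesh, evaluating equations 3.10-3.13 is straightforward. A sketch, assuming the squared point-to-mesh distances d²(x_i, π_V(|K|)) are already precomputed; the parameter defaults are illustrative, not Hoppe's values:

```python
import numpy as np

def energy(sq_dists, vertices, edges, c_rep=1e-4, kappa=1e-2):
    """Hoppe's energy E = E_dist + E_rep + E_spring for a fixed mesh.

    sq_dists : squared distances of the data points to the mesh (eq. 3.11)
    vertices : (m, 3) array of vertex positions V
    edges    : list of index pairs {j, k} in the simplicial complex K
    """
    e_dist = float(np.sum(sq_dists))                 # eq. 3.11
    e_rep = c_rep * len(vertices)                    # eq. 3.12: c_rep * m
    e_spring = kappa * sum(                          # eq. 3.13
        float(np.sum((vertices[j] - vertices[k]) ** 2)) for j, k in edges)
    return e_dist + e_rep + e_spring
```

Lowering E_dist pulls the mesh toward the data, while E_rep and E_spring push toward fewer, better-spaced vertices, which is exactly the compromise the minimization has to resolve.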
The outer and inner problems are repeated until the mesh is optimized:

while (number of changes in outer minimization problem) > 0
    solve outer minimization problem including inner minimization problem
end
Inner Minimization Problem

To solve the inner minimization problem, 4 steps are necessary:

1. find the normals and middle points of each triangle in the mesh
2. project the points of the point cloud on the triangulated mesh
3. compute the barycentric coordinates of the projected points
4. solve the inner linear least squares problem

Normals and middle points

The middle point of a triangle can be found by computing the average point of its three vertices. The normal of the triangle, t̂, is found by solving the system of equations:

\begin{pmatrix} v_{1,1} & v_{1,2} & v_{1,3} \\ v_{2,1} & v_{2,2} & v_{2,3} \\ v_{3,1} & v_{3,2} & v_{3,3} \end{pmatrix} \hat{t} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}   (3.14)

with v_1 = (v_{1,1}, v_{1,2}, v_{1,3}) and v_1, v_2 and v_3 the vertices of the triangle. Since unit vectors are more practical, t = t̂ / ‖t̂‖.

Figure 3.18: Middle points of triangles with their normals

Point projection

To project the data points on the planes of the triangulation mesh, it would be possible to project all points on all planes and take the minimal distance. Since this costs much, leads to defects by projecting through holes, and needs space, an assumption is made: the point x_i with the minimal distance to a triangle with the middle point s_i lies near this middle point s_i. So the first step is to find Nbhd(s_i) within the data set X. The number of neighbors is chosen so that the middle point s_i builds with its neighbors all triangles including s_i.
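The middle point and normal computation above can be sketched as follows; the cross product of two edge vectors is used in place of solving the linear system 3.14, which gives the same direction up to sign and scale:

```python
import numpy as np

def triangle_frame(v1, v2, v3):
    """Middle point (average of the three vertices) and unit normal of a
    mesh triangle. The normalized cross product of two edges satisfies the
    same plane condition as equation 3.14, up to sign and scale."""
    s = (v1 + v2 + v3) / 3.0
    t_hat = np.cross(v2 - v1, v3 - v1)
    return s, t_hat / np.linalg.norm(t_hat)
```

The sign ambiguity does not matter for the distance computation in the projection step, since only the magnitude of the point-to-plane distance is compared.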
A point x_i is projected on all neighbor planes using the equations 3.8 and 3.9. To check whether a point is really projected onto the surface of the mesh, it is necessary to test whether the projection point lies inside a triangle. In MATLAB there exists a function testing whether a point lies inside a 2D triangle. Since in this case the triangles lie in 3D space, each of them must first be transformed into a 2D triangle with one vertex in the origin (0, 0). The main idea is shown in figure 3.19.

Figure 3.19: Transformation of a triangle and projected points in 3D into 2D

The new axis x is defined by the vertices 3 and 1: e_x = (v_3 - v_1) / ‖v_3 - v_1‖. The new axis y is found as the unit cross product of e_x with the normal of the triangle t: e_y = (e_x × t) / ‖e_x × t‖. To compute the transformed vertices v̂_2 and v̂_3 in 2D, the property of the origin is used: v̂_2 = (⟨v_2 - v_1, e_x⟩, ⟨v_2 - v_1, e_y⟩) and v̂_3 = (‖v_3 - v_1‖, 0). The transformed projected point is then equal to p̂ = (⟨p - v_1, e_x⟩, ⟨p - v_1, e_y⟩). The new vertices and projected points in 2D can now be used to find out if a projected point lies inside a triangle or not. If a point lies inside a triangle, it is accepted, otherwise not.

Figure 3.20: Projected points on the triangulation mesh

Barycentric coordinates

Barycentric coordinates are based on the vertices of a triangle. A point within a triangle can be represented as p = v_1 α + v_2 β + v_3 γ with the weights α, β and γ.