**Surface Reconstruction from Unorganized 3D Point Clouds**

Patric Keller<sup>1</sup>, Martin Hering-Bertram<sup>2</sup> and Hans Hagen<sup>1</sup>
<sup>1</sup>*University of Kaiserslautern, Germany*
<sup>2</sup>*University of Applied Sciences Bremen, Germany*

#### **1. Introduction**

Computer-based surface models are indispensable in several fields of science and engineering. For example, the design and manufacturing of vehicles, such as cars and aircraft, would not be possible without sophisticated CAD and simulation tools predicting the behavior of the product. On the other hand, designers often do not like working on virtual models, even though sophisticated tools like immersive VR environments are available. Hence, a designer may produce a physical prototype made from materials of his choice that can be easily assembled and shaped, like a clay model. Reverse engineering is the process of reconstructing digital representations from such physical models. The overall reverse-engineering framework is mainly composed of four steps (see Figure 1): data acquisition, pre-processing, surface reconstruction, and post-processing.

Point-cloud acquisition is generally performed by stationary scanning devices, such as laser-range or computed-tomography scanners. In the case of a 3D laser scanner, the surface is sampled by one or more laser beams; the distance to the surface is typically measured via the time delay or the reflection angle of the beam. After taking multiple scans from various sides, or by rotating the object, the sampled points are combined into a single point cloud from which the surface has to be reconstructed.

Pre-processing of the data may be necessary, due to sampling errors, varying sampling density, and registration errors. Regions covered by multiple scans, for example, may result in noisy surfaces since tangential distances between nearest samples may be much smaller than the sampling error orthogonal to the surface. In this case, it is necessary to remove redundant points introduced by combining different points from multiple scans. In other regions, the density may be lower due to cavities and highly non-orthogonal scanning. If additional information, like a parametrization originating from each scan is available, interpolation can be used to fill these gaps.

Fig. 1. Principal steps of reverse engineering: (a) point-cloud acquisition by a 3D object scanner, mostly laser-range scanning devices; (b) slices scanned from different views; (c) combined point cloud; (d) reconstructed mesh; (e) result after post-processing.

In the present chapter, a powerful algorithm for multi-resolution surface extraction and fairing from unorganized 3D point clouds, based on hybrid meshes Guskov et al. (2002), is proposed (cf. Keller et al. (2005) and Keller et al. (2007)). The method uses an octree-based voxel hierarchy computed from the original points in an initial *hierarchical space partitioning* (HSP) process. At each octree level, the *hybrid mesh wrapping* (HMW) extracts the outer boundary of the voxel complex, taking into account the shape at the next coarser level. The resulting meshes are linked into a structure with subdivision connectivity, where local topological modifications guarantee that the resulting meshes are two-manifold. Subsequently, a *vertex mapping* (VM) procedure projects the mesh vertices onto locally fitted tangent planes. A final post-processing step aims at improving the quality of the generated mesh; this is achieved by applying a mesh-relaxation step based on the constrained repositioning of mesh vertices tangentially to the approximated surface.

The remainder of the chapter is structured as follows. Section 2 gives a short overview of related reconstruction techniques. Section 3 discusses the individual steps of our approach in detail. Section 4 presents results and discusses the advantages and disadvantages of the proposed method in terms of performance, quality and robustness. Section 5 presents experimental results in the context of surface reconstruction from environmental point clouds, and Section 6 concludes the chapter.

#### **2. Related work**

A possible approach to obtaining surfaces from unorganized point clouds is to fit surfaces to the input points Goshtasby & O'Neill (1993), for example polynomial Lei et al. (1996) or algebraic surfaces Pratt (1987). To be able to fit surfaces to a set of unorganized points it is necessary to have information about the topology of the surfaces inherent in the point cloud, or to have some form of parametrization in advance. For example, Eck and Hoppe Eck & Hoppe (1996) generate a first parametrization using the approach presented in Hoppe et al. (1992) and fit a network of B-spline patches to the initial surface; this allows reconstructing surfaces of arbitrary topology. A competing spline-based method is provided by Guo Guo (1997). Another surface reconstruction algorithm, applying high-level model recognition, is presented in Ramamoorthi & Arvo (1999).

Alexa et al. Alexa et al. (2001) introduced an approach for reconstructing point-set surfaces from point clouds based on Levin's MLS projection operator. Further approaches following the idea of locally fitting polynomial surface patches to confined point neighborhoods are proposed in Alexa et al. (2003), Nealen (2004), Fleishman et al. (2005) and Dey & Sun (2005). In Mederos et al. (2003) the authors introduce an MLS reconstruction scheme that takes local curvature approximations into account to enhance the quality of the generated surfaces. One of the main problems associated with MLS-based techniques is that, in general, they have to be adapted to the underlying topological conditions; depending on the type of input data this can be rather challenging. Another drawback is that the MLS technique in general is not capable of reconstructing surfaces with sharp features. One attempt at solving this problem was proposed by Fleishman et al. Fleishman et al. (2005).

Whenever accuracy matters, adaptive methods capable of providing multiple levels of resolution are sought. Subdivision surfaces, for example, can be used together with wavelets Stollnitz et al. (1996) to represent highly detailed objects of arbitrary topology. Such level-of-detail representations are also well suited for further applications, like view-dependent rendering, multi-resolution editing, compression, and progressive transmission. In addition to adaptive polyhedral representations, subdivision surfaces provide smooth or piecewise smooth limit surfaces similar to spline surfaces. In Hoppe et al. (1994), Hoppe et al. introduce a method for fitting subdivision surfaces to a set of unorganized 3D points. Another class of reconstruction algorithms are computational-geometry approaches. These algorithms usually extract the surface from previously computed Delaunay or dual complexes. The reconstruction comes with mathematical guarantees but relies on clean data; noisy and non-uniformly sampled points, for example, perturb the reconstruction process and may cause the algorithms to fail.

An early work concerning a Delaunay-based surface reconstruction scheme was provided by Boissonnat (1984). Following this idea, methods like the crust algorithm introduced by Amenta, Bern & Kamvysselis (1998) have been developed, exploiting the structure of the Voronoi diagram of the input data. Other works Funke & Ramos (2002), Amenta et al. (2001), Mederos et al. (2005) aimed at improving the original crust algorithm regarding efficiency and accuracy. The cocone algorithm Amenta et al. (2000), which evolved from the crust algorithm, provides further enhancements; based on this work, Dey et al. Dey & Goswami (2003) introduced the tight cocone. Other Delaunay/Voronoi-based reconstruction algorithms are presented by Kolluri et al. (2004) and Dey & Goswami (2004).

One challenge concerns the separation of proximal sheets of a surface Amenta, Bernd & Kolluri (1998). When considering local surface components, it may be helpful to construct a surface parametrization, i.e. a one-to-one mapping from a proper domain onto the surface. Having a surface of arbitrary topology split into a set of graph surfaces, for example by recursive clustering Heckel et al. (1997), one can reduce the reconstruction problem to scattered-data approximation in the plane Bertram et al. (2003). A very powerful meshless parametrization method for reverse engineering is described by Floater and Reimers Floater & Reimers (2001).

A completely different approach is the construction of *α*-shapes described by Edelsbrunner and Mücke Edelsbrunner & Mücke (1994). Depending on a single radius *α*, their method collects all simplices (points, lines, triangles, tetrahedra) fitting into an *α*-sphere. The method efficiently provides a data structure valid for all choices of *α*, such that a user may interactively adapt *α* to obtain a proper outer boundary of a point cloud. Out-of-core methods like ball-pivoting Bernardini et al. (1999) employ a similar principle, rolling a ball of sufficiently large radius around the point cloud and filling in all visited triangles. Other methods, e.g., Azernikov et al. (2003) and Wang et al. (2005), exploit octree-based grid structures to guide surface reconstruction.


## **3. Reconstruction approach**

#### **3.1 Overview**

Fig. 2. (a)-(f) Principal steps of the proposed multi-resolution surface reconstruction approach: (a) input point cloud of the Stanford bunny data set, consisting of 35,947 points; (b) voxel grid after 4 subdivision steps; (c) and (d) extracted voxel hull before and after application of the HMW operation; (e) surface mesh after vertex projection; (f) final surface after mesh relaxation.

This work aims at finding an efficient way to extract a connected quadrilateral two-manifold mesh out of a given 3D point cloud in a way that the underlying "unknown" surface is approximated as accurately as possible. The resulting adaptive reconstruction method is based upon the repetitive application of the following steps:


• Starting from an initial bounding voxel enclosing the original point cloud (see Figure 2(a)), the hierarchical space partitioning creates a voxel set by recursively subdividing each individual voxel into eight subvoxels. Empty subvoxels are not subject to subdivision and are deleted. Figure 2(b) presents an example of a generated voxel grid.
• The outer boundary of the generated voxel complex is extracted by the HMW operation. This exploits the voxel-subvoxel connectivity between the current and the next coarser voxel grid. The resulting mesh is obtained by subdividing the coarser mesh (cf. Figure 2(c)) and adapting its topology at locations where voxels have been removed (see Figure 2(d)).

• The final vertex mapping locally constrains the mesh toward the point cloud (cf. Figure 2(e)). All vertices are projected onto local tangent planes defined by the points of the individual voxels. The resulting mesh is relaxed toward its final position by applying additional post-processing (see Figure 2(f)).

#### **3.2 Hierarchical spatial partitioning**


The HSP presumes an already existing voxel set *V<sup>j</sup>* defining the voxel complex at level *j*. At the coarsest level this set *V<sup>0</sup>* consists of the bounding voxel enclosing the entire point cloud. To obtain *V<sup>j+1</sup>*, the following steps are performed:

1. Subdivide every voxel *v* ∈ *V<sup>j</sup>* into eight subvoxels.
2. Assign the points of *v* to the corresponding subvoxels and delete empty subvoxels.


The resulting set *V<sup>j+1</sup>* of non-empty voxels is managed and maintained using an octree data structure. As required by the succeeding HM wrapping operation, we need connectivity information that facilitates the localization of proximate voxel neighborhoods; the algorithm uses an efficient octree-based navigation scheme related to the approach of Bhattacharya Bhattacharya (2001).
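For illustration, a minimal sketch of one HSP step is given below. It is not the original implementation: points are assumed to be scaled to the unit cube, voxels are addressed by integer grid indices, and the helper name `hsp_level` is hypothetical. Binning all points into the level-*j* grid and keeping only occupied cells is equivalent to splitting every non-empty voxel of the previous level into eight children and deleting the empty ones.

```python
import numpy as np
from collections import defaultdict

def hsp_level(points, level):
    """One HSP step (sketch): bin points of the unit cube [0, 1)^3 into the
    2^level x 2^level x 2^level grid and keep only non-empty voxels.
    Returns a dict mapping voxel index -> list of point ids."""
    res = 2 ** level
    cells = defaultdict(list)
    ijk = np.minimum((np.asarray(points) * res).astype(int), res - 1)
    for pid, key in enumerate(map(tuple, ijk)):
        cells[key].append(pid)          # empty voxels are simply never created
    return cells

# usage on a synthetic cloud: the number of non-empty voxels grows with the level
pts = np.random.default_rng(0).random((2000, 3))
print(len(hsp_level(pts, 1)), len(hsp_level(pts, 4)))
```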

#### **3.3 Hybrid mesh wrapping**

The most difficult part of this work concerns the extraction of a two-manifold mesh from the generated voxel complex *V<sup>j+1</sup>*. In the following, let *M<sup>j</sup>* denote the set of faces representing the existing mesh corresponding to the voxel complex *V<sup>j</sup>*, where the term face refers to a quadrilateral side of a voxel. *M<sup>0</sup>* consists of the face patches of *V<sup>0</sup>* forming the hull of the bounding voxel. Starting from the existing mesh *M<sup>j</sup>*, we obtain the next finer mesh representation *M<sup>j+1</sup>* by performing regular and irregular refinement operations. This includes the subdivision of *M<sup>j</sup>* and the introduction of new faces in *M<sup>j+1</sup>*, inducing local changes in the mesh topology. These operations require *M<sup>j</sup>* to meet the following conditions:

• Each face *f* ∈ *M<sup>j</sup>* is associated with exactly one voxel *v* ∈ *V<sup>j</sup>* (no two voxels can be associated with the same face). Thus the maximum number of faces associated with a voxel is restricted to six.
• A face is linked to each of its proximate neighbor faces. A face *n* is called a neighbor of (or adjacent to) the face *f* if both share a common voxel edge. With the limitation of one neighbor per edge, the number of possible neighbor faces of *f* is four.
• The mesh represented by *M<sup>j</sup>* is a two-manifold mesh.
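To make these conditions concrete, the following data-structure sketch shows one plausible encoding; the field and function names are illustrative and not taken from the original implementation. Each face records the single voxel that owns it and at most one neighbor per edge, and a simple check enforces that no voxel owns more than its six sides.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

VoxelKey = Tuple[int, int, int]
VertexKey = Tuple[int, int, int]

@dataclass(eq=False)   # identity-based hashing, so faces can be kept in sets
class Face:
    """Quadrilateral side of a voxel; an illustrative encoding of a face of M^j."""
    voxel: VoxelKey                                    # the unique owning voxel
    corners: List[VertexKey]                           # four vertex keys, in cyclic order
    neighbors: List[Optional["Face"]] = field(default_factory=lambda: [None] * 4)
    # at most one neighbor per edge, hence at most four adjacent faces

def check_voxel_association(faces):
    """Sketch of the first condition: no voxel owns more than its six sides."""
    owned = {}
    for f in faces:
        owned[f.voxel] = owned.get(f.voxel, 0) + 1
        assert owned[f.voxel] <= 6, "a voxel can be associated with at most six faces"
```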


#### **3.4 Regular mesh refinement**

To acquire the boundary hull associated with *V<sup>j+1</sup>*, the first step is the regular refinement of *M<sup>j</sup>*. The refinement is achieved by subdividing each face *f* ∈ *M<sup>j</sup>* into four subfaces. To guarantee that the resulting mesh fulfills the conditions outlined above, we assign the subfaces of *f* to voxels of *V<sup>j+1</sup>*. In the following, let *v* ∈ *V<sup>j</sup>* be the voxel assigned to *f* ∈ *M<sup>j</sup>* and *s* be a subface of *f*. The assignment procedure projects *s* onto the corresponding subvoxel of *v*, as illustrated in Figure 3(a); *f* may only be projected onto the immediate subvoxels of *v*. Since we only address the extraction of the outer hull of *V<sup>j</sup>*, additional rules have to be defined preventing faces of *M<sup>j</sup>* from capturing parts of the interior hull of the voxel complex, e.g., in cases where the surface is not closed. Thus, the subface *s* cannot be assigned to a subvoxel of *v* if this subvoxel is already associated with a face on the opposite side. Subfaces that cannot be assigned, because no corresponding subvoxel exists onto which they could be projected, are removed. The resulting refinement operator R defines the set *N<sup>j+1</sup>* := R(*M<sup>j</sup>*), with *N<sup>0</sup>* = *M<sup>0</sup>*, representing the collection of the created and projected subfaces (cf. Figure 2(c)). So far *N<sup>j+1</sup>*, consisting of unconnected faces, represents the basis for the subsequent *irregular mesh refinement*, in which the final mesh connectivity is recovered.

Fig. 3. (a) Regular refinement: principle of the subface projection operation; (b) irregular mesh refinement: principle of face creation and adoption.
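To illustrate the regular refinement operator R, the sketch below subdivides a single quad face and keeps only subfaces whose target subvoxel is non-empty. All names are hypothetical; the additional opposite-face rule of the text is only mentioned in a comment, not implemented.

```python
def refine_face(face, nonempty_children):
    """Sketch of the operator R for one face.

    face = ((i, j, k), axis, side): the side of voxel (i, j, k) facing
    direction side (+1 or -1) along the given axis.  The four candidate
    subfaces are the matching sides of the child voxels lying on that side;
    a subface is dropped if its child voxel does not exist (is empty).  The
    rule forbidding projection onto a subvoxel that already carries a face on
    its opposite side is omitted here for brevity.
    """
    (i, j, k), axis, side = face
    subfaces = []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                d = (dx, dy, dz)
                if (d[axis] == 1) == (side > 0):        # child touches the face's side
                    child = (2 * i + dx, 2 * j + dy, 2 * k + dz)
                    if child in nonempty_children:      # otherwise the subface is removed
                        subfaces.append((child, axis, side))
    return subfaces

# usage: refine the -z side of voxel (0, 0, 0) against a hand-made child set
print(refine_face(((0, 0, 0), 2, -1), {(0, 0, 0), (1, 0, 0), (1, 1, 0)}))
```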

#### **3.5 Irregular mesh refinement**

The *irregular mesh refinement* recovers the mesh connectivity, i.e., it reconnects the faces of *N<sup>j+1</sup>* and closes the breaks in the mesh structure induced by the regular mesh refinement. This procedure is based on the propagation of faces or the adoption of existing proximate neighbor faces (see Figure 3(b)). More precisely: assume *f* to be a face of *N<sup>j+1</sup>*; the irregular refinement detects existing neighbor faces in *N<sup>j+1</sup>* sharing a common voxel edge, or creates new faces in case no neighbor is found. Considering the configuration of the principal voxel neighborhoods, there are three possible cases in which a neighbor face of *f* can be adopted/created; Figure 4 depicts these cases. For the following considerations let *n* ∈ *N<sup>j+1</sup>* denote the "missing" neighbor face of *f*, *v* ∈ *V<sup>j+1</sup>* the voxel associated with *f*, and *w* ∈ *V<sup>j+1</sup>* the voxel associated with *n*. Exploiting the voxel neighborhood relations, the propagation/adoption of *n* is performed according to the rules summarized below:


1. The voxels *v* and *w* are identical. In this case the creation/adoption of *n* is admitted if *f* does not already share an edge with a face on the opposite side of *v* (see Figure 4(a)).
2. The faces *n* and *f* share a common edge, and the corresponding voxels *w* and *v* adjoin a common face (see Figure 4(b)). The creation/adoption of *n* is allowed if no other face is associated with *w*, vis-a-vis from *n* (additional *cavity rule*).
3. The voxels *v* and *w* adjoin a common edge. In this case we have to differentiate between two sub-cases: a) an additional voxel adjoins *v* (see Figure 4(c)); here the creation/adoption of *n* is permitted. b) Only *v* and *w* adjoin a common edge (see Figure 4(d)). This case requires detecting the underlying topological conditions: if the underlying surface passes through *v* and *w*, we adopt/create *n*<sub>0</sub>; otherwise, if the points within *v* and *w* represent a break of the underlying surface, we adopt/create *n*<sub>1</sub>. The right case is filtered out by comparing the principal orientations of the points within *v* and *w*.

Fig. 4. Graphical interpretation of the face-propagation rules concerning the possible cases.

The irregular refinement procedure is performed as follows: given the two initial sets *A*<sup>0</sup> = *N<sup>j+1</sup>* and *B*<sup>0</sup> = ∅, once a new neighbor face *n* is created/adopted, it is added to *A*<sup>i+1</sup> = *A*<sup>i</sup> ∪ {*n*}. Simultaneously, every *n* ∈ *A*<sup>i</sup> which is fully connected to all of its existing neighbor faces is removed, *A*<sup>i+1</sup> = *A*<sup>i</sup> \ {*n*}, and attached to *B*<sup>i+1</sup> = *B*<sup>i</sup> ∪ {*n*}. This procedure is repeated until *A*<sup>i</sup> = ∅ or *A*<sup>i+1</sup> = *A*<sup>i</sup>.

Applying the irregular mesh refinement operator I to *N<sup>j+1</sup>* results in *M<sup>j+1</sup>* = I(*N<sup>j+1</sup>*), where *M<sup>j+1</sup>* = *A*<sup>i</sup> ∪ *B*<sup>i</sup> represents the final mesh at level *j* + 1. Figure 3(b) illustrates the propagation procedure, where one neighbor is created and another adopted.
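The worklist behind the operator I can be sketched as follows, reusing the illustrative `Face` records from Section 3.3. The rule set of Figure 4 is abstracted into a caller-supplied `find_or_create_neighbor` callback, so this is a control-flow sketch under that assumption rather than a complete implementation.

```python
def irregular_refinement(faces, find_or_create_neighbor):
    """Sketch of the A/B worklist of Section 3.5.

    A holds faces that still miss neighbors, B holds fully connected faces.
    find_or_create_neighbor(face, edge) stands in for the propagation rules of
    Figure 4 and returns an existing or newly created neighbor face, or None
    if the rules forbid one.
    """
    A, B = set(faces), set()
    changed = True
    while A and changed:                        # stop when A is empty or stable
        changed = False
        for f in list(A):
            for edge in range(4):               # four edges per quad face
                if f.neighbors[edge] is None:
                    n = find_or_create_neighbor(f, edge)
                    if n is not None:
                        f.neighbors[edge] = n
                        if n not in B:
                            A.add(n)            # adopted/created faces join the worklist
                        changed = True
            if all(f.neighbors):                # fully connected: move the face to B
                A.discard(f)
                B.add(f)
                changed = True
    return A | B                                # the face set of the next level
```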

To force *M<sup>j+1</sup>* to maintain the two-manifold condition, each case in which the propagation of a face leads to a non-manifold mesh structure, e.g., more than two faces sharing an edge, is identified, and the mesh connectivity is resolved by applying vertex- and edge-splits. Figure 5(a) and Figure 5(b) illustrate the principle according to which the non-manifold mesh structures are resolved. In the depicted cases the connectivity at the vertices *v* and *v*<sub>1</sub>, *v*<sub>2</sub> causes the mesh to be non-manifold; we avoid this by splitting the corresponding vertices and edges (see the right parts of Figure 5(a) and Figure 5(b)).
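Detecting the configurations that require such splits is simple; a minimal sketch (again using the illustrative `Face` records) flags every edge shared by more than two faces:

```python
from collections import Counter

def nonmanifold_edges(faces):
    """Sketch: list all edges shared by more than two faces, i.e. exactly the
    configurations that the vertex- and edge-splits above are meant to resolve."""
    count = Counter()
    for f in faces:
        for a, b in zip(f.corners, f.corners[1:] + f.corners[:1]):
            count[tuple(sorted((a, b)))] += 1
    return [edge for edge, c in count.items() if c > 2]
```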


Fig. 5. (a) example in which a non-manifold mesh structure is resolved by a vertex-split, (b) resolving non-manifold mesh structure by first applying an edge-split followed by two vertex-splits.

#### **3.6 Vertex-mapping**

The next step concerns moving the mesh toward the "unknown" surface by projecting the mesh vertices onto local tangent planes defined by the proximate sample points. This is accomplished for each vertex $v$ of the mesh $M$ by first identifying the voxel set $W$ directly adjoining $v$ and collecting the enclosed points $P_v$. Next, we fit a plane to the points $P_v$ by computing the centroid $\vec{c}$ and the plane normal $\vec{n}$ obtained from the covariance matrix $C$ of $P_v$. In order to improve the accuracy of the fit, the points of $P_v$ can be filtered according to their distance to $v$, yielding $P'_v = \{p \in P_v \mid 2\|p - \vec{v}\| < l\}$, with $l$ denoting the edge length of the voxels in $W$. The normal $\vec{n}$ is defined by the eigenvector associated with the smallest eigenvalue of $C$. Together with the former position $\vec{v}$ of the vertex we are able to compute its new coordinates $\vec{v}_n$ by

$$
\vec{v}_n = \vec{v} - \left((\vec{v} - \vec{c}) \cdot \vec{n}\right)\vec{n} \,. \tag{1}
$$

To be able to perform this projection, the number of points in $P_v$ has to satisfy $|P_v| \ge 3$. Otherwise, additional points need to be gathered from surrounding voxels: by extending $W$ to $W'$, adding all voxels of $V \setminus W$ that are directly adjacent to at least one $w \in W$ (two distinct voxels are directly adjacent if they share a common vertex, edge, or face), the number of points in $P_v$ can be increased.

Fig. 6. Vertex projected onto the tangent plane defined by the points *Pv* of the adjacent voxel set *W*.
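Equation (1) translates almost directly into a few lines of NumPy. The sketch below is illustrative only: it fits the tangent plane by taking the eigenvector of the covariance matrix associated with the smallest eigenvalue and projects the vertex onto that plane; the optional distance filter $P'_v$ is included, and a vertex with fewer than three usable points is simply left unchanged instead of enlarging $W$.

```python
import numpy as np

def map_vertex(v, P_v, voxel_edge_length=None):
    """Project vertex v onto the tangent plane fitted to the points P_v, cf. Eq. (1).

    P_v is an (m, 3) array gathered from the voxels adjacent to v.  If the
    optional filter leaves fewer than three points, v is returned unchanged
    (the chapter instead enlarges the voxel neighborhood W to W')."""
    v = np.asarray(v, dtype=float)
    P = np.asarray(P_v, dtype=float)
    if voxel_edge_length is not None:                     # the filtered set P'_v
        P = P[2.0 * np.linalg.norm(P - v, axis=1) < voxel_edge_length]
    if len(P) < 3:
        return v
    c = P.mean(axis=0)                                    # centroid of the local points
    C = np.cov((P - c).T)                                 # 3x3 covariance matrix
    n = np.linalg.eigh(C)[1][:, 0]                        # eigenvector of smallest eigenvalue
    return v - np.dot(v - c, n) * n                       # Equation (1)

# usage: a vertex hovering above a nearly planar patch is pulled onto the plane
patch = np.array([[0, 0, 0.01], [1, 0, -0.02], [0, 1, 0.0], [1, 1, 0.01]])
print(map_vertex([0.5, 0.5, 0.4], patch))
```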


#### **3.6.1 Post-processing**


To improve the quality of the generated mesh we perform an additional mesh-optimization step. Based on the principles of Laplacian smoothing, the vertices of the mesh are repositioned by first computing the centroid of the directly connected neighbor vertices. In a subsequent step these centroids are again projected onto the tangent planes of the corresponding point sets according to Equation (1). In general, mesh optimization is a repetitive process, applied several times to obtain the largest possible gain in surface quality; see Figure 7(a) and Figure 7(b).

Fig. 7. (a) Part of the reconstructed surface mesh of the dragon data set before smoothing was applied; (b) the corresponding mesh after a few relaxation steps.
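A single relaxation pass can be sketched as a Laplacian step followed by the tangential re-projection of Equation (1), reusing the `map_vertex` sketch from Section 3.6; the data layout (index-based one-rings, per-vertex point sets) is an assumption made for this illustration.

```python
import numpy as np

def relax_once(positions, one_rings, local_points, voxel_edge_length):
    """One mesh-optimization pass (sketch): move every vertex to the centroid
    of its directly connected neighbors, then project that centroid back onto
    the tangent plane of its local point set via map_vertex / Equation (1).

    positions    : (n, 3) array of vertex coordinates
    one_rings    : list of neighbor-index lists, one per vertex
    local_points : list of (m_i, 3) arrays, the points P_v of every vertex
    """
    relaxed = positions.copy()
    for i, ring in enumerate(one_rings):
        if not ring:
            continue
        centroid = positions[ring].mean(axis=0)           # Laplacian smoothing step
        relaxed[i] = map_vertex(centroid, local_points[i], voxel_edge_length)
    return relaxed
```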

## **4. Results**

#### **4.1 Performance**

To determine the overall time complexity we look at each step of the algorithm separately. We begin with the spatial decomposition: in order to distribute the points contained in a voxel set *V<sup>k−1</sup>* to their respective subvoxels, the subvoxel affiliation of every point has to be determined, which leads to a computational complexity of *O*(|*P*|) in every refinement step. Considering the HM wrapping we have to differentiate between the regular operation R and the irregular operation I, keeping in mind that both are interdependent. Since R depends essentially linearly on the number of faces |*M<sup>k−1</sup>*| and hence on |*V<sup>k−1</sup>*|, we obtain a complexity of *O*(|*V<sup>k−1</sup>*|). Since it is difficult to estimate the number of attempts needed to find *M<sup>k</sup>*, we cannot make precise statements concerning the computation time of I; based on empirical observations, an average time complexity of *O*(*k* · |*V<sup>k</sup>*|) holds.

Assuming that |*V<sup>k</sup>*| ≤ |*P*| with a constant *k* (say 0 < *k* ≤ 10), the combined results of the individual sub-processes lead to an overall complexity of *O*(|*P*|) for one single refinement step. The complexity of completely generating a mesh representing the unknown surface at refinement level *k* thus averages *O*(*k* · |*P*|).
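As a back-of-the-envelope restatement of this argument (no new result), the per-level costs of the three sub-processes combine as

$$
T_{\mathrm{step}}(k) = \underbrace{O(|P|)}_{\text{HSP}} + \underbrace{O(|V^{k-1}|)}_{\mathrm{R}} + \underbrace{O\!\left(k\,|V^{k}|\right)}_{\mathrm{I}} = O(|P|),
\qquad
T_{\mathrm{total}} = \sum_{k'=1}^{k} T_{\mathrm{step}}(k') = O(k\,|P|),
$$

using $|V^{k}| \le |P|$ and the bounded level index.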


The following time tables confirm the propositions discussed above. Table 1 shows the measured computation times for several data sets (*Stanford Scan Repository* (2009)), obtained on an Intel Pentium 4 system with 1.6 GHz and 256 MB main memory. Table 2 presents more detailed results for the individual reconstruction steps for the Stanford dragon point data set.


| Object | Points | Faces | Ref. level | Reconstr. time [sec] | Opt. time [sec] |
|--------|--------|-------|------------|----------------------|-----------------|
| Rabbit | 8,171 | 25,234 | 6 | 1.058 | 0.244 |
| Dragon | 437,645 | 72,955 | 7 | 5.345 | 0.714 |
| Buddha | 543,652 | 56,292 | 7 | 5.170 | 0.573 |

Table 1. Computation times for several input models; the VM and the mesh optimization (4 passes) are performed after the last reconstruction step.


| Ref. level | Spatial partitioning [sec] | HM extraction R [sec] | HM extraction I [sec] | Vertex mapping [sec] | Opt. [sec] | Complete [sec] |
|------------|---------------------------|-----------------------|-----------------------|----------------------|------------|----------------|
| 1 | 0.240 | < 0.001 | < 0.001 | 0.019 | < 0.001 | 0.262 |
| 2 | 0.330 | < 0.001 | < 0.001 | 0.068 | < 0.001 | 0.401 |
| 3 | 0.388 | < 0.001 | 0.002 | 0.241 | 0.002 | 0.634 |
| 4 | 0.401 | < 0.001 | 0.014 | 0.676 | 0.008 | 1.100 |
| 5 | 0.278 | 0.002 | 0.056 | 0.787 | 0.050 | 1.173 |
| 6 | 0.362 | 0.008 | 0.170 | 0.928 | 0.180 | 1.648 |
| 7 | 0.662 | 0.039 | 0.717 | 1.683 | 0.725 | 3.826 |

Table 2. Time values for each reconstruction step of the Stanford dragon.

#### **4.2 Robustness and quality**

As shown by the examples, our reconstruction method delivers meshes of good quality as long as the resolution of the voxel complex does not exceed a threshold induced by the point density. Topological features such as holes and cavities were always detected correctly after a proper number of refinements, see Figure 8(a). The HM wrapping rules prevent the mesh from caving into the underlying object while still covering the real surface completely. Table 3 shows the measured *L*<sup>2</sup>-errors obtained by processing the point clouds of different models for several refinement levels.


| Ref. level | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|------------|---|---|---|---|---|---|---|
| *L*<sup>2</sup>-error Bunny | 25.86 | 9.47 | 3.60 | 1.13 | 0.41 | 0.14 | – |
| *L*<sup>2</sup>-error Buddha | – | 51.92 | 18.80 | 7.61 | 3.47 | 1.58 | 0.66 |

Table 3. *L*<sup>2</sup>-error values of the Bunny and Buddha reconstructions for each refinement level (measured in lengths of the bounding-box diagonal).


Fig. 8. (a) Correctly detected holes in the bunny model; (b) fragmentation of the mesh due to insufficient point-cloud density.

In case the voxel refinement level exceeds a specific bound imposed by the sampling density, the resulting mesh may be incomplete in some areas due to empty voxels, see Figure 8(b). If a very fine mesh is desired, one can fix the topology at a coarser level and apply subdivision and vertex mapping in consecutive steps.

Fig. 9. (a)-(f) Different reconstruction levels, from level one to six, of the Stanford dragon point data set (*Stanford Scan Repository* (2009)) consisting of 437,645 points.

#### **5. Experimental reconstruction of environmental point data**

The exploration of point data sets like environmental LiDAR data is a challenging problem of current interest. In contrast to point clouds obtained from stationary scanning devices, the data complexity is increased by, e.g., noise, occlusion, varying sample density and overlapping samples. Despite the high scanning resolution, additional undersampling may occur at small and fractured structures like fences and the leaves of trees. These are only some of the problems inherent to the reconstruction of surfaces from unorganized environmental point clouds.

We have applied our method to environmental point clouds in order to analyse the effects of the aforementioned factors on the proposed reconstruction approach. In this context we have performed some experiments on environmental input data generated by tripod-mounted LiDAR scanners. Figure 10(a) shows a point cloud of a water tower located at UC Davis, CA, USA. It consists of about 4.3 million points and features complex environmental structures like buildings and trees. Figure 10(b) shows the corresponding reconstruction after 10 steps; generating the mesh, which exhibits about 300 thousand faces, took 23 seconds on an Intel Core2 Duo system with 4 GB RAM. Another example is presented in Figure 11. It shows the final surface representation (215 thousand faces) of a slope with parts of a building and some vegetation. The underlying point cloud has 3.8 million points; the reconstruction finished at level 9 after 24 seconds. The reconstructed mesh features holes and cracks in those areas where the point density is insufficient.

Fig. 10. Water tower scan at the campus of UC Davis. (a) Raw point cloud with 4.3 million points; (b) reconstruction with 300k faces (10 subdivision steps; total reconstruction time of about 23 seconds on an Intel Core2 Duo).

Fig. 11. Reconstruction of an environmental point cloud (3.8 million points, 215k faces, 9 steps, 24 seconds).

Environmental point clouds are by nature large (consisting of up to several million points) and feature high complexity. The experiments showed that our reconstruction approach is capable of producing reliable representations even in cases in which the point clouds are not optimally conditioned. However, the reconstruction is deficient in some regions where the point density does not provide the required resolution. Despite this, the introduced method is well suited for providing a fast preview of complex environmental scenes and serves as a basis for initial reconstructions with respect to further mesh processing.


## **6. Conclusion**

We provided a novel multi-resolution approach to surface reconstruction from point clouds. Our method automatically adapts to the underlying surface topology and provides a fully connected hybrid-mesh representation. In the context of reverse engineering it is able to provide accurate reconstructions provided that the input data exhibits a sufficient point density. However, in case the point distribution is not continuous, the generated reconstruction may exhibit cracks and holes. One possible direction for future work is to improve the stability of the approach with respect to such unwanted effects, e.g., by making the reconstruction more sensitive to irregular point distributions.

#### **7. Acknowledgements**

This work was supported by the German Research Foundation (DFG) through the International Research Training Group (IRTG) 1131 and by the Computer Graphics Research Group at the University of Kaiserslautern. It was also supported in part by the W. M. Keck Foundation, which provided support for the UC Davis Center for Active Visualization in the Earth Sciences (KeckCAVES). We thank the members of the Computer Graphics Research Group at the Institute for Data Analysis and Visualization (IDAV) at UC Davis, the UC Davis Center for Active Visualization in the Earth Sciences (KeckCAVES), and the Department of Geology at UC Davis for their support and for providing us with the data sets.

#### **8. References**

Alexa, M., Behr, J., Cohen-Or, D., Fleishman, S., Levin, D. & Silva, C. T. (2001). Point set surfaces, *Conference on Visualization*, IEEE Computer Society, pp. 21–28.

Alexa, M., Behr, J., Cohen-Or, D., Fleishman, S., Levin, D. & Silva, C. (2003). Computing and rendering point set surfaces, *IEEE Transactions on Visualization and Computer Graphics* 9(1): 3–15.

Amenta, N., Bern, M. & Eppstein, D. (1998). The crust and the beta-skeleton: combinatorial curve reconstruction, *Graphical Models and Image Processing* 60, pp. 125–135.

Amenta, N., Bern, M. & Kamvysselis, M. (1998). A new Voronoi-based surface reconstruction algorithm, *ACM Siggraph '98*, pp. 415–421.

Amenta, N., Choi, S., Dey, T. K. & Leekha, N. (2000). A simple algorithm for homeomorphic surface reconstruction, *SCG '00: Proceedings of the sixteenth annual symposium on Computational geometry*, pp. 213–222.

Amenta, N., Choi, S. & Kolluri, R. K. (2001). The power crust, *6th ACM Symposium on Solid Modeling and Applications*, ACM Press, pp. 249–266.

Azernikov, S., Miropolsky, A. & Fischer, A. (2003). Surface reconstruction of freeform objects based on multiresolution volumetric method, *ACM Solid Modeling and Applications*, pp. 115–126.

Bernardini, F., Mittleman, J., Rushmeier, H., Silva, C. & Taubin, G. (1999). The ball-pivoting algorithm for surface reconstruction, *IEEE Transactions on Visualization and Computer Graphics (TVCG)* 5(4): 349–359.

Bertram, M., Tricoche, X. & Hagen, H. (2003). Adaptive smooth scattered-data approximation for large-scale terrain visualization, *Joint EUROGRAPHICS - IEEE TCVG Symposium on Visualization*, pp. 177–184.

Bhattacharya, P. (2001). Efficient neighbor finding algorithms in quadtree and octree.

Boissonnat, J.-D. (1984). Geometric structures for three-dimensional shape representation, *ACM Trans. Graph.* 3(4): 266–286.

Dey, T. K. & Goswami, S. (2003). Tight cocone: A water tight surface reconstructor, *Proc. 8th ACM Sympos. Solid Modeling Appl.*, pp. 127–134.

Dey, T. K. & Goswami, S. (2004). Provable surface reconstruction from noisy samples, *20th Annual Symposium on Computational Geometry*, ACM Press, pp. 330–339.

Dey, T. K. & Sun, J. (2005). An adaptive MLS surface for reconstruction with guarantees, *3rd Eurographics Symposium on Geometry Processing*, pp. 43–52.

Eck, M. & Hoppe, H. (1996). Automatic reconstruction of B-spline surfaces of arbitrary topological type, *ACM Siggraph*, pp. 325–334.

Edelsbrunner, H. & Mücke, E. P. (1994). Three-dimensional alpha shapes, *ACM Transactions on Graphics* 13(1): 43–72.

Fleishman, S., Cohen-Or, D. & Silva, C. (2005). Robust moving least-squares fitting with sharp features, *ACM Siggraph*, pp. 544–552.

Floater, M. & Reimers, M. (2001). Meshless parameterization and surface reconstruction, *Comput. Aided Geom. Des.* 18(2): 77–92.

Funke, S. & Ramos, E. A. (2002). Smooth-surface reconstruction in near-linear time, *SODA '02: Proceedings of the thirteenth annual ACM-SIAM symposium on Discrete algorithms*, pp. 781–790.

Goshtasby, A. & O'Neill, W. D. (1993). Surface fitting to scattered data by a sum of Gaussians, *Comput. Aided Geom. Des.* 10(2): 143–156.

Guo, B. (1997). Surface reconstruction: from points to splines, *Computer-Aided Design* 29(4): 269–277.

Guskov, I., Khodakovsky, A., Schroeder, P. & Sweldens, W. (2002). Hybrid meshes: Multiresolution using regular and irregular refinement, *ACM Symposium on Geometric Modeling (SoCG)*, pp. 264–272.

Heckel, B., Uva, A. & Hamann, B. (1997). Cluster-based generation of hierarchical surface models, *Scientific Visualization, Dagstuhl*, pp. 113–222.

Hoppe, H., DeRose, T., Duchamp, T., Halstead, M., Jin, H., McDonald, J., Schweitzer, J. & Stuetzle, W. (1994). Piecewise smooth surface reconstruction, *ACM SIGGRAPH 1994 Conference Proceedings*, pp. 295–302.

Hoppe, H., DeRose, T., Duchamp, T., McDonald, J. & Stuetzle, W. (1992). Surface reconstruction from unorganized points, *ACM SIGGRAPH '92 Conference Proceedings*, pp. 71–78.

Keller, P., Bertram, M. & Hagen, H. (2005). Multiresolution surface reconstruction from scattered data based on hybrid meshes, *IASTED VIIP*, pp. 616–621.

Keller, P., Bertram, M. & Hagen, H. (2007). Reverse engineering with subdivision surfaces, *Computing 2007*, pp. 127–134.

Kolluri, R., Shewchuk, J. R. & O'Brien, J. F. (2004). Spectral surface reconstruction from noisy point clouds, *SGP '04: Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on Geometry processing*, pp. 11–21.

Lei, Z., Blane, M. M. & Cooper, D. B. (1996). 3L fitting of higher degree implicit polynomials, 2(4): 148–153.

Mederos, B., Velho, L. & de Figueiredo, L. H. (2003). Moving least squares multiresolution surface approximation, *SIBGRAPI*.

Mederos, B., Amenta, N., Velho, L. & de Figueiredo, L. H. (2005). Surface reconstruction from noisy point clouds, *Eurographics Symposium on Geometry Processing*.

Nealen, A. (2004). An as-short-as-possible introduction to the least squares, weighted least squares and moving least squares methods for scattered data approximation and interpolation, *Technical Report, TU Darmstadt*.

Pratt, V. (1987). Direct least-squares fitting of algebraic surfaces, *SIGGRAPH Comput. Graph.* 21(4): 145–152.

Ramamoorthi, R. & Arvo, J. (1999). Creating generative models from range images, *SIGGRAPH '99: Proceedings of the 26th annual conference on Computer graphics and interactive techniques*, pp. 195–204.

*Stanford Scan Repository* (2009). URL: *http://graphics.stanford.edu/data/3Dscanrep/*

Stollnitz, E., DeRose, T. & Salesin, D. (1996). *Wavelets for Computer Graphics–Theory and Applications*, Morgan Kaufmann.

Wang, J., Oliveira, M. & Kaufman, A. (2005). Reconstructing manifold and non-manifold surfaces from point clouds, *IEEE Visualization*, pp. 415–422.



**7**

**A Systematic Approach for Geometrical and Dimensional Tolerancing in Reverse Engineering**

George J. Kaisarlis

*Rapid Prototyping & Tooling – Reverse Engineering Laboratory, Mechanical Design & Control Systems Section, School of Mechanical Engineering, National Technical University of Athens (NTUA), Greece*

## **1. Introduction**

In the fields of mechanical engineering and industrial manufacturing the term Reverse Engineering (RE) refers to the process of creating engineering design data from existing parts and/or assemblies. While conventional engineering transforms engineering concepts and models into real parts, in the *reverse engineering* approach real parts are transformed into engineering models and concepts (Várady et al., 1997). However, apart from its application to mechanical components, RE is a very common practice in a broad range of diverse fields such as software engineering, the animation/entertainment industry, microchips, chemicals, electronics, and pharmaceutical products. Despite this diversity, the reasons for using RE appear to be common to all these application fields, e.g. the original design data and documentation of a product are either not updated, not accessible, do not exist or have never existed. Focusing on the mechanical engineering domain, through the application of RE techniques an existing part is recreated by acquiring its surface and/or geometrical feature data using contact or non-contact scanning or measuring devices (Wego, 2011). The creation of an RE component computer model takes advantage of the extensive use of CAD/CAM/CAE systems and provides enormous gains in improving the quality and efficiency of RE design, manufacture and analysis. Therefore, RE is now considered one of the technologies that provide substantial business benefits in shortening the product development cycle (Raja & Fernandes, 2008).

Tolerance assignment is fundamental for successful mechanical assembly, conformance with functional requirements and component interchangeability, since manufacturing with perfect geometry is virtually unrealistic. In engineering drawings, Geometric Dimensioning and Tolerancing (GD&T) correlates the size, form, orientation and location of the geometric elements of the design model with the design intent; it therefore has a profound impact on the manufacturability, ease of assembly, performance and ultimate cost of the component. High geometrical and dimensional accuracy leads to high quality; however, tight tolerances lead to an exponential increase of the manufacturing cost. Though the importance of tolerance design is well understood in the engineering community, it still remains an engineering task that largely depends on experimental data, industrial databases and guidelines, past experience and individual expertise (Kaisarlis et al., 2008). Geometrical and dimensional tolerances are, on the other hand, of particular importance not only in industrial production but also in product development, equipment upgrading and maintenance. The last three activities inevitably include RE tasks which go along with the reconstruction of an object CAD model from measured data and have to do with the assignment of dimensional and geometrical manufacturing tolerances to this object. In that context, tolerancing of RE components addresses a wide range of industrial applications and real-world manufacturing problems such as tolerance allocation in terms of the actual functionality of a prototype assembly, mapping of component experimental design modifications, spare part tolerancing for machines that are out of production or need improvements and for which no drawings are available, damage repair, engineering maintenance, etc.

Characteristic applications reported in the literature involve mechanical components and assemblies that have inevitably been modified during several stages of their life cycle, e.g. surface modifications of automotive components during prototype functionality testing (Chant et al., 1998; Yau, 1997), mapping of sheet metal forming deviations on free-form surface parts (Yuan et al., 2001), monitoring of the geometrical stability during test runs of mock-up turbine blades used in nuclear power generators (Chen & Lin, 2000), repair time compression by efficient RE modeling of the broken area and subsequent rapid spare part manufacturing (Zheng et al., 2004), and recording of die distortions due to thermal effects for the design optimization of fan blades used in aero engines (Mavromihales et al., 2003).

The objective of remanufacturing a needed mechanical component which has to fit and perform well in an existing assembly and, moreover, has to observe the originally assigned functional characteristics of the product is rather delicate. The objective in such applications is the designation of geometric and dimensional tolerances that match, as closely as possible, the original *(yet unknown)* dimensional and geometrical accuracy specifications that reveal the original design intent. RE tolerancing becomes even more sophisticated in cases where Coordinate Measuring Machine (CMM) data of only a few, or just one, of the original components to be reverse engineered are within reach. Moreover, if operational use has led to considerable wear or damage, or one of the mating parts is missing, the complexity of the problem increases considerably. The RE tolerancing problem has not been sufficiently and systematically addressed to date. Currently, in such industrial problems, where relevant engineering information typically does not exist, the conventional trial-and-error approach for the allocation of RE tolerances is applied. This approach requires much effort and time and offers no guarantee of the best results.

This research work provides a novel, modern and integrated methodology for tolerancing in RE. The problem is addressed in a systematic, time- and cost-efficient way, compatible with current industrial practice. The rest of the chapter is organized as follows: after the review of relevant technical literature in Section 2, the theoretical analysis of RE dimensional and geometrical tolerancing is presented *(Sections 3 and 4 respectively)*. The application of the Tolerance Elements (TE) method for cost-effective, competent tolerance designation in RE is then introduced in Section 5. Application examples that illustrate the effectiveness of the methodology are presented and discussed in Section 6. Main conclusions and future work orientation are included in the final Section 7 of the chapter.

The principles and applications of tolerancing in modern industrial engineering can also be found in several reference publications, e.g. (Drake, 1999; Fischer, 2004). An extensive and systematic review of the conducted research and the state of the art in the field of dimensioning and tolerancing techniques is provided by several recent review papers, e.g. (Singh et al., 2009a, 2009b), and need not be reiterated here. In the large number of research articles on various tolerancing issues in design for manufacturing that have been published over recent years, the designation of geometrical tolerances has been studied under various aspects including tolerance analysis and synthesis, composite positional tolerancing, geometric tolerance propagation, datum establishment, virtual manufacturing, and inspection and verification methods for GD&T specifications, e.g. (Anselmetti and Louati, 2005; Diplaris & Sfantsikopoulos, 2006; Martin et al., 2011).

Although RE-tolerancing is a very important and frequently met industrial problem, the need for a systematic approach to extract appropriate design specifications that concern the geometric accuracy of a reconstructed component has only recently been pointed out (Borja et al., 2001; Thompson et al., 1999; VREPI, 2003). One of the earliest research works that systematically addresses the RE-tolerancing problem (Kaisarlis et al., 2000) presents the preliminary concept of a knowledge-based system that aims at the allocation of standard tolerances as per ISO-286. The issue of datum identification in RE geometric tolerancing is approached in a systematic way in a later publication (Kaisarlis et al., 2004). Recently, (Kaisarlis et al., 2007, 2008) have further extended the research in this area by focusing on the RE assignment of position tolerances in the case of fixed and floating fasteners respectively. The methodology that is presented in this chapter further develops the approach proposed in these last two publications. The novel contribution reported here deals with *(i)* the systematic assignment of *both* geometrical and dimensional tolerances in RE and their possible interrelation through the application of material modifiers on *both* the RE features and datums, and *(ii)* the consideration of cost-effective, competent tolerance designation in RE in a systematic way.

## **3. Dimensional tolerancing in reverse engineering**

Reconstructed components must obviously mate with the other components of the mechanical assembly that they belong to, in *(at least)* the same way as their originals, so that the original assembly clearances are observed, i.e. they must have appropriate manufacturing tolerances. As pointed out in Section 1, this is quite different and much more difficult to achieve in RE than when designing from scratch where, through normal tolerance analysis/synthesis techniques and given clearances, critical tolerances are assigned, right from the beginning, to all the assembly components. Integration of geometric
