**From Unstructured 3D Point Clouds to Structured Knowledge - A Semantics Approach**

**1. Introduction**

212 Semantics – Advances in Theories and Mathematical Models


Over the last few years, formal ontologies have been suggested as a solution to several engineering problems, since they can replace standard and relational databases with more flexibility and reliability. Well-designed ontologies offer many benefits, such as defining a controlled vocabulary of terms, inheriting and extending existing terms, declaring relationships between terms, and inferring new relationships by reasoning on existing ones. Ontologies are used to formally represent the knowledge of a domain; the basic idea is to present knowledge with graphs and logical structures so that computers can understand and process it (Boochs, et al., 2011). In recent works, this tendency towards the use of semantics has been explored (Ben Hmida, et al., 2010) (Hajian, et al., 2009) (Whiting, 2006), where the automatic extraction of data from 3D point clouds presents one of the new challenges, especially for map updating, passenger safety and security improvements. Such a domain is characterized by a specific vocabulary containing different types of objects, and the assumption that knowledge will improve the automation, the accuracy and the quality of the results is shared by specialists in point cloud processing.
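The inference capability mentioned above can be illustrated with a minimal, self-contained sketch. This is plain Python rather than a full OWL reasoner, and the class names are invented for illustration: computing the transitive closure of declared subclass assertions derives relationships that were never stated explicitly.

```python
# Minimal illustration of ontology-style reasoning: deriving implicit
# subclass relationships from explicitly declared ones (toy vocabulary).
subclass_of = {
    "SignalPole": {"PunctualObject"},
    "PunctualObject": {"SceneObject"},
    "Wall": {"PlanarObject"},
    "PlanarObject": {"SceneObject"},
}

def infer_ancestors(term, kb):
    """Infer all direct and transitive superclasses of a term."""
    seen = set()
    frontier = set(kb.get(term, ()))
    while frontier:
        parent = frontier.pop()
        if parent not in seen:
            seen.add(parent)
            frontier |= kb.get(parent, set())
    return seen

# The reasoner derives SignalPole -> SceneObject, which was never asserted.
print(infer_ancestors("SignalPole", subclass_of))
```

A dedicated reasoner (e.g. over RDFS/OWL semantics) additionally handles properties, restrictions and consistency checking, but the principle is the same: new facts are inferred from declared ones.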

As a matter of fact, surveying with 3D scanners is spreading across many domains. Terrestrial laser scanners have become a workhorse for topographic and building surveys, from archaeology (Balzani, et al., 2004) to architecture (Vale, et al., 2009). With every new scanner model on the market, the instruments become faster and more accurate and can scan objects at longer distances. Such technology is a powerful tool for many applications and has partially replaced traditional surveying methods, since it can speed up field work significantly. This powerful method allows the creation of 3D point clouds of objects or landscapes. However, the huge amount of data generated in the process proves costly in post-processing: the processing time is very high since, in most cases, processing techniques are still mainly driven by manual interaction of the user. Typical operations consist of cleaning point clouds, deleting unnecessary areas, navigating an often huge and complicated 3D structure, selecting sets of points, and extracting and modelling geometries and objects. At the same time, it would be much more effective to automatically process data that has already been recorded in a very fast and effective way.

On the other hand, the technical survey of a facility aims to build a digital model based on geometric analysis. Such a process becomes more and more tedious, especially with the new terrestrial laser scanners, which generate huge amounts of 3D points. In this scenario, new challenges have emerged, the basic one being to make the reconstruction process automatic and more accurate. Thus, early works on 3D point clouds investigated the reconstruction and recognition of geometrical shapes (Pu, et al., 2007) to address this challenge. The problem was investigated as a topic of computer graphics and signal processing research, where most works focused on segmentation or visualization aspects. More recently, the tendency towards the use of semantics has been explored (Ben Hmida, et al., 2010). As a main operation, the technical survey relies fundamentally on the object reconstruction process, where considerable effort has already been invested to reduce the impact of time-consuming manual activities and to substitute them with numerical algorithms. Unfortunately, most such algorithmic conceptions are data-driven and concentrate on specific features of the objects that are accessible to numerical models. With these models, which normally describe the behavior of geometrical features (flatness, roughness, ...) or physical features (color, texture, ...), the data is classified and analyzed. Such strategies are static and do not allow a dynamic adjustment to the object or to initial processing results. In practice, an algorithm applied to the data produces better or worse results depending on several parameters, such as image or point cloud quality, the completeness of the object representation, the viewpoint positions, the complexity of the object features, the use of control parameters, and so on.
Consequently, there is no feedback to the algorithmic part that would allow choosing a different algorithm or reusing the same algorithm with changed parameters. This interaction is mainly left to the user, who has to decide which algorithms to apply to which kinds of objects and data sets. Often, good results can only be achieved by iterative processing controlled by human interaction.

These problems can be solved by integrating further information into the algorithmic process chain for object detection and recognition, supporting the validation process. Such information might be derived from the context of the object itself and its behavior with respect to the data and/or other objects, or from a systematic characterization of the parameterization and effectiveness of the algorithms to be used. Since the programming languages used for numerical treatments are not designed to process knowledge, they are not flexible in this respect, which makes the integration of semantic aspects difficult.

As a matter of fact, the goal of our proposition is to develop efficient and intelligent methods for the automated processing of terrestrial laser scanner data (Fig. 1). The principle of our solution is a knowledge-based detection of objects in point clouds for AEC (Architecture, Engineering and Construction) engineering applications, in correspondence with a project of the same name, "WiDOP". In contrast to existing approaches, the project uses prior knowledge about the context and the objects. This knowledge is extracted from databases, CAD plans, Geographic Information Systems (GIS), technical reports or domain experts, and forms the basis for a selective, knowledge-oriented detection and recognition of objects in point clouds. In this scenario, knowledge about the objects has to include detailed information about their geometry, structure, 3D algorithms, etc.

Fig. 1. Automatic processing compared to the manual one.
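The knowledge-oriented recognition described above can be illustrated with a minimal sketch. The object classes and dimension ranges below are invented for illustration and are not the chapter's actual vocabulary: prior knowledge constrains which labels a detected geometry may receive.

```python
# Hedged sketch: validating detection candidates against prior domain
# knowledge. Classes and dimension ranges are illustrative only.
KNOWLEDGE = {
    "Signal": {"height_m": (3.0, 6.0), "vertical": True},
    "Wall":   {"height_m": (2.0, 12.0), "vertical": True},
}

def classify(candidate, knowledge):
    """Label a detected geometry with every class whose constraints it meets."""
    labels = []
    for name, constraints in knowledge.items():
        lo, hi = constraints["height_m"]
        if lo <= candidate["height_m"] <= hi and \
           candidate["vertical"] == constraints["vertical"]:
            labels.append(name)
    return labels

# A 4 m vertical element satisfies both Signal and Wall constraints, so
# further knowledge (topology, context) would be needed to disambiguate.
print(classify({"height_m": 4.0, "vertical": True}, KNOWLEDGE))
```

Ambiguous results like this one are exactly where the richer knowledge discussed later (sub-element hierarchies, topology, algorithm characteristics) comes into play.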


The present chapter aims at building a bridge between semantic modelling and numerical processing in order to define strategies based on domain knowledge and 3D processing knowledge. The knowledge is structured in an ontology containing a variety of elements: already existing information about objects of the scene, such as data sources (digital maps, geographic information systems, etc.), information about the objects' characteristics, the hierarchy of sub-elements, the geometrical topology, the characteristics of processing algorithms, and so on. In addition, all relevant information about the objects, their geometries, their inter- and intra-relations and the 3D processing algorithms is modeled inside the knowledge base, including characteristics such as positions, geometric information, image textures, and the behavior and parameters of suitable algorithms, for example.
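The kind of knowledge-base entry described above can be sketched as a small data structure. This is a plain-Python illustration of the idea, not the chapter's actual ontology schema; the labels, sub-elements and algorithm parameters are made up.

```python
# Hedged sketch of one knowledge-base entry linking an object class to its
# geometry, sub-elements and suitable processing algorithms (illustrative).
from dataclasses import dataclass, field

@dataclass
class AlgorithmSpec:
    """A processing algorithm together with its control parameters."""
    name: str
    parameters: dict = field(default_factory=dict)

@dataclass
class ObjectKnowledge:
    """One entry of the knowledge base for a scene object class."""
    label: str
    geometry: str
    sub_elements: list = field(default_factory=list)
    algorithms: list = field(default_factory=list)

wall = ObjectKnowledge(
    label="Wall",
    geometry="vertical plane",
    sub_elements=["Door", "Window"],
    algorithms=[AlgorithmSpec("RANSAC_plane", {"distance_threshold_m": 0.02})],
)
print(wall.algorithms[0].name)
```

In an ontology language such as OWL, the same information would be expressed as classes, object properties (e.g. has-sub-element) and datatype properties, which additionally makes it available to a reasoner.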

In this contribution, an approach to achieving object detection and recognition with such inference engines is presented. The major context behind the current chapter is the use of knowledge to manage the engineering problem in question in a heterogeneous environment. It primarily focuses on 3D point clouds and their management through the available processing technologies for object detection and recognition, incorporated through the knowledge. As Web technologies have matured through the Semantic Web, implementing knowledge in this domain seems all the more appropriate.

This research puts forward the views and results of our research activities against the backdrop of Semantic Web technologies and the knowledge management aspect within them. The suggested system is materialized in the WiDOP project (Ben Hmida, et al., 2011). Furthermore, the resulting WiDOP platform is able to generate an indexed scene from unorganized 3D point clouds, visualized with the Virtual Reality Modeling Language (W3C, 1995).

The remainder of this chapter is structured as follows: Section 2 gives an overview of existing strategies for reconstruction processes; Section 3 highlights the adopted languages and technologies for knowledge and semantic modeling; Section 4 explains the suggested architecture of the WiDOP solution; Section 5 presents an overview of the related knowledge model; Section 6 emphasizes the intelligent process; Section 7 shows different strategies and levels of knowledge for the processing; Section 8 presents the developed platform and gives first results for a real example; and finally, Section 9 concludes and outlines the next planned steps.


require several months to complete, depending on the complexity of the facility and the modeling requirements. Reverse-engineering tools excel at the geometric modeling of surfaces but lack volumetric representations, while design systems cannot handle the massive data sets produced by laser scanners. As a result, modelers often shuttle intermediate results back and forth between different software packages during the modeling process, giving rise to the possibility of information loss due to limitations of data exchange standards or errors in the implementation of those standards within the software tools (Goldberg, 2005). Prior knowledge about component geometry, such as the diameter of a column, can be used to constrain the modeling process, or the characteristics of known components may be kept in a standard component library. Finally, the class of the detected geometry is determined by the modeler once the object is created. In some cases, relationships between components are established either manually or in a semi-automated manner.
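The component-library idea above can be sketched as follows. This is a minimal illustration with made-up catalog values, not a real standards library: a measured column diameter is constrained to the nearest entry of a standard component catalog, falling back to the raw measurement when nothing plausible matches.

```python
# Hedged sketch: constraining a fitted dimension with a standard component
# library. The catalog values below are illustrative, not a real standard.
STANDARD_COLUMN_DIAMETERS_M = [0.20, 0.25, 0.30, 0.40, 0.50]

def constrain_diameter(measured, catalog, tolerance=0.05):
    """Snap a measured diameter to the closest catalog entry if one lies
    within the given tolerance; otherwise keep the raw measurement."""
    best = min(catalog, key=lambda d: abs(d - measured))
    return best if abs(best - measured) <= tolerance else measured

print(constrain_diameter(0.27, STANDARD_COLUMN_DIAMETERS_M))  # snaps to 0.25
print(constrain_diameter(0.73, STANDARD_COLUMN_DIAMETERS_M))  # kept as-is
```

The same pattern generalizes to other constrained attributes (wall thicknesses, pipe radii), which is one way prior knowledge reduces the effect of measurement noise on the model.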

**2.2 Semi-Automatic and Automatic methods** 

The manual process for constructing a survey model is time-consuming, labor-intensive, tedious, subjective, and requires skilled workers. Even if the modeling of an individual geometric primitive can be fairly quick, modeling a facility may require thousands of primitives, and the combined modeling time can be several months for an average-sized facility. Since the same types of primitives must be modeled throughout a facility, the steps are highly repetitive and tedious (Hajian, et al., 2009). These observations and others illustrate the need for semi-automated and automated techniques for facility model creation. Ideally, a system could be developed that would take a point cloud of a facility as input and produce a fully annotated as-built model of the facility as output. The first step of the automatic process is geometric modeling: the process of constructing simplified representations of the 3D shape of survey components from point cloud data. In general, the shape representation is supported by Constructive Solid Geometry (CSG) (Corporation, 2006) or Boundary Representation (B-Rep) (CASCADE, 2000); the representation of geometric shapes has been studied extensively (Campbell, et al., 2001). Once geometric elements are detected and stored in a specific representation, the final task in facility modeling is object recognition: the process of labeling a set of data points, or of geometric primitives extracted from the data, with a named object or object class. Whereas the modeling task would find a set of points to be a vertical plane, the recognition task would label that plane as a wall, for instance. Often, the knowledge describing the shapes to be recognized is encoded in a set of descriptors that implicitly capture object shape. Research on the recognition of a facility's specific components is still in its early stages.

Methods in this category typically perform an initial shape-based segmentation of the scene, into planar regions, for example, and then use features derived from the segments to recognize objects. This approach is exemplified by Rusu et al., who use heuristics to detect walls, floors, ceilings, and cabinets in a kitchen environment (Rusu, et al., 2009). A similar approach was proposed by Pu and Vosselman to model facility façades (Pu, et al., 2009). To reduce the search space of object recognition algorithms, the use of knowledge related to a specific facility can be a fundamental solution. For instance, Yue et al. overlay a design model of a facility with the as-built point cloud to guide the process of identifying which data points belong to specific objects and to detect differences between the as-built and as-designed conditions (Yue, et al., 2006). In such cases, the object recognition problem is simplified to a matching problem between the scene model entities and the data points. Another similar approach is presented in
