**Digestion of Knowledge in a KM System to Reveal Implicit Knowledge**

**1. Introduction**


The motivation of this work is the problem of information overload in ICT-based systems. We believe that networked knowledge management systems exhibit the most important characteristics of systems affected by this problem, while being more scalable and controllable than others, so they can serve as an experimental research model.

Our assumption is that systems suffering from information overload have several hidden aspects that can be exploited to mitigate the problem. On the one hand, we can take advantage of the spare capacity of the active elements involved in these systems, such as users, services, applications and other entities related to them. On the other hand, we can use the properties of the elements and activities related to the affected systems, such as the network, the active entities, the information and knowledge involved, and the processes and interactions among these elements and activities.

By applying this assumption to the proposed simplified experimental model of knowledge management systems, we try to discover ways to reduce information overload in these systems, which could then be applied in broader areas such as the Web.

Our approach is based on a knowledge management system called KnowCat (KC) (Alamán & Cobos, 1999; Cobos, 2003; Cobos & Pifarré, 2008), a groupware system that facilitates the management of a knowledge repository by means of user community interaction through the Web. KC selects the best documents without supervision by anyone, using information about users' activity and users' opinions about the knowledge. The KC knowledge repository is formed by documents and topics structured as a knowledge tree. Each KC instance is a KC Node and has a subject, a user community and a knowledge repository. Crystallization is KC's process of selecting knowledge by its quality, using information about users' activity and their opinions about the knowledge items. For more information, please see the chapter about KC in this book.
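As a rough illustration of this kind of unsupervised selection (not KC's actual crystallization algorithm, which is described in the works cited above), a quality score might blend user activity with users' opinions; every name and weight below is invented:

```python
# Hypothetical sketch of crystallization-style document selection:
# blend user activity (reads) and opinions (votes in [0, 1]) into a
# quality score and keep the best documents. Not KC's real formula.

def quality_score(reads, votes, read_weight=0.3, vote_weight=0.7):
    """Blend normalized activity and opinion signals (weights are invented)."""
    avg_vote = sum(votes) / len(votes) if votes else 0.0
    return read_weight * min(reads / 100.0, 1.0) + vote_weight * avg_vote

def crystallize(documents, keep=3):
    """Select the 'keep' highest-quality documents of a topic."""
    ranked = sorted(documents,
                    key=lambda d: quality_score(d["reads"], d["votes"]),
                    reverse=True)
    return [d["title"] for d in ranked[:keep]]

docs = [
    {"title": "Draft A", "reads": 40, "votes": [0.9, 0.8]},
    {"title": "Draft B", "reads": 200, "votes": [0.4]},
    {"title": "Draft C", "reads": 10, "votes": [0.2, 0.3]},
]
print(crystallize(docs, keep=2))  # → ['Draft A', 'Draft B']
```

The point of the sketch is only that selection can emerge from recorded activity and opinions, with no editor in the loop; the real system's weighting and workflow are richer.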

In order to support this assumption, a prototype called Semantic KnowCat (SKC) (Moreno-Llorena, 2008; Moreno-Llorena & Alamán, 2005) has been developed on KnowCat, incorporating ideas and techniques from different research areas that converge on the Semantic Web (Berners-Lee, 2000): Knowledge Management, Human Computer Interaction, Collaborative Work, and Data and Information Mining (Baeza & Ribeiro, 1999).

This article shows how some of the techniques and ideas mentioned are integrated to implement an Analysis Module (AM) of SKC on a KC system. This module is in charge of processing the explicit knowledge of the system in order to develop other, latent, knowledge and return it to the system in a form that can be used. For this purpose, the module uses the texts associated with the knowledge, the way the latter is structured and the way its components interact. As a result, the knowledge developed provides new opportunities for access to, and interaction with, the system and its knowledge.

The AM works on the system knowledge base, depositing the result of its activity in the same repository, so that both the system and its users may use the module's contributions in a transparent way. The AM considers that the knowledge is formed by items. These items may be documents, topics, knowledge trees, nodes, users, etc.

In our approach we have initially considered four types of knowledge items: nodes, which are system instances in charge of knowledge management about an area with the help of a user community; topics, structured in the form of a knowledge tree, which develop the different aspects of the main node topic; users, who constitute the community that participates in the node; and documents, which describe the different topics and are provided by the users, searched by them, and are the object of their consideration.

Each knowledge item considered by the AM must have an associated description text, which may be assigned either manually or automatically; in the second case, the module itself may handle the assignment on some occasions. The descriptive texts associated with the documents that SKC currently manages are the documents themselves, given that they contain textual information. In the case of topics -that is, collections of documents-, the descriptions are given by the descriptive texts of the documents classified within them or of the subtopics they contain, although initially model texts that do not necessarily belong to the system knowledge repository may be used. Nodes work the same way, in that they may be considered the root topics of the knowledge tree constituted by the topics and documents included within them. Regarding users, several description texts may be associated with them, considering, for instance, the documents or topics that they provide or use frequently.

The AM carries out two fundamental tasks: on the one hand, it develops knowledge that is latent in the system; on the other hand, it incorporates this knowledge into the system itself in an explicit way in order to allow its exploitation. Implicit knowledge is found in the relations established among the different knowledge items, for instance within the contents they include or within the interactions they establish with one another. Explicit knowledge is incorporated into the system in its new, clear state, describing the existing knowledge items, or in the form of new knowledge items that are added to the repository.

Links through content are established, in this approach, by obtaining vectorial descriptors of term weights from the text documents associated with the items. With these descriptors, items may be compared, the distance that separates them may be determined, and groups among them may be formed.

Associations based on interaction between knowledge items are determined by analyzing how items relate to one another: for example, the way in which topics group documents and other topics in the knowledge tree of nodes, or how users provide documents to the system.

With the new knowledge, the system may improve the management it carries out in different ways, for instance by providing different views of the repository and new access services, by simplifying users' classification of knowledge items in the system, or by informing users about implicit relations between items, given the context of interaction.

The knowledge incorporated into the system as a result of the analysis provides new opportunities for exploiting the repository. On one hand, the enriched knowledge items may be shown from new perspectives thanks to their new attributes. On the other hand, the items incorporated into the system through the knowledge it has assimilated allow offering users different views of the repository and new services.

**2.1 Linking by content**

The knowledge tree of a KC node represents the common and shared understanding of the corresponding community on the domain dealt with by the node. This tree may be considered a representation of an ontology underlying the domain (Gruber, 1993). The assignment of documents to topics in the knowledge tree involves semantic annotation of these documents within the scope of an ontology. This is the AM's view of the knowledge tree of the node where it works. It is in this context that one should be interested in the automatic annotation of documents -the automatic assignment of documents to topics- (Kiryakov et al., 2004) or in mapping between the ontologies -trees- of different nodes (Noy & Musen, 2002).
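In this view, automatic annotation reduces to assigning each document to the most similar topic in the tree. A minimal sketch, assuming some similarity function over description texts is available (the `overlap` measure below is a toy stand-in, not the module's actual measure):

```python
def assign_topic(doc_text, topics, similarity):
    """Annotate a document with the knowledge-tree topic whose description
    text is most similar to the document's own text (hypothetical helper)."""
    return max(topics, key=lambda name: similarity(doc_text, topics[name]))

def overlap(doc, topic):
    """Toy similarity: fraction of the document's words found in the topic text."""
    dw, tw = set(doc.lower().split()), set(topic.lower().split())
    return len(dw & tw) / len(dw) if dw else 0.0

topics = {"Semantic Web": "ontology rdf semantic web annotation",
          "Groupware": "collaboration community users groupware"}
print(assign_topic("rdf annotation of web documents", topics, overlap))
# → Semantic Web
```

Any of the vectorial similarities discussed below could be plugged in as the `similarity` argument instead of the toy measure.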

The AM uses text mining techniques that allow the processing of poorly structured textual data through the use of vectorial models (Baeza & Ribeiro, 1999; Chang et al., 2001). These techniques are currently very popular, especially for their use in the automatic indexing of Web contents. In addition, the AM uses language analysis processing (Carreras et al., 2004) appropriate for natural language processing. The application of these techniques in the field of data retrieval is not widespread, because the computing effort involved does not justify the benefits in the most common cases, where some of the texts compared are small and the repository to be dealt with is big -as happens when using conventional search engines on the Web-. However, the situation may be different when comparing larger texts over moderate-sized repositories, which is the case we are concerned with and why we thought it would be a good idea to use this technique (Brants, 2004). There are other alternatives to this approach (Baeza & Ribeiro, 1999) that could be included in the prototype in the future to contrast the results.
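A vectorial model of the kind referred to here is classically built from TF-IDF term weights and compared by cosine similarity; the following is a self-contained sketch of that standard technique, not the AM's actual implementation:

```python
import math
from collections import Counter

def tfidf_vectors(texts):
    """Build sparse TF-IDF term-weight vectors for a small corpus of texts."""
    docs = [t.lower().split() for t in texts]
    df = Counter(term for d in docs for term in set(d))  # document frequency
    n = len(docs)
    vectors = []
    for d in docs:
        tf = Counter(d)
        vectors.append({t: (c / len(d)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

texts = ["knowledge management on the web",
         "semantic web knowledge repository",
         "user community interaction"]
vecs = tfidf_vectors(texts)
print(round(cosine(vecs[0], vecs[1]), 3))  # texts 0 and 1 share terms
print(round(cosine(vecs[0], vecs[2]), 3))  # → 0.0 (no shared terms)
```

With such descriptors, the distance between any two items is simply one minus their cosine similarity, and groups can be formed by clustering on that distance.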

The ultimate aim of the module is to convert the results obtained into something useful for interacting with the system and its contents. For this reason, it is essential to solve the problems related to filtering the information to be shown, typical of recommender systems (Adomavicius & Tuzhilin, 2005), and to data visualization (Geroimenko & Chen, 2002).
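In the simplest case, such filtering can be a threshold plus a top-N cut over the similarity scores the analysis produces; the names and values below are purely illustrative:

```python
def recommend(similarities, threshold=0.2, top_n=3):
    """Keep only relations strong enough to show, at most top_n of them.
    'similarities' maps a related item's id to its similarity score."""
    kept = [(item, s) for item, s in similarities.items() if s >= threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return [item for item, _ in kept[:top_n]]

# Hypothetical scores relating one knowledge item to others in the repository.
related = {"doc-12": 0.81, "doc-7": 0.35, "topic-3": 0.19, "user-2": 0.27}
print(recommend(related))  # → ['doc-12', 'doc-7', 'user-2']
```

Real recommender systems refine this with per-user personalization and diversity criteria, but the threshold-plus-cut step is the core of deciding what is worth showing.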

To check the viability of the proposed approach, several experiments have been performed with four KC nodes in learning activities carried out at the Universidad Autónoma de Madrid (Spain). The experimental results have shown evidence of how to take advantage of latent knowledge to enrich the knowledge base and to facilitate the management task fulfilled by the system, the interaction among its entities, and users' access to the processed contents, among other interesting applications (Moreno-Llorena, 2008; Moreno-Llorena & Alamán, 2005; Moreno-Llorena et al., 2009a, 2009b). The proposed content enrichment seems to provide very powerful support for the automatic exchange of knowledge among knowledge management systems, opening a way to their development in the Semantic Web field (Berners-Lee, 2000).
