**4.1 Web2.0 and its implications**

92 New Research on Knowledge Management Technology

(1) Knowledge representation: The main purpose of knowledge representation changes on the semantic web. Traditionally, web content is formatted for human readers rather than for computer applications. As a result, machines can hardly find, organize, integrate or validate knowledge on the traditional web without human intervention. We have tended to believe that Artificial Intelligence is the only way for machines or applications to manage web data. AI has not, however, made much progress with data management yet, and many knowledge management scholars have therefore placed a higher value on human-centric rather than machine-centric knowledge management. The semantic web, for the first time, makes it easier for machines to manage web knowledge because data are represented in a machine-readable way. The main semantic technologies for representing data are Unicode, XML (Extensible Markup Language), RDF/RDFS (Resource Description Framework / RDF Schema), and OWL (Web Ontology Language) [8].

(2) Knowledge interconnection: Another main purpose of the semantic web is to build a web between data. Today's web is not a web of data, but a web of computers or applications. Knowledge on the current web does not connect with or relate to other knowledge. Uniform Resource Identifiers (URI) and Namespaces (NS) are the most commonly used technologies for building connections between pieces of semantic knowledge.

(3) Knowledge reasoning: The semantic web's strength lies in its ability to reason over knowledge. It is very difficult for today's web to reason about knowledge because it lacks metadata and rules. The semantic web makes knowledge reasoning possible by adding semantic metadata and a rule system to semantic data. With a rule system in place, new knowledge can be inferred and existing knowledge can be validated. A rule system may be monotonic or nonmonotonic. Monotonic rule systems, which are a special case of predicate logic, can be combined with the semantic web through the Semantic Web Rule Language (SWRL) or Description Logic Programs (DLP). Nonmonotonic rules are useful in situations where the available information is incomplete [9]. Through RuleML, nonmonotonic rules can be represented easily, and priorities between these rules can also be added to resolve conflicts.

(4) Knowledge retrieving: Knowledge can be retrieved with high precision on the semantic web. The process of semantic knowledge searching can be divided into the following steps: searching for a semantic web document, and then searching for semantic knowledge within the document found. Intelligent agents and search engines are the most frequently used tools for finding semantic web documents. After a document is located, addressing and querying languages such as XQL, XQuery, XPath, RQL and SPARQL can be used to search further within its parts.

(5) Knowledge validation: As knowledge on the semantic web may be redundant, out-of-date, incorrect, or distorted, the semantic web needs to validate the result set of knowledge retrieval. Knowledge validation in NGKM refers to the process by which new "knowledge claims" are subjected to peer review and a test of value in practice [10]. Validating semantic knowledge can be carried out on the basis of its authenticity and integrity. Digital signatures, encryption, and certificate authority technology are the most prevalent technologies for the semantic web to validate its knowledge.

(6) Knowledge integration: Diverse knowledge that has been validated sometimes needs to be integrated. There are three kinds of knowledge integration technologies, including technologies to integrate knowledge with other knowledge, such as Message-Oriented Middleware (MOM), message brokers or adapters.
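To make items (1), (3) and (4) concrete, the following plain-Python sketch illustrates the same ideas on a tiny scale: knowledge represented as (subject, predicate, object) triples, a single monotonic rule applied by forward chaining, and retrieval by triple-pattern matching. This is only an illustration, not a real RDF/SWRL/SPARQL stack, and all URIs and predicate names in it are hypothetical.

```python
# Minimal sketch (plain Python, not a real RDF stack): knowledge as
# (subject, predicate, object) triples, one monotonic inference rule,
# and a pattern query. All URIs below are hypothetical examples.
EX = "http://example.org/"  # stand-in namespace, analogous to an XML/RDF namespace

triples = {
    (EX + "Alice", EX + "worksFor", EX + "AcmeCorp"),
    (EX + "AcmeCorp", EX + "locatedIn", EX + "Berlin"),
}

def apply_rules(kb):
    """Forward-chain one monotonic rule to a fixed point:
    worksFor(x, y) AND locatedIn(y, z)  =>  worksIn(x, z)."""
    while True:
        new = {
            (x, EX + "worksIn", z)
            for (x, p1, y) in kb
            for (y2, p2, z) in kb
            if p1 == EX + "worksFor" and p2 == EX + "locatedIn" and y == y2
        } - kb
        if not new:
            return kb
        kb |= new

def query(kb, s=None, p=None, o=None):
    """Match a triple pattern; None plays the role of a query variable."""
    return [t for t in kb if (s is None or t[0] == s)
                         and (p is None or t[1] == p)
                         and (o is None or t[2] == o)]

kb = apply_rules(set(triples))
print(query(kb, p=EX + "worksIn"))  # the inferred triple: Alice worksIn Berlin
```

In a real semantic web application the triples would be serialized in RDF, the rule expressed in SWRL or RuleML, and the pattern query written in SPARQL; the mechanics, however, are essentially those shown above.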

The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly and MediaLive International [12]. Web 2.0 is not a new technology, but a shift in the application model of the World Wide Web. The design principles behind Web 2.0 include: the web as platform, harnessing collective intelligence, data is the next "Intel Inside", end of the software release cycle, lightweight programming models, software above the level of a single device, and rich user experience [13]. Typical Web 2.0 applications such as blogs, RSS, wikis, tags, SNS and P2P have been widely used on the existing web. The reasons why Web 2.0 has been so widely accepted are as follows: users create value, networks multiply effects, people build connections, companies capitalize competences, new recombines with old, and businesses incorporate strategies [14].


| Design principles behind Web 2.0 |
| --- |
| The web as platform |
| Harnessing collective intelligence |
| Data is the next "Intel Inside" |
| End of the software release cycle |
| Lightweight programming models |
| Software above the level of a single device |
| Rich user experience |

Table 1. Design principles behind web 2.0

The Semantic Web-Based Collaborative Knowledge Management 95


In contrast to web1.0, web2.0 extends the coverage of knowledge management to the long tail of the organizational knowledge chain, so that it can nurture knowledge ecosystems for contemporary organizations. Some of the key implications of web2.0 for knowledge processing are as follows:

(1) Mass-Collaborative Knowledge Processing: It can easily be inferred from the principles behind web2.0, such as harnessing the wisdom of crowds, lightweight programming models, software above the level of a single device and rich user experience, that knowledge processing activities in web2.0 environments are mass-collaborative. Volunteers located on the long tail of organizational knowledge chains are encouraged to participate in organizational knowledge intervention, which can reduce the cost of knowledge sharing and innovation in organizations.

(2) Self-Organized Knowledge Processing: Web2.0 applications allow users to cooperate with each other rather than being controlled by others. A wiki, for example, allows anyone not only to contribute his or her own knowledge but also to edit the knowledge provided by others. A self-organized knowledge processing platform therefore forms on web2.0 and facilitates ongoing knowledge sharing and innovation activities in organizational knowledge ecosystems.

(3) Meta-Synthetic Knowledge Processing: The three main elements of a knowledge processing system (content, machine and man) can be integrated seamlessly in web2.0 environments. The increasing popularity of Internet technology highlights the limitations of the human brain in the speed, accuracy, strength, storage capacity, storage time and standardization of knowledge processing. As a result, the computer is becoming an alternative tool for knowledge processing. The abilities of the human brain and of computers in knowledge processing are complementary, so man-machine collaborative knowledge processing will be one of the basic models for computing knowledge on the web.

### **4.2 Integrating semantic web with web2.0**

Table 2 presents a comparison between the semantic web and web2.0 from a knowledge processing perspective. There is a growing awareness that integrating the semantic web with web2.0 is reasonable and practical [15]. Web3.0, the third wave to hit the web in the future, should be an integration of the semantic web into web2.0 [16]. The semantic web and web2.0 are complementary [17]; for example, the semantic web can be used for linking and reusing data across Web 2.0 communities [18].

| | Web 2.0 | The Semantic Web |
| --- | --- | --- |
| Platform (similarity) | Internet, especially the World Wide Web | Internet, especially the World Wide Web |
| Purpose (similarity) | To improve the efficiency of knowledge processing | To improve the efficiency of knowledge processing |
| Level (difference) | Applications | Data |
| Unit (difference) | Micro contents | Knowledge atom |
| Focus (difference) | Man | Machine |

Table 2. A comparison between the semantic web and web2.0

According to the seven basic principles of Web 2.0 and the main outcomes of organizational knowledge management, we can propose a novel collaborative knowledge management model using a holistic systems approach. The model has three different layers: knowledge chain management, knowledge base management and knowledge ecosystem management, as shown in Figure 3. These three levels correspond to the three different objectives of organizational knowledge management: accumulating or creating organizational knowledge, mining or utilizing organizational knowledge, and building a knowledge ecosystem for the organization. The theoretical foundation of knowledge chain management is Web2.0-based organizational knowledge management. Knowledge chain management plays the role of knowledge provider for the organizational knowledge base. The management of the knowledge base could in turn promote the further development of knowledge chain management. Knowledge base management is a prerequisite for building a knowledge ecosystem, and a well-developed knowledge ecosystem can provide a better environment for the construction of the knowledge base. Organizational knowledge management should cultivate its knowledge chain, knowledge base and knowledge ecosystem at the same time.

Fig. 3. A Model for Collaborative Knowledge Management on the Semantic Web

#### **5. Implementing collaborative knowledge management on the semantic web**

Figure 4 introduces a new framework that integrates the semantic web with web2.0 to make full use of their mutually complementary natures. The framework consists of the following five layers, from top to bottom: user layer, application layer, computing layer, knowledge layer and networking layer. The building activities of the five layers should also follow the two basic principles of the semantic web layer cake (downward compatibility and upward partial understanding), which have been discussed in the previous section.
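As a rough summary of the stack just described, the five layers and some representative technologies the text associates with them can be sketched as an ordered structure. The layer names come from the text; the technology lists are illustrative examples only, and `layer_below` is a hypothetical helper added for this sketch.

```python
# The five-layer framework, top-down, with representative technologies
# mentioned in the text (illustrative, not exhaustive).
FRAMEWORK = [
    ("user layer",        ["agents", "men", "machines"]),
    ("application layer", ["Wiki", "Blog", "RSS", "IM"]),
    ("computing layer",   ["RuleML", "SPARQL", "P2P", "Ajax", "SOA"]),
    ("knowledge layer",   ["RDF", "OWL"]),
    ("networking layer",  ["Unicode", "URI", "namespaces"]),
]

def layer_below(name):
    """Downward compatibility: each layer builds on the one directly below it."""
    names = [n for n, _ in FRAMEWORK]
    i = names.index(name)
    return names[i + 1] if i + 1 < len(names) else None

print(layer_below("computing layer"))  # knowledge layer
```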


Fig. 4. Knowledge Processing Framework for Integrating the Semantic Web with Web2.0

#### **5.1 Networking layer**

The new framework is built on top of traditional World Wide Web technologies and provides the knowledge layer with data interconnection and transfer services. Some current web technologies, including Unicode, URI and namespaces, form part of the basic structure of this layer.

#### **5.2 Knowledge layer**

The knowledge layer is located between the networking layer and the computing layer and provides the upper layer with machine-readable knowledge representation services. This layer involves two kinds of knowledge: domain knowledge and non-domain knowledge. The latter can be described, retrieved, inferred and validated by the former. Unlike domain knowledge, non-domain knowledge can be maintained through the collaborative efforts of domain experts and grass-roots users. XML-based RDF and OWL are the two most prevalent technologies in the knowledge layer.

#### **5.3 Computing layer**

The computing layer bridges the gap between the knowledge layer and the application layer and is responsible for the knowledge retrieving, inferring, extracting and mining of the whole framework. Its knowledge processing technology therefore depends on the two other layers. Some semantic web knowledge processing technologies, including RuleML, SPARQL and SPARQL Update, are used for knowledge computing in this layer to match its lower layer, and some web2.0 technologies, such as P2P, B/S, Ajax and SOA, are also used to keep it aligned with its upper layer.

#### **5.4 Application layer**

The application layer is the upper layer of the computing layer and provides the user layer with web2.0 application environments, such as wikis, blogs, RSS and IM. This layer combines web2.0 with the semantic web and thereby improves the effects of knowledge processing through man-machine collaboration. At the same time, Application Programming Interface technologies are usually required in this layer to keep it independent of the computing layer.

#### **5.5 User layer**

At the top of the framework is the user layer, which includes agents, men and machines. Men and machines can use the web2.0 applications of the application layer through intelligent agents. Men, machines and agents at the long tail of the organizational knowledge chain are encouraged to take part in knowledge processing activities in order to build organizational knowledge ecosystems which support mass-collaborative, self-organized and meta-synthetic knowledge processing.

#### **5.6 Trust and security**

As shown in Figure 4, trust and security are a common challenge for all the layers and cannot be ignored by any of them. Therefore, a holistic security and trust solution is required in the framework. In general, the trust of the two top layers, the user layer and the application layer, can be implemented through interaction between agents, while the security of the other three layers should use information security technologies, including encryption or certificate authentication.

#### **5.7 Man-machine collaboration**

Man-machine collaborative knowledge processing is one of the most prominent features that the framework inherits from web2.0. It should be put at the top of the list, so as to maximize the complementary advantages of man and machine, when developing or selecting methods and technologies for each layer.

#### **6. Case study**

In this section, we make an in-depth study of applications built on top of the FOAF project and provide insights into semantic web based collaborative knowledge management.

The Friend of a Friend (FOAF) project is about creating a web of machine-readable homepages describing people, the links between them, and the things they create and do [19]. The project accumulates various kinds of data, such as text, photos and records, from real practices and defines relations between different data sources through social relations [20][21][22]. The knowledge life cycle of a typical FOAF application is as follows:

(1) Knowledge representation. Users of these applications can publish their personal information in the FOAF language, an XML-based RDF knowledge representation language introduced by the FOAF project. The language employs classes FOAF:Agent,
