### **12. Making usage the knowledge enabler**

How might we use this cost-reduction approach to group things effectively for users? One approach might be to simply write down items, as suggested before, and wait for natural social processes to normalize usage into common knowledge, but this can take significant time and can lead to an explosion of new items. One would likely need to go back and re-organize the stored information later to eliminate redundancy. All this accomplishes, however, is to delay the inevitable cost of organizing the information in the first place.

While there might be optimal ways of approaching categorization that reduce the cost of knowing everything, this suggests that there is an intrinsic cost to knowing something, associated with what the receiver of the knowledge has decided is an acceptable set of category bundles. This observation is important enough to be worth turning into a hypothesis.

**Hypothesis 5** (Knowledge has a minimum cost)**.** *There is an intrinsic minimum cost to comprehending information that depends on the complexity of the model used to interpret information by the recipient, i.e. the complexity of the recipient's use-promise.*

This hypothesis harks back to Shannon's entropy theorem for intrinsic information, and is almost certainly related to it through the definition of modelling 'alphabets', see Burgess (2004); Shannon & Weaver (1949). As Einstein remarked: everything should be made as simple as possible, but no simpler. The subjectivity of this intrinsic cost might also explain why some people find it harder than others to learn or accept knowledge from certain sources.
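Hypothesis 5 can be illustrated in Shannon's own framework. The sketch below is not part of the original argument, and the symbol frequencies are invented, but it shows how the average cost (in bits per symbol) of decoding a stream depends on the model the recipient uses to interpret it, with Shannon entropy as the intrinsic minimum:

```python
from math import log2

source = {"a": 0.5, "b": 0.25, "c": 0.25}      # actual usage frequencies

good_model = {"a": 0.5, "b": 0.25, "c": 0.25}  # recipient's model matches
poor_model = {"a": 0.1, "b": 0.1, "c": 0.8}    # recipient's model is off

def cost(p, q):
    """Cross-entropy H(p, q): average bits needed to decode symbols
    drawn from p using a code optimized for the model q."""
    return -sum(p[s] * log2(q[s]) for s in p)

# The intrinsic minimum cost is the Shannon entropy H(p, p);
# any mismatched recipient model raises the cost above it.
entropy = cost(source, source)
print(entropy)                       # 1.5 bits per symbol
print(cost(source, poor_model))      # strictly greater than 1.5
```

A recipient whose 'use-promise' (model `q`) matches the source pays only the intrinsic cost; any mismatch adds a subjective surcharge, which is the economic content of the hypothesis.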

through storytelling. It is entirely possible that our brains are wired to support this form of narrative.

Consider what might happen if we looked up Einstein's famous equation *E* = *mc*<sup>2</sup> in a knowledge base. This simple-looking equation is associated with a huge amount of popular culture about Einstein, and most people would immediately think of him, but there is nothing unique or 'copyrighted' about it. A nutrition expert who had never read any physics might use this formula for something quite different, such as "Eating is munching and chewing twice".

Still, one context dominates above all others, and that is physics. Under this context, there are many associated ideas. When we think of *E* = *mc*<sup>2</sup>, we might associate it with the atomic bomb, or nuclear power, or mass-energy conversion, or with a funny photograph of Einstein pulling tongues at the camera.

In a book, one could integrate all of these apparently unrelated meanings from cover to cover, weaving them into a story with a progressing storyline that explains, organizes and blazes a repeatable trail for all these ideas, or one could go directly to look up keywords in an index. The table of contents in a book spans the highlights of the story, whereas the index is a quite different covering of the subjects that pays little attention to the ordering or developments in the narrative, or the totality of the theme in the book.
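The contrast between the two coverings can be made concrete. In the sketch below (chapter titles and keywords are invented for illustration), the table of contents is an ordered sequence in which each entry builds on its predecessors, while the index is an unordered map from keyword to locations, with no storyline at all:

```python
# Two coverings of the same (hypothetical) book's content.

toc = [  # ordered: the storyline, each entry building on earlier context
    "1. Mass and energy",
    "2. Special relativity",
    "3. E = mc^2 and its consequences",
]

index = {  # unordered: random access by keyword, ignoring the narrative
    "energy": [1, 3],
    "relativity": [2],
    "mass-energy equivalence": [3],
}

# Reading the TOC in order follows the story; listing the index
# yields only an alphabetical covering of the same material.
for chapter in toc:
    print(chapter)
print(sorted(index))
```

Current KM technology models something like the `index` structure well; the ordered, context-building `toc` structure is what the storyline argument below says is missing.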

The concept of a story looms large in human culture, but as far as I can tell very little attention has been given to stories in Knowledge Management research. In work with Alva Couch, I have tried to remedy this by exploring a simple notion of stories, including the notion of automated storytelling, by identifying causal associations between topics, see Burgess (2009); Couch & Burgess (2009; 2010). A table of contents, in a book, is a rough outline of the story told by the book at a very high level. It gives a different perspective on a book's content than the index does (yet the index is what current KM technology is almost exclusively focused on). Knowledge technology needs to support the idea of storylines, in which ideas and information build upon the context of earlier information, because this is how humans communicate, see Wolf (2007).

**Definition 20** (Story)**.** *A collection of topics connected together by associations in a causative thread.* Causality (i.e. cause and effect) can be embodied in associative relationships such as 'affects', 'always happens before', 'is a part of', etc. These relationships have a transitivity that most promised associations do not have, and this property allows a kind of automated reasoning that is not possible with arbitrary associations.
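A minimal sketch of this kind of reasoning, with invented topic names and a toy set of promised associations, might look as follows. Transitive relations can be chained into a causative thread (a story), while arbitrary associations cannot:

```python
# Relations the model treats as transitive (chainable into stories)
TRANSITIVE = {"causes", "always happens before", "is a part of"}

# (subject, relation, object) triples promised by some knowledge agents
associations = [
    ("mass-energy equivalence", "causes", "nuclear binding energy release"),
    ("nuclear binding energy release", "causes", "nuclear power"),
    ("E = mc^2", "is similar to", "famous equations"),  # not transitive
]

def story_thread(start, relation, triples):
    """Follow a transitive relation from a starting topic, yielding the
    causal chain of topics: a simple automatically derived 'story'."""
    if relation not in TRANSITIVE:
        raise ValueError(f"'{relation}' does not support chained reasoning")
    chain, topic = [start], start
    while True:
        # next topics reachable by this relation, avoiding cycles
        nxt = [o for s, r, o in triples
               if s == topic and r == relation and o not in chain]
        if not nxt:
            return chain
        topic = nxt[0]
        chain.append(topic)

print(story_thread("mass-energy equivalence", "causes", associations))
```

Following 'causes' yields the three-topic thread from mass-energy equivalence to nuclear power, whereas asking the same question of 'is similar to' is rejected, since an arbitrary association licenses no such chain.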

Automated story generation has been discussed by Alva Couch and myself, see Couch & Burgess (2009; 2010), so I will not repeat the detailed arguments here. Today, there are no semantic knowledge models that are able to model creative narratives by association, or even the ordered tables of contents of books, for that matter! This is an extraordinary omission of a key capability for integrating random access knowledge with documents.

It is worth studying this possibility of deriving new and 'unknown' stories from emergent repositories of knowledge promises. In this way, one could imagine discovering a new story about *E* = *mc*<sup>2</sup> that has never been told before, derived perhaps from the contributions of a swarm of twenty different individuals who were not even thinking about this matter.

Understanding more about the principles of story detection could also have more far-reaching consequences for knowledge than just automated reasoning. In school, not all students find it easy to make their own stories from bare facts, and this could be why some students do better than others in their understanding. We tend to feel we understand something when we can tell its story.

no simpler. The subjectivity of this intrinsic cost might also explain why some people find it harder than others to learn or accept knowledge from certain sources.

The economic perspective we are pursuing here suggests a simple strategy by which an end user can reduce personal cost: short-circuit others' predefined or authoritative categories by recategorizing everything into a set of his or her own.

**Hypothesis 6** (Personal simplification strategy)**.** *Each individual student or recipient of knowledge begins by remapping apparent categories of information used by the source into a personal reduced set of trusted categories, according to their own world view and experience. In this way the cost of lookup, mistrust and unfamiliarity is reduced.*

In modelling terms, we can imagine forming usage-categories called, say, 'virtual bundles of knowledge promises', i.e. virtual roles for the things a user promises to accept, which any knowledge agent is free to edit and manipulate as it sees fit.
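As a minimal sketch of Hypothesis 6 (all category and item names below are invented for illustration), a recipient's 'virtual bundle' can be modelled as a simple remapping from the source's fine-grained categories into a smaller personal set of trusted ones:

```python
# Categories promised by an authoritative source
source_categories = {
    "router":  "Network Infrastructure / Layer 3 Devices",
    "switch":  "Network Infrastructure / Layer 2 Devices",
    "laptop":  "End User Computing / Portable",
    "desktop": "End User Computing / Stationary",
}

# The recipient's reduced, trusted categories (a 'virtual bundle'),
# which the recipient is free to edit and manipulate as it sees fit
personal_map = {
    "Network Infrastructure / Layer 3 Devices": "network gear",
    "Network Infrastructure / Layer 2 Devices": "network gear",
    "End User Computing / Portable":            "computers",
    "End User Computing / Stationary":          "computers",
}

def remap(item):
    """Resolve an item through the source's category, then reduce it
    to the recipient's own trusted category."""
    return personal_map[source_categories[item]]

print(remap("router"), remap("laptop"))
```

The source's four categories collapse to two trusted ones, reducing the recipient's cost of lookup, mistrust and unfamiliarity at the price of the recipient maintaining the mapping.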

More work will be needed to identify what the optimum approach might be in certain circumstances, and this could depend on a number of factors, so I shall leave the subject dangling on this point, as an opportunity for future work.

### **13. How can we be certain about meaning?**

If an ontology is not determined by a standardizing authority, how can we be certain that people will end up understanding one another? The story of the Tower of Babel comes to mind as one advocates tearing down the standard ontologies. In fact, I believe that the underpinning of knowledge by these spanning trees is entirely unnecessary. It is rather up to each and every user to apply such a tree as a filter, if they so desire.

What the promise model underlines is that every agent individually promises only its own intended meaning, and in fact no two agents can truly know whether they mean the same thing. Rather than seeing this as a problem to be forced into submission, it is better to accept it as the nature of reality and deal with the uncertainty. Only an independent third party can determine whether or not two agents *seem* to agree for all intents and purposes. The frequency of use will determine how stable word usage is. Note that the irregular verbs in a language are those that are most frequently used; less frequently used words tend to be normalized into common patterns quickly, to reduce the cost of recall.

The main difference in the emergent approach is the distribution of cost. For the authoritative ontology, the up-front cost of contribution and usage is high, and it assumes expert knowledge. For the emergent context approach, there is no initial cost, but rather one must promise to practice over time to retain meaning. The advantage of a purely linguistic classification is that it is not a separate rehearsal from daily usage. We have little choice but to practice language, so in some ways the overhead is gratis, or at least can be 'charged to a different account'.
