#### **2.1 Semantic naturalism**

"Semantics" involves the **understanding of the relationship between** words, **texts** signals, sentence structure on the one hand **and language-independent, reference, truth and meaning on the other**. **There are** three types of semantics: **Formal, Lexical and Conceptual Semantics. "Semantic Naturalism" is essentially the view that it is possible to have physical understanding of "meaning**.**"**

There is in philosophy a tradition occupied by those who hope or expect to achieve the reduction of semantics and related concepts to respectable **physical ones**. Daniel C. Dennett [3] introduces the concept of the **"intentional stance,"** in which the behavior of a system is understood in terms of its goals, its beliefs, and a principle of rationality: it does what it believes will achieve its goals. As far as we know, the behavior of many systems is **intentional** (i.e. pointing beyond itself, reflecting the capacity of the mind to refer to an existent or nonexistent object).

John Searle [4, 5] argues that mere calculation does not, of itself, evoke conscious mental attributes such as understanding or intentionality; the results of mathematical insight, for example, do not seem to be obtained algorithmically. In his famous "Chinese room argument," Searle claims to demonstrate that a computer can mimic someone who understands Chinese even though it does not know a word of the language. Computers process symbols in ways that simulate human thinking, but they are actually mindless, as they do not have any subjective, conscious experience.

Fred Dretske, in his **Knowledge and the Flow of Information** [6], enunciates the idea of **semantic naturalism**. **"Naturalism"** or a **"naturalistic characterization"** is a tendency to explain everything in terms of **nature**, that is, to look upon nature as the original and fundamental source of all explanation, in this case of "information".

This is also Dretske's first major defense of an **informational theory of content**. The book was instrumental in bringing informational approaches to the attention of mainstream philosophers of mind. Dretske's distinctive claim, in his **communication-theoretic notion of information**, is that a satisfactory semantic concept of information is indeed to be found and may be articulated as a simple extension of **Shannon's theory** of **"information"** (discussed below).

The main thrust of Dretske's approach, which takes advantage of the new physics, is to show how the idea of a non-algorithmic conscious brain is capable of filling the so-called "gap" between **physical** and **semantic (or intentional) concepts**. Extending this idea with the help of the present state of **physical understanding**, that is, by articulating semantic or intentional concepts in *physical terms*, Dretske initiated a new tradition of what we might call **semantic naturalism**. Dretske holds that to **possess "information"** is to have a certain capacity or ability, while for something to **contain "information"** is for it to be in a certain state or to possess certain occurrent categorical properties.

The earliest systematic attempt to understand **semantic content** or **intentional content** (i.e. the **context** in which an utterance is made, or **what a sentence or utterance is about** and how it is **used**) in terms of **"information"** was carried out by Dretske.

Dretske articulates his **notion of information** and defines it in the following way: a state type T carries information of type p if there is a nomological or counterfactual regularity (perhaps a ceteris paribus law) to the effect that if a T occurs, p obtains. So, for example, the height of mercury in a thermometer carries **information** about the ambient temperature. Dretske's idea is to construct the content of beliefs out of the information that they carry under certain circumstances.
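Dretske's condition can be paraphrased probabilistically: T carries the information that p when, as a matter of lawlike regularity, the conditional probability of p given T is 1. A minimal sketch, assuming a purely illustrative (hypothetical) calibration tying mercury height lawfully to temperature:

```python
import random

# Hypothetical sketch: if the thermometer reading depends lawfully on the
# ambient temperature, then a reading of type T ("height >= 86 mm") carries
# the information that p ("temperature >= 20 degrees C"): P(p | T) = 1.

def mercury_height(temp_c):
    """Lawlike (here: deterministic) dependence of the reading on temperature."""
    return 50.0 + 1.8 * temp_c  # illustrative calibration, not real physics

random.seed(0)
trials = [random.uniform(10, 30) for _ in range(10_000)]

# Restrict to occurrences of the state type T, then check how often p obtains.
t_cases = [temp for temp in trials if mercury_height(temp) >= 86.0]
p_given_t = sum(temp >= 20.0 for temp in t_cases) / len(t_cases)

print(f"P(p | T) = {p_given_t}")  # 1.0 under the lawlike dependence
```

Breaking the lawlike dependence (e.g. adding noise to `mercury_height`) would push `P(p | T)` below 1, and on Dretske's definition the reading would no longer carry the information that p.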

Thus, a signal correlated with p will fail to carry the information that p if the correlation is merely accidental or statistical: my thermometer carries information about the temperature of my room and not somebody else's room, even if the two rooms have the same temperature. This is because the state of my thermometer supports counterfactuals about the temperature of my room but not about the temperature of somebody else's room. Hence, it is a true generalization that if the temperature of my room were different, the state of my thermometer would be different. In contrast, it is not generally true that if the temperature of somebody else's room were different, the state of my thermometer would be different.
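The distinction between counterfactual support and mere correlation can be sketched as an intervention on a toy causal model (this is an illustration, not Dretske's own formalism): the thermometer's state responds to a hypothetical intervention on my room's temperature but not to one on the other room's, even though the two temperatures happen to coincide.

```python
# Toy causal structure: my room's temperature causes my thermometer's state;
# the other room's temperature is merely correlated (here: equal) with mine.

def thermometer(my_room_temp):
    # The reading depends causally on my room's temperature only.
    return 50.0 + 1.8 * my_room_temp  # illustrative calibration

my_room, other_room = 22.0, 22.0   # same temperature: perfect correlation
baseline = thermometer(my_room)

# "Had my room's temperature been different..." -> the reading changes.
changed_mine = thermometer(my_room + 5.0) != baseline

# "Had the other room's temperature been different..." -> no change,
# because the other room plays no causal role in producing the reading.
other_room += 5.0
changed_other = thermometer(my_room) != baseline

print(changed_mine, changed_other)  # True False
```

Only the counterfactual-supporting dependence survives intervention, which is why, on Dretske's account, my thermometer carries information about my room alone.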

According to Dretske the engineering aspects of mechanical communication systems are relevant, and he goes on to demonstrate precisely what their relevance is. Dretske's proposal links information theory to the amount of information that an individual event carries about another event or state of affairs. He argues that if a signal is to carry the information that q it must, among other things, carry *as much information* as is generated by the obtaining of the fact that q. He says:

*How, for example, do we calculate the amount of information generated by Edith's playing tennis? … [O]ne needs to know: (1) the alternative possibilities … (2) the associated probabilities … (3) the conditional probabilities … Obviously, in most ordinary communication settings one knows none of this. It is not even very clear whether one could know it. What, after all, are the alternative possibilities to Edith's playing tennis? Presumably there are some things that are possible (e.g., Edith going to the hairdresser instead of playing tennis) and some things that are not possible (e.g., Edith turning into a tennis ball), but how does one begin to catalog these possibilities? If Edith might be jogging, shall we count this as one alternative possibility? Or shall we count it as more than one, since she could be jogging almost anywhere, at a variety of different speeds, in almost any direction? ([6], p. 53)*

There might be problems in specifying absolute amounts of information, but it is comparative amounts of information with which Dretske is concerned: in particular, whether a signal carries as much information as is generated by the occurrence of a specified event, whatever the absolute values might be.
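The comparative reading can be sketched with Shannon's surprisal, I(q) = -log2 P(q), the standard measure of the information generated by q's obtaining. The alternatives and probabilities below are stipulated purely for illustration; as the quoted passage stresses, we usually cannot know them.

```python
import math

# Shannon surprisal: the amount of information (in bits) generated by the
# obtaining of an event with probability p.
def surprisal(p):
    return -math.log2(p)

# Stipulated (hypothetical) alternatives for Edith's afternoon.
alternatives = {"tennis": 0.25, "hairdresser": 0.25, "jogging": 0.5}
assert abs(sum(alternatives.values()) - 1.0) < 1e-12

i_tennis = surprisal(alternatives["tennis"])    # 2.0 bits
i_jogging = surprisal(alternatives["jogging"])  # 1.0 bit

# Dretske's constraint is comparative: a signal carrying the information
# that Edith plays tennis must carry at least i_tennis bits, more than a
# signal reporting the likelier alternative, whatever the absolute values.
print(i_tennis, i_jogging)  # 2.0 1.0
```

The point of the sketch is only the ordering: the less probable the event, the more information its obtaining generates, so the more a signal must carry to count as carrying that information.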

#### **2.2 Criticisms**

The account of communication and information presented thus far apparently fails to provide a **satisfactory naturalized account of meaning or semantics**. An important reason for this is that language, being a **rule-governed** activity, has an essential **normative component** that cannot be captured by any **naturalistic explanation**. The impetus behind this line of thought derives from Wittgenstein's reflections on meaning and rule-following ([7], p. 53).

Secondly, there are circumstances in which we learn beliefs whose carried information is not the belief's content. Take for example a child who learns to token a belief with a content about tigers by seeing pictures of tigers. In such cases her belief states carry information about pictures, in spite of the fact that their content is about tigers. Dretske's account will end up assigning the wrong truth-conditional contents to these beliefs.

Thirdly, Dretske offers a teleological characterization of the relevant state tokens: the informational content that fixes a belief's semantic content is carried by those instances of the belief state that are reinforced by the relevant behavior they produce. Although this is a naturalistic characterization of this class of beliefs, it is debatable whether it assigns appropriate contents. One may easily come up with situations in which a false token of a belief produces the reinforcing behavior.

Finally, informational theories are believed to be the most promising proposals for reconciling naturalism with intentional realism. However, it remains to be shown that there is an informational theory of content that satisfies the constraint, viz. that 'ps cause Ss' is a law (where S has property p as its content). Of course, this does not mean that no informational theory can succeed. It does mean, however, that, so far, appeals to information have not resolved the problem of naturalizing content.
