**Analysis of Fuzzy Logic Models**

#### Beloslav Riečan

*Matej Bel University, Banská Bystrica, and Matematický Ústav SAV, Bratislava, Slovakia*

#### **1. Introduction**



One of the most important results of mathematics in the 20th century is the Kolmogorov model of probability and statistics. It gave many impulses for research and development, in the theoretical area as well as in applications across a large scale of subjects.

It is reasonable to ask why the Kolmogorov approach played such an important role in probability theory and mathematical statistics, disciplines which had already been very successful for many centuries.

Of course, Kolmogorov placed probability and statistics on a new and very effective foundation: set theory. For the first time in history the basic notions of probability theory were defined precisely yet simply. A random event was defined as a subset of a space, a random variable as a measurable function, and its mean value as an integral, more precisely an abstract Lebesgue integral. It is reasonable to expect some new stimuli from the fuzzy generalization of the classical set theory. The aim of the chapter is a presentation of some results of this type.

#### **2. Fuzzy systems and their algebraizations**

Any subset *A* of a given space Ω can be identified with its characteristic function

$$\chi_A : \Omega \to \{0, 1\},$$

where

$$
\chi_A(\omega) = 1,
$$

if *ω* ∈ *A*,

$$
\chi_A(\omega) = 0,
$$

if *ω* ∉ *A*. From the mathematical point of view a fuzzy set is a natural generalization of *χA* (see [73]). It is a function

$$
\varphi_A : \Omega \to [0,1].
$$

Evidently any set (i.e. a two-valued function *χA* : Ω → {0, 1}) is a special case of a fuzzy set (a many-valued function *ϕA* : Ω → [0, 1]).
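As a small numerical illustration (the sets and membership degrees below are invented for the example, not taken from the text), a characteristic function takes only the values 0 and 1, while a membership function may take any intermediate value:

```python
# A crisp set is identified with its characteristic function chi_A : Omega -> {0, 1};
# a fuzzy set replaces it by a membership function phi_A : Omega -> [0, 1].

def chi(A):
    """Characteristic function of a crisp subset A."""
    return lambda omega: 1.0 if omega in A else 0.0

Omega = range(1, 6)
chi_A = chi({3, 4, 5})                                    # crisp: "at least 3"
phi_A = lambda omega: max(0.0, 1.0 - abs(omega - 3) / 2)  # fuzzy: "close to 3"

print([chi_A(w) for w in Omega])  # [0.0, 0.0, 1.0, 1.0, 1.0] -- two-valued
print([phi_A(w) for w in Omega])  # [0.0, 0.5, 1.0, 0.5, 0.0] -- graded
```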


There are many possibilities for characterizations of operations with sets (union *A* ∪ *B* and intersection *A* ∩ *B*). We shall use the so-called Lukasiewicz characterization:

$$\chi_{A \cup B} = (\chi_A + \chi_B) \wedge 1,$$

$$\chi_{A \cap B} = (\chi_A + \chi_B - 1) \vee 0.$$

(Here (*f* ∨ *g*)(*ω*) = max(*f*(*ω*), *g*(*ω*)),(*f* ∧ *g*)(*ω*) = min(*f*(*ω*), *g*(*ω*)).) Hence if *ϕA*, *ϕ<sup>B</sup>* : Ω → [0, 1] are fuzzy sets, then the union (disjunction *ϕ<sup>A</sup>* or *ϕ<sup>B</sup>* of corresponding assertions) can be defined by the formula

$$
\varphi_A \oplus \varphi_B = (\varphi_A + \varphi_B) \wedge 1,
$$

the intersection (conjunction *ϕ<sup>A</sup>* and *ϕ<sup>B</sup>* of corresponding assertions) can be defined by the formula

$$
\varphi_A \odot \varphi_B = (\varphi_A + \varphi_B - 1) \vee 0.
$$
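These pointwise operations are easy to check mechanically; the following sketch (with illustrative sets, not from the text) verifies that on characteristic functions the Lukasiewicz operations reduce to ordinary union and intersection:

```python
# Lukasiewicz operations on fuzzy sets, defined pointwise.
def f_oplus(f, g):
    """Union: (f + g) ∧ 1, pointwise."""
    return lambda w: min(f(w) + g(w), 1.0)

def f_odot(f, g):
    """Intersection: (f + g - 1) ∨ 0, pointwise."""
    return lambda w: max(f(w) + g(w) - 1.0, 0.0)

Omega = range(6)
A, B = {0, 1, 2}, {2, 3}
chi_A = lambda w: 1.0 if w in A else 0.0
chi_B = lambda w: 1.0 if w in B else 0.0

# On {0,1}-valued functions the formulas give back chi of union/intersection.
assert {w for w in Omega if f_oplus(chi_A, chi_B)(w) == 1.0} == A | B
assert {w for w in Omega if f_odot(chi_A, chi_B)(w) == 1.0} == A & B
print("Lukasiewicz operations agree with union/intersection on crisp sets")
```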

In the chapter we shall work with a natural generalization of the notion of fuzzy set, the so-called IF-set (see [1], [2]), which is a pair

$$A = (\mu_A, \nu_A) : \Omega \to [0, 1] \times [0, 1]$$

of fuzzy sets *μA*, *ν<sup>A</sup>* : Ω → [0, 1], where

$$\mu_A + \nu_A \le 1.$$

Evidently a fuzzy set *ϕ<sup>A</sup>* : Ω → [0, 1] can be considered as an IF-set, where

$$
\mu_A = \varphi_A : \Omega \to [0, 1], \quad \nu_A = 1 - \varphi_A : \Omega \to [0, 1].
$$

Here we have

$$\mu_A + \nu_A = 1,$$

while generally it can be *μA*(*ω*) + *νA*(*ω*) < 1 for some *ω* ∈ Ω. Geometrically an IF-set can be regarded as a function *A* : Ω → Δ to the triangle

$$\Delta = \{(u, v) \in \mathbb{R}^2 : 0 \le u, 0 \le v, u + v \le 1\}.$$

A fuzzy set can be considered as a mapping *ϕA* : Ω → *D* to the segment

$$D = \{(u, v) \in \mathbb{R}^2 : u + v = 1, 0 \le u \le 1\}$$

and a classical set as a mapping *ψ* : Ω → *D*<sub>0</sub> from Ω to the two-point set

$$D_0 = \{(0, 1), (1, 0)\}.$$

In the next definition we again use the Lukasiewicz operations.

**Definition 1.1.** By an IF subset of a set Ω we mean a pair *A* = (*μA*, *νA*) of functions

$$
\mu_A : \Omega \to [0,1], \quad \nu_A : \Omega \to [0,1]
$$

such that

$$\mu_A + \nu_A \le 1.$$
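The condition of Definition 1.1 can be checked pointwise; below is a minimal sketch on a finite Ω (the concrete functions are invented for illustration):

```python
# Definition 1.1: an IF-set is a pair (mu, nu) of [0,1]-valued functions
# with mu(w) + nu(w) <= 1 for every w in Omega.
def is_if_set(mu, nu, Omega):
    return all(0.0 <= mu(w) <= 1.0 and 0.0 <= nu(w) <= 1.0
               and mu(w) + nu(w) <= 1.0 for w in Omega)

Omega = range(4)
mu = lambda w: w / 8          # membership degrees (illustrative)
nu = lambda w: (3 - w) / 8    # non-membership degrees; here mu + nu = 3/8 < 1

print(is_if_set(mu, nu, Omega))             # True: a genuine IF-set
print(is_if_set(mu, lambda w: 1.0, Omega))  # False: mu + nu exceeds 1
```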


We call *μA* the membership function, *νA* the non-membership function, and we define the ordering

$$A \le B \Longleftrightarrow \mu_A \le \mu_B, \nu_A \ge \nu_B.$$

If *A* = (*μA*, *νA*), *B* = (*μB*, *νB*) are two IF-sets, then we define

$$A \oplus B = ((\mu_A + \mu_B) \wedge 1, (\nu_A + \nu_B - 1) \vee 0),$$

$$A \odot B = ((\mu_A + \mu_B - 1) \vee 0, (\nu_A + \nu_B) \wedge 1),$$

$$\neg A = (1 - \mu_A, 1 - \nu_A).$$

Denote by F a family of IF sets such that

$$A, B \in \mathcal{F} \Longrightarrow A \oplus B \in \mathcal{F}, A \odot B \in \mathcal{F}, \neg A \in \mathcal{F}.$$

**Example 1.1.** Let F be the set of all fuzzy subsets of a set Ω. If *f* : Ω → [0, 1] then we define

$$A = (f, 1 - f),$$

i.e. *ν<sup>A</sup>* = 1 − *μA*.

**Example 1.2.** Let (Ω, S) be a measurable space, S a *σ*-algebra, F the family of all pairs such that *μ<sup>A</sup>* : Ω → [0, 1], *ν<sup>A</sup>* : Ω → [0, 1] are measurable. Then F is closed under the operations ⊕, ⊙, ¬.

**Example 1.3.** Let (Ω, T ) be a topological space, F the family of all pairs such that *μ<sup>A</sup>* : Ω → [0, 1], *ν<sup>A</sup>* : Ω → [0, 1] are continuous. Then F is closed under the operations ⊕, ⊙, ¬.

**Remark.** Of course, in any case *A* ⊕ *B*, *A* ⊙ *B*, ¬*A* are IF-sets whenever *A*, *B* are IF-sets. E.g.

$$A \oplus B = ((\mu_A + \mu_B) \wedge 1, (\nu_A + \nu_B - 1) \vee 0),$$

hence

$$\begin{aligned}
(\mu_A + \mu_B) \wedge 1 + (\nu_A + \nu_B - 1) \vee 0
&= ((\mu_A + \mu_B) \wedge 1 + (\nu_A + \nu_B - 1)) \vee ((\mu_A + \mu_B) \wedge 1) \\
&= ((\mu_A + \mu_B + \nu_A + \nu_B - 1) \wedge (1 + \nu_A + \nu_B - 1)) \vee ((\mu_A + \mu_B) \wedge 1) \\
&\le ((1 + 1 - 1) \wedge (\nu_A + \nu_B)) \vee ((\mu_A + \mu_B) \wedge 1) \\
&= (1 \wedge (\nu_A + \nu_B)) \vee ((\mu_A + \mu_B) \wedge 1) \\
&\le 1 \vee 1 = 1.
\end{aligned}$$
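The same computation can also be exercised numerically. The sketch below (random sample, invented for illustration) treats the values of *μ* and *ν* at a single point as a pair of numbers and checks that ⊕ and ⊙ do not leave the triangle *μ* + *ν* ≤ 1:

```python
import random

# Values of (mu, nu) at one point, with the IF operations from the text.
def if_oplus(a, b):
    (ma, na), (mb, nb) = a, b
    return (min(ma + mb, 1.0), max(na + nb - 1.0, 0.0))

def if_odot(a, b):
    (ma, na), (mb, nb) = a, b
    return (max(ma + mb - 1.0, 0.0), min(na + nb, 1.0))

def in_triangle(p, tol=1e-12):
    m, n = p
    return 0.0 <= m <= 1.0 and 0.0 <= n <= 1.0 and m + n <= 1.0 + tol

random.seed(0)
for _ in range(1000):
    m1, m2 = random.random(), random.random()
    A = (m1, random.uniform(0.0, 1.0 - m1))   # random point with mu + nu <= 1
    B = (m2, random.uniform(0.0, 1.0 - m2))
    assert in_triangle(if_oplus(A, B)) and in_triangle(if_odot(A, B))
print("A ⊕ B and A ⊙ B stay inside the triangle mu + nu <= 1")
```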

Probably the most important algebraic model of multi-valued logic is an MV-algebra ([48], [49]). MV-algebras play in multi-valued logic a role analogous to the role of Boolean algebras in two-valued logic. Therefore we shall present short information about MV-algebras, and after it we shall prove the main result of the section: the possibility to embed the family of IF-sets into a suitable MV-algebra.

Let us start with a simple example.


**Example 1.4.** Consider the unit interval [0, 1] in the set *R* of all real numbers. It becomes an MV-algebra if we define two binary operations ⊕, ⊙ on [0, 1], one unary operation ¬, and the usual ordering ≤ in the following way:

$$a \oplus b = \min(a + b, 1),$$

$$a \odot b = \max(a + b - 1, 0),$$

$$\neg a = 1 - a.$$

It is easy to imagine that *a* ⊕ *b* corresponds to the disjunction of the assertions *a*, *b*, *a* ⊙ *b* to the conjunction of *a*, *b* and ¬*a* to the negation of *a*.
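Since the operations of Example 1.4 are elementary, their identities can be checked on a grid; the sketch below verifies double negation and the De Morgan-type duality ¬(¬*a* ⊕ ¬*b*) = *a* ⊙ *b* (dyadic grid points are chosen so that the floating-point arithmetic is exact):

```python
# Lukasiewicz MV-algebra on [0, 1] (Example 1.4).
def mv_oplus(a, b): return min(a + b, 1.0)
def mv_odot(a, b):  return max(a + b - 1.0, 0.0)
def mv_neg(a):      return 1.0 - a

grid = [i / 16 for i in range(17)]   # dyadic points: exact in binary floating point
for a in grid:
    assert mv_neg(mv_neg(a)) == a                                   # double negation
    for b in grid:
        assert mv_oplus(a, b) == mv_oplus(b, a)                     # commutativity
        assert mv_neg(mv_oplus(mv_neg(a), mv_neg(b))) == mv_odot(a, b)  # duality
print("MV-algebra identities hold on the grid")
```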

By the Mundici theorem ([48]) any MV-algebra can be defined similarly as in Example 1.4, only the group *R* must be replaced by an arbitrary *l*-group.

**Definition 1.2.** By an *l*-group we consider an algebraic system (*G*, +, ≤) such that

(i) (*G*, +) is an Abelian group,

(ii) (*G*, ≤) is a lattice,

(iii) *a* ≤ *b* =⇒ *a* + *c* ≤ *b* + *c*.

**Definition 1.3.** By an *MV*-algebra we consider an algebraic system (*M*, 0, *u*, ⊕, ⊙) such that *M* = [0, *u*] ⊂ *G*, where (*G*, +, ≤) is an *l*-group, 0 its neutral element, *u* a positive element, and

$$\begin{aligned} a \oplus b &= (a + b) \wedge u, \\ a \odot b &= (a + b - u) \vee 0, \\ \neg a &= u - a. \end{aligned}$$

**Example 1.5.** Let (Ω, S) be a measurable space, S a *σ*-algebra,

$$\begin{aligned} G &= \{ A = (\mu_A, \nu_A); \ \mu_A, \nu_A : \Omega \to \mathbb{R} \}, \\ A + B &= (\mu_A + \mu_B, \nu_A + \nu_B - 1) = (\mu_A + \mu_B, 1 - (1 - \nu_A + 1 - \nu_B)), \\ A \le B &\Longleftrightarrow \mu_A \le \mu_B, \nu_A \ge \nu_B. \end{aligned}$$

Then (*G*, +, ≤) is an *l*-group with the neutral element **0** = (0, 1), *A* − *B* = (*μ<sup>A</sup>* − *μB*, *ν<sup>A</sup>* − *ν<sup>B</sup>* + 1), and the lattice operations

$$A \vee B = (\mu_A \vee \mu_B, \nu_A \wedge \nu_B),$$

$$A \wedge B = (\mu_A \wedge \mu_B, \nu_A \vee \nu_B).$$

Put *u* = (1, 0) and define the MV-algebra

$$M = \{A \in G; (0, 1) = \mathbf{0} \le A \le \mathbf{u} = (1, 0)\},$$

$$\begin{aligned} A \oplus B &= (A + B) \wedge \mathbf{u} = (\mu_A + \mu_B, \nu_A + \nu_B - 1) \wedge (1, 0) \\ &= ((\mu_A + \mu_B) \wedge 1, (\nu_A + \nu_B - 1) \vee 0), \\ A \odot B &= (A + B - \mathbf{u}) \vee (0, 1) = ((\mu_A + \mu_B, \nu_A + \nu_B - 1) - (1, 0)) \vee (0, 1) \\ &= (\mu_A + \mu_B - 1, \nu_A + \nu_B) \vee (0, 1) = ((\mu_A + \mu_B - 1) \vee 0, (\nu_A + \nu_B) \wedge 1), \\ \neg A &= \mathbf{u} - A = (1, 0) - (\mu_A, \nu_A) = (1 - \mu_A, 1 - \nu_A). \end{aligned}$$
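The computation above can be replayed numerically: the MV-operations of Example 1.5, computed through the group structure, reproduce the coordinatewise IF formulas of Section 2. A small sketch (pairs of numbers standing for the values of (*μ*, *ν*) at one point; the values are illustrative):

```python
# l-group of pairs from Example 1.5: addition shifts the second coordinate by -1,
# and the ordering is mu-increasing, nu-decreasing.
U, ZERO = (1.0, 0.0), (0.0, 1.0)

def g_add(A, B): return (A[0] + B[0], A[1] + B[1] - 1.0)
def g_sub(A, B): return (A[0] - B[0], A[1] - B[1] + 1.0)
def g_meet(A, B): return (min(A[0], B[0]), max(A[1], B[1]))  # meet reverses nu
def g_join(A, B): return (max(A[0], B[0]), min(A[1], B[1]))  # join reverses nu

def mv_oplus(A, B): return g_meet(g_add(A, B), U)               # (A + B) ∧ u
def mv_odot(A, B):  return g_join(g_sub(g_add(A, B), U), ZERO)  # (A + B - u) ∨ 0

A, B = (0.75, 0.25), (0.5, 0.25)
# Coordinatewise IF formulas are recovered exactly:
assert mv_oplus(A, B) == (min(0.75 + 0.5, 1.0), max(0.25 + 0.25 - 1.0, 0.0))
assert mv_odot(A, B) == (max(0.75 + 0.5 - 1.0, 0.0), min(0.25 + 0.25, 1.0))
print(mv_oplus(A, B), mv_odot(A, B))  # (1.0, 0.0) (0.25, 0.5)
```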


The connection with the family of IF-sets (Definition 1.1) is evident. Hence we can formulate the main result of the section.

**Theorem 1.1.** Let (Ω, S) be a measurable space and F the family of all IF-sets *A* = (*μA*, *νA*) such that *μA*, *νA* are S-measurable. Then there exists an MV-algebra M such that F ⊂ M, the operations ⊕, ⊙ are extensions of the operations on F, and the ordering ≤ is an extension of the ordering in F.

Proof. Consider the MV-algebra M constructed in Example 1.5. If *A*, *B* ∈ F, then the operations on M coincide with the operations on F. The ordering ≤ is the same.

Theorem 1.1 enables us to use in the space of IF-sets some results of the well developed probability theory on MV-algebras ([66] - [68]). Of course, some methods of the theory can be generalized to so-called D-posets ([28]). The system (*D*, ≤, −, 0, 1) is called a D-poset if (*D*, ≤) is a partially ordered set with the smallest element 0 and the largest element 1, and − is a partial binary operation satisfying the following statements:

1. *b* − *a* is defined if and only if *a* ≤ *b*.

2. *a* ≤ *b* implies *b* − *a* ≤ *b* and *b* − (*b* − *a*) = *a*.

3. *a* ≤ *b* ≤ *c* implies *c* − *b* ≤ *c* − *a* and (*c* − *a*) − (*c* − *b*) = *b* − *a*.
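The interval [0, 1] with the usual subtraction restricted to pairs *a* ≤ *b* is the prototypical example; the sketch below checks axioms 1-3 on a dyadic grid (exact in floating point):

```python
# D-poset structure on [0, 1]: b - a is defined only for a <= b.
def d_sub(b, a):
    if a > b:
        raise ValueError("b - a is undefined unless a <= b")   # axiom 1
    return b - a

grid = [i / 16 for i in range(17)]   # dyadic points: exact arithmetic
for a in grid:
    for b in grid:
        if a <= b:
            assert d_sub(b, a) <= b and d_sub(b, d_sub(b, a)) == a       # axiom 2
            for c in grid:
                if b <= c:
                    assert d_sub(c, b) <= d_sub(c, a)                    # axiom 3
                    assert d_sub(d_sub(c, a), d_sub(c, b)) == d_sub(b, a)
print("D-poset axioms 1-3 hold on the grid")
```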

#### **3. Probability on IF-events**

In IF-events theory an original terminology is used. The main notion is the notion of a state ([21], [22], [57], [58], [61], [62]). It is an analogue of the notion of probability in the Kolmogorov classical theory. As before, F is the family of all IF-sets *A* = (*μA*, *νA*) such that *μA*, *νA* : (Ω, S) → [0, 1] are S-measurable.

**Definition 2.1.** A mapping *m* : F → [0, 1] is called a state if the following properties are satisfied:

(i) *m*(1Ω, 0Ω) = 1, *m*(0Ω, 1Ω) = 0,

(ii) *A* ⊙ *B* = (0Ω, 1Ω) =⇒ *m*(*A* ⊕ *B*) = *m*(*A*) + *m*(*B*),

(iii) *An* ↗ *A* =⇒ *m*(*An*) ↗ *m*(*A*).

Of course, also the notion with the name probability has been introduced in IF-events theory.

**Definition 2.2.** Let J be the family of all compact intervals in the real line, J = {[*a*, *b*]; *a*, *b* ∈ *R*, *a* ≤ *b*}. Probability is a mapping *P* : F → J satisfying the following conditions:


(i) *P*(1Ω, 0Ω) = [1, 1], *P*(0Ω, 1Ω) = [0, 0],

(ii) *A* ⊙ *B* = (0Ω, 1Ω) =⇒ *P*(*A* ⊕ *B*) = *P*(*A*) + *P*(*B*),

(iii) *An* ↗ *A* =⇒ *P*(*An*) ↗ *P*(*A*).

It is easy to see that the following property holds.

**Proposition 2.1.** Let *P* : F → J, *P*(*A*) = [*P*♭(*A*), *P*♯(*A*)]. Then *P* is a probability if and only if *P*♭, *P*♯ : F → [0, 1] are states.

Hence it is sufficient to characterize only the states ([4], [5], [54]).
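To illustrate Proposition 2.1 on a finite Ω (hypothetical data, with the two endpoint states taken in the form given by Theorem 2.1 below, for *α* = 0 and for *α* = 1 with *Q* = *P*):

```python
# Two comparable states on a finite Omega: m_flat (alpha = 0) and m_sharp
# (alpha = 1, Q = P); the pair gives an interval-valued probability
# prob(A) = [m_flat(A), m_sharp(A)]. All values are illustrative.
Omega = [0, 1, 2]
p = {0: 0.5, 1: 0.25, 2: 0.25}          # probability distribution P

def integ(f):
    return sum(f[w] * p[w] for w in Omega)

def m_flat(A):                           # m((mu, nu)) = int mu dP
    mu, nu = A
    return integ(mu)

def m_sharp(A):                          # m((mu, nu)) = 1 - int nu dP
    mu, nu = A
    return 1.0 - integ(nu)

def prob(A):
    return (m_flat(A), m_sharp(A))

A = ({0: 0.25, 1: 0.5, 2: 0.0}, {0: 0.5, 1: 0.25, 2: 0.75})   # an IF-set on Omega
lo, hi = prob(A)
assert lo <= hi    # mu + nu <= 1 forces int mu dP <= 1 - int nu dP
print((lo, hi))    # (0.25, 0.5)
```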

**Theorem 2.1.** For any state *m* : F → [0, 1] there exist probability measures *P*, *Q* : S → [0, 1] and *α* ∈ [0, 1] such that

$$m((\mu_A, \nu_A)) = \int_{\Omega} \mu_A \, dP + \alpha \Big(1 - \int_{\Omega} (\mu_A + \nu_A) \, dQ\Big).$$

Proof. The main instrument in our investigation is the following implication, a corollary of (ii):

$$f, g \in \mathcal{F}, \ f + g \le 1 \implies m(f, g) = m(f, 1 - f) + m(0, f + g). \tag{1}$$

We shall define the mapping *P* : S → [0, 1] by the formula *P*(*A*) = *m*(*χA*, 1 − *χA*). Let *A*, *B* ∈ S, *A* ∩ *B* = ∅. Then *χA* + *χB* ≤ 1, hence (*χA*, 1 − *χA*) ⊙ (*χB*, 1 − *χB*) = (0, 1). Therefore

$$\begin{aligned} P(A) + P(B) &= m(\chi_A, 1 - \chi_A) + m(\chi_B, 1 - \chi_B) \\ &= m((\chi_A, 1 - \chi_A) \oplus (\chi_B, 1 - \chi_B)) \\ &= m(\chi_A + \chi_B, 1 - \chi_A - \chi_B) = m(\chi_{A \cup B}, 1 - \chi_{A \cup B}) = P(A \cup B). \end{aligned}$$

Let *An* ∈ S (*n* = 1, 2, ...), *An* ↗ *A*. Then

$$\chi_{A_n} \nearrow \chi_A, \quad 1 - \chi_{A_n} \searrow 1 - \chi_A,$$

hence by (iii)

$$P(A_n) = m(\chi_{A_n}, 1 - \chi_{A_n}) \nearrow m(\chi_A, 1 - \chi_A) = P(A).$$

Evidently *P*(Ω) = *m*(*χ*Ω, 1 − *χ*Ω) = *m*((1, 0)) = 1, hence *P* : S → [0, 1] is a probability measure.

Now we prove two identities. First the implication:

$$A_1, \dots, A_n \in \mathcal{S}, \ \alpha_1, \dots, \alpha_n \in [0, 1], \ A_i \cap A_j = \emptyset \ (i \neq j) \implies$$

$$m\Big(\sum_{i=1}^n \alpha_i \chi_{A_i}, 1 - \sum_{i=1}^n \alpha_i \chi_{A_i}\Big) = \sum_{i=1}^n m(\alpha_i \chi_{A_i}, 1 - \alpha_i \chi_{A_i}). \tag{2}$$

It can be proved by induction. The second identity is the following

$$0 \le \alpha, \beta \le 1 \implies m(\alpha\beta\chi_A, 1 - \alpha\beta\chi_A) = \alpha m(\beta\chi_A, 1 - \beta\chi_A). \tag{3}$$


First, one proves by induction the equality

$$qm\Big(\frac{1}{q}\beta\chi_A, 1 - \frac{1}{q}\beta\chi_A\Big) = m(\beta\chi_A, 1 - \beta\chi_A),$$

holding for every *q* ∈ *N*. Therefore

$$m\Big(\frac{1}{q}\beta\chi_A, 1 - \frac{1}{q}\beta\chi_A\Big) = \frac{1}{q}m(\beta\chi_A, 1 - \beta\chi_A),$$

and

$$m\Big(\frac{p}{q}\beta\chi_A, 1 - \frac{p}{q}\beta\chi_A\Big) = \frac{p}{q}m(\beta\chi_A, 1 - \beta\chi_A),$$

hence (3) holds for every rational *α*. Let *α* ∈ *R*, *α* ∈ [0, 1]. Take *αn* ∈ *Q*, *αn* ↗ *α*. Then

$$
\alpha_n \chi_A \nearrow \alpha\chi_A, \quad 1 - \alpha_n \chi_A \searrow 1 - \alpha\chi_A.
$$

Therefore

$$m(\alpha\beta\chi_A, 1 - \alpha\beta\chi_A) = \lim_{n\to\infty} m(\alpha_n\beta\chi_A, 1 - \alpha_n\beta\chi_A) = \lim_{n\to\infty} \alpha_n m(\beta\chi_A, 1 - \beta\chi_A) = \alpha m(\beta\chi_A, 1 - \beta\chi_A),$$

hence (3) is proved, too. In particular, if we put *β* = 1, then

$$m(\alpha\chi_A, 1 - \alpha\chi_A) = \alpha m(\chi_A, 1 - \chi_A).$$

Let *f* : Ω → [0, 1] be simple, S-measurable, i.e.

$$f = \sum_{i=1}^{n} \alpha_i \chi_{A_i}, \quad A_i \in \mathcal{S} \ (i = 1, \dots, n), \ A_i \cap A_j = \emptyset \ (i \neq j).$$

Combining (2), (3), and the definition of *P* we obtain

$$m(f, 1 - f) = \sum_{i=1}^{n} m(\alpha_i \chi_{A_i}, 1 - \alpha_i \chi_{A_i}) = \sum_{i=1}^{n} \alpha_i m(\chi_{A_i}, 1 - \chi_{A_i}) = \sum_{i=1}^{n} \alpha_i P(A_i) = \int_{\Omega} f \, dP,$$

hence

$$m(f, 1 - f) = \int_{\Omega} f \, dP,$$

for any simple *f* : Ω → [0, 1]. If *f* : Ω → [0, 1] is an arbitrary S-measurable function, then there exists a sequence (*fn*) of simple measurable functions such that *fn* ↗ *f*. Evidently, 1 − *fn* ↘ 1 − *f*. Therefore

$$m(f, 1 - f) = \lim_{n \to \infty} m(f_n, 1 - f_n) = \lim_{n \to \infty} \int_{\Omega} f_n \, dP = \int_{\Omega} f \, dP,$$


$$m(f, 1 - f) = \int\_{\Omega} f\,dP \tag{4}$$

for any measurable *f* : Ω → [0, 1].
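On a finite Ω the abstract Lebesgue integral reduces to a weighted sum, so the computation above can be checked numerically. A minimal sketch (the space, the sets *A1*, *A2* and the coefficients are illustrative assumptions, not taken from the text):

```python
# Hypothetical finite sample space: the abstract Lebesgue integral is a dot product.
omega = range(6)
P = [0.1, 0.2, 0.1, 0.3, 0.2, 0.1]   # probabilities of the points, sum = 1

A1, A2 = {0, 1}, {3, 4}              # disjoint sets A_i in S
alpha1, alpha2 = 0.5, 0.8            # coefficients alpha_i in [0, 1]

# simple function f = alpha1*chi_{A1} + alpha2*chi_{A2}
f = [alpha1 * (w in A1) + alpha2 * (w in A2) for w in omega]

# sum_i alpha_i P(A_i) coincides with the integral of f with respect to P
lhs = alpha1 * sum(P[w] for w in A1) + alpha2 * sum(P[w] for w in A2)
rhs = sum(f[w] * P[w] for w in omega)
assert abs(lhs - rhs) < 1e-12
```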

Now let us turn our attention to the second term *m*(0, *f* + *g*) on the right-hand side of equality (1). First define *M* : S → [0, 1] by the formula

$$M(A) = m(0, 1 - \chi\_A).$$

As before it is possible to prove that *M* is a measure. Of course,

$$M(\Omega) = m(0,0) = \alpha \in [0,1].$$

Define *Q* : S → [0, 1] by the formula

$$m(0, 1 - \chi\_A) = \alpha Q(A).$$

As before, it is possible to prove

$$m(0, 1 - f) = \alpha \int\_{\Omega} f\,dQ,$$

for any *f* : Ω → [0, 1] measurable, or

$$m(0,h) = \alpha \int\_{\Omega} (1-h)\,dQ, \tag{5}$$

for any *h* : Ω → [0, 1], S-measurable. Combining (1), (4), and (5) we obtain

$$m(A) = m((\mu\_A, \nu\_A)) = m((\mu\_A, 1 - \mu\_A)) + m((0, \mu\_A + \nu\_A)) =$$

$$= \int\_{\Omega} \mu\_A\,dP + \alpha\Bigl(1 - \int\_{\Omega} (\mu\_A + \nu\_A)\,dQ\Bigr).$$
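The representation just obtained is easy to experiment with when Ω is finite. The sketch below (P, Q and α are illustrative assumptions, not derived from a particular state) verifies the boundary values *m*((1, 0)) = 1 and *m*((0, 1)) = 0:

```python
# Illustrative finite-Omega data; P, Q and alpha are assumptions of the sketch.
P = [0.2, 0.3, 0.5]
Q = [0.4, 0.4, 0.2]
alpha = 0.3

def integral(f, measure):
    return sum(fi * mi for fi, mi in zip(f, measure))

def m(mu, nu):
    # m((mu, nu)) = integral of mu dP + alpha * (1 - integral of (mu + nu) dQ)
    return integral(mu, P) + alpha * (1 - integral([a + b for a, b in zip(mu, nu)], Q))

one, zero = [1.0] * 3, [0.0] * 3
assert abs(m(one, zero) - 1) < 1e-12   # m((1, 0)) = 1
assert abs(m(zero, one)) < 1e-12       # m((0, 1)) = 0
```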

A simple consequence of the representation theorem is the following property of the mapping *P* − *αQ* : S → *R*.

**Proposition 2.2.** Let *P*, *Q* : S → [0, 1] be the probabilities mentioned in Theorem 2.1 and *α* the corresponding constant. Then

$$P(A) - \alpha Q(A) \ge 0$$

for any *A* ∈ S.

Proof. Put *B* = (0, 0), *C* = (*χA*, 0). Then *B* ≤ *C*, hence *m*(0, 0) ≤ *m*(*χA*, 0). Therefore

$$\alpha = m(0,0) \le m(\chi\_A, 0) = P(A) + \alpha(1 - Q(A)),$$

hence *P*(*A*) − *αQ*(*A*) ≥ 0.

Theorem 1.1 is an embedding theorem stating that every IF-events algebra F can be embedded into an MV-algebra M. Now we shall prove that any state *m* : F → [0, 1] can be extended to a state *m* : M → [0, 1] ([63]).

**Theorem 2.2.** Let M⊃F be the MV-algebra constructed in Theorem 1.1. Then every state *m* : F → [0, 1] can be extended to a state *m* : M → [0, 1].


Proof. It is easy to see that any element (*μA*, *νA*) ∈ M can be presented in the form

$$(\mu\_A, \nu\_A) \odot (0, 1 - \nu\_A) = (0,1),$$

$$(\mu\_A, 0) = (\mu\_A, \nu\_A) \oplus (0, 1 - \nu\_A).$$

If (*μA*, *νA*) ∈ F, then

$$m((\mu\_A, 0)) = m((\mu\_A, \nu\_A)) + m((0, 1 - \nu\_A)).$$

Generally, we can define *m* : M → [0, 1] by the formula

$$\overline{m}((\mu\_A, \nu\_A)) = m((\mu\_A, 0)) - m((0, 1 - \nu\_A)),$$

so that *m* : M → [0, 1] is an extension of *m* : F → [0, 1]. Of course, we must prove that *m* is a state. First we prove that *m* is additive.

Let *A* = (*μA*, *νA*) ∈ M, *B* = (*μB*, *νB*) ∈ M, *A* ⊙ *B* = (0, 1), hence

$$((\mu\_A + \mu\_B - 1) \vee 0, (\nu\_A + \nu\_B) \wedge 1) = (0,1),$$

i.e.

$$\mu\_A + \mu\_B \le 1, \quad 1 - \nu\_A + 1 - \nu\_B \le 1.$$

Therefore

$$\overline{m}(A) + \overline{m}(B) = \overline{m}(\mu\_A, \nu\_A) + \overline{m}(\mu\_B, \nu\_B) =$$

$$= m(\mu\_A, 0) - m(0, 1 - \nu\_A) + m(\mu\_B, 0) - m(0, 1 - \nu\_B) =$$

$$= m(\mu\_A + \mu\_B, 0) - m(0, 1 - \nu\_A - \nu\_B) =$$

$$= m(\mu\_A + \mu\_B, \nu\_A + \nu\_B) = \overline{m}(A \oplus B).$$

Before proving the continuity of *m* we shall prove its monotonicity. Let *A* ≤ *B*, i.e. *μA* ≤ *μB*, *νA* ≥ *νB*. Then by Theorem 2.1

$$\overline{m}(A) = m(\mu\_A, 0) - m(0, 1 - \nu\_A) =$$

$$= \int\_{\Omega} \mu\_A\,dP + \alpha\Bigl(1 - \int\_{\Omega} (\mu\_A + 0)\,dQ\Bigr) - \int\_{\Omega} 0\,dP - \alpha\Bigl(1 - \int\_{\Omega} (0 + 1 - \nu\_A)\,dQ\Bigr) =$$

$$= \int\_{\Omega} \mu\_A\,dP + \alpha\Bigl(1 - \int\_{\Omega} (\mu\_A + \nu\_A)\,dQ\Bigr).$$

Therefore

$$\overline{m}(B) - \overline{m}(A) = \int\_{\Omega} \mu\_B\,dP + \alpha - \alpha \int\_{\Omega} \mu\_B\,dQ - \alpha \int\_{\Omega} \nu\_B\,dQ -$$

$$- \Bigl(\int\_{\Omega} \mu\_A\,dP + \alpha - \alpha \int\_{\Omega} \mu\_A\,dQ - \alpha \int\_{\Omega} \nu\_A\,dQ\Bigr) =$$

$$= \int\_{\Omega} (\mu\_B - \mu\_A)\,dP - \alpha \int\_{\Omega} (\mu\_B - \mu\_A)\,dQ + \alpha \int\_{\Omega} (\nu\_A - \nu\_B)\,dQ.$$

Of course, as an easy consequence of Proposition 2.1 we obtain the inequality

$$\int\_{\Omega} fdP - \alpha \int\_{\Omega} fdQ \ge 0$$


for any non-negative measurable *f* : Ω → *R*. Therefore

$$\overline{m}(B) - \overline{m}(A) = \int\_{\Omega} f\,dP - \alpha \int\_{\Omega} f\,dQ + \alpha \int\_{\Omega} (\nu\_A - \nu\_B)\,dQ \ge 0,$$

where *f* = *μB* − *μA* ≥ 0.

Finally let *An* = (*μAn*, *νAn*) ∈ M, *A* = (*μA*, *νA*) ∈ M, *An* ↗ *A*, i.e. *μAn* ↗ *μA*, *νAn* ↘ *νA*. We have

$$\overline{m}(A\_n) = \int\_{\Omega} \mu\_{A\_n}\,dP - \alpha \int\_{\Omega} \mu\_{A\_n}\,dQ + \alpha - \alpha \int\_{\Omega} \nu\_{A\_n}\,dQ \nearrow$$

$$\nearrow \int\_{\Omega} \mu\_A\,dP - \alpha \int\_{\Omega} \mu\_A\,dQ + \alpha - \alpha \int\_{\Omega} \nu\_A\,dQ = \overline{m}(A).$$
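The extension argument can be traced numerically on a finite Ω. The following sketch (P, Q, α and the IF-events are illustrative assumptions) defines *m* by the formula of the proof and checks additivity on a pair with *A* ⊙ *B* = (0, 1), taking *A* ⊕ *B* = ((*μA* + *μB*) ∧ 1, (*νA* + *νB* − 1) ∨ 0):

```python
# Illustrative P, Q and alpha (assumptions of the sketch, not derived from a state).
P = [0.25, 0.25, 0.5]
Q = [0.5, 0.3, 0.2]
alpha = 0.4

def integral(f, measure):
    return sum(a * b for a, b in zip(f, measure))

def m(mu, nu):
    # the representation of Theorem 2.1
    return integral(mu, P) + alpha * (1 - integral([a + b for a, b in zip(mu, nu)], Q))

def m_bar(mu, nu):
    # the extension: m_bar((mu, nu)) = m((mu, 0)) - m((0, 1 - nu))
    n = len(mu)
    return m(mu, [0.0] * n) - m([0.0] * n, [1 - v for v in nu])

muA, nuA = [0.2, 0.3, 0.1], [0.7, 0.5, 0.8]
muB, nuB = [0.3, 0.2, 0.4], [0.4, 0.6, 0.5]   # mu_A+mu_B <= 1 and nu_A+nu_B >= 1

# A + B in the MV-algebra: ((mu_A + mu_B) ∧ 1, (nu_A + nu_B - 1) ∨ 0)
muS = [min(a + b, 1.0) for a, b in zip(muA, muB)]
nuS = [max(a + b - 1.0, 0.0) for a, b in zip(nuA, nuB)]

assert abs(m_bar(muA, nuA) + m_bar(muB, nuB) - m_bar(muS, nuS)) < 1e-12
```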

#### **4. Observables**

In the classical probability there are three main notions:

probability = measure

random variable = measurable function

mean value = integral.

The first notion has been studied in the previous section. Now we shall define the second two notions.

Classically, a random variable is a function *ξ* : (Ω, S) → *R* such that *ξ*−1(*A*) ∈ S for any Borel set *A* ∈ B(*R*) (here B(*R*) = *σ*(J ) is the *σ*-algebra generated by the family J of all intervals). Now instead of a *σ*-algebra S we have the family F of all IF-events, hence we must assign to any Borel set *A* an element of F. Of course, instead of random variable we shall use the term observable ([15], [16], [18], [32], [35]).

**Definition 3.1.** An observable is a mapping

$$x : \sigma(\mathcal{J}) \to \mathcal{F}$$

satisfying the following conditions:

(i)

$$x(R) = (1,0), \quad x(\emptyset) = (0,1),$$

(ii)

$$A \cap B = \emptyset \Longrightarrow x(A) \odot x(B) = (0,1),\ x(A \cup B) = x(A) \oplus x(B),$$

(iii)

$$A\_n \nearrow A \Longrightarrow x(A\_n) \nearrow x(A).$$

**Proposition 3.1.** If *x* : *σ*(J ) → F is an observable, and *m* : F → [0, 1] is a state, then

$$m\_x = m \circ x : \sigma(\mathcal{J}) \to [0,1]$$

defined by

$$m\_x(A) = m(x(A))$$

is a probability measure.

Proof. First


$$m\_x(R) = m(x(R)) = m((1,0)) = 1.$$

If *A* ∩ *B* = ∅, then *x*(*A*) ⊙ *x*(*B*) = (0, 1), hence

$$m\_x(A \cup B) = m(x(A \cup B)) = m(x(A) \oplus x(B)) =$$

$$= m(x(A)) + m(x(B)) = m\_x(A) + m\_x(B).$$

Finally, *An* ↗ *A* implies *x*(*An*) ↗ *x*(*A*), hence

$$m\_x(A\_n) = m(x(A\_n)) \nearrow m(x(A)) = m\_x(A).$$

**Proposition 3.2.** Let *x* : *σ*(J ) → F be an observable, *m* : F → [0, 1] be a state. Define *F* : *R* → [0, 1] by the formula

$$F(u) = m(x((-\infty, u))).$$

Then *F* is non-decreasing, left continuous in any point *u* ∈ *R*,

$$\lim\_{u \to \infty} F(u) = 1, \quad \lim\_{u \to -\infty} F(u) = 0.$$

Proof. If *u* < *v*, then

$$x((-\infty, v)) = x((-\infty, u)) \oplus x((u, v)) \ge x((-\infty, u)),$$

hence

$$F(v) = m(x((-\infty, v))) \ge m(x((-\infty, u))) = F(u),$$

so *F* is non-decreasing. If *un* ↗ *u*, then

$$x((-\infty, u\_n)) \nearrow x((-\infty, u)),$$

hence

$$F(u\_n) = m(x((-\infty, u\_n))) \nearrow m(x((-\infty, u))) = F(u),$$

so *F* is left continuous at any *u* ∈ *R*. Similarly *un* ↗ ∞ implies

$$x((-\infty, u\_n)) \nearrow x((-\infty, \infty)) = (1,0).$$

Therefore

$$F(u\_n) = m(x((-\infty, u\_n))) \nearrow m((1,0)) = 1$$

for every *un* ↗ ∞, hence lim*u*→∞ *F*(*u*) = 1. Similarly we obtain

$$u\_n \searrow -\infty \Longrightarrow -u\_n \nearrow \infty,$$

hence

$$m(x((u\_n, -u\_n))) \nearrow m(x(R)) = 1.$$

Now

$$1 = \lim\_{n \to \infty} F(-u\_n) = \lim\_{n \to \infty} m(x((u\_n, -u\_n))) + \lim\_{n \to \infty} F(u\_n) =$$

$$= 1 + \lim\_{n \to \infty} F(u\_n),$$


hence lim*n*→∞ *F*(*un*) = 0 for any *un* ↘ −∞.
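For a concrete discrete distribution function the properties of Proposition 3.2 can be observed directly; note that the strict inequality *xi* < *u* in the definition below is what produces left continuity (the values and weights are illustrative):

```python
# A discrete distribution function F(u) = sum of p_i over x_i < u (illustrative data).
xs = [1.0, 2.0, 4.0]
ps = [0.2, 0.5, 0.3]

def F(u):
    return sum(p for x, p in zip(xs, ps) if x < u)

grid = [i / 10 for i in range(-50, 100)]
assert all(F(u) <= F(v) for u, v in zip(grid, grid[1:]))   # non-decreasing
assert abs(F(100.0) - 1.0) < 1e-12 and F(-100.0) == 0.0    # limit values
assert F(2.0) == F(2.0 - 1e-9)                             # left continuous at a jump point
```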

Of course, we must describe also the random vector *T* = (*ξ*, *η*) : Ω → *R*2. We have

$$T^{-1}(C \times D) = \xi^{-1}(C) \cap \eta^{-1}(D).$$

In the IF case we shall use product of functions instead of intersection of sets ([47], [56], [68]).

**Definition 3.2.** The product *A*.*B* of two IF-events *A* = (*μA*, *νA*), *B* = (*μB*, *νB*) is the IF set

*A*.*B* = (*μA*.*μB*, 1 − (1 − *νA*).(1 − *νB*)) = (*μA*.*μB*, *ν<sup>A</sup>* + *ν<sup>B</sup>* − *νA*.*νB*).
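Evaluated pointwise at a fixed ω (with illustrative membership values), the two stated forms of the ν-component coincide and the product is again an IF-event:

```python
def product(A, B):
    """Product of IF-events from Definition 3.2, evaluated at a fixed omega."""
    (muA, nuA), (muB, nuB) = A, B
    return (muA * muB, nuA + nuB - nuA * nuB)

muA, nuA = 0.6, 0.3   # illustrative values with mu + nu <= 1
muB, nuB = 0.5, 0.4

mu, nu = product((muA, nuA), (muB, nuB))
assert abs(nu - (1 - (1 - nuA) * (1 - nuB))) < 1e-12   # both stated forms agree
assert mu + nu <= 1                                     # the result is an IF-event
```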

**Definition 3.3.** Let *x*1, ..., *xn* : *σ*(J ) → F be observables. By the joint observable of *x*1, ..., *xn* we consider a mapping *h* : *σ*(J *n*) → F (J *n* being the set of all intervals of *Rn*) satisfying the following conditions:

(i) *h*(*Rn*) = (1, 0),

(ii) *A* ∩ *B* = ∅ =⇒ *h*(*A*) ⊙ *h*(*B*) = (0, 1), and *h*(*A* ∪ *B*) = *h*(*A*) ⊕ *h*(*B*),

(iii) *An* ↗ *A* =⇒ *h*(*An*) ↗ *h*(*A*),

(iv) *h*(*C*1 × *C*2 × ... × *Cn*) = *x*1(*C*1).*x*2(*C*2).....*xn*(*Cn*), for any *C*1, *C*2, ..., *Cn* ∈ J .

**Theorem 3.1.** ([63]) For any observables *x*1, ..., *xn* : *σ*(J ) → F there exists their joint observable *<sup>h</sup>* : *<sup>σ</sup>*(<sup>J</sup> *<sup>n</sup>*) → F.

Proof. We shall prove it for *n* = 2. Consider two observables *x*, *y* : *σ*(J ) → F. Since *x*(*A*) ∈ F, we shall write

$$x(A) = (x^\flat(A), 1 - x^\sharp(A))$$

and similarly

$$y(B) = (y^\flat(B), 1 - y^\sharp(B)).$$

By the definition of product we obtain

$$x(C).y(D) = (x^\flat(C).y^\flat(D), 1 - x^\sharp(C).y^\sharp(D)).$$

Therefore we shall construct *h* similarly in the form

$$h(K) = (h^\flat(K), 1 - h^\sharp(K)).$$

Fix *ω* ∈ Ω and define *μ*, *ν* : *σ*(J ) → [0, 1] by

$$
\mu(A) = \mathfrak{x}^\flat(A)(\omega), \nu(B) = \mathfrak{y}^\flat(B)(\omega).
$$

Let *μ* × *ν* be the product of the probability measures *μ*, *ν*. Put

$$h^\flat(K)(\omega) = \mu \times \nu(K).$$

Then

$$h^\flat(C \times D)(\omega) = \mu(C).\nu(D) = x^\flat(C).y^\flat(D)(\omega),$$

hence


*h* (*<sup>C</sup>* <sup>×</sup> *<sup>D</sup>*) = *<sup>x</sup>* (*C*).*y* (*D*).

Analogously

$$h^\sharp(\mathbb{C} \times D) = \mathfrak{x}^\sharp(\mathbb{C}). y^\sharp(D).$$

If we define

$$h(A) = (h^\flat(A), 1 - h^\sharp(A)), A \in \sigma(\mathcal{J}^2),$$

then

$$h(\mathbb{C} \times D) = (\mathfrak{x}^\flat(\mathbb{C}). \mathfrak{y}^\flat(D), 1 - \mathfrak{x}^\sharp(\mathbb{C}). \mathfrak{y}^\sharp(D)) = \mathfrak{x}(\mathbb{C}). \mathfrak{y}(D).$$
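The construction rests on the rectangle property of the product measure, (μ × ν)(C × D) = μ(C).ν(D); a finite sketch with two hypothetical two-point measures:

```python
# Two finite (hypothetical) probability measures and the rectangle rule.
mu = {0: 0.3, 1: 0.7}
nu = {0: 0.6, 1: 0.4}

C, D = {1}, {0, 1}
prod_CD = sum(mu[c] * nu[d] for c in C for d in D)      # (mu x nu)(C x D)
assert abs(prod_CD - sum(mu[c] for c in C) * sum(nu[d] for d in D)) < 1e-12
```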

Now we shall present two applications of the notion of the joint observable. The first is the definition of function of a finite sequence of observables, e.g. their sum. In the classical case

$$\xi + \eta = g \circ T : \Omega \to R$$

where *g*(*u*, *v*) = *u* + *v*, *T*(*ω*) = (*ξ*(*ω*), *η*(*ω*)). Hence *ξ* + *η* can be defined with the help of pre-images:

$$(\xi + \eta)^{-1} = T^{-1} \circ g^{-1} : \mathcal{B}(R) \to \mathcal{S}.$$

**Definition 3.4.** Let *x*1, ..., *xn* : B(*R*) → F be observables, *g* : *Rn* → *R* be a measurable function. Then we define

$$g(\mathfrak{x}\_1, \dots, \mathfrak{x}\_n) : \mathcal{B}(R) \to \mathcal{F}$$

by the formula

$$g(x\_1, \dots, x\_n)(C) = h(g^{-1}(C)), \quad C \in \mathcal{B}(R),$$

where *<sup>h</sup>* : <sup>B</sup>(*Rn*) → F is the joint observable of the observables *<sup>x</sup>*1, ..., *xn*.

**Example 3.1.** *x*<sup>1</sup> + ... + *xn* : B(*R*) → F is the observable defined by the formula (*x*<sup>1</sup> + ... + *xn*)(*C*) = *<sup>h</sup>*(*g*−1(*C*)), where *<sup>h</sup>* : <sup>B</sup>(*Rn*) → F is the joint observable of *<sup>x</sup>*1, ..., *xn*, and *<sup>g</sup>* : *<sup>R</sup><sup>n</sup>* <sup>→</sup> *<sup>R</sup>* is defined by the equality *g*(*u*1, ..., *un*) = *u*<sup>1</sup> + ... + *un*.

The second application of the joint observable is in the formulation of independence.

**Definition 3.5.** Let *m* : F → [0, 1] be a state, (*xn*)*n* be a sequence of observables, and *hn* : *σ*(J *n*) → F be the joint observable of *x*1, ..., *xn* (*n* = 1, 2, ...). Then (*xn*)*n* is called independent, if

$$m(h\_{\mathfrak{n}}(\mathbb{C}\_1 \times \mathbb{C}\_2 \times \dots \times \mathbb{C}\_n)) = m(\mathfrak{x}\_1(\mathbb{C}\_1)).m(\mathfrak{x}\_2(\mathbb{C}\_2))...m(\mathfrak{x}\_n(\mathbb{C}\_n))$$

for any *n* ∈ *N* and any *C*1, ..., *Cn* ∈ *σ*(J ).

Now let us return to the notion of mean value of an observable. In the classical case

$$E(g \circ \xi) = \int\_{\Omega} g \circ \xi\,dP = \int\_{R} g\,dF,$$

where *F* is the distribution function of *ξ*.

**Definition 3.6.** Let *x* : B(*R*) → F be an observable, *m* : F → [0, 1] be a state, *g* : *R* → *R* be a measurable function, *F* be the distribution function of *x* (*F*(*t*) = *m*(*x*((−∞, *t*)))). Then we


define the mean value *E*(*g* ◦ *x*) by the formula

$$E(\mathbf{g}\circ\mathbf{x}) = \int\_{R} \mathbf{g}dF$$

if the integral exists.

**Example 3.2.** Let *x* be discrete, i.e. there exist *xi* ∈ *R*, *pi* ∈ (0, 1], *i* = 1, ..., *k* such that

$$F(t) = \sum\_{\mathbf{x}\_i < t} p\_i.$$

Then

$$E(\mathbf{x}) = \int\_{\mathcal{R}} t dF(t) = \sum\_{i=1}^{k} \mathbf{x}\_i p\_i.$$
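Example 3.2 in numbers, for an illustrative three-point distribution:

```python
xs = [1.0, 2.0, 4.0]   # values x_i
ps = [0.2, 0.5, 0.3]   # weights p_i

E = sum(x * p for x, p in zip(xs, ps))   # E(x) = sum of x_i * p_i
assert abs(E - 2.4) < 1e-12
```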

The second classical case is the continuous distribution, where

$$F(t) = \int\_{-\infty}^{t} \varphi(u) du.$$

Then

$$E(\mathfrak{x}) = \int\_{\mathbb{R}} t dF(t) = \int\_{-\infty}^{\infty} t \varphi(t) dt.$$

**Example 3.3.** Let us compute the dispersion

$$
\sigma^2(\mathfrak{x}) = E(\mathfrak{g} \circ \mathfrak{x}),
$$

where

$$g(u) = (u - a)^2, \quad a = E(x).$$

Here we have two possibilities. The first

$$
\sigma^2 = \int\_{\mathcal{R}} (t - a)^2 dF(t),
$$

i.e.

$$\sigma^2(\mathbf{x}) = \sum\_{i=1}^k (\mathbf{x}\_i - a)^2 p\_i$$

in the discrete case, and

$$
\sigma^2(x) = \int\_{-\infty}^{\infty} (t - a)^2 \varphi(t) dt
$$

in the continuous case. The second possibility is the equality

$$\sigma^2(x) = E((x - a)^2) = E(x^2) - 2aE(x) + E(a^2) =$$

$$= E(x^2) - a^2, \quad a = E(x).$$

Since *a* = *E*(*x*) is known, it is sufficient to compute *E*(*x*2). In this case we have *g*(*t*) = *t*2, hence

$$E(\mathfrak{x}^2) = \int\_{\mathcal{R}} \mathfrak{g}(t) dF(t) = \int\_{\mathcal{R}} t^2 dF(t).$$

In the discrete case we have

$$E(x^2) = \sum\_{i=1}^{k} x\_i^2 p\_i,$$

in the continuous case we obtain

$$E(x^2) = \int\_{-\infty}^{\infty} t^2 \varphi(t)\,dt.$$
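For an illustrative discrete distribution the two possibilities of Example 3.3 give the same value:

```python
xs = [1.0, 2.0, 4.0]
ps = [0.2, 0.5, 0.3]

a = sum(x * p for x, p in zip(xs, ps))                       # a = E(x)
var_direct = sum((x - a) ** 2 * p for x, p in zip(xs, ps))   # sum (x_i - a)^2 p_i
var_moment = sum(x * x * p for x, p in zip(xs, ps)) - a * a  # E(x^2) - a^2
assert abs(var_direct - var_moment) < 1e-12
```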

#### **5. Sequences**


In this section we want to present a method for studying limit properties of some sequences (*xn*)*n*, *xn* : B(*R*) → F of observables ([7], [25], [31], [32], [49]). The main idea is a representation of the given sequence by a sequence of random variables (*ξn*)*n*, *ξn* : (Ω, S, *P*) → *R*. Of course, the space (Ω, S) depends on the concrete sequence (*xn*)*n*; for different sequences various spaces (Ω, S, *P*) can be obtained.

The main instrument is the Kolmogorov consistency theorem ([67]). It starts with a sequence of probability measures (*μn*)*n*, *μ<sup>n</sup>* : *σ*(J*n*) → [0, 1] such that

$$\mu\_{n+1}|(\sigma(\mathcal{J}\_n) \times R) = \mu\_n,$$

i. e. *μn*+1(*A* × *R*) = *μn*(*A*) for any *A* ∈ *σ*(J*n*) (consistency condition). Let C be the family of all cylinders in the space *<sup>R</sup>N*, i. e. such sets *<sup>A</sup>* <sup>⊂</sup> *<sup>R</sup><sup>N</sup>* that

$$A = \{(t\_n)\_n;\ (t\_1, \dots, t\_k) \in B\},$$

where *<sup>k</sup>* <sup>∈</sup> *<sup>N</sup>*, *<sup>B</sup>* ∈ B(*Rk*) = *<sup>σ</sup>*(<sup>J</sup> *<sup>k</sup>*). Then by the Kolmogorov consistency theorem there exists exactly one probability measure

*P* : *σ*(C) → [0, 1]

such that

$$P(A) = \mu\_k(B). \tag{6}$$

If we denote by *<sup>π</sup><sup>n</sup>* the projection *<sup>π</sup><sup>n</sup>* : *<sup>R</sup><sup>N</sup>* <sup>→</sup> *<sup>R</sup>n*,

$$
\pi_n((t_i)_{i=1}^{\infty}) = (t_1, t_2, \dots, t_n),
$$

then we can formulate the assertion (6) by the equality

$$P(\pi_n^{-1}(B)) = \mu_n(B),\tag{7}$$

for any *n* ∈ *N* and any *B* ∈ B(*Rn*).
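As a toy illustration (my own, with the two-point space {0, 1} in place of *R*), the consistency condition μ*n*+1(*A* × *R*) = μ*n*(*A*) can be checked numerically for the finite-dimensional distributions of i.i.d. Bernoulli coordinates:

```python
import math

q = 0.3  # Bernoulli parameter (hypothetical)

def mu(B):
    """mu_k(B): probability of a set B of points of {0,1}^k under
    i.i.d. Bernoulli(q) coordinates."""
    return sum(math.prod(q if t else 1.0 - q for t in point) for point in B)

B = {(1, 0), (1, 1)}                                      # a set in {0,1}^2
cylinder = {point + (s,) for point in B for s in (0, 1)}  # B x {0,1}

# Consistency condition: mu_3(B x {0,1}) = mu_2(B).
assert math.isclose(mu(cylinder), mu(B))
```

The cylinder over *B* carries exactly the probability of *B* itself, which is what the Kolmogorov consistency theorem requires of the finite-dimensional distributions.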

**Theorem 4.1.** Let *m* be a state on a space F of all IF-events. Let (*xn*)*<sup>n</sup>* be a sequence of observables, *xn* : <sup>B</sup>(*R*) → F, and let *hn* : <sup>B</sup>(*Rn*) → F be the joint observable of *<sup>x</sup>*1, ..., *xn*, *<sup>n</sup>* <sup>=</sup> 1, 2, .... If we define *<sup>μ</sup><sup>n</sup>* : <sup>B</sup>(*Rn*) <sup>→</sup> [0, 1] by the equality

$$
\mu_n = m \circ h_n,
$$

then (*μn*)*<sup>n</sup>* satisfies the consistency condition

$$\mu\_{n+1} | (\sigma(\mathcal{J}\_n) \times \mathbb{R}) = \mu\_n.$$

Proof. Let *C*1, *C*2, ..., *Cn* ∈ B(*R*). Then by Definition 3.3. and Definition 3.1

$$\mu_{n+1}(C_1 \times C_2 \times \dots \times C_n \times R) = m(x_1(C_1).x_2(C_2)\dots x_n(C_n).x_{n+1}(R)) =$$

$$= m(x_1(C_1).x_2(C_2)\dots x_n(C_n).(1,0)) =$$

$$= m(x_1(C_1).x_2(C_2)\dots x_n(C_n)) =$$

$$= \mu_n(C_1 \times C_2 \times \dots \times C_n),$$

hence *μn*+1|(J*<sup>n</sup>* × *R*) = *μn*|J*n*. Of course, if two measures coincide on J*<sup>n</sup>* then they coincide on *σ*(J*n*), too.

Now we shall formulate a translation formula between sequences of observables in (F, *m*) and corresponding random variables in (*RN*, *<sup>σ</sup>*(C), *<sup>P</sup>*) ([67]).

**Theorem 4.2.** Let the assumptions of Theorem 4.1 be satisfied. Let *gn* : *<sup>R</sup><sup>n</sup>* <sup>→</sup> *<sup>R</sup>* be Borel measurable functions *<sup>n</sup>* <sup>=</sup> 1, 2, .... Let <sup>C</sup> be the family of all cylinders in *<sup>R</sup>N*, *<sup>ξ</sup><sup>n</sup>* : *<sup>R</sup><sup>N</sup>* <sup>→</sup> *<sup>R</sup>* be defined by the formula *ξn*((*ti*)*i*) = *tn*,

$$\eta_n : \mathbf{R}^N \to \mathbf{R},\ \eta_n = g_n(\xi_1, \dots, \xi_n),$$

$$\boldsymbol{y}\_n: \mathcal{B}(\boldsymbol{R}^n) \to \mathcal{F}, \boldsymbol{y}\_n = \boldsymbol{h}\_n \circ \boldsymbol{g}\_n^{-1}.$$

Then

$$P(\eta\_n^{-1}(B)) = m(y\_n(B))$$

for any *B* ∈ B(*R*).

Proof. Put *A* = *g*−<sup>1</sup> *<sup>n</sup>* (*B*). By Theorem 4.1.

$$m(y_n(B)) = m(h_n(g_n^{-1}(B))) = P(\pi_n^{-1}(g_n^{-1}(B))) =$$

$$= P((\mathcal{g}\_n \circ \pi\_n)^{-1}(B)) = P(\eta\_n^{-1}(B)).$$

As an easy corollary of Theorem 4.2 we obtain a variant of central limit theorem. In the classical case

$$\lim\_{n \to \infty} P(\{\omega \colon \frac{\frac{1}{n} \sum\_{i=1}^{n} \xi\_i(\omega) - a}{\frac{\sigma}{\sqrt{n}}} < t \}) = \frac{1}{\sqrt{2\pi}} \int\_{-\infty}^{t} e^{-\frac{u^2}{2}} du$$

Of course, we must define for observables the element

$$\left(\frac{\frac{1}{n} \sum_{i=1}^{n} x_i - a}{\sigma/\sqrt{n}}\right)((-\infty, t)).$$

It is sufficient to put

$$g_n(u_1, \dots, u_n) = \frac{\frac{1}{n} \sum_{i=1}^{n} u_i - a}{\sigma/\sqrt{n}}.$$

**Theorem 4.3.** Let (*xn*)*<sup>n</sup>* be a sequence of square integrable, equally distributed, independent observables, *E*(*xn*) = *a*, *σ*2(*xn*) = *σ*2 (*n* = 1, 2, ...). Then

$$\lim_{n \to \infty} m\left(\left(\frac{\frac{1}{n} \sum_{i=1}^{n} x_i - a}{\sigma/\sqrt{n}}\right)((-\infty, t))\right) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-\frac{u^2}{2}}\,du.$$

Proof. We shall use the notation from the last two theorems. Then for *C* ∈ *σ*(J )

$$m(x_n(C)) = m(h_n(R \times \dots \times R \times C)) = P(\pi_n^{-1}(R \times \dots \times R \times C)) = P(\xi_n^{-1}(C)),$$

hence

$$E(\xi_n) = \int_{-\infty}^{\infty} t\,dP_{\xi_n}(t) = \int_{-\infty}^{\infty} t\,dm_{x_n}(t) = E(x_n) = a,$$

and

$$
\sigma^2(\xi_n) = \sigma^2(x_n) = \sigma^2.
$$

Moreover,

$$P(\xi_1^{-1}(C_1) \cap \dots \cap \xi_n^{-1}(C_n)) = P(\pi_n^{-1}(C_1 \times \dots \times C_n)) =$$

$$= m(h_n(C_1 \times \dots \times C_n)) = m(x_1(C_1)) \cdot \ldots \cdot m(x_n(C_n)) = P(\xi_1^{-1}(C_1)) \cdot \ldots \cdot P(\xi_n^{-1}(C_n)),$$

hence *ξ*1, ..., *ξ<sup>n</sup>* are independent for every *n*. Put $g_n(u_1, \dots, u_n) = \frac{\frac{1}{n} \sum_{i=1}^{n} u_i - a}{\sigma/\sqrt{n}}$. By Theorem 4.2 we have

$$m\left(\left(\frac{\frac{1}{n} \sum_{i=1}^{n} x_i - a}{\sigma/\sqrt{n}}\right)((-\infty, t))\right) = m(h_n(g_n^{-1}((-\infty, t)))) = m(y_n((-\infty, t))) =$$

$$= P(\eta_n^{-1}((-\infty, t))) = P\left(\left\{\omega;\ \frac{\frac{1}{n} \sum_{i=1}^{n} \xi_i(\omega) - a}{\sigma/\sqrt{n}} < t\right\}\right).$$

Therefore by the classical central limit theorem

$$\lim_{n \to \infty} m\left(\left(\frac{\frac{1}{n} \sum_{i=1}^{n} x_i - a}{\sigma/\sqrt{n}}\right)((-\infty, t))\right) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-\frac{u^2}{2}}\,du.$$
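A quick Monte Carlo sanity check of this conclusion (my own illustration, not part of the chapter), using i.i.d. Uniform(0, 1) random variables in place of abstract observables:

```python
import math
import random

# The standardized average of n i.i.d. Uniform(0,1) variables should be
# approximately standard normal.
random.seed(0)
n, N = 50, 20000
a, sigma = 0.5, math.sqrt(1.0 / 12.0)  # mean and std. deviation of Uniform(0,1)

def standardized_mean():
    s = sum(random.random() for _ in range(n)) / n
    return (s - a) / (sigma / math.sqrt(n))

samples = [standardized_mean() for _ in range(N)]

def ecdf(t):  # empirical distribution function of the standardized means
    return sum(1 for s in samples if s < t) / N

def Phi(t):   # standard normal distribution function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

for t in (-1.0, 0.0, 1.0):
    assert abs(ecdf(t) - Phi(t)) < 0.02
```

The tolerance 0.02 is generous: the sampling error of the empirical distribution function at 20000 samples is a few thousandths.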

Let us have a look at the previous theorem from another point of view, say, a categorical one. We had

$$\lim_{n \to \infty} P(\eta_n^{-1}((-\infty, t))) = \phi(t).$$

We can say that (*ηn*)*<sup>n</sup>* converges to *φ* in distribution. Of course, there are other important types of convergence, at least convergence in measure and convergence almost everywhere.

A sequence (*ηn*)*<sup>n</sup>* of random variables (= measurable functions) converges to 0 in measure *μ* : S → [0, 1], if

$$\lim_{n \to \infty} \mu(\eta_n^{-1}((-\varepsilon, \varepsilon))) = 1$$

for every *ε* > 0. And the sequence converges to 0 almost everywhere, if

$$P\left(\bigcap_{p=1}^{\infty} \bigcup_{k=1}^{\infty} \bigcap_{n=k}^{\infty} \eta_n^{-1}\left(\left(-\frac{1}{p}, \frac{1}{p}\right)\right)\right) = 1.$$

Certainly, if *ηn*(*ω*) → 0, then

$$\forall \varepsilon > 0\ \exists k\ \forall n > k : -\varepsilon < \eta_n(\omega) < \varepsilon.$$

If instead of *ε* we use $\frac{1}{p}$, *p* ∈ *N*, then *ηn*(*ω*) → 0 if and only if

$$\forall p \exists k \forall n > k : \omega \in \eta\_n^{-1}((-\frac{1}{p}, \frac{1}{p})).$$

And *η<sup>n</sup>* → 0 almost everywhere if the set {*ω*; *ηn*(*ω*) → 0} has measure 1.

**Definition 4.1.** A sequence (*yn*)*<sup>n</sup>* of observables

(i) converges in distribution to a function *F* : *R* → *R*, if

$$\lim_{n \to \infty} m(y_n((-\infty, t))) = F(t)$$

for every *t* ∈ *R*;

(ii) it converges to 0 in state *m* : F → [0, 1], if

$$\lim_{n \to \infty} m(y_n((-\varepsilon, \varepsilon))) = 1$$

for every *ε* > 0;

(iii) it converges to 0 *m*-almost everywhere, if

$$\lim_{p \to \infty} \lim_{k \to \infty} \lim_{i \to \infty} m\left(\bigwedge_{n=k}^{k+i} y_n\left(\left(-\frac{1}{p}, \frac{1}{p}\right)\right)\right) = 1.$$

**Theorem 4.4.** Let (*yn*)*<sup>n</sup>* be a sequence of observables, (*ηn*)*<sup>n</sup>* be the sequence of corresponding random variables. Then

(i) (*yn*)*<sup>n</sup>* converges to *F* : *R* → *R* in distribution if and only if (*ηn*)*<sup>n</sup>* converges to *F*;

(ii) (*yn*)*<sup>n</sup>* converges to 0 in state *m* : F → [0, 1] if and only if (*ηn*)*<sup>n</sup>* converges to 0 in measure *P* : S → [0, 1];

(iii) if (*ηn*)*<sup>n</sup>* converges *P*-almost everywhere to 0, then (*yn*)*<sup>n</sup>* converges to 0 *m*-almost everywhere.

The details can be found in [66]. Many applications of the method have been described in [25], [31], [35], [37], [39], [52].
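The difference between convergence in measure and convergence almost everywhere can be seen on the classical "typewriter" sequence on [0, 1) (a standard textbook example, not from this chapter): it converges to 0 in Lebesgue measure, yet *ηn*(*ω*) → 0 holds for no *ω*.

```python
import math

def eta(n, w):
    """'Typewriter' sequence on [0,1): indicator of a sliding dyadic interval."""
    k = int(math.log2(n))          # level: 2^k <= n < 2^(k+1)
    j = n - 2 ** k                 # position of the interval within the level
    return 1.0 if j / 2 ** k <= w < (j + 1) / 2 ** k else 0.0

def level_measure(n):
    """Lebesgue measure of {eta_n = 1}; it is 2^(-k) -> 0 (convergence in measure)."""
    return 2.0 ** -int(math.log2(n))

# But there is no convergence a.e.: every w is hit once on each level.
w = 0.3
hits = [n for n in range(1, 1024) if eta(n, w) == 1.0]
assert level_measure(1023) == 2.0 ** -9
assert len(hits) == 10             # one hit per level k = 0, ..., 9
```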

#### **6. Conditional probability**

Conditional probability (of *A* with respect to *B*) is the real number *P*(*A*|*B*) such that

$$P(A \cap B) = P(B)P(A|B).$$

When *A*, *B* are independent, then *P*(*A*|*B*) = *P*(*A*): the event *A* does not depend on the occurring of the event *B*. Another point of view:

$$P(A \cap B) = \int\_{B} P(A|B)dP.$$

The number *P*(*A*|*B*) can be regarded as a constant function. Constant functions are measurable with respect to the *σ*-algebra S<sup>0</sup> = {∅, Ω}.

Generally *P*(*A*|S0) can be defined for any *σ*-algebra S<sup>0</sup> ⊂ S, as an S0-measurable function such that

$$P(A \cap C) = \int_{C} P(A|\mathcal{S}_0)\,dP, \quad C \in \mathcal{S}_0.$$

If S<sup>0</sup> = S, then we can put *P*(*A*|S0) = *χA*, since *χ<sup>A</sup>* is S0-measurable, and

$$\int_{C} \chi_A\,dP = P(A \cap C).$$

An important example of S<sup>0</sup> is the family of all pre-images of a random variable *ξ* : Ω → *R*

$$\mathcal{S}_0 = \{\xi^{-1}(B);\ B \in \sigma(\mathcal{J})\}.$$

In this case we shall write *P*(*A*|S0) = *P*(*A*|*ξ*), hence

$$\int_{C} P(A|\xi)\,dP = P(A \cap C), \quad C = \xi^{-1}(B),\ B \in \sigma(\mathcal{J}).$$
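On a finite probability space the defining identity of *P*(*A*|*ξ*) can be verified directly (a toy example of my own; Ω, *A* and *ξ* below are made up):

```python
from fractions import Fraction as F

# Omega = {0,...,5} with uniform P; check P(A ∩ xi^{-1}(B)) equals the
# integral of P(A|xi) over xi^{-1}(B).
Omega = range(6)
P = {w: F(1, 6) for w in Omega}
A = {0, 1, 2}                      # an event
xi = {w: w % 2 for w in Omega}     # a random variable with values 0, 1

def cond_prob(value):
    """P(A | xi = value) = P(A ∩ {xi = value}) / P({xi = value})."""
    level = {w for w in Omega if xi[w] == value}
    return sum(P[w] for w in level & A) / sum(P[w] for w in level)

B = {0}                                         # a Borel set of values
C = {w for w in Omega if xi[w] in B}            # C = xi^{-1}(B)
lhs = sum(P[w] for w in C & A)                  # P(A ∩ C)
rhs = sum(cond_prob(xi[w]) * P[w] for w in C)   # integral of P(A|xi) over C
assert lhs == rhs == F(1, 3)
```

Exact rational arithmetic (`Fraction`) makes the equality exact rather than approximate.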

By the transformation formula (where *g* is a Borel measurable function with *P*(*A*|*ξ*) = *g* ◦ *ξ*)

$$P(A \cap \xi^{-1}(B)) = \int_{\xi^{-1}(B)} g \circ \xi\,dP = \int_{B} g\,dP_{\xi}, \quad B \in \sigma(\mathcal{J}).$$

And exactly this formulation will be used in our *IF*-case:

$$m(A.x(B)) = \int_{B} p(A|x)\,dm_x = \int_{B} p(A|x)\,dF.$$

Of course, we must first prove the existence of such a mapping *p*(*A*|*x*) : *R* → *R* ([34], [70], [72]). Recall that the product of *IF*-events is defined by the formula

$$K.L = (\mu_K.\mu_L,\ \nu_K + \nu_L - \nu_K.\nu_L).$$

**Theorem 5.1.** Let *x* : *σ*(J ) → F be an observable, *m* : F → [0, 1] be a state, and let *A* ∈ F. Define *ν* : *σ*(J ) → [0, 1] by the equality

$$\nu(B) = \mathfrak{m}(A.\mathfrak{x}(B)).$$

Then *ν* is a measure.

Proof. Let *B* ∩ *C* = ∅, *B*, *C* ∈ B(*R*) = *σ*(J ). Then *x*(*B*).*x*(*C*)=(0, 1), hence

$$A.(x(B) \oplus x(C)) = (A.x(B)) \oplus (A.x(C)),$$

and therefore

$$\nu(B \cup C) = m(A.x(B \cup C)) = m(A.(x(B) \oplus x(C))) = m((A.x(B)) \oplus (A.x(C))) =$$

$$= m(A.x(B)) + m(A.x(C)) = \nu(B) + \nu(C).$$

Let *Bn* ↗ *B*. Then *x*(*Bn*) ↗ *x*(*B*), hence *A*.*x*(*Bn*) ↗ *A*.*x*(*B*). Therefore

$$\nu(B_n) = m(A.x(B_n)) \nearrow m(A.x(B)) = \nu(B).$$

**Theorem 5.2.** Let *x* : *σ*(J ) → F be an observable, *m* : F → [0, 1] be a state, and let *A* ∈ F. Then there exists a Borel measurable function *f* : *R* → *R* (i.e. *B* ∈ *σ*(J ) =⇒ *f*−1(*B*) ∈ *σ*(J )) such that

$$\mathfrak{m}(A.\mathfrak{x}(B)) = \int\_{B} f dm\_{\mathfrak{x}}$$

for any *B* ∈ *σ*(J ). If *g* is another such function, then

$$m_x(\{u \in R;\ f(u) \neq g(u)\}) = 0.$$

Proof. Define *μ*, *ν* : *σ*(J ) → [0, 1] by the formulas

$$
\mu(B) = m_x(B) = m(x(B)), \\
\nu(B) = m(A.x(B)).
$$

Then *μ*, *ν* : *σ*(J ) → [0, 1] are measures, and *ν* ≤ *μ*.

By the Radon - Nikodym theorem there exists exactly one function *f* : *R* → *R* (with respect to the equality *μ*- almost everywhere) such that

$$m(A.x(B)) = \nu(B) = \int_{B} f\,d\mu = \int_{B} f\,dm_x, \quad B \in \sigma(\mathcal{J}).$$

**Definition 5.1.** Let *x* : *σ*(J ) → F be an observable, *A* ∈ F. Then the conditional probability *p*(*A*|*x*) = *f* is a Borel measurable function (i.e. *B* ∈ *σ*(J ) =⇒ *f*−1(*B*) ∈ *σ*(J )) such that

$$\int_{B} p(A|x)\,dm_x = m(A.x(B)),$$

for any *B* ∈ *σ*(J ).

#### **7. Algebraic world**

At the end of our communication we shall present two ideas. The first one is an algebraization of the product

$$A.B = (\mu_A.\mu_B,\ \nu_A + \nu_B - \nu_A.\nu_B).$$

The second idea is a presentation of a dual notion to the notion of *IF*-event.

In MV-algebras the product was introduced independently in [56] and [47]. Let us return to Definition 1.3 and Example 1.5.

**Definition 6.1.** An MV-algebra with product is a pair (*M*, .), where *M* is an MV-algebra, and . is a commutative and associative binary operation on *M* satisfying the following conditions:

(i) $1.a = a$;

(ii) $a.(b \odot \neg c) = (a.b) \odot \neg(a.c)$.

**Example 6.1.** Let M ⊃ F be the MV-algebra defined in Theorem 1.1 (Example 1.5). Then M with the product *A*.*B* = (*μAμB*, *ν<sup>A</sup>* + *ν<sup>B</sup>* − *νAνB*) is an MV-algebra with product. Indeed,

$$(1,0).(\mu_A, \nu_A) = (1.\mu_A,\ 0 + \nu_A - 0.\nu_A) = (\mu_A, \nu_A).$$

Moreover

$$(\mu_A, \nu_A).((\mu_B, \nu_B) \odot (1-\mu_C, 1-\nu_C)) =$$

$$= (\mu_A((\mu_B-\mu_C)\vee 0),\ \nu_A+(\nu_B-\nu_C+1)\wedge 1-\nu_A((\nu_B+1-\nu_C)\wedge 1)).$$

On the other hand

$$((\mu_A, \nu_A).(\mu_B, \nu_B)) \odot (\neg((\mu_A, \nu_A).(\mu_C, \nu_C))) =$$

$$= ((\mu_A(\mu_B - \mu_C)) \vee 0,\ (\nu_A + (\nu_B - \nu_C + 1) - \nu_A(\nu_B + 1 - \nu_C)) \wedge 1).$$

Denote

*ν<sup>B</sup>* − *ν<sup>C</sup>* + 1 = *k*.

If 1 ≤ *k*, then

$$\begin{aligned} \nu\_A + k \wedge 1 - \nu\_A(k \wedge 1) &= \nu\_A + 1 - \nu\_A = 1, \\ (\nu\_A + k - \nu\_A k) \wedge 1 &= (\nu\_A + k(1 - \nu\_A)) \wedge 1 = 1. \end{aligned}$$

If *k* < 1, then

$$\begin{aligned} \nu_A + k \wedge 1 - \nu_A(k \wedge 1) &= \nu_A + k - \nu_A k, \\ (\nu_A + k - \nu_A k) \wedge 1 &= \nu_A + k - \nu_A k, \end{aligned}$$

hence actually

$$A.(B \odot \neg C) = (A.B) \odot (\neg(A.C)).$$
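The identity just proved can also be checked numerically on random pairs (a sanity check of my own, not part of the text); `prod`, `odot` and `neg` below encode the product, the componentwise Łukasiewicz operation and the negation used above:

```python
import random

def prod(a, b):
    """The product A.B = (mu_A mu_B, nu_A + nu_B - nu_A nu_B) on IF-pairs."""
    return (a[0] * b[0], a[1] + b[1] - a[1] * b[1])

def odot(a, b):
    """Componentwise Lukasiewicz operation, as in the computation above."""
    return (max(a[0] + b[0] - 1.0, 0.0), min(a[1] + b[1], 1.0))

def neg(a):
    """The negation ¬(mu, nu) = (1 - mu, 1 - nu)."""
    return (1.0 - a[0], 1.0 - a[1])

random.seed(1)
for _ in range(1000):
    A, B, C = [(random.random(), random.random()) for _ in range(3)]
    lhs = prod(A, odot(B, neg(C)))
    rhs = odot(prod(A, B), neg(prod(A, C)))
    assert abs(lhs[0] - rhs[0]) < 1e-12 and abs(lhs[1] - rhs[1]) < 1e-12
```

The first identity of the example, (1, 0) acting as a unit, is the statement `prod((1.0, 0.0), a) == a` for any pair `a`.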

Similarly as in Section 1 we can define a product in D-posets; we shall call such D-posets Kôpka D-posets.

**Definition 6.2.** A Kôpka D-poset is a pair (*D*, ∗), where D is a D-poset, and ∗ is a commutative and associative operation on D satisfying the following conditions:

1. ∀*a* ∈ *D* : *a* ∗ 1 = *a*;
2. ∀*a*, *b* ∈ *D*, *a* ≤ *b*, ∀*c* ∈ *D* : *a* ∗ *c* ≤ *b* ∗ *c*;
3. ∀*a*, *b* ∈ *D* : *a* − (*a* ∗ *b*) ≤ 1 − *b*;
4. ∀(*an*)*<sup>n</sup>* ⊂ *D*, *an* ↗ *a*, ∀*b* ∈ *D* : *an* ∗ *b* ↗ *a* ∗ *b*.

Evidently every IF-family F can be embedded into an MV-algebra with product, and it is a special case of a Kôpka D-poset; hence any result from the Kôpka D-poset theory can be applied to our IF-events theory ([26], [64]).

Now let us consider a theory dual to the IF-events theory, the theory of IV-events. An advantage of the IV-theory is the fact that it considers the natural ordering and operations of vectors. On the other hand, the IV-theory is isomorphic to the IF-theory ([65], [43]).

**Definition 6.3.** Let (Ω, S) be a measurable space, S a *σ*-algebra. By an IV-event we mean a pair $A = (\overline{\mu}_A, \overline{\nu}_A) : \Omega \to [0, 1]^2$ of S-measurable functions such that $\overline{\mu}_A \le \overline{\nu}_A$. The ordering is given by

$$A \le B \Longleftrightarrow \overline{\mu}_A \le \overline{\mu}_B,\ \overline{\nu}_A \le \overline{\nu}_B;$$

$$\begin{aligned} \overline{A} \boxplus \overline{B} &= ( (\overline{\mu}\_A + \overline{\mu}\_B) \wedge 1 , (\overline{\nu}\_A + \overline{\nu}\_B) \wedge 1 ); \\ \overline{A} \boxdot \overline{B} &= ( (\overline{\mu}\_A + \overline{\mu}\_B - 1) \vee 0 , (\overline{\nu}\_A + \overline{\nu}\_B - 1) \vee 0 ). \end{aligned}$$

Denote by V the family of all IV-events. By an IV-state we mean a map *m* : V → [0, 1] such that the following properties are satisfied:

$$\begin{aligned} \text{(i)} \ & \overline{m}((0,0)) = 0, \ \overline{m}((1,1)) = 1; \\ \text{(ii)} \ & \overline{A} \boxdot \overline{B} = (0,0) \Longrightarrow \overline{m}(\overline{A} \boxplus \overline{B}) = \overline{m}(\overline{A}) + \overline{m}(\overline{B}); \\ \text{(iii)} \ & \overline{A}\_n \nearrow \overline{A} \Longrightarrow \overline{m}(\overline{A}\_n) \nearrow \overline{m}(\overline{A}). \end{aligned}$$

**Theorem 6.1.** Let V be the family of all IV-events (with respect to (Ω, S)) and let *m* : V → [0, 1] be an IV-state. Define

$$\begin{aligned} \mathcal{F} &= \{ (\overline{\mu}\_A, 1 - \overline{\nu}\_A); (\overline{\mu}\_A, \overline{\nu}\_A) \in \mathcal{V} \}, \\ m &: \mathcal{F} \to [0, 1], \quad m((\mu\_A, \nu\_A)) = \overline{m}((\mu\_A, 1 - \nu\_A)), \\ \varphi &: \mathcal{V} \to \mathcal{F}, \quad \varphi((\overline{\mu}\_A, \overline{\nu}\_A)) = (\overline{\mu}\_A, 1 - \overline{\nu}\_A). \end{aligned}$$

Then F is the family of all IF-events (with respect to (Ω, S)), *m* is an IF-state, and *φ* is an isomorphism such that

$$\begin{aligned} \varphi((0,0)) &= (0,1), \quad \varphi((1,1)) = (1,0), \\ \varphi(\overline{A} \boxdot \overline{B}) &= \varphi(\overline{A}) \odot \varphi(\overline{B}), \\ \varphi(\overline{A} \boxplus \overline{B}) &= \varphi(\overline{A}) \oplus \varphi(\overline{B}), \\ \varphi(\neg \overline{A}) &= \neg \varphi(\overline{A}), \\ \overline{m}(\overline{A}) &= m(\varphi(\overline{A})), \quad \overline{A} \in \mathcal{V}. \end{aligned}$$

Proof. The proof is almost straightforward. Of course, working with the family V is more natural, and the results can be applied immediately to the probability theory on F.

#### **8. Conclusion**

The structures studied in this chapter have two aspects: the first is practical, the second theoretical. Fuzzy sets and their generalization, Atanassov intuitionistic fuzzy sets, open new possibilities in both directions.

From the practical point of view we can recommend e.g. [1], [9], [69]. Of course, the whole IF-theory can be motivated by practical problems and applications (see [10], [44 - 46], [53]).

The main contribution of the presented theory is a new point of view on human thinking and creation. We consider algebraic models for many-valued logic: IF-events and, more generally, MV-algebras, D-posets, and effect algebras. They are as important for many-valued logic as Boolean algebras are for two-valued logic. Of course, as an illustration we also presented some results on entropy ([11], [12], [40 - 42], [59]) and on the inclusion - exclusion principle ([6], [26], [30]). But the more important idea is the building of probability theory on IF-events.

At the present time the theoretical description of uncertainty has two parts: the objective one (probability and statistics) and the subjective one (fuzzy sets). We have shown that both parts can be considered together.

#### **9. Acknowledgement**

This work was partially supported by the Agency of the Slovak Ministry of Education for the Structural Funds of the EU under project ITMS 26220120007, and by the Agency VEGA, project 1/0621/11.

#### **10. References**

[1] Atanassov K.T.: Intuitionistic Fuzzy Sets: Theory and Applications. Studies in Fuzziness and Soft Computing. Physica-Verlag, Heidelberg, 1999.

[2] Atanassov K.T. and Riečan B.: On two new types of probability on IF-events. Notes on IFS, 2007.

[3] Cignoli R., D'Ottaviano I.M.L., Mundici D.: Algebraic Foundations of Many-valued Reasoning. Kluwer, Dordrecht, 2000.

[4] Ciungu L., Riečan B.: General form of probabilities on IF-sets. In: Fuzzy Logic and Applications, Proc. WILF Palermo, 2009, 101 - 107.

[5] Ciungu L., Riečan B.: Representation theorem for probabilities on IFS-events. Information Sciences 180, 2010, 793 - 798.

[6] Ciungu L., Riečan B.: The inclusion - exclusion principle for IF-states. Information Sciences (to appear).

[7] Čunderlíková K.: The individual ergodic theorem on the IF-events. Soft Computing - A Fusion of Foundations, Methodologies and Applications, Vol. 14, Number 3, Springer 2010, 229 - 234.

[8] Čunderlíková K., Riečan B.: The probability on B-structures. Developments in Fuzzy Sets, Intuitionistic Fuzzy Sets, Generalized Nets and Related Topics, Vol. I, EXIT, Warsaw 2008, 33 - 60.

[9] De S.K., Biswas R., Roy A.R.: An application of intuitionistic fuzzy sets in medical diagnosis. Fuzzy Sets and Systems 117, 2001, 209 - 213.

[10] Deschrijver G., Kerre E.E.: On the relationship between some extensions of fuzzy set theory. Fuzzy Sets and Systems 133, 2003, 227 - 235.

[11] Di Nola A., Dvurečenskij A., Hyčko M., Manara C.: Entropy of effect algebras with the Riesz decomposition property I: Basic properties. Kybernetika 41, 2005, 143 - 160.

[12] Di Nola A., Dvurečenskij A., Hyčko M., Manara C.: Entropy of effect algebras with the Riesz decomposition property II: MV-algebras. Kybernetika 41, 2005, 161 - 176.

[13] Drygas P.: Problem of monotonicity for decomposable operations. Recent Advances in Fuzzy Sets, Intuitionistic Fuzzy Sets, Generalized Nets and Related Topics, Vol. I, IBS PAN SRI PAS Warsaw 2011, 79 - 88.

[14] Ďurica M.: Hudetz entropy on IF-events. Developments in Fuzzy Sets, Intuitionistic Fuzzy Sets, Generalized Nets and Related Topics, Vol. I, IBS PAN SRI PAS Warsaw 2010, 73.

[15] Dvurečenskij A. and Pulmannová S.: New Trends in Quantum Structures. Kluwer, Dordrecht, 2000.

[16] Dvurečenskij A. and Rachůnek J.: Riečan and Bosbach states for bounded non-commutative Rl-monoids. Math. Slovaca 56, 2006, 487 - 500.

[17] Dvurečenskij A. and Riečan B.: Weakly divisible MV-algebras and product. J. Math. Anal. Appl. 234, 1999, 208 - 222.

[18] Dvurečenskij A. and Riečan B.: On states on BL-algebras and related structures. In: Tributes 10: Essays in Honour of Petr Hájek (P. Cintula et al. eds.), 2009, 287 - 302.

[19] Foulis D. and Bennett M.: Effect algebras and unsharp quantum logics. Found. Phys. 24, 1994, 1325 - 1346.

[20] Georgescu G.: Bosbach states on fuzzy structures. Soft Computing 8, 2004, 217 - 230.

[21] Gerstenkorn T., Mańko J.: Probabilities of intuitionistic fuzzy events. In: Issues in Intelligent Systems: Paradigms (O. Hryniewicz et al. eds.), Intuitionistic fuzzy probability theory 45, 2005, 63 - 58.

[22] Grzegorzewski P. and Mrówka E.: Probability of intuitionistic fuzzy events. In: Soft Methods in Probability, Statistics and Data Analysis (P. Grzegorzewski et al. eds.), 2002, 105 - 115.

[23] Hájek P.: Metamathematics of Fuzzy Logic. Kluwer, Dordrecht, 1998.

[24] Hanesová R.: Statistical estimation on MV-algebras. Proc. of the Eleventh International Workshop on Generalized Nets and the Second International Workshop on Generalized Nets, Intuitionistic Fuzzy Sets and Knowledge Engineering, London, 9 July 2010, 66 - 70.

[25] Jurečková M.: The addition to ergodic theorem on probability MV-algebras with product. Soft Computing 7, 2003, 105 - 115.

[26] Kelemenová J.: The inclusion-exclusion principle in semigroups. Recent Advances in Fuzzy Sets, Intuitionistic Fuzzy Sets, Generalized Nets and Related Topics, Vol. I, IBS PAN SRI PAS Warsaw 2011, 87 - 94.

[27] Klement E., Mesiar R., and Pap E.: Triangular Norms. Kluwer, Dordrecht, 2000.

[28] Kôpka F., Chovanec F.: D-posets. Math. Slovaca 44, 1994, 21 - 34.

[29] Krachounov M.: Intuitionistic probability and intuitionistic fuzzy sets. In: First Intern. Workshop on IFS (El-Darzi et al. eds.), 2006, 714 - 717.

[30] Kuková M.: The inclusion-exclusion principle on some algebraic structures. Recent Advances in Fuzzy Sets, Intuitionistic Fuzzy Sets, Generalized Nets and Related Topics, Vol. I, IBS PAN SRI PAS Warsaw 2011, 123 - 126.

[31] Lašová L.: The individual ergodic theorem on IF-events. Developments in Fuzzy Sets, Intuitionistic Fuzzy Sets, Generalized Nets and Related Topics, Vol. I, IBS PAN SRI PAS Warsaw 2010, 131 - 140.

[32] Lendelová K.: Convergence of IF-observables. In: Issues in the Representation and Processing of Uncertain and Imprecise Information - Fuzzy Sets, Intuitionistic Fuzzy Sets, Generalized Nets, and Related Topics. EXIT, Warsaw 2005, 232 - 240.

[33] Lendelová K.: IF-probability on MV-algebras. Notes on Intuitionistic Fuzzy Sets 11, 2005, 66 - 72.

[34] Lendelová K.: Conditional IF-probability. In: Advances in Soft Computing: Soft Methods for Integrated Uncertainty Modelling, 2006, 275 - 283.

[35] Lendelová K., Petrovičová J.: Representation of IF-probability for MV-algebras. Soft Computing - A Fusion of Foundations, Methodologies and Applications 10, 2006, 564 - 566.

[36] Lendelová K., Riečan B.: Weak law of large numbers for IF-events. In: Current Issues in Data and Knowledge Engineering (Bernard De Baets et al. eds.), 2004, 309 - 314.

[37] Lendelová K., Riečan B.: Probability on triangle and square. In: Proceedings of the Eleventh International Conference IPMU 2006, July 2-7, 2006, Paris, 977 - 982.

[38] Lendelová K., Riečan B.: Strong law of large numbers for IF-events. In: Proceedings of the Eleventh International Conference IPMU 2006, July 2-7, 2006, Paris, 2363 - 2366.

[39] Markechová D.: The conjugation of fuzzy probability spaces to the unit interval. Fuzzy Sets and Systems 47, 1992, 87 - 92.

[40] Markechová D.: F-quantum spaces and their dynamics. Fuzzy Sets and Systems 50, 1992, 79 - 88.

[41] Markechová D.: Entropy of complete fuzzy partitions. Mathematica Slovaca 43, 1993, 1 - 10.

[42] Markechová D.: A note on the Kolmogorov - Sinaj entropy of fuzzy dynamical systems. Fuzzy Sets and Systems 64, 1994, 87 - 90.

[43] Mesiar R., Komorníková M.: Probability measures on interval-valued fuzzy events. Acta Univ. M. Belii, Math. 19, 2011, 5 - 10.

[44] Michalíková A.: Outer measure on IF-sets. First International Workshop on Intuitionistic Fuzzy Sets, Generalized Nets and Knowledge Engineering, Univ. of Westminster, London, 6 - 7 September 2006, 39 - 44.

[45] Michalíková A.: A measure extension theorem in l-groups. Proc. IPMU'08, Torremolinos (Malaga), Spain, 22 - 27 June 2008, 1666 - 1670.

[46] Michalíková A.: The differential calculus on IF sets. FUZZ-IEEE 2009, Korea, 2009, 1393 - 1395.

[47] Montagna F.: An algebraic approach to propositional fuzzy logic. J. Logic Lang. Inf. (D. Mundici et al. eds.), Special issue on Logics of Uncertainty 9, 2000, 91 - 124.

[48] Mundici D.: Interpretation of AF C*-algebras in Łukasiewicz sentential calculus. J. Funct. Anal. 56, 1986, 889 - 894.

[49] Mundici D.: Advanced Łukasiewicz Calculus and MV-algebras. Springer, Dordrecht, 2011.

[50] Potocký R.: On random variables having values in a vector lattice. Math. Slovaca 27, 1977, 267 - 276.

[51] Potocký R.: On the expected value of vector lattice-valued random variables. Math. Slovaca 36, 1986, 401 - 405.

[52] Renčová M.: A generalization of probability theory on MV-algebras to IF-events. Fuzzy Sets and Systems 161, 2010, 1726 - 1739.

[53] Renčová M.: General form of strongly additive phi-probabilities. In: Proc. IPMU'08 (L. Magdalena et al. eds.), Torremolinos (Malaga), Spain, 2008, 1671 - 1674.

[54] Renčová M.: State-preserving mappings on IF-events. Developments in Fuzzy Sets, Intuitionistic Fuzzy Sets, Generalized Nets and Related Topics, Vol. I, EXIT, Warsaw 2008, 313 - 318.

[55] Renčová M., Riečan B.: Probability on IF-sets: an elementary approach. In: First Int. Workshop on IFS, Generalized Nets and Knowledge Engineering, 2006, 8 - 17.

[56] Riečan B.: On the product MV-algebras. Tatra Mt. Math. Publ. 16, 1999, 143 - 149.

[57] Riečan B.: A descriptive definition of the probability on intuitionistic fuzzy sets. In: EUSFLAT '2003 (M. Wagenknecht, R. Hampel eds.), 2003, 263 - 266.

[58] Riečan B.: Representation of probabilities on IFS events. In: Soft Methodology and Random Information Systems (López-Díaz et al. eds.), 2004, 243 - 248.

[59] Riečan B.: Kolmogorov - Sinaj entropy on MV-algebras. Int. J. Theor. Physics 44, 2005, 1041 - 1052.

[60] Riečan B.: On the probability on IF-sets and MV-algebras. Notes on IFS 11, 2005, 21 - 25.

[61] Riečan B.: On a problem of Radko Mesiar: general form of IF-probabilities. Fuzzy Sets and Systems 152, 2006, 1485 - 1490.

[62] Riečan B.: On the probability and random variables on IF events. In: Applied Artificial Intelligence, Proc. 7th FLINS Conf. Genova (D. Ruan et al. eds.), 2006, 138 - 145.

[63] Riečan B.: Probability theory on intuitionistic fuzzy events. In: Algebraic and Proof-theoretic Aspects of Non-classical Logic (S. Aguzzoli et al. eds.), Papers in honour of Daniele Mundici's 60th birthday, Lecture Notes in Computer Science, Springer, Berlin, 2007, 290 - 308.


**11**

**Recognition and Resolution of "Comprehension Uncertainty" in** *AI*

Sukanto Bhattacharya1,\* and Kuldeep Kumar2

*1Deakin Graduate School of Business, Deakin University, Australia*

*2School of Business, Bond University, Australia*

\*Corresponding author

#### **1. Introduction**

**1.1 Uncertainty resolution as an integral characteristic of intelligent systems**

Handling uncertainty is an important component of most intelligent behaviour, so uncertainty resolution is a key step in the design of an artificially intelligent decision system (Clark, 1990). Like other aspects of intelligent systems design, uncertainty resolution is typically sought to be handled by emulating natural intelligence (Halpern, 2003; Ball and Christensen, 2009). In this regard, a number of computational uncertainty resolution approaches have been proposed and tested by Artificial Intelligence (*AI*) researchers over the past several decades, since the birth of *AI* as a scientific discipline in the early 1950s following the publication of Alan Turing's landmark paper (Turing, 1950).

The following chart categorizes the various forms of uncertainty whose resolution ought to be a pertinent consideration in the design of an artificial decision system that emulates natural intelligence:

Fig. 1. Broad classifications of "uncertainty" that intelligent systems are expected to resolve

