Algorithm Prob (pM (x))

Input: D = {(x<sup>p</sup>, y<sup>r</sup><sub>p</sub>) | p ∈ Z<sub>P</sub>} and D<sup>∗</sup> = {x<sup>p</sup> | p ∈ Z<sub>P</sub>}

Output: pM (x)

Step 1: Given an input datum x<sup>i</sup> ∈ D<sup>∗</sup>, determine the neighborhood ranking (x<sup>i<sub>0</sub></sup>, x<sup>i<sub>1</sub></sup>, …, x<sup>i<sub>k</sub></sup>, …, x<sup>i<sub>P−1</sub></sup>) of the vector x<sup>i</sup>, where x<sup>i<sub>0</sub></sup> = x<sup>i</sup>, x<sup>i<sub>1</sub></sup> is the vector closest to x<sup>i</sup>, and x<sup>i<sub>k</sub></sup> (k = 0, …, P − 1) is the vector for which there are k vectors x<sup>j</sup> with ||x<sup>i</sup> − x<sup>j</sup>|| < ||x<sup>i</sup> − x<sup>i<sub>k</sub></sup>||.

Step 2: Determine H(x<sup>i</sup>), which shows the rate of output change for input data x<sup>i</sup>, by the following equation:

$$H(\mathbf{x}^i) = \sum\_{l=1}^{M} \frac{|y^i - y^{i\_l}|}{||\mathbf{x}^i - \mathbf{x}^{i\_l}||},\tag{16}$$

where x<sup>i<sub>l</sub></sup> for l ∈ Z<sub>M</sub> denotes the lth element in the neighborhood ranking of x<sup>i</sup>, i ∈ Z<sub>P</sub>, and y<sup>i</sup> and y<sup>i<sub>l</sub></sup> are the outputs for the inputs x<sup>i</sup> and x<sup>i<sub>l</sub></sup>, respectively. The number M determines the range considered in H(x).

Step 3: Determine the probability pM (x<sup>i</sup> ) for x<sup>i</sup> by normalizing H(x<sup>i</sup> ) as follows:

$$p\_M(\mathbf{x}^i) = \frac{H(\mathbf{x}^i)}{\sum\_{j=1}^{P} H(\mathbf{x}^j)},\tag{17}$$

where $\sum\_{i=1}^{P} p\_M(\mathbf{x}^i) = 1$.
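As an illustration, here is a minimal NumPy sketch of Algorithm Prob; the names prob_pM, X, and y are assumptions for this example, and Euclidean distance is used for the neighborhood ranking.

```python
import numpy as np

def prob_pM(X, y, M):
    """Sketch of Algorithm Prob: estimate p_M(x^i) from the learning data.

    X : (P, m) array of input vectors, y : (P,) array of outputs,
    M : number of nearest neighbors used in Eq. (16).
    Assumes M < P and distinct input vectors.
    """
    P = X.shape[0]
    H = np.zeros(P)
    for i in range(P):
        # Step 1: neighborhood ranking of x^i by Euclidean distance.
        d = np.linalg.norm(X - X[i], axis=1)
        ranking = np.argsort(d)          # ranking[0] == i, i.e. x^{i_0} = x^i
        # Step 2: rate of output change over the M nearest neighbors, Eq. (16).
        for l in ranking[1:M + 1]:
            H[i] += abs(y[i] - y[l]) / np.linalg.norm(X[i] - X[l])
    # Step 3: normalize so the probabilities sum to one, Eq. (17).
    return H / H.sum()
```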

Learning Algorithm B using Algorithm Center (c) is introduced as follows [16, 17]:

## Learning Algorithm B

θ: threshold of MSE

T<sup>0</sup>max: maximum number of learning times for NG

Tmax: maximum number of learning times for SDM

M: the size of ranges

n: the number of rules

Step 1: Initialize().

Step 2: Center and width parameters are determined from Algorithm Center(P) and the set D<sup>∗</sup>.

Step 3: Parameters c, b, and w are updated using Algorithm SDM (c, b, w).

Step 4: If E(t) ≤ θ, then the algorithm terminates; else go to Step 3 with n ← n + 1 and t ← t + 1.

Figure 1. Neural gas method [18].
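As a sketch of the control flow (the chapter gives no code, so the callables center_by_NG, width_by_eq15, sdm_update, and mse are placeholders for Algorithm Center(P), Eq. (15), Algorithm SDM (c, b, w), and the inference error E(t)):

```python
def learning_algorithm_B(D, D_star, center_by_NG, width_by_eq15, sdm_update,
                         mse, theta, T0_max, T_max, n):
    """Skeleton of Learning Algorithm B: VQ sets centers, SDM fine-tunes.

    The callables stand in for Algorithm Center(P), Eq. (15), and
    Algorithm SDM(c, b, w); mse(D, c, b, w) plays the role of E(t).
    """
    t = 1                                    # Step 1: Initialize()
    while True:
        # Step 2: centers by NG on D*, widths by Eq. (15) from those centers.
        c = center_by_NG(D_star, n, T0_max)
        b = width_by_eq15(c)
        # Step 3: update c, b, and w by SDM for T_max learning times.
        c, b, w = sdm_update(D, c, b, T_max)
        # Step 4: terminate once the inference error is below the threshold.
        if mse(D, c, b, w) <= theta:
            return c, b, w
        n, t = n + 1, t + 1
```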

See Ref. [19] for a detailed explanation of pM (x) with an example. Using pM (x), Kishida has proposed the following learning algorithm [13]:

## Learning Algorithm C

θ: threshold of MSE

T<sup>0</sup>max: maximum number of learning times for NG

Tmax: maximum number of learning times for SDM

M: the size of ranges

n: the number of rules

Step 1: Initialize().

Step 2: The probability pM (x) is obtained from Algorithm Prob (pM (x)).

Step 3: Center and width parameters are determined using pM (x) from Algorithm Center (P) and the data set D.

Step 4: Parameters c, b, and w are updated using Algorithm SDM (c, b, w).

Step 5: If E(t) ≤ θ, then the algorithm terminates; else go to Step 3 with n ← n + 1 and t ← 1.
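Algorithm C follows the same outer loop as Algorithm B; the new ingredient is that learning data for the center/width stage are drawn according to pM (x). A small sketch of that selection step, with hypothetical names, follows:

```python
import numpy as np

def select_data_by_pM(X, y, p_M, n_samples, rng=None):
    """Draw learning data with probability p_M(x), as used in Step 3 of
    Algorithm C, so inputs where the output changes quickly are chosen
    more often when determining center and width parameters."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(X), size=n_samples, p=p_M)
    return X[idx], y[idx]
```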

#### 2.4. Determination of weight parameters using the generalized inverse method

The optimum values of the parameters c and b are determined by using pM (x). Then, how can we decide the weight parameters w? We can determine them by treating c, b, and w as an interpolation problem. That is, the membership values for the antecedent part of each rule are computed from c and b, and the weight parameters w are determined by solving the resulting interpolation problem. This method has previously been used to determine the weight parameters of RBF networks [1].

Let us explain fuzzy inference systems and the interpolation problem using the generalized inverse method [1]. The problem can be stated mathematically as follows:

Given P points {x<sup>p</sup> | p ∈ Z<sub>P</sub>} and P real numbers {y<sup>r</sup><sub>p</sub> | p ∈ Z<sub>P</sub>}, find a function f: R<sup>m</sup> → R such that the following condition is satisfied:

$$f(\mathbf{x}^p) = y\_p^r \tag{18}$$

In fuzzy modeling, this problem is solved as follows:

$$y\_p = f(\mathbf{x}^p) = \sum\_{i=1}^{n} w\_i \varphi\_{pi}(\|\mathbf{x}^p - \mathbf{c}\_i\|), \tag{19}$$

$$\varphi\_{pi}(\|\mathbf{x}^{p} - \mathbf{c}\_{i}\|) = \frac{\mu\_{i}}{\sum\_{l=1}^{n} \mu\_{l}}, \quad \mu\_{i} = \prod\_{j=1}^{m} M\_{ij}(x\_{j}), \tag{20}$$

where μ<sub>i</sub> and M<sub>ij</sub> are defined as in Eqs. (2) and (4).

That is,

$$\Phi \mathbf{w} = \mathbf{y}, \tag{21}$$

where Φ = (φ<sub>pi</sub>) (p ∈ Z<sub>P</sub> and i ∈ Z<sub>n</sub>), w = (w<sub>1</sub>, …, w<sub>n</sub>)<sup>T</sup>, and y = (y<sup>r</sup><sub>1</sub>, …, y<sup>r</sup><sub>P</sub>)<sup>T</sup>.

Let P = n and x<sup>i</sup> = c<sub>i</sub>. The width parameters are determined by Eq. (15). Then, if φ<sub>pi</sub>(·) is suitably selected as a Gaussian function, the solution for the weights w is obtained as

$$\mathbf{w} = \Phi^{-1} \mathbf{y}. \tag{22}$$
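For this square case P = n, Eq. (22) can be solved directly; a tiny sketch, assuming a nonsingular Φ and made-up data:

```python
import numpy as np

# Hypothetical square case: P = n = 5 rules, one rule per learning point.
Phi = np.random.rand(5, 5) + np.eye(5)   # assume a nonsingular membership matrix
y = np.random.rand(5)

w = np.linalg.solve(Phi, y)              # Eq. (22): w = Phi^{-1} y
assert np.allclose(Phi @ w, y)           # exact interpolation at the P points
```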

Let us consider the case n < P; this is the realistic case. The optimum solution w<sup>+</sup> that minimizes E = ||y<sup>r</sup> − Φw||<sup>2</sup> can be obtained as follows:

$$\mathbf{w}^{+} = \Phi^{+}\mathbf{y}^{r} \text{ and } E\_{\min} = \left\|(I - \Psi)\mathbf{y}^{r}\right\|^{2}, \tag{23}$$

where Φ<sup>+</sup> ≜ [Φ<sup>T</sup>Φ]<sup>−1</sup>Φ<sup>T</sup>, Ψ ≜ ΦΦ<sup>+</sup>, and I is the P × P identity matrix.

The matrix Φ<sup>+</sup> is called the generalized inverse of Φ. The method using Φ<sup>+</sup> to determine the weights is called the generalized inverse method (GIM).
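As a concrete illustration, here is a minimal NumPy sketch of Eq. (23); the sizes P and n and the random test data are assumptions for this example only, and np.linalg.pinv computes Φ<sup>+</sup> (it remains well defined even when Φ<sup>T</sup>Φ is singular, where the [Φ<sup>T</sup>Φ]<sup>−1</sup>Φ<sup>T</sup> form breaks down).

```python
import numpy as np

# Hypothetical sizes: P = 100 learning points, n = 10 rules.
P, n = 100, 10
Phi = np.random.rand(P, n)        # membership matrix, entries from Eq. (20)
y_r = np.random.rand(P)           # desired outputs y^r

# Eq. (23): w+ = [Phi^T Phi]^{-1} Phi^T y^r, computed via the pseudo-inverse.
Phi_plus = np.linalg.pinv(Phi)
w_plus = Phi_plus @ y_r

# Minimum inference error E_min = ||(I - Phi Phi+) y^r||^2.
E_min = np.linalg.norm(y_r - Phi @ w_plus) ** 2
print(w_plus.shape, E_min)
```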

Using GIM, a decision procedure for the parameters is defined as follows:

Algorithm Weight(c, b)

Input: D = {(x<sup>p</sup>, y<sup>r</sup><sub>p</sub>) | p ∈ Z<sub>P</sub>}

Output: The weight parameters w

Step 1: Calculate μ<sup>p</sup><sub>i</sub> based on Eq. (2).

Step 2: Calculate the matrices Φ and Φ<sup>+</sup> using Eq. (20):

$$\varphi\_{pi}(\|\mathbf{x}^p - \mathbf{c}\_i\|) = \frac{\mu\_i^p}{\sum\_{j=1}^{n} \mu\_j^p}, \quad \mu\_i^p = \prod\_{j=1}^{m} \exp\left(-\frac{1}{2}\left(\frac{x\_j^p - c\_{ij}}{b\_{ij}}\right)^2\right),$$

Step 3: Determine the weight vector w as follows:

$$\mathbf{w} = \Phi^{+}\mathbf{y}^{r} \tag{24}$$
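A minimal NumPy sketch of Algorithm Weight(c, b) follows, assuming the Gaussian membership functions above; the function name and array shapes are illustrative, not from the source.

```python
import numpy as np

def algorithm_weight(X, y_r, c, b):
    """Sketch of Algorithm Weight(c, b): weights by the generalized inverse.

    X : (P, m) input vectors, y_r : (P,) desired outputs,
    c, b : (n, m) center and width parameters of the n rules.
    """
    # Steps 1-2: Gaussian memberships mu_i^p, then the matrix Phi (Eq. (20)).
    diff = (X[:, None, :] - c[None, :, :]) / b[None, :, :]
    mu = np.exp(-0.5 * np.sum(diff ** 2, axis=2))        # shape (P, n)
    Phi = mu / mu.sum(axis=1, keepdims=True)
    # Step 3: w = Phi+ y^r (Eq. (24)).
    return np.linalg.pinv(Phi) @ y_r
```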

#### 2.5. The relation between the proposed algorithm and related works

Let us explain the relation between the proposed method and related works using Figure 2.

1. The fundamental flow of Algorithm A is shown in Figure 2(a). Initial parameters of c, b, and w are set randomly, and all parameters are updated using SDM until the inference error becomes sufficiently small (see Figure 2(a)) [1].

2. The first method using VQ is the one in which both the initial assignment of parameters and the assignment of parameters in the iterating step (see the outer loop of Figure 2(b)) are determined by NG using D<sup>∗</sup>. That is, it is a learning method composed of two stages. The center parameters c are determined from D<sup>∗</sup> by VQ, b is computed by Eq. (15) from the resulting centers, and the weight parameters w are set to the results of SDM, where the initial values of w are set randomly. Further, all parameters are updated using SDM for a fixed number of learning times. In the iterating process, the parameters obtained by SDM are set as the initial ones of the next iteration. The outer iterating process is repeated until the inference error becomes sufficiently small (see Figure 2(b)).

3. The second method using VQ is the same as the first one except that learning data are selected based on pM (x) (see Figure 2(c)). That is, center parameters c are determined by pM (x) using both input and output learning data.

