Step 1: Initialize (…) and the data set D.

Step 2: The probability pM(x) is obtained from algorithm prob(pM(x)).

Step 3: Center and width parameters are determined using pM(x) from Algorithm Center(P).

Step 4: Parameters c, b, and w are updated using Algorithm SDM(c, b, w).

Step 5: If E(t) ≤ θ, then the algorithm terminates; else go to Step 3 with n ← n + 1 and t = 1.
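The sub-procedures referenced in these steps are defined elsewhere in the chapter; the following Python sketch only illustrates the control flow of Steps 2–5. The callables compute_prob, determine_centers, sdm_update, and mse are hypothetical stand-ins for algorithm prob(pM(x)), Algorithm Center(P), Algorithm SDM(c, b, w), and E(t), respectively.

```python
import numpy as np

def learning_loop(D, n, M, theta, compute_prob, determine_centers, sdm_update, mse):
    """Control-flow sketch of Steps 2-5: rules are added until E(t) <= theta."""
    rng = np.random.default_rng()
    while True:
        p_M = compute_prob(D, M)              # Step 2: probability pM(x) for range size M
        c, b = determine_centers(D, p_M, n)   # Step 3: centers/widths via Algorithm Center(P)
        w = rng.standard_normal(n)            # initial w selected randomly (cf. Figure 2)
        c, b, w = sdm_update(D, c, b, w)      # Step 4: update c, b, w by SDM
        if mse(D, c, b, w) <= theta:          # Step 5: stop when the MSE is small enough,
            return c, b, w, n
        n += 1                                # otherwise n <- n + 1 and repeat from Step 3
```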

#### 2.4. Determination of weight parameters using the generalized inverse method

The optimum values of the parameters c and b are determined by using pK(x). How, then, can we decide the weight parameters w? We can determine them by treating c, b, and w as an interpolation problem: the membership values for the antecedent part of the rules are computed from c and b, and the weight parameters w are then determined by solving the interpolation problem. So far, this method has been used to determine the weight parameters of RBF networks [1].

Let us explain fuzzy inference systems and the interpolation problem using the generalized inverse method [1]. This problem can be stated mathematically as follows: Given P points {x^p | p ∈ Z_P}, find a function f: R^m → R such that the following conditions are satisfied:

$$f(\mathbf{x}^p) = y^r_p \tag{18}$$

That is,

$$y_p = f(\mathbf{x}^p) = \sum_{i=1}^{n} w_i \, \varphi_{pi}\!\left(\|\mathbf{x}^p - \mathbf{c}_i\|\right) \tag{19}$$

In fuzzy modeling, this problem is solved as follows:

$$\varphi_{pi}\!\left(\|\mathbf{x}^p - \mathbf{c}_i\|\right) = \frac{\mu_i}{\sum_{I=1}^{n} \mu_I}, \qquad \mu_i = \prod_{j=1}^{m} M_{ij}(x_j), \tag{20}$$

where μ_i and M_ij are defined as in Eqs. (2) and (4). Let P = n and x^i = c_i. The width parameters are determined by Eq. (15). Then, if φ_ij(·) is suitably selected as a Gaussian function, the solution for the weights w is obtained as

$$\boldsymbol{\varphi}\,\mathbf{w} = \mathbf{y}, \tag{21}$$

$$\mathbf{w} = \boldsymbol{\varphi}^{-1}\,\mathbf{y}, \tag{22}$$

where φ = (φ_ij) (i ∈ Z_P and j ∈ Z_n), w = (w_1, …, w_n)^T, and y = (y^r_1, …, y^r_P)^T.

Algorithm Weight(c, b)

Input: D = {(x^p, y^r_p) | p ∈ Z_P}

Output: The weight parameters w

Step 1: Calculate μ_i based on Eq. (2).

Step 2: Calculate the matrix Φ and Φ<sup>+</sup> using Eq. (20):

$$\varphi_{pi}\!\left(\|\mathbf{x}^p - \mathbf{c}_i\|\right) = \frac{\mu_i^p}{\sum_{j=1}^n \mu_j^p}, \qquad \mu_i^p = \prod_{j=1}^m \exp\left(-\frac{1}{2} \left(\frac{x_j^p - c_{ij}}{b_{ij}}\right)^2\right).$$

Step 3: Determine the weight vector w as follows:

$$
\mathbf{w} = \boldsymbol{\Phi}^{+}\,\mathbf{y}^r \tag{24}
$$
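To make Steps 2 and 3 concrete, here is a minimal NumPy sketch of the weight computation. The function name and array shapes are assumptions; the formulas follow Eq. (20) (normalized Gaussian memberships) and Eq. (24) (Moore–Penrose pseudo-inverse).

```python
import numpy as np

def weight_by_gim(X, y, c, b):
    """Sketch of Algorithm Weight(c, b): weights by the generalized inverse method.

    X: (P, m) inputs x^p      y: (P,) targets y^r_p
    c: (n, m) centers c_ij    b: (n, m) widths b_ij
    Returns w with shape (n,).
    """
    # mu_i^p = prod_j exp(-0.5 * ((x_j^p - c_ij) / b_ij)^2)
    z = (X[:, None, :] - c[None, :, :]) / b[None, :, :]   # shape (P, n, m)
    mu = np.exp(-0.5 * z**2).prod(axis=2)                 # shape (P, n)
    # phi_pi = mu_i^p / sum_j mu_j^p  -- Eq. (20)
    Phi = mu / mu.sum(axis=1, keepdims=True)
    # w = Phi^+ y^r  -- Eq. (24), generalized (Moore-Penrose) inverse
    return np.linalg.pinv(Phi) @ y
```

When P = n and x^i = c_i, Φ is square and this reduces to Eq. (22); for P > n the pseudo-inverse gives the least-squares solution.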

#### 2.5. The relation between the proposed algorithm and related works

Let us explain the relation between the proposed method and related works using Figure 2.

Figure 2. Concept of conventional and proposed algorithms: mark 1 means that initial values of w are selected randomly and parameters w are set to the result of SDM after the second step.


### 3. The proposed learning method using VQ

Let us explain the detailed algorithm of Figure 2(d'). The method is called Learning Algorithm D'. It is composed of four techniques, as follows:

1. Determine the initial assignment of c using the probability pK(x).
2. Determine the assignment of the weight parameters w by solving the interpolation problem using GIM.
3. The processes (1) and (2) and the learning steps of SDM using pM(x) are iterated.
4. The optimum value of M is determined by the hill-climbing method [16].

The general scheme of the proposed method is shown in Figure 3, where cmin, bmin, and wmin are the optimal parameters for c, b, and w. The notation used in Figure 3 is as follows (these quantities are also collected in the configuration sketch after the list):

- D and D*: Learning data D = {(x^i, y^r_i) | i ∈ Z_P} and D* = {x^i | i ∈ Z_P}
- Tmax1 and Tmax2: The maximum numbers of learning steps for NG and SDM
- θ and θ1: Thresholds for the MSE and for SDM
- ΔM: The rate of change of the range
- E(t): The MSE of the inference error at step t
- M0, Mmax: The initial and final sizes of the range
- Emin: The minimum MSE of E for the rule number
- n: The number of rules
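A minimal sketch, assuming Python, that collects these quantities in one place; the field names mirror the notation above, and no values from the chapter are implied.

```python
from dataclasses import dataclass

@dataclass
class LearningConfig:
    T_max1: int     # maximum learning time for NG
    T_max2: int     # maximum learning time for SDM
    theta: float    # threshold for the MSE E(t)
    theta1: float   # threshold for SDM
    delta_M: float  # rate of change of the range M
    M0: float       # initial size of the range
    M_max: float    # final size of the range
    n: int          # number of rules
```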

The proposed method of Figure 3 consists of five phases. In the first phase, all values for the algorithm are initialized. In the second phase, the probability pM(x) is determined for the range size M. In the third phase, the parameters c are determined by NG using pM(x), and the parameters b are computed from c. In the fourth phase, the parameters w are determined by Algorithm Weight(c, b). In the fifth phase, all parameters are updated by SDM using pM(x). The optimum number n<sup>∗</sup> of rules and the optimum range size M<sup>∗</sup> are determined as shown in Figure 4: the range M is adjusted for a fixed number n of rules, and the values of n<sup>∗</sup> and M<sup>∗</sup> that give the minimum MSE are selected. Note that Learning Algorithm D is the same method as Learning Algorithm D' except for the step marked "\*" in Figure 3: in the SDM learning steps of Learning Algorithm D, the learning data are selected randomly (see Figure 2(d)).
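The adjustment of M for a fixed n is only summarized here (the details are in Figure 4 and [16]); the following is one plausible hill-climbing reading of that outer loop, where train_for_range is a hypothetical function that runs the five phases for a fixed M and returns the resulting MSE and parameters.

```python
def optimize_range(train_for_range, M0, M_max, delta_M):
    """Hill-climbing sketch over the range size M (one reading of Figure 4)."""
    M_best = M0
    E_best, params_best = train_for_range(M_best)
    M = M_best + delta_M
    while M <= M_max:
        E, params = train_for_range(M)
        if E >= E_best:                 # no improvement: stop climbing
            break
        M_best, E_best, params_best = M, E, params
        M += delta_M
    return M_best, E_best, params_best
```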

Likewise, we also propose improved methods for the algorithms of Figure 2(a)–(c). In the SDM learning process of algorithms (a), (b), and (c), the learning data are selected randomly, whereas in the proposed methods they are selected based on pM(x). These algorithms are denoted (a'), (b'), and (c').
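The difference between (a)–(d) and (a')–(d') thus amounts to how the learning datum is drawn at each SDM step. A minimal sketch, assuming pM(x) has already been evaluated at the P learning points and normalized to sum to 1:

```python
import numpy as np

def select_learning_index(P, p_M=None, rng=None):
    """Pick the index of the next learning datum for an SDM update.

    p_M is an optional length-P probability vector (values of pM(x) at the
    learning data, normalized). Without it, selection is uniformly random as
    in the conventional algorithms (a)-(d); with it, selection follows pM(x)
    as in the proposed algorithms (a')-(d').
    """
    rng = rng or np.random.default_rng()
    if p_M is None:
        return int(rng.integers(P))      # conventional: random selection
    return int(rng.choice(P, p=p_M))     # proposed: selection based on pM(x)
```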





