Learning Algorithm A

Step A1: The threshold θ for the inference error and the maximum number of learning iterations Tmax are set. Let n0 be the initial number of rules. Let t = 1.

Step A2: The parameters bij, cij, and wi are set randomly.

Step A3: Let p = 1.

Step A4: A datum (x_1^p, ⋯, x_m^p, y^p) ∈ D is given.

Step A5: From Eqs. (2) and (3), μi and y∗ are computed.

Step A6: The parameters wi, cij, and bij are updated by Eqs. (6), (7), and (8).

Step A7: If p = P, then go to Step A8; if p < P, then go to Step A4 with p ← p + 1.

Step A8: Let E(t) be the inference error at step t, calculated by Eq. (5). If E(t) > θ and t < Tmax, then go to Step A3 with t ← t + 1; if E(t) ≤ θ and t ≤ Tmax, then the algorithm terminates.

Step A9: If t > Tmax and E(t) > θ, then go to Step A2 with n ← n + 1 and t = 1.
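Steps A2–A9 form a steepest-descent training loop nested inside an outer loop that adds a rule on failure. The following sketch is illustrative only: Eqs. (2), (3), and (5)–(8) are not reproduced in this section, so the Gaussian membership function, the weighted-average output, and the gradient updates below are assumptions based on the standard simplified fuzzy inference model, and the learning rate K and all function names are hypothetical.

```python
import numpy as np

# A minimal sketch of Learning Algorithm A. The membership function and
# the update rules are ASSUMED (standard simplified fuzzy model with
# Gaussian membership); Eqs. (2), (3), (5)-(8) are not shown in this section.

def infer(x, c, b, w):
    # Assumed Eq. (2): mu_i = prod_j exp(-(x_j - c_ij)^2 / b_ij)
    mu = np.exp(-(((x - c) ** 2) / b).sum(axis=1))
    # Assumed Eq. (3): y* = sum_i mu_i w_i / sum_i mu_i
    return mu, (mu * w).sum() / (mu.sum() + 1e-12)

def learn_A(X, Y, n0, theta, t_max, K=0.01, seed=0):
    rng = np.random.default_rng(seed)
    P, m = X.shape
    n = n0
    while True:                                      # outer loop over Step A9
        c = rng.uniform(X.min(), X.max(), (n, m))    # Step A2: random init
        b = rng.uniform(0.5, 1.0, (n, m))
        w = rng.uniform(-1.0, 1.0, n)
        for t in range(1, t_max + 1):                # Steps A3-A8
            for p in range(P):                       # Steps A4-A7: each datum
                x, y = X[p], Y[p]
                mu, y_star = infer(x, c, b, w)       # Step A5
                e = y_star - y
                g = mu / (mu.sum() + 1e-12)
                gc = e * g[:, None] * (w[:, None] - y_star)
                w -= K * e * g                       # Step A6: assumed Eqs. (6)-(8)
                c -= K * gc * 2 * (x - c) / b
                b -= K * gc * (x - c) ** 2 / b ** 2
            E = sum((infer(X[p], c, b, w)[1] - Y[p]) ** 2
                    for p in range(P)) / (2 * P)     # Step A8: E(t), assumed Eq. (5)
            if E <= theta:
                return c, b, w                       # E(t) <= theta: terminate
        n += 1                                       # Step A9: add a rule, retry
```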

Let us introduce the neural gas method as follows [18]. Let the probability of v selected from V be denoted by p(v). The evaluation function for the partition is defined by

E = Σ_{i=1}^{r} (1/n_i) Σ_{v∈V_i} ‖v − c_i‖²,  (11)

where n_i = |V_i|.
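Assuming each V_i is the set of inputs whose nearest reference vector is c_i, Eq. (11) can be evaluated directly; a minimal sketch (the function name and array layout are illustrative):

```python
import numpy as np

def partition_error(V, c):
    # Eq. (11): E = sum_i (1/n_i) * sum_{v in V_i} ||v - c_i||^2.
    # V: (N, m) input vectors; c: (r, m) reference vectors.
    # Each V_i is assumed to be the Voronoi region of c_i.
    labels = np.argmin(((V[:, None, :] - c[None, :, :]) ** 2).sum(-1), axis=1)
    E = 0.0
    for i in range(len(c)):
        Vi = V[labels == i]
        if len(Vi):                                  # n_i = |V_i|
            E += ((Vi - c[i]) ** 2).sum() / len(Vi)
    return E
```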

For any input data vector v, the neighborhood ranking c_{ik} for k ∈ Z∗_{r−1} is determined, c_{ik} being the reference vector for which there are k vectors c_j with

‖v − c_j‖ < ‖v − c_{ik}‖.  (12)

Let the number k associated with each vector c_i be denoted by k_i(v, c_i). Then, the adaptation step for adjusting the parameters is given by

Δc_i = ε · h_λ(k_i(v, c_i)) · (v − c_i),  (13)

h_λ(k_i(v, c_i)) = exp(−k_i(v, c_i)/λ),  (14)

where ε ∈ [0, 1] and λ > 0.

The flowchart of the conventional neural gas algorithm is shown in Figure 1 [18], where ε_int and ε_fin are learning constants and T_max2 is the maximum number of learning iterations. The method is called learning algorithm NG.
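The ranking and adaptation of Eqs. (12)–(14) map directly to code. In this sketch, the exponential decay of ε and λ from initial to final values over T_max2 steps stands in for Figure 1, which is not reproduced here, and p(v) is assumed uniform; all names and default constants are illustrative.

```python
import numpy as np

def neural_gas(V, r, t_max2, eps_int=0.5, eps_fin=0.005,
               lam_int=10.0, lam_fin=0.01, seed=0):
    # Learning algorithm NG. The annealing schedule below is an
    # assumption standing in for Figure 1; p(v) is taken as uniform.
    rng = np.random.default_rng(seed)
    c = V[rng.choice(len(V), r, replace=False)].astype(float)
    for t in range(t_max2):
        frac = t / t_max2
        eps = eps_int * (eps_fin / eps_int) ** frac     # ε: eps_int -> eps_fin
        lam = lam_int * (lam_fin / lam_int) ** frac     # λ: lam_int -> lam_fin
        v = V[rng.integers(len(V))]                     # draw v from V
        d = ((c - v) ** 2).sum(axis=1)
        k = np.argsort(np.argsort(d))                   # ranking k_i(v, c_i), Eq. (12)
        c += eps * np.exp(-k / lam)[:, None] * (v - c)  # Eqs. (13)-(14)
    return c
```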

Let D∗ = {(x_1^p, ⋯, x_m^p) | p ∈ Z_P}, and let p(x) denote the probability of x selected for x ∈ D∗. Using the set D∗, a decision procedure for the center and width parameters is given as follows:

Algorithm Center (c)

Step 1: By using p(x) for x ∈ D∗, the NG method of Figure 1 [16, 18] is performed. As a result, the set C of reference vectors for D∗ is determined, where |C| = n.

Step 2: Each value of the center parameters is assigned to a reference vector. Let

b_ij = (1/n_i) Σ_{x_k∈C_i} (c_ij − x_kj)²,  (15)

where C_i and n_i are the set and the number of learning data belonging to the ith cluster C_i, C = ∪_{i=1}^{r} C_i, and n = Σ_{i=1}^{r} n_i.

As a result, the center and width parameters are determined from Algorithm Center (c).
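Algorithm Center (c) thus reduces to running NG on D∗ and estimating the width parameters from the resulting clusters via Eq. (15). A sketch under the same assumptions (uniform p(x), clusters formed by nearest reference vector), reusing the neural_gas function above:

```python
import numpy as np

def algorithm_center(D_star, n, t_max2):
    # Step 1: NG on D_star gives the n reference vectors (centers).
    c = neural_gas(D_star, n, t_max2)
    # Assign each datum to its nearest reference vector: cluster C_i (assumed).
    labels = np.argmin(((D_star[:, None, :] - c[None, :, :]) ** 2).sum(-1), axis=1)
    # Step 2: widths b_ij = (1/n_i) * sum_{x_k in C_i} (c_ij - x_kj)^2, Eq. (15).
    b = np.zeros_like(c)
    for i in range(n):
        Ci = D_star[labels == i]
        if len(Ci):                                  # n_i = |C_i|
            b[i] = ((c[i] - Ci) ** 2).mean(axis=0)
    return c, b
```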

In particular, Algorithm SDM is defined as follows:
