Fuzzy Inference System – Theory and Applications

**2. From CMACs to fuzzy CMAC models**

**2.1 The traditional CMAC models**

As mentioned in the previous section, the traditional CMAC model (Albus, 1975a, 1975b) has fast learning ability and good local generalization capability for approximating nonlinear functions. The basic idea of the CMAC is to store learned data in overlapping regions so that the data can be recalled easily while using less storage space. The way the CMAC stores weight information is similar to the way the human cerebellum does. Take a two-dimensional (2-D) input vector, i.e., the so-called two-dimensional CMAC (2-D CMAC), as an example; its structure is shown in Fig. 1. The input vector is defined by two input variables, *s*1 and *s*2, each of which is quantized into three discrete regions, called blocks. Note that the width of the blocks affects the generalization capability of the CMAC. In the first quantization, *s*1 and *s*2 are divided into blocks A, B, and C and blocks a, b, and c, respectively. The areas Aa, Ab, Ac, Ba, Bb, Bc, Ca, Cb, and Cc formed by the quantized regions are called hypercubes. When each block is shifted by a small interval, different hypercubes are obtained. In Fig. 1, 27 hypercubes are used to distinguish 49 different states in the 2-D CMAC. For example, the state (*s*1, *s*2) = (2, 2) addresses three hypercubes, two of which are Bb and Ee; only these three hypercubes are set to 1, and the others are set to 0.

Fig. 1. Structure of a 2-D CMAC

In the work of Mohajeri et al. (2009a), hash coding and a comparison of CMACs with other neural networks were reviewed. Thirteen architectural improvements are discussed in (Mohajeri et al., 2009a), including interpolation, various transfer functions (such as sigmoid, basic spline, general basis, and wavelet functions), weighted regression, optimized weight smoothing, recurrence, and generalization issues. Learning techniques such as neighborhood sequential training and random training were also reviewed.

**2.2 Fuzzy CMAC models**
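As background, the block-shifting and hypercube-addressing scheme of the traditional CMAC described above can be sketched as follows. This is a minimal illustration, not code from the cited papers; the class and parameter names are assumptions. With three shifted layers and three blocks per dimension over a 7-by-7 state space, it reproduces the 27 hypercubes and 49 states of the Fig. 1 example.

```python
import numpy as np

class CMAC2D:
    """Minimal 2-D CMAC sketch: shifted quantization layers, one weight per hypercube."""

    def __init__(self, n_layers=3, block_width=3, n_states=7, lr=0.1):
        self.n_layers = n_layers        # number of shifted quantizations
        self.block_width = block_width  # block width (controls generalization)
        self.lr = lr                    # learning rate for the LMS update
        n_blocks = n_states // block_width + 1
        # one weight per hypercube: layers x blocks(s1) x blocks(s2)
        self.w = np.zeros((n_layers, n_blocks, n_blocks))

    def _address(self, s1, s2):
        """Yield the (layer, i, j) hypercube hit in each shifted layer."""
        for k in range(self.n_layers):
            # shift the block boundaries by k units before quantizing
            yield k, (s1 + k) // self.block_width, (s2 + k) // self.block_width

    def predict(self, s1, s2):
        # CMAC output: sum of the weights of the addressed hypercubes
        return sum(self.w[k, i, j] for k, i, j in self._address(s1, s2))

    def train(self, s1, s2, target):
        # spread the LMS correction equally over the addressed hypercubes
        err = target - self.predict(s1, s2)
        for k, i, j in self._address(s1, s2):
            self.w[k, i, j] += self.lr * err / self.n_layers
```

Because neighboring states share some of their addressed hypercubes, training at one state also raises the output at nearby states, which is exactly the local generalization property noted above.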

Many researchers have integrated the fuzzy concept into the CMAC network, such as in (Chen 2001; Dai et al., 2010; Guo et al., 2002; Jou, 1992; Ker et al., 1997; Lai & Wong, 2001; Lee & Lin, 2005; Lee et al., 2007a; Lee et al., 2007b; Lin & Lee, 2008; Lin & Lee, 2009; Lin et al., 2008; Peng & Lin, 2011; Wang, 1994; Zhang & Qian, 2000). In general, they use membership functions rather than basis functions, and the resulting structures are then called fuzzy CMACs (FCMACs).
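The core change can be sketched as follows: the crisp 0/1 hypercube indicator of the traditional CMAC is replaced by a membership degree in [0, 1]. The sketch below assumes Gaussian membership functions and weighted-average defuzzification; the cited FCMAC variants differ in these choices, so this is illustrative only.

```python
import numpy as np

def gaussian(x, center, width):
    """Gaussian membership degree of x in a fuzzy block."""
    return np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def fcmac_output(s, centers, widths, weights):
    """s: 2-D input; centers, widths: (n_hypercubes, 2); weights: (n_hypercubes,).

    Firing strength of a hypercube is the product of its per-dimension
    memberships; the output is a weighted average of the stored weights.
    """
    act = np.array([gaussian(s[0], c[0], w[0]) * gaussian(s[1], c[1], w[1])
                    for c, w in zip(centers, widths)])
    # weighted-average defuzzification (small epsilon guards division by zero)
    return float(act @ weights / (act.sum() + 1e-12))
```

Unlike the binary CMAC, every hypercube contributes in proportion to how well the input matches it, which smooths the output surface.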

In addition, Mohajeri et al. (2009b) provide a review of FCMACs covering more than 23 related aspects, such as membership functions, layered memory structures, defuzzification, and fuzzy systems. Although FCMACs already reduce the memory requirement of the CMAC, clustering approaches (such as fuzzy C-means, discrete incremental clustering, and Bayesian Ying-Yang) and hierarchical approaches for further reducing the memory size of FCMACs themselves are also surveyed in (Mohajeri et al., 2009b). Furthermore, as categorized in (Dai et al., 2010), FCMAC architectures fall into two classes, namely forward and feedback fuzzy neural networks, which gives beginners a useful big picture of the basic FCMAC concepts.
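To make the clustering idea concrete, a plain fuzzy C-means routine is sketched below: similar training points (or hypercube cells) are grouped around a small number of centers so that fewer memory cells are needed. This is textbook FCM, not the exact procedure of any paper cited above.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy C-means: returns cluster centers and the membership matrix U.

    X: (n_points, dim) data; m > 1 is the fuzzifier controlling overlap.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n_clusters, n))
    U /= U.sum(axis=0)  # memberships of each point sum to 1
    for _ in range(n_iter):
        Um = U ** m
        # centers: membership-weighted means of the data
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # distances of every point to every center
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # standard FCM membership update
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=0)
    return centers, U
```

In an FCMAC context, each resulting center would serve as the prototype of one shared memory cell in place of many individual hypercubes.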

In the following sections, the two example models of this chapter, the self-constructing FCMAC (SC-FCMAC; Lee et al., 2007a) and the parametric FCMAC (P-FCMAC; Lin & Lee, 2009), are reviewed in order to give readers insight into how these FCMACs work. Together with their architectures and learning schemes, illustrative examples of system identification are provided as well.
