**2.1 The traditional CMAC models**

As mentioned in the previous section, the traditional CMAC model (Albus, 1975a, 1975b) learns quickly and generalizes well locally when approximating nonlinear functions. The basic idea of the CMAC model is to store learned data in overlapping regions in such a way that the data can easily be recalled while using less storage space. The way the CMAC stores weight information resembles the operation of the human cerebellum. Consider a two-dimensional (2-D) input vector, i.e., the so-called two-dimensional CMAC (2-D CMAC), whose structure is shown in Fig. 1. The input vector is defined by two input variables, *s*1 and *s*2, each of which is quantized into three discrete regions, called blocks. It should be noted that the width of the blocks affects the generalization capability of the CMAC. In the first layer of quantization, *s*1 is divided into blocks A, B, and C, and *s*2 into blocks a, b, and c. The areas Aa, Ab, Ac, Ba, Bb, Bc, Ca, Cb, and Cc formed by the quantized regions are called hypercubes. Shifting each block by a small interval produces further layers of different hypercubes. In Fig. 1, 27 hypercubes are used to distinguish 49 different states in the 2-D CMAC. For example, the state (*s*1, *s*2) = (2, 2) addresses one hypercube in each shifted layer (e.g., Bb in the first layer and Ee in the second). Only these three hypercubes are set to 1, and the others are set to 0.

Fig. 1. Structure of a 2-D CMAC
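The addressing scheme above can be sketched in a few lines of Python. This is a minimal illustration, not the chapter's implementation: the layer count, block width, and learning rate are assumptions chosen to match the figure (3 shifted layers of 3×3 blocks, giving 27 hypercubes over 49 states), and the LMS-style update that spreads the error over the active hypercubes is the standard CMAC training rule.

```python
# Sketch of 2-D CMAC addressing and training (illustrative parameters,
# chosen to match Fig. 1: 27 hypercubes distinguishing 49 states).
N_LAYERS = 3   # shifted quantization layers
BLOCK_W  = 3   # block width (affects local generalization)

# One weight per hypercube, stored sparsely: (layer, block1, block2) -> weight.
weights = {}

def active_hypercubes(s1, s2):
    """Each state (s1, s2) in 0..6 activates exactly one hypercube per layer;
    layer l is shifted by l quantization intervals."""
    return [(l, (s1 + l) // BLOCK_W, (s2 + l) // BLOCK_W)
            for l in range(N_LAYERS)]

def output(s1, s2):
    # CMAC output: sum of the weights of the addressed hypercubes.
    return sum(weights.get(idx, 0.0) for idx in active_hypercubes(s1, s2))

def train(s1, s2, target, lr=0.5):
    # LMS update: distribute the output error equally over the
    # active hypercubes (assumed learning rule for this sketch).
    err = target - output(s1, s2)
    for idx in active_hypercubes(s1, s2):
        weights[idx] = weights.get(idx, 0.0) + lr * err / N_LAYERS
```

Because nearby states share some of their hypercubes while distant states share none, training at (2, 2) also raises the output at (2, 3) but leaves (6, 6) untouched, which is exactly the fast local generalization noted above.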

In the work of Mohajeri et al. (2009a), hash coding for CMACs and a comparison of CMACs with other neural networks were reviewed. Thirteen architectural improvements are discussed in (Mohajeri et al., 2009a), including interpolation, various transfer functions (such as sigmoid, basic spline, general basis, and wavelet functions), weighted regression, optimized weight smoothing, recurrent structures, and the generalization issue. Learning techniques such as neighborhood sequential training and random training were considered first, and the development of five further training schemes (i.e., credit assignment, gray relational, error norm, active deformable, and Tikhonov schemes) was discussed as well. To reduce memory usage, hierarchical and self-organizing CMACs have been proposed; the fuzzy variant of the self-organizing CMAC is presented in the following section of this chapter.
