**3. Structure of wavelet neural network (WNN)**

The arrangement of a WNN is similar to that of a conventional neural network. The hidden layer contains neurons whose activation functions are derived from a wavelet basis. These wavelet neurons are generally referred to as wavelons; their input parameters include the wavelet dilation and translation factors [12, 14]. Wavelet networks can be categorized into recurrent and nonrecurrent (feedforward) types.

### **3.1 Feedforward wavelet neural network (FFWNN)**

The FFWNN has no feedback connections; that is, the output is computed directly from the input through feedforward connections [12]. There are two arrangements of feedforward wavelet networks:

*Wavelet Neural Networks for Speed Control of BLDC Motor DOI: http://dx.doi.org/10.5772/intechopen.91653*

**Figure 1.** *The structure of the radial basis wavelet neural network (RBWNN).*

### *3.1.1 Radial basis wavelet neural network (RBWNN)*

The radial basis wavelet neural network (RBWNN) is the simplest form of wavelet network [15, 16]; its arrangement is shown in **Figure 1**. This network approximates a desired signal f(t) by a linear combination of a group of daughter wavelets ψa,b, where ψa,b are created by dilation a and translation b of the mother wavelet ψ [11, 17].

$$\psi_{a,b}(x) = \psi\left(\frac{x-b}{a}\right) \tag{1}$$

The network output is specified as follows [10, 18]:

$$y = \sum_{n=1}^{N} w_n \, \psi_{a_n,b_n}(x) \tag{2}$$

where x is the input signal, N is the number of neurons in the hidden layer, and wn are the output weights. The network parameters wn, an, and bn can be trained and optimized by any optimization technique. In this chapter, the PSO algorithm is used to minimize the error according to the fitness function, as will be demonstrated later.
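As a minimal sketch of Eqs. (1) and (2), the RBWNN output can be computed in a few lines of NumPy. The Mexican-hat mother wavelet and the specific parameter values below are illustrative assumptions; in the chapter, the weights, dilations, and translations are tuned by PSO.

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat (Ricker) mother wavelet, one common choice for WNNs."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def rbwnn_output(x, w, a, b):
    """RBWNN output per Eqs. (1)-(2): y = sum_n w_n * psi((x - b_n) / a_n)."""
    psi = mexican_hat((x - b) / a)  # daughter wavelets, one per wavelon
    return float(np.dot(w, psi))

# Illustrative parameters for a network with three wavelons.
w = np.array([0.5, -0.3, 0.8])   # output weights w_n
a = np.array([1.0, 2.0, 0.5])    # dilations a_n
b = np.array([0.0, 1.0, -1.0])   # translations b_n
y = rbwnn_output(0.5, w, a, b)
```

Each wavelon evaluates a scaled and shifted copy of the same mother wavelet, so the hidden layer plays the role of the radial basis functions in an RBF network.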

#### *3.1.2 Conventional wavelet neural network*

The conventional WNN is a generalization of the radial basis wavelet neural network [19]. **Figure 2** depicts the structure of the conventional wavelet network; the numbers of hidden layers and neurons are selected to create an appropriate WNN, and the parameters are optimized by the PSO algorithm. The input layer can be represented by a vector x = [x1, x2, …, xM], the output layer by a vector y = [y1, y2, …, yK], and the activation function of the hidden layer is the wavelet basis function. The output Yj is given as follows [11, 19]:

$$Y_j = \sigma(u_j) = \sigma\!\left[\sum_{n=1}^{N} w_{j,n}\, \psi_{a_n,b_n}\!\left(\sum_{m=1}^{M} v_{n,m}\, x_m\right) + g\right] \tag{3}$$

where j = 1, 2, …, K; M is the number of inputs; K is the number of outputs; N is the number of hidden-layer neurons; g is the bias; and σ(u) is the activation function of the output layer. The most common activation function is the sigmoid, which is given as follows [12]:

$$\sigma(u) = \frac{1}{1 + e^{-u}} \tag{4}$$
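The forward pass of Eqs. (3) and (4) can be sketched as below. The Mexican-hat wavelet and all parameter values are illustrative assumptions; as in the chapter, these parameters would be optimized by PSO rather than fixed by hand.

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat mother wavelet (an assumed, common choice)."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def sigmoid(u):
    """Output-layer activation, Eq. (4)."""
    return 1.0 / (1.0 + np.exp(-u))

def wnn_output(x, V, W, a, b, g):
    """Conventional WNN forward pass, Eq. (3).

    x : (M,)   input vector
    V : (N, M) input weights v_{n,m}
    W : (K, N) output weights w_{j,n}
    a, b : (N,) wavelet dilations and translations
    g : scalar bias
    """
    s = V @ x                        # weighted input sum for each wavelon
    psi = mexican_hat((s - b) / a)   # wavelet activations psi_{a_n,b_n}
    return sigmoid(W @ psi + g)      # Y_j: sigmoid output layer, Eq. (4)

# Illustrative network with M = 2 inputs, N = 3 wavelons, K = 1 output.
x = np.array([0.2, -0.4])
V = np.array([[0.5, 0.1], [0.3, -0.2], [-0.1, 0.4]])
W = np.array([[0.6, -0.2, 0.9]])
a = np.array([1.0, 0.5, 2.0])
b = np.array([0.0, 0.1, -0.3])
Y = wnn_output(x, V, W, a, b, g=0.05)
```

Compared with the RBWNN, the extra input-weight matrix V lets each wavelon act on a learned linear combination of the inputs rather than on the raw input directly.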

#### **3.2 Recurrent wavelet neural networks (RWNNs)**

In recurrent WNNs, the output depends not only on the present inputs of the network but also on its prior outputs or states [12, 15]. Recurrent networks contain feedback and are therefore also known as feedback networks. Several types of recurrent networks exist, differing in their feedback connections [12–22].

In recurrent wavelet network structures, the network input includes delayed samples of the system output y(k); the number of inputs grows with the order of the system being modeled. **Figure 3** depicts the structure of the recurrent wavelet network. The output of each layer can be calculated as [20, 23, 24]:

$$\psi_n = \psi\left(\frac{u_n - b_n}{a_n}\right) \tag{5}$$

where an and bn are the dilation and translation parameters of the wavelets, respectively. The input of this layer at time k can be denoted as:

$$u_n(k) = x_n(k) + \psi_n(k-1) \times \varnothing_n \tag{6}$$

where ∅n represents the weight of the self-feedback loop. The output of the network is given as follows:

$$y = \sum_{n=1}^{N} w_n\, \psi\left(\frac{u_n - b_n}{a_n}\right) \tag{7}$$

$$u_n(k) = x(k - D_i) + y(k - D_o) \times r_n \tag{8}$$

**Figure 2.** *The structure of the conventional wavelet neural network.*


**Figure 3.** *The recurrent wavelet neural network.*

where x is the input signal, N is the number of neurons in the hidden layer, wn are the output weights, Di and Do are the numbers of delays for the network input and output, and rn is the weight of the output feedback loop.
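One time step of the recurrent network in Eqs. (5)–(8) can be sketched as below. This is a simplified illustration: it assumes unit delays (Di = Do = 1), a scalar input broadcast to all wavelons, a Mexican-hat mother wavelet, and hand-picked parameter values; in the chapter these parameters are tuned by PSO.

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat mother wavelet (an assumed, common choice)."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

class RecurrentWNN:
    """One-step recurrent wavelet network following Eqs. (5)-(8)."""

    def __init__(self, w, a, b, self_fb, out_fb):
        self.w = np.asarray(w, dtype=float)          # output weights w_n
        self.a = np.asarray(a, dtype=float)          # dilations a_n
        self.b = np.asarray(b, dtype=float)          # translations b_n
        self.self_fb = np.asarray(self_fb, float)    # self-feedback weights, Eq. (6)
        self.out_fb = np.asarray(out_fb, float)      # output-feedback weights r_n, Eq. (8)
        self.psi_prev = np.zeros_like(self.w)        # psi_n(k-1), initially zero
        self.y_prev = 0.0                            # y(k-1), initially zero

    def step(self, x):
        # Eqs. (6) and (8): wavelon input = current input + fed-back past activity
        u = x + self.psi_prev * self.self_fb + self.y_prev * self.out_fb
        # Eq. (5): wavelet activation of each wavelon
        psi = mexican_hat((u - self.b) / self.a)
        # Eq. (7): network output
        y = float(self.w @ psi)
        self.psi_prev, self.y_prev = psi, y
        return y

# Illustrative two-wavelon network driven for two time steps.
net = RecurrentWNN(w=[1.0, 0.5], a=[1.0, 1.0], b=[0.0, 0.0],
                   self_fb=[0.1, 0.1], out_fb=[0.2, 0.2])
y1 = net.step(0.0)   # feedback terms are zero on the first step
y2 = net.step(0.0)   # now depends on psi(k-1) and y(k-1)
```

The second call produces a different output for the same input, which is exactly the memory effect that distinguishes the recurrent structure from the feedforward networks of Section 3.1.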
