3.3.4. Selection of a BPANN topology

| Case | NH | NI-NH-NO | Soil E (Error%) | Soil ν (Error%) | CLSM-B80/30% E (Error%) | CLSM-B80/30% ν (Error%) | CLSM-B130/30% E (Error%) | CLSM-B130/30% ν (Error%) |
|------|----|----------|-----------------|-----------------|--------------------------|--------------------------|---------------------------|---------------------------|
| 1 | 2 | 4-2-2 | 0.2088 (108.80%) | 0.3007 (0.22%) | 0.2327 (13.80%) | 0.2498 (0.08%) | 0.9413 (8.20%) | 0.2499 (0.04%) |
| 2 | 4 | 4-4-2 | 0.1562 (56.19%) | 0.2878 (4.06%) | 0.3097 (14.69%) | 0.2453 (1.87%) | 0.8903 (2.33%) | 0.2498 (0.09%) |
| 3 | 6 | 4-6-2 | 0.1653 (65.35%) | 0.2985 (0.50%) | 0.3238 (19.94%) | 0.2491 (0.36%) | 0.8859 (1.83%) | 0.2492 (0.31%) |
| 4 | 8 | 4-8-2 | 0.0811 (11.91%) | 0.3015 (0.51%) | 0.2392 (11.43%) | 0.2507 (0.29%) | 0.8566 (1.54%) | 0.2508 (0.34%) |
| 5 | 10 | 4-10-2 | 0.1174 (17.39%) | 0.2992 (0.28%) | 0.2777 (2.86%) | 0.2531 (1.23%) | 0.8833 (1.53%) | 0.2542 (1.69%) |
| Experimental values | – | – | 0.1 | 0.3 | 0.27 | 0.25 | 0.87 | 0.25 |

Table 8. Effect of the number of neurons of the hidden layer on the prediction accuracy of engineering constants using BPANN.

After some parametric studies, a BPANN with the 4-6-2 topology trained by the LM algorithm is proposed for the current parameter recognition problem. The convergence history in Figure 8 shows a very fast decay of the RMSE. Figure 9 shows the QQ plot of the tested and predicted results during the testing stage after the training process. The R<sup>2</sup> values for the Young's modulus and Poisson's ratio are 0.99454 and 0.99864, respectively, reflecting that the training is successful.

Figure 8. Convergence of RMSE with iteration of the finally selected BPANN (trainlm, NI = 4, NH = 6, NO = 2).
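As a point of reference, the selected network can be reproduced in a few lines of MATLAB. The following is a minimal sketch, assuming the training inputs are collected in a 4-by-S matrix `X` and the targets (E and ν) in a 2-by-S matrix `T`; the variable names and stopping settings are illustrative, not taken from the chapter.

```matlab
% Minimal sketch: the selected 4-6-2 BPANN trained with Levenberg-Marquardt.
% X (4-by-S inputs) and T (2-by-S targets [E; nu]) are assumed placeholders.
net = feedforwardnet(6, 'trainlm');    % NH = 6; NI and NO are set by the data
net.trainParam.epochs = 1000;          % illustrative stopping criteria
net.trainParam.goal   = 1e-6;
net = train(net, X, T);                % RMSE decays as in Figure 8
Y    = net(X);                         % predictions for the QQ plot (Figure 9)
rmse = sqrt(mean((T(:) - Y(:)).^2));   % root-mean-square error
```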

4. Parameter recognition using RBFNN

#### 4.1. Application of RBFNN for parameter recognition

Figure 10 demonstrates a typical RBFNN for the identification of the engineering constants of CLSM. The RBFNN shown in Figure 10 contains a single output, that is, one engineering constant at a time (y<sub>1</sub> = E or y<sub>2</sub> = ν), L input neurons (x<sub>i</sub>, i = 1, 2, ⋯, L), and M hidden neurons (h<sub>j</sub>, j = 1, 2, ⋯, M). Suppose we have S samples for training; we can then select M ≤ S. The predicted output can be expressed as [35–37]:

$$y\_k = F\_k(\mathbf{x}\_i) = \sum\_{j=1}^{M} w\_{kj} \, K(\left\|\mathbf{x}\_i - \mathbf{C}\_j\right\|) = \sum\_{j=1}^{M} w\_{kj} \, e^{-\frac{1}{2\sigma\_j^2} \left\|\mathbf{x}\_i - \mathbf{C}\_j\right\|^2} \tag{12}$$

where the kernel function $K(\left\|\mathbf{x}\_i - \mathbf{C}\_j\right\|) = e^{-\frac{1}{2\sigma\_j^2} \left\|\mathbf{x}\_i - \mathbf{C}\_j\right\|^2}$ is the Gaussian basis function.
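To make Eq. (12) concrete, the forward pass can be written out directly. The sketch below assumes centers stored column-wise in `C` (L-by-M), widths in `sigma` (1-by-M), and output weights in `w` (1-by-M); all names are illustrative.

```matlab
% Minimal sketch of Eq. (12): RBFNN output for a single input vector x (L-by-1).
function y = rbf_forward(x, C, sigma, w)
    d2 = sum((C - x).^2, 1);         % squared distances ||x - C_j||^2, 1-by-M
    h  = exp(-d2 ./ (2*sigma.^2));   % Gaussian basis functions K(||x - C_j||)
    y  = w * h.';                    % weighted sum over the M hidden neurons
end
```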

Figure 10. Schematic of an RBFNN topology for single-parameter recognition.

#### 4.2. Learning algorithms for RBFNN for parameter recognition

There exist many approaches for determining the network parameters (C<sub>j</sub>, σ<sub>j</sub>, w<sub>kj</sub>) of an RBFNN. For a parallel comparison, the supervised learning algorithm, that is, the generalized delta rule based on the method of steepest descent, is used here; it can be expressed as [35–37]:

$$\begin{aligned} w\_{kj}(p+1) &= w\_{kj}(p) - \eta \frac{\partial E}{\partial w\_{kj}}(p) + \mu \, w\_{kj}(p) \\ \mathbf{C}\_j(p+1) &= \mathbf{C}\_j(p) - \eta \frac{\partial E}{\partial \mathbf{C}\_j}(p) + \mu \, \mathbf{C}\_j(p) \\ \sigma\_j(p+1) &= \sigma\_j(p) - \eta \frac{\partial E}{\partial \sigma\_j}(p) + \mu \, \sigma\_j(p) \end{aligned} \tag{13}$$

where

$$E(p) = \frac{1}{2} \sum\_{k=1}^{N} e\_k^2(p) = \frac{1}{2} \sum\_{k=1}^{N} \left[ d\_k(p) - y\_k(p) \right]^2 \tag{14}$$

denotes the error between targets and trained output results.

$$\begin{aligned} \frac{\partial E}{\partial w\_{kj}}(p) &= -e\_k(p) \, K(\left\|\mathbf{x}\_i(p) - \mathbf{C}\_j(p)\right\|) \\ \frac{\partial E}{\partial \mathbf{C}\_j}(p) &= -e\_k(p) \, w\_{kj}(p) \frac{1}{\sigma\_j^2(p)} K(\left\|\mathbf{x}\_i(p) - \mathbf{C}\_j(p)\right\|) \left[\mathbf{x}\_i(p) - \mathbf{C}\_j(p)\right] \\ \frac{\partial E}{\partial \sigma\_j}(p) &= -e\_k(p) \, w\_{kj}(p) \frac{1}{\sigma\_j^3(p)} K(\left\|\mathbf{x}\_i(p) - \mathbf{C}\_j(p)\right\|) \left\|\mathbf{x}\_i(p) - \mathbf{C}\_j(p)\right\|^2 \end{aligned} \tag{15}$$

There also exist many other algorithms for learning an RBFNN; among these, some are unsupervised, such as using kNN for the determination of C<sub>j</sub> and the pseudo-inversion method to evaluate w<sub>kj</sub> [35–37].
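For illustration, one iteration of the update rules in Eqs. (13)–(15) can be sketched as follows for a single training sample and a single output; `eta` (learning rate) and `mu` (momentum factor) are assumed hyperparameters, and the momentum terms follow Eq. (13) as printed.

```matlab
% Minimal sketch of one steepest-descent step, Eqs. (13)-(15).
% x: L-by-1 sample, d: scalar target; C (L-by-M), sigma (1-by-M), w (1-by-M).
d2 = sum((C - x).^2, 1);                      % ||x - C_j||^2
h  = exp(-d2 ./ (2*sigma.^2));                % kernel values K_j
e  = d - w*h.';                               % error e_k = d_k - y_k, Eq. (14)
gw = -e .* h;                                 % dE/dw_kj, Eq. (15)
gC = -(e .* w .* h ./ sigma.^2) .* (x - C);   % dE/dC_j (L-by-M)
gs = -(e .* w .* h ./ sigma.^3) .* d2;        % dE/dsigma_j
w     = w     - eta*gw + mu*w;                % updates of Eq. (13)
C     = C     - eta*gC + mu*C;
sigma = sigma - eta*gs + mu*sigma;
```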

#### 4.3. Numerical results of parameter recognition using RBFNN


Table 10 summarizes the accuracy of the results predicted by RBFNN with different combinations of input variables; MATLAB nntool (newrb) is used for the numerical experiments [38]. Unfortunately, the results are not as good as those obtained using BPANN with the LM training method. Figure 11 reflects the same finding: the Young's modulus cannot be predicted well using RBFNN; even numerous trials with different spread constants did not yield satisfying results.

Table 10. Effect of input variables on the prediction accuracy of engineering constants using RBFNN.

Figure 11. QQ plot of predicted and tested engineering constants of CLSM using RBFNN (NI = 4).
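For completeness, the RBFNN experiments reported in Table 10 can be reproduced along the following lines. The error goal and spread value below are illustrative placeholders, since the chapter notes that many spread constants were tried; `X` and `T` are the same assumed data matrices as before.

```matlab
% Minimal sketch of the RBFNN experiment using MATLAB's newrb [38].
% X (4-by-S inputs) and T (2-by-S targets) are assumed placeholders.
goal   = 1e-4;               % illustrative mean-squared-error goal
spread = 1.0;                % one of the many trial spread constants
net = newrb(X, T, goal, spread);
Y   = sim(net, X);           % predictions for the QQ plot (Figure 11)
```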
