
Parameter Recognition of Engineering Constants of CLSMs in Civil Engineering Using Artificial Neural Networks http://dx.doi.org/10.5772/intechopen.71538 107

b. Generalized steepest descent including momentum (GDM):

The learning rule can be written as Eq. (8) with η ≠ 0 and μ ≠ 0.

c. Generalized steepest descent with adjustable learning rate (GDA):

In this algorithm, the basic learning rule is the same as Eq. (6), but a conditional judgment is added: as long as learning remains stable under the current learning rate, the rate is increased; otherwise it is decreased. The learning-rate increment and decrement factors are denoted ζ<sub>inc</sub> and ζ<sub>dec</sub> [35].

d. Levenberg-Marquardt (LM):

The learning rule can be written as

Δw<sub>ij</sub>(t + 1) = w<sub>ij</sub>(t + 1) − w<sub>ij</sub>(t) = −([J]<sup>T</sup>[J] + λ[I])<sup>−1</sup>[J]<sup>T</sup>{e} (11)

where λ denotes a constant that assures the matrix inversion; the learning rule becomes the Gauss-Newton algorithm when λ = 0, while it approaches GD with a small learning rate for large λ [35].

106 Advanced Applications for Artificial Neural Networks

### 3.3. Numerical results of parameter recognition using BPANN

The first supervised-learning method employed for parameter recognition of the engineering constants is the BPANN, which can easily be implemented for multiple inputs and multiple outputs. Since the construction of a BPANN involves various design parameters, we study several influencing factors (different training algorithms, different combinations of input variables, and the number of neurons in the hidden layer) and then propose and analyze an appropriate topology of BPANN. The MATLAB nntool is used for network simulation.

#### 3.3.1. Comparison of different training algorithms

The convergence of RMSE with iteration of the BPANN using different training algorithms, for a 4–6–2 topology (NI = 4, NH = 6, NO = 2) and the 216-sample training set, is shown in Figure 5. The corresponding MATLAB training functions are: (1) GD (traingd); (2) GDM (traingdm); (3) GDA (traingda); and (4) LM (trainlm). Some of the parameters employed in the four BPANN algorithms are also given in Figure 5. It can be observed that, except for LM (trainlm), none of the other three algorithms lets the root mean square error (RMSE) approach zero quickly. (These results were verified with a self-developed BPANN using the GD learning rule, programmed in Python.) This reflects a special feature of the present problem: first-order steepest-descent methods cannot find the global minimum quickly, whereas the Levenberg-Marquardt (LM) algorithm, based on the Hessian matrix containing second-order derivatives of the cost function, works well.

Table 6 summarizes the effect of the different training algorithms on the prediction accuracy of the BPANN. LM (trainlm) gives accurate predictions of both the Young's modulus (error within 6% for CLSMs) and the Poisson's ratio (error within 1% for all) for the three kinds of backfilled materials.
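As a concrete illustration of the LM update in Eq. (11), the following is a minimal NumPy sketch of one weight update; the Jacobian [J], error vector {e}, and λ values are made-up toy data, not values from the chapter.

```python
import numpy as np

def lm_update(J, e, lam):
    """One LM weight update per Eq. (11): dw = -(J^T J + lam*I)^(-1) J^T e."""
    n = J.shape[1]
    return -np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ e)

# made-up Jacobian of errors w.r.t. weights, and error vector {e}
J = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
e = np.array([0.1, -0.2, 0.05])

dw_gn = lm_update(J, e, 0.0)    # lam = 0: reduces to a Gauss-Newton step
dw_gd = lm_update(J, e, 1e6)    # large lam: damping term dominates
approx_gd = -(J.T @ e) / 1e6    # steepest-descent step with rate 1/lam
```

For λ = 0 the step solves the Gauss-Newton normal equations; for large λ the update shrinks toward a steepest-descent step with learning rate 1/λ, matching the remark after Eq. (11).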

Figure 5. Convergence of RMSE with iteration of BPANN using different training algorithms (NI = 4, NH = 6, NO = 2).
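The conditional rate adjustment of GDA (traingda) can be sketched as follows; the factor values (ζ<sub>inc</sub> = 1.05, ζ<sub>dec</sub> = 0.7, performance-increase tolerance 1.04) and the stability test are illustrative assumptions patterned after common traingda defaults, not a verified reproduction of MATLAB's internals.

```python
def adapt_learning_rate(lr, err_new, err_old,
                        zeta_inc=1.05, zeta_dec=0.7, max_perf_inc=1.04):
    """GDA-style rule: grow the learning rate while learning is stable,
    shrink it when the error rises by more than the tolerated factor."""
    if err_new > err_old * max_perf_inc:   # unstable step: cut the rate
        return lr * zeta_dec
    return lr * zeta_inc                   # stable step: raise the rate

lr0 = 0.01
lr1 = adapt_learning_rate(lr0, err_new=0.9, err_old=1.0)  # stable -> increased
lr2 = adapt_learning_rate(lr1, err_new=1.2, err_old=1.0)  # unstable -> decreased
```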


Table 6. Effect of different training algorithms of BPANN on the predicted accuracy.
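The two accuracy measures used in this comparison, the RMSE tracked during training (Figure 5) and the percentage error reported in Table 6, can be sketched as below; the sample moduli are made-up numbers for illustration only.

```python
import numpy as np

def rmse(predicted, target):
    """Root mean square error between network outputs and targets."""
    predicted, target = np.asarray(predicted), np.asarray(target)
    return np.sqrt(np.mean((predicted - target) ** 2))

def relative_error_percent(predicted, target):
    """Per-sample relative error in percent, as in the Table 6 comparison."""
    predicted, target = np.asarray(predicted), np.asarray(target)
    return 100.0 * np.abs((predicted - target) / target)

# made-up true and predicted Young's moduli for two samples
E_true = np.array([120.0, 95.0])
E_pred = np.array([118.0, 99.0])

train_rmse = rmse([0.0, 2.0], [0.0, 0.0])     # = sqrt(2)
err = relative_error_percent(E_pred, E_true)  # per-sample percent errors
```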

#### 3.3.2. Effects of input variable combinations

In a practical engineering situation, we are interested in the question: which physical variables should we measure? To answer it, five cases with different combinations of input variables are investigated:

Case 1: (Uy1, Uy2, Sx1, Sx2) (NI = 4) (two displacements, two stresses).

Case 2: (Uy1, Uy2, Sx1) (NI = 3) (two displacements, one stress).

Case 3: (Uy1, Uy2) (NI = 2) (two displacements).

Case 4: (Uy1, Sx1) (NI = 2) (one displacement, one stress).

Case 5: (Sx1, Sx2) (NI = 2) (two stresses).

The convergence of RMSE with iteration of the BPANN using different combinations of input variables under the LM training algorithm (NH = 6, NO = 2), for the 216-sample training set, is shown in Figure 6. Again, LM works well, but the prediction accuracy in Table 7 shows that only cases 1, 2, and 3 are suitable for use. The result of case 5 reflects the fact that stress information alone cannot predict the material constants. Therefore, in the following analysis we adopt the input variables of case 1; that is, we need to measure two displacements and two stresses.

Figure 6. Convergence of RMSE with iteration of BPANN using different combinations of input variables (trainlm, NI = 4, NH = 6, NO = 2).



Table 7. Effect of input variables on the prediction accuracy of engineering constants using BPANN.
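The five cases above amount to selecting different column subsets of the full measurement vector (Uy1, Uy2, Sx1, Sx2); a minimal sketch, using a made-up 216-row sample matrix in place of the actual training data:

```python
import numpy as np

# column order of the full input vector: Uy1, Uy2, Sx1, Sx2
COLUMNS = {"Uy1": 0, "Uy2": 1, "Sx1": 2, "Sx2": 3}
CASES = {
    1: ["Uy1", "Uy2", "Sx1", "Sx2"],  # NI = 4
    2: ["Uy1", "Uy2", "Sx1"],         # NI = 3
    3: ["Uy1", "Uy2"],                # NI = 2
    4: ["Uy1", "Sx1"],                # NI = 2
    5: ["Sx1", "Sx2"],                # NI = 2
}

def select_inputs(samples, case):
    """Slice the training matrix down to the input columns of one case."""
    idx = [COLUMNS[name] for name in CASES[case]]
    return samples[:, idx]

samples = np.arange(216 * 4, dtype=float).reshape(216, 4)  # 216 made-up rows
X3 = select_inputs(samples, 3)  # case 3: two displacements only
```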

#### 3.3.3. Effect of number of hidden neurons


The results of the BPANN using different numbers of hidden-layer neurons (NH) are shown in Figure 7 and Table 8. From these results, NH > 2 is recommended for accurate prediction of the engineering constants when the LM (trainlm) training algorithm is employed.

Figure 7. Convergence of RMSE with iteration of BPANN using different numbers of hidden-layer neurons (trainlm, NI = 4, NO = 2).


Table 8. Effect of the number of hidden-layer neurons on the prediction accuracy of engineering constants using BPANN.
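One simple way to see what enlarging the hidden layer costs is to count the trainable weights and biases of a single-hidden-layer NI–NH–NO network; this bookkeeping sketch is ours, not from the chapter.

```python
def bpann_param_count(ni, nh, no):
    """Trainable parameters of an NI-NH-NO network:
    input-to-hidden weights + hidden biases, then
    hidden-to-output weights + output biases."""
    return (ni * nh + nh) + (nh * no + no)

# parameter counts for the NI = 4, NO = 2 topology at several NH values
counts = {nh: bpann_param_count(4, nh, 2) for nh in (2, 4, 6, 8)}
```

For the selected 4–6–2 topology this gives 44 trainable parameters, which is still small relative to the 216 training samples.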

#### 3.3.4. Selection of a BPANN topology

After some parametric studies, a BPANN with a 4–6–2 topology using the LM training algorithm is proposed for the present parameter recognition problem. The convergence of RMSE, shown in Figure 8, exhibits very fast decay. Figure 9 shows the QQ plot of the tested versus predicted results during the testing stage, after the training process. The R<sup>2</sup> values for the Young's modulus and Poisson's ratio are 0.99454 and 0.99864, respectively, which reflects that the trained BPANN works well on the testing data.

Figure 8. Convergence of RMSE with iteration of the finally selected BPANN (trainlm, NI = 4, NH = 6, NO = 2).

Figure 9. QQ plot of testing data of the finally selected BPANN (trainlm, NI = 4, NH = 6, NO = 2).

Table 9. Predicted results using the final design of a BPANN for the recognition of engineering constants of CLSM.

Table 9 shows the predicted results using the final BPANN design for the recognition of the engineering constants of CLSM.
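A minimal sketch of the selected 4–6–2 network's forward pass, together with the R<sup>2</sup> statistic quoted for the QQ plots, is given below; the random (untrained) weights and the tanh hidden activation are assumptions made for illustration, since the chapter does not restate them here.

```python
import numpy as np

rng = np.random.default_rng(0)
NI, NH, NO = 4, 6, 2  # the selected 4-6-2 topology

# random placeholder weights; a trained network would use fitted values
W1, b1 = rng.normal(size=(NI, NH)), np.zeros(NH)
W2, b2 = rng.normal(size=(NH, NO)), np.zeros(NO)

def forward(x):
    """4-6-2 forward pass: tanh hidden layer, linear output layer."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def r_squared(pred, target):
    """Coefficient of determination, as reported for the QQ plots."""
    pred, target = np.asarray(pred, dtype=float), np.asarray(target, dtype=float)
    ss_res = np.sum((target - pred) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y = forward(rng.normal(size=(216, NI)))  # outputs for 216 made-up samples
```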
