## **4. Intelligent design model for the anchor support of the underground powerhouse**

### **4.1 Model design and training logic**

#### *4.1.1 Model design*

As seen from Parts 2 and 3, the design of the anchor support for the underground powerhouse can be determined by the plant span and the strength-stress ratio. There is a definite relationship among the plant span, strength-stress ratio, anchor diameter, anchor spacing, and row spacing, and this mapping can be captured by a back propagation (BP) neural network. The BP neural network is a multilayer feed-forward neural network that is widely used in nonlinear modeling, function approximation, logic classification, etc. Therefore, an intelligent design model for the anchor support of the underground powerhouse was created, which outputs the anchor diameter *D*, anchor spacing *a*, and row spacing *b* when the plant span *B* and strength-stress ratio *Kσ* are input. The model uses the classification ability of the BP neural network to map the plant span and strength-stress ratio to different support schemes. By analyzing the anchor bolt support schemes of the completed underground powerhouses in **Table 1**, the schemes are divided into six types, as shown in **Table 6**.
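Conceptually, the model treats the design task as classification into the six scheme types of **Table 6**, and mapping a predicted type back to concrete parameters is then a simple lookup. A minimal Python sketch of this lookup is given below; because the contents of **Table 6** are not reproduced in this section, the diameter, spacing, and row-spacing values shown are placeholders only.

```python
# Hypothetical sketch: mapping a predicted support-scheme class to its design parameters.
# The numerical values are placeholders; the actual six schemes are listed in Table 6.
SCHEMES = {
    1: {"D_mm": 25, "a_m": 1.5, "b_m": 1.5},   # placeholder for scheme type 1
    2: {"D_mm": 28, "a_m": 1.2, "b_m": 1.2},   # placeholder for scheme type 2
    # ... scheme types 3-6 follow the same pattern
}

def scheme_to_design(scheme_type: int) -> dict:
    """Return anchor diameter D, spacing a, and row spacing b for a scheme type."""
    return SCHEMES[scheme_type]
```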

The model consists of three parts: an input layer, a hidden layer, and an output layer. The structure of the model is shown in **Figure 11**. The input layer contains the plant span and the strength-stress ratio, and the output layer contains the anchor diameter, anchor spacing, and row spacing. The hidden layer connects the input and output layers and passes the weights of the neural network. The number of hidden layers and the number of nodes in them affect the prediction results of the model. Theoretically, the more hidden layers there are, the smaller the prediction error, but too many hidden layers lead to an overly complex network structure and slow computation. In this paper, one hidden layer is used, with reference to a typical BP neural network structure. The number of nodes in the hidden layer is directly related to the number of input and output units, but no exact analytical formula exists. Too many hidden-layer nodes lead to a long learning time, while too few give poor fault tolerance. According to previous experience [37], the number of nodes is designed with reference to Eq. (11).

**Table 6.** *Anchor support schemes.*

**Figure 11.** *The structure of the BP neural network.*

$$m = \sqrt{n+l} + \alpha \tag{11}$$

where *m* is the number of nodes in the hidden layer, *n* is the number of nodes in the input layer, *l* is the number of nodes in the output layer, and *α* is a constant between 1 and 10. In this model, the value of *α* is 7; with *n* = 2 and *l* = 3, Eq. (11) gives √5 + 7 ≈ 9.24, which is rounded up to 10 hidden-layer nodes.
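As a quick check of Eq. (11), the node-count calculation can be sketched in Python as follows; the sketch assumes the non-integer result is rounded up, which reproduces the 10 hidden-layer nodes used in the model.

```python
import math

def hidden_nodes(n_input: int, n_output: int, alpha: int) -> int:
    """Number of hidden-layer nodes per Eq. (11), rounded up to an integer."""
    return math.ceil(math.sqrt(n_input + n_output) + alpha)

# 2 inputs (B, K_sigma), 3 outputs (D, a, b), alpha = 7 -> sqrt(5) + 7 ~= 9.24 -> 10
print(hidden_nodes(2, 3, 7))  # 10
```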

#### *4.1.2 Model training logic*

#### 1. Forward propagation

The initial training of the model in Step 3 of Section 4.1.3 is achieved by forward propagation of the BP neural network. Suppose the sample set is **X**, the second layer of the BP neural network (the hidden layer) is **a2**, the third layer (the output layer) is **a3**, **Θ**(i) is the weight matrix from layer *i* to layer (*i* + 1), the initial **Θ**(i) is set randomly, the target output value of the model is **y**, and **h** is the actual output value of the model after training. Forward propagation can be expressed by the following equations:

$$\mathbf{a}_2 = \text{sigmoid}\left(\boldsymbol{\Theta}^{(1)} \times \mathbf{X}^{\mathrm{T}}\right) \tag{12}$$

$$\mathbf{a}_3 = \text{sigmoid}\left(\boldsymbol{\Theta}^{(2)} \times \mathbf{a}_2\right) \tag{13}$$

$$\mathbf{h} = \mathbf{a}_3 \tag{14}$$

where **X**, **a2**, **a3**, **Θ**(i), and **h** are matrices, and *sigmoid* is the transfer function shown in Eq. (15).


$$\text{sigmoid}(x) = \frac{1}{1 + e^{-x}} \tag{15}$$
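A minimal NumPy sketch of the forward pass of Eqs. (12)–(15) is given below. It assumes each row of **X** is one sample [*B*, *Kσ*] and omits bias terms and data normalization, which the chapter does not detail; the chapter's own implementation is in MATLAB R2019b.

```python
import numpy as np

def sigmoid(z):
    """Transfer function of Eq. (15)."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, Theta1, Theta2):
    """Forward propagation of Eqs. (12)-(14).

    X      : (n_samples, 2) matrix, one row per sample [B, K_sigma]
    Theta1 : (n_hidden, 2) weights from the input layer to the hidden layer
    Theta2 : (n_output, n_hidden) weights from the hidden layer to the output layer
    """
    a2 = sigmoid(Theta1 @ X.T)   # Eq. (12): hidden-layer activations, one column per sample
    a3 = sigmoid(Theta2 @ a2)    # Eq. (13): output-layer activations
    h = a3                       # Eq. (14): actual output of the model
    return a2, h
```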

#### 2. Cost function

Because the initial **Θ**(i) is set randomly, the actual output value **h** of the initial model deviates considerably from the target output value **y**. To evaluate the accuracy of the actual output value **h**, the cost function *J*(**Θ**) is introduced, as shown in Eq. (16). The smaller the value of *J*(**Θ**) is, the closer the actual output value **h** is to the target output value **y**, indicating a better value for the weight **Θ**.

$$\begin{split} J(\boldsymbol{\Theta}) &= -\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K} \left[ y_k^{(i)} \log\left( \left( h_{\Theta}\left( x^{(i)} \right) \right)_k \right) + \left( 1 - y_k^{(i)} \right) \log\left( 1 - \left( h_{\Theta}\left( x^{(i)} \right) \right)_k \right) \right] \\ &\quad + \frac{\lambda}{2m} \sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left( \Theta_{ji}^{(l)} \right)^2 \end{split} \tag{16}$$

where $y_k^{(i)}$ is the *k*-th component of the target output for the *i*-th sample, *m* is the number of training samples, *K* is the number of output nodes, *λ* is the regularization constant, *L* is the total number of layers in the neural network, and $s_l$ is the number of nodes in the *l*-th layer.
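For illustration, Eq. (16) can be evaluated with the short NumPy sketch below; it assumes the outputs **h** and targets **Y** are stored with one column per sample and, for simplicity, regularizes all weights.

```python
import numpy as np

def cost(h, Y, Thetas, lam):
    """Regularized cross-entropy cost of Eq. (16).

    h      : (K, m) actual outputs, one column per sample
    Y      : (K, m) target outputs
    Thetas : list of weight matrices, e.g. [Theta1, Theta2]
    lam    : regularization constant lambda
    """
    m = Y.shape[1]
    data_term = -np.sum(Y * np.log(h) + (1.0 - Y) * np.log(1.0 - h)) / m
    reg_term = lam / (2.0 * m) * sum(np.sum(T ** 2) for T in Thetas)
    return data_term + reg_term
```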

#### 3. Back propagation

To obtain an ever smaller cost function *J*(**Θ**), the value of the weight **Θ** is updated continuously by back propagation. The error transfer and weight update process are as follows:

$$\delta_k^{(3)} = a_k^{(3)} - y_k \tag{17}$$

$$\boldsymbol{\delta}^{(2)} = \left(\boldsymbol{\Theta}^{(2)}\right)^{\mathrm{T}} \boldsymbol{\delta}^{(3)} \times g'\left(\mathbf{z}^{(2)}\right) \tag{18}$$

$$g'\left(\mathbf{z}^{(2)}\right) = \mathbf{a}^{(2)} \times \left(1 - \mathbf{a}^{(2)}\right) \tag{19}$$

where $\delta_j^{(l)}$ is the error of the *j*-th node in the *l*-th layer, $a_k^{(i)}$ is the *k*-th data in the *i*-th layer, and $\mathbf{z}^{(2)}$ is the weighted input to the hidden layer, that is, $\boldsymbol{\Theta}^{(1)} \times \mathbf{X}^{\mathrm{T}}$.

The errors are stored in **Δ**(*l*):

$$\boldsymbol{\Delta}^{(l)} = \boldsymbol{\Delta}^{(l)} + \boldsymbol{\delta}^{(l+1)} \left(\mathbf{a}^{(l)}\right)^{\mathrm{T}} \tag{20}$$

where **Δ**(*l*) is the accumulated error of the nodes in the *l*-th layer. The weights are then updated using **Δ**(*l*) calculated by Eq. (20). The intelligent design model is obtained after the optimal weights are determined.
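A compact NumPy sketch of one back propagation pass over all samples, following Eqs. (17)–(20), is given below; it reuses the `sigmoid` function and layer shapes from the forward-propagation sketch, returns gradients averaged over the samples, and omits the regularization gradient of Eq. (16) for brevity.

```python
import numpy as np

def backward(X, Y, Theta1, Theta2):
    """One back propagation pass implementing Eqs. (17)-(20).

    X : (n_samples, 2) inputs; Y : (n_output, n_samples) target outputs.
    """
    m = X.shape[0]
    a1 = X.T                                   # input activations, one column per sample
    a2 = sigmoid(Theta1 @ a1)                  # Eq. (12)
    a3 = sigmoid(Theta2 @ a2)                  # Eq. (13)

    delta3 = a3 - Y                            # Eq. (17): output-layer error
    g_prime = a2 * (1.0 - a2)                  # Eq. (19): sigmoid derivative at z^(2)
    delta2 = (Theta2.T @ delta3) * g_prime     # Eq. (18): hidden-layer error

    Delta2 = delta3 @ a2.T                     # Eq. (20): accumulated errors for hidden -> output weights
    Delta1 = delta2 @ a1.T                     # Eq. (20): accumulated errors for input -> hidden weights
    return Delta1 / m, Delta2 / m              # averaged gradients used to update the weights
```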

#### *4.1.3 Model training*

The first 23 data points in **Table 1** were used as training samples to train the intelligent design model for anchor support of the underground powerhouse. The training process of the model is shown in **Figure 12**.

**Figure 12.** *The training process of the BP neural network.*

Step 1: A neural network structure applicable to the intelligent design model is established, as shown in **Figure 11**.

Step 2: The weight values of the neural network are initialized.

Step 3: The model is initially trained on the training samples by forward propagation, as described in Section 4.1.2.

Step 4: The weights are updated by back propagation until the cost function *J*(**Θ**) converges.
**Table 7.** *Validation data.*


**Table 8.** *Testing data.*

Step 5: After the new weights have been calculated, the new model is checked against the validation set. If it does not meet the requirements of the validation set, Step 4 is repeated; if it does, the final intelligent design model is obtained.

The validation set for the intelligent design model is shown in **Table 7**, and the test set is shown in **Table 8**.
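Putting the pieces together, Steps 1–5 can be sketched as a plain gradient-descent loop in Python. The learning rate, iteration count, and the placeholder arrays standing in for the data of **Tables 1** and **7** are assumptions for illustration; `forward` and `backward` are the sketches given in Section 4.1.2.

```python
import numpy as np

# Placeholder data standing in for the 23 training samples of Table 1 and the
# validation samples of Table 7 (the actual values are not reproduced here).
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(23, 2))   # normalized [B, K_sigma] per sample
Y_train = rng.uniform(0.0, 1.0, size=(3, 23))   # normalized [D, a, b] targets
X_val = rng.uniform(0.0, 1.0, size=(4, 2))

Theta1 = rng.uniform(-0.5, 0.5, size=(10, 2))   # Step 2: random initial weights (input -> hidden)
Theta2 = rng.uniform(-0.5, 0.5, size=(3, 10))   # Step 2: random initial weights (hidden -> output)
lr = 0.1                                        # assumed learning rate

for epoch in range(10000):                      # Steps 3 and 4: forward and back propagation
    grad1, grad2 = backward(X_train, Y_train, Theta1, Theta2)
    Theta1 -= lr * grad1
    Theta2 -= lr * grad2

# Step 5: check the trained model (here against placeholder validation inputs)
_, h_val = forward(X_val, Theta1, Theta2)
```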

### **4.2 Model implementation**

Based on the process in Section 4.1 and the data in **Table 1**, the intelligent design model was trained. When the plant span and strength-stress ratio are input into the model, the anchor diameter, spacing, and row spacing are output. The interaction with the model is implemented in MATLAB R2019b.
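Once trained, querying the model reduces to a single forward pass, as sketched below; the input values are placeholders, and `forward`, `Theta1`, and `Theta2` are taken from the earlier sketches, whereas the chapter's interactive implementation itself is in MATLAB R2019b.

```python
import numpy as np

# Placeholder query for a hypothetical powerhouse (not values reported in the chapter),
# assuming the same scaling as the training data.
B, K_sigma = 29.0, 4.0
x_new = np.array([[B, K_sigma]])

_, h_new = forward(x_new, Theta1, Theta2)   # forward pass through the trained network
print(h_new.ravel())                        # mapped back to anchor diameter D, spacing a, row spacing b
```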

Taking the underground powerhouse of the Huangjinping Hydropower Station as an example, when the plant span *B* and the strength-stress ratio *Kσ* are input, the model automatically outputs the anchor support scheme; the results are shown in **Figure 13**.
