**4.5 Augmented radial basis functions**

Developed for fitting topographic contours, an RBF surrogate model $\widetilde{\mathbf{g}}(\mathbf{x})$ is written as:

$$\widetilde{\mathbf{g}}(\mathbf{x}) = \sum\_{i=1}^{n} \lambda\_i \phi(||\mathbf{x} - \mathbf{x}\_i||) \tag{11}$$

where $\phi$ is the basis function, $||\mathbf{x} - \mathbf{x}_i||$ is the Euclidean norm, and $\lambda_i$ are the unknown weight coefficients to be determined. **Table 1** lists commonly used RBFs.

Using the *n* available sample points and function values, a total of *n* equations can be written, as:

$$\mathbf{g\_1} = \widetilde{\mathbf{g}}(\mathbf{x\_1}) = \sum\_{i=1}^{n} \lambda\_i \phi(||\mathbf{x\_1} - \mathbf{x\_i}||) \tag{12}$$

…

*Reliability and Maintenance - An Overview of Cases*

**4.2 Response surface method using quadratic polynomials**

Using linear or quadratic polynomials, a response surface model can be developed. The most commonly used quadratic polynomial response surface model is expressed as [63]:

$$\widetilde{g}(\mathbf{x}) = \beta_0 + \sum_{i=1}^{s} \beta_i x_i + \sum_{i=1}^{s} \beta_{ii} x_i^2 + \sum_{i=1}^{s-1} \sum_{j=i+1}^{s} \beta_{ij} x_i x_j \tag{2}$$

where the *β*'s are the unknown coefficients. Using the function values at *n* sample points, a total of *n* linear equations can be written in a matrix form, as:

$$\mathbf{g} = X\widetilde{\boldsymbol{\beta}} \tag{3}$$

where $\widetilde{\boldsymbol{\beta}}$ ($k \times 1$) is the least-squares estimation of the unknown coefficients in Eq. (2), and $X$ ($n \times k$) is a matrix of input variables at the sample points. Apply the least squares method to solve for $\widetilde{\boldsymbol{\beta}}$, as:

$$\widetilde{\boldsymbol{\beta}} = \left(X^T X\right)^{-1} X^T \mathbf{g} \tag{4}$$

**4.3 Least squares support vector machine**

The support vector machine (SVM) uses a nonlinear mapping technique and solves for a nonlinear input-output relationship. For *n* sample points, a commonly used least squares SVM model is given as [52, 53]:

$$\widetilde{g}(\mathbf{x}) = \sum_{i=1}^{n} \alpha_i K(\mathbf{x}, \mathbf{x}_i) + b \tag{5}$$

where $\alpha_i$ (*i* = 1, …, *n*) are Lagrange multipliers, *b* is the scalar threshold, and $K(\mathbf{x}, \mathbf{x}_i)$ is a kernel function. Available kernel functions include polynomial, radial, and sigmoid kernels [53]. A system of (*n* + 1) equations can be written as:

$$\begin{pmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{pmatrix} \begin{pmatrix} b \\ \boldsymbol{\alpha} \end{pmatrix} = \begin{pmatrix} 0 \\ \mathbf{g} \end{pmatrix} \tag{6}$$

where $\gamma$ is a tolerance error, $\mathbf{1} = [1 \; \cdots \; 1]^T$, $\boldsymbol{\alpha} = [\alpha_1 \; \cdots \; \alpha_n]^T$, and $\Omega$ ($n \times n$) is a matrix of kernels based on the sample points. $\boldsymbol{\alpha}$ and *b* can be calculated from:

$$\begin{pmatrix} b \\ \boldsymbol{\alpha} \end{pmatrix} = \begin{pmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \Omega + \gamma^{-1} I \end{pmatrix}^{-1} \begin{pmatrix} 0 \\ \mathbf{g} \end{pmatrix} \tag{7}$$

**4.4 Kriging**

The Kriging model is an interpolation technique that combines two parts, i.e., a linear regression part and a stochastic error, as [38, 39]:

$$\widetilde{g}(\mathbf{x}) = B^T(\mathbf{x})\boldsymbol{\beta} + z(\mathbf{x}) = \sum_{i=1}^{p} B_i(\mathbf{x})\,\beta_i + z(\mathbf{x}) \tag{8}$$

where $B(\mathbf{x}) = [B_1(\mathbf{x}) \; \cdots \; B_p(\mathbf{x})]^T$ are the *p* basis functions, and $\boldsymbol{\beta} = [\beta_1 \; \cdots \; \beta_p]^T$ are the corresponding regression coefficients. The first part of Eq. (8) approximates the global trend of the original function, in which $\boldsymbol{\beta}$ can be estimated by the generalized least squares method.
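As a concrete illustration of Eqs. (2)–(4), a quadratic response surface can be fit with a single least-squares solve. The sketch below is illustrative only: the test function, sample sizes, and the helper `quad_design_matrix` are assumptions for this example, not from the chapter.

```python
import numpy as np

def quad_design_matrix(X):
    """One row per sample: [1, x_i, x_i^2, x_i * x_j (i < j)], as in Eq. (2)."""
    n, s = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(s)]
    cols += [X[:, i] ** 2 for i in range(s)]
    cols += [X[:, i] * X[:, j] for i in range(s - 1) for j in range(i + 1, s)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
Xs = rng.uniform(-1.0, 1.0, size=(12, 2))        # n = 12 samples, s = 2 variables
g = 1.0 + 2.0 * Xs[:, 0] - 0.5 * Xs[:, 1] ** 2 + 0.3 * Xs[:, 0] * Xs[:, 1]

Xmat = quad_design_matrix(Xs)                    # X in Eq. (3), size n x k (k = 6)
beta, *_ = np.linalg.lstsq(Xmat, g, rcond=None)  # least-squares solve, Eq. (4)
g_hat = Xmat @ beta                              # fitted responses

print(np.allclose(g_hat, g))  # the test function is itself quadratic
```

With *n* larger than the number of coefficients *k*, `lstsq` computes exactly the estimator of Eq. (4) without forming $(X^T X)^{-1}$ explicitly.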
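The LS-SVM model of Eqs. (5)–(7) amounts to assembling and solving one (*n* + 1)-dimensional linear system. A minimal sketch with a radial (Gaussian) kernel; the kernel width, the value of γ, and the helper names are illustrative choices, not from the chapter:

```python
import numpy as np

def rbf_kernel(X1, X2, width=0.3):
    """Radial (Gaussian) kernel; the width is an illustrative choice."""
    d2 = ((X1[:, None] - X2[None, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def lssvm_fit(X, g, gamma=100.0):
    """Assemble and solve the (n + 1) x (n + 1) system of Eqs. (6)-(7)."""
    n = len(X)
    K = np.zeros((n + 1, n + 1))
    K[0, 1:] = 1.0                                    # first row:  [0, 1^T]
    K[1:, 0] = 1.0
    K[1:, 1:] = rbf_kernel(X, X) + np.eye(n) / gamma  # [1, Omega + gamma^-1 I]
    sol = np.linalg.solve(K, np.concatenate([[0.0], g]))
    return sol[0], sol[1:]                            # b, alpha

def lssvm_predict(Xq, X, b, alpha):
    return rbf_kernel(Xq, X) @ alpha + b              # Eq. (5)

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(30, 1))              # n = 30 sample points
g = np.tanh(3.0 * X[:, 0])                            # responses at the samples

b, alpha = lssvm_fit(X, g)
err = np.max(np.abs(lssvm_predict(X, X, b, alpha) - g))
print(round(float(err), 6))  # small but nonzero: 1/gamma acts as a ridge term
```

Unlike the RBF interpolants below, the LS-SVM fit does not pass exactly through the samples; the $\gamma^{-1} I$ term regularizes the kernel matrix.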

$$\mathbf{g}\_n = \widetilde{\mathbf{g}}(\mathbf{x}\_n) = \sum\_{i=1}^n \lambda\_i \phi(||\mathbf{x}\_n - \mathbf{x}\_i||) \tag{13}$$

Write all the *n* equations in a matrix form, as:

$$\mathbf{g} = A\boldsymbol{\lambda} \tag{14}$$


where $\boldsymbol{\lambda} = [\lambda_1 \; \cdots \; \lambda_n]^T$, and $\mathbf{A}$ is given as:

$$\mathbf{A} = \begin{bmatrix} \phi(||\mathbf{x}\_1 - \mathbf{x}\_1||) & \cdots & \phi(||\mathbf{x}\_1 - \mathbf{x}\_n||) \\ \vdots & \ddots & \vdots \\ \phi(||\mathbf{x}\_n - \mathbf{x}\_1||) & \cdots & \phi(||\mathbf{x}\_n - \mathbf{x}\_n||) \end{bmatrix}\_{n \times n} \tag{15}$$

#### **Table 1.** *Some commonly used RBFs [65].*

Solve the linear system of Eq. (14) to calculate coefficients *λ*, as:

$$\boldsymbol{\lambda} = A^{-1} \mathbf{g} \tag{16}$$
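Eqs. (14)–(16) amount to one dense *n* × *n* linear solve. A minimal sketch using the multiquadric basis (one of the common RBFs of Table 1); the shape parameter, test function, and helper names are illustrative assumptions, not from the chapter:

```python
import numpy as np

def mq(r, c=0.5):
    """Multiquadric basis; c is an illustrative shape parameter."""
    return np.sqrt(r ** 2 + c ** 2)

def rbf_fit(X, g):
    """Build A of Eq. (15) and solve Eq. (16) for the weights lambda."""
    A = mq(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
    return np.linalg.solve(A, g)

def rbf_predict(Xq, X, lam):
    """Evaluate the surrogate of Eq. (11) at query points Xq."""
    return mq(np.linalg.norm(Xq[:, None] - X[None, :], axis=-1)) @ lam

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(20, 2))      # n = 20 sample points
g = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2     # responses at the samples

lam = rbf_fit(X, g)
# an RBF surrogate interpolates: it reproduces g exactly at the sample points
print(np.allclose(rbf_predict(X, X, lam), g))
```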

*Reliability Analysis Based on Surrogate Modeling Methods*
*DOI: http://dx.doi.org/10.5772/intechopen.84640*

**6. Reliability analysis based on successive RBF models**

The coefficients of the surrogate model can be efficiently calculated in each iteration of the SRBF approach. Based on the surrogate model $\widetilde{g}(\mathbf{x})$, the failure probability $P_F$ can be computed using a sampling method, such as MCS, as:

$$P_F \approx P(g(\mathbf{x}) \le 0) = \frac{1}{N} \sum_{i=1}^{N} \Gamma\big(\widetilde{g}(\mathbf{x}_i) \le 0\big) \tag{22}$$

where *N* is the total number of MCS samples, $\mathbf{x}_i$ is the *i*th realization of $\mathbf{x}$, and $\Gamma$ is a deciding function, as:

$$\Gamma = \begin{cases} 1 & \text{if } \widetilde{g}(\mathbf{x}_i) \le 0 \\ 0 & \text{if } \widetilde{g}(\mathbf{x}_i) > 0 \end{cases} \tag{23}$$

The reliability index *β* can be further determined, as [49]:

$$\beta = -\Phi^{-1}(P_F) \tag{24}$$

where $\Phi$ is the standard normal cumulative distribution function.

**Figure 1** shows a flowchart of reliability analysis using the SRBF-based surrogate modeling technique and MCS. Once the explicit augmented RBF surrogate model is generated in one iteration of the proposed method, MCS is applied to efficiently estimate the failure probability for any sample size. If the convergence criterion is not satisfied in the current iteration, more sample points are added and another iteration starts. As the sample size increases, the SRBF surrogate models in general become more accurate and the errors in the failure probability estimates decrease; however, this requires more function evaluations. Since the number of response simulations is determined by the sample size used to create a surrogate model, the majority of the computational cost is from the response simulations. The detailed procedure is as follows:

1. Determine the initial and additional sample sizes, *n* and *m*, and the convergence criterion. In this study, the initial sample size *n* is suggested to be 5–10 times the number of random variables *s*. The additional sample size *m* in each subsequent iteration can typically be taken as one third to one half of the initial sample size, *n*.

2. Generate the initial sample set with *n* sample points; set the iteration number *k* = 1. A commonly used LHS technique was applied to generate samples for the RBF surrogate models.

3. Evaluate the limit state function $g(\mathbf{x})$ for the initial sample set generated in Step 2. Numerical analyses such as FE analyses may be required for practical problems.

4. Update the sample set to include all sample points, *n* = *n* + *m*. For the first iteration (*k* = 1), *m* = 0, and no additional sample points are added.

5. Construct augmented RBF surrogate models $\widetilde{g}(\mathbf{x})$ of the function $g(\mathbf{x})$ based on Eq. (17) using all available sample points.
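The procedure steps above, combined with the MCS estimate of Eqs. (22)–(24), can be sketched as a simple loop. Everything problem-specific here is an illustrative stand-in, not from the chapter: the limit state function `g_limit`, the multiquadric shape parameter, plain random sampling in place of LHS, and a relative-change check on successive $P_F$ estimates in place of the chapter's convergence criterion.

```python
import numpy as np

def mq(r, c=0.5):
    return np.sqrt(r ** 2 + c ** 2)          # multiquadric basis; c illustrative

def fit_aug_rbf(X, g):
    """Augmented RBF with linear polynomials (Eq. (17)), solved per Eq. (21)."""
    n, s = X.shape
    A = mq(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
    F = np.hstack([np.ones((n, 1)), X])      # linear polynomial terms: 1, x1, x2
    M = np.block([[A, F], [F.T, np.zeros((s + 1, s + 1))]])
    sol = np.linalg.solve(M, np.concatenate([g, np.zeros(s + 1)]))
    lam, cj = sol[:n], sol[n:]
    def g_tilde(Xq):
        Aq = mq(np.linalg.norm(Xq[:, None] - X[None, :], axis=-1))
        return Aq @ lam + np.hstack([np.ones((len(Xq), 1)), Xq]) @ cj
    return g_tilde

def g_limit(X):                              # illustrative limit state function
    return 4.0 - X[:, 0] - X[:, 1]           # failure when g(x) <= 0

rng = np.random.default_rng(2)
x_mcs = rng.normal(size=(100_000, 2))        # MCS realizations of x

n, m = 10, 5                                 # Step 1: initial/additional sizes
X = rng.normal(size=(n, 2))                  # Step 2: initial samples (LHS in the chapter)
g = g_limit(X)                               # Step 3: evaluate limit state function
pf_old = None
for k in range(1, 21):
    g_tilde = fit_aug_rbf(X, g)              # Step 5: augmented RBF surrogate
    pf = np.mean(g_tilde(x_mcs) <= 0.0)      # Eq. (22)
    if pf_old is not None and abs(pf - pf_old) <= 0.05 * max(pf, 1e-12):
        break                                # illustrative convergence criterion
    pf_old = pf
    X_new = rng.normal(size=(m, 2))          # Step 4: add m points and refit
    X, g = np.vstack([X, X_new]), np.concatenate([g, g_limit(X_new)])

print(round(float(pf), 4))   # roughly P(x1 + x2 >= 4) for standard normal x
```

The reliability index then follows from Eq. (24) as $\beta = -\Phi^{-1}(P_F)$.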

Since highly nonlinear basis functions are used, the RBF surrogate models in Eq. (11) can approximate nonlinear responses very well. However, they were found to have more errors for linear responses [58]. In order to overcome this drawback, the RBF model in Eq. (11) can be augmented by polynomial functions, as:

$$\widetilde{\mathbf{g}}(\mathbf{x}) = \sum\_{i=1}^{n} \lambda\_i \phi(||\mathbf{x} - \mathbf{x}\_i||) + \sum\_{j=1}^{p} c\_j f\_j(\mathbf{x}) \tag{17}$$

where the second part represents *p* terms of polynomial functions, and *cj* (*j* = 1*,… p*) are the unknown coefficients to be determined. There are more unknowns than available equations; therefore the following orthogonality condition is required to solve for all unknowns, as:

$$\sum\_{i=1}^{n} \lambda\_i f\_j(\mathbf{x}\_i) = 0, \quad \text{for } j = 1, \ldots, p \tag{18}$$

Eqs. (17) and (18) consist of (*n* þ *p*) equations in total, and they can be rewritten, as:

$$
\begin{pmatrix} A & F \\ F^T & \mathbf{0} \end{pmatrix} \begin{pmatrix} \lambda \\ c \end{pmatrix} = \begin{pmatrix} \mathbf{g} \\ \mathbf{0} \end{pmatrix} \tag{19}
$$

where $c = [c_1 \; \cdots \; c_p]^T$, and $F$ is given as:

$$F = \begin{bmatrix} f\_1(\mathbf{x}\_1) & \cdots & f\_p(\mathbf{x}\_1) \\ \vdots & \ddots & \vdots \\ f\_1(\mathbf{x}\_n) & \cdots & f\_p(\mathbf{x}\_n) \end{bmatrix}\_{n \times p} \tag{20}$$

Solve the linear system of Eq. (19) to get *λ* and *c*, as:

$$
\begin{pmatrix} \lambda \\ c \end{pmatrix} = \begin{pmatrix} A & F \\ F^T & \mathbf{0} \end{pmatrix}^{-1} \begin{pmatrix} \mathbf{g} \\ \mathbf{0} \end{pmatrix} \tag{21}
$$

For augmented RBFs, either linear or quadratic polynomial functions can be used. In this study, only linear polynomial functions were added to Eq. (17). For the rest of the paper, a suffix "-LP" is used to represent linear polynomials added to RBFs. The following RBF models were studied:

**SRBF-MQ-LP**: sequential multiquadric function with linear polynomials.

**SRBF-CS20-LP**: sequential compactly supported function *ϕ*2*,*<sup>0</sup> with linear polynomials.

**SRBF-CS30-LP**: sequential compactly supported function *ϕ*3*,*<sup>0</sup> with linear polynomials.
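The MQ-LP combination above reduces to the single symmetric solve of Eq. (21). A minimal sketch (the shape parameter and test data are illustrative assumptions), which also verifies the orthogonality condition of Eq. (18):

```python
import numpy as np

mq = lambda r, c=0.5: np.sqrt(r ** 2 + c ** 2)    # multiquadric; c illustrative

rng = np.random.default_rng(4)
X = rng.uniform(-1.0, 1.0, size=(15, 2))          # n = 15 samples, s = 2
g = X[:, 0] ** 2 - X[:, 1] + 1.0                  # responses at the samples

n, s = X.shape
A = mq(np.linalg.norm(X[:, None] - X[None, :], axis=-1))        # Eq. (15)
F = np.hstack([np.ones((n, 1)), X])               # linear polynomials: 1, x1, x2
M = np.block([[A, F], [F.T, np.zeros((s + 1, s + 1))]])         # Eq. (19)
sol = np.linalg.solve(M, np.concatenate([g, np.zeros(s + 1)]))  # Eq. (21)
lam, c = sol[:n], sol[n:]

g_tilde = A @ lam + F @ c           # Eq. (17) evaluated at the sample points
print(np.allclose(g_tilde, g))      # interpolates the samples
print(np.allclose(F.T @ lam, 0.0))  # orthogonality condition, Eq. (18)
```

The added polynomial block enlarges the system by only *p* = *s* + 1 rows, so the cost of the augmented fit is essentially that of the plain RBF fit.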
