**4.2 Response surface method using quadratic polynomials**

Using linear or quadratic polynomials, a response surface model can be developed. The most commonly used quadratic polynomial response surface model is expressed as [63]:

$$\widetilde{\mathbf{g}}(\mathbf{x}) = \beta_0 + \sum_{i=1}^{s} \beta_i \mathbf{x}_i + \sum_{i=1}^{s} \beta_{ii} \mathbf{x}_i^2 + \sum_{i=1}^{s-1} \sum_{j=i+1}^{s} \beta_{ij} \mathbf{x}_i \mathbf{x}_j \tag{2}$$

where the *β*'s are the unknown coefficients. Using the function values at the *n* sample points, a total of *n* linear equations can be written in matrix form as:

$$\mathbf{g} = \mathbf{X}\widetilde{\boldsymbol{\beta}} \tag{3}$$

where $\widetilde{\boldsymbol{\beta}}$ ($k \times 1$) is the least squares estimate of the unknown coefficients in Eq. (2), and $\mathbf{X}$ ($n \times k$) is the matrix of input variables at the sample points. Applying the least squares method to solve for $\widetilde{\boldsymbol{\beta}}$ gives:

$$\widetilde{\boldsymbol{\beta}} = \left(\mathbf{X}^T \mathbf{X}\right)^{-1} \left(\mathbf{X}^T \mathbf{g}\right) \tag{4}$$

### **4.3 Least squares support vector machine**

The support vector machine (SVM) uses a nonlinear mapping technique to model a nonlinear input-output relationship. For *n* sample points, a commonly used least squares SVM model is given as [52, 53]:

$$\widetilde{\mathbf{g}}(\mathbf{x}) = \sum_{i=1}^{n} \alpha_i K(\mathbf{x}, \mathbf{x}_i) + b \tag{5}$$

where $\alpha_i$ ($i = 1, \ldots, n$) are Lagrange multipliers, $b$ is a scalar threshold, and $K(\mathbf{x}, \mathbf{x}_i)$ is a kernel function. Available kernel functions include polynomial, radial, and sigmoid kernels [53]. A system of ($n + 1$) linear equations can be written as:

$$\begin{pmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \boldsymbol{\Omega} + \gamma^{-1}\mathbf{I} \end{pmatrix} \begin{pmatrix} b \\ \boldsymbol{\alpha} \end{pmatrix} = \begin{pmatrix} 0 \\ \mathbf{g} \end{pmatrix} \tag{6}$$

where $\gamma$ is a tolerance error, $\mathbf{1} = \left[1 \; \cdots \; 1\right]^T$, $\boldsymbol{\alpha} = \left[\alpha_1 \; \cdots \; \alpha_n\right]^T$, and $\boldsymbol{\Omega}$ ($n \times n$) is the matrix of kernel values at the sample points. $\boldsymbol{\alpha}$ and $b$ can be calculated from:

$$\begin{pmatrix} b \\ \boldsymbol{\alpha} \end{pmatrix} = \begin{pmatrix} 0 & \mathbf{1}^T \\ \mathbf{1} & \boldsymbol{\Omega} + \gamma^{-1}\mathbf{I} \end{pmatrix}^{-1} \begin{pmatrix} 0 \\ \mathbf{g} \end{pmatrix} \tag{7}$$

#### **4.4 Kriging**

The Kriging model is an interpolation technique that combines two parts, i.e., a linear regression part and a stochastic error part, as [38, 39]:

$$\widetilde{\mathbf{g}}(\mathbf{x}) = \mathbf{B}^T(\mathbf{x})\boldsymbol{\beta} + z(\mathbf{x}) = \sum_{i=1}^{p} B_i(\mathbf{x})\beta_i + z(\mathbf{x}) \tag{8}$$

where $\mathbf{B}(\mathbf{x}) = \left[B_1(\mathbf{x}) \; \cdots \; B_p(\mathbf{x})\right]^T$ are the $p$ basis functions, and $\boldsymbol{\beta} = \left[\beta_1 \; \cdots \; \beta_p\right]^T$ are the corresponding regression coefficients. The first part of Eq. (8) approximates the global trend of the original function, in which $\boldsymbol{\beta}$ can be estimated using the least squares method. The second part, $z(\mathbf{x})$, represents a stochastic process with zero mean and covariance

$$\mathrm{Cov}\left[z(\mathbf{x}_i), z(\mathbf{x}_j)\right] = \sigma^2 \mathbf{R}\left[R\left(\mathbf{x}_i, \mathbf{x}_j\right)\right] \tag{9}$$

where $\sigma^2$ is the process variance, and $\mathbf{R}$ is the correlation matrix. If the Gaussian function is used as the correlation function, $R\left(\mathbf{x}_i, \mathbf{x}_j\right)$ is written as:

$$R\left(\mathbf{x}_i, \mathbf{x}_j\right) = \exp\left[-\sum_{k=1}^{s} \theta_k \left|x_i^k - x_j^k\right|^2\right] \tag{10}$$

where $x_i^k$ and $x_j^k$ are the $k$th ($k = 1, \ldots, s$) components of the sample points $\mathbf{x}_i$ and $\mathbf{x}_j$, respectively, and $\theta_k$ are unknown correlation parameters used to fit the model.

*Reliability Analysis Based on Surrogate Modeling Methods*
*DOI: http://dx.doi.org/10.5772/intechopen.84640*

**4.5 Augmented radial basis functions**

Developed for fitting topographic contours, an RBF surrogate model $\widetilde{g}(\mathbf{x})$ is written as:

$$\widetilde{g}(\mathbf{x}) = \sum_{i=1}^{n} \lambda_i \phi\left(\left\|\mathbf{x} - \mathbf{x}_i\right\|\right) \tag{11}$$

where $\phi$ is the basis function, $\left\|\mathbf{x} - \mathbf{x}_i\right\|$ is the Euclidean norm, and $\lambda_i$ are the unknown weight coefficients that need to be determined. **Table 1** lists commonly used RBFs.

| Function name | Radial basis function |
|---|---|
| Gaussian function | $\phi(r) = \exp\left(-cr^2\right), \; 0 < c \le 1$ |
| Multiquadric function | $\phi(r) = \sqrt{r^2 + c^2}, \; 0 < c \le 1$ |
| Linear function | $\phi(r) = r$ |
| Cubic function | $\phi(r) = r^3$ |
| CS function $\phi_{3,0}$ | $\phi_{3,0}(z) = (1 - z)^7\left(5 + 35z + 101z^2 + 147z^3 + 101z^4 + 35z^5 + 5z^6\right)$ |
| CS function $\phi_{2,0}$ | $\phi_{2,0}(z) = (1 - z)^5\left(1 + 5z + 9z^2 + 5z^3 + z^4\right), \; z = r/r_0$ |
| CS function $\phi_{3,1}$ | $\phi_{3,1}(z) = (1 - z)^6\left(6 + 36z + 82z^2 + 72z^3 + 30z^4 + 5z^5\right)$ |
| CS function $\phi_{2,1}$ | $\phi_{2,1}(z) = (1 - z)^4\left(4 + 16z + 12z^2 + 3z^3\right)$ |

**Table 1.**
*Some commonly used RBFs [65].*

Using the $n$ available sample points and function values, a total of $n$ equations can be written, as:

$$g_1 = \widetilde{g}(\mathbf{x}_1) = \sum_{i=1}^{n} \lambda_i \phi\left(\left\|\mathbf{x}_1 - \mathbf{x}_i\right\|\right) \tag{12}$$

$$\vdots$$

$$g_n = \widetilde{g}(\mathbf{x}_n) = \sum_{i=1}^{n} \lambda_i \phi\left(\left\|\mathbf{x}_n - \mathbf{x}_i\right\|\right) \tag{13}$$

Writing all $n$ equations in matrix form gives:

$$\mathbf{g} = \mathbf{A}\boldsymbol{\lambda} \tag{14}$$
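As a concrete illustration of the response surface fit of Eqs. (2)–(4), the sketch below (Python with NumPy; the two-variable limit-state function, sample count, and prediction point are invented for illustration) assembles the design matrix **X** from the quadratic basis and solves for $\widetilde{\boldsymbol{\beta}}$:

```python
import numpy as np

def quadratic_basis(X):
    """Rows of the design matrix for Eq. (2): 1, x_i, x_i^2, x_i*x_j (i < j)."""
    n, s = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(s)]
    cols += [X[:, i] ** 2 for i in range(s)]
    cols += [X[:, i] * X[:, j] for i in range(s - 1) for j in range(i + 1, s)]
    return np.column_stack(cols)

# Hypothetical limit-state function g(x) = x1^2 + 2*x1*x2 - x2; it is exactly
# quadratic, so the response surface should recover it to machine precision.
rng = np.random.default_rng(0)
Xs = rng.uniform(-1.0, 1.0, size=(20, 2))        # n = 20 sample points, s = 2
g = Xs[:, 0] ** 2 + 2 * Xs[:, 0] * Xs[:, 1] - Xs[:, 1]

A = quadratic_basis(Xs)
beta = np.linalg.solve(A.T @ A, A.T @ g)          # Eq. (4): (X^T X)^{-1} X^T g

x_new = np.array([[0.3, -0.5]])
g_tilde = quadratic_basis(x_new) @ beta           # surrogate prediction
```

In a real reliability analysis, `g` would come from the expensive model evaluations at the design of experiments, not from a closed-form test function.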
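The least squares SVM of Eqs. (5)–(7) reduces to one dense linear solve once the kernel matrix $\boldsymbol{\Omega}$ is formed. A minimal sketch, assuming NumPy and a Gaussian kernel (the function names, kernel width, and the sine test data are illustrative, not from the chapter):

```python
import numpy as np

def rbf_kernel(X1, X2, width=1.0):
    """Gaussian kernel K(x, x') = exp(-||x - x'||^2 / (2*width^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def lssvm_fit(Xs, g, gamma=1e6, width=1.0):
    """Solve Eq. (7) for the threshold b and the multipliers alpha."""
    n = len(g)
    Omega = rbf_kernel(Xs, Xs, width)
    K = np.zeros((n + 1, n + 1))
    K[0, 1:] = 1.0                           # 1^T
    K[1:, 0] = 1.0                           # 1
    K[1:, 1:] = Omega + np.eye(n) / gamma    # Omega + gamma^{-1} I
    rhs = np.concatenate(([0.0], g))
    sol = np.linalg.solve(K, rhs)
    return sol[0], sol[1:]                   # b, alpha

def lssvm_predict(x, Xs, b, alpha, width=1.0):
    """Eq. (5): g~(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(np.atleast_2d(x), Xs, width) @ alpha + b

# Illustrative 1-D data: 8 samples of sin(x) on [0, pi].
Xs = np.linspace(0.0, np.pi, 8).reshape(-1, 1)
g = np.sin(Xs).ravel()
b, alpha = lssvm_fit(Xs, g, gamma=1e6, width=0.5)
```

Note that the first block row of Eq. (6) forces $\mathbf{1}^T\boldsymbol{\alpha} = 0$, which is a useful sanity check on any implementation.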
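The Gaussian correlation function of Eq. (10) used by Kriging can be sketched as follows (NumPy; the sample points and $\theta_k$ values are arbitrary illustrations):

```python
import numpy as np

def gauss_corr(X1, X2, theta):
    """Eq. (10): R(x_i, x_j) = exp(-sum_k theta_k |x_i^k - x_j^k|^2)."""
    theta = np.asarray(theta)
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2   # squared differences per component
    return np.exp(-(d2 * theta).sum(-1))

Xs = np.array([[0.0, 0.0],
               [0.5, 1.0],
               [1.0, 0.2]])                       # three sample points, s = 2
R = gauss_corr(Xs, Xs, theta=[2.0, 0.5])

# R is symmetric with unit diagonal: each point correlates perfectly with
# itself, and correlation decays with distance at a rate set by theta_k.
```

Fitting the $\theta_k$ themselves is usually done by maximum likelihood, which the chapter leaves to the cited references [38, 39].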
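Similarly, the RBF model of Eqs. (11)–(14) reduces to forming the $n \times n$ matrix $\mathbf{A}$ of basis-function values and solving $\mathbf{g} = \mathbf{A}\boldsymbol{\lambda}$. A minimal sketch with the Gaussian RBF from Table 1 (the 1-D cosine test data, point count, and value of $c$ are illustrative):

```python
import numpy as np

def rbf_matrix(Xq, Xc, c=1.0):
    """Entries phi(||x_q - x_c||) using the Gaussian RBF of Table 1."""
    r = np.linalg.norm(Xq[:, None, :] - Xc[None, :, :], axis=-1)
    return np.exp(-c * r ** 2)

Xs = np.linspace(0.0, 2.0, 5).reshape(-1, 1)   # n = 5 sample points
g = np.cos(3.0 * Xs).ravel()                    # hypothetical responses

A = rbf_matrix(Xs, Xs)                          # system matrix of Eq. (14)
lam = np.linalg.solve(A, g)                     # unknown weights lambda

# Because the model interpolates, predictions at the samples reproduce g.
g_tilde = rbf_matrix(Xs, Xs) @ lam
```

One practical caveat: the Gaussian RBF matrix becomes ill-conditioned as sample points cluster, which is one motivation for the compactly supported (CS) functions in Table 1.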
