Global Optimization Method to Multiple Local Optimals with the Surface Approximation Methodology and Its Application for Industry Problems

*Ichiro Hagiwara*

## **Abstract**

Although, generally speaking, a great number of function evaluations may be required until convergence, the problem can be solved effectively by using neural networks. Here, techniques to search the region of interest containing the global optimal design, using designs selected by random seeds, are investigated. Techniques for obtaining a more accurate approximation with a Holographic Neural Network (HNN), improved by applying a penalty function to the generalized inverse matrix, are also investigated. Furthermore, a mapping method for extrapolation is proposed to make the technique available for general applications in structural optimization. Application examples show that the HNN can be expected to provide more powerful and feasible surface functions for response surface methodology than polynomials in function approximation. Finally, real design examples of vehicle performance, such as idling vibration, booming noise, vehicle component crashworthiness, and a combined problem of vehicle crashworthiness and restraint device performance in a head-on collision, are used to show the effectiveness of the proposed method.

**Keywords:** nonlinear problem, holographic neural network, function approximation, response surface methodology, structural analysis, vehicle idling vibration, vehicle booming noise, vehicle crashworthiness, vehicle occupant restraints

#### **1. Introduction**

Due to recent developments in computer science and software technology, fluid analysis, nonlinear structural analysis and their combined analysis have become possible for large-scale models and will eventually be employed in manufacturing design. The importance of being able to make rational and correct decisions during the early stages of design is currently being emphasized. However, nonlinear problems such as crashworthiness analysis require considerable time for a single function evaluation, even on supercomputers. Optimization techniques that cover several design disciplines (multidisciplinary design optimization, MDO [1]) have been hot topics recently. Sensitivity-based optimization approaches [2–15], which sequentially construct local approximations, are widely employed and provide valuable contributions to linear optimization problems. Unfortunately, sensitivity approaches are not available for nonlinear optimization problems. For this purpose, response surface methodology (RSM) works effectively to achieve these design optimizations. The most important research goal for RSM is to considerably reduce the extremely large number of function evaluations, for the sake of computational cost, and to obtain a robust approximate design model. That is, instead of time-consuming finite element method (FEM) analysis, robust approximation models can be constructed by RSM to evaluate responses rapidly in the design space of interest, even when the response surface of the structure is infiltrated with computational noise. As early examples, White Jr., K.P. used RSM in a study of passenger-car crashworthiness [16] and Giunta, A.A. applied RSM to high-speed civil transport design [17]. Several key issues must be considered in developing a response surface model. One issue for RSM is the selection of the activation function. Most response surfaces are expressed in terms of polynomials, among which *quadratic polynomials* are the most popular [18–25]. Unfortunately, a quadratic polynomial is not always a proper activation function for response surfaces and may not provide a satisfactory solution, particularly over wide ranges of design space with strong nonlinearity. This limits the validity of RSM to only a part of the design space. For this reason, the feed-forward multilayer back-propagation neural network came to be seen as a potential activation function in response surface methodology [26]. Carpenter, W. made a comparison between polynomials and multilayer neural networks [27].
There are some problems in using back-propagation neural networks: (1) the learned result is influenced by the initial parameters of the network due to the nonlinear gradient descent algorithm, and (2) training the neural network costs computational time. In contrast, Hagiwara et al. first employed the Holographic Neural Network (HNN) [28] as the activation function of the response surface, compared it with the multilayer feed-forward neural network, and demonstrated that the former type of neural network had a much higher approximation accuracy and that the training cost of the HNN was about 1/1000 of the computational time of a multilayer feed-forward neural network, in a case where the designer is assumed to know roughly the location of the local optimum [29, 30].

Another problem in using RSM is the difficulty of predicting the error incurred by using response surface function values rather than the "true" values from the analysis code. Therefore, extensive RSM research has focused on the selection of design trial points in the design space; Box–Behnken Designs (BBD [20, 21, 25]) and Central Composite Design (CCD [19, 22–24]) are two typical choices. Central Composite Rotatable Design (CCRD [31]) and Face-Centered Composite Design (FCCD [18]) have also been applied to optimization studies in recent years.

Incidentally, it is now said that the third AI boom has arrived with the birth of deep learning [32]. Accordingly, the use of artificial neural network models to derive response surface plots has recently found increasing use in optimization modeling. In most such studies, CCD is used, as in [33–35]. These studies mainly fall into two types: in one, the neural network itself serves as the RSM step [33]; in the other, an artificial neural network function is used as the activation function of the RSM [34, 35]. Deep learning requires a great deal of data, and searching for the global optimum of a multiple-local-optima problem by RSM, as shown here, also requires a great deal of data.

Since the exact form of the response surface remains unclear in general applications and the accuracy of the function approximation near the global optimal design is very important, techniques to search the design space containing the global optimal design are examined. For the purpose of determining the global optimal design, techniques for searching for the most probable optimal design (MPOD) over the entire design space and techniques for determining a more accurate approximation near the global optimal designs using the extrapolation ability of the HNN are proposed [36–38].

*Global Optimization Method to Multiple Local Optimals with the Surface Approximation… DOI: http://dx.doi.org/10.5772/intechopen.98907*

The MPOD is, as mentioned above, a sequential procedure composed of two steps. The first step determines the most likely global optimal design over the entire design space using conditioned random seeds, which control the mutual distances between every two designs, to scan the wide design space rapidly. The second step verifies and improves the accuracy of the approximation of the objective and constraint functions at the optimal design. The trade-off between approximation accuracy and computational cost can be well balanced by a preset threshold on the mutual distance between designs. Design examples for vehicle structures, such as idling vibration, interior noise, crashworthiness, and a combined problem of vehicle crashworthiness and restraint device performance in a head-on collision, demonstrate the validity and utility of this method. The method we developed can omit much useless data and is unique with respect to both of the two important issues for RSM: the activation function and the selection of design points. This approach is also useful for the execution of deep learning. Let us begin by presenting this method.

#### **2. Theory of holographic neural network (HNN)**

#### **2.1 Basic theory of HNN**

Here, $x = \{x_1, x_2, \ldots, x_k\}^T$ is the stimulus data and $y = \{y_1, y_2, \ldots, y_m\}^T$ is the training data. If there are $n$ learning data, they are presented as in Eq. (1).

$$X = \begin{pmatrix} x_1^T \\ \vdots \\ x_n^T \end{pmatrix} = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1k} \\ x_{21} & x_{22} & \cdots & x_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nk} \end{pmatrix}, \quad Y = \begin{pmatrix} y_1^T \\ \vdots \\ y_n^T \end{pmatrix} = \begin{pmatrix} y_{11} & y_{12} & \cdots & y_{1m} \\ y_{21} & y_{22} & \cdots & y_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nm} \end{pmatrix} \tag{1}$$

Here, $X$ and $Y$ are the input and output matrices. Each element of $X$ and $Y$ is converted to angles $\theta_{ai}\ (a = 1, \ldots, n;\ i = 1, \ldots, k)$ and $\phi_{aj}\ (a = 1, \ldots, n;\ j = 1, \ldots, m)$.

$$\theta_{ai} = f_x(x_{ai}), \qquad \phi_{aj} = f_y\left(y_{aj}\right) \tag{2}$$

Linear, sigmoid and arctangent functions can be applied for $f_x$ and $f_y$. In particular, for the stimulus data, the sigmoid function is used as follows.

$$\theta_{ai} = \frac{2\pi}{1 + e^{-c_{ai}(x_{ai} - \mu_{ai})/\sigma_{ai}}} \tag{3}$$

Here, $\mu_{ai}$ and $\sigma_{ai}$ are the mean value and standard deviation of $x_{ai}$, respectively, and $\lambda_{ai}$ defines a vector magnitude bounded within the unit circle (0 to 1), expressing a weighting or dominance for each element of the stimulus field. If no design variable dominates, $\lambda_{ai} = 1$. $c_{ai}$ is a coefficient that adjusts the slope of the transformation, as shown in **Figure 1**.
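As a concrete illustration, the sigmoid transform of Eq. (3) can be sketched in Python with NumPy. The function name `phase_angles` and the use of per-column statistics for $\mu$ and $\sigma$ are our assumptions for this sketch:

```python
import numpy as np

def phase_angles(X, c=1.0):
    """Map each element of the stimulus matrix X to a phase angle by
    the sigmoid transform of Eq. (3):
        theta = 2*pi / (1 + exp(-c * (x - mu) / sigma)),
    with mu and sigma the per-column mean and standard deviation.
    A smaller c flattens the slope, which later allows extrapolation
    beyond the trained range (see Figure 1)."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return 2.0 * np.pi / (1.0 + np.exp(-c * (X - mu) / sigma))

# A sample equal to the column mean maps to the mid-range angle pi.
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
theta = phase_angles(X)
```

All angles stay strictly inside $(0, 2\pi)$, which is what makes the subsequent complex mapping of Eq. (4) well defined.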

Next, the above angles are mapped on complex representation by using exponential function as follows.

#### **Figure 1.**

*Transform function of the phase angle: If the untrained designs are outside the trained design space, the predicted values are the same as those at the trained design edges 0 or 1 for the case of $c_{ai} = 1.0$. In contrast, the predicted function values are extended beyond the trained design edges 0 or 1 for the case of $c_{ai} = 0.5$. In this way, extrapolation can be performed and the trained designs need not lie on the boundary.*

$$s_{ai} = \lambda_{ai}\, e^{i\theta_{ai}}, \qquad r_{aj} = \gamma_{aj}\, e^{i\phi_{aj}} \tag{4}$$

From the above, *X* and *Y* are converted to stimulus *S* and response *R*.

$$S = \begin{pmatrix} s\_{11} & s\_{12} & \cdots & s\_{1k} \\ s\_{21} & s\_{22} & \cdots & s\_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ s\_{n1} & s\_{n2} & \cdots & s\_{nk} \end{pmatrix} R = \begin{pmatrix} r\_{11} & r\_{12} & \cdots & r\_{1m} \\ r\_{21} & r\_{22} & \cdots & r\_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ r\_{n1} & r\_{n2} & \cdots & r\_{nm} \end{pmatrix} \tag{5}$$

With $H = [h_1, \ldots, h_m]$, the transfer function of the HNN is obtained by minimizing the difference between the teaching data $R$ and $SH$, as follows:

$$H = (S^*S)^{-1}S^*R \tag{6}$$

Here, the symbol $*$ denotes the conjugate transpose. For a new input $U$, the output can be predicted as $V = UH$; $H$ is the transfer function of the HNN. It is inefficient to evaluate Eq. (6) directly because it contains a matrix inversion. When the number of activation functions is large, this leads to considerable calculation time and may cause numerical error. The following iterative process is used to save calculation time.

$$H_0 = S^*R \tag{7}$$

The enhancement of parameters will be performed by

$$\Delta H_{k+1} = S^*(R - SH_k) \tag{8}$$

$$H\_{k+1} = H\_k + \Delta H\_{k+1} \tag{9}$$

The iteration could be terminated when the error between the approximate response and the true response is orthogonal to every column vector of $S$ in the complex domain. In practice, the iteration is terminated when the error becomes smaller than a preset threshold value, eps.


$$J(H_{k+1}) = (R - SH_{k+1})^*(R - SH_{k+1}) \le \text{eps} \tag{10}$$
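The training step, Eqs. (6)–(10), can be sketched in Python with NumPy; both the direct solve and the iterative refinement are shown. The function names and the step size `eta` added to the update are our assumptions: with `eta = 1` the update reduces to Eq. (8) exactly, but a smaller step is needed for the iteration to contract for an arbitrary $S$.

```python
import numpy as np

def hnn_train_direct(S, R):
    """Direct solution of Eq. (6), H = (S* S)^{-1} S* R, via a linear
    solve rather than an explicit inverse for numerical stability."""
    Sh = S.conj().T
    return np.linalg.solve(Sh @ S, Sh @ R)

def hnn_train_iterative(S, R, eps=1e-12, max_iter=20000):
    """Iterative refinement of Eqs. (7)-(10).  The step size eta is
    our addition (eta = 1 recovers Eq. (8)); eta = 1/lambda_max(S* S)
    keeps the update contractive."""
    Sh = S.conj().T
    eta = 1.0 / np.linalg.norm(S, 2) ** 2
    H = eta * (Sh @ R)                     # scaled H0 of Eq. (7)
    for _ in range(max_iter):
        E = R - S @ H                      # residual
        if np.vdot(E, E).real <= eps:      # termination test, Eq. (10)
            break
        H = H + eta * (Sh @ E)             # Eqs. (8) and (9)
    return H

# Both solvers recover the transfer matrix of a consistent system.
rng = np.random.default_rng(0)
S = np.exp(1j * rng.uniform(0, 2 * np.pi, (10, 4)))   # unit phasors
H_true = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
R = S @ H_true
H_d = hnn_train_direct(S, R)
H_i = hnn_train_iterative(S, R)
V = S @ H_d                                # prediction V = U H with U = S
```

The iterative variant avoids forming $(S^*S)^{-1}$ altogether, which is the point made in the text about saving calculation time for large numbers of activation functions.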

Within the HNN, limitations of mapping accuracy can be largely overcome through a pre-processing operation that generates higher-order product terms. For example, for two design variables ($n = 2$), the stimulus $s = [\lambda_1 e^{i\theta_1}, \lambda_2 e^{i\theta_2}]$ can be expanded to 12 terms if the expansion is carried to the second order.

$$\overline{S} = \big[\lambda_1 e^{i\theta_1},\ \lambda_2 e^{i\theta_2},\ \lambda_1 e^{-i\theta_1},\ \lambda_2 e^{-i\theta_2},\ \lambda_1^2 e^{i2\theta_1},\ \lambda_2^2 e^{i2\theta_2},\ \lambda_1^2 e^{-i2\theta_1},\ \lambda_2^2 e^{-i2\theta_2},\ \lambda_1\lambda_2 e^{i(\theta_1 + \theta_2)},\ \lambda_1\lambda_2 e^{i(-\theta_1 + \theta_2)},\ \lambda_1\lambda_2 e^{-i(\theta_1 + \theta_2)},\ \lambda_1\lambda_2 e^{i(\theta_1 - \theta_2)}\big] \tag{11}$$
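A minimal sketch of this second-order expansion for two variables (the function name is ours):

```python
import numpy as np

def expand_second_order(theta, lam=(1.0, 1.0)):
    """Expand a two-variable stimulus [l1*e^{i t1}, l2*e^{i t2}] into
    the 12 second-order product terms of Eq. (11)."""
    t1, t2 = theta
    l1, l2 = lam
    e = lambda a: np.exp(1j * a)
    return np.array([
        l1 * e(t1),              l2 * e(t2),
        l1 * e(-t1),             l2 * e(-t2),
        l1**2 * e(2 * t1),       l2**2 * e(2 * t2),
        l1**2 * e(-2 * t1),      l2**2 * e(-2 * t2),
        l1 * l2 * e(t1 + t2),    l1 * l2 * e(-t1 + t2),
        l1 * l2 * e(-(t1 + t2)), l1 * l2 * e(t1 - t2),
    ])

# One stimulus row expanded from 2 to 12 complex terms.
row = expand_second_order((0.3, 1.1))
```

Each training row of $S$ is expanded this way before solving for $H$, which is how the HNN captures interactions between design variables without changing the linear solve itself.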

Furthermore, extrapolation of the approximated response function can easily be performed with a trained HNN by presetting the coefficient $c_{ai}$ in Eq. (3) to a small value so as to flatten the slope. **Figure 1** shows the effect of this coefficient. In **Figure 1**, the experimental designs used for training the neural network are located within the range [0, 1]; the dotted line expresses the transform when $c_{ai} = 1.0$ and the solid line the transform when $c_{ai} = 0.5$. We can observe from the figure that if the predicted designs are outside the trained design space, then the predicted function values are effectively the same as those at the trained design edges 0 or 1 in the case of $c_{ai} = 1.0$. In contrast, the predicted function values keep their trend beyond the trained design edges 0 or 1 in the case of $c_{ai} = 0.5$. **Figure 2** shows the effect of extrapolation. In this figure, space "I" is the design space whose interior designs are used for the training approximation and space "D" is the entire feasible design space. Approximation using designs in space "I" can extrapolate the values in space "D". This is an important issue for reducing functional evaluations, and some of its advantages are: 1) design spaces with irregular shapes can be considered, 2) any number of experimental designs can be considered, and 3) searching for reasonable optimal designs over a wide space becomes possible.

#### **Figure 2.**

*Explanation of extrapolation: Space "I" expresses the design space inside which designs are used for approximation by training and space "D" expresses the entire feasible design space. Approximation using designs in space "I" may extrapolate the values of space "D".*

#### **2.2 Examination for getting the inverse matrix of transfer matrix**

Although the procedure of Eqs. (7)–(10) can be used to avoid inverting $(S^*S)$, for problems that are not too large the inverse of $(S^*S)$ is calculated directly. If $(S^*S)$ does not have an inverse in Eq. (6), we can use the Moore–Penrose generalized inverse $\hat{H}_{MP}$, which is generated by minimizing $\|h_k\|^2\ (k = 1, \ldots, m)$. Here $h_k$ is the $k$th column vector of $H$.

$$\hat{H}_{MP} = \overline{S}^*\overline{S}\left(\overline{S}^*\overline{S}\,\overline{S}^*\overline{S}\right)^{-1}_G \overline{S}^* R \tag{12}$$

Here $(\ )^{-1}_G$ denotes a generalized inverse matrix.

However, the following is an example in which the predicted values obtained with $\hat{H}_{MP}$ are not satisfactory for the function of **Figure 3**.

**Figure 4** shows the approximation function generated with the Moore–Penrose *pseudo-inverse matrix* using the 23 sampling points marked ● in **Figure 5**. Here the basis of the HNN is extended to the 3rd order, so that the total number of basis functions is 30.

It can be seen that the surface shape in **Figure 4** is quite different from the one in **Figure 3**. This is because the over-fitting phenomenon, peculiar to neural networks, occurs: the precision of the predicted values at points other than the sampling points becomes worse because the values at the sampling points are force-fitted to the given HNN function. In this case the HNN cannot be used for prediction.

Instead, the following penalty function was developed [39].

$$f(h_k) = (r_k - Sh_k)^*(r_k - Sh_k) + \rho_1 h_k^* O h_k + \rho_2 h_k^* h_k \quad (k = 1, \ldots, m) \tag{13}$$

Here $O$ is the square matrix whose elements are all 1. The first term on the right-hand side represents the approximation error with respect to the output, the second term the average of the regression coefficients $h_k$, and the third term their variation. The second and third terms are weighted by $\rho_1$ and $\rho_2$. In Eq. (13), $H$ cannot be determined uniquely without some

**Figure 3.** *Visualization of function used for test.*


#### **Figure 4.**

*Approximated function using Moore-Penrose generalized inverse: The base function of HNN is extended to the 3rd order. The surface shape is quite different from the original one because over fitting phenomena occurs.*

**Figure 5.** *Layout of sampling point on design space.*

constraints, and so such conditions are added. Although $H$ can theoretically be determined uniquely without the second term, the solution is more stable with it. Minimizing the penalty function (13) gives $H$ as follows:

$$H_P = (S^*S + \rho_1 O + \rho_2 I)^{-1} S^* R \tag{14}$$

Here, $I$ is the identity matrix. The approximation function obtained using this $H_P$ is depicted in **Figure 6**, where the same 23 sampling data are used and $\rho_1 = \rho_2 = 1.0$. It can be seen that the over-fitting phenomenon does not occur in **Figure 6**. The approximation functions of the two methods are evaluated by the root-mean-square error.

$$E = \sqrt{\frac{1}{N} \sum_{k=1}^{N} \left(y_k - \hat{y}_k\right)^2} \tag{15}$$

**Figure 6.** *Approximated function using the penalty coefficient method: The over-fitting phenomenon does not occur.*

Here $N$ is the number of estimation points, $y_k$ is the true function value and $\hat{y}_k$ is the value of the approximation function. The error estimated using the 23 sampling points is called the "approximation error", and the error estimated using 2601 points, generated by dividing each of the two axes of the design region into 50 equal intervals (51 × 51 = 2601 points), is called the "prediction error". From **Table 1** it can be seen that in the Moore–Penrose case the approximation error is very small but the prediction error becomes worse. On the other hand, with the proposed method the approximation error is not as small as in the Moore–Penrose case, but the prediction error is moderately suppressed. With the proposed method, the choice of $\rho_1$ and $\rho_2$ is important. If they are very small, the solution approaches the Moore–Penrose approximation function because the first term on the right-hand side of Eq. (14) dominates. On the other hand, if they are too large, the first term of Eq. (14) becomes too small for the information in the sampling data to be used effectively. Therefore, when the number of sampling data is large and the data are spread evenly over the design region, $\rho_1$ and $\rho_2$ should be small so that the information in the sampling data is used effectively. Conversely, if it is difficult to obtain enough sampling data, it is better to make $\rho_1$ and $\rho_2$ reasonably large so that the design region is roughly approximated from the small amount of information without causing the over-fitting phenomenon.

In the case of Moore–Penrose, the approximation error is very small but the prediction error becomes worse. In the case of the penalty coefficient method, the approximation error is not as small as in the Moore–Penrose case, but the prediction error is moderately suppressed.


**Table 1.**

*Comparison between two methods by square-mean-error.*
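The penalized solve of Eq. (14) and the error measure of Eq. (15) can be sketched together in Python. The function names, the toy system, and the specific numbers are our assumptions; the comparison only mirrors Table 1 qualitatively (whether the penalty also improves prediction depends on the data and on $\rho_1$, $\rho_2$, as discussed above):

```python
import numpy as np

def hnn_train_penalty(S, R, rho1=1.0, rho2=1.0):
    """Penalized transfer matrix of Eq. (14):
    H_P = (S* S + rho1*O + rho2*I)^{-1} S* R,
    with O the all-ones matrix and I the identity."""
    n = S.shape[1]
    Sh = S.conj().T
    A = Sh @ S + rho1 * np.ones((n, n)) + rho2 * np.eye(n)
    return np.linalg.solve(A, Sh @ R)

def rmse(y_true, y_pred):
    """Root-mean-square error E of Eq. (15), for real or complex data."""
    d = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    return float(np.sqrt(np.mean(d ** 2)))

# Under-determined toy system (6 samples, 10 basis terms): the
# Moore-Penrose solution reproduces the samples exactly, so its
# "approximation error" is ~0, while the penalized solution trades a
# small approximation error for a regularized, finite H.
rng = np.random.default_rng(4)
S = np.exp(1j * rng.uniform(0, 2 * np.pi, (6, 10)))
H_ref = 0.1 * (rng.normal(size=(10, 1)) + 1j * rng.normal(size=(10, 1)))
R = S @ H_ref
H_mp = np.linalg.pinv(S) @ R              # Moore-Penrose solution
H_p = hnn_train_penalty(S, R)             # penalized solution, Eq. (14)
approx_mp = rmse(R, S @ H_mp)             # fit on the sampling points
approx_p = rmse(R, S @ H_p)
```

A "prediction error" in the sense of the text would then be computed with the same `rmse` on a dense grid of points not used for training.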


#### **3. Optimization method with holographic neural network**

#### **3.1 Procedure for MPOD**

A two-step sequential optimization is proposed to determine the optimal design, as shown in **Figure 7**. The first step scans the entire design space, using designs selected by conditioned random seeds, to determine where the global optimum design most probably exists. The second step increases the accuracy of the most likely global optimum design by adding information where required. The details of these two steps are described below. The optimization problem is defined as

$$\mathbf{Max}f(\mathbf{x})\tag{16}$$

$$\text{Subject to } \mathbf{g}\_j(\mathbf{x}) \le \mathbf{0}, (j = \mathbf{1}, 2, 3, \dots, \mathbf{m}) \tag{17}$$

Here m is the number of constraints.

Step 1: Search for the most likely global optimum design over the entire design space. Select initial trial designs $x_k\ (k = 1, 2, \ldots, N)$, where $N$ is the initial number of design points. Although this is not the only choice, $N = (n+1)(n+2)/2$ is generally recommended, where $n$ is the number of design variables. In practice, the choice of $N$ depends on the scale of the problem, such as the number of design variables and the computation time of each functional evaluation. For instance, following the formulation of CCD (Central Composite Design), $N = 2n + 1$ points are recommended for problems with a large number of design variables. It should be emphasized that designs produced by conditioned random seeds are employed to search the entire design space. Uniformly distributed conditioned random seeds are used to provide a good balance of search across the design space. The design conditions are required to satisfy Eq. (18).
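The design-count rules quoted above are simple arithmetic and can be sketched as follows (the helper names are ours):

```python
def n_quadratic(n):
    """Initial designs for a full quadratic model: N = (n+1)(n+2)/2."""
    return (n + 1) * (n + 2) // 2

def n_large(n):
    """Reduced recommendation N = 2n + 1 for many design variables."""
    return 2 * n + 1

# (quadratic rule, reduced rule) for a few design-variable counts.
counts = {n: (n_quadratic(n), n_large(n)) for n in (2, 5, 10)}
```

The gap between the two rules grows quadratically with $n$, which is why the reduced rule is preferred when each functional evaluation is expensive.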

#### **Figure 7.**

*Flowchart of MPOD: The first step is to scan the entire design space to determine where the global optimum design most probably exists, using designs selected by conditioned random seeds. The second step is to increase the accuracy of the most likely global optimum design by adding information where required.*

$$d_m = \min\left\{\frac{\|x_i - x_j\|}{\|x_{CG}\|}\right\} \quad (1 \le i, j \le N,\ i \ne j,\ m = 1, 2, \ldots, p) \tag{18}$$

Here $x_{CG}$ is the normalization parameter, which can be selected as the geometrical center of the designs $x_k = \{x_{1,k}, x_{2,k}, \ldots, x_{n,k}\}\ (k = 1, \ldots, N)$; $d_m \ge d_{min}\ (m = 1, 2, \ldots, p)$, where $d_{min}$ is the threshold distance and $p$ is the total number of design combinations, $p = {}_N C_2$. If no new design is obtained after 5000 random trials within the threshold distance, the threshold distance is decreased by $d_{min} = \beta d_{min}\ (0 < \beta < 1)$. The designs satisfying Eq. (18) and their objective function values $f(x_i)\ (i = 1, \ldots, N)$ and constraint values $g_j(x_i)\ (i = 1, \ldots, N;\ j = 1, \ldots, m)$ are used to train the neural network to approximate the response surfaces $f(x)$ and $g_j(x)$. A global optimal design $x_{op}(0)$ of the approximation function can then be found by optimization codes. Then $S$ more designs are created under the same condition $d_m \ge d_{min}\ (m = 1, 2, \ldots, p)$ and used to improve the accuracy of the approximation. Here, $S$ is generally set to $n$, although this is not the only choice. The $S$ random designs are drawn from a Gaussian normal distribution with mean $x_{op}(0)$, and the deviation must be carefully chosen. The approximation accuracy can be judged by the movement of the approximate optimal design location; therefore, the deviation should change with the movement distance, $\sigma = \alpha \Delta d$. Here, $\Delta d$ is the distance between designs calculated by Eq. (19) and $\alpha$ is a coefficient that controls the deviation value and affects both the approximation accuracy and the number of functional evaluations, as discussed below. The $k$th sequentially improved approximation function of the neural network is used to obtain a new optimal design $x_{op}(k)$. Convergence of the $k$th sequential approximation terminates the random-seed operation:

$$\Delta d = \frac{\|x_{op}(k) - x_{op}(k-1)\|}{\|x_{CG}\|} \le \varepsilon_1 \tag{19}$$

Thus far, the above sequential approximation is used to determine the approximation global optimal location.
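The conditioned-random-seed generation of Step 1 (Eq. (18), with the 5000-trial relaxation rule) can be sketched as rejection sampling. The function name and the bounds used in the example are our assumptions:

```python
import numpy as np

def conditioned_random_seeds(bounds, N, d_min=0.2, beta=0.8,
                             max_trials=5000, rng=None):
    """Generate N designs whose mutual distances, normalized by the
    norm of the design-space center (Eq. 18), all exceed d_min.
    If max_trials consecutive candidates fail, the threshold is
    relaxed by d_min <- beta * d_min, as in the text."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, float)
    x_cg = np.linalg.norm((lo + hi) / 2.0)   # normalization parameter
    designs = []
    while len(designs) < N:
        for _ in range(max_trials):
            x = rng.uniform(lo, hi)
            if all(np.linalg.norm(x - y) / x_cg > d_min for y in designs):
                designs.append(x)
                break
        else:                                # no success: relax threshold
            d_min *= beta
    return np.array(designs), d_min

# A 2-D box kept away from the origin so the normalization is well defined.
bounds = ([1.0, 1.0], [3.0, 3.0])
X, d_final = conditioned_random_seeds(bounds, N=6, d_min=0.3, rng=0)
```

Every accepted pair is separated by at least the final (possibly relaxed) threshold, which is exactly the trade-off the text describes between space coverage and the number of functional evaluations.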

Step 2: To verify and improve the accuracy of optimal design and its performance. The accuracy check is based on

$$\frac{\|f(x_{OP}(k)) - \overline{f}(x_{OP}(k))\|}{\|f(x_{OP}(k))\|} \le \varepsilon_2 \tag{20}$$

where $x_{OP}(k)$ is the optimal design obtained in Step 1, $\varepsilon_2$ is the termination threshold, and $f(x_{OP}(k))$ and $\overline{f}(x_{OP}(k))$ are the corresponding exact and approximate function values. If Eq. (20) is not satisfied, $n$ learning points generated from normally distributed random numbers centered at $x_{OP}(k)$ are added until Eq. (20) is satisfied. Here $n$ is the number of design variables and the variance $\sigma_i$ is decided by Eq. (21).

$$\sigma\_i = a\_i(\mathbf{x}\_{\text{OP}}(k) - \mathbf{x}\_{\text{OP}}(k-1)) \ (i = 1, 2, \dots, \dots, n) \tag{21}$$
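Step 2 reduces to two small computations, the relative accuracy test of Eq. (20) and the per-variable variances of Eq. (21); a sketch with our own function names and illustrative numbers:

```python
import numpy as np

def accuracy_satisfied(f_exact, f_approx, eps2):
    """Termination test of Eq. (20): relative error at the optimum."""
    return abs(f_exact - f_approx) / abs(f_exact) <= eps2

def step2_variances(x_now, x_prev, a):
    """Per-variable variances of Eq. (21) for the added learning
    points, proportional to the last movement of the optimum."""
    return a * (np.asarray(x_now, float) - np.asarray(x_prev, float))

# Illustrative values (not from the text): a 1% relative error passes
# a 5% threshold, and the variances scale the last optimum movement.
ok = accuracy_satisfied(8.105, 8.02, eps2=0.05)
sig = step2_variances([1.0, 2.0], [0.8, 1.7], a=0.5)
```

When `accuracy_satisfied` returns `False`, new designs are drawn from normal distributions centered at the current optimum with these variances, retraining the HNN until the test passes.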

#### **3.2 Necessary designs used for optimal design by MPOD in step 1**

The convergence and efficiency of the proposed method depend on the deviation of the Gaussian random seeds. The probability of finding the global optimal design increases when a larger deviation is used, which searches a wider feasible design space at the cost of more functional evaluations. In contrast, if the deviation is small, the number of functional evaluations decreases but the possibility of finding a smaller cluster is reduced. As illustrated in **Figure 8**, the scatter of design points is adjusted by the deviation value. The method is sufficiently flexible to satisfy any user's requirements through the adjustment of the coefficient $\alpha$. The number of designs needed to find the global optimum can be definitely estimated. For the sake of clarity and without loss of generality, a one-dimensional example is employed to express the estimator, as shown in **Figure 9**. Assume the size of the optimal cluster R is $d_0$ and the size of the entire feasible design space is $D$.

#### **Figure 8.**

*Random data distribution according to standard deviation: The scatter of design points is adjusted by the deviation value.*

#### **Figure 9.**

*Explanation of necessary designs to find the global optimum: The size of the optimal cluster* R *is* $d_0$ *and the size of the entire feasible design space is* $D$.

$$d_0 = \frac{2\min\|x_e - x_c\|}{\|x_c\|} \tag{22}$$

$$f(x_i) > f(x_j); \quad x_i \in R,\ x_j \notin R \tag{23}$$

We define the space R as an optimal cluster, whose size is defined mathematically by Eq. (22); Eq. (23) indicates that the function values within the optimal cluster R are larger than those outside it. $x_e$ is the edge point of the optimal cluster, $x_c$ is the geometrical center of the optimal cluster, and $d_0$ expresses the size of the optimal cluster. If the size of the optimal cluster is known, the maximum number of designs needed to find the optimal cluster with probability one is estimated by Eq. (24).

$$P = \operatorname{int}\left(\frac{2\max\left(\|x_E - x_{CG}\|/\|x_{CG}\|\right)}{d_0}\right) \tag{24}$$

where $x_E$ denotes the points on the edge of the design space, $x_{CG}$ is the geometrical center of the design space, and int denotes rounding to an integer. In this case, if $\varepsilon_1 = d_0$ in Step 1, the procedure can move to Step 2 with no more than $P$ designs.
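The bound of Eq. (24) can be sketched for the one-dimensional case of Figure 9 (the function name and the example interval are ours):

```python
import numpy as np

def max_designs(x_edges, x_cg, d0):
    """Upper bound P of Eq. (24) on the number of designs needed to
    locate an optimal cluster of normalized size d0 with certainty."""
    x_edges = np.asarray(x_edges, float)
    x_cg = np.asarray(x_cg, float)
    reach = 2.0 * max(np.linalg.norm(e - x_cg) for e in x_edges)
    return int(round(reach / np.linalg.norm(x_cg) / d0))

# 1-D design space [1, 5]: center 3, edge-to-center distance 2.
P = max_designs(x_edges=[[1.0], [5.0]], x_cg=[3.0], d0=0.25)
```

With the normalized span $4/3$ and a cluster size of $0.25$, about five designs suffice, illustrating why smaller clusters (smaller $d_0$) demand proportionally more designs.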

#### **4. Numerical examples**

#### **4.1 Comparison of approximation by holographic neural network and polynomials**

The comparison of the HNN with polynomials for the function approximation is performed by the following multiple peaks function, which is shown in **Figure 9**.

$$f(x) = 10x^5\left(x^2 - 2\right)e^{-x^2}, \quad -4.0 \le x \le 3.0 \tag{25}$$

For the above numerical example, the number of training designs is 15 and they are evenly spaced over the design space. The approximation accuracy of the HNN and polynomials is shown in **Figure 10**. In this figure, the horizontal axis expresses the orders of polynomials (a) or orders of the HNN (b), the vertical axis expresses

#### **Figure 10.**

*Performance comparison of polynomial and holographic neural networks: The horizontal axis is the order of the polynomial (left) or the number of expansions for the HNN (right); the vertical axis is the standard variance calculated with the 200 designs evenly spaced in the design space [−4.0, 3.0].*

*Global Optimization Method to Multiple Local Optimals with the Surface Approximation… DOI: http://dx.doi.org/10.5772/intechopen.98907*

the standard variance calculated with the 200 designs evenly spaced in the design space [−4.0, 3.0]. We can see from the results that the HNN is much more accurate than the polynomials and maintains good accuracy beyond the ninth order of expansion. The standard deviation of the HNN at the ninth order is 0.3, while that of the polynomials at the 14th order is higher than 1.0. In other words, a larger number of coefficients must be selected for the polynomials, and thus more experimental designs must be used compared to the HNN. Therefore, the HNN is a more suitable response activation function for this problem than polynomials.
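The polynomial side of this comparison can be reproduced in a few lines: fit least-squares polynomials of increasing order to the 15 evenly spaced training designs of Eq. (25) and evaluate the error on the 200-point grid. This is a sketch only; the exact error values in Figure 10 will differ, and the high-order fits are numerically ill-conditioned, as the text's criticism of polynomials suggests:

```python
import numpy as np

def f(x):
    """Multiple-peaks test function of Eq. (25)."""
    return 10.0 * x**5 * (x**2 - 2.0) * np.exp(-x**2)

# 15 evenly spaced training designs and 200 evaluation designs on
# [-4, 3], as in the text.
x_train = np.linspace(-4.0, 3.0, 15)
x_eval = np.linspace(-4.0, 3.0, 200)
errors = {}
for order in range(2, 13):
    coeffs = np.polyfit(x_train, f(x_train), order)
    resid = f(x_eval) - np.polyval(coeffs, x_eval)
    errors[order] = float(np.sqrt(np.mean(resid ** 2)))
```

Plotting `errors` against the order gives the polynomial curve of Figure 10 (left); the HNN curve would come from repeating the experiment with the expanded complex basis of Eq. (11).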

#### **4.2 Numerical example with holographic neural network for extrapolation application**

To demonstrate the efficiency of this method, a numerical example with multiple local maxima and constraints is used to search for the global optimal design in the feasible design space. The optimization problem is defined as

$$\max f(x_1, x_2)$$

$$f(x_1, x_2) = 3(1 - x_1)^2 e^{-x_1^2 - (x_2 + 1)^2} - 10\left(\frac{x_1}{5} - x_1^3 - x_2^5\right)e^{-(x_1^2 + x_2^2)} - \frac{1}{3}e^{-(x_1 + 1)^2 - x_2^2}$$

$$\text{subject to } \{-2, -2\} \le \{x_1, x_2\} \le \{2, 2\}$$

$$g_1(x) = \frac{15}{80}x_1^2 - \frac{5}{8}x_1 + x_2 - 2 \le 0$$

$$g_2(x) = \frac{1}{2}x_1 - x_2 - 1 \le 0$$

Three local optimal designs, marked ①, ②, ③, exist within the feasible design space depicted in **Figure 11**, which is a contour plot in which the feasible design space is enclosed within the shaded area. The following two cases are examined to investigate the convergence accuracy and the number of functional evaluations.
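As a brute-force reference against which the MPOD result (an optimum of about 8.1, quoted below) can be checked, the objective and constraints can be evaluated on a dense lattice. The function names and the grid resolution are our choices for this sketch:

```python
import numpy as np

def f(x1, x2):
    """Objective function of this example (a 'peaks'-type surface)."""
    return (3.0 * (1.0 - x1)**2 * np.exp(-x1**2 - (x2 + 1.0)**2)
            - 10.0 * (x1 / 5.0 - x1**3 - x2**5) * np.exp(-(x1**2 + x2**2))
            - np.exp(-(x1 + 1.0)**2 - x2**2) / 3.0)

def feasible(x1, x2):
    """Constraints g1 <= 0 and g2 <= 0 of the example."""
    g1 = 15.0 / 80.0 * x1**2 - 5.0 / 8.0 * x1 + x2 - 2.0
    g2 = 0.5 * x1 - x2 - 1.0
    return (g1 <= 0.0) & (g2 <= 0.0)

# Evaluate a 401 x 401 lattice over [-2, 2]^2 and take the feasible
# maximum; infeasible points are masked with -inf.
x = np.linspace(-2.0, 2.0, 401)
X1, X2 = np.meshgrid(x, x)
F = np.where(feasible(X1, X2), f(X1, X2), -np.inf)
k = int(np.argmax(F))
x_best = (float(X1.flat[k]), float(X2.flat[k]))
f_best = float(F.flat[k])
```

Such an exhaustive scan needs 160,000+ function evaluations, which is precisely what MPOD avoids for expensive analyses; here it merely provides ground truth for the cheap analytic objective.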


#### **Figure 11.**

*Contour plot of objective function and constraints: It is a contour plot, and the feasible design space is enclosed within the shadowed area. Three local optimal designs, marked ①, ②, ③ exist within the feasible design space.*

The number of functional evaluations until convergence is 18, and the approximate optimal value is f = 8.105. The history of each iteration is illustrated in **Figure 13**, in which the horizontal and vertical axes have the same meaning as those of **Figure 12**. Further inspection of the global approximation between the above two cases is performed using 1600 (40 × 40) lattice design points covering the entire design space. The deviations of cases (1) and (2) are 1.12 and 1.32,

**Figure 12.**

*Convergence of large standard deviation: The horizontal axis stands for the distance (ds) between the original position (0,0) and the approximation optimal design.*


**Figure 13.**

*Convergence of small standard deviation: The horizontal axis stands for the distance (ds) between the original position (0,0) and the approximation optimal design.*

respectively. That is, a larger deviation of the Gauss random designs gives slower convergence but higher accuracy of approximation over the entire design space, whereas a smaller deviation gives rapid convergence but lower accuracy over the entire design space. Let us confirm the effect of Eqs. (22) and (23). If the termination threshold in Step 1 is set to ε1 = 0.15, the number of designs is 12 for α = 2 and 22 for α = 5. If the termination threshold in Step 1 is set sufficiently small, say ε1 = 1.0 × 10⁻³, then the history of convergence is as depicted in **Figure 14** and the number of functional evaluations in Step 1 becomes 36. However, on the basis of the result shown in **Figure 14** we know

**Figure 14.** *Change of distance in large standard deviation: The number of functional evaluations in step 1 becomes 36.*

that there is no further improvement in determining the global optimal design after 36 functional evaluations, other than increasing the probability of finding it. On the other hand, d0 = 0.9 can be determined for this problem, and Step 1 terminates at eight designs when ε1 = 0.9. In practice, however, d0 is not known in advance; therefore, if one attempts to search for the global optimal design including small clusters, one should select a stricter termination threshold in Step 1 and a larger deviation of the Gauss random designs.
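The Step 1 loop just described — draw a cluster of Gaussian random seeds, evaluate, and stop once the incumbent stops improving by more than ε1 — can be sketched as below. This is an illustrative skeleton, not the chapter's implementation: the function name, the clipping to box bounds, and the toy cluster size are all assumptions.

```python
import random

def step1_random_seeds(obj, sigma, eps1, bounds=(-2.0, 2.0),
                       n_per_iter=3, max_iter=50, seed=0):
    """Sketch of Step 1: draw Gaussian seed clusters (std dev `sigma`)
    and stop when the best value improves by less than eps1 per iteration."""
    rng = random.Random(seed)
    lo, hi = bounds
    best = float("-inf")
    evals = 0
    for _ in range(max_iter):
        prev = best
        for _ in range(n_per_iter):
            # Gaussian seeds around the origin, clipped to the design box
            x = min(max(rng.gauss(0.0, sigma), lo), hi)
            y = min(max(rng.gauss(0.0, sigma), lo), hi)
            best = max(best, obj(x, y))
            evals += 1
        if prev > float("-inf") and best - prev < eps1:
            break
    return best, evals
```

A larger `sigma` spreads seeds over more of the design space (better global coverage, slower convergence); a smaller `sigma` concentrates them (faster convergence, risk of missing clusters) — mirroring the trade-off observed above.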

#### **5. Application to vehicle problems**

#### **5.1 Vehicle idling vibration problem**

#### *5.1.1 Modeling for optimal problem*

NVH (noise, vibration and harshness) is very important for the merchantability of a vehicle; here, idling vibration is treated as one NVH phenomenon. Many studies related to idling vibration exist, in which two types of modeling are used. In one, the body is separated from the powertrain and engine mounts [40]. In the other, the powertrain, engine mounts and body are modeled together, with the body modeled very simply, for example as a solid or with only the first bending and twisting modes [41, 42]. Here the powertrain and engine mounts are modeled together with a rather precise body model.

The powertrain and engine mounts are modeled as shown in **Figure 15**, where the weight of the powertrain is supported by 3 main mounts. Each mount is modeled with 3 springs in the x, y, z directions plus damping. The weight of each mount is so small compared with the body and powertrain that it is ignored.

The powertrain is so stiff compared with the body that it is modeled with 6 rigid-body modes. The body is represented by 44 modes under 50 Hz, calculated by eigenvalue analysis of the FEM model in **Figure 16**. The total model is generated by component mode synthesis [43] using the spring and damping parameters for the engine

#### **Figure 15.**

*Mounts model: Weight of powertrain is supported by 3 main mounts. Each mount is modeled with 3 springs in x, y, z directions and damping.*


**Figure 16.** *Vehicle FEM model: Body is presented with 44 modes under 50 Hz.*

mount, and eigenvectors, eigenvalues and modal damping for the body and powertrain. The total model has 50 degrees of freedom, far fewer than the number of physical coordinates. Only the engine mount parameters are changed in the repeated optimal calculations.

#### *5.1.2 Setting up optimization problems*

#### *5.1.2.1 Position arrangement of engine mount*

**Figure 17** shows 34 candidates for the position arrangement of the engine mounts. They are arranged so that the powertrain is enclosed, and 3 points are selected among these 34. For the 3 selected points *pt*1, *pt*2, *pt*3, the following relationship holds:

$$1 \le p\_{t1} < p\_{t2} < p\_{t3} \le 34 \tag{26}$$

When these 3 mounts are placed close together, the support for the powertrain becomes unstable.

#### **Figure 17.**

*Candidates for location of mounts: 34 candidates for position arrangement of engine mount are arranged so that powertrain is enclosed. 3 points are selected among these 34 points.*

To deal with this, the following restrictions are added:

$$p\_{t2} - p\_{t1} \ge 5$$

$$p\_{t3} - p\_{t2} \ge 5\tag{27}$$

$$p\_{t1} + 34 - p\_{t3} \ge 5$$
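Eqs. (26) and (27) together form a cyclic spacing constraint on the three mount indices (the last inequality closes the ring from *pt*3 back around to *pt*1). The feasible triples can be enumerated exhaustively; a sketch, where `gap` is the minimum index spacing of Eq. (27):

```python
from itertools import combinations

def valid_triples(n=34, gap=5):
    """Enumerate mount-position triples satisfying Eqs. (26) and (27):
    1 <= pt1 < pt2 < pt3 <= n, with at least `gap` between consecutive
    mounts, including the wrap-around gap pt1 + n - pt3."""
    return [(p1, p2, p3)
            for p1, p2, p3 in combinations(range(1, n + 1), 3)
            if p2 - p1 >= gap and p3 - p2 >= gap and p1 + n - p3 >= gap]
```

For n = 34 and gap = 5 this leaves 2380 feasible triples out of the C(34, 3) = 5984 ordered index triples, so the spacing rule removes roughly 60% of the candidates.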

Spring and damping parameters are continuous variables. The following constraints are imposed to guarantee characteristics other than idling vibration.

• Front and rear stiffness for dynamic vibration and acceleration shock when the engine starts:

$$\sum_{i=1}^{3} k_{xi} = 35.0 \text{ [kgf/mm]} \tag{28}$$

• Left and right stiffness for handling and stability:

$$\sum_{i=1}^{3} k_{yi} = 45.0 \text{ [kgf/mm]} \tag{29}$$

• Up and down stiffness for engine shake:

$$\sum_{i=1}^{3} k_{zi} = 45.0 \text{ [kgf/mm]} \tag{30}$$

Here k_{di} (d = x, y, z; i = 1, 2, 3) is the stiffness parameter of mount i in direction d. All parameters have upper and lower limits as follows:

$$7.0 \le k_{xi} \le 21.0$$

$$9.0 \le k_{yi} \le 27.0 \tag{31}$$

$$9.0 \le k_{zi} \le 27.0$$

$$0.02 \le c_{di} \le 0.1 \ (d = x, y, z;\ i = 1, 2, 3)$$

The damping parameters are also constrained as follows:

$$\sum_{i=1}^{3} c_{di} = 0.1 \ (d = x, y, z) \tag{32}$$

The total number of design values is 21: mount locations (3), stiffness parameters in 3 directions (3 × 3 = 9) and damping parameters (3 × 3 = 9).

From Eqs. (28) and (31), the following are derived for d = x (the y and z directions are analogous, with totals of 45.0):

$$k_{x3} = 35.0 - (k_{x1} + k_{x2})$$

$$14.0 \le k_{x1} + k_{x2} \le 28.0 \tag{33}$$

This means that the 21 design value candidates and 6 linear equality constraints reduce to 15 design variables (as shown in **Table 2**) and 6 boundary values (as shown in **Table 3**).
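The variable elimination of Eq. (33) can be sketched for the x direction as below (the y and z directions are analogous with totals of 45.0 kgf/mm; the function name and return convention are illustrative):

```python
def kx3_from(kx1, kx2, total=35.0, lo=7.0, hi=21.0):
    """Eliminate the dependent x-direction stiffness via Eq. (28):
    kx3 = total - (kx1 + kx2); the design is feasible iff all three
    stiffnesses lie within the Eq. (31) bounds [lo, hi]."""
    kx3 = total - (kx1 + kx2)
    ok = all(lo <= k <= hi for k in (kx1, kx2, kx3))
    return kx3, ok
```

Requiring `ok` for every (kx1, kx2) pair reproduces the derived bound 14.0 ≤ kx1 + kx2 ≤ 28.0 of Eq. (33).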


#### **Table 2.**

*21 design variable candidates. Since there are 6 boundary conditions as shown in Table 3, 15 of them are design variables.*


**Table 3.**

*Boundary conditions.*

#### *5.1.2.2 Settings of objective function*

Idling vibration is estimated at 2 observation points on the floor where the occupant seats are installed. These points are the intersections between the left and right side sills and the cross member where the seat is installed. The vertical frequency responses of these 2 points are shown in **Figure 18**. Idling vibration occurs at engine speeds of 600–900 rpm, so it is estimated by the frequency response from 20 Hz to 30 Hz. As a result, the objective function is the sum of 2 frequency-integral functions over this range. The trapezoidal rule is used for integration with an integration interval of 0.2 Hz.

#### **Figure 18.**

*Acceleration response for frequency at observation point: Idling vibration occurs at engine speeds of 600–900 rpm, so it is estimated by the frequency response from 20 Hz to 30 Hz.*
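The objective of Sec. 5.1.2.2 — the sum of two trapezoidal-rule frequency integrals over 20–30 Hz at a 0.2 Hz step — can be sketched as follows, with callables standing in for the computed frequency responses of the two observation points:

```python
def idling_objective(resp1, resp2, f_lo=20.0, f_hi=30.0, df=0.2):
    """Sum of trapezoidal-rule integrals of the two observation-point
    frequency responses over [f_lo, f_hi] with step df.
    resp1/resp2 map frequency [Hz] -> response magnitude."""
    n = round((f_hi - f_lo) / df)
    freqs = [f_lo + i * df for i in range(n + 1)]

    def trapz(resp):
        ys = [resp(f) for f in freqs]
        # trapezoidal rule: full weight inside, half weight at the ends
        return df * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

    return trapz(resp1) + trapz(resp2)
```

In the actual optimization, `resp1` and `resp2` would be the frequency responses computed from the component-mode-synthesis model for the current mount design.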

#### *5.1.3 Optimization results*

First, response values are calculated at N initial sampling points, where N is the minimum number required for approximation with second-order polynomials: N = (n + 1)(n + 2)/2 (n is the number of design variables; here n = 15, so N = 136). Each sampling point is selected from the feasible region given by Eqs. (25), (26), (30) and **Tables 2** and **3**. Both *ɛ*1 and *ɛ*2 are set to 0.05, and *αi* is set to 1.5. Both GA and SQP (Sequential Quadratic Programming) are used to explore the optimal value on the approximated response surfaces.
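The sample count N above is just the number of coefficients in a full second-order polynomial (one constant, n linear and n(n + 1)/2 quadratic terms, including cross terms); a one-line check (the function name is illustrative):

```python
def min_samples_quadratic(n):
    """Minimum number of samples needed to fit a full second-order
    polynomial in n design variables: (n + 1)(n + 2) / 2 coefficients."""
    return (n + 1) * (n + 2) // 2
```

With n = 15 this gives 136, as stated above; with n = 2 it gives 6, matching the 6 initial learning points used for the two-variable problems of Sections 5.2 and 5.3.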

Three types of calculation are performed with respect to the initial point for the optimal analysis by SQP. In the first, the optimal point found by GA is selected. In the second, the previous optimal point is selected as the initial point. In the last, the best solution among the sample points is selected as the initial point. These 3 calculations are performed in each step. However, because the solution by SQP is continuous, at the last stage a discrete value is searched around the solution point. The optimal point of each step is the one with the least value among the three calculations.

#### *5.1.3.1 Comparison with GA*

To certify the effectiveness of MPOD, it is compared with GA. Here a general GA is selected, although many variants exist: a real-coded genetic algorithm with an elite preservation strategy, with a population size of 50 and 100 generations. **Table 4** shows the comparison between GA and MPOD.


Term A shows that the precision of the optimal value is almost the same. Term B compares the number of objective-function evaluations, which is fewer in MPOD than in GA. However, term C shows that the total time is shorter in GA than in MPOD. The calculation time of the objective function for one selected point is 0.23 sec. Term D lists the total calculation times for the objective functions. Term E shows that in MPOD the objective-function calculation accounts for about 13% of the total calculation time, while in GA it accounts for about 90%. In other words, the total calculation time in GA is more strongly influenced by the objective-function calculation time than in MPOD. On the other hand, the most time-consuming process in MPOD is searching for the optimal solution on the approximation function, which is updated at every step. In GA this process finishes within a few seconds, whereas SQP sometimes takes a few minutes for some initial values. Hence, in MPOD, how the optimal value is searched on the approximation function strongly influences the calculation time. For problems in which the objective function takes a long time to calculate, MPOD is very effective.


**Table 4.**

*Comparison between result of MPOD and one of GA.*


#### **5.2 Booming noise problem for simplified vehicle model**

#### *5.2.1 Modeling for optimal problem*

Here the booming noise of a wagon-type automobile is treated. This noise occurs from 40 to 60 Hz when the vehicle runs over a small protrusion.

This is the coupled acoustic-structural vibration problem [44–48].

**Figure 19** shows the simplified model of the vehicle system, which has average dimensions: width 1.5 m, length 3 m, height 1.2 m. The box shell represents the body and the inside of the box the interior space. The FEM model is generated such that the body has 1980 rectangular shell elements with a side length of 0.1 m and thickness of 1 mm, and the interior has 5400 cubic acoustic elements with a side length of 0.1 m. To treat the coupling between the fore-aft acoustic mode and the upper-panel structural mode, the 5 faces other than the upper panel are stiffened by multiplying their Young's modulus by 100,000, so that the 1st eigenfrequency of the acoustic system and the second panel bending mode are close. The modal damping ratio is 3.5% for both the structural and acoustic systems.

#### *5.2.2 Setting up optimization problems*

The problem is set as follows:

The objective function: the average of 8 frequency-integral values of the absolute sound pressure response at the observation points shown in **Figure 19**. The eight points are placed symmetrically, 1 m from the front or back panel in the Z direction, 0.3 m from the right or left panel in the Y direction and 0.3 m from the upper or bottom panel in the X direction. The integral of each absolute sound pressure response is calculated by the trapezoidal rule over 40 to 50 Hz divided into 200 intervals.

Design values: *X*1, *X*2. *X*1: the position coordinate of the roof bow on the roof panel,

0.5 ≤ *X*1 ≤ 2.5, 0.1 m increments, 21 discrete values

#### **Figure 19.**

*Simplified vehicle FE model: To treat the coupling between the fore-aft acoustic mode and the upper-panel structural mode, the 5 faces except the upper panel are stiffened by multiplying their Young's modulus by 100,000, so that the 1st eigenfrequency of the acoustic system and the second panel bending mode are close.*

*X*2: panel thickness, 0.6 ≤ *X*2 ≤ 1.8, 0.3 mm increments, 5 discrete values. Input force: sinusoidal forced displacement in the fore-aft direction with an amplitude of 0.2 mm, equivalent to the force generated when the vehicle runs over a small protrusion.
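The eight symmetric observation points of the objective function can be generated directly from the box dimensions; a sketch, in which the assignment of x/y/z to height/width/length follows the description above and is otherwise an assumption:

```python
from itertools import product

def observation_points(width=1.5, length=3.0, height=1.2,
                       dy=0.3, dz=1.0, dx=0.3):
    """The 8 symmetric observation points of Sec. 5.2.2: 1 m from the
    front/back panels (Z), 0.3 m from the left/right panels (Y) and
    0.3 m from the top/bottom panels (X)."""
    xs = (dx, height - dx)   # from bottom / top panels
    ys = (dy, width - dy)    # from right / left panels
    zs = (dz, length - dz)   # from front / back panels
    return list(product(xs, ys, zs))
```

The full factorial of the two offsets per axis gives exactly 2 × 2 × 2 = 8 distinct points, symmetric about the cabin center.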

#### *5.2.3 Optimization results*

Because the number of design values is 2, 6 initial learning points are selected by uniform random numbers while keeping every pairwise distance longer than *dmin*. Here *dmin* is 0.33, *ɛ*1 in Eq. (19) is 0.05, *ɛ*2 in Eq. (20) is 0.01 and *α*1 = *α*2 in Eq. (21) are 2.0. The distribution of learning points is shown in **Figure 20**. In step 1, the 6 points marked 1st and the 3 points marked 7th–9th are spread over the entire design space, while the 3 points marked 10th–12th in step 2 are concentrated in one part of the space. In step 1, after 3 repetitions, Eq. (19) is satisfied with *X*1 = 2.5 and *X*2 = 0.6, but Eq. (20) is not, so step 2 is repeated. The first optimal point in step 2 is *X*1 = 1.3 and *X*2 = 0.6, away from *X*1 = 2.5 and *X*2 = 0.6. Finally, Eq. (20) is satisfied with *X*1 = 1.5 and *X*2 = 0.9. This is because there are 2 design points with almost the same minimum value, one at *X*1 = 1.5 and the other at *X*1 = 2.5, so the optimal point moves between the 2 minima as the accuracy of the response surface improves. As a result, it converges to the optimal solution *X*1 = 1.5, *X*2 = 0.9 and Y = 95.59 with 12 learning points. In this way, the optimal value is found by repeating step 1, in which the global solution is searched, and step 2, in which the approximation precision of the response surface is improved. The approximate response surface is shown in **Figure 21**. Although, referring to **Figure 19**, the response surface should be symmetric with respect to *X*1 = 1.5, it is asymmetric because the learning points have a biased distribution. This shows that learning points unnecessary for finding the minimum can be omitted by concentrating learning points in the neighborhood of the optimal point.
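The *dmin*-spaced initial sampling described above can be sketched as rejection sampling: draw uniform candidates and keep only those at least *dmin* from every point already accepted. The bounds follow the X1/X2 ranges of this problem; the function name and retry cap are illustrative.

```python
import math
import random

def pick_learning_points(n_points, d_min=0.33,
                         bounds=((0.5, 2.5), (0.6, 1.8)),
                         seed=0, max_tries=10000):
    """Accept uniform random candidates only if they keep a minimum
    distance d_min to every previously accepted learning point."""
    rng = random.Random(seed)
    pts = []
    tries = 0
    while len(pts) < n_points and tries < max_tries:
        tries += 1
        cand = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        if all(math.dist(cand, p) >= d_min for p in pts):
            pts.append(cand)
    return pts
```

This guarantees the initial designs cover the space without clustering, while later step 2 points are deliberately allowed to concentrate near the current optimum.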

#### **Figure 20.**

*Learning points distribution in MPOD: In step 1, 6 points marked 1st and 3 points marked 7th–9th are set over entire design space. On the other hand, 3 points marked 10th–12th in step 2 are concentrated on one part of the space.*


#### **Figure 21.**

*Approximate response surface by MPOD: Although the response surface should be symmetric with respect to X*<sup>1</sup> *= 1.5, it is asymmetric because the learning points have a biased distribution. This shows that learning points unnecessary for finding the minimum can be omitted by concentrating learning points in the neighborhood of the optimal point.*

#### *5.2.4 Comparison between MPOD and experimental planning method using orthogonal table (EPO)*

As shown in **Figure 22**, 25 points at 5 levels are selected by quartering the range between the upper and lower design limits. **Figure 23** shows the response surface approximated by a 4th-order polynomial. The symmetry of the response surface with respect to *X*<sup>1</sup> = 1.5 is high because the design points are uniformly selected. It has 3 valleys

#### **Figure 22.**

*Designing point distribution in EPO: 25 points at 5 levels are selected by quartering upper and lower design limits.*

#### **Figure 23.**

*Approximate response surface by EPO: This response surface is approximated by a 4th-order polynomial. Its symmetry with respect to X*<sup>1</sup> *= 1.5 is high because the design points are uniformly selected.*

in the neighborhood of *X*<sup>1</sup> = 0.5, 1.5, 2.5 and 2 peaks in the neighborhood of *X*<sup>1</sup> = 1.0, 2.0. The gradient of the response surface with respect to *X*2 is much smaller than that with respect to *X*1. The optimal solution on this response surface is *X*<sup>1</sup> = 1.5, *X*<sup>2</sup> = 0.6 and Y = 94.36, slightly different from *X*<sup>1</sup> = 1.5, *X*<sup>2</sup> = 0.9 and Y = 95.59 found by MPOD. The reason for this difference is that in EPO the trial points must be placed uniformly, whereas in MPOD they are placed adaptively, such that the distance between any 2 points is longer than *dmin*. The minimum point found by EPO lies within a region of radius *dmin* = 0.33, which is excluded in this problem.
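The 5-level EPO design of **Figure 22** can be reproduced by quartering each design range and taking the full factorial of the two factors; a sketch (function names are illustrative):

```python
def epo_levels(lo, hi, n_levels=5):
    """Quarter the design range [lo, hi] into n_levels evenly spaced
    values (Sec. 5.2.4: 5 levels between the design limits)."""
    step = (hi - lo) / (n_levels - 1)
    return [lo + i * step for i in range(n_levels)]

def epo_design_points(b1=(0.5, 2.5), b2=(0.6, 1.8)):
    """Full factorial of the two 5-level factors: the 25 EPO trial points."""
    return [(x1, x2) for x1 in epo_levels(*b1) for x2 in epo_levels(*b2)]
```

Unlike the adaptive MPOD sampling, this grid is fixed in advance, which is why its spacing cannot adapt to regions finer than one level step.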

#### **5.3 Booming noise problem for real vehicle model**

#### *5.3.1 Modeling for optimal problem*

Following the previous section, MPOD is also applied to the coupled booming noise and structural problem of a real wagon-type vehicle, on which a dynamic damper is to be installed. **Figure 24** shows an overview of the vehicle FEM model, in which the structural system is modeled with shell elements, the acoustic

#### **Figure 24.**

*(a) Structural FE model for trimmed body. (b) Acoustic FE model of cabin. Overview of vehicle FEM model: The structural system is modeled with shell elements, the acoustic system is modeled with solid elements and the dynamic damper is modeled with lumped mass and one degree of freedom scalar spring.*

system is modeled with solid elements and the dynamic damper is modeled with lumped mass and one degree of freedom scalar spring.

#### *5.3.2 Setting up optimization problems*

Let us set up the objective function Y and the design variables X1, X2 as follows. Y: minimize the maximum SPL (Sound Pressure Level) between 30 and 60 Hz at the observation point ●, set at ear height at the center of the front seat, under a fore-aft unit input loaded at the connection point of the arm to the front suspension so as to give in-phase vibration on the left and right. This load is generated when the vehicle runs over a small protrusion.

X1: the position of the damper, selected among the 22 candidates from ① to ㉒ in **Figure 25**.

X2: the spring constant, selected among 5 candidates:

7.0, 9.0, 11.0, 13.0, 15.0 kgf/mm. The weight and the structural damping coefficient are held constant at 1.5 kg and 0.1, respectively.

#### *5.3.3 Optimization results*

For the convergence judgment of the optimum, ε<sup>1</sup> is 0.05 and ε<sup>2</sup> is 0.01, and the control coefficients α<sup>1</sup> and α<sup>2</sup> are 2.0. From the approximate response surface with the initial 6 points, the 1st optimum is X1=16, X2=11.0, Y=0.659. The 2nd optimum is X1=15, X2=7.0, Y=0.567 after adding the 7th and 8th points. Step 1 is terminated because the 3rd optimum, after adding the 9th and 10th points, is X1=15, X2=7.0, the same as the previous one. The procedure finishes when the optimal solution X1=14, X2=7.0, Y=0.556 is obtained by adding the 11th point, which satisfies Eq. (20). The learning point distribution is shown in **Figure 26** and the response surface approximated with the 11 learning points in **Figure 27**. This means the minimal solution over the whole design region is obtained with very few learning points compared with the 110 total design points. As shown in **Figure 28**, the objective function Y is reduced by 6 dB around 45 Hz by the optimal analysis. As shown in **Figure 29**, the coupled acoustic-structural mode at 44.3 Hz gives elastic deformation of the entire car body, induced by the deformation of the upper panel and rear floor. Moreover, the acoustic system exhibits its 1st mode, whose node is slightly rearward of the center of the passenger cabin and whose anti-nodes are at the front and back ends of the cabin. **Figure 30** shows the difference in panel vibration acceleration along the center line of the vehicle body with the optimally arranged damper and without a damper.

**Figure 25.**

*Candidates of dynamic damper location: The position of the damper is selected among 22 candidates from* <sup>➀</sup> *to* <sup>㉒</sup>*.*

#### **Figure 26.**

*Learning points distribution: The minimal solution over the total design region is gained with very few learning points compared with total 110 design points.*

**Figure 27.** *Response surface by 11 learning points.*

From **Figure 30**, the effect of the optimal damper arrangement can be interpreted as a vibration-absorbing effect on the global coupled acoustic-structural mode at 44.3 Hz rather than local vibration reduction around the front roof rail.

To certify the effect of the optimal damper arrangement, a bench test using a smooth drum is performed with the optimally arranged damper and without a damper. The noise level-frequency relationship is shown in **Figure 31**. A maximum SPL reduction of approximately 5 dB is realized between 45 and 50 Hz. Although finding the optimal arrangement of a dynamic damper by experimental trial and error requires many man-hours, it has been certified that booming noise reduction can be realized efficiently by MPOD.


**Figure 28.** *Estimation of optimized dynamic damper effect: The objective function Y is reduced by 6 dB around 45 Hz by the optimal analysis.*

#### **5.4 Vehicle crash problem**

#### *5.4.1 Setting up optimization problems*

The vehicle side member (the component parallel to the central axis of the vehicle) plays a big role in energy absorption during a frontal collision, and the energy absorption of the vehicle is determined by its size, shape and welding [49–53]. In this example, a perfectly straight side member with a uniform hollow square cross section and uniform thickness is investigated, and reinforcement of the component is considered as a way to increase the energy absorption. The size and material properties of the member are *ρ* = 7.85E-3 g/mm³, *E* = 210 GPa, *Et* = 2.5 GPa, *σ<sup>y</sup>* = 220 MPa, *v* = 0.3, and the form of reinforcement is depicted in **Figure 32**.

All degrees of freedom at the bottom end are rigidly fixed, and at the top end a rigid mass of 500 kg with a velocity of 54 km/h is used as the load to simulate a crash. The load–displacement behavior of the member during crushing is calculated by the FEM solver LS-DYNA3D [54], in which 2700 shell elements and 2754 nodes are used. The design variables are the thickness of the base plate (*t*1), the upper plate (*t*2) and the location of the reinforcement (*z*). The total weight of the component is constant (w = 780 g); therefore, only two design variables are independent. The objective function is the energy absorption of the component 10 ms after the crash. The mathematical definition of the problem is as follows:

$$\max f(t_1, t_2, z)$$

$$\text{subject to: } 0.5 \le t_1, t_2 \le 2.0, \quad 1.5 \le z \le 250$$

$$t_2 = 1.5 + (1 - t_1)/2.0$$

#### *5.4.2 Optimization results*

The training procedure and end thresholds are identical to those of the large-deviation case, that is, *α* = 5, *β* = 0.6 and *ε*1 = 0.15, *ε*2 = 0.01. Step 1 is terminated after four iterations (corresponding to 12 functional evaluations) of random seed searches. Then, Eq. (20) is satisfied after two iterations and the total number of functional evaluations until convergence is 14. The approximate optimum is (1.21, 1.39, 160), f(*xop*(*k*)) = 8484.11 kJ, which is 11.6% higher than that of the original design

#### **Figure 29.**

*Fluid–structure coupled eigen mode (44.3 Hz): The coupled acoustic-structural mode at 44.3 Hz gives elastic deformation of the entire car body, induced by the deformation of the upper panel and rear floor.*

(1.00, 1.50, 0.0). The contour plot of the approximation function is depicted in **Figure 33**. In the figure, the open circle ○ indicates the optimal design by the proposed MPOD method and the plus signs + denote the designs used for approximation. A comparison between the approximation function and the FEM values, with *t*1 = 1.21 and *t*2 = 1.39 fixed while the reinforcement location *z* is varied, is plotted in **Figure 34**. It can be seen from the figure, as also mentioned in Refs. [49–53], that the FEM value fluctuates because of the strong nonlinearity of crashworthiness problems. A smooth and robust approximation function is obtained by the HNN approximation and an optimal design is realized successfully. The MPOD


**Figure 30.**

*Acceleration of vehicle body panel (44.3 Hz): The optimal arrangement of the damper gives a vibration-absorbing effect for the global coupled acoustic-structural mode at 44.3 Hz rather than local vibration reduction around the front roof rail.*

#### **Figure 31.**

*Validation of optimized dynamic damper effect: A maximum SPL reduction of approximately 5 dB is realized between 45 and 50 Hz.*

#### **Figure 32.**

*Analysis model of crashworthiness optimization: The member is perfectly straight with a uniform hollow square cross section and uniform thickness.*

#### **Figure 33.**

*Approximation function of the present method: The open circle* ◯ *indicates the optimal design using the proposed MPOD method and the plus sign+ expresses the designs used for approximation.*


**Figure 34.**

*Comparison of FEM and HNN when design variable z is changed: The FEM value fluctuates because of the strong nonlinearity of crashworthiness problems. A smooth and robust approximation function is obtained by the HNN approximation and an optimal design is realized successfully.*

approach based on the HNN has a robust property against calculation noise. However, the deviation of the coefficient is dependent on the design distribution. This topic will be discussed further in future works.

#### **5.5 Combination problem between vehicle crashworthiness and restraint device performance at the head-on collision**

#### *5.5.1 Setting up optimization problems*

Here the problem is a head-on collision in which the vehicle collides perpendicularly with a concrete wall. In this phenomenon, the vehicle stops moving after the crash energy has been completely absorbed by the vehicle structure [55]. During this event, the occupant moves a distance L + I in the axial direction, where L is the amount of vehicle crush and I is the amount of occupant movement in the cabin. The longer L + I is, the smaller the average deceleration of the occupant, so it is desirable to increase L + I. On the other hand, if the crush of the vehicle front end is too large, the possible occupant movement becomes smaller, and if the occupant moves too far toward the vehicle front, contact between the occupant and the steering wheel, etc., may occur. Thus, the most important issue is to obtain the optimal balance between vehicle crashworthiness and occupant restraint performance, and a feasibility study on this is started here. Conventionally, 4 representative patterns have been considered the best vehicle performance for the occupant. As shown in **Figure 35**, these 4 patterns of vehicle crush performance refer to the standard values generated when small passenger cars collide perpendicularly with a concrete wall at 35 mph, based on the criterion of NHTSA (National Highway Traffic Safety Administration). The vehicle design is selected from among these 4 crush performances, which means the vehicle design value is an integer. As for the occupant restraint, it is supposed

**Figure 35.**

*Type of deceleration curve of vehicle: These 4 patterns of vehicle crush performance refer to the standard values generated when small passenger cars collide perpendicularly with a concrete wall at 35 mph, based on the criterion of NHTSA.*

that there is no contact between the occupant and the steering wheel for simplicity, so only the seat belt is considered. For the design values of the seat belt, the magnification *x*1 of the standard seat belt Young's modulus of 2.5 GPa and the magnification *x*2 of the standard working load of 3000 N of the load limiter are selected. Here the load limiter is a device that relieves the impact on the chest by paying out the seat belt at constant load once the load applied to the seat belt retractor reaches the working load, as shown in **Figure 36**. The payout amount of the load limiter is 100 mm. **Figure 37** shows the analysis model of the dummy system, which consists of the dummy, seat belt and sled. The dummy is a Hybrid III of average adult male size. The total number of nodal points is 3700. The number of shell elements is 3203 for the dummy and 116 for the sled. The seat belt is basically modeled with 58 beams, but the parts of the belt over the chest and waist are modeled with 192 shell elements so that the contact between the dummy and seat belt can be treated precisely. The dummy and sled are rigid and the seat belt is elastic. Contact by the penalty method is defined for the friction between the dummy and seat, with a friction coefficient of 0.2. The seat model is comparatively coarse because it only provides the in-plane force for friction, as shown in the figure. The objective function is Eq. (35), using the HIC of Eq. (34) and the chest G generated by a head-on collision at a crash velocity of 35 mph, referring to the NHTSA criterion.

$$\text{HIC} = \left[\frac{1}{t_2 - t_1} \int_{t_1}^{t_2} a \, dt\right]^{2.5} (t_2 - t_1) \tag{34}$$

$$\min f(n, x_1, x_2) = \sqrt{\left(\frac{\text{HIC} - 200}{200}\right)^2 + \left(\frac{\text{Chest G} - 20}{10}\right)^2} \tag{35}$$


#### **Figure 36.**

*Safety belt pull-out vs. load of load limiter: The load limiter is a device that relieves the impact on the chest by paying out the seat belt at constant load once the load applied to the seat belt retractor reaches the working load.*

#### **Figure 37.**

*Hybrid III dummy and sled FEM model: the analysis model of the dummy system, consisting of the dummy, the seat belt and the sled.*

$$\begin{aligned} \text{Subject to } & 1 \le n \le 4\\ & 0.7 \le x_1 \le 1.3 \\ & 0.5 \le x_2 \le 1.5 \\ & H_{disp}(n, x_1, x_2) \le 420\ \text{mm} \end{aligned} \tag{36}$$

#### **Figure 38.**

*Safety criteria of NHTSA: the optimal design point is n=3, x*<sub>1</sub>*=0.70, x*<sub>2</sub>*=1.20; the value of the objective function is 1.34 (HIC=355.8, chest G=30.9), rated 5☆. The other local optimal design point is n=3, x*<sub>1</sub>*=1.30, x*<sub>2</sub>*=1.15; its objective function is 3.12 (HIC=620.2, chest G=43.1), rated 4☆.*

#### **Figure 39.**

*Contour of the approximated response surface at n=3: A is the obtained optimal point. Although all trial points are given within the region, A is on the boundary. This means the optimal point A could never be found without the MPOD's intrinsic extrapolation ability.*


Here, *a* in Eq. (34) is the deceleration of the head. The unit of deceleration for both HIC and chest G is g (1 g = 9.8 m/s<sup>2</sup>). Chest G is the maximum deceleration sustained continuously for longer than 3 ms.

The objective function in Eq. (35) is rated on a 5-point scale. Here *H<sub>disp</sub>* in Eq. (36) is the forward displacement of the head; it must be kept below 420 mm so that the occupant does not strike the steering wheel.
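As a concrete reading of Eqs. (34) and (35), the sketch below computes HIC over candidate time windows, the 3 ms chest G, and the rating distance of Eq. (35) from a sampled deceleration history. It is an illustrative implementation only: the 36 ms window cap is a common convention not stated in the chapter, and all function names are my own.

```python
import numpy as np

def hic(t, a, max_window=0.036):
    """Eq. (34) maximized over windows [t1, t2]; the 36 ms cap is a
    common convention (an assumption). t in seconds, a in g (>= 0)."""
    # cumulative trapezoidal integral of a(t) for fast window averages
    ca = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
    best, n = 0.0, len(t)
    for i in range(n - 1):
        for j in range(i + 1, n):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (ca[j] - ca[i]) / dt          # mean deceleration over window
            best = max(best, avg ** 2.5 * dt)   # [mean]^2.5 * (t2 - t1)
    return best

def chest_3ms_g(t, a, window=0.003):
    """Chest G: largest deceleration level exceeded continuously >= 3 ms,
    taken as the max over windows of the in-window minimum."""
    best, n = 0.0, len(t)
    for i in range(n):
        j = i
        while j < n and t[j] - t[i] < window:
            j += 1
        if j < n:
            best = max(best, min(a[i:j + 1]))
    return best

def objective(hic_val, chest_g):
    """Eq. (35): distance from the target point (HIC=200, chest G=20)."""
    return np.sqrt(((hic_val - 200) / 200) ** 2 + ((chest_g - 20) / 10) ** 2)
```

Plugging the chapter's reported optimum (HIC=355.8, chest G=30.9) into `objective` reproduces the stated value 1.34.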

#### *5.5.2 Optimization results*

Six points are selected as the initial learning points, with *ɛ*<sub>1</sub> = 0.02 in Eq. (19), *ɛ*<sub>2</sub> = 0.05 in Eq. (20), and *α*<sub>1</sub> = *α*<sub>2</sub> = *α*<sub>3</sub> = 1.0 in Eq. (21). Step 1 terminates after 5 iterations using a total of 22 points. Step 2 terminates after 1 iteration using 2 more points, so 24 points are used in all. The optimal design point is n=3, *x*<sub>1</sub> = 0.70, *x*<sub>2</sub> = 1.20; the value of the objective function is 1.34 (HIC=355.8, chest G=30.9), which is rated 5*☆* in **Figure 38**. The other local optimal design point is n=3, *x*<sub>1</sub> = 1.30, *x*<sub>2</sub> = 1.15, whose objective function is 3.12 (HIC=620.2, chest G=43.1); this corresponds to 4*☆* in **Figure 38**. **Figure 39** shows the contour of the objective function at n=3. In this figure, A is the obtained optimal point, B is the local optimal point, and the shaded part marks the region where no feasible design exists. Although all trial points are given within the region, A is on the boundary.
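The iterative procedure above (fit the surface, locate its optimum, evaluate it with the expensive simulation, repeat until the design stops moving) can be sketched generically. This is not the chapter's MPOD implementation: Eqs. (19)–(21) are not reproduced here, `eps1` merely stands in for a convergence tolerance, and `evaluate`, `fit_surface`, and `optimize_surface` are hypothetical placeholders.

```python
import numpy as np

def sequential_rsm(evaluate, fit_surface, optimize_surface,
                   x_init, eps1=0.02, max_iter=20):
    """Generic sequential response-surface loop (a sketch, not MPOD).
    evaluate: expensive simulation; fit_surface: builds an approximation
    from samples; optimize_surface: returns the surface's optimum."""
    X = [np.asarray(x, dtype=float) for x in x_init]  # initial learning points
    y = [evaluate(x) for x in X]
    x_prev = None
    for _ in range(max_iter):
        surface = fit_surface(X, y)            # refit the approximation
        x_star = optimize_surface(surface)     # optimum on the surface
        X.append(x_star)
        y.append(evaluate(x_star))             # verify with the true model
        if x_prev is not None and \
           np.linalg.norm(np.atleast_1d(x_star - x_prev)) < eps1:
            break                              # design stopped moving
        x_prev = x_star
    return x_star, min(y), len(X)
```

With an exact surrogate (e.g. a quadratic fit of quadratic data), the loop converges in two iterations, mirroring how few true evaluations the chapter's Steps 1 and 2 consume (24 in total).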

#### **Figure 40.**

*Motion behavior of the dummy model at the optimal design: the HIC reaches its maximum when the jaw contacts the chest around 100 ms, but the maximum HIC is rather small because that contact is loosened around 100 ms by the operation of the load limiter.*

This means the optimal point A could never be found without the MPOD's intrinsic extrapolation ability. Such a result can never be obtained with conventional interpolation alone unless trial points are also placed on the boundary; in this sense the method is very efficient. **Figure 40** shows the motion behavior of the dummy model at the optimal design. The HIC reaches its maximum when the jaw contacts the chest around 100 ms, as shown in the figure, but the maximum HIC is rather small because that contact is loosened around 100 ms by the operation of the load limiter. Generally speaking, most studies of response surface optimization have treated single-peak problems. MPOD, on the other hand, delivers plural optimal combinations simultaneously, without prior knowledge of how many maxima exist or where they lie. A present-day vehicle carries an airbag in addition to the seat belt as a restraint device, so the optimal combination problem between the vehicle and its restraint devices has even more local maxima than the problem treated here. This means the best combination of vehicle performance and restraint device performance has not yet been realized, and only the MPOD can be applied to such a multi-peak optimal problem.
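MPOD's surface construction relies on the HNN and is not reproduced here. Purely as an illustration of the idea of harvesting plural local optima by restarting a local search from random seeds, a minimal multi-start sketch (not the author's MPOD; all names are hypothetical) might look like:

```python
import numpy as np

def local_descent(f, x0, lo, hi, step=0.1, shrink=0.5, tol=1e-4):
    """Compass (coordinate pattern) search on a box; a stand-in for
    whatever local optimizer is run on the approximated surface."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = x.copy()
                y[i] = np.clip(y[i] + d, lo[i], hi[i])
                fy = f(y)
                if fy < fx - 1e-12:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink          # no axis move helps: refine the step
    return x, fx

def multi_start_optima(f, lo, hi, n_starts=25, merge_tol=0.05, seed=0):
    """Restart the local search from random seeds and merge results
    that land at the same point, keeping every distinct local optimum."""
    rng = np.random.default_rng(seed)
    optima = []
    for _ in range(n_starts):
        x0 = lo + rng.random(len(lo)) * (hi - lo)
        x, fx = local_descent(f, x0, lo, hi)
        if all(np.linalg.norm(x - xo) > merge_tol for xo, _ in optima):
            optima.append((x, fx))
    return sorted(optima, key=lambda p: p[1])   # best (smallest f) first
```

On a two-well test surface this returns both minima at once, which is the behavior the text attributes to MPOD: plural optima without knowing in advance how many peaks exist.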

## **6. Conclusion**

Here, a novel method is presented for finding the global optimal design within a feasible design space using response surface methodology with HNN activation. A sequential approximation procedure and the extrapolation capability of the HNN are proposed for irregular design spaces and arbitrary designs. The detailed summaries of the issues raised in this paper are as follows.



While Step 1 searches the whole design space with fewer designs, Step 2 focuses on increasing and confirming the approximation accuracy near the optimal design determined in Step 1. The proposed sequential optimal design method can be applied to design problems with regular and irregular boundaries of the design space, thanks to the benefit of extrapolation.


#### **Acknowledgements**

This research has been carried out with many co-authors. In particular, the author thanks Dr. Shi, Q., Dr. Kamada, Y., Dr. Tokura, S., Mr. Takasima, F. and Mr. Fukushima, H.

### **Author details**

Ichiro Hagiwara

Research and Intellectual Property Strategic Organization, Meiji Institute for Advanced Study of Mathematical Sciences (MIMS), Meiji University's Institute of Autonomous Driving (MIAD), Tokyo, Japan

\*Address all correspondence to: ihagi@meiji.ac.jp

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **References**

[1] Benaouali A, Stanisław KS. Multidisciplinary design optimization of aircraft wing using commercial software integration. Aerospace Science and Technology. September 2019;**92**:766-776

[2] Ma, Z.D. and Hagiwara, I., Sensitivity Analysis Methods for Coupled Acoustic-Structural Systems Part I: Modal Sensitivities, AIAA Journal, Volume 29, Number 11 (1991-11), pp. 1787-1795.

[3] Ma, Z.D. and Hagiwara, I., Sensitivity Analysis Methods for Coupled Acoustic-Structural Systems Part II: Direct Frequency Response and its Sensitivities, AIAA Journal, Volume 29, Number 11 (1991-11), pp. 1796-1801.

[4] Ma, Z.D. and Hagiwara, I., Sensitivity Calculation Methods for Conducting Modal Frequency Response Analysis of Coupled Acoustic-Structural Systems, JSME International Journal Series III, Vol. 35, No. 1(1992-3), pp. 14-21.

[5] Hagiwara, I., Ma, Z.D., Arai, A. and Nagabuchi, K., Reduction of Vehicle Interior Noise Using Structural-Acoustic Sensitivity Analysis Methods, SAE 910208, Transaction Section 6 (1992-4), pp. 267-276.

[6] Hagiwara, I. and Ma, Z.D., Development of Eigen mode and Frequency Response Sensitivity Analysis Methods for Coupled Acoustic-Structural Systems, JSME International Journal Series III, Vol. 35,No. 2(1992-6), pp. 229-235.

[7] Hagiwara, I., Ma, Z.D., Arai, A. and Nagabuchi, K., Reducing Noise by Structural-Acoustic Sensitivity Analysis, Automotive Technology International '92 (1992-7), pp. 339-342.

[8] Hagiwara, I., Kozukue, W. and Kitagawa, Y., Advanced Sensitivity Analysis Techniques for Improving Automobile Noise and Vibration and Crashworthiness Characteristics, Design-Sensitivity Analysis, ATLANTA TECHNOLOGY PUBLICATIONS, Kleiber, M. and Hisada, T. ed. (1993-7).

[9] Hagiwara, I., Nagabuchi, K. and Arai, A., Study of Structure Identification Method using Sensitivity Analysis for Vibration, Finite Elements in Analysis and Design Elsevier, Vol. 14,No. 2 and 3 (1993-10), pp. 111-126.

[10] Pal, C. M. and Hagiwara, I., Development of Eigen mode Sensitivity Analysis Methods for Coupled Acoustic-Structural Systems and Its Application to Vehicle Interior Noise Problems, Finite Elements in Analysis and Design Elsevier, Vol. 14, No. 2 and 3(1993-10), pp. 225-234.

[11] Hagiwara, I., Kozukue, W. and Ma, Z.D., Development of Eigen mode Sensitivity Analysis Methods for Coupled Acoustic-Structural Systems and Its Application to Reduction of Vehicle Interior Noise, Finite Elements in Analysis and Design, Elsevier, Vol. 14, No. 2 and 3 (1993-10), pp. 235-248.

[12] Ma, Z.D. and Hagiwara, I., Development of a New Mode-Superposition Technique for Truncating-Lower and/or Higher-Frequency Modes (Application of Eigen mode Sensitivity Analysis Method for Systems with Repeated Eigenvalues), JSME International Journal Series C, Vol. 37,No. 1(1994-3), pp. 7-13.

[13] Hagiwara, I. and Ma, Z.D., Development of a New Mode-Superposition Technique for Truncating Lower- and/or Higher-Frequency Modes (Application for Eigen mode Sensitivity Analysis) JSME International Journal Series C, Vol. 37,No. 1 (1994-3), pp. 14-20.


[14] Pal, C. M. and Hagiwara, I., Optimization of Noise Level Reduction by Truncated Modal Coupled Structural-Acoustic Sensitivity Analysis, JSME International Journal Series C, Vol. 37,No. 2 (1994-6) pp. 246-251.

[15] Kozukue W, Hagiwara I. Development of sound pressure level integral sensitivity and its application to vehicle interior noise reduction. Engineering Computations. 1996;**13**(5): 91-107

[16] White, K.P. Jr., Simulation Optimization of the Crashworthiness of a Passenger Vehicle in Frontal Collision Using Response Surface Methodology, SAE Transaction Sec. 3 (1985), pp. 798-811.

[17] Giunta, A.A., Noise Aerodynamic Response and Smooth Approximations in HSCT Design, AIAA-94-4376-CP, (1994), pp. 1117-1128.

[18] Şahin S, Samli R, Tan AS, Barba FJ, Chemat F, Cravotto G, et al. Solvent-free microwave-assisted extraction of polyphenols from olive tree leaves: Antioxidant and antimicrobial properties. Molecules. 2017;**7**:1054

[19] Guglielmetti A, Ghirardello D, Belviso S, Zeppa G. Optimisation of ultrasound and microwave-assisted extraction of caffeoylquinic acids and caffeine from coffee silverskin using response surface methodology. Italian Journal of Food Science. 2017;**29**:409-423

[20] Aydar, A.Y., Bağdatlıoğlu, N. and Köseoğlu, O., Effect of ultrasound on olive oil extraction and optimization of ultrasound-assisted extraction of extra virgin olive oil by response surface methodology (RSM), Grasas y Aceites, 2017, 68.

[21] Elksibi I, Haddar W, Ticha MB, Mhenni MF. Development and optimisation of a non conventional extraction process of natural dye from olive solid waste using response surface methodology (RSM). Food Chemistry. 2016;**161**:345-352

[22] Agcam E, Akyıldız A, Balasubramaniam VM. Optimization of anthocyanins extraction from black carrot pomace with thermosonication. Food Chemistry. 2017;**237**:461-470

[23] Espínola F, Moya M, Fernández DG, Castro E. Modelling of virgin olive oil extraction using response surface methodology. International Journal of Food Science and Technology. 2011;**46**: 2576-2583

[24] Ghasemzadeh A, Jaafar HZ, Karimi E, Rahmat A. Optimization of ultrasound-assisted extraction of flavonoid compounds and their pharmaceutical activity from curry leaf (Murraya koenigii L.) using response surface methodology. BMC Complementary and Alternative Medicine. 2014;**14**:318

[25] Yilmaz, T. and Tavman, Ş., Ultrasound assisted extraction of polysaccharides from hazelnut skin. Food Science and Technology International. 2015;**22**(2):112-121

[26] Hornik, K. et al., Multilayer feedforward networks are universal approximators. Neural Networks. 1989;**2**:359-366

[27] Carpenter, W.C. and Barthelemy, J.M., A Comparison of Approximations and Artificial Neural Nets as Response Surfaces, AIAA-92-2247-CP, (1992), pp. 2474-2482.

[28] Sutherland, J.G., The holographic model of memory, learning and expression. International Journal of Neural Systems. 1990;**1**:259-267

[29] Hagiwara, I., Shi, Q. and Kozukue, W., Development of Nonlinear Dynamic Optimization Method Using the Holographic Neural Network, Trans. Jpn. Soc. Mech. Eng. (in Japanese), Vol. 63, No. 616A (1997), pp. 2510-2517.

[30] Hagiwara, I., Shi, Q. and Takashima, F., Development of Crashworthiness Optimization Method Using the Neural Network (2nd Report, Application to Vehicle Component), Trans. Jpn. Soc. Mech. Eng. (in Japanese), Vol. 64, No. 626A (1998), pp. 2241-2247.

[31] Ugwele, F.O., Aninwede, C.S., Chime, T.O., Christian, O.A. and Innocent, S., Application of response surface methodology in optimizing the process conditions for the regeneration of used mobil oil using different kinds of acids, Heliyon, 6(10) (2020 Oct), e05062. doi:10.1016/j.heliyon.2020.e05062

[32] Hinton, G.E., Krizhevsky, A., Srivastava, N., Sutskever, I. and Salakhutdinov, R., Dropout: A Simple Way to Prevent Neural Networks from Overfitting, Journal of Machine Learning Research, 15 (2014), pp. 1929-1958.

[33] Ciric, A., Krajnc, B., Heath, D. and Ogrinc, N., Response surface methodology and artificial neural network approach for the optimization of ultrasound-assisted extraction of polyphenols from garlic, Food and Chemical Toxicology, Volume 135, January 2020, 110976.

[34] Ilomuanya, M.O., Onwubuya, C.P. and Amenaghawon, A.N., Development and optimization of antioxidant polyherbal cream using artificial neural network aided response surface methodology, Journal of Pharmaceutical Technology, e-ISSN: 2717-7904, https://doi.org/10.37662/jpt.2020.6, pp. 46-53.

[35] Ilomuanya, M.O., Amenaghawon, N.A., Odimegwu, J., Okubanjo, O.O., Aghaizu, C. and Oluwatobiloba, A., Formulation and optimization of gentamicin hydrogel infused with *Tetracarpidium conophorum* extract via central composite design for topical delivery. Turk J Pharm Sci. 2018;**15**(3):319-327

[36] Shi, Q., Hagiwara, I., Azetsu, S. and Ichikawa, T., Holographic Neural Network Approximations for Acoustics Optimization, JSAE Review, Japan Society of Automobile Engineering, Vol. 19 No. 4 (1998-10), pp. 361-363.

[37] Shi, Q. and Hagiwara, I., Optimal Design Method Using Holographic Neural Network's Approximation to Automobile Problems, Japan Journal of Industrial and Applied Mathematics (JJIAM), Vol. 17, No. 3 (2000), pp. 240-247.

[38] Shi Q, Hagiwara I, Takashima F. Global optimization to multiple local optima with response surface approximation methodology. JSME International Journal Series A. 2001; **44**(1):175-184

[39] Fukushima H, Kamada Y, Hagiwara I. Optimum engine mounting layout using MPOD. Transactions of the Japan Society of Mechanical Engineers, Series C. 2004;**70**(689):54-61 (in Japanese)

[40] Sakai, T., Iwahara, M., Shirai, Y. and Hagiwara, I., Optimum Design of Engine Mounting for Heavy Duty Vehicle (5th Report, Optimum Engine Mounting Layout by Genetic Algorithm), Transactions of the Japan Society of Mechanical Engineers, Series C. 2001;**63**(664):3815-3822 (in Japanese)

[41] Arai T, Kubozuka T, Gray SD. Development of an engine mount optimization method using modal parameters. SAE Technical Paper 932898. 1993

[42] Ishihama, M., Satoh, S., Seto, K. and Nagamatsu, A., Vehicle vibration reduction by transfer function phase control on hydraulic engine mounts. JSME International Journal Series C. 1994;**37**(3):536-541


[43] Ichikawa, T. and Hagiwara, I., Frequency Response Analysis of Large-Scale Damped Structures Using Component Mode Synthesis, JSME International Journal Series III, Vol. 39, No. 3 (1996-9), pp. 450-455.

[44] Ma, Z.D. and Hagiwara, I., Development of New Mode-Superposition Technique for Modal Frequency Response Analysis of Coupled Acoustic-Structural Systems, Finite Elements in Analysis and Design, Elsevier, Vol. 14, No. 2 and 3 (1993-10), pp. 209-223.

[45] Yashiro, H., Suzuki, K., Kajio, Y., Hagiwara, I. and Arai, A., An Application of Structural-Acoustics to Car Body Structure, SAE 1985 Transactions, Section 4, pp. 777-784.

[46] Hagiwara, I., Advanced Structural Analysis Techniques for Improving Automobile Crashworthiness and Noise and Vibration Characteristic, The 40th Proceedings of Theoretical and Applied Mechanics (1991-6), pp 11-20.

[47] Ma, Z.D. and Hagiwara, I., Improved Mode-Superposition Technique for Modal Frequency Response Analysis of Coupled Acoustic-Structural Systems, AIAA Journal, Volume 29, Number 10 (1991-10), pp. 1720-1726.

[48] Li, D., Douglas, C.C., Kako, T., Suzuki, M. and Hagiwara, I., A Novel Perturbation Expansion Method for Coupled System of Acoustics and Structure, Computers and Mathematics with Applications (2006), pp. 1689-1704.

[49] Hagiwara, I., Tsuda, M., Sato, Y. and Kitagawa, Y., Simulation of Automobile Side Member Collapse for Crash Energy Management, The International Journal of Supercomputer Applications, Vol. 4, No. 2 (1990 Summer), pp. 107-114.

[50] Hagiwara, I., Tsuda, M. and Sato, Y., Dynamic Analysis of Thin-Walled Box Columns Subjected to Axial Crushing Using Finite Element Method, JSME International Journal Series I, Vol. 33, No. 3 (1990-10), pp. 444-452.

[51] Kitagawa, Y., Hagiwara, I. and Tsuda, M., Development of a Collapse Mode Control Method for Side Members in Vehicle Collisions, SAE 910809, Transaction Section 6 (1992-4), pp. 1101-1107.

[52] Kitagawa, Y., Hagiwara, I. and Tsuda, M., Dynamic Analysis of Thin-Walled Columns with Arbitrary Section Geometry Subjected to Axial Crushing, JSME International Journal Series I, Vol. 35,No. 2,(1992-4), pp. 189-193.

[53] Hagiwara, I., Satoh, Y. and Tsuda, M., Study of an Analytical System for Conducting Vehicle Crash Simulations, International Journal of Vehicle Design, Vol. 11, No. 6 (1990-6), pp. 564-577.

[54] LS-DYNA (1997), Livermore Software Technology Corporation.

[55] Shi, Q., Hagiwara, I., Takashima, F. and Tokura, S., A Study on Deformation Behavior of Vehicle Cabin and safety Belt Using a Most Probable Optimal Design Method, Transactions of the Society of Automotive Engineers of Japan, Vol. 31,No. 3(2000-7), pp. 107-113 (in Japanese).

