**Abstract**

There is an increasing demand for performance optimization under reliability constraints in various engineering problems, commonly known as reliability-based design optimization (RBDO) problems. Among the different RBDO frameworks, decoupled methods are widely accepted for their high efficiency and stability. However, their computational performance degrades on problems with high nonlinearity and non-normally distributed random variables. In this study, a new efficient decoupled method with a two-level quantile-based sampling strategy is presented. The two-level sampling strategies, followed by information reuse from nearby designs, are intended to enhance sampling from the failure region, thus reducing the number of samples and improving the efficiency of sampling-based methods. Compared with existing methods, which decouple RBDO in the design space and must therefore struggle with searching for the most probable point (MPP), the proposed method decouples RBDO in the probability space, which further allows an efficient optimal shifting value search strategy to reach an optimal design in fewer iterations. Comparisons of the proposed method with crude MCS and other sampling-based methods on benchmark examples show that it is competitive and dramatically reduces the computational cost.

**Keywords:** reliability-based design optimization (RBDO), decoupled method, stochastic optimization, Monte Carlo simulation, data-driven modeling, information reuse

### **1. Introduction**

It is a high priority for engineers to reach an optimal design scheme for the entire system. However, optimality alone cannot guarantee the reliability of the design, since the theoretically optimal solutions lie very close to the boundaries of the reliability constraints. The optimal solution in practice can therefore be very sensitive to design uncertainties and may easily cross the reliability constraint limits. To avoid such a situation, the optimal design should either be reported with a confidence range (margin) or be obtained from an optimization process whose objective includes maintaining the desired level of reliability [1]. Reliability-based design optimization (RBDO) methodologies have been introduced to incorporate uncertainties in the design components and to provide a scheme that seeks an optimal solution with the desired level of reliability.

To better understand the mechanism of these methods, researchers divided them into three major categories [2]:

	- *Double loop methods:* The reliability analysis is nested as an inner loop inside the outer design optimization loop; this is the most direct formulation but carries a heavy computational burden.
	- *Unilevel methods:* These methods replace the inner reliability analysis with the Karush–Kuhn–Tucker (KKT) optimality conditions at the optimum point and then enforce them as constraints in the main design optimization loop [2, 5–7].
	- *Decoupled methods:* This approach arranges the reliability analysis and the deterministic design optimization (DDO) in a sequential manner so that, in each cycle, the probabilistic constraints are transformed into equivalent deterministic constraints [8–13].

The failure probability estimation is both the key component and challenge of an RBDO problem.

Three types of methods have been identified to calculate the failure probability of constraints [14]:


*An Efficient Quantile-Based Adaptive Sampling RBDO with Shifting Constraint Strategy DOI: http://dx.doi.org/10.5772/intechopen.110442*

probabilities, an inexpensive surrogate model is employed to approximate the original high-fidelity model. These methods are classified into three main groups:


If the constraints are mildly nonlinear, analytical methods (approximate function methods such as FORM and SORM) as well as surrogate methods (e.g., RSM, Kriging) provide a rather good estimate of the reliability. In contrast, simulation-based approaches have the advantage of not being sensitive to the complexity of the limit state functions (LSF) and of avoiding LSF approximation errors [41]. The high expense of MCS in the presence of costly LSF evaluations, however, makes their combined usage inefficient. Even surrogate models introduced to cope with this issue have proven to bias the failure probability estimate. They also do not reuse data from previous design iterations, which could save a great amount of computational budget [30]. Other approaches for decreasing the computational effort of simulation-based methods, especially for smaller failure probabilities, are importance sampling (IS) and subset sampling. A major disadvantage of these sampling techniques is their need for an initial proposal distribution. If the biasing density is not built appropriately, they can be problematic for high-dimensional and highly nonlinear LSF, since they still require a huge number of samples for the estimations.

In RBDO, both the reliability assessment and its integration with the optimization process are vital for finding a design that is not only optimal but also reliable. To strike a balance between computational efficiency and accuracy of the reliability estimate, decoupled methods have proven to be good choices. They break down the nested loop structure of DDO and reliability analysis and perform them sequentially. Sequential optimization and reliability analysis (SORA) [4] is the most popular method in the literature, with desirable stability and efficiency [42]; in it, the feasibility of the current optimal point with respect to the probabilistic constraints is assessed via the MPP [43]. However, it inherits the shortcomings of MPP-based methods discussed earlier. In recent years, most studies on SORA have concentrated on how to efficiently search for the MPP in a decoupling procedure [42].

The main contribution of this work is twofold:


The first idea aims at eliminating the need to search for the MPP and is thus able to overcome the inherent limits of MPP-based methods. To make this adjustment, unlike SORA, which decouples the RBDO structure in the design variable space, the proposed method decouples RBDO in the probability space. By calculating the distance required to reach a more reliable design in the probability space, the reliability constraint is converted into a deterministic constraint. Then, the RBDO problem is defined as a sequence of deterministic sub-optimization problems.

The second proposition aims to improve computational efficiency. For the failure probability calculation, the quantile is computed by means of sampling methods. In order to reduce the sampling effort, two strategies are applied in each iteration. First, adaptive constraint shifting based on the required level of reliability in each iteration is suggested, leading to a significant reduction in the number of iterations. Second, once the initial sampling for forming the constraint distribution around the current optimal point is performed, new samples are generated using the statistical properties of the infeasible region of the constraint distribution. Meanwhile, the samples generated from previous designs are recycled to benefit the current iteration. A mixture of these recycled samples with the newly generated samples is then used in the reliability analysis to provide an accurate estimate of the quantile. By doing so, convergence to the final reliable design point is achieved.

To the best of our knowledge, such an idea has not been implemented under quantile-based RBDO concepts. Reusing existing information in the context of RBDO has been explored in [30, 44]; however, those works used it in a double-loop RBDO framework, which suffers from the heavy computational burden discussed earlier. The idea of recycling samples rests on the assumption that nearby designs are likely to have similar failure regions. Hence, reusing them can yield the computational savings required for a better description of the failure region.

The remainder of the paper is organized as follows:

The proposed framework with its innovative adjustments is described in detail in Section 2. Section 3 studies the performance of the proposed method by applying it to a number of highly cited examples from the literature, followed by a discussion of its advantages. Finally, the key characteristics of the proposed method and the conclusions drawn from the achieved results are summarized.

### **2. Proposed sampling-based RBDO framework**

SORA depends on the location of the MPP to find the shifting vector. The strategies for searching the MPP are typically gradient-based and may fall into a local optimum or oscillate between multiple MPPs. Another error arises when the random variables are non-normal, since they must be mapped into an equivalent standard normal space [42]. In this study, the failure probability needed to solve the RBDO problem is estimated by adaptive sampling, without any need to map the random variables to the standard normal space as in an MPP-search procedure. In this framework, constraint shifting is carried out in the probability space.

#### **2.1 RBDO problem formulation**

A generic formulation of the RBDO problem is expressed as

$$\begin{array}{ll}\text{Minimize}: & f\left(\mathbf{d}, \boldsymbol{\mu}_{X}, \boldsymbol{\mu}_{P}\right) \\ \text{s.t.}: & \text{Prob}\left[g_{i}(\mathbf{d}, \boldsymbol{X}, \boldsymbol{P}) \ge 0\right] < P_{f}^{t}, \qquad i = 1, 2, \ldots, n \\ & \mathbf{d}^{L} \le \mathbf{d} \le \mathbf{d}^{U}, \qquad \boldsymbol{\mu}_{X}^{L} \le \boldsymbol{\mu}_{X} \le \boldsymbol{\mu}_{X}^{U}, \qquad \boldsymbol{\mu}_{P}^{L} \le \boldsymbol{\mu}_{P} \le \boldsymbol{\mu}_{P}^{U} \end{array} \tag{1}$$


where **d** is the deterministic design variable vector, *X* is the stochastic design variable vector, and *P* is the uncertain design parameter vector. The inequality function, *g<sub>i</sub>*(**d**, *X*, *P*) ≥ 0, is generally the safety requirement, where *g<sub>i</sub>* > 0 indicates the safe region, *g<sub>i</sub>* < 0 indicates the failure (infeasible) region, and *g<sub>i</sub>* = 0 defines the limit state surface. The value *P<sub>f</sub><sup>t</sup>* is the target failure probability, which the probabilistic constraint must respect to guarantee the design's reliability.

An equivalent model to Eq. (1) aligned with quantile-based probability estimation is given by

$$\begin{array}{ll}\text{Minimize}: & f\left(\mathbf{d}, \boldsymbol{\mu}_{X}, \boldsymbol{\mu}_{P}\right) \\ \text{s.t.}: & g_{i}^{R}(\mathbf{d}, \boldsymbol{X}, \boldsymbol{P}) \le 0, \quad i = 1, 2, \ldots, n \end{array} \tag{2}$$

$$\text{Prob}\left[g_{i}(\mathbf{d}, \boldsymbol{X}, \boldsymbol{P}) \ge g_{i}^{R}\right] = P_{f}^{t},$$

which indicates that the probability of *g<sub>i</sub>*(**d**, *X*, *P*) being greater than or equal to the R-percentile *g<sup>R</sup>* is exactly equal to the target failure probability *P<sub>f</sub><sup>t</sup>*.
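In code, this quantile equivalence can be checked directly: the R-percentile *g<sup>R</sup>* is the (1 − *P<sub>f</sub><sup>t</sup>*) empirical quantile of sampled constraint values. A minimal Python sketch (our illustration, using a standard normal stand-in for the constraint distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
P_f_t = 0.01                          # target failure probability
g = rng.standard_normal(1_000_000)    # stand-in samples of the constraint g

# R-percentile g^R: the (1 - P_f^t) quantile of g's distribution,
# so that Prob[g >= g^R] = P_f^t
g_R = np.quantile(g, 1.0 - P_f_t)

# Empirical check: the fraction of samples at or above g^R matches P_f^t
p_hat = np.mean(g >= g_R)
print(g_R, p_hat)
```

For a standard normal constraint, the 0.99-quantile is about 2.33, and the empirical exceedance fraction reproduces the target failure probability.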

To better understand the shifting of unreliable constraints toward safe region in a probability space (rather than random variable space), the RBDO problem is formulated as

$$\begin{array}{ll}\text{Minimize}: & \mathbb{E}[f(\mathbf{d}, \boldsymbol{X})] \\ \text{s.t.}: & \text{Prob}\left(g_i(\mathbf{d}, \boldsymbol{X}) < 0\right) \leq P_f^t \end{array} \tag{3}$$

For simplicity in notations, *X* denotes all random samples (whether design variables or design parameters).

In each iteration, once the DDO is solved, the reliability of the constraints is evaluated at the current optimal design point. A shifting vector is then searched for in the probability space; it gives the step size required for moving the constraint boundaries toward the feasible region in the next iteration. This ensures that the optimal point obtained from the deterministic optimization lies on the deterministic constraint boundary with an improved reliability level. In this framework, efforts are dedicated to seeking the best shifting value for each active constraint in order to efficiently shift the probabilistic constraint toward the feasible region. Moreover, it should be emphasized that the distance between the quantile corresponding to *P<sub>f</sub><sup>t</sup>* and the current optimal design is measured in the probability space, contrary to SORA, which measures the distance between the MPP and the optimal point in a standardized design space. Doing so increases the stability and accuracy of the method when solving the RBDO problem, as illustrated in the examples section.

As shown in **Figure 1**, if the value of *P<sub>f</sub><sup>t</sup>* is predefined, the target failure probability is obtained by shifting the limit state (*g* = 0) from its original place to the location of line 2. This shift of the limit state line can be performed by means of an appropriate auxiliary shifting variable *y*.

The minimum value of *y* is zero, corresponding to the original constraint location (*g* = 0). On the other hand, *y* can take a maximum value equal to the maximum value of *g*, that is, the endpoint of the tail of the PDF, termed *UB<sub>g</sub>*.

To calculate the required shift for the constraints, a simple line search problem is defined as

#### **Figure 1.**

*Shifting the limit state line by an optimal value y<sup>∗</sup> to satisfy the desired reliability (P<sub>f</sub><sup>t</sup>) for constraint g in probability space.*

$$\begin{array}{ll}\text{Minimize}: & -y\\ \text{s.t.}: & P_f - P_f^t \le 0 \\ & 0 < y < UB_g \end{array} \tag{4}$$

where the shifting variable *y* can only take values between 0 and the upper bound *UB<sub>g</sub>* of the constraint distribution *g*. The value of *y* should be maximized while the probability of failure remains below the target failure probability. In other words, we seek a *y* such that the fraction of samples (from *g*'s distribution) that exceed *y* equals the target failure probability.
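Because the samples of *g* are already available, this line search needs no gradients: the sought *y* is essentially the (1 − *P<sub>f</sub><sup>t</sup>*) empirical quantile of the constraint samples. A minimal Python sketch of this idea (the function name and interface are our own, not from the chapter):

```python
import numpy as np

def optimal_shift(g_samples, p_f_target):
    """Largest shift y with mean(g > y) <= p_f_target (Eq. (4)).

    Equivalent to taking the (1 - p_f_target) empirical quantile of the
    constraint samples, so no gradient-based MPP search is needed.
    """
    g_sorted = np.sort(g_samples)
    n = len(g_sorted)
    k = int(np.floor(p_f_target * n))           # samples allowed to exceed y
    y = g_sorted[n - k - 1] if k < n else g_sorted[0]
    return max(y, 0.0)                          # Eq. (4): y is bounded below by 0

rng = np.random.default_rng(1)
g = rng.normal(0.5, 1.0, 100_000)               # stand-in constraint samples
y_star = optimal_shift(g, 0.01)
print(np.mean(g > y_star))                      # fraction exceeding y_star
```

For continuous samples the exceedance fraction at the returned *y* equals the target by construction.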

As stated in [4], as the design moves toward 100% reliability, the computational effort for searching the MPP in a standard normal space grows. In the proposed approach, part of this issue is solved at the reliability assessment level by retrieving useful historical data from nearby designs, and part of it is addressed by relying on an adaptive search strategy for *y*.

#### **2.2 Adaptive shifting value search**

If the value of *y* is chosen very small, the constraint is shifted at a very low pace. The rate of convergence then decreases, and the probability requirements are met only after a great number of iterations. On the other hand, if *y* takes a large value, a premature design with higher reliability than desired is very likely, meaning that overdesign has occurred. To tackle this problem, an adaptive step size is introduced to trade off the amount of constraint shifting against the required level of *P<sub>f</sub>*. The essence is to move the design solution toward its optimum as quickly as possible while eliminating the need for locating the MPP, in a manner adjusted to simulation-based approaches. The tradeoff problem is formulated as follows:

$$\text{Cost function} = y_{ratio} + \Delta P_f \tag{5}$$

$$y_{ratio} = \frac{y}{UB_g} \tag{6}$$


$$\Delta P_f = \left|\left(\frac{1}{N} \sum_{i=1}^{N} \mathbb{I}_{\inf}(\mathbf{d}, \mathbf{x}_i)\right) - P_f^t\right| \tag{7}$$

where the indicator function $\mathbb{I}_{\inf}$ is defined as

$$\mathbb{I}_{\inf}(\mathbf{d}, \mathbf{x}) = \begin{cases} 1, & g(\mathbf{d}, \mathbf{x}) > y, \text{ i.e., } \mathbf{x} \text{ in the infeasible region,} \\ 0, & \text{otherwise,} \end{cases}$$

and *x<sub>i</sub>*, *i* = 1, … , *N*, are the *N* samples from the probability distribution of constraint *g*. The first term in Eq. (5) is the normalized shifting value, and the second term is the error in the failure probability.

The above problem can be stated in the form of a penalty function:

$$\text{Cost function} = y_{ratio} + r \left[ \sum \left( \max \{ 0, \nu \} \right)^2 \right] \tag{8}$$

and

$$\nu = \left(\frac{1}{N} \sum_{i=1}^{N} \mathbb{I}_{\inf}(\mathbf{d}, \mathbf{x}_i)\right) - P_f^t \tag{9}$$

where the multiplier *r* determines the severity of the penalty and must be specified a priori depending on the type of problem (e.g., problems with small failure probabilities).

The main advantage of using this penalty function is that in the initial iterations (less reliable designs), the shift toward the safe region is quicker because more weight is put on *y*, whereas in the last iterations, where the infeasible region becomes narrower, the second term of Eq. (5) outweighs the first. Consequently, it does not allow big step sizes that would overshoot *P<sub>f</sub><sup>t</sup>* at later stages, especially for rare events.

It should be noted that in Eq. (8), *r* acts as a controller on the error term and should be changed adaptively. For instance, it can be stipulated that when the difference between *P<sub>f</sub>* and *P<sub>f</sub><sup>t</sup>* is less than an adequately small amount (e.g., 0.1), the value of *r* is increased, thereby magnifying the error in the failure probability. This is especially beneficial for problems with stricter reliability requirements.
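A possible implementation of the penalty-based shift search of Eqs. (8)–(9) is sketched below in Python (our illustration, with a standard normal stand-in for *g* and a plain grid search; the chapter does not prescribe a particular optimizer). It also shows the effect of adaptively increasing *r*: the larger multiplier pulls the achieved failure probability closer to the target.

```python
import numpy as np

def penalty_cost(y, g_samples, p_f_target, ub_g, r):
    """Penalty form of the shift-value search, Eqs. (8)-(9) (illustrative)."""
    y_ratio = y / ub_g                          # Eq. (6): normalized shift
    nu = np.mean(g_samples > y) - p_f_target    # Eq. (9): failure-prob. violation
    return y_ratio + r * max(0.0, nu) ** 2      # Eq. (8)

rng = np.random.default_rng(2)
g = rng.normal(0.0, 1.0, 50_000)                # stand-in constraint samples
p_t, ub = 1e-3, float(g.max())

def best_shift(r):
    ys = np.linspace(0.0, ub, 4001)             # simple grid search over shifts
    return ys[int(np.argmin([penalty_cost(y, g, p_t, ub, r) for y in ys]))]

# r = 1/P_f^t as in the text; a larger r tightens the match to the target
y_lo, y_hi = best_shift(1.0 / p_t), best_shift(100.0 / p_t)
p_lo, p_hi = np.mean(g > y_lo), np.mean(g > y_hi)
print(p_lo, p_hi)
```

With the moderate *r* the search takes a partial step (the achieved failure probability still exceeds the target), while the magnified *r* drives it close to *P<sub>f</sub><sup>t</sup>*, mirroring the adaptive behavior described above.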

We use the results of example 3.1 (Section 3) in advance for intuition.

The value of *r* in the following problem is set to 1e+3 since *P<sub>f</sub><sup>t</sup>* is 1e-3. Accordingly, if *P<sub>f</sub><sup>t</sup>* were 1e-4, *r* should be set to 1e+4. To show how the formula works, the tradeoff curve between the amount of shifting and the failure probability error is provided in **Figure 2**.

As is evident in **Figure 2**, this optimization formulation trades off the two terms of the objective function. At first, greater values of the normalized *y* favor minimizing the penalty function, because the large failure probability error dominates the penalty function and drives *y* up as fast as possible. The trend then reverses, and the amount of shift becomes more important. A narrower failure region in later iterations does not cause the effect of *y* to fade, since *y* is normalized. The second term is squared so as not to allow any violation of *P<sub>f</sub><sup>t</sup>* (occurrence of overdesign).

**Figure 2.**

*Results of the adaptive optimal shift value search for example 1. The left panel indicates the effect of shift value step size on the cost function, and right panel indicates a sharp impact of error in the probability of failure on the cost function.*

This shifting value search algorithm is robust in the sense that it is suitable for both continuous and discrete constraint functions.

Defining *y* and solving its optimization problem does not impose a significant computational load, because in effect only a comparison between a predefined value (*P<sub>f</sub><sup>t</sup>*) and the available samples is performed. In each iteration, a value of *y* is suggested and added to the deterministic constraint of the next iteration. The DDO problem aiming at a new optimal solution is then solved with the shifted constraint, in a decoupled DDO-reliability assessment approach.

The algorithm proceeds until a stopping criterion is satisfied. A proper stopping criterion for this framework is the maximum shifting value, or shifting vector in the case of more than one constraint, becoming negligibly small; this implies that all constraints are reliably satisfied. Therefore, with this search strategy adapted to simulation-based RBDO, the reliability of the constraints improves progressively while the need for searching MPPs is eliminated.
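The overall decoupled loop can be illustrated end to end on a toy one-dimensional problem of our own (not one of the chapter's examples): minimize f(d) = d subject to Prob[X − d > 0] ≤ *P<sub>f</sub><sup>t</sup>* with X ~ N(0, 1), whose exact reliable optimum is d* = Φ⁻¹(1 − *P<sub>f</sub><sup>t</sup>*) ≈ 2.326 for *P<sub>f</sub><sup>t</sup>* = 0.01:

```python
import numpy as np

rng = np.random.default_rng(3)
p_t = 0.01                 # target failure probability
d, shift = 0.0, 0.0        # design variable and accumulated constraint shift

# Toy problem: g(d, X) = X - d, X ~ N(0, 1); failure when g > 0.
for it in range(20):
    # 1) DDO with the shifted deterministic constraint (mu_X - d) + shift <= 0,
    #    whose minimizer is simply d = shift here (mu_X = 0)
    d = shift
    # 2) reliability analysis: sample the constraint at the current design
    g = rng.standard_normal(200_000) - d
    # 3) optimal shift value: (1 - p_t) quantile of the constraint samples
    y = max(0.0, np.quantile(g, 1.0 - p_t))
    if y < 1e-3:           # stopping criterion: negligible remaining shift
        break
    shift += y

print(d, np.mean(rng.standard_normal(200_000) - d > 0))
```

The loop converges in a few iterations to a design near d* ≈ 2.326, at which the empirical failure probability matches the target.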

#### **2.3 Sampling strategy for reliability analysis**

To gain insight into the constraint's behavior near the failure region, a first level of sampling (L1) is required. This takes place around the optimal point identified in the DDO of the current iteration, using the distribution properties defined for the random variables. Contrary to the existing sampling-based methods [30, 44], the proposed method uses the central limit theorem for a more efficient sampling with less waste. According to this theorem, "the mean of a batch of samples will be closer to the mean of the overall population as the sample size increases" [45]. The overall population here means the huge set of MCS samples required to describe the probability distribution of the active constraints. Therefore, adaptive sampling can be employed to distribute the initial samples around the optimal point of the current iteration based on the difference between the mean of the batch of samples and the mean of the whole population. With the aid of such adaptive sampling, a more appropriate sample size with less waste is obtained in each iteration. Moreover, this sampling provides a distribution function for the constraint that better describes the failure region and hence provides a proper basis for the second-level sampling. The second-level sampling (L2) aims at distributing more samples in the region significant to the RBDO.

MCS has been run 500 times for example 1 (Section 3) to examine the process of convergence to the population mean (**Figure 3**). Then, it is repeated 100 times to


**Figure 3.** *Variability for different sample sizes in 500 repetitions of MCS.*

exhibit the variability. In each run, a specified number of samples is generated, and then, these samples are aggregated with the previously produced samples. Then, the mean of the batches of samples is plotted in order to trace the value to which their means are approaching. Three different sample sizes are depicted to show the amount of variability for each one of them.

Across the 500 repetitions, there was noticeable variability at low sample sizes. As expected, this variability decreases with larger sample sizes. Thus, in each RBDO iteration, a moderate batch size can be chosen, and as long as the coefficient of variation (CoV) has not reached its desired value, the generation of batches of samples continues. In example 1 (Section 3), the batch size was set to 50, and a total of 1000 samples was needed to obtain a CoV of 0.1 in the first iteration. As the iterations continue, the failure region becomes narrower and more data are required; this volume of samples is then adaptively determined by the preset CoV. The impact of using the CoV as the stopping rule depends on the properties of the probability space. This adaptive initial sampling also reduces the potential error of improper sample size selection when building biasing density functions in IS methods.
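The batchwise sampling with a CoV stopping rule might look as follows in Python (a sketch under our own assumptions: the CoV is taken as that of the MCS failure-probability estimator, and `sample_g` is a hypothetical routine that draws constraint samples):

```python
import numpy as np

def adaptive_batch_sampling(sample_g, batch_size=50, cov_target=0.1,
                            max_batches=200):
    """Accumulate batches of constraint samples until the coefficient of
    variation of the failure-probability estimate drops below cov_target.
    `sample_g(n)` is a hypothetical interface drawing n samples of g."""
    g = np.empty(0)
    p_hat = 0.0
    for _ in range(max_batches):
        g = np.concatenate([g, sample_g(batch_size)])
        p_hat = np.mean(g > 0.0)                 # fraction in the failure region
        if p_hat > 0.0:
            # CoV of the crude MCS estimator: sqrt((1 - p) / (p * N))
            cov = np.sqrt((1.0 - p_hat) / (p_hat * len(g)))
            if cov <= cov_target:
                break
    return g, p_hat

rng = np.random.default_rng(4)
g, p_hat = adaptive_batch_sampling(lambda n: rng.normal(-1.0, 1.0, n))
print(len(g), p_hat)
```

Rarer failure events automatically demand more batches before the CoV target is met, which is exactly the behavior described above for later RBDO iterations.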

Now, the generated samples are used both for estimating the failure probability and for the second level of sampling. To identify the constraint's quantile of each cycle more accurately, two actions are taken: (1) generating the second level of samples focused on the failure region, and (2) recycling the historical samples (samples from previous iterations). For the quantile-based failure probability, only the samples generated in the tail of the constraint distribution are significant to the RBDO problem.

It goes without saying that the initial sampling around the optimal point results in some samples falling in the safe region and the rest in the failure region. The latter carry the useful information for the second-level sampling.

It should be noted that once the initial sampling is carried out, inactive constraints can be identified so that they do not contribute to the second-level sampling. The strategy proposed in [46] can be employed to remove inactive constraints without knowledge of their boundaries.

**Figure 4** (top) shows 100 random data points generated in L1 from a normal distribution with mean 0.87 and standard deviation 0.3. The location of *x<sub>mpp</sub>* is indicated as well. The samples are split into two parts, pre-*x<sub>mpp</sub>* and post-*x<sub>mpp</sub>*, as demonstrated in **Figure 4** (bottom), giving two subsets of the main L1 population. It should be noted that regardless of the distribution types of the random variables (e.g., normal, Gumbel, Weibull), normal distributions can safely be used to produce the new samples. This is allowed by the central limit theorem [45].

The standard deviation of the sampling distribution is smaller than the standard deviation of the population by a factor of √*n* [45]; hence, with this distribution, most of the L2 samples are placed around the infeasible region.

Sorting the generated samples of the random variables with respect to the index *g* = 0 of the constraint distribution gives the location of *x<sub>mpp</sub>* in the probability space.

Magnifying the infeasible region requires sufficient sampling from this region. Thus, new random samples are generated with the mean and standard deviation of the L1 samples that fell into the infeasible region (**Figure 4**, bottom right panel).
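The two-level scheme can be sketched as follows (Python, our illustration): L1 samples are split at a threshold, and L2 samples are drawn from a normal fitted to the infeasible subset, so that most of the new budget lands in the failure region. The threshold *y* and all distribution parameters here are illustrative assumptions, not values from the chapter:

```python
import numpy as np

rng = np.random.default_rng(5)

# L1 sampling: constraint values around the current optimal design
# (illustrative normal stand-in; in the method these come from the model)
g_l1 = rng.normal(0.0, 1.0, 1_000)

# Samples falling in the infeasible region (taken here as g > y for the
# current shift y -- an assumption for this sketch)
y = 1.0
g_fail = g_l1[g_l1 > y]

# L2 sampling: draw new samples from a normal fitted to the infeasible
# subset, concentrating the budget on the failure region
mu_f, sd_f = g_fail.mean(), g_fail.std(ddof=1)
g_l2 = rng.normal(mu_f, sd_f, 500)

print(len(g_fail), mu_f, np.mean(g_l2 > y))
```

Whereas only a small fraction of the L1 samples land beyond the threshold, the majority of the L2 samples do, which is the intended magnification of the infeasible region.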

In failure probability estimation with IS, the smaller the variance of the constructed biasing density, the more accurate the failure estimate. Chaudhuri et al. [30] have

**Figure 4.**

*Top panel: Sample points generated for level one with a sample size of 100. Bottom panels: The same samples of upper panel but broken from the xmpp point into two separate panels each with its own mean and standard deviation.*

utilized information reuse when building the a posteriori biasing density to reach an optimal one. They then combine the currently built biasing density with the biasing densities of nearby designs to form the final density used for failure probability estimation. In our proposed method, such a strategy is employed, but in a quantile-based frame, to produce an accurate estimate of the failure probability. The newly generated samples, along with the samples of nearby designs that overlap the tail of the current constraint's distribution, form the final distribution function. Our method differs from [30] in the way the failure probability is calculated: in the quantile-based method, the final distribution of the constraints is used directly in the search for an optimal shifting value, which is intended to reduce the failure probability error. In the implementation of the information reuse, no evaluation of the expensive LSF is required.

The new distribution function is given by

$$g_f = g + g_{\inf} + \sum_{j=0}^{k-1} g'_j \tag{10}$$

where *g* is the initial distribution (from the L1 sampling), *g*<sub>inf</sub> is the distribution of the L2 sampling, and *g*′<sub>*j*</sub> is the distribution built over all contributing samples from past RBDO iterations *j* ∈ {0, … , *k* − 1} that are stored in a database.

Similar to the IS method, where each sample has its own weight, in the quantile-based method each sample added to the *g* distribution changes the MCS estimator, because adding any single sample increases the denominator in Eq. (7) [42]. Thus, using the mixture distribution in the optimal shifting value search plays an important role in the accuracy of the estimates.

Considering the recycled samples and the L1 and L2 sampling, the problem can be reformulated as

$$\Delta P_f = \left|\frac{1}{n_{L1} + n_{L2} + m} \sum_{i=1}^{n_{L1} + n_{L2} + m} \mathbb{I}_{\inf}(\mathbf{d}, \mathbf{x}_i) - P_f^t\right| \tag{11}$$

where the indicator function $\mathbb{I}_{\inf}$ is defined as before:

$$\mathbb{I}_{\inf}(\mathbf{d}, \mathbf{x}) = \begin{cases} 1, & \mathbf{x} \in \text{the infeasible region of } g_f, \\ 0, & \text{otherwise.} \end{cases}$$

Here, the infeasible region is that of *g<sub>f</sub>*, and *n<sub>L1</sub>*, *n<sub>L2</sub>*, and *m* are the numbers of samples from *g*, *g*<sub>inf</sub>, and *g*′<sub>*j*</sub>, respectively.
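Pooling the three sample sources of Eq. (10) and evaluating Eq. (11) over the pooled set can be sketched as below (Python; all distributions, sizes, and the shift value are illustrative placeholders, not values from the chapter):

```python
import numpy as np

rng = np.random.default_rng(6)
y = 2.0                                  # current shift value (illustrative)

# Eq. (10): pooled constraint distribution -- L1 samples, L2 samples focused
# on the failure region, and recycled samples from k = 3 nearby past designs
g_l1 = rng.normal(0.0, 1.0, 1_000)
g_l2 = rng.normal(2.3, 0.4, 400)
g_hist = [rng.normal(-0.2 * j, 1.0, 300) for j in range(3)]

g_f = np.concatenate([g_l1, g_l2, *g_hist])

# Eq. (11): failure-probability estimate and error over all pooled samples
n_total = len(g_f)                       # n_L1 + n_L2 + m
p_hat = np.mean(g_f > y)                 # fraction beyond the shifted limit
delta_p = abs(p_hat - 1e-2)              # deviation from an assumed target
print(n_total, p_hat, delta_p)
```

Note that every recycled or L2 sample enlarges the denominator of the estimator, which is the effect discussed above for the mixture distribution.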

**Figure 5** shows the flowchart of the proposed algorithm.

**Figure 5.** *Flowchart of the proposed algorithm.*

In the next section, three well-known benchmark problems are employed to verify the efficiency and accuracy of the proposed method and to examine the whole proposed RBDO framework for quantile-based sampling methods.

### **3. Implementation and performance evaluation**

In this section, the performance of the proposed framework in terms of applicability and computational efficiency is investigated through three examples that are among the most widely cited benchmark problems. The results are then compared with those previously reported for verification. To solve the deterministic optimization part, sequential quadratic programming (SQP) through the *fminbnd* function in MATLAB is used.

Since the computational cost of the deterministic optimization is negligible for sequential RBDO methods, the number of LSF evaluations (NFE) is a fair measure of efficiency.

The RBDO optimal solutions are validated against a crude MCS of ten million samples as a reference to verify that the target failure probability is satisfied.

#### **3.1 First RBDO problem: buckling of a straight column**

A structural engineering problem is considered to test both the efficiency and the accuracy of the method. A long straight column with a rectangular cross section, fixed at one end and free at the other, is considered (**Figure 6**) [47]. A deterministic axial load *F* is applied at the free end.

The column design optimization minimizes the total area of the cross section while the exerted load must not exceed the critical buckling load. This is formulated as follows:

$$\begin{aligned} \text{Minimize}: \quad & f(\mathbf{x}) = bh \\ \text{s.t.}: \quad & P_f(\mathbf{x}) \le P_f^t \\ & b - h \ge 0 \\ & g(\mathbf{x}, p) = \frac{\pi^2 E b h^3}{12 L^2} - F_a < 0 \end{aligned} \tag{12}$$

**Figure 6.** *A straight column with an axial load applied on it.*

where *b*, *h*, and *p* denote the width and height of the column and the random parameters, respectively; *b* and *h* are chosen as the design variables. *E*, *L*, and *F* stand for the elasticity modulus, the length of the column, and the axial load. Distribution information for *E* and *L* as random parameters and lower/upper bounds for *b* and *h* as deterministic variables are given in **Table 1**.
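The limit state of Eq. (12) is cheap to evaluate, so a crude MCS estimate of the failure probability at a candidate design is straightforward. In the Python sketch below, all distribution parameters and the load are hypothetical placeholders, not the values of Table 1:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Hypothetical distribution parameters for illustration only -- the actual
# values are given in Table 1 of the chapter.
E = rng.normal(2.0e11, 1.0e10, n)     # elasticity modulus [Pa]
L = rng.normal(3.0, 0.05, n)          # column length [m]
F = 1.2e5                             # deterministic axial load [N]
b, h = 0.06, 0.05                     # candidate design (width, height) [m]

# Limit state of Eq. (12): failure when the applied load exceeds the
# critical Euler buckling load, i.e. when g < 0
g = np.pi**2 * E * b * h**3 / (12.0 * L**2) - F
p_f = np.mean(g < 0.0)
print(p_f)
```

The same one-line estimator, with ten million samples and the chapter's table values, is what validates the reported optima against the target failure probability.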

To examine the ability of the proposed algorithm to reach different target failure probabilities, three cases are considered (**Table 2**).

The results in **Table 2** indicate that our algorithm is not only capable of satisfying the desired level of reliability but also uses fewer function calls than the other decoupled sampling-based method. The algorithm is around 9, 5, and 2 times more efficient in the first, second, and third cases, respectively, in comparison with the other sampling method.

The probability of failure for the probabilistic constraint of each case at the optimum is evaluated by a crude MCS with ten million sample points to ensure that the target failure probability is not violated.



**Table 1.**

*Information on variables.*


**Table 2.**

*Results summarized for three cases with different P<sub>f</sub><sup>t</sup>.*

**Figure 7.** *PDFs of the kernel distribution fit to the random samples for the second iteration (P<sub>f</sub><sup>t</sup> = 0.01) (example 1).*

Reusing samples from nearby designs and generating samples focused on the failure region has led to a significant improvement in the effective samples for the constraint's quantile. The contributions of the historical samples, L-1 sampling, and L-2 sampling to the final distribution are 0.1%, 0.2%, and 0.5% of all existing samples, respectively. **Figure 7** illustrates the contribution of each batch of samples in the failure region.
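The kernel fit shown in Figure 7 can be reproduced in spirit with a Gaussian KDE over the pooled constraint samples. The three batches, their sizes, and their distributions below are purely illustrative placeholders:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Three pooled batches of limit-state samples g(x): reused history from
# nearby designs, level-1 (coarse) samples, and level-2 (tail-focused) samples.
historical = rng.normal(2.0, 1.0, 2000)   # reused from nearby designs
level1     = rng.normal(1.5, 1.0, 4000)   # coarse exploration batch
level2     = rng.normal(0.2, 0.4, 1000)   # batch concentrated near g = 0

pooled = np.concatenate([historical, level1, level2])
kde = gaussian_kde(pooled)                # kernel distribution fit to all samples

# Estimate the failure probability P(g < 0) by integrating the fitted
# density over the failure region.
P_f = kde.integrate_box_1d(-np.inf, 0.0)
print(P_f)
```

Because the level-2 batch is concentrated near g = 0, it dominates the density estimate in the tail even though it is the smallest batch, which is the intended effect of the two-level strategy.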

The shift required for the constraint's distribution within successive iterations to reach P<sub>f</sub><sup>t</sup> = 0.01 can be traced in **Figure 8**, which shows how the algorithm shifts the probabilistic constraint toward the safer region with respect to the required level of reliability. In each cycle, since the failure region at the tail of the probabilistic constraint gets narrower, the contribution of the generated samples increases, while the contribution of the historical data from nearby designs drops.
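The shifting cycle can be sketched as a simple shift-until-feasible loop. The limit-state model, step size, and sample count below are hypothetical placeholders, not the chapter's shifting value search strategy:

```python
import numpy as np

rng = np.random.default_rng(4)

def g_samples(shift, n=100_000):
    """Placeholder limit state: MCS samples of the shifted constraint g - shift."""
    return rng.normal(1.0, 1.0, n) - shift

p_f_target = 0.01
shift, step = 0.0, 0.25
history = []

# Shift the deterministic constraint toward the safe region until the
# estimated failure probability P(g - shift > 0) meets the target.
while True:
    p_f = np.mean(g_samples(shift) > 0.0)
    history.append((shift, p_f))
    if p_f <= p_f_target:
        break
    shift += step
```

In the proposed method the step size is adapted rather than fixed, so the constraint is shifted in large steps early on and in smaller steps as the failure region narrows.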

If a fixed sample size were set for all iterations and past information were not recycled for the reliability analysis, the algorithm would have to produce many more samples to converge to the reliable optimal point, which underlines the efficiency of the proposed sampling algorithm. As can be observed in **Figure 9**, without the defined adjustments in the sampling levels, a sample size of 5000 is required in each iteration (a total of 25,000 for a full RBDO) to reach the desired failure probability (P<sub>f</sub><sup>t</sup> = 0.001). With the measures of the quantile-based sequential RBDO, however, the total number of function evaluations is as few as 11,350, that is, 2.2 times less computational load without sacrificing accuracy.

### **3.2 Second RBDO problem: a highly nonlinear RBDO problem**

This problem is very well studied in various papers [48, 49], and the results of the proposed method are compared with an MCS-RBDO.

**Figure 8.** *PDFs of the kernel distribution fit to the random samples of all RBDO iterations for example 1 and for its second case (P<sub>f</sub><sup>t</sup> = 0.01).*

**Figure 9.** *A great number of samples, without the proposed strategy, is required in each iteration to achieve the target failure probability (here P<sub>f</sub><sup>t</sup> = 0.001). The right panel is a zoomed view of the left panel (example 1).*

The considered mathematical design problem is formulated as

$$\begin{aligned} \text{Minimize} \quad f(\mathbf{x}) &= -\frac{\left(\mu\_{x\_1} + \mu\_{x\_2} - 10\right)^2}{30} - \frac{\left(\mu\_{x\_1} - \mu\_{x\_2} + 10\right)^2}{120} \\ \text{s.t.} \quad P\_f\left(g\_i(\mathbf{x}) > 0\right) &\le P\_{f,i}^t, \qquad i = 1, 2, 3 \\ g\_1(\mathbf{x}) &= 1 - \frac{x\_1^2 x\_2}{20} \end{aligned}$$


$$g\_2(\mathbf{x}) = 1 - \frac{\left(x\_1 + x\_2 - 5\right)^2}{30} - \frac{\left(x\_1 - x\_2 - 12\right)^2}{120}$$

$$g\_3(\mathbf{x}) = 1 - \frac{80}{x\_1^2 + 8x\_2 + 5}\tag{13}$$

This is a minimization problem with three nonlinear constraints. The input design variables *x*<sub>1</sub> and *x*<sub>2</sub> are random with normal PDFs and a standard deviation of 0.3 each. The target failure probability is set to 2.28% for each constraint, which is equivalent to 97.72% reliability. The deterministic optimum is taken as the initial point. The results are presented in **Table 3**.
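Under the stated distributions, the constraint failure probabilities at a candidate design can be checked directly with MCS. The constraint functions below follow the widely used form of this benchmark; the candidate design point and sample size are placeholder assumptions:

```python
import numpy as np

def g1(x1, x2):
    return 1.0 - x1**2 * x2 / 20.0

def g2(x1, x2):
    return 1.0 - (x1 + x2 - 5.0)**2 / 30.0 - (x1 - x2 - 12.0)**2 / 120.0

def g3(x1, x2):
    return 1.0 - 80.0 / (x1**2 + 8.0 * x2 + 5.0)

rng = np.random.default_rng(2)
N = 200_000
mu = (3.4, 3.3)                      # candidate design point (placeholder)

# x1, x2 are normal with standard deviation 0.3 around the current design.
x1 = rng.normal(mu[0], 0.3, N)
x2 = rng.normal(mu[1], 0.3, N)

# Failure for constraint i is g_i > 0; each estimate must stay below 2.28%.
pfs = {g.__name__: np.mean(g(x1, x2) > 0.0) for g in (g1, g2, g3)}
print(pfs)
```

A design is accepted only when every entry of `pfs` is at or below the 2.28% target.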

Youn et al. [49] have pointed out that this problem is a good benchmark for RBDO methods because it has a highly nonlinear and nonmonotonic performance function *g*<sub>2</sub> as one of the active constraints. Our proposed method shows good stability in guiding the constraint boundaries toward their final locations. It is also observed that the proposed framework requires fewer LSF calls to reach the desired level of reliability and can therefore save more computational effort than a directional MCS-based RBDO algorithm.

To give an insight into the deterministic and probabilistic constraints, **Figure 10** illustrates the contours of the design space. The initial design point and the optimal points for DDO and RBDO are shown in this figure as well. The results of the first run (reported in **Table 3**) are obtained after three iterations. The progress of the iterations can be seen in **Figure 11**.

The third constraint is not depicted in **Figure 11** because the algorithm has removed it as probabilistically inactive. The remaining active constraints are shifted in each iteration until the desired level of reliability is reached. The total shifting values for these active constraints are 1.171 and 0.166, respectively.

### **3.3 Third RBDO problem: highly nonlinear limit state with random and deterministic variables**


This specific benchmark problem has been selected to investigate the convergence ability of the framework [50]. This problem is characterized by two normally distributed variables as well as two deterministic design variables in a highly nonlinear

*\*The number in parentheses in the last column denotes the result of the second run.*

**Table 3.**

*Optimization results on the second RBDO problem.*

**Figure 10.** *Results of the proposed method in the design space, the boundaries of deterministic (solid lines) and probabilistic (dotted lines) constraints (example 2).*

**Figure 11.** *Active constraints in three iterations and optimal points of each iteration (example 2). The final optimal point emerges in iteration 3.*

design space. This problem is mathematically expressed as Eq. (14). The first random variable *x*<sub>1</sub> has mean 5 and standard deviation 1.5, whereas the second variable *x*<sub>2</sub> has mean 3 and standard deviation 0.9.

$$\begin{aligned} \text{Find} \quad \mathbf{d} &= [d\_1, d\_2] \\ \text{Minimize} \quad f(\mathbf{d}) &= d\_1^2 + d\_2^2 \\ \text{s.t.} \quad \Pr\left(g(\mathbf{x}, \mathbf{d}) \le 0\right) &\le \Phi(-\beta^t) \end{aligned}$$


$$0 \le \{d\_1, d\_2\} \le 15$$

$$g(\mathbf{x}, \mathbf{d}) = d\_1 d\_2 x\_2 - \ln(x\_1) \tag{14}$$
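The limit state of Eq. (14) can be evaluated at the reported optimum with a crude MCS sketch; the sample size is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 500_000

d1, d2 = 1.35, 1.35                  # reported optimal design
x1 = rng.normal(5.0, 1.5, N)         # random variable x1
x2 = rng.normal(3.0, 0.9, N)         # random variable x2

# Limit state of Eq. (14): failure when g(x, d) <= 0.
# ln(x1) requires x1 > 0; the rare negative draws of x1 are rejected here.
mask = x1 > 0.0
g = d1 * d2 * x2[mask] - np.log(x1[mask])
P_f = np.mean(g <= 0.0)
print(P_f)
```

The resulting estimate can then be compared against the target Φ(−β<sup>t</sup>) of the problem statement.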

The optimization results of the proposed algorithm are reported in **Table 4** alongside the result reported by Shayanfar et al. [51]. With our proposed algorithm, the achieved optimal solution is [1.35, 1.35], which is consistent with the results of the RIA, PMA, SLA, and SORA methods reported in [50] as well as with the particle swarm optimization-wolf search algorithm (PSO-WSA) proposed by Shayanfar et al. [51] (**Table 4**); all approaches converged to the optimal point [1.35, 1.35]. Our method is robust in that it shows no sensitivity to the choice of the initial point. For example, the result in **Table 4** corresponds to the initial point *x*<sub>0</sub> = [−4, 5].

Solving the DDO depends on the initial point, and this issue can cause problems in the reliability assessment part. To reduce the dependency on the initial point, an appropriate deterministic constraint handling had to be chosen. To balance the effect of the shift value against the error in the failure probability, a cost function in the form of an exterior penalty function, Eq. (8), is defined. This adjustment allows the algorithm to change the importance of the two terms adaptively, and hence the dependency of the RBDO problem on the initial point is resolved.
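The balance described above can be illustrated with an exterior penalty of the general form used in Eq. (8). The weighting scheme below is a hypothetical sketch, not the chapter's exact formulation:

```python
# Hypothetical exterior penalty balancing the shift value against the
# failure probability error, in the spirit of Eq. (8).
def penalty_cost(shift, p_f, p_f_target, r=100.0):
    """Cost of a candidate shifting value.

    shift      : current shift applied to the deterministic constraint
    p_f        : estimated failure probability at the shifted design
    p_f_target : required failure probability
    r          : adaptive penalty weight (assumed to grow across iterations)
    """
    violation = max(0.0, p_f - p_f_target)   # exterior: only infeasibility costs
    return abs(shift) + r * violation**2

# Increasing r shifts the emphasis from small shift values toward
# strict satisfaction of the reliability target.
print(penalty_cost(0.5, 0.02, 0.01, r=100.0))
```

Because the penalty term vanishes inside the feasible region, early iterations (large violation) are dominated by the reliability error, while later iterations are dominated by keeping the shift small.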

**Figure 12** depicts a scheme of the objective function and constraint function in the allowed ranges of deterministic design variables.


**Table 4.**

*Optimization results of the proposed algorithm on the third RBDO problem in comparison with PSO-WSA.*

**Figure 12.** *Design space of the third RBDO problem with the deterministic optimal point in red.*

If the constraint is properly handled in the DDO problem, the probabilistic constraint will be violated from the first iteration. Apart from the sensitivity of solving the DDO, the reliability assessment also encounters difficulties from the moving direction of the constraint. While the algorithm runs, a specified shifting value is added to the deterministic constraint in each iteration. The optimal point in each iteration should move toward the safe region; however, due to the structure of the problem, the shift occurs in an inappropriate direction. Consequently, instead of increasing the solution's reliability, the algorithm faces an increase in the failure probability. **Figure 13** demonstrates the direction of the moving constraint and the optimal point for the first and second iterations. From this figure, it is clear that the optimal point has dropped into the infeasible region (**Figure 13b**, second iteration) due to the wrong direction.

This problem arises from the nonlinearity and the fact that the constraint function does not touch the objective function contours at points near the optimal solution. Simple changes in the algorithm solve this issue, such as replacing *g* + *y* with *g* − *y* in the optimization problem (**Figure 5**). Thereby, instead of reducing the failure probability, shifting values should be sought that maximize the probability of success. In this case, the necessary changes to the algorithm are as follows:

$$1.\; P\_f < 5\% \quad \rightarrow \quad P\_{\text{success}} > 95\%$$

2. Statistical properties of the new samples for building *g<sub>inf</sub>*:

$$\begin{aligned} \text{Mean:} \quad \mu\_{\text{sort}(g)}\left(n\_{\text{tail}}:\text{end}\right) \quad &\rightarrow \quad \mu\_{\text{sort}(g)}\left(1:n\_{\text{tail}}\right) \\ \text{Std:} \quad \sigma\_{\text{sort}(g)}\left(n\_{\text{tail}}:\text{end}\right) \quad &\rightarrow \quad \sigma\_{\text{sort}(g)}\left(1:n\_{\text{tail}}\right) \end{aligned}$$

$$3.\; g\_{\text{inf}} > y \quad \rightarrow \quad g\_{\text{inf}} < -y$$

$$4.\; P\_f = \frac{1}{N} \sum\_{i=1}^{N} \mathbb{I}\_{g\_{\text{inf}}}(d, X), \qquad \mathbb{I}\_{g\_{\text{inf}}}(d, X) = \begin{cases} 1, & g\_{\text{inf}} \le -y, \\ 0, & \text{else.} \end{cases}$$

$$5.\; P\_f^t = 5\% \quad \rightarrow \quad P\_{\text{success}}^t = 95\%$$
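The modified indicator of step 4 can be sketched as a success-oriented MCS estimator. The names `g_inf_samples` and `y` follow the listing above; the sample array is illustrative:

```python
import numpy as np

def success_probability(g_inf_samples, y):
    """Success-oriented estimate after reversing the shift direction:
    a sample counts as failure only when g_inf <= -y (steps 3 and 4 above)."""
    indicator = g_inf_samples <= -y          # step 4 indicator function
    p_f = np.mean(indicator)                 # failure probability estimate
    return 1.0 - p_f                         # step 5: track P_success instead

samples = np.array([-2.0, -0.5, 0.3, 1.2, -1.4])
print(success_probability(samples, y=1.0))
```

With this formulation the search for the shifting value maximizes the probability of success, reversing the direction in which the constraint moves.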

**Figure 13.** *Design space of the third RBDO problem for two consecutive iterations.*


**Figure 14.** *Design space of the third RBDO problem after making changes to reverse the direction of shifting constraint (last iteration).*

By making the above changes, the direction of the shifting constraint is reversed (see **Figure 14**).

In **Figure 15**, as expected, the algorithm reaches a high probability of failure at the very beginning, when the reliability is still very low. Therefore, according to Eq. (5), the shift value term outweighs the failure probability error. As the algorithm approaches the last iterations and the failure region becomes narrower, the constraint is shifted in smaller steps until the algorithm minimizes the

**Figure 15.** *Failure probability in each optimization iteration for the third RBDO problem.*

failure probability error. Another interesting point is that, as opposed to many other sampling-based RBDO methods [30, 44], the proposed algorithm ensures a more reliable point in each iteration than in the previous ones. According to **Figure 15**, the algorithm converges from high errors in the failure probability to lower errors as it progresses.

Finally, from a high-level view, the proposed method has several advantages. First, owing to its sequential RBDO structure, only the constraints that become active in each iteration undergo reliability analysis, unlike in double-loop methods; this yields significant computational savings. Second, thanks to the sampling and quantile strategy in the estimation of the failure probability, there is no need to search for a global MPP, particularly in highly nonlinear problems, as opposed to the existing sequential RBDO methods. Third, due to the adaptive step size introduced in the shifting value search strategy, a more reliable optimal point is obtained in each iteration without any concern about overdesign.

**Table 5** provides a general comparison between the proposed method and the existing methods, which clearly shows the applicability of the algorithm.


**Table 5.**

*General features of the proposed method in summary.*

