**3. Application in cutting and packing**

The field of operations research concentrates many real-world applications of optimization techniques in engineering, as it essentially aims to increase the efficiency of industry operations [8, 9, 16–19]. Interest in this area has grown recently due to advances in optimization algorithms and the importance of reducing waste and pollution, a reduction which usually follows from an improved solution.

Among operations research subjects, cutting and packing (C&P) problems can be highlighted due to their importance and singular combination of geometry and optimization. These characteristics are even more prominent in irregular packing problems, which involve simple polygonal items. Essentially, C&P problems consist of assigning a set of small items to a set of large containers, subject to some geometric restrictions, while minimizing an objective function.

In this section, a SA based solution for the irregular packing problem is proposed. It adopts a discrete objective function, as it is related to the number of items in the container, while using continuous parameters for item rotations.

#### **3.1 Irregular bin packing problem with free rotations**

In the 2D irregular single bin packing problem, given a collection of items, one must place a subset of these inside a rectangular container with the aim of minimizing the unused space inside the bin. Each item is represented by a simple polygon and may be rotated by any angle. The main geometric restrictions dictate that no two items may overlap and that no item may protrude from the container.

Consider a set of items $\mathcal{P} = \{P\_1, P\_2, \ldots, P\_n\}$ and a rectangular container $\mathcal{C}$. The layout can be represented by a translation vector $\{t\_1, t\_2, \ldots, t\_n\}$, a rotation vector $\{r\_1, r\_2, \ldots, r\_n\}$ and an assignment set $\mathcal{T}$, which contains the subset of items placed inside the container. The irregular bin packing problem can be described as the minimization of the difference between the area of the container $A(\mathcal{C})$ and the area of the assigned set of items $A(\mathcal{T})$, as

$$\begin{aligned} \text{minimize} \qquad & A(\mathcal{C}) - A(\mathcal{T})\\ \text{subject to} \quad & i(P\_i(r\_i) \oplus t\_i) \cap i(P\_j(r\_j) \oplus t\_j) = \emptyset, \quad i, j \in \mathcal{T} \text{ and } i < j\\ & (P\_i(r\_i) \oplus t\_i) \subseteq \mathcal{C}, \quad i \in \mathcal{T} \\ & 0 \le r\_i < 2\pi, \quad i \in \mathcal{T} \\ & t\_i \in \mathbb{R}^2, \quad i \in \mathcal{T} \end{aligned} \tag{3}$$

where $P(r)$ represents the item $P$ rotated by $r$, the operator $i(P)$ denotes the interior of $P$, and $P \oplus t$ indicates the translation $t$ applied to $P$.
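As an illustration of the objective in (3), the sketch below evaluates the unoccupied area of a layout with the shoelace formula. The data layout (point lists, a `placed` map of rotations and translations) is a hypothetical structure for this example, and feasibility of the layout (no overlap, no protrusion) is assumed rather than checked.

```python
import math

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def transform(pts, r, t):
    """Apply rotation r (radians) then translation t to polygon pts."""
    c, s = math.cos(r), math.sin(r)
    return [(c * x - s * y + t[0], s * x + c * y + t[1]) for x, y in pts]

def objective(container, items, placed):
    """A(C) - A(T): unoccupied container area for a feasible layout.

    `placed` maps item index -> (rotation, translation); since the layout
    is assumed overlap-free and inside the container, item areas add up.
    """
    used = sum(polygon_area(transform(items[i], r, t))
               for i, (r, t) in placed.items())
    return polygon_area(container) - used
```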

#### **3.2 Solution using SA**

One of the main challenges in irregular packing is the complexity of managing the geometric constraints, which results in fewer proposed solutions in the literature [20]. In order to optimize the layout without compromising geometric feasibility, two main strategies are often employed. The first is to define a constructive heuristic, which places one item at a time, usually at the bottom-left position. In this approach, the optimization algorithm only controls the placement order; a popular solution is to employ genetic algorithms [2, 21]. The alternative is to allow the items to move freely inside the container while penalizing item overlaps [22–24].

In [9], however, a different approach was adopted: a SA based algorithm was employed to directly control the placement order, as well as the position and rotation of each item. Items were placed sequentially, using the parameters given by the SA, until no more items fitted in the container. Then, the objective function, the unoccupied area of the container (which is directly determined by the subset of placed items), was evaluated.

The main difficulty was the definition of the item position parameter, which should always correspond to a valid placement, i.e., without overlap or protrusion from the container. It was given by the collision free region (CFR), a polygon describing the allowed placement region, as shown in **Figure 3**. The CFR can be obtained using modified Boolean operations on polygons, described in [25]. A continuous parameter was mapped to a position along the perimeter of the CFR, and the closest vertex was chosen as the placement position of the item. The rotation was defined by a second controlled variable, and it had to be applied prior to the determination of the CFR.

*Versatility of Simulated Annealing with Crystallization Heuristic: Its Application… DOI: http://dx.doi.org/10.5772/intechopen.98562*

**Figure 3.** *Example of collision free region for item P.*

Therefore, the solution optimization basically consisted of a SA controlling the placement and rotation parameters of each item in the layout. At each iteration, either one parameter of a single item was changed or two items were swapped in the placement order. Then, the cost was evaluated and the new solution was accepted according to (1). The crystallization factor described in Section 2 was applied to the rotation parameters.
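The iteration just described can be sketched roughly as follows. The layout dictionary, the cost function, and the exact crystallization update are illustrative placeholders, not the authors' implementation (the heuristic itself is detailed in Section 2 and [10]); one common formulation averages several uniform deviates so the step distribution narrows as the crystallization factor grows.

```python
import math
import random

def perturb_rotation(r, c):
    """Crystallization-style move (sketch): average of c uniform deviates,
    so the perturbation narrows as the crystallization factor c grows."""
    step = sum(random.uniform(-math.pi, math.pi) for _ in range(c)) / c
    return (r + step) % (2 * math.pi)

def sa_iteration(layout, cost, temperature, cryst):
    """One SA iteration on a hypothetical layout structure: either mutate
    the rotation of one item or swap two items in the placement order,
    then accept per the Metropolis criterion of (1)."""
    cand = dict(layout)
    order = list(cand["order"])
    if random.random() < 0.5 and len(order) >= 2:
        i, j = random.sample(range(len(order)), 2)  # swap placement order
        order[i], order[j] = order[j], order[i]
        cand["order"] = order
    else:
        k = random.choice(order)                    # mutate one rotation
        rot = dict(cand["rotation"])
        rot[k] = perturb_rotation(rot[k], cryst.get(k, 1))
        cand["rotation"] = rot
    delta = cost(cand) - cost(layout)
    accept = delta <= 0 or random.random() < math.exp(-delta / temperature)
    return (cand if accept else layout), accept
```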

#### **3.3 Results and discussion**

Six broken glass puzzle instances were created to evaluate the performance of the algorithm. These instances have known optimal solutions, which are displayed in **Figure 4**. Therefore, the effectiveness of the algorithm can be measured by its success rate, i.e., the fraction of executions converging to the optimal solution.

The tests were executed on a Phenom 9550 2.21 GHz and the convergence condition for the SA was that (1) the cost variation at the final temperature was zero, and (2) the final solution had the lowest cost found. The initial temperature was adjusted by targeting an initial acceptance rate of 50%, and a geometric cooling schedule was adopted with *α* = 0.98 (as shown in Algorithm 1).
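One common way to calibrate such an initial temperature is to sample uphill cost differences and solve exp(−Δ̄/T₀) = 0.5 for T₀. The sketch below shows this calibration together with the geometric schedule; the chapter only states the 50% target, so the exact calibration rule is an assumption.

```python
import math

def initial_temperature(uphill_deltas, p0=0.5):
    """Pick T0 so the average uphill move is accepted with probability p0:
    exp(-mean(delta) / T0) = p0  =>  T0 = -mean(delta) / ln(p0)."""
    mean_delta = sum(uphill_deltas) / len(uphill_deltas)
    return -mean_delta / math.log(p0)

def geometric_schedule(t0, alpha=0.98):
    """Geometric cooling: T_{k+1} = alpha * T_k."""
    t = t0
    while True:
        yield t
        t *= alpha
```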

**Table 1** shows the results obtained for 30 executions of each instance. They show that only very simple problems were solved optimally in every execution. This indicates that the SA algorithm should be complemented with heuristic approaches to increase the convergence rate. Nevertheless, given the complexity of the problem with free rotations, the fact that it found the optimal solutions for all instances is important, as some could not be achieved by enforcing simple heuristics such as bottom-left or larger-first.

One important characteristic of the irregular bin packing problem with free rotations is that the objective function is discrete while some of the parameters are continuous. This is illustrated in **Figure 5**, which shows discrete cost values for different rotations of the leftmost item. The SA solution was not affected by this issue; as the results show, it can handle both continuous and discrete parameters and objective functions. Moreover, the continuous rotation convergence for each item was improved by adopting the crystallization factor.

**Figure 4.** *Instances for the irregular bin packing problem.*

#### **Table 1.**

*The results for the SA based irregular bin packing solution. Nitems: total number of items. Nconv: average number of iterations until convergence. Tconv: average time until convergence (in seconds). Pconv: success rate.*

**Figure 5.** *Discrete cost behaviour with single continuous parameter variation.*

### **4. Application in TO**

TO is a mathematical approach to determine the distribution of material in a design domain such that performance is maximized. The definition of performance differs in each application: the physics of the problem and the desired application determine the objective function and the constraints. Depending on the problem, different methods have been developed in the literature [26, 27].

For TO problems with well defined objective functions and constraints, gradient based algorithms have been widely used [28–30]. The sensitivity information drives the results toward the optimized topology, giving fast convergence and low computational costs. Gradient based TO methods use Optimality Criteria (OC), the Method of Moving Asymptotes (MMA), Sequential Linear Programming (SLP), etc. to optimize the objective functions [31, 32]. In cases where the objective function or its derivatives are not mathematically modeled, or are hard to calculate, non-gradient based algorithms are more advantageous [33, 34]. In non-gradient based TO methods, GA and SA are the two most popular optimization algorithms [3, 35–37]. Although these methods have high computational costs, they can optimize the topology without needing to calculate derivatives or sensitivity information.

Using the SA method in non-gradient based TO is beneficial because it can reach the global minimum and provides convergence information.


Convergence information, such as the number of accepted and rejected solutions, can be used in the evaluation of TO results. The available SA for TO uses random search to generate new solutions, while SA with the crystallization heuristic [10] improves the search for new accepted solutions. After the optimization process finishes, a density filter can reduce gray areas and disconnected regions.

#### **4.1 SA for non-gradient based TO**

The structural TO to minimize compliance in beams is a classic problem. This problem has been solved with gradient based methods in the literature [38] and the results have been used to verify TO with crystallization heuristic SA (as described in Section 2). The problem of minimizing compliance can be represented as the problem of minimizing strain energy, modeled as

$$\text{minimize } \mathcal{S} = U^T K U, \quad \text{subject to: } KU = F \tag{4}$$

where *F*, *U*, and *K* are, respectively, the external force, the elastic deformation, and the stiffness. The constraint is the volume fraction, which requires the final optimized topology to occupy no more than a prescribed fraction of the design domain volume. By discretizing the design domain into *N* square elements, the total strain energy can be calculated as

$$S = \sum\_{e=1}^{N} (x\_e)^p\, u\_e^T k\_e u\_e \tag{5}$$

where *x<sub>e</sub>* is the density of each element, varying from a minimum value (to avoid singularity in the matrix calculation) to 1, *p* is the penalization parameter, *u<sub>e</sub>* is the elastic deformation of element *e* and *k<sub>e</sub>* is the stiffness of element *e*. The penalization factor *p* penalizes intermediate gray areas in the Solid Isotropic Material with Penalization (SIMP) method. In this case, the TO has continuous parameters and is solved using SA with the crystallization heuristic, as described in Section 2.
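A minimal sketch of evaluating (5), assuming per-element displacement vectors and stiffness matrices are already available from a finite element solve:

```python
def penalized_strain_energy(x, u, k, p=3.0, x_min=1e-3):
    """Total strain energy per (5): S = sum_e (x_e)^p * u_e^T k_e u_e.

    x : element densities (clamped to [x_min, 1] to avoid singularity)
    u : per-element displacement vectors
    k : per-element stiffness matrices
    p : SIMP penalization exponent (p = 3 is a common choice)
    """
    total = 0.0
    for xe, ue, ke in zip(x, u, k):
        xe = min(max(xe, x_min), 1.0)
        # compute k_e u_e, then u_e^T (k_e u_e) for this element
        kue = [sum(ke[i][j] * ue[j] for j in range(len(ue)))
               for i in range(len(ue))]
        total += (xe ** p) * sum(ui * kui for ui, kui in zip(ue, kue))
    return total
```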

A new heuristic is included: after reaching thermal equilibrium, the domain is regularized by filtering. The new density of each element after filtering receives some effect from the adjacent elements, as

$$x\_{\text{filter}} = \frac{\sum\_{e=1}^{N} w\_e x\_e}{\sum\_{e=1}^{N} w\_e} \tag{6}$$

where *w<sub>e</sub>* is the weighting function, defined as the filter radius minus the distance to the adjacent element. It should be noted that the weighting function is zero outside of the filter radius, so the density changes only inside the filter domain. The design domain and loading conditions for the cantilever and half-MBB beam problems are shown in **Figure 6**. A comparison of the compliance obtained by the proposed method with results from the literature is shown in **Table 2**.
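Equation (6) can be sketched as a cone-weighted average over element centers. The dense double loop below is for illustration only; a practical implementation would restrict the inner loop to neighbors within the filter radius.

```python
def density_filter(x, centers, rmin):
    """Cone-weighted density filter per (6): the filtered density of each
    element is a weighted average of its neighbors, with weight
    w = rmin - distance (zero outside the filter radius)."""
    out = []
    for i, ci in enumerate(centers):
        num = den = 0.0
        for j, cj in enumerate(centers):
            d = ((ci[0] - cj[0]) ** 2 + (ci[1] - cj[1]) ** 2) ** 0.5
            w = max(rmin - d, 0.0)
            num += w * x[j]
            den += w
        out.append(num / den if den > 0 else x[i])
    return out
```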

As shown in **Table 2**, the results from this non-gradient based TO method are very close to the gradient based results. The main advantage of the proposed method is that there is no need to calculate derivatives of the objective function.

**Figure 6.** *The design domain and loading for cantilever beam (left) and half-MBB beam (right).*

#### **Table 2.**

*The results for compliance of cantilever and half-MBB beam for different volume fractions.*

#### **4.2 SA for multi-objective TO**

The TO objective function can be complex and may combine two or more objective functions. In such situations, the solution is not necessarily unique and comes with a set of optimum solutions called the Pareto Front. The Pareto Front curve shows the solutions where none of the objective functions can be improved without degrading another. It can be used to trade off solutions within this set instead of considering the full range of every parameter. Traditional TO usually optimizes one objective function while keeping the others fixed or treating them as constraints [39]. SA has also shown the ability to incorporate multiple objective functions, as in CoAnnealing [40] and AMOSA [41].
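For reference, a point belongs to the Pareto Front (for minimization of all objectives) exactly when no other point dominates it; a brute-force sketch:

```python
def pareto_front(points):
    """Return the non-dominated points when minimizing all objectives.

    A point p is dominated if some other point q is no worse in every
    objective and strictly better in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and
            any(qi < pi for qi, pi in zip(q, p))
            for q in points if q is not p)
        if not dominated:
            front.append(p)
    return front
```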

**Figure 7.** *The Pareto Front curve for minimization of compliance and weight in cantilever problem.*


CoAnnealing was used to solve TO problems and the Pareto Front was obtained; in particular, compliance and volume fraction were considered as cost functions in a cantilever beam, as shown in **Figure 7**. The points on the curve showed good agreement with the literature results for the corresponding volume fraction and compliance [42]. The results show that CoAnnealing can be used for multi-objective TO problems.

#### **5. Application to curve fitting**

The problem of curve fitting is an essential Computer Aided Design (CAD) problem with applications in various engineering fields including, but not limited to, digital metrology, robotics, path planning, and data modeling. The problem consists of constructing a curve from a set of discrete points, as shown in **Figure 8**. Two main types of curve fitting can be defined: approximating curve fitting, where the constructed curve approximates the location of the points in the dataset, and interpolating curve fitting, where the curve passes exactly through the set of points. Curve approximation applies to many engineering problems where a certain level of uncertainty exists in the dataset. This can always be expected in data points collected by an experimental process, where sources of uncertainty including equipment errors, environmental effects, human errors, measurement resolution, etc., combine into an overall level of uncertainty [43]. Due to the non-systematic nature of the uncertainties in the data, the approximating curve fitting process aims to recognise the true pattern in the data points instead of passing exactly through them [44]. The three major computational tasks in approximating curve fitting are Point Measurement Planning (PMP), Substitute Geometry Estimation (SGE), and Deviation Zone Evaluation (DZE). Reducing or controlling the level of uncertainty in the constructed curve has been studied comprehensively at the PMP stage by proper selection of the data points [45]. It is also addressed in SGE by improving the curve or surface fitting algorithms, using enhanced optimization processes to avoid getting trapped in local minima, and by using iterative fitting approaches that monitor indicators of the level of uncertainty [46].
Various approaches have also been presented to measure and monitor the level of uncertainty, typically at the DZE stage, by modeling the pattern and nature of the deviations of the processed data points from the approximated curves or surfaces [47].

On the contrary, curve interpolation is applicable when the data points are known to be fairly accurate and are used with no accommodation for any level of uncertainty. **Figure 8** presents a set of datapoints with its corresponding approximating and interpolating curves. The set of datapoints is shown by blue dots, the interpolating curve is presented by a solid red line, and the approximating curve by a black dashed line.

**Figure 8.** *Approximating curve (dashed) versus the interpolating curve (red).*

Since a certain level of uncertainty typically exists in most engineering problems, developing methodologies to control or reduce the level of uncertainty in the final constructed curve is highly important. In this section, a SA approach is proposed to determine an approximating curve from a sequence of points. In this approach, the control points are continuous parameters and the indices of the corresponding points in the sequence are discrete parameters; all of them are adjusted by the SA. This study was started by Ueda et al. [48]. The developed methodology employs a piece-wise Bézier curve structure to solve the problem, as explained in the following.

#### **5.1 Piece-wise Bézier curve**

There are several curve structures that can be used to solve a curve fitting problem; however, each one has its own advantages and drawbacks. In a Bézier curve, the control points influence the entire curve globally, including regions where the curve is already fitted. On the other hand, the control points in a B-spline have only a local influence, i.e., changing a group of control points modifies only a certain region of the curve. One feature of the Bézier curve structure that is beneficial in the fitting approach presented here is that the resulting Bézier curve always interpolates the first and last control points, while in the B-spline curve structure these points usually are not interpolated. It is possible to interpolate these points with a B-spline; however, a higher number of optimization parameters is needed to achieve such a feature, compared to the Bézier curve.

A piece-wise Bézier curve overcomes the problem of the global influence of the control points. This curve is a sequence of cubic Bézier curves, as shown in **Figure 9**, in which the last control point of a curve, **p**<sub>3</sub>, is the first control point of the following curve. The second control point of the second curve, **p**<sub>4</sub>, is given by

$$\mathbf{p}\_4 = \mathbf{p}\_3 - \beta \, (\mathbf{p}\_2 - \mathbf{p}\_3), \tag{7}$$

with *β* being a positive proportionality factor that ensures weak-*G*1 continuity between the curves, i.e., the tangent vector at the end of the first curve has the same direction, but not necessarily the same magnitude, as the tangent at the start of the second curve. Ueda et al. [49] proposed an algorithm to automatically evaluate the number of piece-wise Bézier curves necessary to interpolate a sequence of points.
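A small sketch of (7) together with cubic Bézier evaluation; the function names are illustrative, not from the cited works:

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    return tuple(
        s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3))

def weak_g1_control_point(p2, p3, beta):
    """Second control point of the next segment per (7):
    p4 = p3 - beta * (p2 - p3), so p4 - p3 points along p3 - p2 and the
    tangent direction is preserved across the joint (beta > 0)."""
    return tuple(b - beta * (a - b) for a, b in zip(p2, p3))
```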

**Figure 9.** *Piece-wise cubic Bézier curve with 2 curve segments; points **p**<sub>0</sub>, **p**<sub>1</sub>, **p**<sub>2</sub> and **p**<sub>3</sub> define the first curve and points **p**<sub>3</sub>, **p**<sub>4</sub>, **p**<sub>5</sub> and **p**<sub>6</sub> define the second one.*

