**2. Greedy randomized adaptive search procedure (GRASP)**

Greedy randomized adaptive search procedure (GRASP) is a well-known metaheuristic, i.e., a general method for finding sufficiently good solutions to optimization problems, that has been successfully used to solve many difficult combinatorial optimization problems. It is an iterative multi-start process that operates in two phases, namely, the construction phase and the local search phase.

In the construction phase, a feasible solution is built, whose neighborhood is then explored in the local search phase [3]. A neighborhood of a given solution S is a set of solutions that differ from S in well-defined ways (e.g., replacing any link by a different one, replacing "stars" by "triangles," and so on). In general, different neighborhoods of S will not share the same local minima. Thus, the problem of becoming trapped in a local optimum may be overcome by deterministically changing the neighborhood [4, 5].

**Figure 1** illustrates the pseudo-code of a generic GRASP implementation, together with its input parameters.


In the construction phase, a feasible solution is built one element at a time. At each step, a candidate list is determined by ordering all not-yet-selected elements with respect to a greedy function that measures the benefit of including them in the solution. The heuristic is adaptive because the benefit associated with each element is updated at every step to reflect the changes brought about by the selection of the previous elements. One element is then chosen at random from the restricted list of best candidates and added to the solution. This is the probabilistic component of GRASP, which allows different solutions to be obtained at each GRASP iteration without necessarily jeopardizing the power of the adaptive greedy component.
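As a toy illustration (not part of the original text), the construction phase can be sketched in Python for a small set-cover instance: the greedy function is the number of still-uncovered elements a candidate covers, and the parameter `alpha` controls the size of the restricted candidate list. All names and values here are illustrative assumptions.

```python
import random

def grasp_construct(sets, universe, alpha=0.3, rng=random):
    """Greedy randomized construction for a toy set-cover instance.

    At each step the benefit of a candidate set is the number of still
    uncovered elements it would add (recomputed every step: the
    "adaptive" part).  The restricted candidate list (RCL) keeps the
    candidates whose benefit is within alpha of the best one, and the
    next set is drawn from the RCL at random (the probabilistic part).
    """
    solution, covered = [], set()
    while covered != universe:
        benefits = {i: len(s - covered) for i, s in enumerate(sets)
                    if i not in solution and s - covered}
        best, worst = max(benefits.values()), min(benefits.values())
        cutoff = best - alpha * (best - worst)
        rcl = [i for i, b in benefits.items() if b >= cutoff]
        choice = rng.choice(rcl)
        solution.append(choice)
        covered |= sets[choice]
    return solution
```

With `alpha = 0` the construction is purely greedy; with `alpha = 1` it is purely random, which is the usual way this trade-off is tuned in value-based RCL schemes.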

The solutions generated by the construction phase are not guaranteed to be locally optimal with respect to simple neighborhood definitions. Hence, it is beneficial to apply a local search to attempt to improve each constructed solution. A local search algorithm works in an iterative fashion by successively replacing the current solution with a better one from its neighborhood; it terminates when no better solution is found in the neighborhood. Its effectiveness depends on a suitable choice of neighborhood structure, on efficient neighborhood search techniques, and on the starting solution.
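A minimal sketch of such a local search, assuming a fixed-size solution and a swap neighborhood (exchange one element inside the solution for one outside it) with a first-improvement strategy; the problem being solved and all names are illustrative assumptions:

```python
from itertools import product

def local_search(items, solution, cost):
    """First-improvement local search over a swap neighborhood.

    A neighbor is obtained by exchanging one element inside the current
    solution for one outside it.  The search restarts from every
    improving neighbor and stops at a local optimum, i.e., when no swap
    lowers the cost.
    """
    current = list(solution)
    improved = True
    while improved:
        improved = False
        outside = [x for x in items if x not in current]
        for i, o in product(range(len(current)), outside):
            neighbor = current.copy()
            neighbor[i] = o
            if cost(neighbor) < cost(current):
                current = neighbor
                improved = True
                break  # first improvement: restart the scan
    return current
```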

The construction phase plays an important role with respect to this last point, since it produces good starting solutions for local search. Normally, a local optimization procedure, such as a two-exchange, is employed. While such procedures can require exponential time from an arbitrary starting point, experience has shown that their efficiency significantly improves as the initial solutions improve. Through the use of customized data structures and careful implementation, an efficient construction phase that produces good initial solutions for efficient local search can be created. The result is that often many GRASP solutions are generated in the same amount of time required for the local optimization procedure to converge from a single random start. Furthermore, the best of these GRASP solutions is generally significantly better than the solution obtained from a random starting point.
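Putting the two phases together, the multi-start loop of a generic GRASP might be sketched as follows, assuming `construct` and `improve` implement the construction and local search phases for the problem at hand (hypothetical names, not taken from Figure 1):

```python
import random

def grasp(construct, improve, cost, max_iter=100, seed=0):
    """Generic GRASP main loop (hypothetical helper names): repeat the
    greedy randomized construction followed by local search, keeping
    the best solution found over all iterations."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(max_iter):
        solution = construct(rng)     # phase 1: greedy randomized construction
        solution = improve(solution)  # phase 2: local search
        c = cost(solution)
        if c < best_cost:
            best, best_cost = solution, c
    return best, best_cost
```

Because each iteration is independent, the best-of-many behavior described above falls out directly: the loop simply records the cheapest local optimum reached from any constructed start.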


**Figure 1.** *GRASP pseudo-code.*

*A Survivable and Reliable Network Topological Design Model*
*DOI: http://dx.doi.org/10.5772/intechopen.84842*

**3. Recursive variance reduction technique**

Recursive variance reduction (RVR) is a Monte Carlo simulation method for network reliability estimation [6]. It has shown excellent performance relative to other estimation methods, particularly when component failures are rare events.

It is a recursive method that works with probability measures conditioned on the operation or failure of specific cut-sets. A cut-set is a set of links (or nodes) such that the simultaneous failure of all of its members leaves the overall network in a failed state. RVR computes the unreliability (i.e., 1 − reliability) of a network by finding a cut-set and recursively invoking itself several times, once for each member of an exhaustive and mutually exclusive set of up-and-down state combinations for the cut-set members that covers the "cut-set fails" state space (i.e., a partition of the latter). While finding the cut-set and combining the recursion results introduce some overhead compared with other methods (e.g., crude Monte Carlo), RVR achieves significant reductions in the variance of the unreliability estimator, particularly in the (realistic) setting where failures are rare events. This allows the use of smaller sample sizes, eventually beating the alternative methods in the trade-off between processing time and precision.

**4. The algorithmic solution for the GSP-SRC**

**4.1 Network design algorithm**

NetworkDesign is the main algorithm, which iteratively executes the different phases that solve the GSP-SRC. The algorithm (shown in **Figure 2**) receives as input *G*, the original graph; *MaxIter*, the number of iterations to be executed; *k*, an integer parameter of the construction phase; the *threshold* of *T*-terminal reliability required; and the number of replications used in the reliability phase.

**Figure 2.** *Global algorithm.*
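As an illustration of the RVR decomposition described in Section 3, here is a minimal two-terminal sketch. Choosing the cut as the set of edges incident to the source node, and all function names, are assumptions made for this example only; this is not the implementation of [6].

```python
import random
from math import prod

def rvr_unreliability(edges, probs, s, t, n_samples=1000, seed=1):
    """Sketch of recursive variance reduction (RVR) for two-terminal
    network unreliability.  edges[i] = (u, v) fails independently with
    probability probs[i].  The cut used at every level is the set of
    edges incident to the source (always a valid s-t cut); real
    implementations choose better cuts.
    """
    rng = random.Random(seed)

    def one_sample(edges, probs):
        cut = [i for i, (u, v) in enumerate(edges) if s in (u, v)]
        if not cut:
            return 1.0  # source isolated: surely disconnected from t
        q_all = prod(probs[i] for i in cut)  # P(every cut edge fails)
        # P(E_i): cut edges 0..i-1 fail and edge i works; these events
        # partition the complement of "all cut edges fail".
        weights, acc = [], 1.0
        for i in cut:
            weights.append(acc * (1.0 - probs[i]))
            acc *= probs[i]
        k = rng.choices(range(len(cut)), weights=weights)[0]
        ci = cut[k]
        u, v = edges[ci]
        other = v if u == s else u
        if other == t:
            # contracting the working edge merges t into s: connected,
            # so the conditional unreliability of the branch is zero
            return q_all
        dead = set(cut[:k])
        new_edges, new_probs = [], []
        for j, (a, b) in enumerate(edges):
            if j in dead or j == ci:
                continue
            a2 = s if a == other else a
            b2 = s if b == other else b
            if a2 == b2:
                continue  # self-loop created by the contraction
            new_edges.append((a2, b2))
            new_probs.append(probs[j])
        # conditional decomposition: the network fails outright with
        # probability q_all; otherwise recurse on the reduced network
        return q_all + (1.0 - q_all) * one_sample(new_edges, new_probs)

    return sum(one_sample(edges, probs) for _ in range(n_samples)) / n_samples
```

For two parallel s-t links failing with probability 0.5 each, every sample returns the exact value 0.25: states in which the cut behaves deterministically contribute exactly, and randomness enters only through the recursive branch, which is precisely the source of the variance reduction.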
