**1. Introduction**

Optimization algorithms determine an optimal set of parameters that minimize or maximize a cost, or objective, function subject to constraints. Optimization applications are common in engineering and other scientific and mathematical fields. In a typical engineering application, the cost function mathematically describes a metric of the error between desired and actual performance over a constrained solution space, and the optimization algorithm determines the parameters that minimize this cost subject to physical constraints, such as the requirement that the optimal parameters yield a stable system (a small example is sketched below). As computing power has increased, many multimodal optimization problems have been solved using heuristic evolutionary optimization algorithms. An evolutionary algorithm is an optimization search algorithm that is loosely based on the principles of evolution and natural genetics and uses operators such as reproduction, selection, recombination and mutation [1]. Popular evolutionary algorithms include genetic algorithms [2], differential evolution [3], particle swarm optimization [4], simulated annealing [5] and ant colony optimization [6, 7]. Although no single algorithm can solve all types of optimization problems [8, 9], genetic algorithms and differential evolution algorithms have become popular in engineering optimization applications because these methods are simple, effective and flexible.
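
As a concrete illustration of such an error-metric cost function, the following minimal Python sketch fits the gain and time constant of a hypothetical first-order system to a desired step response. The model, parameter names and the `tau > 0` stability constraint are illustrative assumptions, not an example taken from this chapter.

```python
import numpy as np

# Hypothetical example: choose gain k and time constant tau so that a
# first-order model's step response matches a desired response. The cost
# is the sum of squared errors between desired and actual performance,
# and tau > 0 is a physical constraint (a stable first-order system).
t = np.linspace(0.0, 5.0, 100)
desired = 2.0 * (1.0 - np.exp(-t / 0.5))         # target step response

def cost(params):
    k, tau = params
    if tau <= 0.0:                               # constraint violated:
        return np.inf                            # candidate is infeasible
    actual = k * (1.0 - np.exp(-t / tau))
    return float(np.sum((actual - desired) ** 2))

print(cost([2.0, 0.5]))   # the true parameters give zero cost
print(cost([1.0, 1.0]))   # poorer parameters give a larger cost
```

An optimization algorithm searches the constrained parameter space for the pair `(k, tau)` that minimizes this cost.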

Because no single algorithm can solve all types of optimization problems [8, 9], hybrid algorithms that combine elements of an evolutionary algorithm with one or more other evolutionary or search algorithms have been developed and have been shown to be effective search algorithms [10]. Given the popularity of genetic algorithms and differential evolution in engineering optimization, this chapter presents a hybrid genetic, differential evolution algorithm. The algorithm uses an elitist, ranking, random selection method. Elitist selection assures the survival of the fittest individual, the candidate solution with the best optimization criterion cost, during the selection process; the fittest individual is also assured selection in all recombination and mutation operations. The remaining candidate solutions that survive the selection process are randomly assigned to a differential evolution operator that improves convergence, a differential evolution mutation operator that improves diversity, or a recombination operator that improves both convergence and diversity. The selection probabilities for the mutation and recombination operators are dynamic and change each generation, or algorithm iteration, to maintain a constant population size. The new candidate solutions produced by these operators are added to the set of candidate solutions that survived the selection process. Finally, except for the fittest individual (which again is guaranteed selection), candidate solutions are randomly selected for Taguchi crossover [11], an effective recombination operator that creates near-optimal new candidate solutions from two or more parent candidate solutions. A schematic sketch of one generation of this process is given at the end of this section.

Section 2 of this chapter describes the basic elements of genetic and differential evolution algorithms. Section 3 describes this chapter's algorithm in detail. In Section 4, the algorithm is applied to 13 commonly used global numerical optimization test functions: a sphere function, three hyper-ellipsoid functions, the sum of different powers function, and Rastrigin's, Schwefel's, Griewank's, Rosenbrock's valley, Styblinski-Tang, Ackley's path, Price-Rosenbrock and Eggholder's functions.
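
To make the flow of one generation concrete, here is a minimal Python sketch of the process described above. It is a simplified illustration under stated assumptions, not the chapter's implementation: the objective, the operator routing, and the parameter values (`f`, `cr`, the number of Taguchi trials) are placeholders, and `taguchi_crossover` replaces the orthogonal-array procedure of [11] with a few random gene-mix trials.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    """Placeholder objective (sphere function); lower is better."""
    return float(np.sum(x ** 2))

def de_mutation(survivors, f=0.8):
    # classic DE/rand/1 donor: v = a + F * (b - c); promotes diversity
    a, b, c = survivors[rng.choice(len(survivors), size=3, replace=False)]
    return a + f * (b - c)

def de_crossover(target, donor, cr=0.9):
    # binomial DE crossover mixing target and donor genes; promotes convergence
    mask = rng.random(target.size) < cr
    mask[rng.integers(target.size)] = True      # guarantee one donor gene
    return np.where(mask, donor, target)

def taguchi_crossover(p1, p2, trials=8):
    # stand-in for Taguchi crossover [11]: the real operator plans the
    # gene mixes with an orthogonal array; here we simply evaluate a few
    # random mixes of the two parents and keep the best child found
    best_child, best_cost = p1, cost(p1)
    for _ in range(trials):
        child = np.where(rng.random(p1.size) < 0.5, p1, p2)
        c = cost(child)
        if c < best_cost:
            best_child, best_cost = child, c
    return best_child

def one_generation(pop):
    n = len(pop)
    pop = sorted(pop, key=cost)                 # rank candidates by cost
    best = pop[0]                               # elitism: best always survives
    # ranking, random selection: survival probability decreases with rank
    ranks = np.arange(1, n)
    p = (n - ranks).astype(float)
    p /= p.sum()
    keep = rng.choice(ranks, size=n // 2 - 1, replace=False, p=p)
    survivors = [best] + [pop[i] for i in keep]
    # route survivors through the operators until the population size is
    # restored; the chapter's algorithm instead adapts the operator
    # probabilities each generation to keep the size constant
    children = []
    while len(survivors) + len(children) < n:
        s = survivors[rng.integers(1, len(survivors))]
        donor = de_mutation(np.array(survivors))
        u = rng.random()
        if u < 1 / 3:
            children.append(de_crossover(s, donor))      # convergence
        elif u < 2 / 3:
            children.append(donor)                       # diversity
        else:
            children.append(taguchi_crossover(best, s))  # recombination
    return survivors + children

# usage: evolve a population of 20 five-dimensional candidates
pop = [rng.uniform(-5.0, 5.0, size=5) for _ in range(20)]
for _ in range(50):
    pop = one_generation(pop)
print(min(cost(x) for x in pop))
```

In this sketch the three operators are chosen with fixed, equal probability and Taguchi crossover is folded into the routing step; the chapter's algorithm instead applies Taguchi crossover as a separate stage and adjusts the operator selection probabilities each generation, as described above.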
