**3.1 Genetic algorithms**

In the 1950s and 1960s, many biologists began developing computational simulations of genetic systems. It was John Holland, however, who began in earnest the first research on the theme. Holland gradually refined his ideas and, in 1975, published his book "Adaptation in Natural and Artificial Systems" (Holland, 1975), now considered the bible of genetic algorithms. Since then, these algorithms have been applied successfully to a wide variety of optimization and machine learning problems.

Genetic algorithms are global optimization algorithms based on the mechanisms of natural selection and genetics. They employ a parallel, structured, yet randomized search strategy that is geared toward points of "high fitness", i.e., points where the function to be minimized (or maximized) takes relatively low (or high) values.

Although randomized, they are not simple random walks: they exploit historical information to direct the search toward new points where better performance is expected. This is done through an iterative process in which each iteration is called a generation.

During each iteration, the principles of selection and reproduction are applied to a population of candidates whose size can vary, depending on the complexity of the problem and the computational resources available. Selection determines which individuals will be able to reproduce, each generating a certain number of descendants for the next generation with a probability given by its fitness index. In other words, individuals with higher relative fitness have greater chances of reproducing.

The starting point for the use of genetic algorithms as a problem-solving tool is the representation of the problem in a form on which genetic algorithms can work properly. Most representations are genotypic and use vectors of finite size over a finite alphabet.

Traditionally, individuals are represented by binary vectors, where each element denotes the presence (1) or absence (0) of a particular characteristic. However, there are applications where it is more convenient to use integer representations, as shown later in this work.

The basic principle of operation of GAs is that a selection criterion will, after many generations, make the initial set of individuals generate a fitter set of individuals. Most selection methods are designed to choose preferentially individuals with higher fitness, although not exclusively, in order to maintain the diversity of the population.

3. Meta-heuristic methods use cost (objective function) or reward information, not derivatives or other auxiliary knowledge.

4. Meta-heuristic methods use probabilistic transition rules, not deterministic ones.

In addition to being a very elegant generate-and-test strategy, these methods, because they are based on social organization or on biological evolution, are able to identify and explore environmental factors and to converge to globally optimal, or approximately optimal, solutions. The better an individual adapts to its environment, the greater its chance of surviving and generating descendants: this is the basic concept of social organization and of biological genetic evolution. The biological area most closely linked to genetic algorithms is genetics, and the social area most closely linked is particle swarm optimization.

The most commonly used selection method is roulette wheel selection, in which the individuals of one generation are chosen to take part in the next generation through a roulette draw. Figure 6 shows the roulette representation for a population of 4 individuals.

Fig. 6. Individuals of a population and their corresponding roulette wheel

In this method, each individual of the population is represented on the roulette wheel in proportion to its fitness index. Thus, individuals with high fitness receive a larger portion of the wheel, while those with the lowest fitness receive a relatively smaller portion. Finally, the roulette wheel is spun a certain number of times, depending on the size of the population, and the individuals drawn are chosen to take part in the next generation.
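The roulette draw described above can be sketched in Python as follows; the function and variable names, and the fitness values of the 4-individual example, are illustrative assumptions rather than details taken from the chapter:

```python
import random

def roulette_selection(population, fitnesses, n_draws):
    """Draw n_draws individuals with probability proportional to fitness.

    Assumes all fitness values are non-negative (illustrative sketch,
    not the chapter's own implementation).
    """
    total = sum(fitnesses)
    chosen = []
    for _ in range(n_draws):
        # Spin the wheel: pick a point on [0, total) and find the
        # individual whose fitness slice contains it.
        spin = random.uniform(0, total)
        cumulative = 0.0
        for individual, fitness in zip(population, fitnesses):
            cumulative += fitness
            if spin < cumulative:
                chosen.append(individual)
                break
        else:
            # Guard against float rounding at the upper edge of the wheel.
            chosen.append(population[-1])
    return chosen

# A population of 4 individuals, as in Figure 6 (fitness values made up).
population = ["A", "B", "C", "D"]
fitnesses = [50, 25, 15, 10]
print(roulette_selection(population, fitnesses, 4))
```

Individuals with larger fitness slices are drawn more often, but any individual with nonzero fitness can still be selected, which helps preserve diversity.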

A set of operations is necessary so that, given a population, successive populations are generated that (hopefully) improve their fitness over time. These operators are crossover and mutation. They are used to ensure that the new generation is entirely new but retains, in some way, characteristics of its parents; that is, the population diversifies while maintaining the adaptation characteristics acquired by previous generations. To prevent the best individuals from disappearing from the population through the action of the genetic operators, they can be automatically carried over to the next generation through elitism.

This cycle is repeated a specified number of times. An example of a genetic algorithm follows. During this process, the best individuals, as well as some statistical data, can be collected and stored for evaluation.

```
procedure GA
{ g = 0;
  initial_population(P, g);
  evaluation(P, g);
  repeat until (g = t)
  { g = g + 1;
    parent_selection(P, g);
    recombination(P, g);
    mutation(P, g);
    evaluation(P, g);
  }
}
```
Where *g* is the current generation; *t* is the number of generations to terminate the algorithm; and *P* is the population.
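A runnable version of this pseudocode might look like the following Python sketch. The fitness function (counting 1-bits, the classic "one-max" toy problem), the operator details, and all parameter values are illustrative assumptions, not the chapter's own choices:

```python
import random

def evaluation(population):
    # Illustrative fitness: number of 1-bits in each binary individual.
    return [sum(ind) for ind in population]

def parent_selection(population, fitnesses):
    # Roulette wheel: probability proportional to fitness.
    # Add 1 to every weight so selection stays defined if all fitnesses are 0.
    return random.choices(population,
                          weights=[f + 1 for f in fitnesses],
                          k=len(population))

def recombination(parents, pc=0.7):
    # One-point crossover applied to consecutive pairs with probability pc.
    offspring = []
    for a, b in zip(parents[::2], parents[1::2]):
        if random.random() < pc:
            point = random.randint(1, len(a) - 1)
            a, b = a[:point] + b[point:], b[:point] + a[point:]
        offspring += [a, b]
    return offspring

def mutation(population, pm=0.01):
    # Flip each bit independently with the (small) mutation rate pm.
    return [[1 - bit if random.random() < pm else bit for bit in ind]
            for ind in population]

def ga(pop_size=20, n_bits=16, t=50):
    # Mirrors the pseudocode: initialize, evaluate, then loop until g = t.
    g = 0
    P = [[random.randint(0, 1) for _ in range(n_bits)]
         for _ in range(pop_size)]
    f = evaluation(P)
    while g < t:
        g += 1
        parents = parent_selection(P, f)
        P = mutation(recombination(parents))
        f = evaluation(P)
    return max(f)

print(ga())  # best fitness found after t generations
```

Note that `pop_size` is assumed even so that parents pair up cleanly in `recombination`; an elitist variant would additionally copy the best individual into each new generation unchanged.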

An Evolutionary Fuzzy Hybrid System for Educational Purposes 405

**3.1.2 Genetic parameters**

It is also important to analyze how some parameters influence the behavior of genetic algorithms, in order to set them according to the needs of the problem and the available resources.

**Population size.** The size of the population affects the overall performance and efficiency of GAs. With a small population, performance may fall, because the population then provides only a small coverage of the problem's search space. A large population typically provides a representative coverage of the problem domain and prevents premature convergence to local, rather than global, solutions. However, working with large populations requires greater computational resources, or the algorithm must run for a much longer period of time.

**Crossover rate.** The higher this rate, the faster new structures are introduced into the population. If it is too high, however, most of the population is replaced and high-fitness structures can be lost. With a low value, the algorithm can become very slow.

**Mutation rate.** A low mutation rate prevents a given position from stagnating at one value, while still making it possible to reach any point in the search space. With a very high rate, the search becomes essentially random.

**3.2 Particle Swarm Optimization** 

The optimization method called Particle Swarm Optimization (PSO), like other recently developed meta-heuristics, simulates the behavior of systems by analogy with social behaviors. PSO was originally inspired by the socio-biological behavior associated with groups of birds (Goldberg, 1989). This topic will be discussed in more detail after the basic algorithm is described.

PSO was first proposed by James Kennedy and Russell Eberhart (1995a, 1995b). Some of its interesting features include ease of implementation and the fact that no gradient information is required. It can be used to solve a wide range of optimization problems, including most of the problems that can be solved through genetic algorithms; example applications include neural network training (Lee & El-Sharkawi, 2008) and the minimization of various types of functions (Eberhart et al., 1996).

Many popular optimization algorithms are deterministic, such as the gradient-based algorithms. PSO, like its relatives in the family of evolutionary algorithms, is a stochastic algorithm that needs no gradient information derived from the error function. This allows PSO to be used on functions where the gradient is either unavailable or obtainable only at a high computational cost.

**3.2.1 The PSO algorithm** 

The algorithm maintains a population of particles, where each particle represents a potential solution to the optimization problem. Let *s* be the size of the swarm. Each particle *i* can be represented as an object with the following characteristics:

*xi*: the current position of the particle;
*vi*: the current velocity of the particle;
*yi*: the best personal position achieved by the particle.
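A minimal sketch of such a particle object in Python follows; the class layout, the `best_fitness` field, and the assumption of a minimization problem are illustrative, not from the text:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Particle:
    x: List[float]                       # current position
    v: List[float]                       # current velocity
    y: List[float] = None                # best personal position so far
    best_fitness: float = float("inf")   # fitness at y (minimization assumed)

    def update_personal_best(self, fitness):
        # Record the current position whenever it improves on the best seen.
        if fitness < self.best_fitness:
            self.best_fitness = fitness
            self.y = list(self.x)

# A swarm of size s is then simply a list of s particles.
s = 3
swarm = [Particle(x=[0.0, 0.0], v=[0.0, 0.0]) for _ in range(s)]
swarm[0].update_personal_best(1.5)
print(swarm[0].y)  # [0.0, 0.0]
```

The velocity and position update rules that use these fields are described later, once the basic algorithm has been presented.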

These algorithms are computationally very simple, yet quite powerful. Moreover, they are not limited by assumptions about the search space, such as continuity or the existence of derivatives.

## **3.1.1 Genetic operators**

The basic principle of the genetic operators is to transform the population through successive generations, extending the search until a satisfactory result is reached. Genetic operators are necessary for the population to diversify while keeping the adaptation characteristics acquired by previous generations.

The mutation operator is necessary for the introduction and maintenance of the genetic diversity of the population: it arbitrarily changes one or more components of a chosen structure, as illustrated in Figure 7, thus providing the means for introducing new elements into the population. Mutation therefore ensures that the probability of reaching any point in the search space is never zero, in addition to circumventing the problem of local minima, because this mechanism slightly changes the search direction. The mutation operator is applied to individuals with a probability given by the mutation rate *Pm*; usually a small mutation rate is used, since mutation is a secondary genetic operator.


#### Fig. 7. Example of mutation
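Bit-flip mutation on a binary vector, as illustrated in Figure 7, might be implemented as follows; the function name and the rate value here are assumptions for illustration:

```python
import random

def mutate(individual, pm=0.01):
    """Flip each bit independently with probability pm (the mutation rate)."""
    return [1 - bit if random.random() < pm else bit for bit in individual]

# With a small pm, most offspring pass through unchanged; occasionally a
# component is flipped, reintroducing diversity into the population.
random.seed(42)
print(mutate([1, 0, 1, 1, 0, 0, 1, 0], pm=0.2))
```

Keeping *Pm* small matches the text's point that mutation is a secondary operator: it perturbs the search direction slightly rather than driving it.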

Crossover is the operator responsible for recombining the parents' traits during reproduction, allowing future generations to inherit those traits. It is considered the predominant genetic operator, and it is therefore applied with a probability given by the crossover rate *Pc*, which must be greater than the mutation rate.

This operator can be applied in several ways; the most commonly used are:

a. One-point: a crossover point is chosen, and from this point on the parents' genetic information is exchanged. The information prior to this point in one parent is linked to the information subsequent to this point in the other parent, as shown in the example in Figure 8.

Fig. 8. Example of one-point crossover: (a) two individuals are chosen; (b) a crossover point (2) is chosen; (c) the characteristics are recombined, generating two new individuals
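The one-point scheme of Figure 8 can be sketched as follows; the parent vectors and the cut point (2, as in the figure) are illustrative:

```python
def one_point_crossover(parent_a, parent_b, point):
    """Exchange the parents' genetic information after the crossover point."""
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2

# Crossover point 2, as in Figure 8: each child takes its head from one
# parent and its tail from the other.
a = [1, 1, 1, 1, 1]
b = [0, 0, 0, 0, 0]
print(one_point_crossover(a, b, 2))  # ([1, 1, 0, 0, 0], [0, 0, 1, 1, 1])
```

In a full GA the cut point would normally be drawn at random for each mating pair, rather than fixed as in this example.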

