**5. Deterministic dynamic adaptation**

276 Bio-Inspired Computational Algorithms and Their Applications

Fig. 9. Success rate for strategies evolved with the benchmark and the newly discovered rate

Fig. 10. Average payoff for strategies evolved with the benchmark and the newly discovered rate

This section investigated the performance of various predetermined combinations of genetic operator rates in a genetic algorithm on a flexible and configurable heuristic decision-making framework capable of tackling the problem of bidding across multiple auctions that apply different protocols (English, Vickrey and Dutch). As mentioned earlier, the optimal combination of operator probabilities is problem dependent. Thus, experiments had to be conducted in order to discover a new combination of genetic operator probabilities that improves the effectiveness of the bidding strategy. This experiment has proven that the crossover rate and mutation rate applied in the previous work were not the best values for this framework. With the new combination of genetic operator rates, the experimental evaluation has also shown that the evolved strategies performed better than strategies evolved from the other combinations in terms of success rate and average payoff when bidding in the online auction marketplace. By discovering a better combination of genetic operator probabilities, the improved performance of the bidding strategies shown in Figs. 8, 9 and 10 is achieved. From this parameter-tuning experiment, it can be confirmed that the parameters are problem dependent. However, trying out all of the different combinations of rates is impractical.

Many researchers have applied deterministic dynamic adaptation in evolutionary algorithms as a method to overcome limitations in their performance. This type of adaptation alters the value of a strategy parameter using some deterministic rule (Fogarty, 1989; Hinterding *et al.*, 1997). The value of the strategy parameter is modified by the deterministic rule, which is normally a time-varying schedule. This differs from the standard genetic algorithm, which applies a fixed mutation rate over the evolutionary process. Most practical applications favor larger or non-constant settings of the genetic operators' probabilities (Back & Schutz, 1996). Some studies have proved the usefulness and effectiveness of larger, varying mutation rates (Back, 1992; Muhlenbein, 1992).

In this work, a time-dependent control rule is applied to change the control parameters over time without taking into account any feedback from the evolutionary process itself (Eiben *et al.*, 1999; Hinterding *et al.*, 1997). Several studies have shown that a time-dependent schedule is able to perform better than a fixed constant control parameter (Fogarty, 1989; Hesser & Manner, 1990; Hesser & Manner, 1992; Back & Schutz, 1996). The control rule changes the control parameter over the generations of the evolutionary process. The crossover and mutation rates newly discovered in the first experiment serve as the midpoints of this time schedule, and the parameter step size changes equally over the generations. This experiment is intended to discover the best deterministic dynamic adaptation scheme by varying the genetic operator probabilities while exploring the bidding strategies.

The deterministic increasing and decreasing schemes for crossover and mutation differ in the scale over which the values change. The newly discovered crossover rate obtained from Section 3 is used as the midpoint for the time-variant schedule because the convergence period of the evolution occurs around the 25th generation. Consequently, the deterministic increasing scheme for the crossover rate changes progressively from *pc* = 0.2 to *pc* = 0.6 over the generations, whereas the decreasing scheme changes in the opposite direction. The mutation rate obtained from the previous experiment is likewise used as the midpoint of the time-variant schedule for the increasing and decreasing schemes. The deterministic increasing scheme for the mutation rate changes progressively from *pm* = 0.002 to *pm* = 0.2 over the generations, and vice versa for the deterministic decreasing scheme. The per-generation step size is obtained by dividing the difference between the bounds of the rate by the total number of generations.
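As a concrete illustration, the schedules described above can be sketched as follows. This is a minimal sketch assuming a simple linear schedule with the stated bounds over a 50-generation run; the chapter's exact update rule may differ:

```python
# Sketch of the deterministic time-varying schedules described above.
# Assumption: a linear schedule whose per-generation step size is
# (upper bound - lower bound) / total generations, over a 50-generation run.

def linear_schedule(start, end, total_gens):
    """Parameter value at each generation, moving evenly from start to end."""
    step = (end - start) / total_gens
    return [start + step * gen for gen in range(total_gens + 1)]

GENS = 50
pc_increase = linear_schedule(0.2, 0.6, GENS)    # increasing crossover scheme
pc_decrease = linear_schedule(0.6, 0.2, GENS)    # decreasing scheme: vice versa
pm_increase = linear_schedule(0.002, 0.2, GENS)  # increasing mutation scheme

# Under this linear rule the crossover schedule passes through the
# discovered rate pc = 0.4 exactly at mid-run (generation 25).
```

Note that with a linear rule the crossover schedule hits its midpoint value 0.4 at generation 25, matching the observed convergence period mentioned in the text.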

#### **5.1 Experimental setup**

Performance of Varying Genetic Algorithm Techniques in Online Auction 279

Table 6 shows the parameter settings for the deterministic dynamic adaptation genetic algorithm. The evolutionary settings and the parameter settings in the simulated environment are the same as in Tables 2 and 4. Fig. 11 shows the pseudocode of the deterministic dynamic adaptive genetic algorithm.

| Parameter | Setting |
|---|---|
| Representation | Floating Point Numbers |
| Selection Operator | Tournament Selection |
| Crossover Operator | Extension Combination Operator |
| Mutation Operator | Creep Operator |
| Crossover Probability | Change (range from 0.4 to 0.6) / Fixed (0.4) |
| Mutation Probability | Change (range from 0.2 to 0.002) / Fixed (0.02) |
| Number of Generations | 50 |
| Number of Individuals | 50 |
| Elitism | 10% |
| Termination Criteria | After 50 generations |
| Number of Repeat Runs | 30 |

Table 6. Deterministic dynamic adaptation parameter setting

    Begin
        Randomly create initial bidder populations;
        Calculate fitness of each individual by running the marketplace 2000 times;
        While not (Stopping Criterion) do
            Create new population:
                Select the fittest individuals (HP);
                Create mating pool for the remaining population;
                Perform crossover and mutation in the mating pool to create new generation (SF);
                New generation is HP + SF;
            Change the control parameter value (Crossover / Mutation);
            Gen = Gen + 1
        End while
    End

Fig. 11. The Deterministic Dynamic Adaptation Genetic Algorithm

| Crossover Rate | Mutation Rate | Abbreviation |
|---|---|---|
| Fixed | Increase | CFMI |
| Fixed | Decrease | CFMD |
| Increase | Fixed | CIMF |
| Decrease | Fixed | CDMF |
| Increase | Increase | CIMI |
| Decrease | Decrease | CDMD |
| Increase | Decrease | CIMD |
| Decrease | Increase | CDMI |

Table 7. The Deterministic Dynamic Adaptation testing sets

#### **5.2 Experimental evaluation**

The performance of the evolved bidding strategies is evaluated based on the three measurements discussed in Section 4.2. As before, the average fitness of each population is calculated over 50 generations. The success rate of the agent's strategy and the average payoff are observed over 200 runs in the market simulation.

A series of experiments was conducted with the deterministic dynamic adaptation using the testing sets in Table 7. From the experiments, CFMD and CDMI performed better than the other combinations (Gan *et al.*, 2008a; Gan *et al.*, 2008b). Fig. 12 shows that the populations evolved with deterministic dynamic adaptation perform considerably better than those with fixed constant crossover and mutation rates. This result is similar to those observed by other researchers, where non-constant control parameters performed better than fixed constant ones (Back, 1992; Back, 1993; Back & Schutz, 1996; Fogarty, 1989; Hesser & Manner, 1991; Hesser & Manner, 1992). Even though the points of convergence for the different deterministic dynamic schemes are similar, the population with CDMI achieved a higher average fitness than the population with CFMD. The increasing mutation rate of the CDMI scheme maintains exploration velocity in the search space until the end of the run, while the decreasing crossover rate helps achieve a balance between exploration and exploitation in this setting.

Fig. 12. Comparisons between the average fitness of CFMF, CFMD, and CDMI
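To make the CDMI scheme concrete, the sketch below drops the increasing/decreasing schedules into a minimal generational GA shaped like Fig. 11. The fitness function (maximising the sum of five genes) is a stand-in for the marketplace simulation, and the operators are simplified placeholders, not the chapter's extension combination and creep operators:

```python
import random

random.seed(1)
GENS, POP, LENGTH = 50, 20, 5

def fitness(ind):
    # Stand-in objective replacing the marketplace simulation.
    return sum(ind)

pop = [[random.random() for _ in range(LENGTH)] for _ in range(POP)]
init_best = max(fitness(ind) for ind in pop)

for gen in range(GENS):
    # CDMI: crossover rate decreases 0.6 -> 0.2, mutation rate increases
    # 0.002 -> 0.2, both in equal per-generation steps.
    pc = 0.6 - (0.6 - 0.2) * gen / GENS
    pm = 0.002 + (0.2 - 0.002) * gen / GENS

    pop.sort(key=fitness, reverse=True)
    elite = [ind[:] for ind in pop[:2]]           # elitism (HP)
    offspring = []
    while len(offspring) < POP - len(elite):      # new generation (SF)
        a, b = random.sample(pop[:POP // 2], 2)   # simple mating pool
        child = a[:]
        if random.random() < pc:                  # placeholder crossover
            child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
        if random.random() < pm:                  # placeholder creep mutation
            i = random.randrange(LENGTH)
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
        offspring.append(child)
    pop = elite + offspring                       # HP + SF

best = max(pop, key=fitness)
```

Elitism makes the best fitness monotone non-decreasing over the run, while the rising mutation rate keeps exploration pressure growing toward the end, which mirrors the balance the CDMI discussion describes.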

Based on Fig. 13 and Fig. 14, CDMI outperformed CFMF and CFMD in both success rate and average payoff. This shows that the strategies evolved using CDMI not only reach a better average fitness during the evolution process but are also more effective than the strategies evolved with the other deterministic schemes, gaining a higher profit when procuring the item at the end of the auction.

Fig. 13. Success rate comparison between CFMF, CFMD and CDMI

Fig. 14. Average payoff comparison between CFMF, CFMD and CDMI

This experiment has proven that non-constant genetic operator probabilities are more favorable than constant ones. However, deterministic dynamic adaptation changes the control parameters without taking the current evolutionary process into account: it receives no feedback on whether the genetic operators' probabilities perform best at the current state of the evolutionary process. The third stage of the experiment therefore applies another adaptation method known as self-adaptation. Self-adaptation differs from deterministic dynamic adaptation in that it evolves the parameters based on the current status of the evolutionary process. The method incorporates the control parameters into the chromosomes, thereby subjecting them to evolution. In this last stage of the experiment, self-adaptation is applied to the genetic algorithm in order to evolve the bidding strategies.

**6. Self-adaptation**

The idea of self-adaptation is based upon the evolution of evolution. Self-adaptation has been used as one of the methods to regulate the control parameters. As the name implies, the algorithm controls the adjustment of the parameters itself. This is done by encoding the parameters into the individual genomes, where they undergo mutation and recombination. The control parameters can be any of the strategy parameters in an evolutionary algorithm, such as the mutation rate, crossover rate, population size, selection operators and others (Back et al., 1997). However, the encoded parameters do not affect the fitness of the individuals directly; rather, "better" values will lead to "better" individuals, and these individuals will be more likely to survive and produce offspring, hence proliferating these "better" parameter values. The goal of self-adaptation is not only to find a suitable adjustment but also to execute it efficiently. The task is further complicated when the optimizer faces a dynamic problem, since a parameter setting that was optimal at the beginning of an EA run might become unsuitable during the evolution process. Several studies have shown that different parameter values may be optimal at different stages of the evolutionary process (Back, 1992a; Back, 1992b; Back, 1993; Davis, 1987; Hesser & Manner, 1991). Self-adaptation aims at biasing the distribution towards appropriate regions of the search space while maintaining sufficient diversity among individuals to enable further evolvability (Angeline, 1995; Meyer-Nieberg & Beyer, 2006).

The self-adaptation method has been commonly used in evolutionary programming (Fogel, 1962; Fogel, 1966) and evolution strategies (Rechenberg, 1973; Schwefel, 1977), but it is rarely used in genetic algorithms (Holland, 1975). This work applies self-adaptation in a genetic algorithm with the aim of adjusting the crossover rate and mutation rate. Self-adaptation is capable of improving the algorithm by adjusting the crossover and mutation rates based on the current phase of the algorithm, thereby obtaining suitable rates for the different phases of the evolution. Researchers have shown that self-adaptation is able to improve crossover in genetic algorithms (Schaffer & Morishima, 1987; Spears, 1995). In addition, studies have shown that a self-adaptive mutation rate incorporated into the individual genomes performs better than a fixed constant mutation rate (Back, 1992a; Back, 1992b). In this section, three different self-adaptation schemes will be tested to discover the best scheme from this testing set. Self-adaptation requires the crossover and mutation rates to be encoded into the individuals' genomes; thus, some modification of the encoding representation needs to be performed. The crossover and mutation rates become part of the genome and go through the crossover and mutation processes like the other alleles.

#### **6.1 Experimental setup**

Table 8 shows the parameter setting for the self-adaptive genetic algorithm. The evolutionary settings and the parameter settings in the simulated environment are the same as in Tables 2 and 4. Fig. 15 shows the pseudocode of the self-adaptive genetic algorithm. Fig. 16 shows the different encoding representation of the individual genome that will be used in the experiment. The crossover and mutation rates are encoded into the representation in order to go through the evolution process.
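As a sketch of the extended representation, the control rates simply become trailing alleles of the genome and are varied by the same operators. The gene count, bounds and creep step below are illustrative assumptions, not the chapter's actual Fig. 16 encoding:

```python
import random

random.seed(7)
N_STRATEGY_GENES = 4  # stand-in for the bidding-strategy parameters

def new_individual():
    # Genome layout: [strategy genes..., pc, pm] -- the control parameters
    # are encoded alongside the solution and evolve with it.
    genes = [random.random() for _ in range(N_STRATEGY_GENES)]
    pc = random.uniform(0.2, 0.6)
    pm = random.uniform(0.002, 0.2)
    return genes + [pc, pm]

def creep_mutate(ind, sigma=0.05):
    """Creep-style mutation applied to every allele, including the encoded
    pc and pm, using the individual's own encoded mutation rate."""
    pm = ind[-1]
    out = ind[:]
    for i in range(len(out)):
        if random.random() < pm:
            out[i] = min(1.0, max(1e-4, out[i] + random.gauss(0, sigma)))
    return out

parent = new_individual()
child = creep_mutate(parent)
child_pc, child_pm = child[-2], child[-1]  # the child carries its own rates
```

Because the rates ride along in the genome, selection indirectly favors individuals whose encoded pc and pm produced fitter offspring, which is exactly the "better values lead to better individuals" mechanism described above.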