**6. Self-adaptation**

280 Bio-Inspired Computational Algorithms and Their Applications

Fig. 13. Success rate comparison between CFMF, CFMD and CDMI

Fig. 14. Average payoff comparison between CFMF, CFMD and CDMI

This experiment has shown that non-constant genetic probabilities are more favorable than constant ones. However, deterministic dynamic adaptation may change a control parameter without taking the current evolutionary process into account, as it receives no feedback on whether the genetic operators' probabilities performed well at the current state of the evolutionary process. The third stage of the experiment therefore applies another adaptation method known as self-adaptation. Self-adaptation differs from deterministic dynamic adaptation in that it evolves the parameters based on the current status of the evolutionary process: the control parameters are incorporated into the chromosomes and thereby subjected to evolution. In the last stage of the experiment, self-adaptation is applied to the genetic algorithm in order to evolve the bidding strategies.

The idea of self-adaptation is based upon the evolution of evolution. Self-adaptation has been used as one of the methods to regulate control parameters. As the name implies, the algorithm controls the adjustment of the parameters itself. This is done by encoding the parameters into the individual genomes, where they undergo mutation and recombination. The control parameters can be any of the strategy parameters of an evolutionary algorithm, such as the mutation rate, crossover rate, population size, selection operators and others (Back et al., 1997). The encoded parameters do not affect the fitness of individuals directly; rather, "better" values lead to "better" individuals, and these individuals are more likely to survive and produce offspring, thereby proliferating these "better" parameter values. The goal of self-adaptation is not only to find a suitable adjustment but also to execute it efficiently.
The task is further complicated when the optimizer faces a dynamic problem, since a parameter setting that was optimal at the beginning of an EA run might become unsuitable during the evolution process. Several studies have shown that different parameter values may be optimal at different stages of the evolutionary process (Back, 1992a; Back, 1992b; Back, 1993; Davis, 1987; Hesser & Manner, 1991). Self-adaptation aims at biasing the distribution towards appropriate regions of the search space while maintaining sufficient diversity among individuals to enable further evolvability (Angeline, 1995; Meyer-Nieberg & Beyer, 2006).
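The mechanism can be illustrated with the log-normal self-adaptation rule popularised by evolution strategies: the step size travels inside the individual, mutates first, and the freshly mutated value is then used to perturb the object variables. This is a minimal sketch of the principle only, not the operators used in this chapter; the function name and the value of `tau` are assumptions.

```python
import math
import random

def self_adaptive_mutate(genome, sigma, tau=0.3):
    """Mutate an individual that carries its own step size.

    The control parameter (sigma) evolves first, log-normally, and the
    mutated value is then used to perturb the object variables, so
    selection indirectly rewards "better" sigma values.
    """
    new_sigma = sigma * math.exp(tau * random.gauss(0.0, 1.0))
    new_genome = [x + new_sigma * random.gauss(0.0, 1.0) for x in genome]
    return new_genome, new_sigma
```

Individuals carrying an unsuitable step size produce poor offspring and die out, which is exactly the indirect selection pressure on parameter values described above.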

The self-adaptation method has been commonly used in evolutionary programming (Fogel, 1962; Fogel, 1966) and evolution strategies (Rechenberg, 1973; Schwefel, 1977), but it is rarely used in genetic algorithms (Holland, 1975). This work applies self-adaptation in a genetic algorithm with the aim of adjusting the crossover rate and mutation rate. Self-adaptation can improve the algorithm by adjusting the crossover and mutation rates according to the current phase of the search, so that a suitable rate is obtained for each phase of the evolution. Researchers have shown that self-adaptation is able to improve crossover in genetic algorithms (Schaffer & Morishima, 1987; Spears, 1995). In addition, studies have shown that a self-adaptive mutation rate, obtained by incorporating the mutation rate into the individual genomes, performs better than a fixed constant mutation rate (Back, 1992a; Back, 1992b). In this section, three different self-adaptation schemes are tested to discover the best scheme among them. Self-adaptation requires the crossover and mutation rates to be encoded into the individual's genome, so some modifications to the encoding representation are needed: the crossover and mutation rates become part of the genome and go through the crossover and mutation processes like the other alleles.
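As a concrete sketch of such an encoding, the chromosome below appends the crossover rate *pc* and mutation rate *pm* to twelve strategy alleles, and a creep-style mutation perturbs every gene, the two rates included, using the individual's own *pm*. The value ranges and creep width here are illustrative assumptions, not the chapter's settings.

```python
import random

def make_individual():
    """A bidding-strategy chromosome of floating-point alleles, with the
    crossover rate (pc) and mutation rate (pm) appended as the last two
    genes so that they evolve alongside the strategy parameters."""
    strategy = [random.random() for _ in range(12)]
    pc = random.uniform(0.1, 0.9)    # encoded crossover rate
    pm = random.uniform(0.001, 0.1)  # encoded mutation rate
    return strategy + [pc, pm]

def creep_mutate(individual):
    """Creep mutation driven by the individual's own encoded pm; the pc
    and pm genes receive the same treatment as the other alleles."""
    pm = individual[-1]
    return [g + random.gauss(0.0, 0.05) if random.random() < pm else g
            for g in individual]
```

Because *pc* and *pm* are ordinary alleles, no extra machinery is needed to evolve them: the existing variation operators do the work.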

#### **6.1 Experimental setup**

Table 8 shows the parameter settings for the self-adaptive genetic algorithm. The evolutionary settings and the parameter settings of the simulated environment are the same as in Tables 2 and 4. Fig. 15 shows the pseudocode of the self-adaptive genetic algorithm. Fig. 16 shows the modified encoding representation of the individual genome used in the experiment: the crossover and mutation rates are encoded into the representation so that they go through the evolution process.
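The per-parent crossover-rate check and per-individual mutation-rate check described above can be sketched in Python as follows. The fitness function is a stand-in for the market-simulation score, and the operator details (uniform crossover, creep mutation, averaging the parents' encoded rates) are illustrative assumptions rather than the chapter's exact implementation:

```python
import random

POP_SIZE, GENERATIONS, ELITISM = 50, 50, 0.10  # settings from Table 8

def fitness(ind):
    # Stand-in objective; the real system scores evolved bidding
    # strategies in the simulated auction market.
    return -sum((g - 0.5) ** 2 for g in ind[:-2])

def tournament(pop, k=2):
    # Tournament selection of a parent into the mating pool.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # "Check Parents Crossover Rate": recombine only if the parents'
    # encoded pc (here averaged) fires.
    pc = (a[-2] + b[-2]) / 2.0
    if random.random() < pc:
        return [x if random.random() < 0.5 else y for x, y in zip(a, b)]
    return a[:]

def mutate(ind):
    # "Check Individual Mutation Rate": each offspring is mutated
    # according to its own encoded pm.
    pm = ind[-1]
    return [g + random.gauss(0.0, 0.05) if random.random() < pm else g
            for g in ind]

def evolve():
    # Each chromosome: 12 strategy alleles plus encoded pc and pm.
    pop = [[random.random() for _ in range(12)]
           + [random.uniform(0.1, 0.9), random.uniform(0.001, 0.1)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        next_gen = pop[:int(ELITISM * POP_SIZE)]  # elitism: keep top 10%
        while len(next_gen) < POP_SIZE:
            child = crossover(tournament(pop), tournament(pop))
            next_gen.append(mutate(child))
        pop = next_gen
    return max(pop, key=fitness)
```

Bounds handling for *pc* and *pm* is omitted for brevity; a practical implementation would clamp both rates to their valid ranges after mutation.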

Performance of Varying Genetic Algorithm Techniques in Online Auction 283

| Parameter | Setting |
| --- | --- |
| Representation | Floating Point Numbers |
| Selection Operator | Tournament Selection |
| Crossover Operator | Extension Combination Operator |
| Mutation Operator | Creep Operator |
| Crossover Probability | Self-Adapted / Fixed (0.4) |
| Mutation Probability | Self-Adapted / Fixed (0.02) |
| Number of Generations | 50 |
| Number of Individuals | 50 |
| Elitism | 10% |
| Termination Criteria | After 50 Generations |
| Number of Repeat Runs | 30 |

Table 8. Self-adaptation genetic algorithm parameter setting

```
Generation = 0
Random initialize population
While Generation not equal 50
    Evaluate population fitness
    Select the top 10% to next generation
    Tournament Selection Parents to Mating Pool
    Check Parents Crossover Rate
    Generating offspring through crossover process
    Check Individual Mutation Rate
    Mutate the offspring
    Select offspring to the next generation
    Generation = Generation + 1
```

Fig. 15. The self-adaptation algorithm for both genetic operators

*krt βrt kra βra kba βba kde βde wrt wra wba wde pc pm*

Fig. 16. Encoding of a bidding strategy for self-adaptive crossover and mutation rates

| Crossover Rate | Mutation Rate | Abbreviation |
| --- | --- | --- |
| Fixed | Self-Adapted | SAM |
| Self-Adapted | Fixed | SAC |
| Self-Adapted | Self-Adapted | SACM |

Table 9. Self-adaptation testing sets

#### **6.2 Experimental evaluation**

The performance of the evolved bidding strategies is evaluated based on the three measurements discussed in Section 4.2. As before, the average fitness of each population is calculated over 50 generations, while the success rate of the agent's strategy and the average payoff are observed over 200 runs in the market simulation.

A series of experiments was conducted with the self-adaptive testing sets described in Table 9. The experiments show that self-adapting both the crossover and mutation rates performs better than the other combinations (Gan et al., 2009). The population with self-adaptive crossover and mutation (SACM) achieved a higher average fitness than the populations with the self-adaptive crossover (SAC) and self-adaptive mutation (SAM) schemes, as shown in Fig. 17. This implies that the SACM population performs best because its scheme combines the advantages of the self-adaptive crossover and self-adaptive mutation schemes. By letting both parameters self-adapt, the control parameters can be adjusted towards the best values at each stage of the search, in line with previous studies indicating that different stages of evolution possess different optimal parameter values (Eiben et al., 1999).

Fig. 17. Average fitness for different self-adaptation schemes

Fig. 18. Success rate for strategies evolved from different self-adaptation schemes

All of the individuals generated a 4% increase in both success rate and average payoff after employing the self-adaptive crossover and mutation scheme, as shown in Fig. 18 and Fig. 19. This shows that the strategies evolved using self-adaptive crossover and mutation not only achieve a better average fitness and success rate but are also more effective than the strategies evolved with the other self-adaptive schemes.

Fig. 19. Average payoff for strategies evolved from different self-adaptation schemes

### **7. Comparison between variations of genetic algorithm**

In order to determine which of the three approaches performs best in improving the effectiveness of the bidding strategies, the best result of each experiment is compared. The comparison is made by choosing the best-performing scheme from each of the parameter tuning, deterministic dynamic adaptation and self-adaptation experiments. The main objective of this work is to improve the effectiveness of the existing bidding strategies by using different disciplines of the genetic algorithm.

Fig. 20. Average fitness of populations evolved with different genetic algorithm disciplines

Fig. 20 shows the average fitness for the bidding strategies evolved with the different disciplines of the genetic algorithm. There are clear differences between the convergence points of the disciplines. Self-adaptation achieves a higher average fitness than the benchmark, the newly discovered static rate and deterministic dynamic adaptation. Although the average fitness curves of self-adaptation and deterministic dynamic adaptation are similar, self-adaptation still converges to a slightly higher value.

Fig. 21. Success rate for strategies evolved from different genetic algorithm disciplines

The individuals evolved from the self-adaptive genetic algorithm outperformed the individuals from the other disciplines by delivering a more promising success rate: 1% higher than the strategies evolved with deterministic dynamic adaptation and 4% higher than the benchmark. As a result, the self-adaptive genetic algorithm can evolve better strategies that deliver a higher success rate when bidding in online auctions, which eventually improves the GA's search for better bidding strategies.

Fig. 22. Average payoff for strategies evolved from different genetic algorithm disciplines

The strategies evolved from the self-adaptive genetic algorithm also outperformed the rest on payoff, with a 2% higher average payoff than the strategies using deterministic dynamic adaptation and 4% higher than the benchmark strategies. This indicates that the strategies evolved by the self-adaptive genetic algorithm not only produce a better average fitness and success rate but are also more effective than the strategies evolved with the other disciplines, gaining a higher profit when procuring the item.

Table 10. P-values for the comparison between the different disciplines in terms of success rate and average payoff

The symbol ⊕ in Table 10 indicates that the p-value is less than 0.05, i.e. a significant improvement. The t-test results in Table 10 show that the improvement generated by self-adaptation is more significant than that of the other disciplines. Hence, it can be confirmed that self-adaptation is the best discipline for improving the effectiveness of the bidding strategies.

### **References**

Anthony, P. and Jennings, N. R. 2003a. Agents in Online Auctions. In Yaacob, S., Nagarajan, R., Chekima, A. and Sainarayanan, G. (Eds.) *Current Trends in Artificial Intelligence and Applications*, pp. 42-50. Kota Kinabalu: Universiti Malaysia Sabah.

Anthony, P. and Jennings, N. R. 2003b. Developing a Bidding Agent for Multiple Heterogeneous Auctions. *ACM Transactions on Internet Technology*, 3(3): 185-217.

Anthony, P. and Jennings, N. R. 2003c. A Heuristic Bidding Strategy for Multiple Heterogeneous Auctions. *Proceedings of the Fifth International Conference on Electronic Commerce*, pp. 9-16. New York: ACM.

Babanov, A., Ketter, W. and Gini, M. L. 2003. An Evolutionary Approach for Studying Heterogeneous Strategies in Electronic Markets. *Engineering Self-Organising Systems 2003*, pp. 157-168.

Back, T. 1992a. The Interaction of Mutation Rate, Selection, and Self-Adaptation within a Genetic Algorithm. In Manner, R. and Manderick, B. (Eds.) *Proceedings of the 2nd Conference on Parallel Problem Solving from Nature*, pp. 85-94. Belgium: Elsevier.

Back, T. 1992b. Self-Adaptation in Genetic Algorithms. In Varela, F. J. and Bourgine, P. (Eds.) *Toward a Practice of Autonomous Systems: Proceedings of the 1st European Conference on Artificial Life*, pp. 263-271. Cambridge: MIT Press.

Back, T. 1993. Optimal Mutation Rates in Genetic Search. *Proceedings of the 5th International Conference on Genetic Algorithms*, pp. 2-8. San Francisco: Morgan Kaufmann.

Back, T. and Schutz, M. 1996. Intelligent Mutation Rate Control in Canonical Genetic Algorithms. In Ras, Z. W. and Michalewicz, Z. (Eds.) *Proceedings of the International Symposium on Methodologies for Intelligent Systems*, Lecture Notes in Computer Science 1079: 158-167. London: Springer-Verlag.

Back, T., Fogel, D. and Michalewicz, Z. (Eds.) 1997. *Handbook of Evolutionary Computation*. New York: Oxford University Press.

Bapna, R., Goes, P. and Gupta, A. 2001. Insights and Analyses of Online Auctions. *Communications of the ACM*, 44(11): 43-50.

Beasley, D., Bull, D. R. and Martin, R. R. 1993. An Overview of Genetic Algorithms: Part 2, Research Topics. *University Computing*, 15(4): 170-181.

Blickle, T. and Thiele, L. 1995. A Comparison of Selection Schemes Used in Genetic Algorithms. *Technical Report 11*. Zurich: Swiss Federal Institute of Technology.

Blickle, T. and Thiele, L. 2001. A Mathematical Analysis of Tournament Selection. *Proceedings of the Sixth International Conference on Genetic Algorithms*, pp. 9-16. San Francisco: Morgan Kaufmann.

Catania, V., Malgeri, M. and Russo, M. 1997. Applying Fuzzy Logic to Codesign Partitioning. *IEEE Micro*, 17(3): 62-70.

Cervantes, J. and Stephens, C. R. 2006. "Optimal" Mutation Rates for Genetic Search. *Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation*, pp. 1313-1320. New York: ACM Press.

Chandrasekharam, R., Subhramanian, S. and Chaudhury, S. 1993. Genetic Algorithm for Node Partitioning Problem and Application in VLSI Design. *IEE Proceedings E: Computers and Digital Techniques*, 140(5): 255-260.

Choi, J. H., Ahn, H. and Han, I. 2008. Utility-Based Double Auction Mechanism Using Genetic Algorithms. *Expert Systems with Applications*, pp. 150-158.

Cliff, D. 1997. Minimal Intelligence Agents for Bargaining Behaviours in Market Environments. *Technical Report HPL-97-91*. Hewlett Packard Laboratories.
