
272 Bio-Inspired Computational Algorithms and Their Applications

best individuals to the new population and to ensure that a significant number of the fitter individuals make it to the next generation. Tournament selection is applied in the genetic algorithm to select individuals into the mating pool for the remaining ninety percent of the population (Blickle & Thiele, 2001). The tournament selection technique was chosen because it is known to perform well in allowing a diverse range of fitter individuals to populate the mating pool (Blickle & Thiele, 1995). By implementing tournament selection, fitter individuals can contribute to the genetic construction of the next generation, and the best individual will not dominate the reproduction process as it does under proportional selection.
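The elitist copy plus tournament selection step described above might be sketched as follows. This is a minimal sketch, not the chapter's code: the function names, the tournament size `k = 2` and the use of a 10% elite fraction as a default parameter are assumptions.

```python
import random

def tournament_select(population, fitness, k=2):
    """Return the fittest of k randomly chosen individuals (k=2 is an assumed tournament size)."""
    contestants = random.sample(range(len(population)), k)
    winner = max(contestants, key=lambda i: fitness[i])
    return population[winner]

def build_next_generation_inputs(population, fitness, elite_frac=0.10):
    """Copy the top 10% of individuals directly (elitism) and fill a mating
    pool for the remaining 90% of the population by tournament selection."""
    n = len(population)
    n_elite = max(1, round(elite_frac * n))
    # Rank individuals by fitness, best first, and keep the elites unchanged.
    ranked = sorted(range(n), key=lambda i: fitness[i], reverse=True)
    elites = [population[i] for i in ranked[:n_elite]]
    # The rest of the mating pool is filled by repeated tournaments.
    pool = [tournament_select(population, fitness) for _ in range(n - n_elite)]
    return elites, pool
```

Because each tournament only compares a few individuals, moderately fit individuals still enter the pool, which preserves the diversity the text refers to.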

**3.3.4 Crossover process**

The extension operator, a floating-point crossover operator, is used in this work (Beasley *et al.* 1993b). This operator takes the difference between the two parent gene values, adds it to the higher value (giving the maximum of the range) and subtracts it from the lower value (giving the minimum of the range). The new values for the genes are then generated between the minimum and the maximum of the range derived using this operator (Anthony & Jennings, 2002).
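A minimal sketch of such an extension (blend-style) crossover is given below. Applying the operator per gene and clipping the result to each parameter's allowed range are assumptions; the extended-range construction itself follows the description above.

```python
import random

def extension_crossover(parent_a, parent_b, bounds):
    """Extension crossover for floating-point genes (a sketch).

    For each gene pair, the difference between the two values is added to the
    higher value (maximum of the range) and subtracted from the lower value
    (minimum of the range); the child gene is drawn uniformly from that
    extended range and clipped to the parameter bounds (clipping is assumed).
    """
    child = []
    for a, b, (lo_bound, hi_bound) in zip(parent_a, parent_b, bounds):
        lo, hi = min(a, b), max(a, b)
        diff = hi - lo
        gene = random.uniform(lo - diff, hi + diff)
        child.append(min(max(gene, lo_bound), hi_bound))
    return child
```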

**3.3.5 Mutation process**

Since the encoding is floating point, the mutation operator used in this work must be a non-binary mutation operator. Beasley suggested several non-binary mutation operators that can be used, such as random replacement, the creep operator and geometric creep (Beasley *et al.* 1993b). The *creep* operator, which adds or subtracts a small randomly generated amount from a selected gene, is used here: a small constant of 0.05 is added to or subtracted from the selected gene, subject to the range limitation of the parameter (Anthony & Jennings, 2002).
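The creep operator might be sketched as follows. The 0.05 step and the range clipping come from the text; the per-gene mutation probability (`rate`) and the coin flip deciding the sign are assumptions.

```python
import random

def creep_mutate(individual, bounds, rate=0.02, step=0.05):
    """Creep mutation: add or subtract a small constant (0.05) from a gene.

    `rate` is an assumed per-gene mutation probability; the mutated gene is
    kept inside the parameter's allowed [lo, hi] range.
    """
    mutated = []
    for gene, (lo, hi) in zip(individual, bounds):
        if random.random() < rate:
            # Add or subtract the creep step with equal probability.
            gene += step if random.random() < 0.5 else -step
            gene = min(max(gene, lo), hi)
        mutated.append(gene)
    return mutated
```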

**3.3.6 Stopping criteria**

The genetic algorithm repeats this process until the termination criteria are met. In this work, the evolution stops after 50 iterations. An extensive experiment was conducted to determine the point at which the population converges, and 50 was chosen as the stopping criterion because the population was always observed to converge before or at the end of the 50 iterations.

**4. Parameter tuning**

Anthony's work has a shortcoming in that the crossover and mutation rates used are values recommended in the literature. However, research has shown that the appropriate crossover and mutation rates are application dependent; thus, simulations need to be conducted to find suitable rates. Besides that, other variations of the genetic algorithm have proven to perform better than the traditional genetic algorithm, which makes them worthwhile to investigate.

Many researchers, such as De Jong, Grefenstette, Schaffer and others, have contributed considerable effort to finding parameter values that work well for a number of problems.


**4.1 Experimental setup**

Tables 2 and 3 show the evolutionary and parameter settings for the genetic algorithm. The parameter settings in the simulated environment for the empirical evaluations are shown in Table 4. These parameters include the agent's reservation price, the agent's bidding time and the number of active auctions. The agent's reservation price is the maximum amount that the agent is willing to pay for the item, while the bidding time is the time allocated for the agent to obtain the user's required item. The active auctions are the auctions that are ongoing before time tmax. Fig. 7 shows the pseudocode of the genetic algorithm.




Table 3. Genetic algorithm parameter setting

Performance of Varying Genetic Algorithm Techniques in Online Auction 275






| Parameter | Range |
|---|---|
| Agent reservation price | 73 ≤ pr ≤ 79 |
| Bidding time for each auction | 21 ≤ tmax ≤ 50 |
| Number of active auctions | 20 ≤ L(t) ≤ 45 |

Table 4. Configurable parameters for the simulated marketplace
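For illustration, the Table 4 parameters could be drawn per simulation run as below. This is a sketch: the class and field names are invented, and uniform integer sampling within each range is an assumption.

```python
import random
from dataclasses import dataclass

@dataclass
class MarketplaceConfig:
    """Configurable parameters of the simulated marketplace (Table 4)."""
    reservation_price: int  # agent's reservation price, 73 <= pr <= 79
    bidding_time: int       # bidding time for each auction, 21 <= tmax <= 50
    active_auctions: int    # number of active auctions, 20 <= L(t) <= 45

def random_config(rng=random):
    """Sample one marketplace configuration uniformly from the Table 4 ranges."""
    return MarketplaceConfig(
        reservation_price=rng.randint(73, 79),
        bidding_time=rng.randint(21, 50),
        active_auctions=rng.randint(20, 45),
    )
```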

    Begin
        Randomly create initial bidder populations;
        While not (Stopping Criterion) do
            Calculate fitness of each individual by running the marketplace 2000 times;
            Select the fittest individuals (HP);
            Create mating pool for the remaining population;
            Perform crossover and mutation in the mating pool to create new generation (SF);
            New generation is HP + SF;
            Create new population;
            Gen = Gen + 1
        End while
    End

Fig. 7. Genetic algorithm

**4.2 Experimental evaluation**

The performance of the evolved strategies is evaluated based on three measurements. Firstly, the average fitness is the fitness of the population at each generation over the 50 generations. The average fitness shows how well the strategy converges over time towards the best solution.

Secondly, the success rate is the percentage of times that the agent succeeds in acquiring the item by the given time at any price less than or equal to its private valuation. This measure determines the efficiency of the agent in guaranteeing delivery of the requested item. An individual is selected from each data set to compete in the simulated marketplace 200 times, and the success rate is calculated from the number of times the agent is able to win the item over the 200 runs. The formula below is used to calculate the success rate.

$$\text{Success Rate} = \frac{\text{Number of wins}}{200} \times 100 \tag{9}$$

Finally, the third measurement is the average payoff, which is defined as

$$\text{Average Payoff} = \frac{\sum_{i=1}^{n} \left( \frac{p_r - v_i}{p_r} \right)}{n} \tag{10}$$

where pr is the agent's private valuation, n is the number of runs and vi is the winning bid value for auction i. Each winning bid is subtracted from the private valuation, divided by the private valuation, then summed and averaged over the number of runs. The agent's payoff is 0 if it is not successful in obtaining the item.
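Equations (9) and (10) translate directly into code. This sketch assumes the winning bid values are collected only for won runs, with losing runs contributing a payoff of 0 as stated above.

```python
def success_rate(wins, runs=200):
    """Equation (9): percentage of runs in which the agent obtained the item."""
    return wins / runs * 100

def average_payoff(private_valuation, winning_bids, runs):
    """Equation (10): mean of (pr - vi) / pr over all runs.

    Only won auctions appear in `winning_bids`; losing runs contribute 0,
    so the sum is simply divided by the total number of runs.
    """
    return sum((private_valuation - v) / private_valuation
               for v in winning_bids) / runs
```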

A series of experiments was conducted using the sets of crossover and mutation rates described in Table 2. It was found that a crossover rate of 0.4 and a mutation rate of 0.02 performed better than the other combinations (Gan *et al.* 2008a; Gan *et al.* 2008b). An experiment was conducted with the newly discovered crossover rate *pc* = 0.4 and mutation rate *pm* = 0.02, and the result was compared with the original combination of genetic operator probabilities (*pc* = 0.6 and *pm* = 0.02). Figures 8, 9 and 10 show the comparison between the strategies evolved using a crossover rate of 0.4 with a mutation rate of 0.02 and those evolved using a crossover rate of 0.6 with a mutation rate of 0.02. The strategies evolved with the 0.4 crossover rate and 0.02 mutation rate produced better results in terms of average fitness, success rate and average payoff. It can also be observed that the mutation rate of 0.02 evolved better strategies than the other mutation rates tested (0.2 and 0.002). This rate is consistent with the outcome of Cervantes (Cervantes & Stephen, 2006), in which a mutation rate below 1/N and the error threshold is recommended. Besides, the results of the comparison showed that the combination of a 0.4 crossover rate and a 0.02 mutation rate also achieves a better balance between exploration and exploitation in evolving the bidding strategies. A t-test was performed to show the significance of the improvement of this newly discovered combination of genetic operator probabilities. The symbol ⊕ in Table 5 indicates that the P-value is less than 0.05, i.e. a significant improvement.


| | P Value |
|---|---|
| Average Fitness | ⊕ |
| Success Rate | ⊕ |
| Average Payoff | ⊕ |

Table 5. P value of the t-test statistical analysis for the comparison between the newly discovered genetic operator probabilities and the old set of genetic operator probabilities
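The chapter does not state which t-test variant was used. As an illustration, a Welch two-sample t statistic can be computed with the standard library; for reasonably large samples, |t| above roughly 1.96 corresponds to P < 0.05 (the function name and this threshold approximation are this sketch's assumptions).

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (unequal variances, a sketch).

    t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)
    """
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    return (mean(sample_a) - mean(sample_b)) / (va / na + vb / nb) ** 0.5
```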

Fig. 8. Comparison of Average Fitness between the benchmark and the newly discovered rate

Fig. 9. Success rate for strategies evolved with the benchmark and the newly discovered rate

Fig. 10. Average payoff for strategies evolved with the benchmark and the newly discovered rate

This section investigated the performance of various combinations of predetermined genetic operator rates in a genetic algorithm on a flexible and configurable heuristic decision-making framework that is capable of tackling the problem of bidding across multiple auctions that apply different protocols (English, Vickrey and Dutch). As mentioned earlier, the optimal probabilities of applying these operators are problem dependent; thus, experiments have to be conducted to discover a new combination of genetic operator probabilities which can improve the effectiveness of the bidding strategy. This experiment has shown that the crossover rate and mutation rate applied in the previous work are not the best values for this framework. With the new combination of genetic operators, the experimental evaluation has also shown that the evolved strategies performed better than the strategies evolved from the other combinations in terms of success rate and average payoff when bidding in the online auction marketplace. By discovering a better combination of genetic operator probabilities, the improved performance of the bidding strategies shown in Figs. 8, 9 and 10 is achieved. From this parameter tuning experiment, it can be confirmed that the parameters are problem dependent. However, trying out all of the different combinations systematically is practically impossible, as hand tuning the parameters is very time consuming. Therefore, in the second stage of the experiment, deterministic dynamic adaptation is applied to the genetic algorithm to evolve the bidding strategies in order to overcome the manual tuning problem.


**5. Deterministic dynamic adaptation**

Many researchers have applied deterministic dynamic adaptation in evolutionary algorithms as a method to overcome their performance limitations. This type of adaptation alters the value of a strategy parameter using some deterministic rule (Fogarty, 1989; Hinterding *et al.* 1997), normally a time-varying schedule. It differs from the standard genetic algorithm, which applies a fixed mutation rate over the whole evolutionary process. Practical applications often favour larger or non-constant settings of the genetic operators' probabilities (Back & Schutz, 1996), and several studies have demonstrated the usefulness and effectiveness of larger, varying mutation rates (Back, 1992; Muhlenbein, 1992).

In this work, a time-variant control rule is applied to change the control parameters over time without taking into account any feedback from the evolutionary process itself (Eiben *et al.* 1999; Hinterding *et al.* 1997). Several studies have shown that a time-dependent schedule can perform better than a fixed constant control parameter (Fogarty, 1989; Hesser & Manner, 1990; Hesser & Manner, 1992; Back & Schutz, 1996). The control rule changes the control parameter over the generations of the evolutionary process. The crossover and mutation rates newly discovered in the first experiment serve as the midpoints of the time schedule, and the parameter step size changes equally over the generations. This experiment is intended to discover the best deterministic dynamic adaptation scheme, varying the genetic operators' probabilities, for evolving the bidding strategies.

**5.1 Experimental setup**

The deterministic increasing and decreasing schemes for the crossover and mutation rates differ because of the different scales of the values. The newly discovered crossover rate is used as the midpoint of the time-variant schedule because the convergence period of the evolution occurs around the 25th generation. Consequently, the deterministic increasing scheme for the crossover rate changes progressively from *pc* = 0.2 to *pc* = 0.6 over the generations, and the decreasing scheme is the reverse. Similarly, the mutation rate obtained from the previous experiment is used as the midpoint of its time-variant schedule: the deterministic increasing scheme for the mutation rate changes progressively from *pm* = 0.002 to *pm* = 0.2 over the generations, and vice versa for the decreasing scheme. The step size at each generation is the difference between the ends of the rate's range divided by the total number of generations.
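The equal-step rule can be sketched as below. Note that with equal steps the crossover schedule (0.2 to 0.6) passes through the tuned rate 0.4 at mid-run, whereas a linear 0.002-to-0.2 mutation schedule has an arithmetic midpoint near 0.1 rather than 0.02; the sketch implements only the equal-step rule as stated, and the function name is illustrative.

```python
def deterministic_rate(start, end, gen, total_gens=50):
    """Deterministic time-variant parameter control.

    The rate moves from `start` to `end` in equal steps over the run:
    step = (range of the rate) / (total number of generations).
    An increasing scheme has start < end; a decreasing scheme is the reverse.
    """
    step = (end - start) / total_gens
    return start + step * gen

# Increasing crossover scheme: pc goes from 0.2 to 0.6, reaching 0.4 at generation 25.
# Mutation schedules use the 0.002-to-0.2 range in the same way.
```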

Table 6 shows the parameter settings for the deterministic dynamic adaptation genetic algorithm. The evolutionary settings and the parameter settings in the simulated environment are the same as in the previous experiment.


