**3. Simulated quenching applied to nonlinear decoupling problem**

The functioning of the classical SQ method can be described using the pseudo code, divided into initialization and the simulated annealing part as presented in Figure 4.

Inside the first section the temperature factor *T* is set to the initial temperature (*T*0), and the temporary minimum of the cost function (*Qmin*) is calculated for the initial set of decoupling coefficients (*Coef*0). These values are then assigned to the actual state configuration. Afterwards, the algorithm enters the search loop until the allowed processing time, measured in number of iterations, is exceeded (*Iter* > *IMAX*). The selection of new coefficients (neighbors) is done inside the *New\_Candidate* function, and the cost function is then calculated for the selected set of coefficients (*Qnew*). If the cost function (*Cost*) of the new neighbor has a lower value than the temporary minimum (*Qmin*), the perturbation is accepted: *Qnew* becomes the new minimum and the corresponding set of coefficients is stored in *Coefmin*. In case of a higher cost function value, the perturbation is accepted only if a randomly selected number between zero and one is smaller than the *Trans\_Prob* function of the actual temperature *T*; in this case only the actual position is updated and the global minimum configuration is left unchanged. The search at one temperature level is limited to a number of visited candidates per level, defined by the *NVCPL* parameter. Afterwards, the temperature *T* is decreased according to the annealing schedule function (*Ann\_Schedule*) and the search continues.

**Figure 4.** Pseudo code of SQ applied to the nonlinear decoupling problem
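The loop just described can be sketched in Python. This is an illustrative reconstruction, not the authors' code: `cost_func` and `new_candidate` stand in for the *Cost\_Func* and *New\_Candidate* interfaces of the pseudo code, and the parameter values are only examples.

```python
import math
import random

def simulated_quenching(cost_func, new_candidate, coef0,
                        T0=1.0, K=1.0, alpha=0.995,
                        nvcpl=30, imax=10000):
    """Sketch of the SQ loop of Figure 4 (illustrative, not the authors' code).

    Trans_Prob is the modified Boltzmann function of Eq. 12 and
    Ann_Schedule is the exponential schedule of Eq. 15.
    """
    T = T0
    coef_act = list(coef0)
    q_act = cost_func(coef_act)
    coef_min, q_min = list(coef_act), q_act   # best configuration so far
    iters = 0
    while iters < imax:
        for _ in range(nvcpl):                # candidates visited per level
            coef_new = new_candidate(coef_act)
            q_new = cost_func(coef_new)
            iters += 1
            if q_new < q_min:
                # Downhill move: accept and update the global minimum.
                coef_act, q_act = coef_new, q_new
                coef_min, q_min = list(coef_new), q_new
            elif random.random() < math.exp(-(q_new - q_act) / (K * T)):
                # Uphill move: accept with the Boltzmann probability,
                # leaving the global minimum untouched.
                coef_act, q_act = coef_new, q_new
            if iters >= imax:
                break
        T = alpha * T                         # Ann_Schedule
    return coef_min, q_min
```

Run on a simple quadratic cost with a uniform neighbor rule, the loop steadily drives the stored minimum toward the origin.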

According to the formulation of the SQ algorithm, when applying the SQ method to a decoupling problem one must specify:

• the state space, i.e. the coefficient resolution
	- symbol precision (*Ressymb*)
	- coefficient precision (*Rescoef*)
• the initial coefficient values, or the starting point (*Coef*0)
• the form of the cost function (*Cost*\_*Func*) - already defined with Equation 10
• the neighbor selection method (*New*\_*Candidate*) - coefficient maximum displacement (*R*<sup>0</sup>)
• the probability transition function (*Trans*\_*Prob*)
• the annealing schedule function (*Ann*\_*Schedule*)
	- temperature reduction factor (*α*)
	- number of visited candidates per temperature level (*NVCPL*)
• maximum allowed number of iterations (*IMAX*)
• minimum probability of a worse move during the search (*pmin*)

### **3.1. Tuning of SQ parameters**



The choice of SQ parameters can have a significant impact on the method's effectiveness. Unfortunately, no single selection is good enough for all problems, and there is no general way of finding the best set of parameters for a given problem. However, some more generic parameters (e.g. the annealing schedule or the probability transition function) can be set based on the experience of other authors [8, 9, 11, 13, 14, 16]. The preliminary SQ search executions are therefore over-dimensioned, favoring precision over speed, and are used to tune the search parameters correctly. The empirical tuning of SQ parameters is done for 64QAM signals. The results are based on 2000 calibration symbols transmitted under an SNR of 100 dB, so the analysis concentrates primarily on coupling rather than on noise effects. Since the SQ search carries a probability factor, each presented simulation is in fact executed ten times, and the figures present the averaged values.

#### *3.1.1. The state space*

The state space is defined by the number of design variables, their discrete domain resolution, and the overall symbol precision. Starting from the analog domain, the search coefficients are transformed into the discrete domain by setting the distance between two sample points. For example, with the resolution set to 0.1 the coefficients can only take values that are multiples of the resolution, such as 0.2, −0.1, or 0.8. Setting a denser resolution, such as 0.0001, makes exactly 1000 times more candidates available for each coefficient inside the SQ search. Expanding the solution search area naturally leads to better solution precision, but at the expense of processing time. Apart from the coefficient resolution, the state space also defines the resolution of the received and decoupled symbols, which depends on the transmission system requirements and the implemented A/D converter. We propose to set *Ressymb* = 0.00001 and to apply the same resolution to the decoupling coefficients, that is, *Rescoef* = 0.00001.
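As a minimal sketch of this discretization (the function name is ours, not from the chapter), snapping a continuous value onto the discrete search grid can be written as:

```python
def quantize(value, resolution=0.00001):
    """Snap a continuous value onto the discrete search grid: the result
    is always an integer multiple of the chosen resolution."""
    return round(value / resolution) * resolution

# With a coarse resolution of 0.1 only multiples of 0.1 are reachable:
print(quantize(0.2499, 0.1))    # 0.2
# Refining the resolution from 0.1 to 0.0001 multiplies the number of
# grid points per unit interval by exactly 1000.
```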

#### *3.1.2. Coefficients initial values*

The initial assumption is that no coupling takes place and the signal arrives without any distortion. Consequently, signals that suffered larger distortions will require a longer search time, as their solution is located far from the starting point. In this way the algorithm favors solutions of smaller coupling, which is expected to occur more frequently under realistic conditions. Using the decoupling model of Equation 9, all coefficients are set to zero except *a*<sup>1</sup>, which is set to one:

$$\text{Coef}\_0: \begin{cases} a = [1, 0, 0, 0, 0, 0, 0, 0, 0] \\ b = [0, 0, 0, 0, 0, 0, 0, 0, 0] \\ k = [0, 0] \end{cases} \tag{11}$$

In fact, to make the indexing simpler, the coefficients are expressed as a 20-dimensional vector whose first nine elements correspond to the coefficients *ai*, *i* = 1, ..., 9, the next nine elements correspond to *bi*, *i* = 1, ..., 9, and the last two correspond to *k*<sup>1</sup> and *k*<sup>2</sup>.
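Under the indexing just described, the starting point of Equation 11 becomes a plain 20-element vector (a small sketch; the variable names are ours):

```python
# Eq. 11 as a 20-dimensional vector: indices 0-8 hold a_1..a_9,
# indices 9-17 hold b_1..b_9, and indices 18-19 hold k_1 and k_2.
coef0 = [0.0] * 20
coef0[0] = 1.0   # a_1 = 1: start from the identity (no-coupling) mapping

a, b, k = coef0[0:9], coef0[9:18], coef0[18:20]
```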


#### *3.1.3. Probability transition function*

The key SA feature that prevents the system from becoming stuck in a local minimum is its probability of moving to a new state even when this new state has worse characteristics than the current one. The probability of making the transition from the current state to a candidate new state is a function of the energies of the two states and of a global time-varying parameter *T* called the temperature. Even though the algorithm is open to the use of any probability function (*p*), new configurations are usually accepted or rejected according to the Boltzmann probability distribution, where *kB* is the Boltzmann constant that relates temperature to energy. When the SA problem is not directly related to a physical energy, this constant is replaced with a problem-specific constant *K*, and the modified Boltzmann probability function is used accordingly:

$$p(\Delta E) = e^{-\frac{\Delta E}{k\_B T}} \longleftrightarrow p(\Delta Q) = e^{-\frac{\Delta Q}{KT}} = e^{-\frac{Q\_{new} - Q\_{act}}{KT}} \tag{12}$$

where *Qnew* and *Qact* are the cost function values associated with the new candidate and the actual state, respectively, and Δ*Q* is their difference. However, for a given Δ*Q* the transition probability is not constant throughout the annealing process, as it also depends on the actual temperature. Setting the initial temperature *T*<sup>0</sup> to high values increases the probability of search wandering. For example, if Δ*Q* = 0.1 and the initial temperature gives *KT*<sup>0</sup> = 1, the perturbation is accepted if the random variable falls within [0, 0.9], which corresponds to about 90%, while for Δ*Q* = 0.1 and *KT*<sup>0</sup> = 0.1 this probability falls to about 37%. As the temperature is constantly decreased according to the annealing schedule, the probability of a worse move tends to zero. This is in accordance with physical simulated annealing, where no wandering around is possible at low temperatures.
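The two acceptance probabilities quoted above follow directly from Equation 12 and are easy to check numerically (`trans_prob` is our name for *Trans\_Prob*):

```python
import math

def trans_prob(delta_q, kt):
    """Modified Boltzmann acceptance probability of Eq. 12, with kt = K*T."""
    return math.exp(-delta_q / kt)

# dQ = 0.1 with K*T0 = 1 is accepted about 90% of the time, while
# K*T0 = 0.1 drops the acceptance to roughly 37%; as cooling shrinks
# K*T, the probability of an uphill move tends to zero.
print(round(trans_prob(0.1, 1.0), 2))   # 0.9
print(round(trans_prob(0.1, 0.1), 2))   # 0.37
```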

**Figure 5.** Approximation of the consecutive cost function differences with Gauss function

The selection of the problem-specific constant *K* depends on the initial probability of accepting an uphill movement, defined as *p*<sup>0</sup>, which should be guaranteed for a wide range of energy differences. In order to define this energy level we approximated the cost differences of two consecutive surfaces (Δ*Q*) with a Gaussian probability function, as it fits the actual empirical data. This is shown in Figure 5, where 2000 differences are calculated with a maximum allowed individual coefficient displacement of *R*<sup>0</sup> = 0.05. Under this Gaussian-like approximation, three standard deviations (*σ*Δ*Q*) cover 99.7% of all possible cost function differences, and *p*<sup>0</sup> can be defined according to this value as:

$$p\_0 = p\_{max} = e^{-\frac{3\sigma\_{\Delta Q}}{KT\_0}} \tag{13}$$

with *T*<sup>0</sup> = 1, *K* is obtained as:

$$K = -\frac{3\sigma\_{\Delta Q}}{\ln(p\_{max})} \tag{14}$$

Setting the initial probability of uphill movement close to one allows the initial search to wander regardless of the energy difference between two candidates, which slows down the search, usually without any gain in solution precision. Hence, the initial probability level is generally set to *pmax* = 50%, and the numerical value of *K* is calculated in the *setK* function before the algorithm enters the SQ search loop. Inside the *setK* function the approximation of the Gaussian distribution is obtained by applying the actual neighbor selection method to 2000 new candidates.
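A sketch of such a *setK* routine, combining Equations 13 and 14. The sampling details (keeping the starting point fixed while drawing 2000 candidate moves) are our reading of the procedure described above, not a verbatim reconstruction:

```python
import math
import statistics

def set_k(cost_func, new_candidate, coef0, p_max=0.5, n_samples=2000):
    """Estimate the problem-specific constant K of Eq. 14.

    sigma_dQ is the standard deviation of cost differences produced by
    the actual neighbor selection method; with T0 = 1 this gives
    K = -3 * sigma_dQ / ln(p_max).
    """
    coef = list(coef0)
    q = cost_func(coef)
    deltas = [cost_func(new_candidate(coef)) - q for _ in range(n_samples)]
    sigma = statistics.pstdev(deltas)
    return -3.0 * sigma / math.log(p_max)
```

With *pmax* = 0.5 the logarithm is negative, so *K* comes out positive, and an uphill move of 3*σ*Δ*Q* at *T*0 = 1 is accepted with probability exactly 0.5.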

#### *3.1.4. Annealing schedule function*

The annealing schedule function has no strict definition; it only has to allow uphill surface exploration at the beginning and then gradually restrain the search wandering, favoring movement only in the direction of better solutions. In physical simulated annealing the probability of accepting an uphill movement is directly related to temperature, but in mathematical interpretations the initial temperature level has lost its significance, and thus it is set to *T*<sup>0</sup> = 1 in order to simplify the calculation; the modified Boltzmann constant *K* is calculated accordingly. A popular choice for the annealing function is the exponential schedule, where the temperature level, denoted as *T* or *TL*, is decreased by a fixed factor *α*:

$$T\_L = \alpha T\_{L-1}; \quad T\_0 = 1, \; 0 < \alpha < 1, \; L = 1, 2, 3, \ldots \tag{15}$$

where *TL* and *TL*−<sup>1</sup> are the new and the actual temperature, respectively, and the index *L* counts the temperature level changes.
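Equation 15 also gives the number of level changes needed to reach a target temperature; a one-line sketch (the cooling target 0.01 is our example, not a value from the chapter):

```python
import math

def ann_schedule(T, alpha=0.995):
    """Exponential annealing schedule of Eq. 15: T_L = alpha * T_(L-1)."""
    return alpha * T

# With T0 = 1 the temperature after L levels is alpha**L, so cooling
# down to T = 0.01 with alpha = 0.995 takes ln(0.01)/ln(0.995) levels.
levels = math.ceil(math.log(0.01) / math.log(0.995))
print(levels)   # 919
```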

In order to define the annealing schedule, both the temperature reduction factor *α* and the number of visited candidates per level *NVCPL* have to be determined. In theory, when *NVCPL* is kept constant, the search is done more thoroughly if the temperature is decreased more slowly, and better solution precision is achieved. Likewise, if *α* is constant and *NVCPL* is increased, each level is scanned in more detail and, again, better results are expected. However, when both factors are set in favor of precision the search slows down dramatically, and the precision improvement might not justify the processing load.

These two parameters are established empirically. Figure 6 shows the dependence of the cost function minimum on the number of visited points per level, with bars of different colors corresponding to different temperature reduction factors. The simulations are done for a moderate coupling distortion of [-10,-12,-12] dB and a maximum allowed displacement of *R*<sup>0</sup> = 0.05. It is seen that as *α* approaches 1 the probability of finding a better solution increases, and as a result the minimum cost function moves away from its starting value. However, if *NVCPL* is low, e.g. *NVCPL* = 2, no significant progress is made even with *α* = 0.995. This is because the scan space around the actual minimum is always kept narrow, so the search easily gets stuck in a local minimum. According to the empirical decoupling results, the number of visited points per level starts producing good results with *NVCPL* ≥ 10.


**Figure 6.** *Q* as a function of *NVCPL* and *α* for [-10,-12,-12] dB coupling distortion


**Figure 7.** *Q* as a function of *α* and *NVCPL* for [-8,-10,-8] dB coupling distortion

The cost performance as a function of the temperature reduction factor is shown in Figure 7, where bars of different colors correspond to different *NVCPL* levels. For *α* between 0.5 and 0.95 the search shows low consistency, as the cost function minimum experiences large differences between two realizations of the same SQ search. This indicates that the temperature is reduced too quickly and that the final precision is not guaranteed, as search robustness is not achieved. The statement is confirmed in Figure 7 by comparing the *Q* function obtained with different *NVCPL* in this *α* range: for *α* = 0.9, better precision is achieved with *NVCPL* = 30 than with *NVCPL* = 300. Hence, in order to avoid this uncertainty, the temperature reduction factor should be between 0.95 and 1, since this region is consistent with the theoretical background (precision increases with either *NVCPL* or *α*). According to the presented simulation results, the SQ search is adapted to the inverse coupling approximation with the number of visited candidates per level set to *NVCPL* = 30 and the temperature reduction factor set to *α* = 0.995.

#### *3.1.5. Neighbor selection method*


When selecting new candidates (neighbors), it must be possible to move from the initial state to a *good enough* state by a relatively short path, while at the same time allowing the search to scan the area without ever losing sight of the good path. The selection of the new candidate set (*Coefnew*) can be described as:

$$\text{Coef}\_{new}(i) = \xi\_i R\_0 + \text{Coef}\_{act}(i); \quad i = 1, \ldots, 20 \tag{16}$$

where *ξ<sup>i</sup>* is a randomly chosen number between -1 and 1, and *R*<sup>0</sup> defines the maximum allowed coefficient displacement. If *R*<sup>0</sup> is large, the SQ search can easily explore a wide search area, but needs more iterations to reach the optimal solution, as the algorithm is easily distracted and moved away from the good path. If *R*<sup>0</sup> is small, the algorithm can get stuck in a local minimum and conclude the search far from the optimal solution.
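Equation 16 in code (a sketch; Python's `random.uniform` supplies the *ξ<sup>i</sup>* draws):

```python
import random

def new_candidate(coef_act, r0=0.0005):
    """New_Candidate per Eq. 16: displace every coefficient by xi_i * R0,
    with xi_i drawn uniformly from [-1, 1]."""
    return [random.uniform(-1.0, 1.0) * r0 + c for c in coef_act]

# Each new coefficient stays within R0 of the current one:
cand = new_candidate([0.0] * 20)
assert max(abs(x) for x in cand) <= 0.0005
```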

**Figure 8.** *Q* in dependence of *R*<sup>0</sup> under different coupling conditions

The behavior of the cost function with respect to *R*<sup>0</sup> under different coupling conditions is presented in Figure 8, as this parameter is also retrieved empirically. The conducted search is based on the classical SQ algorithm with *NVCPL* = 30 and *α* = 0.995. The figure can be divided into three clearly distinguished areas. The left side, with *R*<sup>0</sup> lower than 10−<sup>5</sup>, corresponds to an algorithm where no wandering around is allowed: search progress is slow, any encountered local minimum is presented as the final solution, and the optimal solution is found only if the search starts close to it. On the right side of the figure, with *R*<sup>0</sup> values larger than 0.5, wandering is allowed during the whole search process and, as a consequence, no clear search path is established. The search tends to move too quickly and without any exploration around the actual position; as the search wanders randomly, finding the optimal solution in this area is highly uncertain. The central part of the figure coincides with a meaningful SQ search, where wandering and detailed space exploration are in balance. Here the search method does follow a path, and the traps of local minima are successfully avoided. Based on the presented *Q* values, we propose to set the maximum allowed displacement to *R*<sup>0</sup> = 0.0005.

