efficiency of the CTHG on the wavelength of the FW of the designed structure. Only for the two preset wavelengths is the conversion efficiency high: *η*1,1 = 0.0917 for *λ*1,1 = 1.458 *μ*m and *η*1,2 = 0.0938 for *λ*1,2 = 1.578 *μ*m. The conversion efficiencies are nearly identical, which corresponds well to the required aim.

**4. Conclusion**

In conclusion, the SA algorithm is employed to design nonlinear frequency conversion devices. The basic design method is explained in detail. Several devices, including multiple second-harmonic generation devices, multiple coupled third-harmonic generation devices, multiple-channel photonic crystal filters, multiple second-harmonic photonic crystal devices, and multiple coupled third-harmonic photonic crystal devices, are designed using the proposed method. The designed devices achieve the preset goals well. It is expected that this newly proposed method can provide a novel approach to the design of nonlinear conversion devices.

**Author details**

Yan Zhang

*Beijing Key Lab for Terahertz Spectroscopy and Imaging, Key Laboratory of Terahertz Optoelectronics, Ministry of Education, Department of Physics, Capital Normal University, Beijing 100048, China*

**5. References**

[1] Yariv, A. & Yeh, P. (1984). *Optical Waves in Crystals*, Wiley, New York.

[2] Shen, Y. R. (1984). *The Principles of Nonlinear Optics*, Wiley, New York.

[3] Bloembergen, N. & Sievers, A. J. (1970). Nonlinear optical properties of periodic laminar structures, *Appl. Phys. Lett.*, Vol. 17 (No. 11) 483-485.

[4] Meyn, J. P. & Fejer, M. M. (1997). Tunable ultraviolet radiation by second-harmonic generation in periodically poled lithium tantalate, *Opt. Lett.*, Vol. 22 (No. 16) 1214-1216.

[5] Giordmaine, J. A. & Miller, R. C. (1965). Tunable coherent parametric oscillation in *LiNbO*<sub>3</sub> at optical frequencies, *Phys. Rev. Lett.*, Vol. 14 (No. 24) 973-976.

[6] Yablonovitch, E. (1987). Inhibited spontaneous emission in solid-state physics and electronics, *Phys. Rev. Lett.*, Vol. 58 (No. 20) 2059-2062.

## **1. Introduction**

Adaptive modulation and coding (AMC) is an effective way for improving the spectral efficiency in wireless communication systems. By increasing the size of the modulation scheme constellation, the spectral efficiency can be improved, generally at the cost of a degraded error rate. A similar trade-off is possible by using a higher rate channel code. By an appropriate combination of the modulation order and channel code rate, we can design a set of modulation and coding schemes (MCSs), from which an MCS is selected in an adaptive fashion in each transmission-time interval (TTI) in order to maximize system throughput under different channel conditions. The use of AMC yields a rich variety of scheduling strategies [25]; [12]. In practice, a commonly encountered constraint is that the probability of erroneous decoding of a Transmission Block should not exceed some threshold value [11].
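The AMC selection rule described above can be sketched as follows; the MCS table below contains purely illustrative values (not the MCS set of any standard), and the SNR thresholds are hypothetical stand-ins for the per-MCS signal quality requirements:

```python
# Illustrative MCS table (hypothetical values, not any standard's MCS set).
# Each entry: (bits per symbol, code rate, minimum SNR in dB at which the
# block error rate stays below the target threshold).
MCS_TABLE = [
    (2, 1 / 3, -1.0),   # QPSK, rate 1/3
    (2, 1 / 2,  2.0),   # QPSK, rate 1/2
    (4, 1 / 2,  8.0),   # 16-QAM, rate 1/2
    (4, 3 / 4, 12.0),   # 16-QAM, rate 3/4
]

def select_mcs(snr_db):
    """Pick the highest-throughput MCS whose SNR requirement is met,
    or None if even the most robust MCS cannot be supported."""
    feasible = [m for m in MCS_TABLE if snr_db >= m[2]]
    if not feasible:
        return None
    # spectral efficiency = bits per symbol * code rate
    return max(feasible, key=lambda m: m[0] * m[1])

print(select_mcs(9.0))   # (4, 0.5, 8.0): 16-QAM rate 1/2 wins at 9 dB
```

The trade-off from the text is visible directly: raising the constellation size or code rate raises throughput but also the SNR needed to keep the error rate below the threshold.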

Multiple orthogonal channelization codes (multicodes) can be used to transmit data to a single user, thereby increasing the per-user bit rate and the granularity of adaptation [11, 16]. In Wideband Code-Division Multiple Access (WCDMA), the channelization codes are often referred to as Orthogonal Variable Spreading Factor (OVSF) codes. The number of OVSF codes per base station (BS) is quite limited due to the orthogonality constraint [11], and thus OVSF codes and transmit power are scarce resources. Fig. 1 shows the number of OVSF codes as a function of the spreading factor for WCDMA. Note that a lower spreading factor corresponds to a higher bit rate and vice versa. According to Fig. 1, if a spreading factor of 2 is needed, the system can allocate at most two such OVSF codes; if a spreading factor of 4 is required, a total of four such codes can be allocated. In High Speed Downlink Packet Access (HSDPA), a fixed spreading factor of 16 has been specified, thereby limiting the number of OVSF codes to 16<sup>1</sup>.

The allocation of the number of OVSF codes (or multicodes) and the MCS level for each user depends on the strength of the received signal at the user, which, in turn, depends on 1) the quality of the wireless channel, and 2) the level of the transmit power to the respective user.

<sup>1</sup> In principle, 16 OVSF codes can be used. However, one code is allocated for other purposes such as signalling. Thus, a maximum of 15 codes can be allocated for data traffic [11].
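The orthogonality constraint behind the limited code budget can be made concrete: the standard OVSF tree construction yields exactly SF mutually orthogonal codes at spreading factor SF. A minimal sketch (the code ordering here does not follow the 3GPP numbering; only the orthogonality property matters):

```python
def ovsf_codes(sf):
    """Generate all OVSF codes of spreading factor sf (a power of two) via
    the standard tree construction: each code c spawns (c, c) and (c, -c).
    The ordering does not follow the 3GPP code numbering; only the mutual
    orthogonality of codes at the same level matters here."""
    codes = [[1]]
    while len(codes[0]) < sf:
        codes = [c + c for c in codes] + [c + [-x for x in c] for c in codes]
    return codes

codes = ovsf_codes(4)
print(len(codes))  # 4: exactly SF codes exist at spreading factor SF
# any two distinct codes of the same spreading factor are orthogonal
assert all(sum(a * b for a, b in zip(codes[i], codes[j])) == 0
           for i in range(4) for j in range(4) if i != j)
```

Since each level of the tree doubles both the spreading factor and the number of codes, allocating one code blocks its whole subtree, which is why the codes are a scarce resource.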

©2012 Kwan et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

*Simulated Annealing – Single and Multiple Objective Problems*: Simulated Annealing and Multiuser Scheduling in Mobile Communication Networks

As shown in Fig. 2, if the spreading factor and the number of OVSF codes are fixed, one way to increase the user bit rate is to increase the MCS level. As each MCS level is associated with a specific signal quality requirement, the highest MCS level that can be allocated depends on the channel quality at the user receiver, which is stochastic by nature. Thus, at a given channel quality at the receiver, the MCS level can be increased by increasing the transmit power to the user. Another way to increase the user bit rate at a fixed spreading factor is to fix the MCS level while increasing the number of OVSF codes allocated to the respective user, as shown in Fig. 3. In order to achieve a given signal quality requirement for each OVSF code, a higher power level must then be allocated to this user. Thus, the general problem of HSDPA resource allocation boils down to the joint allocation of user-specific MCS, number of OVSF codes, and power level over all users connected to a given base station, subject to the constraints of code and power resources, as shown in Fig. 4. It is important to note that this allocation is done very rapidly (on the order of two or more milliseconds) in order to exploit the channel diversity of the users. HSDPA is based on a shared channel concept, in which multiple users share the channel in a time-multiplexed fashion, and the process of resource allocation is performed at regular time intervals. Note that in HSDPA, as shown in Fig. 5, the shared channel is the dual of the dedicated channel, in which the bit rate for each user is kept constant over a relatively long time period by appropriate closed-loop power control.

**Figure 2.** Bit rate and channel quality requirement trade-off for HSDPA.


**Figure 3.** Bit rate and channel quality requirement trade-off for HSDPA.

**Figure 1.** Orthogonal Variable Spreading Factor (OVSF) code tree.

For simplicity, the downlink transmit power is normally held constant (or slowly changing)<sup>2</sup> in HSDPA [11]. A number of scheduling algorithms have been proposed for HSDPA [7]. The most commonly encountered ones are (1) round-robin, in which users are allocated resources in turn, regardless of channel conditions; (2) Max C/I, in which resources are allocated to the user with the best channel condition; and (3) proportional fair, in which resources are assigned to the user with the *relatively* best channel condition. Other schedulers include minimum bit rate (MBR), MBR with proportional fairness, and minimum delay (MD).
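The three common schedulers differ only in the metric they maximize in each TTI. A toy sketch with invented per-user numbers, not taken from any measurement:

```python
# Hypothetical per-user state: instantaneous supportable rate and long-term
# average throughput (both in kbps); the numbers are invented for illustration.
users = [
    {"rate": 300.0, "avg_tput": 400.0},  # strong channel, already well served
    {"rate": 200.0, "avg_tput": 100.0},  # weaker channel, starved so far
]

def round_robin(i, t):
    # users take turns, regardless of channel conditions
    return -((i - t) % len(users))

def max_ci(i, t):
    # the absolutely best channel wins
    return users[i]["rate"]

def prop_fair(i, t):
    # the *relatively* best channel wins: rate normalized by own throughput
    return users[i]["rate"] / users[i]["avg_tput"]

def pick(metric, t=0):
    """Schedule the user maximizing the given metric in TTI t."""
    return max(range(len(users)), key=lambda i: metric(i, t))

print(pick(max_ci))     # 0: highest instantaneous rate
print(pick(prop_fair))  # 1: 200/100 = 2.0 beats 300/400 = 0.75
```

The example shows why proportional fair trades some throughput for fairness: it serves the starved user even though the other user's channel is absolutely better.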

In exploiting multiuser diversity, a common way to achieve the best network throughput is to assign resources to a user with the largest signal-to-noise ratio (SNR) among all backlogged

<sup>2</sup> In some cases, the specification stipulates a slight power reduction for a mobile with an exceptionally good channel quality [22].


**Figure 5.** Resource allocation for shared and dedicated channels.


**Figure 4.** Resource allocation in HSDPA.

users (i.e. users with data to send) at the beginning of each scheduling period [5]. However, due to limited mobile capability, a user might not be able to utilize all the radio resources available at the BS. Thus, transmission to multiple users during a scheduling period may be more resource efficient. In [2, 15], the problem of downlink multiuser scheduling subject to limited code and power constraints is addressed. It is assumed in [15] that the exact path-loss and received interference power at every TTI for each user are fed back to the BS. This would require a large bandwidth overhead.

In this chapter, the problem of optimal (maximum aggregate throughput) multiuser scheduling in HSDPA is addressed<sup>3</sup>. The MCSs, numbers of multicodes, and power levels for all users are jointly optimized at each scheduling period, given that only limited channel state information (CSI), as specified in the HSDPA standard [22], is fed back to the BS. This is a more general resource allocation formulation for HSDPA than those in the existing literature, where, for simplicity, resources are typically not allocated jointly among users. Given the inherent complexity, an integer programming formulation is proposed for the above problem. Since obtaining an optimal solution is computationally expensive, an *evolutionary simulated annealing* (ESA) approach is explored. It is shown that ESA can provide a near-optimum performance with significantly reduced complexity.

<sup>3</sup> The materials presented here are mostly based on the contents in [14].

## **2. System model**


In a communication system, the quality of the channel is often quantified by the Signal to Interference and Noise Ratio (SINR), which is defined as the ratio of the received signal power relative to the power contribution from interference and noise. A better channel quality is represented by a higher received signal power and a smaller interference and noise power.

We consider downlink transmissions from a BS to a number of mobile users. Let *Pi* denote the downlink transmit power to user *i*, *hi* denote the link gain from the BS to user *i*, and *Ii* be the total received interference and noise power at user *i*. The received SINR for user *i* is then given by

$$\gamma\_i = \frac{h\_i P\_i}{I\_i}, \; i = 1, \dots, N, \tag{1}$$

where *N* is the number of users and

$$\sum\_{i=1}^{N} P\_i \le P\_T \tag{2}$$

where *PT* is the total HSDPA power constraint. Ideally, the user would measure the received SINR, and report its value back to the BS. Upon receiving the SINR value for this user, the BS would decide what MCS and number of multicodes the user can be allocated, taking into account all the resource constraints that the BS has. However, if each user *i* were to send back its exact SINR value *γ<sup>i</sup>* to the BS, the required feedback channel bandwidth would be impractically large. As specified in [22], the channel quality information fed back by a


mobile, also known as the *channel quality indicator* (CQI), can only take on a finite number of non-negative integer values {0, 1, . . . , *K*}. According to [22], the CQI is provided by the mobile via the High Speed Dedicated Physical Control Channel (HS-DPCCH). Each CQI value maps directly to a maximum bit rate<sup>4</sup> that a mobile can support, based on the channel quality and mobile capability [23], while ensuring that the block error rate (BLER) does not exceed 10%. Finally, upon receiving the CQIs from all the users, the BS decides on the most appropriate combination of MCS and number of multicodes for each user.

Although the mapping between the CQI and the SINR is not specified in [22], it has been discussed in various proposals [18]; [13]. In [8], a mapping is proposed in which the system throughput is maximized while the BLER constraint is relaxed. Let $\tilde{\gamma}\_i = 10 \log\_{10}(\gamma\_i)$ be the received SINR value, in dB, for user *i* and let *qi* be the CQI value that user *i* reports back to the BS via HS-DPCCH. The mapping between *qi* and $\tilde{\gamma}\_i$ can generally be expressed as a piece-wise linear function [6, 8, 18]

$$q\_{i} = \begin{cases} 0 & \tilde{\gamma}\_{i} \le t\_{i,0} \\ \lfloor c\_{i,1}\tilde{\gamma}\_{i} + c\_{i,2} \rfloor & t\_{i,0} < \tilde{\gamma}\_{i} \le t\_{i,1} \\ q\_{i,\max} & \tilde{\gamma}\_{i} > t\_{i,1} \end{cases} \tag{3}$$


where the terms {*ci*,1, *ci*,2, *ti*,0, *ti*,1} are model- and mobile-capability-dependent constants, and $\lfloor \cdot \rfloor$ denotes the floor function. Due to the quantization operation implied in (3), $\tilde{\gamma}\_i$ cannot generally be recovered exactly from the value of *qi* alone. It should be noted that the region $t\_{i,0} < \tilde{\gamma}\_i \le t\_{i,1}$ is the operating region for the purpose of link adaptation. This region should be chosen large enough to accommodate the SINR variations encountered in most practical scenarios [11], i.e. the probability that $\tilde{\gamma}\_i$ falls outside this region should be quite small. As part of our proposed procedure, $\tilde{\gamma}\_i$ is approximated as

$$
\tilde{\gamma}\_i^{\dagger} = \tilde{\gamma}\_i^{(l)} + \left(\tilde{\gamma}\_i^{(u)} - \tilde{\gamma}\_i^{(l)}\right)\xi, \tag{4}
$$

where

$$
\tilde{\gamma}\_i^{(l)} = \frac{q\_i - c\_{i,2}}{c\_{i,1}},
\tag{5}
$$

$$
\tilde{\gamma}\_i^{(u)} = \frac{q\_i + 1 - c\_{i,2}}{c\_{i,1}},
\tag{6}
$$

and *ξ* is a uniformly distributed random variable, i.e. *ξ* ∼ *U*(0, 1). In a more conservative design, the value of *ξ* could be set to 0. Note that this approximation assumes that $\tilde{\gamma}\_i$ is uniformly distributed between $\tilde{\gamma}\_i^{(l)}$ and $\tilde{\gamma}\_i^{(u)}$ for a given value of *qi*.

For *qi* = 0 and *qi* = *qi*,*max*, $\tilde{\gamma}\_i$ could be approximated as *ti*,0 and *ti*,1 respectively, or more generally as *ti*,0 − *ξi*,0 and *ti*,1 + *ξi*,1 respectively, with *ξi*,0 and *ξi*,1 following some pre-defined probability distributions. Finally, the estimated value of *γi* is given by $\hat{\gamma}\_i = 10^{\tilde{\gamma}\_i^{\dagger}/10}$. We refer to the mapping from SINR to *qi* in (3) as the *forward mapping*, and the approximation of the SINR based on the received value of *qi* in (4) as the *reverse mapping*.

<sup>4</sup> In this chapter, the bit rate refers to the transport block size, i.e. the maximum number of radio link control (RLC) protocol data unit (PDU) bits that a transport block can carry, divided by the duration of a TTI, i.e. 2 ms [11].
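The forward mapping (3) and reverse mapping (4)-(6) can be sketched as follows. The constants below are hypothetical placeholders, since the actual values of {*c*1, *c*2, *t*0, *t*1} are model- and mobile-capability-dependent:

```python
import math
import random

# Hypothetical mapping constants for one user; the actual values of
# {c1, c2, t0, t1} are model- and mobile-capability-dependent.
C1, C2, T0, T1, Q_MAX = 1.0, 4.5, -4.5, 25.5, 30

def forward_map(snr_db):
    """Eq. (3): quantize the received SINR (in dB) into a CQI value."""
    if snr_db <= T0:
        return 0
    if snr_db > T1:
        return Q_MAX
    return math.floor(C1 * snr_db + C2)

def reverse_map(q, xi=None):
    """Eqs. (4)-(6): approximate the SINR (in dB) from a reported CQI,
    assuming the SINR is uniformly distributed within the CQI bin."""
    if xi is None:
        xi = random.random()      # xi ~ U(0,1); a conservative design uses xi = 0
    lo = (q - C2) / C1            # lower bin edge, eq. (5)
    hi = (q + 1 - C2) / C1        # upper bin edge, eq. (6)
    return lo + (hi - lo) * xi    # eq. (4)

q = forward_map(10.3)             # floor(10.3 + 4.5) = 14
mid = reverse_map(q, xi=0.5)      # bin midpoint: 10.0 dB
assert forward_map(mid) == q      # the reverse mapping lands back in the same bin
```

The round-trip assertion illustrates the quantization loss stated in the text: the floor in (3) collapses a whole SINR interval onto one CQI, so (4) can only place the estimate somewhere inside that interval.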

## **3. Joint optimal scheduling**


Note that the value of the channel quality, *qi*, reported by user *i* indicates the rate index associated with the maximum bit rate that the BS can support for that user, and is related jointly to a required number of OVSF codes (multicodes) and MCS. The number of multicodes and the MCS assigned to each user, together with the estimated SINR values of the users, determine the transmit power required at the BS. Since the number of multicodes and the transmit power are limited, the BS might not be able to simultaneously satisfy the bit rate requests of all users as indicated by {*qi*, *i* = 1, . . . , *N*}. Therefore, for a set {*qi*, *i* = 1, . . . , *N*}, the BS must calculate a set of *modified* CQIs, {*Ji*, *i* = 1, . . . , *N*}, for all users by taking into account the transmit power and number of multicodes constraints.

From the *forward* and *reverse* mappings in (3) and (4), the modified CQIs are chosen as

$$J\_i = \min\left(\max\left(\eta\_i(\tilde{\gamma}\_i^\dagger, \phi\_i), 0\right), q\_{i, \max}\right), i = 1, \dots, N\tag{7}$$

where *φi* is the power adjustment factor for user *i*, i.e. $\hat{\gamma}\_i \mapsto \phi\_i \hat{\gamma}\_i$, and

$$\eta\_i(\tilde{\gamma}\_i^{\dagger}, \phi\_i) = \left\lfloor c\_{i,1} \left( \tilde{\gamma}\_i^{\dagger} + 10 \log\_{10} \phi\_i \right) + c\_{i,2} \right\rfloor, \tag{8}$$

$$0 \le \phi\_i \le 10^{\left(\frac{q\_{i,\max} - (c\_{i,1}\tilde{\gamma}\_i^{\dagger} + c\_{i,2})}{10\,c\_{i,1}}\right)}. \tag{9}$$

Fig. 6 summarizes the conversion process from the received CQI, *qi*, to the final assigned rate index, *Ji*.

**Figure 6.** The conversion process from the received CQI, *qi*, from the mobile to the assigned rate index *Ji* at the base station. ([14]©IET)
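The conversion in (7)-(9) can be sketched numerically; the constants below are hypothetical placeholders with the same roles as *c<sub>i,1</sub>*, *c<sub>i,2</sub>*, and *q<sub>i,max</sub>*, not values from [22]:

```python
import math

# Hypothetical per-user constants (roles of c_{i,1}, c_{i,2}, q_{i,max});
# real values depend on the channel model and mobile capability.
C1, C2, Q_MAX = 1.0, 4.5, 30

def eta(snr_db, phi):
    """Eq. (8): CQI index after applying the power adjustment factor phi > 0."""
    return math.floor(C1 * (snr_db + 10 * math.log10(phi)) + C2)

def modified_cqi(snr_db, phi):
    """Eq. (7): clip the adjusted index into the valid range [0, Q_MAX]."""
    return min(max(eta(snr_db, phi), 0), Q_MAX)

def phi_upper_bound(snr_db):
    """Eq. (9): largest phi that does not push the index beyond Q_MAX."""
    return 10 ** ((Q_MAX - (C1 * snr_db + C2)) / (10 * C1))

snr = 10.0
print(modified_cqi(snr, 1.0))  # 14: no power adjustment
print(modified_cqi(snr, 2.0))  # 17: 3 dB more power raises the index
# at the bound of eq. (9) the index reaches Q_MAX (up to floating-point rounding)
print(modified_cqi(snr, phi_upper_bound(snr)))
```

This makes the role of *φi* concrete: scaling the estimated SINR by *φi* shifts the dB value by 10 log<sub>10</sub> *φi*, and (9) simply caps *φi* where the resulting index would exceed *qi*,*max*.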

The multiuser joint optimal scheduling problem **P1** can be expressed as

$$\mathbf{P1}: \qquad \max\_{\mathbf{A}, \phi} \sum\_{i=1}^{N} \sum\_{j=0}^{J\_i} a\_{i,j} r\_{i,j} \tag{10}$$

subject to (7)-(9) and

$$\sum\_{j=0}^{J\_i} a\_{i,j} = 1,\ \forall i,\tag{11}$$

$$\sum\_{i=1}^{N} \sum\_{j=0}^{J\_i} a\_{i,j} n\_{i,j} \le N\_{\max}, \tag{12}$$

$$a\_{i,j} \in \{0, 1\},\tag{13}$$

$$\sum\_{i=1}^{N} \phi\_i \le N. \tag{14}$$


In (10), *Ji* is the maximum allowable CQI value for user *i* and *ri*,*<sup>j</sup>* denotes the achievable bit rate for user *i* and CQI value *j* [22]; the decision variable *ai*,*<sup>j</sup>* is equal to 1 if rate index *j* is assigned to user *i*; otherwise, *ai*,*<sup>j</sup>* = 0. In (12), *Nmax* is the maximum number of multicodes available for HSDPA at the BS and *ni*,*<sup>j</sup>* is the required number of multicodes for user *i* and CQI value *j* [22]. Depending on multicode availability, the assigned combination of MCS and the number of multicodes may correspond to a bit rate that is smaller than that permitted by *Ji*. The constraint in (14) can be obtained by substituting *Pi* = *φiPT*/*N* into (2). The objective in the optimization problem **P1** is to choose **A** = {*ai*,*j*} and *φ* = {*φi*} at each TTI so as to maximize the sum bit rate for all users, subject to (7)-(9) and (11)-(14).
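On a toy instance, problem **P1** can be solved by exhaustive search, which also clarifies the roles of constraints (11)-(13). The rate and multicode tables below are invented, and for brevity the power adjustment of (7)-(9) and (14) is folded into fixed *Ji* values rather than optimized over *φ*:

```python
from itertools import product

# Tiny invented instance of P1: two users, rate indices j = 0, 1, 2.
# r[i][j]: achievable bit rate; n[i][j]: multicodes required.
r = [[0, 100, 200], [0, 150, 300]]
n = [[0, 1, 2], [0, 2, 4]]
N_MAX = 4          # multicode budget, constraint (12)
J = [2, 2]         # modified CQIs J_i, assumed already computed via (7)-(9)

best_rate, best_assign = -1, None
# constraints (11) and (13): exactly one index j is chosen per user
for j0, j1 in product(range(J[0] + 1), range(J[1] + 1)):
    if n[0][j0] + n[1][j1] > N_MAX:   # constraint (12)
        continue
    total = r[0][j0] + r[1][j1]       # objective (10)
    if total > best_rate:
        best_rate, best_assign = total, (j0, j1)

print(best_rate, best_assign)  # the code budget forces user 2 down one rate index
```

Even in this two-user case the coupling is visible: neither user can be given its maximum index because the shared multicode budget binds, which is exactly why the allocation must be done jointly. The search space grows exponentially in *N*, motivating the linearization and the simulated annealing approach that follow.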

## **4. Linearization**

It is important to note that the quantity *Ji*, which appears in the upper summation index in (10), is itself a function of the decision variable *φi*. On the other hand, *φ<sup>i</sup>* is related to *Ji* via a non-linear relationship (9). Thus, the problem **P1** is not a standard linear integer programming problem. As such a problem is highly non-linear, a global optimal solution is very difficult to obtain. To solve problem **P1** with linear integer programming methods, the problem needs to be appropriately transformed into a linear problem by introducing additional auxiliary variables. The first step is to re-formulate the model as follows:

$$\mathbf{P1}': \qquad \max\_{\mathbf{A}, \phi} \sum\_{i=1}^{N} \sum\_{j=0}^{q\_{i,\max}} b\_{i,j} a\_{i,j} r\_{i,j} \tag{15}$$

subject to

$$\sum\_{j=0}^{q\_{i,max}} b\_{i,j} a\_{i,j} = 1, \; \forall i,\tag{16}$$


$$\sum\_{i=1}^{N} \sum\_{j=0}^{q\_{i,\max}} b\_{i,j} a\_{i,j} n\_{i,j} \le N\_{\max} \tag{17}$$

together with (7)-(9),(13)-(14), where the new variable *bi*,*<sup>j</sup>*

$$b\_{i,j} = \begin{cases} 0 \text{, } j > J\_{i} \\ 1 \text{, } j \le J\_{i} \end{cases} \tag{18}$$

is introduced to limit the rate index *j* to no higher than *Ji*.

After the above re-formulation, Problem **P1**′ is still non-linear due to the terms involving the product *ai*,*jbi*,*j*, and the presence of the floor function ⌊·⌋ and the logarithm in (7). Subsequent linearization of Problem **P1**′ involves introducing a new decision variable *mi*,*<sup>j</sup>* = *ai*,*jbi*,*<sup>j</sup>* and re-writing the problem as follows:

$$\mathbf{P1}^{\prime\prime}: \qquad \max\_{\mathbf{A}, \mathbf{B}, \underline{\underline{\Phi}}} \sum\_{i=1}^{N} \sum\_{j=0}^{q\_{i,max}} m\_{i,j} r\_{i,j} \tag{19}$$

subject to


$$\sum\_{j=0}^{q\_{i,max}} m\_{i,j} = 1, \; \forall i,\tag{20}$$

$$\sum\_{i=1}^{N} \sum\_{j=0}^{q\_{i,max}} m\_{i,j} n\_{i,j} \le N\_{max} \tag{21}$$

$$m\_{i,j} \le b\_{i,j}, \; \forall \, i, j \tag{22}$$

$$m\_{i,j} \le a\_{i,j} M, \; \forall \, i, j \tag{23}$$

$$m\_{i,j} \ge b\_{i,j} - (1 - a\_{i,j})M, \; \forall \, i, j \tag{24}$$

$$e\_{i,j} - \phi\_i \le (1 - \omega\_{i,j})M, \; \forall \, i, j, \tag{25}$$

$$\phi\_{i} - e\_{i,j+1} \le (1 - \omega\_{i,j})M, \; \forall \, i, j, \tag{26}$$

$$J\_i = \sum\_{j=0}^{q\_{i,max}} j \omega\_{i,j}, \; \forall \, i, \tag{27}$$

$$j - J\_i \le (1 - b\_{i,j})M, \; \forall \, i, j \tag{28}$$

$$J\_i + 1 - j \le b\_{i,j} M, \; \forall \, i, j \tag{29}$$

$$\sum\_{j=0}^{q\_{i,max}} \omega\_{i,j} = 1 \tag{30}$$

$$b\_{i,j}, \, a\_{i,j}, \, m\_{i,j}, \, \omega\_{i,j} \in \{0, 1\} \tag{31}$$

together with (14), where

$$e\_{i,j} = \begin{cases} -M & \text{if } j = 0\\ 10^{\left(\frac{j - (c\_{i,1}\tilde{\gamma}\_i^{\dagger} + c\_{i,2})}{10}\right)} & \text{if } 1 \le j \le q\_{i,max} \\ M & \text{if } j = q\_{i,max} + 1 \end{cases} \tag{32}$$

and *M* is a large number. In this new formulation, constraints (22)-(24) model the product *ai*,*jbi*,*j*, while (25)-(27) model the floor and max functions in (7) and the constraint in (9). The constraints (28)-(30) are used to linearize the expression defined in (18). Note that the introduction of auxiliary variables increases the size of the model and thereby increases the complexity of the problem. The linearized Problem **P1**′′ was solved using a commercial optimization software package implementing the branch-and-bound method [19]. An alternative method for solving Problem **P1** is to use meta-heuristics, as discussed in Section 5.
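The effect of the big-M constraints (22)-(24) can be checked directly by enumeration. The sketch below (plain Python; the value *M* = 1000 is an arbitrary large constant, not one prescribed in the chapter) confirms that, for binary *a* and *b*, the only binary *m* satisfying all three inequalities is the product *ab*:

```python
# Verify that the big-M constraints (22)-(24) force m = a*b for binary a, b, m.
M = 1000  # any sufficiently large constant

def satisfies(m, a, b):
    return (m <= b and             # (22): m <= b
            m <= a * M and         # (23): m <= a*M
            m >= b - (1 - a) * M)  # (24): m >= b - (1-a)*M

for a in (0, 1):
    for b in (0, 1):
        feasible = [m for m in (0, 1) if satisfies(m, a, b)]
        assert feasible == [a * b], (a, b, feasible)
print("constraints (22)-(24) linearize m = a*b exactly")
```

This is precisely why the non-linear objective term *ai*,*jbi*,*jri*,*j* in (15) can be replaced by the linear term *mi*,*jri*,*j* in (19).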

## **5. Simulated annealing**

Meta-heuristic approaches have attracted much attention owing to their success in solving hard combinatorial optimization problems. One of these successful approaches is simulated annealing (SA) [24], which has proven to be a powerful meta-heuristic for many combinatorial optimization problems. SA typically involves a probabilistic decision-making process in which a control parameter, often known as the temperature *τ*, is used to control the probability of accepting a poorer solution in the neighborhood of the current solution. The idea is to provide the possibility of reaching a better solution by diversifying the search within the search space, redirecting the search to a new neighborhood when the chance of discovering a better solution within the old neighborhood is low. The algorithm explores the solution space through a simulated cooling process from a given initial (hot) temperature to a final (frozen) temperature. At a higher temperature, the probability of selecting a poorer solution is higher, thereby allowing the search to be more extensive within the solution space. However, as the temperature decreases, the search becomes confined to the region near the desirable solution, thereby providing a refinement of the existing solution. Essentially, the search is conducted through two nested loops; the outer one decreases the temperature using a particular cooling schedule, while the inner one repeats the search at the same temperature. Within the inner loop, a sequence of solutions is obtained by manipulating the current solution. Each solution is the result of one iteration.

Let **x***<sup>n</sup>* be the solution at iteration *n*, and let **x**′*<sup>n</sup>* = *N*(**x***n*), where *N*(**x***n*) is some neighbor function of **x***n*. The next solution in the search process is a probabilistic function of **x***n* and is given by [24]

$$\mathbf{x}\_{n+1} = \begin{cases} \mathbf{x}'\_{n}, & \text{if } s(\mathbf{x}'\_{n}) > s(\mathbf{x}\_{n})\\ \mathbf{x}'\_{n}, & \text{if } r < e^{-\Delta s/\tau\_{k}}\\ \mathbf{x}\_{n}, & \text{otherwise} \end{cases},\tag{33}$$


where *τ<sup>k</sup>* corresponds to the *k*-th temperature level, *s*(·) is the objective function to be maximized, Δ*s* = *s*(**x***n*) − *s*(**x**′*<sup>n</sup>*), and *r* is the outcome of a random variable uniformly distributed in [0,1]. This method for choosing a new solution is commonly referred to as the Metropolis rule [24]. The motivation is to diversify the search process, thereby reducing the possibility of becoming trapped in locally optimal solutions. As the temperature decreases, so does the probability of accepting a worse solution.
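In code, the Metropolis rule (33) amounts to only a few lines. The sketch below (plain Python; the objective *s* and the solutions are placeholders) accepts an improving neighbor unconditionally and a worsening one with probability *e*<sup>−Δ*s*/*τ*</sup>:

```python
import math
import random

def metropolis_step(x, x_new, s, tau, rng=random):
    """Return the next solution per (33): accept x_new if it improves s;
    otherwise accept it with probability exp(-delta_s / tau)."""
    if s(x_new) > s(x):
        return x_new
    delta_s = s(x) - s(x_new)  # >= 0 when x_new is no better
    if rng.random() < math.exp(-delta_s / tau):
        return x_new
    return x
```

At a high temperature, exp(−Δ*s*/*τ*) is close to 1 and worsening moves are usually accepted; as *τ* → 0 the rule degenerates to greedy hill-climbing.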

SA has become a basis that inspires different algorithmic variations for solving a large variety of optimization problems. *Evolutionary* Simulated Annealing (ESA) is one of these recently developed population-based SA algorithms enhanced with evolutionary operators [3]. Instead of manipulating a single solution, ESA makes use of a population of solutions in order to combine the advantages of both SA and population-based approaches. A single instance of SA is devised to act as an evolutionary operator and is invoked successively, starting at a fixed initial temperature each time. Each invocation of SA is commonly referred to as a *generation*. ESA evolves the population of solutions with the SA operator alongside the selection and replacement operators *generation-by-generation*. The idea is to decrease the temperature during each SA operation and raise it back to the fixed initial value whenever the SA operator is invoked. These artificially induced fluctuations in temperature allow the solution space to be explored more thoroughly, and thereby reduce the possibility of being trapped in local optima. Recently, a comprehensive study on implementing ESA for facility location problems has been reported in [26].

In this chapter, we propose to use the ESA algorithm to solve the multiuser scheduling problem outlined in (7)-(14), owing to its ability to cope with the highly non-linear nature of the problem. Our ESA implementation consists of two components: an initial solution and an SA operator. The SA operator is invoked once per generation for *G* generations. For each SA operation, a search is conducted over *N<sup>τ</sup>* different temperatures. Starting with an initial temperature *τhot*, the *k*-th temperature level is given by

$$\tau\_k = \tau\_{hot} \theta^{k-1}, \quad k = 1, 2, \dots, N\_{\tau}, \tag{34}$$

where *θ* ∈ (0, 1) is the cooling coefficient. The *Nτ*-th temperature level is also referred to as the frozen temperature *τfrozen*. Note that the combined use of (33) and (34) corresponds to a variant of simulated annealing known as the *Simulated Quenching* (SQ) [9].
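The geometric schedule (34) is straightforward to generate; the values *τhot* = 10 and *θ* = 0.9 below are illustrative only, not parameters prescribed by the chapter:

```python
def cooling_schedule(tau_hot, theta, n_tau):
    """Geometric cooling per (34): tau_k = tau_hot * theta**(k-1), k = 1..N_tau."""
    return [tau_hot * theta ** (k - 1) for k in range(1, n_tau + 1)]

temps = cooling_schedule(10.0, 0.9, 5)
# temps[0] is tau_hot; temps[-1] is the frozen temperature tau_frozen
```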

In the proposed scheme, each temperature level is associated with *U* iterations. For each iteration, a user *i* is randomly selected, resulting in a solution to the problem of the form

$$\mathbf{x}\_{n} = (\phi\_{1}, \dots, \phi\_{i}, \dots, \phi\_{N}, \mathbf{a}\_{1}, \dots, \mathbf{a}\_{i}, \dots, \mathbf{a}\_{N}), \tag{35}$$

at iteration *n* ∈ {1, . . . , *U*}, where **a***<sup>i</sup>* = (*ai*,1, *ai*,2, ..., *ai*,*Ji*).



By applying a function *f*(·) to the current value of *φi*, new solutions can be generated. Each new value, *φ*′*<sup>i</sup>*, is obtained by applying the function *f*(·) successively until *φ*′*<sup>i</sup>* satisfies (9) and (14). Once a suitable *φ*′*<sup>i</sup>* is available, the corresponding *Ji* is obtained using (7). Subsequently, the element *ai*,*Ji* is set to 1 while the remaining elements are set to 0, i.e. **a***<sup>i</sup>* = {0, 0, ..., 1}, a vector of *Ji* + 1 elements with the 1 in the last position. If constraint (12) is violated, **a***<sup>i</sup>* is cyclically shifted to the left by one position, i.e. **a***<sup>i</sup>* = {0, 0, ..., 1, 0}. This process is repeated until (12) is satisfied, resulting in an updated vector **a**′*<sup>i</sup>*. Subsequently, the new solution is given by

$$\mathbf{x}'\_{n} = (\phi\_{1}, \dots, \phi'\_{i}, \dots, \phi\_{N}, \mathbf{a}\_{1}, \dots, \mathbf{a}'\_{i}, \dots, \mathbf{a}\_{N}).\tag{36}$$

The computational complexity of ESA in terms of the number of users, *N*, at given values of *G* and *N<sup>τ</sup>* is as follows. The number of iterations, *U*, explored at each temperature is chosen to be *N*, and the time required to check whether a solution **x**′*<sup>n</sup>* in (36) satisfies constraints (12) and (14) is O(*N*). Since both the number of iterations and the constraint-checking time grow linearly with *N*, the complexity of ESA is O(*N*<sup>2</sup>).

As both SA and ESA are heuristic algorithms, solutions obtained by these methods may not necessarily be optimal, but are generally close to optimal. The temperature parameter determines the trade-off between the speed of convergence towards the optimal value and how far the final solution is from the optimum. Generally, a higher temperature allows the optimal value to be approached faster, while a lower temperature provides improved fine-tuning, thereby improving the solution quality. ESA converges faster than conventional SA since the search is diversified by periodically re-increasing the temperature. As with any heuristic algorithm, a detailed convergence study requires the determination of appropriate parameter values through experimentation.
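The SA operator and the generation loop described above can be condensed into a short sketch. The code below (plain Python on a toy one-dimensional objective; the population, selection, and replacement operators of full ESA are omitted, and all parameter values are illustrative) shows the essential mechanic: a geometric cooling sweep per (34) with Metropolis acceptance per (33), and the temperature reset to *τhot* at the start of each generation:

```python
import math
import random

def sa_operator(x, s, neighbor, tau_hot, theta, n_tau, u, rng):
    """One SA 'generation': sweep the geometric temperature ladder (34),
    performing u Metropolis iterations (33) at each temperature level."""
    for k in range(n_tau):
        tau = tau_hot * theta ** k
        for _ in range(u):
            x_new = neighbor(x, rng)
            delta = s(x) - s(x_new)          # > 0 when x_new is worse
            if delta < 0 or rng.random() < math.exp(-delta / tau):
                x = x_new
    return x

def esa(s, neighbor, x0, generations=20, tau_hot=1.0, theta=0.8,
        n_tau=15, u=10, seed=1):
    """Simplified ESA sketch: restart the cooling schedule each generation
    (the temperature 'fluctuation' described in the text) and keep the best."""
    rng = random.Random(seed)
    best = x = x0
    for _ in range(generations):
        x = sa_operator(x, s, neighbor, tau_hot, theta, n_tau, u, rng)
        if s(x) > s(best):
            best = x
    return best

# Toy example: maximize s(x) = -(x - 3)^2, whose maximizer is x = 3.
result = esa(s=lambda x: -(x - 3.0) ** 2,
             neighbor=lambda x, rng: x + rng.uniform(-0.5, 0.5),
             x0=0.0)
```

For the scheduling problem itself, the neighbor function would be the *f*(·)-based perturbation of *φi* with the cyclic-shift repair of **a***i* described above, rather than the additive perturbation used in this toy.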

## **6. Simulation results**

A number of different simulation cases were used to illustrate the effectiveness of ESA for the multiuser HSDPA resource allocation problem. The first case involves *N* = 2 users, and the following values *ti*,0 = −4.5, *ti*,1 = 25.5, *ci*,1 = 1, *di*,1 = 4.5, and *qi*,*max* = 30 for *i* = 1, 2 are used for the parameters in (3). These values are obtained from [18], assuming that the mobiles are of category 10 (i.e. have a wide CQI range) as defined in [22]. Values for *ni*,*<sup>j</sup>* and *ri*,*<sup>j</sup>* are obtained from [22]. The fading channel following the general Nakagami model [21] is assumed so that {*γi*, *i* = 1, 2} in (1) are outcomes of Gamma distributed random variables


{Γ*i*, *i* = 1, 2} with pdfs given by

$$f\_{\Gamma\_i}(\gamma) = \begin{cases} \left(\frac{\alpha\_i}{\bar{\Gamma}\_i}\right)^{\alpha\_i} \frac{\gamma^{\alpha\_i - 1}}{\Gamma(\alpha\_i)} \exp\left(\frac{-\alpha\_i \gamma}{\bar{\Gamma}\_i}\right), & \gamma \ge 0\\ 0, & \gamma < 0 \end{cases},\tag{37}$$

where Γ(·) is the Gamma function, *α<sup>i</sup>* is the fading figure, and Γ̄*<sup>i</sup>* is the mean of Γ*i*. The parameter values are listed in Table 1. Let **Γ** = {Γ̄<sup>1</sup> Γ̄2}, and let the aggregate transport block size (TBS), *T*, per TTI be


| Scenario | User *i* | Γ̄*<sup>i</sup>* (in dB) | *α<sup>i</sup>* |
|:---:|:---:|:---:|:---:|
| I | 1 | 5 | 6.5 |
| | 2 | 6 | 3 |
| II | 1 | 12 | 6.5 |
| | 2 | 10 | 3 |
| III | 1 | 17 | 6.5 |
| | 2 | 16 | 3 |

**Table 1.** List of parameter values used ([14]©IET)

$$T = \sum\_{i=1}^{N} \sum\_{j=0}^{J\_i} a\_{i,j} r\_{i,j}. \tag{38}$$
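For simulation, outcomes of the Gamma-distributed {Γ*i*} in (37) can be drawn with Python's standard library: `random.gammavariate(alpha, beta)` takes a shape *α* and a scale *β*, so setting the scale to mean/*α* yields a sample with the desired mean. The sketch below uses the illustrative values *α* = 5 and a mean of 8.45 dB (about 7.0 in linear units), as in the later multi-user simulations:

```python
import random

def sample_snr(alpha, mean_gamma, rng):
    """Draw one Nakagami-faded SINR outcome: Gamma-distributed with shape
    alpha (the fading figure) and mean mean_gamma, as in (37)."""
    return rng.gammavariate(alpha, mean_gamma / alpha)

rng = random.Random(42)
mean_db = 8.45
mean_lin = 10 ** (mean_db / 10)  # about 7.0 in linear units
samples = [sample_snr(5, mean_lin, rng) for _ in range(20000)]
empirical_mean = sum(samples) / len(samples)  # should be close to mean_lin
```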

**Figure 7.** CDF of the aggregate bit rate in TBS per TTI. ([14]©IET)

| Scenario | Scheme | ATBS/TTI (kbits) | Gain (%) |
|:---:|:---:|:---:|:---:|
| I | SG | 2.151 | 0 |
| | ESA | 2.588 | 20.03 |
| | JGO | 2.591 | 20.42 |
| II | SG | 6.031 | 0 |
| | ESA | 6.492 | 7.64 |
| | JGO | 6.507 | 7.88 |
| III | SG | 12.472 | 0 |
| | ESA | 14.587 | 16.95 |
| | JGO | 14.944 | 19.82 |

**Table 2.** Average rate (in TBS per TTI) for the three different algorithms under the three scenarios. ([14]©IET)

The performance improvements of the proposed Joint Global Optimum (JGO) and ESA approaches, as discussed in sections 4 and 5 respectively, are compared to that of a simple greedy (SG) algorithm. The idea behind the SG algorithm is to allocate resources to users in decreasing order of their estimated SINR values, *γ*ˆ*i*. In other words, the user with the highest *γ*ˆ is first allocated as much resources as it can possibly use. Subsequently, remaining resources that can be productively used are then assigned to the user with the next highest *γ*ˆ. This allocation of resources continues until resources are exhausted.
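The SG rule can be sketched in a few lines of Python. The rate and multicode tables in the example below are made-up placeholders, not the CQI tables of [22]; the point is only the order of allocation:

```python
def simple_greedy(snr_est, rates, codes_needed, n_max):
    """Allocate rates user-by-user in decreasing estimated-SINR order.
    rates[i] and codes_needed[i] list each user's options, best rate first.
    Returns (total_rate, allocation dict mapping user -> chosen rate)."""
    remaining = n_max
    allocation = {}
    for user in sorted(snr_est, key=snr_est.get, reverse=True):
        for r, n in zip(rates[user], codes_needed[user]):
            if n <= remaining:       # largest rate that still fits
                allocation[user] = r
                remaining -= n
                break
    return sum(allocation.values()), allocation

# Illustrative two-user example (all values are placeholders):
total, alloc = simple_greedy(
    snr_est={0: 9.0, 1: 4.0},
    rates={0: [600, 300], 1: [500, 200]},
    codes_needed={0: [10, 5], 1: [10, 4]},
    n_max=14)
# User 0 (higher SINR) takes rate 600 using 10 codes; user 1 can no longer
# afford its best option and falls back to rate 200 using 4 codes.
```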

Fig. 7 shows the cumulative distribution function (CDF) of *T* for the three different scenarios in Table 1. The results in this figure are obtained based on two thousand channel realizations. It can be seen that ESA can achieve a performance that is close to the optimal. A summary of the average performance, i.e. *E***<sup>Γ</sup>** [*T*], for all three schemes is presented in Table 2. It can be seen that both JGO and ESA can provide a good throughput improvement over SG.

The second study compares the performance of the three algorithms as the number of users increases from 2 to 5. The average computation times per TTI required by JGO, ESA, and SG on a personal computer with an Intel Core™ 2 Duo T5500 processor are plotted as a function of the number of users in Fig. 8. As expected, SG is the fastest and JGO is the slowest. The running time for ESA is about 0.1 s per TTI and increases slowly with the number of users. With the use of parallel dedicated processors at the BS, ESA becomes a viable alternative to SG. Fig. 9 shows the average computation time, normalized to the 2-user case, as a function of the number of users; the average is taken over only twenty channel realizations due to the long simulation times needed for JGO. The following parameter values are used: *α*<sup>1</sup> = *α*<sup>2</sup> = ... = *α<sup>N</sup>* = 5 and Γ̄<sup>1</sup> = Γ̄<sup>2</sup> = ... = Γ̄*<sup>N</sup>* = 8.45 dB. It can be seen clearly that even though JGO provides a globally optimal solution, its complexity increases very rapidly with the number of users. On the other hand, ESA offers a very similar performance at a much reduced complexity.

**Figure 8.** Average computation time per TTI as a function of the number of users. ([14]©IET)

**Figure 9.** Average computation time per TTI, normalized to the 2-user case, as a function of the number of users. ([14]©IET)




The CDFs of *T* for ESA, SG, max C/I, and Round Robin (RR) with five users are plotted in Fig. 11. Note that max C/I is a special case of SG in which, for each TTI, resources are allocated only to the user with the best channel condition. RR, on the other hand, is a baseline scheduling scheme in which each user is served one at a time in a round robin fashion; under such a scheme, the respective channel qualities of the users are not used during scheduling. The corresponding values of *E***<sup>Γ</sup>** [*T*] for ESA, SG, max C/I, and RR are 10.6, 8.8, 2.8, and 1.9 kbits per TTI, respectively. The prominent steps in the CDFs for RR and max C/I are due to the coarse quantization that results from allocating resources to only one user in each TTI. It can be seen that RR has the lowest bit rate, as it does not take user channel conditions into account.
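To make the two single-user-per-TTI baselines concrete, the following sketch contrasts them: max C/I serves the user with the best current channel quality in each TTI, while RR cycles through the users regardless of channel state. The CQI values are hypothetical placeholders, not the CQI reports of the WCDMA feedback scheme.

```python
# Single-user-per-TTI baselines: max C/I vs Round Robin (RR).
# CQI values below are hypothetical illustration data.

def max_ci_schedule(cqi_per_tti):
    """Each TTI, serve the user with the highest reported CQI."""
    return [max(range(len(cqis)), key=lambda u: cqis[u])
            for cqis in cqi_per_tti]

def rr_schedule(num_users, num_ttis):
    """Serve users cyclically, ignoring channel state."""
    return [t % num_users for t in range(num_ttis)]

# Hypothetical CQI reports for 3 users over 4 TTIs.
cqi = [[5, 9, 2], [7, 3, 8], [1, 4, 6], [9, 9, 1]]
print(max_ci_schedule(cqi))  # index of the user served each TTI
print(rr_schedule(3, 4))
```

Because both schemes hand the whole TTI to one user, their aggregate-rate CDFs show the coarse, stepped shape noted above, whereas ESA and SG can split resources across users.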

**Figure 11.** CDF of the aggregate bit rate in TBS per TTI for ESA, SG, max C/I, and Round Robin (RR) with five users. ([14]©IET)

**7. Conclusion**

In this chapter, the issue of allocating resources to multiple users simultaneously in HSDPA has been examined via a number of optimization methods. The problem formulation based on the channel feedback scheme specified in the WCDMA standard has been presented. Simulation results have shown that both the globally optimal and the simulated annealing-based methods can provide a substantial throughput improvement over a simpler greedy algorithm. It has been observed that the simulated annealing-based method can achieve a bit rate that is very close to that of the global optimum, with a much lower computational complexity, and that its advantage over the simple greedy method increases with the number of users. In this chapter, it was assumed that the user SINR values, on which the channel quality indicators are based, can be estimated accurately. One research direction would be to study the performance degradation due to noisy SINR estimates and methods for reducing this degradation. Another potential study item is to investigate the benefits and trade-offs of other heuristic optimization methods for the same problem.

**Acknowledgement**

This work was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada under Grant OGP0001731, by the UBC PMC-Sierra Professorship in Networking and Communications, and by a Marie Curie International Incoming Fellowship PIIF-GA-2008-221380.

**Author details**

Raymond Kwan and M. E. Aydin
*University of Bedfordshire, United Kingdom*

Cyril Leung
*University of British Columbia, Canada*

**8. References**

[1] Abedi, S. [2005]. Efficient Radio Resource Management for Wireless Multimedia Communications: A Multidimensional QoS-Based Packet Scheduler, *IEEE Transactions on Wireless Communications* 4(6): 2811 – 2822.

[2] Aniba, G. & Aissa, S. [2005]. Resource Allocation in HSDPA using Best-Users Selection Under Code Constraints, *Proc. of IEEE Vehicular Technology Conference, Spring*, Vol. 1, pp. 319 – 323.

[3] Aydin, M. E. & Fogarty, T. C. [2004]. A Distributed Evolutionary Simulated Annealing Algorithm for Combinatorial Optimisation Problems, *Journal of Heuristics* 10(3): 269 – 292.

[4] Baum, K. L., Kostas, T. A., Sartori, P. J. & Classon, B. K. [2003]. Performance Characteristics of Cellular Systems with Different Link Adaptation Strategies, *IEEE Transactions on Vehicular Technology* 52(6): 1497 – 1507.

[5] Bedekar, A., Borst, S. C., Ramanan, K., Whiting, P. A. & Yeh, E. M. [1999]. Downlink Scheduling in CDMA Data Networks, *Proc. of IEEE Global Telecommunications Conference, GLOBECOM '99*, Vol. 5, pp. 2653 – 2657.

[6] Brouwer, F., de Bruin, I., Silva, J. C., Souto, N., Cercas, F. & Correia, A. [2004]. Usage of Link-Level Performance Indicators for HSDPA Network-Level Simulation in E-UMTS, *Proc. of International Symposium on Spread Spectrum Techniques and Applications (ISSSTA)*, Sydney, Australia.

[7] Dahlman, E., Parkvall, S., Sköld, J. & Beming, P. [2007]. *3G HSPA and LTE for Mobile Broadband*, Academic Press.

[8] Freudenthaler, K., Springer, A. & Wehinger, J. [2007]. Novel SINR-to-CQI Mapping Maximizing the Throughput in HSDPA, *Proc. of IEEE Wireless Communications and Networking Conference (WCNC)*, Hong Kong, China.
