| **Symbol** | **Explanation** |
|---|---|
| *R*<sub>*i*</sub><sup>*j*,*k*</sup> | Required time slots for the remainder of the *k*th migration traffic from ONU*j* in the *i*th polling cycle |
| *T*<sub>*i*</sub><sup>*j*,*k*</sup> | Remaining time for transmitting the *k*th migration traffic sent by ONU*j*, from the beginning of the *i*th polling cycle to its deadline |
| *D*<sup>*j*,*k*</sup> | Deadline for the *k*th migration traffic sent by ONU*j* |
| *B*<sup>*R*</sup> | Transmission time for sending each report and grant message |
| *B*<sup>*G*</sup> | Guard time with a fixed value |
| *B*<sub>*i*</sub><sup>*j*,*l*</sup> | Length of the requested time slots for the non-migration traffic with the *l*th priority at ONU*j* in the *i*th polling cycle |
| *H*<sup>*j*</sup> | Number of priority levels for the non-migration traffic at ONU*j* |
| *K*<sup>*j*</sup> | Number of migration tasks in ONU*j* |

**Table 3.**
*Explanation of symbols.*

A hard threshold *θ* is introduced as the percentage of the total time slots that can be allocated to the migration traffic within each polling cycle.

In the proposed algorithm, the calculation of the lengths of the slices in each polling cycle plays an important role and is described in more detail in the following part. The symbols used are explained in **Table 3**. In each polling cycle (e.g., the *i*th polling cycle), the required time slots (*G*<sub>*i*</sub><sup>*j*,*k*</sup>) for the *k*th migration traffic from ONU*j* can be calculated by

$$\mathbf{G}\_{i}^{j,k} = \mathbf{R}\_{i}^{j,k} / \left\lceil T\_{i}^{j,k} / \mathbf{W}\_{\max} \right\rceil \tag{1}$$
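To make Eq. (1) concrete, here is a minimal Python sketch of the per-cycle requirement: the remaining demand *R* is spread evenly over the ⌈*T*/*W*<sub>max</sub>⌉ polling cycles left before the deadline. The function and variable names are ours, and the numbers in the example are illustrative, not from the chapter.

```python
import math

def required_slots(remaining_slots: float, time_to_deadline: float,
                   w_max: float) -> float:
    """Eq. (1): spread the remaining demand of one migration task
    evenly over the polling cycles left before its deadline."""
    cycles_left = math.ceil(time_to_deadline / w_max)  # ceil(T / W_max)
    return remaining_slots / cycles_left               # G = R / ceil(T / W_max)

# Example (times in microseconds): 1200 slots left, 2500 us to the
# deadline, W_max = 1000 us -> ceil(2.5) = 3 cycles remain,
# so 400 slots are requested per cycle.
print(required_slots(1200, 2500, 1000))  # -> 400.0
```

Using *W*<sub>max</sub> rather than the current (variable) cycle length makes the estimate conservative, so the task still finishes on time even if every remaining cycle runs at its maximum length.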


*DOI: http://dx.doi.org/10.5772/intechopen.91439*


| **Parameter** | **Value** |
|---|---|
| Number of ONUs in a PON | 8 |
| Propagation delay in the optical links | 5 μs/km |
| Packet size of Ethernet frame (bytes) | (64, 1518) |
| Guard time between two consecutive time slots | 1 μs |
| Buffer size (Mbytes) | 100 |
| Amount of data encapsulated in application VMs (Mbits) | (10, 50) |
| Deadline for the migration data (second) | (1, 5) |
| Confidence level | 95% |

**Table 4.**
*Simulation parameters [16].*


*Low-Latency Strategies for Service Migration in Fog Computing Enabled Cellular Networks*


In the proposed resource allocation algorithm, the length of the polling cycle (*W*) varies dynamically with the traffic load. Thus, when calculating the required time slots in the current polling cycle, the maximum polling cycle (*W*<sub>max</sub>) is used to guarantee that the transmission of the whole migration traffic can be finished before the deadline. Here, the time unit (μs) is used to represent the length of the time slots and polling cycles. Then, the total length of the time slots granted for the migration traffic (*TG*<sub>*i*</sub><sup>*j*,*k*</sup>) can be calculated by

$$TG\_i^{j,k} = \sum\_{j=1}^{N} \sum\_{k=1}^{K\_j} G\_i^{j,k} \tag{2}$$

To guarantee fairness between the migration and non-migration traffic, the length of the granted time slots cannot exceed the maximum allowed length of the time slots in this polling cycle, which can be calculated by

$$R\_i^m = \left(W\_{\text{max}} - N \times \left(\mathbf{B}^R + \mathbf{B}^G\right)\right) \times \theta \tag{3}$$

The maximum length of the time slots allocated to the slice of the migration traffic is set by the threshold *θ* ∈ [0, 1). Thus, the time slots granted for the migration traffic in the *i*th polling cycle can be calculated by


$$TG\_i^{j,k} = \begin{cases} TG\_i^{j,k}, & TG\_i^{j,k} < R\_i^m \\ R\_i^m, & TG\_i^{j,k} \ge R\_i^m \end{cases} \tag{4}$$
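Eqs. (2)–(4) together determine the migration slice for one cycle: sum the per-task grants, compute the threshold-limited maximum *R*<sub>*i*</sub><sup>*m*</sup>, and clamp. A minimal sketch (our names; the double sum of Eq. (2) is passed in as one flat iterable of per-task grants, which is equivalent; the example numbers are illustrative):

```python
def migration_slice(grants, w_max, n_onus, b_r, b_g, theta):
    """Eqs. (2)-(4): total migration grant for one polling cycle,
    clamped to the threshold-limited maximum R_m.

    grants : per-task granted slots G_i^{j,k}, flattened over all ONUs j
             and migration tasks k
    theta  : fraction of the cycle usable by migration traffic, in [0, 1)
    """
    tg = sum(grants)                               # Eq. (2): TG = sum_j sum_k G
    r_m = (w_max - n_onus * (b_r + b_g)) * theta   # Eq. (3): usable slots * theta
    return min(tg, r_m)                            # Eq. (4): clamp TG to R_m

# Example with assumed numbers: 8 ONUs, a 1000-unit maximum cycle,
# 2-unit report and 1-unit guard overhead per ONU, theta = 0.5.
# R_m = (1000 - 8*3) * 0.5 = 488, so the 650 requested slots are clamped.
print(migration_slice([200, 300, 150], 1000, 8, 2, 1, 0.5))  # -> 488.0
```

The `min` in the last line is exactly the case split of Eq. (4): the full request is granted only while it stays below *R*<sub>*i*</sub><sup>*m*</sup>.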

In the *i*th polling cycle, the length of the slice for the migration traffic (*S*<sub>*i*</sub><sup>*m*</sup>) equals the total length of the granted time slots (*TG*<sub>*i*</sub><sup>*j*,*k*</sup>). Then, for the non-migration traffic (*C*<sub>*i*</sub><sup>*t*</sup>) with different priorities, the total length of the requested time slots can be calculated by

$$\mathbf{C}\_{i}^{t} = \sum\_{j=1}^{N} \sum\_{l=1}^{H\_{j}} \mathbf{B}\_{i}^{j,l} \tag{5}$$

For the non-migration traffic, the maximum available time slots (*C*<sub>*i*</sub><sup>*a*</sup>) can be calculated by

$$C\_i^a = W\_{\max} - S\_i^m - N \times \left(B^R + B^G\right) \tag{6}$$

Then, the time slots granted to the non-migration traffic can be calculated by

$$C\_i^t = \begin{cases} C\_i^t, & C\_i^t < C\_i^a \\ C\_i^a, & C\_i^t \ge C\_i^a \end{cases} \tag{7}$$

Similarly, in the *i*th polling cycle, the length of the slice for the non-migration traffic (*S*<sub>*i*</sub><sup>*n*</sup>) equals the total length of the granted time slots (*C*<sub>*i*</sub><sup>*t*</sup>).
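The non-migration side, Eqs. (5)–(7), mirrors the migration side: sum the requests, compute what is left of the cycle after the migration slice and per-ONU overheads, and clamp. A sketch under the same assumptions as above (our names, illustrative numbers):

```python
def non_migration_slice(requests, s_m, w_max, n_onus, b_r, b_g):
    """Eqs. (5)-(7): grant for the non-migration traffic in one cycle.

    requests : requested slots B_i^{j,l}, flattened over all ONUs j
               and priority levels l
    s_m      : slice already granted to the migration traffic (S_i^m)
    """
    c_t = sum(requests)                       # Eq. (5): total requested slots
    c_a = w_max - s_m - n_onus * (b_r + b_g)  # Eq. (6): slots left in the cycle
    return min(c_t, c_a)                      # Eq. (7): clamp C_t to C_a

# Example continuing the numbers above: with 488 units taken by the
# migration slice, 1000 - 488 - 24 = 488 units remain, so a total
# request of 600 units is clamped to 488.
print(non_migration_slice([100, 200, 300], 488, 1000, 8, 2, 1))  # -> 488
```

Together the two clamps guarantee that migration and non-migration slices plus the report/guard overheads never exceed *W*<sub>max</sub> in any cycle.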

#### **4.3 Performance evaluation**


The performance of the proposed algorithm has been investigated through simulation and compared with two benchmarks based on conventional DBA algorithms [15]. In Benchmark1, the migration and non-migration traffic follow FCFS, while in Benchmark2 a higher priority is given to the non-migration traffic. In both benchmarks, the non-migration traffic is assumed to have two priority levels (e.g., low and high). **Table 4** summarizes the main parameters.

As mentioned, a threshold *θ* is introduced to regulate the allocation of time slots within each polling cycle. It has been shown that with *θ* set to 1, 85% of the time slots in the overall polling cycle are allocated to the migration traffic at load = 0.9. In the following simulations, different values have been chosen to illustrate the impact.

**Figure 12** illustrates the migration success probability (MSP) versus traffic load. Here, MSP is defined as the ratio of the amount of services migrated before the required deadline to the total amount of services that are migrated. As shown, the MSP decreases with increasing traffic load in Benchmark2 and in DBS with different thresholds. At a lower traffic load (e.g., less than 0.4), all three schemes achieve a high MSP, while above 0.4 the MSP starts to decrease. For Benchmark1, the MSP shows only minor changes as the traffic load increases and remains close to 1 even at a load of 0.9. This is because, according to the principle of FCFS, large migration traffic can be fully transmitted over several cycles once the migration starts. On the other hand, in Benchmark2, the MSP for the migration traffic decreases sharply because the non-migration traffic is prioritized. As shown, the MSP is as low as 0.1 when the traffic load is 0.9. For DBS, all the migration tasks can be performed within the time constraints when the traffic load is under 0.5. When the traffic load is higher than 0.6, the MSP is mainly determined by the threshold on the time slots that can be used for the migration traffic, and it increases as the threshold increases. For example, with the threshold set to 0.5, the MSP can be up to 0.98 at a load of 0.7.
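The MSP definition above is a simple ratio; a minimal sketch (our names, illustrative data) makes it explicit:

```python
def migration_success_probability(finish_times, deadlines):
    """MSP: fraction of migrated services that complete before their
    required deadline, among all services that are migrated."""
    done_in_time = sum(f <= d for f, d in zip(finish_times, deadlines))
    return done_in_time / len(deadlines)

# Example: 3 of 4 migrations meet their deadline -> MSP = 0.75.
print(migration_success_probability([1.0, 2.0, 4.5, 3.0],
                                    [2.0, 2.0, 4.0, 5.0]))  # -> 0.75
```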

**Figure 12.**
*The migration success probability versus traffic load.*

The average E2E latency for the non-migration traffic with high priority is shown in **Figure 13(a)** as a function of load. The proposed scheme and the two benchmarks show a similar trend: the average latency increases with the traffic load. Among the three schemes, Benchmark1 always has the highest average E2E latency, which can be up to 100 ms when the traffic load is 0.9. Such a large latency may not be acceptable for time-critical services (e.g., interactive voice). Compared with Benchmark1, the average E2E latency in Benchmark2 increases more slowly even when the traffic load is high, because the non-migration traffic in Benchmark2 has high priority for transmission. Compared with both benchmarks, the average E2E latency for DBS is much lower: it is less than 1 ms when the traffic load is below 0.5 and remains below 10 ms even at high traffic load. This is because the migration traffic can be transmitted over multiple cycles by partitioning large migration tasks into smaller pieces; thereby, the non-migration traffic that arrives after or during the transmission of migration traffic does not need to wait long for transmission. Furthermore, when the threshold increases, the time slots allocated for transmitting the non-migration traffic decrease, and thus the average E2E latency increases. The jitter for the non-migration data with high priority follows a similar trend to the average E2E latency, as shown in **Figure 13(b)**.

**Figure 13.**
*(a) The average latency and (b) jitter for the non-migration data with high priority.*

**Figure 14.**
*The average (a) latency and (b) jitter for the non-migration data with low priority.*

The average E2E latency of the low-priority non-migration data under different traffic loads is shown in **Figure 14(a)**. Similar to the high-priority non-migration traffic, the general trend is that the E2E latency increases with the traffic load for all schemes. When the traffic load is low, all kinds of traffic can be assigned sufficient time slots, while as the traffic load increases, the average E2E latency for the low-priority non-migration traffic increases sharply because of its large queueing delay. More specifically, Benchmark1 has the highest average E2E latency among the three schemes, and the average E2E latency in Benchmark2 is much lower. When the traffic load is low (e.g., less than 0.6), the average latency of the low-priority non-migration traffic in DBS is the lowest, at below 2 ms. However, the latency of DBS increases quickly with larger thresholds

(e.g., larger than 0.5) and exceeds the level observed for Benchmark2 when the traffic load is high (e.g., higher than 0.7). The reason is that the time slots are prioritized for the high-priority non-migration traffic and for the migration traffic; thus, the low-priority non-migration traffic has to wait. The jitter shows a similar trend to the E2E latency, as shown in **Figure 14(b)**.
