**4. Challenges of fog computing and its integration with other computing platforms**

As aforementioned, fog computing offers a complementary platform that allows users to offload computational tasks to the network edge. So, resource-limited fog nodes with processing and storage functionalities are deployed at the edge of the network to enhance network performance. Nevertheless, the huge volume of user data from various applications must be wirelessly transmitted to the cloud center or to fog nodes, as the case may be. Consequently, massive communication bandwidth is demanded; it is noteworthy that bandwidth is an expensive and constrained resource in communication systems [8, 28, 29].

Moreover, a fog-cloud-based integrated computing architecture comes with a number of associated challenges. One of the main challenges is the efficient management of the fog infrastructure and the allocation of accessible resources to the IoT devices. It is noteworthy that a huge number of services can be demanded simultaneously by the IoT devices while, on the other hand, each fog service node has only limited storage and computing capabilities. Based on this, all fog nodes have to be managed optimally; for the efficient provision of the requested services, the nodes have to be optimally allocated to the requesting IoT devices. Besides, fog computing resource management is another notable challenge that calls for effective coordination among fog nodes [26].

Furthermore, it is highly imperative to consider various factors, such as energy consumption, service availability, and associated expenses, when fog nodes are deployed to deliver services [30]. In this context, optimally mapping the fog service nodes to the IoT devices so as to meet IoT application requirements is a challenging task. Besides, privacy and security issues such as access control, trust management, intrusion detection, and access authentication [31] are challenging in integrated fog computing and IoT device setups [26].

As aforementioned, one of the main challenges in an integrated computing platform is the multiple resource allocation issue. In this context, efficient resource management across the fog-cloud platform for an effective computation offloading is of paramount importance. To address this challenge, an effective multiple access technique such as NOMA is required [8].

NOMA is an attractive radio multiple access technique that mainly targets next-generation wireless communications. In NOMA, the power domain is leveraged for multiple access. It presents a number of potential advantages such as high reliability, reduced transmission latency, enhanced spectrum efficiency, and massive connectivity [32]. The main concept of NOMA is to serve multiple users through the same resource, in terms of time, code/space, and frequency, by exploiting different power levels. Afterward, at the fog nodes, a cancelation technique like successive interference cancelation (SIC) can be implemented to separate and decode the superimposed signals [8, 33]. In the following section, we present some models for resource allocation in an integrated fog-cloud architecture with NOMA implementation.

#### **5. System model**

This section presents some related models for the fog computing hierarchical network model illustrated in **Figure 1**. Assume that *M* fog nodes with various computing and storage capabilities are deployed to offer offloading services, given by a set $\mathcal{M} = \{1, 2, \ldots, M\}$. Besides, assume *N* users, denoted by $\mathcal{N} = \{1, 2, \ldots, N\}$, each with $\mathcal{J} = \{1, 2, \ldots, J\}$ independent computation tasks to be executed. The respective task can be expressed as [8]

$$F\_{nj} = \left\{ A\_{in}(nj),\, Q\_{req}(nj),\, T\_{\max}(nj) \right\}, \quad n \in \mathcal{N},\ j \in \mathcal{J}, \tag{1}$$

where $A_{in}(nj)$ is the size of the computation input data of the *j*-th task demanded by the *n*-th user, $T_{\max}(nj)$ represents the maximum tolerable latency of the *j*-th task required by the *n*-th user, and $Q_{req}(nj)$ is the total number of central processing unit (CPU) cycles needed to execute the task.
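
As a minimal illustration, the task tuple of Eq. (1) can be represented as a small data structure; the field names and numeric values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """Computation task F_nj of Eq. (1); field names are illustrative."""
    a_in: float   # A_in(nj): input data size in bits
    q_req: float  # Q_req(nj): required CPU cycles
    t_max: float  # T_max(nj): maximum tolerable latency in seconds

# Hypothetical task: 2 Mb of input data, 1 Gcycle of work, 50 ms deadline
task = Task(a_in=2e6, q_req=1e9, t_max=50e-3)
```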

In addition, to express the different related models, we assume a quasi-static scenario in which the set of users remains unchanged in the course of computation offloading, although it can change over different periods. Besides, we assume a perfect instantaneous channel that remains unchanged during each packet transmission. Based on this, we present the following models for an integrated fog-cloud architecture with NOMA implementation.

#### **5.1 Communication model**


When an *n*-th user with a number of offload tasks and transmission power, *pmn*, transmits a signal, *xmn*, to the *m*-th fog node, the received signal, *ymn*, can be expressed as

$$y\_{mn} = \underbrace{\sqrt{p\_{mn}}\, h\_{mn} x\_{mn}}\_{\text{Desired signal}} + \underbrace{\sum\_{i \neq n, i \in \mathcal{N}} \sqrt{p\_{mi}}\, h\_{mi} x\_{mi}}\_{\text{Intra-cell interference}} + \underbrace{z\_{mn}}\_{\text{Noise}} \tag{2}$$

where the first term represents the desired signal from the *n*-th user, the second term is the intra-cell interference suffered by the *n*-th user from other users being served by the *m*-th fog node on the same frequency band, the third term, *zmn*, denotes the additive white Gaussian noise (AWGN) with zero mean and variance *δ*<sup>2</sup> , and *hmn* denotes the channel gain for the *n*-th user that connects to the *m*-th fog node.

It is noteworthy that the transmitted signals from various users to each fog node are the desired signals. However, they bring about interference with each other. Also, as individual users that are connected to a specified fog node suffer different channel conditions, the interference can be alleviated and the superimposed signals can be decoded sequentially by each fog node using SIC [8, 33].

In the linear interference cancelation techniques, the desired signal is detected, but other signals are regarded as interference. So, the SIC concept is based on the fact that the signal with the highest signal-to-interference-plus-noise-ratio (SINR) can be detected first; its interference is then canceled from the remaining streams [34]. Furthermore, regarding the integrated computing platform, the signal received by a specified fog node from the user with the highest channel gain is potentially the strongest signal, so it is decoded first at the fog node. Afterward, the strongest signal is removed from the streams. The same approach is then applied to the user with the second-highest channel gain, and so on. Consequently, the users' signals on the same frequency band can be sorted in relation to the channel gains. In this context, the users served by the *m*-th fog node can be arranged in descending order as [8]

$$|h\_{m1}|^2 \ge |h\_{m2}|^2 \ge \cdots \ge |h\_{mN}|^2 \text{ } \forall n \in \mathcal{N} \tag{3}$$

Using Eq. (3), every single fog node can subtract and decode the desired signals. Besides, the received SINR, *γ*, of the *n*-th user being served through the *m*-th fog node can be defined as [8, 35, 36]

$$\gamma\_{mn}(p\_{mn}) = \frac{p\_{mn}|h\_{mn}|^2}{\delta^2 + \sum\_{i=n+1}^{N} p\_{mi}|h\_{mi}|^2} \tag{4}$$
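
To make the decoding order concrete, the following sketch computes the per-user SINRs of Eq. (4) after sorting users by channel gain as in Eq. (3); all numeric values are assumed for illustration:

```python
import numpy as np

def sic_sinrs(p, h, noise_var):
    """Per-user SINR under SIC, Eqs. (3)-(4): users are sorted by
    descending |h|^2; user n sees interference only from the
    not-yet-decoded (weaker) users i = n+1..N."""
    order = np.argsort(-np.abs(h) ** 2)           # Eq. (3): descending channel gains
    p, h = p[order], h[order]
    g = p * np.abs(h) ** 2                        # received powers after sorting
    # residual interference for user n: sum of g[n+1:], Eq. (4)
    residual = np.concatenate([np.cumsum(g[::-1])[::-1][1:], [0.0]])
    return g / (noise_var + residual), order

# Toy example: 3 users on one fog node (assumed powers and gains)
p = np.array([0.5, 0.8, 1.0])                     # transmit powers (W)
h = np.array([0.9, 0.3, 0.6])                     # channel gains
sinr, order = sic_sinrs(p, h, noise_var=0.01)
```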


Furthermore, the resultant transmitted data rate of the *n*-th user at the *m*-th fog node can be expressed as [8, 35, 37]

$$R\_{mn}(w\_{mn}, p\_{mn}) = w\_{mn} \log\_2 \left( 1 + \gamma\_{mn}(p\_{mn}) \right),\tag{5}$$

where *wmn* represents the occupied frequency band of the *n*-th user that is served by the *m*-th fog node and W denotes the total frequency band.
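
Continuing the previous sketch, the achievable rate of Eq. (5) follows directly from the SINR; the 1 MHz sub-band below is an assumed share of the total band W:

```python
import numpy as np

def rate_bps(w_mn, sinr_mn):
    """Achievable rate of Eq. (5): R_mn = w_mn * log2(1 + gamma_mn)."""
    return w_mn * np.log2(1.0 + sinr_mn)

# e.g., a 1 MHz sub-band and an SINR of 0.916 from the previous sketch
r_mn = rate_bps(w_mn=1e6, sinr_mn=0.916)   # ~0.94 Mb/s
```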

Moreover, as a result of the limited resources in the fog node, it is challenging to concurrently fulfill all the services demanded by the end users. So, to acquire the demanded services, each end-user should have a satisfaction function for the evaluation of the allocated resources, *ξ*. The associated satisfaction function, *χ*, can be defined as [26]

$$\chi(\xi) = \begin{cases} \log\left(\xi + 1\right), & 0 \leqslant \xi < \xi\_{\max} \\ \log\left(\xi\_{\max} + 1\right), & \xi \geqslant \xi\_{\max} \end{cases},\tag{6}$$

where *ξ*max denotes the maximum resource that is required to offer the demanded service.

Moreover, based on the satisfaction function, the overall satisfaction of the entire set of end users served by a fog node can be expressed as [26]

$$\chi\_{g} = \sum\_{i=1}^{n} \tau\_{i} \cdot \chi\_{i}(\xi\_{i}), \tag{7}$$

and the major objective of the fog node is to maximize this global satisfaction. This can be formulated as [26]

$$\begin{aligned} \textbf{Objective:} \quad & \max \left\{ \chi\_{g} \right\} \\ \textbf{s.t.} \quad & \xi\_{1} + \xi\_{2} + \cdots + \xi\_{n} \leqslant \Xi \\ & \tau\_{1} + \tau\_{2} + \cdots + \tau\_{n} = 1 \\ & \xi\_{1}, \xi\_{2}, \ldots, \xi\_{n} \geqslant 0 \end{aligned} \tag{8}$$

where *χ<sup>g</sup>* denotes the overall satisfaction of the entire set of end users, *ξ<sup>i</sup>* denotes the resource allocated to the *i*-th end-user, Ξ represents the resource possessed by the fog node, and *τ<sup>i</sup>* represents the associated priority level of the *i*-th end-user.
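
As an illustration of Eqs. (6)-(8), the sketch below allocates a fog node's resource greedily in descending priority order; this is a simple heuristic under assumed demands and priorities, not the optimal solver of the original problem [26]:

```python
import numpy as np

def satisfaction(xi, xi_max):
    """Eq. (6): log-satisfaction, saturating once xi reaches xi_max."""
    return np.log(np.minimum(xi, xi_max) + 1.0)

def allocate_by_priority(xi_max, tau, capacity):
    """Heuristic for Eq. (8): serve end-users in descending priority,
    granting each up to xi_max_i until the fog resource is exhausted."""
    xi = np.zeros_like(xi_max)
    remaining = capacity
    for i in np.argsort(-tau):                      # highest priority first
        xi[i] = min(xi_max[i], remaining)
        remaining -= xi[i]
    chi_g = np.sum(tau * satisfaction(xi, xi_max))  # Eq. (7)
    return xi, chi_g

xi_max = np.array([4.0, 2.0, 6.0])   # per-user maximum demands (assumed units)
tau = np.array([0.5, 0.3, 0.2])      # priority levels, summing to 1
xi, chi_g = allocate_by_priority(xi_max, tau, capacity=8.0)
```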

Furthermore, using Eqs. (7) and (8), the resources of the fog node can be allocated to all end-devices while the overall maximum satisfaction is achieved. Moreover, the fog nodes are connected and are capable of sharing their resources to deliver the services requested by the end-users. Assume a scenario in which a fog node does not possess sufficient resources to offer the locally requested services; it can then shift certain requested services with a low priority level to neighboring fog nodes with spare resources for processing. The spare resources, $\Xi_{spare}^{f}$, of the *m*-th fog node can be defined as

$$\Xi\_{spare}^{f} = \Xi^{f} - \sum\_{i=1}^{n} \xi\_{i}^{\max}, \tag{9}$$


where $\xi_{i}^{\max}$ denotes the maximum resource required by the *i*-th end-user and $\Xi^{f}$ represents the total resource of the *m*-th fog node.

#### **5.2 Fog computing model**


The fog computing model is based on the required tasks and the associated overhead. For instance, each fog node will receive task offloading requests from the users. Based on its resource capabilities, the respective node is expected to process the requested computational tasks. As a result of this, certain overhead regarding time and energy will be incurred to transmit and process at the fog nodes [8, 37]. The associated overheads are discussed in the following subsections.

#### *5.2.1 Task processing latency*

Based on the communication model presented in subsection 5.1, the transmission latency regarding computation offloading can be evaluated. Assuming that the *m*-th fog node receives computation task *Fnj* from the *n*-th user, the transmission latency incurred when the *n*-th user sends data to offload the *j*-th task can, using Eq. (5), be expressed as

$$T\_{mnj,t}^{f} = \frac{A\_{in}(nj)}{R\_{mn}} \tag{10}$$

As aforementioned, each fog node possesses limited computation capabilities. Assume that the *m*-th fog node, with computing capability $C_{mn}^{f}$, is assigned to the *n*-th user; the related computation execution time $T_{mnj,e}^{f}$ can then be defined as

$$T\_{mnj,e}^{f} = \frac{Q\_{req}(nj)}{C\_{mn}^{f}} \tag{11}$$

Furthermore, consider a scenario in which each fog node is equipped with a CPU that is based on non-preemptive allocation. Also, assume that the computing resource is assigned to an individual user each time until its required tasks are accomplished. Moreover, assume the processing sequence $\mathbf{q}_{m} = \left\{ q_{ms} \mid q_{ms} \in \{1, 2, \ldots, N\},\ q_{ms} \neq q_{mn},\ s, n \in \mathcal{N} \right\}$ in the *m*-th fog node, in which the tasks are executed in ascending order of $q_{m}$. In this scenario, for task *j*, the queuing delay can be defined as

$$T\_{mnj,q}^{f} = \sum\_{s,\, q\_{ms} < q\_{mn}}^{N} b\_{ms} T\_{msj,e}^{f} \tag{12}$$

where *bms* represents the user-scheduling outcome that specifies whether the *s*-th user selects the *m*-th fog node for offloading. The selection criterion can be defined as [8, 37]

$$b\_{ms} = \begin{cases} 1 & \text{if the } s\text{-th user selects (is associated with) the } m\text{-th fog node} \\ 0 & \text{otherwise} \end{cases} \tag{13}$$

The aggregate latency incurred by fog computing when the *n*-th user offloads the *j*-th task to the *m*-th fog node can be defined as [8, 36, 37]

$$T\_{mnj}^{f} = T\_{mnj,t}^{f} + T\_{mnj,e}^{f} + T\_{mnj,q}^{f} \tag{14}$$
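
The latency components of Eqs. (10)-(12) and their aggregate in Eq. (14) can be computed directly; the numbers below are assumed for illustration:

```python
def fog_latency(a_in, q_req, r_mn, c_f, queued_exec_times):
    """Aggregate fog latency of Eq. (14) for one task: transmission
    delay (10) + execution delay (11) + queuing delay (12), where
    queued_exec_times holds the execution times of tasks scheduled
    ahead of this user on the same fog node (b_ms = 1)."""
    t_t = a_in / r_mn             # Eq. (10): transmission delay
    t_e = q_req / c_f             # Eq. (11): execution delay
    t_q = sum(queued_exec_times)  # Eq. (12): queuing delay
    return t_t + t_e + t_q        # Eq. (14)

# Assumed numbers: 2 Mb task, 1 Gcycle of work, 10 Mb/s uplink, 5 GHz CPU,
# and two tasks of 0.1 s and 0.25 s already queued ahead
t_f = fog_latency(2e6, 1e9, 10e6, 5e9, [0.1, 0.25])  # -> 0.2 + 0.2 + 0.35 s
```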

#### *5.2.2 Energy consumption*

The energy consumptions for transmitting and processing tasks are the main offloading energy consumptions [30]. When an *n*-th user offloads the *j*-th task to an *m*-th fog node, the associated energy consumption can be defined as

$$\begin{aligned} E\_{mnj}^{f} &= \overbrace{E\_{mnj,t}^{f}}^{\text{Transmission energy consumption}} + \overbrace{E\_{mnj,e}^{f}}^{\text{Computing energy consumption}} \\ &= T\_{mnj,t}^{f}\, p\_{mn} + \eta\_{m} C\_{mn}^{f} T\_{mnj,e}^{f}, \end{aligned} \tag{15}$$

where *η<sup>m</sup>* represents the coefficient that signifies the energy consumption per CPU cycle of an *m*-th fog node. The first and second terms are the transmission and computing energy consumptions at the *m*-th fog node, respectively.
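
For illustration, Eq. (15) can be evaluated with assumed values (note that $\eta_m C_{mn}^{f} T_{mnj,e}^{f}$ reduces to $\eta_m Q_{req}(nj)$, i.e., energy per cycle times the executed cycles):

```python
def fog_energy(t_t, t_e, p_mn, eta_m, c_f):
    """Fog-side energy of Eq. (15): transmission energy plus
    computing energy at the m-th fog node."""
    return t_t * p_mn + eta_m * c_f * t_e

# Continuing the latency example; eta_m in J/cycle (assumed value)
e_f = fog_energy(t_t=0.2, t_e=0.2, p_mn=0.5, eta_m=2e-10, c_f=5e9)
# -> 0.2*0.5 + 2e-10*5e9*0.2 = 0.1 + 0.2 = 0.3 J
```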

#### **5.3 Cloud computing model**

As aforementioned, the fog nodes have relatively limited resources regarding memory (storage), power, and computing capacity [15]. Therefore, when resource-limited fog nodes are not capable of accomplishing the requested computational tasks due to their constrained resources, the tasks have to be sent to the cloud center via the backhaul links [8].

Furthermore, the tasks can be efficiently executed at the cloud centers owing to their sufficiently high resource capabilities. It should be noted that additional overhead regarding energy and time will be incurred in the process of forwarding tasks to the remote cloud center [8]. The related overheads are considered in the following subsections.

#### *5.3.1 Task processing latency*

Suppose the backhaul link data rate, $R_{m}^{b}$, is available between the *m*-th fog node and the remote cloud center with computation capability given by *Cc*. Besides, given the sufficient resources and powerful computation capabilities of the cloud center, the tasks from various users can be executed instantly. In this context, the queuing delay can be omitted in the processing latency analysis of cloud computing. Following the fog computing model analysis presented in subsection 5.2, the aggregate latency that will be incurred in forwarding the *j*-th task of the *n*-th user from the *m*-th fog node to the remote cloud center can be expressed as [8, 35]

$$\begin{aligned} T\_{mnj}^{c} &= T\_{mnj,t}^{f} + T\_{mnj,t}^{c} + T\_{mnj,e}^{c} \\ &= \frac{A\_{in}(nj)}{R\_{mn}} + \frac{A\_{in}(nj)}{R\_{m}^{b}} + \frac{Q\_{req}(nj)}{C\_{c}} \end{aligned} \tag{16}$$

#### *5.3.2 Energy consumption*

The aggregate energy consumption of an *m*-th fog node that offloads a *j*-th task of an *n*-th user to a remote cloud center can be defined as [8, 36]


$$\begin{aligned} E\_{mnj}^{c} &= E\_{mnj,t}^{f} + E\_{mnj,t}^{c} + E\_{mnj,e}^{c} \\ &= T\_{mnj,t}^{f}\, p\_{mn} + T\_{mnj,t}^{c}\, p\_{m}^{b} + \eta\_{c} T\_{mnj,e}^{c} C\_{c} \end{aligned} \tag{17}$$

where *η<sup>c</sup>* represents a coefficient that signifies the energy consumed per CPU cycle of the cloud, and $p_{m}^{b}$ denotes the power allocated by an *m*-th fog node for forwarding tasks to the cloud center.
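
A short sketch evaluating Eqs. (16) and (17) with assumed link rates, CPU speeds, and powers is given below; comparing its outputs with the fog-side overheads of Eqs. (14) and (15) supports the offloading decision:

```python
def cloud_overheads(a_in, q_req, r_mn, r_b, c_c, p_mn, p_b, eta_c):
    """Cloud-side latency and energy per Eqs. (16)-(17); the queuing
    delay at the cloud is omitted, as assumed in subsection 5.3.1."""
    t_user, t_bh, t_exec = a_in / r_mn, a_in / r_b, q_req / c_c
    t_c = t_user + t_bh + t_exec                             # Eq. (16)
    e_c = t_user * p_mn + t_bh * p_b + eta_c * t_exec * c_c  # Eq. (17)
    return t_c, e_c

# Assumed values: 100 Mb/s backhaul, 20 GHz cloud CPU, eta_c in J/cycle
t_c, e_c = cloud_overheads(a_in=2e6, q_req=1e9, r_mn=10e6, r_b=100e6,
                           c_c=20e9, p_mn=0.5, p_b=2.0, eta_c=1e-10)
# -> t_c = 0.2 + 0.02 + 0.05 = 0.27 s; e_c = 0.1 + 0.04 + 0.1 = 0.24 J
```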

### **6. Trends toward intelligent-based fog computing**

This section focuses on the current trends toward intelligent integrated computing networks. As aforementioned, different challenges regarding scalability, management, and optimization arise in the fog-cloud-based integrated computing architecture. For efficient management of resource-limited fog nodes and optimization of the cloud computing platform, the trend is toward the adoption of AI-enabled techniques in the *Network 2030* (6G and Beyond) [4, 10]. For instance, apart from intelligent driving that the 6G network is anticipated to support, it will also offer a promising path toward the industrial revolution, where future intelligent factories are anticipated to support densely concentrated intelligent mobile robots. Based on this, a number of new service classes like ultra-high data density (uHDD), ubiquitous mobile ultra-broadband (uMUB), and ultra-high-speed-with-low-latency communications (uHSLLC) have been defined [38, 39]. A typical instance of an integrated hierarchical computing platform supported by AI techniques is illustrated in **Figure 3**.

**Figure 3.**
*A typical AI/fog-enabled computing architecture. uHSLLC: ultra-high-speed-with-low-latency communications; uMUB: ubiquitous mobile ultra-broadband; and uHDD: ultra-high data density.*

Furthermore, in the considered integrated hierarchical computing AI-enabled platform, we consider the fog radio access networks (F-RANs) as a case study. In this context, we discuss how F-RANs can facilitate the deployment of hierarchical AI in wireless networks. Besides, consideration is given to the influences of AI in making F-RANs smarter in rendering better services to mobile devices.

In addition, regarding the influence of F-RANs on the deployment of AI (F-RAN-enabled AI), the F-RANs present hierarchical layers (cloud, fog, and IoT) that can be exploited. So, the F-RAN offers heterogeneous processing capability that can be leveraged for hierarchical intelligence across the integrated layers through centralized, distributed, and federated learning. Moreover, to significantly alleviate the memory limitations of mobile devices, cross-layer learning can also be employed. Conversely, concerning the influence of AI on the F-RANs (AI-enabled F-RAN), AI presents F-RANs with techniques and technologies for effective support of the huge traffic. Likewise, it helps in making intelligent decisions in the networks. These features can be harnessed through the implementation of ML tools such as reinforcement learning (RL) algorithms and deep neural networks (DNNs) [28, 29, 40, 41]. For instance, DNNs can be adopted for data processing, while RL algorithms can be employed for optimization and decision-making [40]. We expatiate on the relationship between the F-RANs and AI in the following subsections.

#### **6.1 F-RAN-enabled AI**

The F-RAN heterogeneous platforms with varied memory and computational resources offer hierarchical application scenarios for AI. Based on this, hierarchical intelligence, such as cloud intelligence, fog intelligence, and on-device intelligence, can be achieved across the layers [40]. In this part, we present learning-based intelligence schemes for the F-RAN-enabled AI.


### *6.1.1 Centralized learning-based cloud intelligence*

The centralized cloud is not only endowed with considerable access to a global dataset but also has a significant amount of storage and computing power. Consequently, with sufficient data samples, centralized training of DNN algorithms can leverage the powerful cloud intelligence. Owing to its flexibility and pay-as-you-go capability, cloud intelligence-based services can be offered on-demand; in this regard, they can scale in accordance with the subscribers' requirements [42].

Moreover, in centralized learning, it is assumed that the mobile devices transmit data to the central cloud. However, this comes at the expense of communication overhead regarding bandwidth and energy. Usually, it is challenging to meet real-time application demands because of the incurred latency. Besides, due to the privacy concerns of the mobile devices, attention should also be paid to the transmission of the generated data. Therefore, these concerns demand alternative solutions. A viable approach is based on the exploitation of the distributed architecture and processing capabilities of the mobile devices and/or fog nodes in the development of distributed ML techniques [40, 42].

#### *6.1.2 Distributed learning-based fog intelligence*

In edge computing, cloud resources are leveraged by the fog nodes. Also, the service latency is significantly reduced because the fog nodes are in proximity to the devices; this proximity also helps in enhancing privacy. Moreover, given the distributed nature of the fog nodes and mobile devices, edge ML algorithms are normally implemented in a distributed manner. In this regard, the training samples are distributed randomly over a considerable number of mobile devices and fog nodes. Each fog node executes training tasks using the local data samples gathered from the associated mobile devices. In addition, when the local model state information (MSI) is aggregated, a global model can be acquired from the fog nodes. Nevertheless, devices in this scheme (distributed ML) have to send their respective data to the fog nodes, a procedure that can also violate data privacy. In view of this, federated learning can be employed to further enhance privacy [11, 40, 43].

#### *6.1.3 Federated learning-based on-device intelligence*

There have been unprecedented improvements in the processing capability of mobile devices, making joint training and inference more viable. The learning in this layer can be achieved using a federated learning architecture [44]. This approach is contingent on the periodic computation and exchange of updated MSI versions by the individual mobile devices. Consequently, rather than sending the raw data, only the MSI computed from their local datasets is exchanged. At the fog node, the distributed MSI updates of the associated mobile devices are aggregated. Based on this, low-latency inference results can be acquired; these results can be used in delay-sensitive applications to respond quickly to local events. Apart from being a swifter solution, this approach also helps in enhancing data privacy and is a promising solution for privacy-sensitive applications. In addition, it is also possible to aggregate the distributed MSI updates of the fog nodes to achieve a global model in the cloud. It is noteworthy that this learning process is appropriate for training a lightweight AI model with fewer parameters on mobile devices. Nevertheless, for a remarkably large number of mobile devices and an AI model with huge parameters, wireless data aggregation of MSI updates is challenging [11, 40, 43].
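
A minimal sketch of the fog-side aggregation step described above, weighting each device's MSI update by its local dataset size (values assumed), is:

```python
import numpy as np

def fedavg(updates, sizes):
    """Federated averaging of device MSI updates at a fog node:
    each device's parameter vector is weighted by its local dataset
    size. A sketch of the aggregation step only, not a full FL loop."""
    w = np.asarray(sizes, dtype=float)
    w /= w.sum()                                  # normalize weights
    return sum(wi * ui for wi, ui in zip(w, updates))

# Three devices report updated model parameters (assumed shapes/values)
updates = [np.array([0.2, -0.1]), np.array([0.4, 0.0]), np.array([0.1, 0.3])]
global_msi = fedavg(updates, sizes=[100, 300, 50])
```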

#### *6.1.4 Cross-layer learning-based hierarchical intelligence*

In a scenario where the AI model size exceeds the memory size of the mobile devices, the mobile devices will be unable to complete the entire model training by themselves. In such a case, the model should be partitioned into sections and distributed over the network entities. So, the outputs of the lower layers (mobile devices) will be aggregated prior to transmission to the fog nodes (intermediate layers). Likewise, the output of the intermediate layers will be aggregated ahead of transmission to the top cloud layers. One of the advantages of cross-layer learning is that it aids system scalability. Besides, the demand on the mobile devices' memory size can also be reduced. Nevertheless, cross-layer DNNs demand stringent training algorithms [40]. A minimal partitioned forward pass is sketched below.
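
The following sketch illustrates such a partition under assumed layer sizes: a small toy MLP whose first layer lives on the device, second on the fog node, and output layer in the cloud; all weights and dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical partition of a small MLP across the hierarchy
w_dev = rng.standard_normal((16, 8))    # layer held on the mobile device
w_fog = rng.standard_normal((8, 4))     # layer held on the fog node
w_cloud = rng.standard_normal((4, 2))   # output layer held in the cloud

def device_forward(x):
    return np.maximum(x @ w_dev, 0.0)   # activation sent up to the fog node

def fog_forward(a):
    return np.maximum(a @ w_fog, 0.0)   # activation sent up to the cloud

def cloud_forward(a):
    return a @ w_cloud                  # final inference at the cloud

x = rng.standard_normal((1, 16))        # one input sample on the device
y = cloud_forward(fog_forward(device_forward(x)))
```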

#### **6.2 AI-enabled F-RAN**

The significant growth in the bandwidth demanded by radio access networks is mainly due to the proliferation of mobile devices and the various bandwidth-intensive multimedia applications they support. This results in a traffic explosion that is challenging for the current mobile networks. To address the issue, various network architectures that exploit different types of resources have been presented. However, resource management in such emerging architectures is very demanding. Based on this, innovative techniques for effective data processing and efficient network optimization are required [20, 40]. The following subsections present AI techniques as viable tools for attending to the associated network challenges.


#### *6.2.1 Intelligent data processing*

Given the growing diversity of F-RAN applications, the multimedia data to be supported will be heterogeneous, huge, and high-dimensional. Therefore, direct raw data transmission to the cloud and fog nodes will bring about high communication overhead. Besides, direct utilization of raw data for network optimization can cause high computing overhead and low-efficiency issues. Moreover, there has been considerable advancement in the DNNs that facilitate data processing. For instance, convolutional operations have been exploited by convolutional neural networks (CNNs) for spatial feature extraction from input signals [40].

#### *6.2.2 Intelligent network optimization*

There are a number of ML techniques, such as unsupervised, supervised, and RL algorithms, that can be employed for efficient network optimization. For instance, as supervised learning focuses on mapping inputs to outputs in accordance with the training samples, DNN-based supervised learning is an attractive scheme for beamforming design and power control of fog nodes. On the other hand, unsupervised learning is based on inferring the underlying data structure without any labels, so it is appropriate for empirical analysis such as computation offloading, clustering, and resource allocation in the F-RAN. Besides, in RL, sequential actions are taken by an agent based on environment observations so as to maximize the predicted cumulative return [42, 45].
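
As a toy illustration of the RL viewpoint, the epsilon-greedy sketch below learns which of several candidate fog nodes yields the lowest average latency; the environment and reward values are purely assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Epsilon-greedy agent choosing among M candidate fog nodes,
# with reward = negative observed latency (illustrative values only)
M, eps = 3, 0.1
q = np.zeros(M)   # estimated value (negative mean latency) per node
n = np.zeros(M)   # selection counts

def observed_reward(m):
    # stand-in for the environment: node 1 is the best on average
    means = [-0.30, -0.15, -0.40]
    return means[m] + 0.05 * rng.standard_normal()

for _ in range(1000):
    m = int(rng.integers(M)) if rng.random() < eps else int(np.argmax(q))
    r = observed_reward(m)
    n[m] += 1
    q[m] += (r - q[m]) / n[m]   # incremental mean update
```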

#### **7. Conclusion**

In this chapter, we have presented a comprehensive overview of the evolution of computing paradigms and have highlighted their associated features. Moreover, different models that focus on effective resource allocation across an integrated computing platform have been presented. Besides, a comprehensive discussion on efficient resource management and optimization of the 6G fog computing platform, to meet strict on-device constraints and reliability, end-to-end latency, bit-rate, and security requirements, has been presented. In this context, we have presented AI as a resourceful technique for the achievement of high-level automation in the integrated computing heterogeneous platform.

**Author details**

Isiaka A. Alimi<sup>1</sup>\*, Romil K. Patel<sup>1,2</sup>, Aziza Zaouga<sup>1</sup>, Nelson J. Muga<sup>1</sup>, Qin Xin<sup>3</sup>, Armando N. Pinto<sup>1,2</sup> and Paulo P. Monteiro<sup>1,2</sup>

1 Instituto de Telecomunicações and University of Aveiro, Portugal

2 Department of Electronics, Telecommunications and Informatics, University of Aveiro, Aveiro, Portugal

3 Department of Science and Technology, University of the Faroe Islands, Tórshavn, Faroe Islands

\*Address all correspondence to: iaalimi@ua.pt

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### **Acknowledgements**

This work is supported by the European Regional Development Fund (FEDER), and Internationalization Operational Programme (COMPETE 2020) of the Portugal 2020 (P2020) framework, under the projects DSPMetroNet (POCI-01-0145-FEDER-029405) and UIDB/50008/2020-UIDP/50008/2020 (DigCORE). It is also supported by the projects 5G (POCI-01-0247-FEDER-024539), SOCA (CENTRO-01-0145-FEDER-000010), ORCIP (CENTRO-01-0145-FEDER-022141), and RETIOT (POCI-01-0145-FEDER-016432).

