Trends in Cloud Computing Paradigms: Fundamental Issues, Recent Advances, and Research Directions toward 6G Fog Networks

*Isiaka A. Alimi, Romil K. Patel, Aziza Zaouga, Nelson J. Muga, Qin Xin, Armando N. Pinto and Paulo P. Monteiro*

### **Abstract**

There has been significant research interest in various computing-based paradigms, such as cloud computing, the Internet of Things, fog computing, and edge computing, due to their associated advantages. In this chapter, we present a comprehensive review of these architectures and their associated concepts. Moreover, we consider different enabling technologies that facilitate the evolution of computing paradigms. In this context, we focus mainly on fog computing, considering its fundamental issues and recent advances. Besides, we present further research directions toward the sixth-generation (6G) fog computing paradigm.

**Keywords:** 5G, 6G, cloud computing, edge computing, fog computing, Internet of Things, mobile edge computing

### **1. Introduction**

The need to achieve excellent Quality of Service (QoS) to facilitate effective Quality of Experience (QoE) is one of the notable factors that has brought about substantial evolution in computing paradigms. For instance, the cloud computing paradigm has been presented to ensure the effective development and delivery of various innovative Internet services [1]. Also, the unprecedented development of various applications and the growing number of smart mobile devices supporting the Internet of Things (IoT) have imposed significant latency, bandwidth, and connectivity constraints on the centralized paradigm of cloud computing [2–4]. To address these limitations, research interest has been shifting toward decentralized paradigms [2].

A good instance of a decentralized paradigm is edge computing. Conceptually, edge computing focuses on rendering several services at the network edge to alleviate the associated limitations of cloud computing. Also, a number of edge computing implementations, such as cloudlet computing (CC), mobile cloud computing (MCC), and mobile edge computing (MEC), have been presented [2, 5–7]. Besides, another edge computing evolution is fog computing. It offers an efficient architecture that mainly focuses on both horizontal and vertical resource distribution in the Cloud-to-Things continuum [2]. In this light, it goes beyond a mere cloud extension and serves as a merging platform for both the cloud and the IoT, facilitating effective interaction in the system. Nevertheless, these paradigms demand further research efforts due to their demanding resource management and the massive traffic to be supported by the network. For instance, fog nodes are typically equipped with limited computing and storage resources, which may prevent them from being a good solution for meeting the requests of large-scale users. Conversely, cloud resources are usually deployed far away from users, which makes cloud servers unable to support services that demand low latency. Based on this, there is a need to integrate fog and other cloud-based computing platforms with an effective multiple access technique for efficient resource management across the fog-cloud platform. In this regard, the overall performance can be improved and effective computation offloading can be offered. One such scheme for performance enhancement is non-orthogonal multiple access (NOMA) [8].
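To make the NOMA idea above concrete, the following minimal sketch computes the achievable downlink rates for power-domain NOMA with successive interference cancellation (SIC). All numbers (power budget, channel gains, the 0.8/0.2 power split) are illustrative assumptions, not values from the cited works.

```python
import math

def noma_downlink_rates(p_total, gains, noise=1.0):
    """Achievable downlink rates (bits/s/Hz) for power-domain NOMA.

    Users are ordered from weakest to strongest channel gain; the weaker
    user gets the larger power share and is decoded first, while stronger
    users apply SIC to remove the weaker users' signals before decoding
    their own. `gains` maps user -> channel power gain |h|^2.
    """
    users = sorted(gains, key=gains.get)  # weakest channel first
    # Illustrative fixed power split: more power to the weaker user.
    shares = [0.8, 0.2] if len(users) == 2 else [1 / len(users)] * len(users)
    rates = {}
    for i, u in enumerate(users):
        p_u = shares[i] * p_total
        # Residual interference comes from users decoded after u.
        interference = sum(shares[j] for j in range(i + 1, len(users))) * p_total * gains[u]
        sinr = p_u * gains[u] / (noise + interference)
        rates[u] = math.log2(1 + sinr)
    return rates

# Two users sharing one resource block: a cell-edge (weak) and a
# cell-center (strong) user.
print(noma_downlink_rates(p_total=10.0, gains={"edge": 0.1, "center": 1.0}))
```

With these hypothetical values, the strong user cancels the weak user's signal entirely, so both users obtain a nonzero rate on the same resource block, which is the multiplexing gain NOMA offers over orthogonal access.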

*DOI: http://dx.doi.org/10.5772/intechopen.98315*

In addition, there have been significant research efforts toward sixth generation (6G) networks. It is envisaged that various technologies, such as device-to-device communications, Big Data, cloud computing, edge caching, edge computing, and the IoT, will be well supported by 6G mobile networks [9]. Meanwhile, 6G is envisioned to be based on major innovative technologies such as the super IoT, mobile ultra-broadband, and artificial intelligence (AI) [3, 10]. It is envisaged that terahertz (THz) communications will be a viable solution for supporting mobile ultra-broadband, while the super IoT can be achieved with symbiotic radio and satellite-assisted communications. Also, machine learning (ML) methods are expected to be promising solutions for AI networks [10]. Based on these innovative technologies, the beyond-5G network is envisaged to offer a considerable improvement over 5G by employing AI to automate and optimize system operation [11].

This chapter presents the evolution of various computing paradigms and highlights their associated features. Also, different related technological implementations are comprehensively discussed. Besides, it presents different models that focus on effective resource allocation across an integrated computing platform for performance enhancement. Moreover, it presents AI as a resourceful technique for achieving the high-level automation needed for efficient management and optimization of the 6G fog computing platform. This chapter is organized as follows. Section 2 presents a comprehensive discussion on the evolution of computing paradigms with related concepts and features. Section 3 focuses on the fog architectural model. We discuss the challenges of fog computing and its integration with other computing platforms in Section 4. In Section 5, we present some models for resource allocation in an integrated fog-cloud hierarchical architecture. Section 6 focuses on the trends toward intelligent integrated computing networks, and concluding remarks are given in Section 7.

### **2. Evolution of computing paradigms**

This section presents the evolution of computing paradigms. In this regard, related concepts, features, and architectural models are considered.


#### **2.1 Cloud computing**


As aforementioned, cloud computing has been in the mainstream of research and has been revolutionizing the information and communication technology (ICT) sector. Based on the National Institute of Standards and Technology (NIST) definition, cloud computing presents an enabling platform that offers ubiquitous and on-demand network access to a shared pool of computing resources such as storage, servers, networks, applications, and services. These interconnected resource pools can be conveniently configured and provisioned with minimal interaction. Besides cost-effectiveness regarding support for a pay-per-use policy and expenditure savings, some of the key inducements for the adoption of the cloud computing paradigm are easy and ubiquitous access to applications and data [12].

It is noteworthy that with the cloud computing paradigm, network entities regarding control, computing, and data storage are centralized in the cloud. For instance, storage, computing, and network management functions have been moved to different network places such as backbone IP networks, centralized data centers, and cellular core networks [13]. However, it is challenging for the centralized cloud model to meet the stringent requirements of the emerging IoT. The IoT comprises a variety of computing devices that are connected through the Internet to support numerous applications and services [2, 13]. In this context, things such as smart meters, tablets, smartphones, robots, wireless routers, sensors, actuators, smart vehicles, and radio-frequency identification (RFID) tags are Internet-connected to ensure a more convenient standard of living [2, 14–16]. Consequently, the centralized paradigm offered by cloud computing is insufficient to attend to the stringent requirements of the IoT. Some of the fundamental challenges of the IoT are presented in this section.

#### *2.1.1 Latency requirements*

One of the main challenges of the IoT is its stringent latency requirements. For instance, many industrial control systems require end-to-end latencies of a few milliseconds between the control node and the sensor [17, 18]. Examples of such applications are oil and gas systems, manufacturing systems, goods packaging systems, and smart grids. In addition, end-to-end latencies below a few tens of milliseconds are required by some time-sensitive (high-reliability and low-latency) IoT applications, such as drone flight control, vehicle-to-roadside and vehicle-to-vehicle communications, gaming, virtual reality, and other real-time applications. However, these requirements are beyond what a conventional cloud can effectively support [13].
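A quick back-of-the-envelope calculation shows why a distant cloud cannot meet few-millisecond control loops: propagation delay alone already consumes the budget. The distances below are illustrative assumptions, and real latency adds queuing, processing, and transmission delays on top of propagation.

```python
# Round-trip propagation delay for a remote cloud data center versus a
# nearby fog node, using the approximate speed of light in optical fiber.
C_FIBER = 2e8  # m/s, approximate propagation speed in fiber

def rtt_ms(distance_m: float) -> float:
    """Round-trip propagation delay in milliseconds."""
    return 2 * distance_m / C_FIBER * 1e3

cloud_rtt = rtt_ms(2_000_000)  # hypothetical data center ~2000 km away
fog_rtt = rtt_ms(1_000)        # hypothetical fog node ~1 km away

print(f"cloud: {cloud_rtt:.2f} ms, fog: {fog_rtt:.4f} ms")
```

Under these assumptions the distant cloud already incurs a 20 ms round trip before any processing, exceeding a few-millisecond industrial control budget, whereas the fog node's propagation delay is negligible.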

#### *2.1.2 Bandwidth constraints*

The unprecedented increase in the number of connected IoT devices results in the generation of huge data traffic. The created traffic can range from tens of megabytes to a gigabyte of data per second. For instance, about one petabyte is handled by Google per month, while AT&T's network carried about 200 petabytes in 2010. Besides, it is estimated that the U.S. smart grid will generate about 1000 petabytes per year. Consequently, effective support of this traffic demands relatively huge network bandwidth. Moreover, there are some data privacy concerns and regulations that prohibit excessive data transmission. For example, according to ABI Research, about 90% of the data generated by endpoints should not be processed in the cloud; instead, it should be stored and processed locally [13].
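The bandwidth argument above can be sketched numerically. The device count and per-device rate below are made-up illustrative values; only the 90% local-processing share echoes the ABI Research estimate cited in the text.

```python
# Back-of-the-envelope aggregate bandwidth for an IoT deployment, and
# the backhaul saving when fog nodes filter and process data locally.
devices = 1_000_000   # hypothetical number of connected endpoints
rate_kbps = 50        # hypothetical average upstream rate per device
local_share = 0.90    # fraction handled at the edge (per the cited estimate)

aggregate_gbps = devices * rate_kbps / 1e6
backhaul_gbps = aggregate_gbps * (1 - local_share)

print(f"aggregate: {aggregate_gbps:.1f} Gbit/s, "
      f"backhaul after fog filtering: {backhaul_gbps:.1f} Gbit/s")
```

Even with these modest per-device rates, a million endpoints generate tens of gigabits per second in aggregate; processing 90% of it at the fog layer cuts the backhaul demand by an order of magnitude.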


| Feature | Cloud computing | Fog computing | Reference |
|---|---|---|---|
| Deployment | Centralized. Usually owned by large companies. | Distributed<sup>1</sup>. Depending on the size, it could be owned by large or small companies. | [13, 21–23] |
| Planning | Demands complicated deployment planning. | Demands cautious deployment planning<sup>2</sup>. | [13] |
| Operation | Controlled and maintained by expert cloud personnel and operated in designated environments. | The environments are usually determined by customer demands and may require little or no human expert intervention. | [13, 22] |
| Connectivity | Works effectively with consistent connectivity. | Can work with intermittent connectivity. | [13, 22] |
| Supported application | Mainly cyber-domain systems; applications that tolerate round-trip delays of a few seconds. | Cyber-domain and cyber-physical systems; time-critical applications that demand less than tens of milliseconds. | [13] |
| Latency | High | Low | [21–23] |
| Location | Core network | Edge network | [13, 21] |
| Storage and computation capabilities | Strong | Weak | [22] |
| Energy consumption | High | Low | [22] |
| Bandwidth requirement | High<sup>3</sup> | Low<sup>4</sup> | [13, 22] |
| Location awareness | Partially supported | Supported | [22] |
| Security aspect | Less secure | More secure | [24] |
| Attack on moving data | High probability | Very low probability | [24] |
| Client server distance | Multiple hops | One hop | [25] |
| Mobility support | Limited | High | [25] |

<sup>1</sup> *A distributed or centralized control system can be employed for distributed fog nodes.*
<sup>2</sup> *Some fog deployment is ad-hoc and demands either minimal or no planning.*
<sup>3</sup> *Bandwidth requirement increases with the aggregate volume of generated data by the entire clients.*
<sup>4</sup> *Bandwidth requirement increases with the aggregate volume of filtered data to be sent to the cloud.*

**Table 1.**
*Comparison of main features of cloud and fog computing.*

#### *2.1.3 Resource-constrained devices*

The IoT system comprises billions of objects and devices that have limited resources, mainly regarding storage (memory), power, and computing capacity [15]. Given these limitations, it is challenging for constrained devices to execute all of the desired functionality simultaneously [19]. Besides, it is impractical for them to depend exclusively on their relatively limited resources to accomplish all their computing demands. It would also be cost-prohibitive and unrealistic for the devices to interact directly with the cloud, owing to the associated complex protocols and resource-intensive processing [13]. For example, some constrained medical devices such as insulin pumps and blood glucose meters have to fulfill certain authentication and authorization tasks. Likewise, it has been observed that most resource-constrained IoT devices cannot partake in blockchain consensus mechanisms such as the Proof-of-Work (PoW) and Proof-of-Stake (PoS) protocols, in which huge processing power is required for mining [15].
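The benefit of offloading heavy tasks from such constrained devices can be sketched with a simple timing comparison. All numbers (workload size, CPU speeds, link rate) are made-up illustrative values, not measurements from any cited system.

```python
# Illustrative comparison of running a task locally on a constrained
# device versus offloading it to a nearby server: local time is
# cycles / device CPU speed, while offloaded time adds the input
# transfer delay but benefits from a much faster processor.
def local_time_s(cycles: float, device_hz: float) -> float:
    return cycles / device_hz

def offload_time_s(cycles: float, server_hz: float,
                   payload_bits: float, link_bps: float) -> float:
    return payload_bits / link_bps + cycles / server_hz

cycles = 5e9  # hypothetical compute-heavy workload (e.g., cryptographic task)
t_local = local_time_s(cycles, device_hz=100e6)            # 100 MHz microcontroller
t_off = offload_time_s(cycles, server_hz=10e9,
                       payload_bits=1e6, link_bps=10e6)    # 10 GHz server, 10 Mbit/s link

print(f"local: {t_local:.1f} s, offloaded: {t_off:.2f} s")
```

Under these assumptions the constrained device would need tens of seconds for a task a nearby server finishes in well under a second even after paying the transfer cost, which is why offloading to proximate resources is attractive.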

#### *2.1.4 Intermittent connectivity*

It is challenging for centralized cloud platforms to offer uninterrupted cloud services to systems and devices, such as oil rigs, drones, and vehicles, that have only intermittent network connectivity to the cloud resources. As a result, an intermediate layer of devices is required to address this challenge [2, 15].
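One common role for such an intermediate layer is store-and-forward buffering. The sketch below is a minimal illustration, assuming a hypothetical `FogBuffer` class (the name, capacity, and reading format are invented for this example): a fog node queues readings while the cloud link is down and flushes them, in order, when connectivity returns.

```python
from collections import deque

class FogBuffer:
    """Minimal store-and-forward sketch for intermittent connectivity."""

    def __init__(self, capacity: int = 1000):
        self.queue = deque(maxlen=capacity)  # oldest readings dropped first when full
        self.cloud = []                      # stands in for the remote cloud store

    def ingest(self, reading: str, link_up: bool) -> None:
        if link_up:
            self.flush()                 # deliver backlog first to preserve order
            self.cloud.append(reading)
        else:
            self.queue.append(reading)   # link down: buffer locally

    def flush(self) -> None:
        while self.queue:
            self.cloud.append(self.queue.popleft())

node = FogBuffer()
node.ingest("t=0 pressure=101.2", link_up=False)  # link down: buffered
node.ingest("t=1 pressure=101.3", link_up=False)
node.ingest("t=2 pressure=101.1", link_up=True)   # link restored: backlog delivered
print(node.cloud)  # → all three readings, in arrival order
```

The bounded `deque` reflects a practical trade-off for constrained fog hardware: when the outage outlasts the buffer, the oldest data is sacrificed rather than exhausting memory.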

#### *2.1.5 Information and operational technologies convergence*

The advent of Industry 4.0 facilitates the convergence of Information Technology and Operational Technology. In this context, new operational requirements and business priorities are presented. It is noteworthy that safe and uninterrupted operation is of utmost importance in current cyber-physical systems, since service disruption can result in significant loss or dissatisfaction. Consequently, software and hardware updates in such sensitive systems are challenging. This calls for a novel architecture that is capable of reducing the need for system updates [2].

#### *2.1.6 Context awareness*

A lot of IoT applications, like augmented reality and vehicular networks, require access to local context information, such as network conditions and user location, for processing. However, owing to the physical distance between the central cloud and the IoT devices, the centralized cloud computing implementation is insufficient to support this requirement [2].

#### *2.1.7 Geographical location*

The IoT devices are huge in number and are widely distributed over broad geographical areas. These devices require computation and storage services to be effective. However, it is challenging for a single cloud infrastructure to support the entire requirements of such widely distributed IoT applications [2].


#### *2.1.8 Security and privacy*


The present Internet cybersecurity schemes are mainly designed for securing consumer electronics, data centers, and enterprise networks. These solutions target perimeter-based protection provisioning using firewalls, Intrusion Detection Systems (IDSs), and Intrusion Prevention Systems (IPSs). Besides, based on the associated advantages, certain resource-intensive security functions have been shifted to the cloud; in this regard, the solutions focus on perimeter-based protection by requesting authentication and authorization through the cloud. However, this security paradigm is insufficient for IoT-based security challenges.

applications from different vendors can share a common network infrastructure with support for common lifecycle management. Based on this, different applications can be removed, added, updated, deactivated, activated, and configured, to

*Trends in Cloud Computing Paradigms: Fundamental Issues, Recent Advances, and Research…*

The fog computing architectural model is usually represented by a three-layer architecture that consists of the cloud, fog, and IoT layers [2, 22, 26]. Besides, a broader N-layer reference architecture has been defined by the OpenFog Consortium [2]. This architecture is an improvement on the three-layer architecture. This

**Figure 1** illustrates hierarchical architecture of fog computing with three-layer.

The terminal/IoT layer is the layer that is close to the physical environment and

The fog layer is normally positioned on the network edge and is the fundamental layer of fog computing hierarchical architecture. The layer comprises a huge number of fog nodes such as fog servers, base stations, switches, routers, access points, and gateways, which are broadly distributed between the cloud and end-user devices [2, 22]. It should be noted that the fog nodes are not only physical network

elements but are also logical ones that execute fog computing services [2]. Moreover, the fog nodes can be based on mobile implementation, when deployed on a nomadic carrier, or static, when fixed at a location. With these implementations, end-user devices can suitably connect with appropriate nodes to have access to the required services. Besides, the nodes are connected to the cloud

*Fog computing hierarchical network model: a three-tier architecture.*

end-user. It comprises numerous devices such as mobile phones, tablets, smart vehicles, smartphones, smart cards, drones, sensors, etc. Typically, these IoT devices are usually distributed geographically. Also, their major purpose is to sense feature data of physical events or objects for onward transfer to the upper layer for processing and/or storage. It is noteworthy that certain local processing can also be executed by a number of devices such as smart vehicles, smartphones, and mobile phones that have substantial computational capabilities. After local processing, the

resulting data can then be forwarded to the upper layers [2, 22].

ensure seamless end-to-end services across the continuum [13].

*DOI: http://dx.doi.org/10.5772/intechopen.98315*

associated layers of the architecture in this subsection.

*3.1.1 Terminal/IoT layer*

*3.1.2 Fog layer*

**Figure 1.**

**9**

subsection focuses on three-layer fog architecture and related concepts.

This architecture presents a significant extension to cloud computing. In this regard, to bridge the gap between the cloud infrastructure and the end/IoT devices,

it offers a transitional layer that is known as *Fog layer*. We expatiate on the

#### **2.2 Fog computing**

To address the centralized-based limitations and for effective support of the IoT devices, edge computing has been presented [2, 14]. Besides, a broader architecture known as fog computing that is based on a distributed scheme has been presented. In the fog paradigm, storage, control, communication, computation, and networking functions are distributed in close proximity to the end-user devices along the cloud-to-things continuum [13].

In addition, to complement the centralized-based cloud platforms in which data, computing functions, and control functions are stored and performed in the cellular core networks and remote data centers, fog stores a significant amount of data and performs considerable functions at or near the end-user. Likewise, instead of routing the entire network traffic over the backbone networks, a considerable amount of networking and communication are performed at or in close proximity to the end-user in fog computing [2, 8, 15, 20]. In this regard, when applications/ tasks are offloaded to the neighboring fog nodes rather than a cloud center, fastresponse and low-latency services can be offered by fog computing. Besides, the required enormous backhaul burden between the fog nodes and the remote cloud center is alleviated [8, 20].

Cloud and fog are complementing computing schemes. They establish a service continuum between the endpoints and the cloud. In this regard, they offer services that are jointly advantageous and symbiotic to ensure effective and ubiquitous control, communication, computing, and storage, along the established continuum [13]. In **Table 1**, we present the major features of the cloud and the fog to illustrate the advantage of their complements for effective and ubiquitous service delivery along the continuum.

### **3. Fog architectures and features**

Fog computing can enhance the QoS and the efficiency of different use cases. In this context, it can offer noble technical support for cyber-physical system, Mobile Internet, and IoT. This section presents the fog architectural model and the related advantages of fog computing.

#### **3.1 Three-layer architecture of fog**

As aforementioned, one of the main features that differentiates fog from cloud computing is that, in the former, resources for storage, communication, control, and computation are deployed in proximity to the end-user devices. Moreover, fog architecture can be predominantly centralized, fully distributed, or somewhere between these two configurations. Furthermore, fog architecture and the supported applications can be implemented in dedicated hardware and software. In addition, fog architecture can also be virtualized to exploit the associated advantages of network virtualization. This will facilitate the execution of the same application wherever it is demanded. In this context, the demand for dedicated applications will be reduced. It can also encourage an open platform in which applications from different vendors can share a common network infrastructure with support for common lifecycle management. Based on this, different applications can be removed, added, updated, deactivated, activated, and configured to ensure seamless end-to-end services across the continuum [13].

*Trends in Cloud Computing Paradigms: Fundamental Issues, Recent Advances, and Research… DOI: http://dx.doi.org/10.5772/intechopen.98315*

The fog computing architectural model is usually represented by a three-layer architecture that consists of the cloud, fog, and IoT layers [2, 22, 26]. Besides, a broader N-layer reference architecture has been defined by the OpenFog Consortium [2]. This architecture is an improvement on the three-layer architecture. This subsection focuses on the three-layer fog architecture and the related concepts.

**Figure 1** illustrates the three-layer hierarchical architecture of fog computing. This architecture presents a significant extension to cloud computing. In this regard, to bridge the gap between the cloud infrastructure and the end/IoT devices, it offers a transitional layer known as the *Fog layer*. We expatiate on the associated layers of the architecture in this subsection.

#### *3.1.1 Terminal/IoT layer*


*Moving Broadband Mobile Communications Forward - Intelligent Technologies for 5G…*


The terminal/IoT layer is the layer that is closest to the physical environment and the end-user. It comprises numerous devices such as mobile phones, tablets, smart vehicles, smart cards, drones, sensors, etc. Typically, these IoT devices are distributed geographically. Also, their major purpose is to sense feature data of physical events or objects for onward transfer to the upper layers for processing and/or storage. It is noteworthy that certain local processing can also be executed by a number of devices, such as smart vehicles, smartphones, and mobile phones, that have substantial computational capabilities. After local processing, the resulting data can then be forwarded to the upper layers [2, 22].

#### *3.1.2 Fog layer*

The fog layer is normally positioned on the network edge and is the fundamental layer of fog computing hierarchical architecture. The layer comprises a huge number of fog nodes such as fog servers, base stations, switches, routers, access points, and gateways, which are broadly distributed between the cloud and end-user devices [2, 22]. It should be noted that the fog nodes are not only physical network elements but are also logical ones that execute fog computing services [2].

#### **Figure 1.**

*Fog computing hierarchical network model: a three-tier architecture.*

Moreover, the fog nodes can be mobile, when deployed on a nomadic carrier, or static, when fixed at a location. With these implementations, end-user devices can suitably connect with the appropriate nodes to access the required services. Besides, the nodes are connected to the cloud infrastructure through an IP core network for service provision [22]. As aforementioned, fog nodes are capable of computing and transmitting sensed data. Likewise, the received sensed data can be temporarily stored by the nodes. Due to their access to specific network resources, latency-sensitive applications, as well as real-time analysis, can be realized in the fog layer. Furthermore, certain applications demand more powerful storage and computing capabilities. In this context, fog nodes have to interact with the cloud to obtain the required network resources [2, 22].
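To make this interaction concrete, the sketch below shows a fog node serving a task locally when it has sufficient resources and escalating it to the cloud otherwise. All class names, fields, and capacities are hypothetical illustrations (latency handling is omitted for brevity), not an implementation from the chapter:

```python
# Sketch: a fog node hosts a task when it has enough CPU and storage
# headroom; otherwise it escalates the task to the cloud layer.
from dataclasses import dataclass


@dataclass
class Task:
    input_size_mb: float     # size of the computation input data
    cpu_cycles: float        # CPU cycles needed to execute the task
    max_latency_ms: float    # maximum tolerable latency (unused here)


@dataclass
class FogNode:
    free_cpu_cycles: float   # currently available CPU budget
    free_storage_mb: float   # currently available storage

    def place(self, task: Task) -> str:
        """Return 'fog' if this node can host the task, else 'cloud'."""
        if (task.cpu_cycles <= self.free_cpu_cycles
                and task.input_size_mb <= self.free_storage_mb):
            return "fog"
        return "cloud"  # escalate: fog resources are insufficient


node = FogNode(free_cpu_cycles=5e8, free_storage_mb=256)
light = Task(input_size_mb=10, cpu_cycles=1e8, max_latency_ms=50)
heavy = Task(input_size_mb=4096, cpu_cycles=9e9, max_latency_ms=500)
print(node.place(light))  # -> fog
print(node.place(heavy))  # -> cloud
```

A real control scheme would also weigh the task's latency bound and the current link conditions, but the local-first, escalate-when-exhausted pattern is the core of the fog-to-cloud interaction described above.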


#### *3.1.3 Cloud layer*

The main components of the cloud layer are high-performance servers (with high memory and powerful computational capabilities) and storage devices. Consequently, a huge amount of data can be processed and permanently stored in this layer. Based on this, the layer normally supports various intensive services such as smart factories, smart transportation, and smart homes [2, 22].

Furthermore, unlike in the traditional cloud computing architecture, not all storage and computing tasks traverse the cloud. Therefore, in the fog architecture, certain services and/or resources can be moved (offloaded) from the cloud to the fog layer through a number of control schemes to optimize resource utilization and increase efficiency [2, 22].

#### **3.2 Features of fog computing**

Fog offers various advantages that facilitate new business models and services that can help reduce expenditure or accelerate product rollouts. In the following subsections, we discuss some of the main advantages of the fog network.

#### *3.2.1 Client-centric objective cognizance*

Fog architecture consists of widely distributed nodes that mainly support short-range communication and are capable of tracking and obtaining the end-device locations to enable mobility. This feature can facilitate enhanced location-based services and improved potential for real-time decision making [22]. For example, as fog applications are close to the end-user devices, they can be designed for efficient awareness of the customer requirements. Cognizance of customer requirements helps the fog architecture in establishing the appropriate place to perform storage, computing, and control functions across the cloud-to-thing continuum [13].

#### *3.2.2 Resource pooling and bandwidth efficiency*

In fog computing, there can be ubiquitous distribution of resources between the endpoints and the cloud. This helps in the efficient exploitation of the available resources. Besides, with the fog architecture, various applications can leverage the available resources that are abundant but idle on the end-user devices and the network edge [13]. For instance, certain computation tasks, such as data filtering, data cleaning, data preprocessing, valuable information extraction, redundancy removal, and decision making, are performed locally. In this context, only a certain portion of useful data is conveyed to the cloud. Consequently, there is no need to transmit the majority of the data over the Internet. Based on this, fog computing is capable of reducing network traffic; consequently, bandwidth is effectively saved [22]. Similarly, the proximity of the fog system to the endpoints facilitates effective integration with the end-user systems. This helps in enhancing the performance and efficiency of the entire system [13].
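As a toy illustration of this bandwidth saving, the sketch below filters sensor glitches and summarizes readings at the edge so that only a compact summary is uploaded. The readings, valid range, and summary fields are illustrative assumptions:

```python
# Sketch: local preprocessing at a fog node. Raw sensor readings are
# filtered and summarized at the edge so that only a compact summary
# crosses the backhaul, which is how fog computing saves bandwidth.
import json
import statistics


def preprocess_at_edge(readings, valid_range=(0.0, 100.0)):
    """Drop out-of-range glitches locally and keep only a summary."""
    low, high = valid_range
    clean = [r for r in readings if low <= r <= high]
    return {
        "count": len(clean),
        "mean": round(statistics.fmean(clean), 2),
        "max": max(clean),
    }


# 60 in-range temperature samples plus two sensor glitches.
raw = [20.0 + 0.1 * i for i in range(60)] + [-999.0, 250.0]
summary = preprocess_at_edge(raw)

raw_bytes = len(json.dumps(raw).encode())       # cost of a naive raw upload
sent_bytes = len(json.dumps(summary).encode())  # what the fog node uploads
print(summary["count"], sent_bytes < raw_bytes)  # 60 True
```

Only the few summary bytes travel to the cloud, while the bulk of the raw stream is handled inside the local area network, as the paragraph above describes.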


#### *3.2.3 Scalable and cost-effective architecture*


As aforementioned, fog architecture is relatively simple and encourages prompt innovation. Besides, it offers a platform that supports economical scaling. For instance, it is much more cost-effective and faster to use the edge (or client) devices for innovation experiments than the networks of large operators and vendors. In this context, fog encourages an open (interoperable) market that can support open application programming interfaces. This is of utmost importance for the proliferation of mobile devices to facilitate the innovation, development, deployment, and operation of advanced services [13].

#### *3.2.4 Low-latency and real-time applications*

In a fog network, data analytics are allowed at the network edge [13]. Data generated by devices and sensors is acquired locally by the fog nodes at the network edge. The acquired (high-priority) data is then processed and stored by edge devices in the local area network. Based on this, traffic across the Internet can be considerably reduced and swift, high-quality localized services can be supported. Hence, time/latency-sensitive applications for real-time interactions can be supported [22]. For instance, time-sensitive functions can be well supported for local cyber-physical systems. Besides, this feature is crucial for stability in control systems. Likewise, to support embedded AI applications, the feature is also important for the tactile Internet vision [13]. On the other hand, low-priority data that is delay-insensitive can be conveyed to certain aggregation nodes where it will be further processed and analyzed [26].
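The high/low-priority split described above can be sketched as follows. The `priority` field, record shape, and routing rule are illustrative assumptions rather than a protocol from the chapter:

```python
# Sketch: priority-aware data handling at a fog node. High-priority
# (latency-sensitive) records are processed immediately at the edge;
# low-priority records are batched for an upstream aggregation node.

def route(records):
    """Split records into edge-processed results and an upstream batch."""
    processed_at_edge = []
    upstream_batch = []
    for rec in records:
        if rec["priority"] == "high":
            # Latency-sensitive: handle locally, right away.
            processed_at_edge.append({**rec, "handled_by": "edge"})
        else:
            # Delay-insensitive: defer to the aggregation node.
            upstream_batch.append(rec)
    return processed_at_edge, upstream_batch


records = [
    {"id": 1, "priority": "high"},  # e.g., a control-loop alarm
    {"id": 2, "priority": "low"},   # e.g., a periodic telemetry sample
    {"id": 3, "priority": "high"},
]
edge_results, upstream_batch = route(records)
print([r["id"] for r in edge_results])    # [1, 3] handled locally
print([r["id"] for r in upstream_batch])  # [2] deferred to aggregation
```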

Furthermore, fog computing focuses on allowing ubiquitous local access to centralized computing resource pools that can be swiftly and flexibly provisioned on an on-demand basis. So, to alleviate communication latency and support delay/jitter-sensitive applications, resource-limited end-user devices that are close to the fog nodes can access the resource pools. In general, the key native features of fog computing are context awareness and edge location. Besides, it is based on pervasive spatial deployment to support various devices [27]. In **Figure 2**, we present some of the major features of the fog paradigm. Also, the following section focuses on the resource allocation challenges in fog computing.

**Figure 2.** *Fog paradigm features.*

computing and storage capabilities are deployed to offer offloading services, given by a set $\mathcal{M} = \{1, 2, \ldots, M\}$. Besides, assume $N$ users, denoted by $\mathcal{N} = \{1, 2, \ldots, N\}$, with $\mathcal{J} = \{1, 2, \ldots, J\}$ independent computation tasks to be executed. The respective task can be expressed as [8]

$$F_{nj} = \left\{ A_{in(nj)},\; Q_{req(nj)},\; T_{max(nj)} \right\}, \quad n \in \mathcal{N},\; j \in \mathcal{J}, \tag{1}$$

where $A_{in(nj)}$ is the size of the computation input data of the $j$-th task demanded by the $n$-th user, $T_{max(nj)}$ represents the maximum tolerable latency of the $j$-th task required by the $n$-th user, and $Q_{req(nj)}$ is the total number of central processing unit (CPU) cycles needed to execute the task.

In addition, to express the different related models, we assume a quasi-static scenario in which the users are unchanged in the course of computation offloading; however, they can change over different periods. Besides, we assume a perfect instantaneous channel that remains unchanged during the packet transmission. Based on this, we present the following models for an integrated fog-cloud architecture with NOMA implementation.

#### **5.1 Communication model**

When an $n$-th user with a number of offload tasks and transmission power $p_{mn}$ transmits a signal $x_{mn}$ to the $m$-th fog node, the received signal $y_{mn}$ can be expressed as

$$y_{mn} = \underbrace{\sqrt{p_{mn}}\, h_{mn} x_{mn}}_{\text{Desired signal}} + \sum_{i \neq n,\; i \in \mathcal{N}} \sqrt{p_{mi}}\, h_{mi} x_{mi} + \underbrace{z_{mn}}_{\text{Noise}}, \tag{2}$$

where the first term represents the desired signal from the $n$-th user, the second term is the intra-cell interference suffered by the $n$-th user from the other users being served by the $m$-th fog node on the same frequency band, the third term, $z_{mn}$, denotes the additive white Gaussian noise (AWGN) with zero mean and variance $\delta^2$, and $h_{mn}$ denotes the channel gain for the $n$-th user that connects to the $m$-th fog node. It is noteworthy that the transmitted signals from the various users to each fog node are the desired signals. However, they bring about interference with each other. Also, as the individual users that are connected to a specified fog node experience different channel conditions, the interference can be alleviated and the superimposed signals can be decoded sequentially by each fog node using successive interference cancelation (SIC) [8, 33].

In the linear interference cancelation techniques, the desired signal is detected, but the other signals are regarded as interference. So, the SIC concept is based on the fact that the signal that has the highest signal-to-interference-plus-noise ratio (SINR) can be detected first. In this regard, its interference is canceled from the other streams [34]. Furthermore, regarding the integrated computing platform, the signal received by a specified fog node from the user that has the highest channel gain is potentially the strongest signal, so it is decoded first at the fog node. Afterward, the strongest signal is removed from the streams. The same approach is then applied to the user with the second-highest channel gain, and so on. Consequently, the users' signals on the same frequency band can be sorted in relation to their channel gains. In this context, the users served by the $m$-th fog node can be arranged in descending order as [8]

$$\left| h_{m1} \right|^2 \ge \left| h_{m2} \right|^2 \ge \cdots \ge \left| h_{mN} \right|^2, \quad \forall n \in \mathcal{N}. \tag{3}$$
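This SIC decoding order can be illustrated numerically: users are sorted by channel gain as in Eq. (3), and the $k$-th decoded user only experiences residual interference from the weaker, not-yet-decoded users in Eq. (2). The function name, transmit powers, channel gains, and noise variance below are illustrative assumptions:

```python
# Numerical sketch of SIC at one fog node: users on the same band are
# ordered by |h|^2 in descending order; each decoded signal is
# subtracted, so later users see less residual interference.

def sic_sinrs(powers, gains_sq, noise_var):
    """Per-user SINR at a fog node under SIC.

    powers[i]   : transmit power p_i of user i
    gains_sq[i] : channel gain magnitude squared |h_i|^2
    Returns the decoding order and a dict of SINRs keyed by user index.
    """
    # Eq. (3): arrange users by descending channel gain.
    order = sorted(range(len(gains_sq)),
                   key=lambda i: gains_sq[i], reverse=True)
    sinrs = {}
    for k, i in enumerate(order):
        # Residual interference: only users decoded after i (weaker ones).
        interference = sum(powers[j] * gains_sq[j] for j in order[k + 1:])
        sinrs[i] = powers[i] * gains_sq[i] / (interference + noise_var)
    return order, sinrs


powers = [1.0, 1.0, 1.0]    # equal transmit powers (assumption)
gains_sq = [0.9, 0.1, 0.4]  # |h|^2 per user
order, sinrs = sic_sinrs(powers, gains_sq, noise_var=0.01)
print(order)               # [0, 2, 1]: strongest-channel user decoded first
print(round(sinrs[1], 1))  # 10.0: last user sees no residual interference
```

Note how the weakest-channel user, decoded last, enjoys an interference-free SINR of $0.1/0.01 = 10$, which is exactly the benefit of removing the stronger superimposed signals first.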
