**2.3 From cloud to edge to fog computing**

The global adoption of the Internet enabled the cloud computing paradigm. Large data centres, using virtualisation technology, can offer end users scalable compute resources on a pay-per-use basis. This approach is well suited to traditional enterprise computing, freeing businesses from capital expenditure on computing systems, transferring the cost to operational expenditure and off-loading risk to cloud service providers.

Newer applications, such as the Internet of Things (IoT), involve data collection at end-user devices, equipment that is often mobile and therefore linked by wireless to edge nodes. The complete system is geographically diverse, with Smart Cities being one of the best illustrations. The opportunity to redistribute computation across the hierarchy from user device, through edge, and back to the data centre when needed is now called fog computing. A hypothetical IoT service with a target end-to-end latency of 200 ms can easily expect, for a round trip to the cloud, to spend half of its budget in the network. This leaves a very tight time budget for execution of the actual processing at the data centre. Fog has the potential to eliminate most, if not all, of the communication latency and therefore permits the option of running the edge systems at lower frequency and voltage; for example, operating at 50% of the peak frequency with 30% less voltage translates to running with roughly 50% less energy and 75% less power. Edge servers can also benefit from virtualisation, running multiple virtual machines to separate functionality. Furthermore, research suggests that compute accelerators, in particular GPUs, may be enabled at the edge through virtualisation [7].
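The frequency and voltage figures above follow from the standard dynamic-power model, in which power scales with frequency times voltage squared. A quick sketch checking the arithmetic (a deliberate simplification: static and leakage power are ignored, and the function name is ours):

```python
# Check of the DVFS figures quoted above, using the standard
# dynamic-power model P ~ f * V^2 (leakage and static power ignored).

def dvfs_ratios(freq_scale, volt_scale):
    """Return (power_ratio, energy_ratio) relative to peak operation.

    Energy assumes the same workload, so runtime scales as 1/freq_scale.
    """
    power_ratio = freq_scale * volt_scale ** 2
    energy_ratio = power_ratio / freq_scale  # runtime grows by 1/freq_scale
    return power_ratio, energy_ratio

power, energy = dvfs_ratios(freq_scale=0.5, volt_scale=0.7)
print(f"power reduced by {(1 - power) * 100:.1f}%")   # ~75%, as quoted
print(f"energy reduced by {(1 - energy) * 100:.1f}%")  # ~50%, as quoted
```

At half frequency the workload takes twice as long, which is why the energy saving (about 51%) is smaller than the power saving (about 75%).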

**Figure 2** shows an analysis of the operation of an edge server, operating in extended margins, presented by the Horizon 2020 project UniServer (http://www.uniserver2020.eu). UniServer created a cross-layer approach from the hardware levels up to the system software layers. The following system enhancements were identified:


*Security at the Edge*

*DOI: http://dx.doi.org/10.5772/intechopen.92788*

i. **at the circuit, micro-architecture and architecture layer** by automatically revealing the possible operating points (i.e. voltage, frequency) of each hardware component that are no worse than the worst-case operating points currently used, thus helping to boost performance or energy efficiency to levels closer to the Pareto front, maximising the returns from technology scaling;

ii. **at the firmware layer** with low-level handlers monitoring and controlling the operating status of the underlying hardware components and updating a 'HealthLog', as well as performing periodic benchmarking of the hardware and reporting the findings in a 'StressLog'. The logs with the collected information are communicated to the software stack (hypervisor) in a generic way, allowing easy adoption and exploitation of the observed margins;

iii. **at the software layer** by enabling easy programmability and ensuring high dependability and full utilisation of the margins observed in the underlying hardware. State-of-the-art software packages for virtualisation (i.e. KVM) and resource management (i.e. OpenStack) will be ported to the microserver, further strengthening its advantages with minimum intrusion and easy adoption.

**Figure 2.**
*Perspective from the UniServer project on the enhancement of the edge server.*

The edge computing paradigm moves significant amounts of computation from the data centre closer to the source of the data, reducing but not eliminating the need for packet communication. It follows that a larger number of smaller servers are deployed at the edge, and the energy efficiency of server operation therefore becomes a significant factor. Edge servers have fewer CPUs, less DRAM and more limited power budgets than rack-mounted servers in data centres. One driver for this is that the physical form factor of the edge server is significantly limited compared to rack space in the data centre.

Manufacturers of server components define operational limits for parameters such as voltage, frequency and current. Routine adherence to these limits in the production of commercial servers reflects, in part, the need to account for the expected performance degradation of transistors and potential functionality failures due to increased transistor variability in nanometre technologies. In general, the values adopted are quite pessimistic. DRAM manufacturers, in an effort to limit potential faults, adopt a high operating supply voltage and refresh rate according to assumed rare worst-case conditions [7]; as a result, DRAM alone can account for up to 40% of server power usage. Researchers have, however, investigated the operation of these electrical components in regions beyond these nominal limits.

*Cloud Computing Security - Concepts and Practice*

**3. Cyber security at the edge**

In this chapter, we focus on the security challenges at the edge components and, as we have outlined, at the WANs that link these edge nodes back to the data centres. POS (Packet over SONET/SDH) links, for example, include the option to apply scrambling to the transmission, thereby adding an extra layer of security.
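On SONET/SDH payloads, the POS scrambling mentioned above is the self-synchronous x^43 + 1 scrambler specified for PPP over SONET/SDH in RFC 2615: each output bit is the input bit XORed with the output bit 43 positions earlier, and the receiver inverts this using the bits it has already received. A minimal bit-level sketch (function and variable names are ours; real framers implement this in hardware):

```python
# Sketch of the self-synchronous x^43 + 1 scrambler used on
# Packet over SONET/SDH links (RFC 2615). Bit-level and unoptimised.

def scramble(bits):
    """Scramble a list of bits: out[i] = in[i] XOR out[i-43]."""
    state = [0] * 43  # the last 43 output bits (zero-initialised here)
    out = []
    for b in bits:
        o = b ^ state.pop(0)
        state.append(o)
        out.append(o)
    return out

def descramble(bits):
    """Inverse operation: plain[i] = recv[i] XOR recv[i-43]."""
    state = [0] * 43  # the last 43 *received* (still-scrambled) bits
    out = []
    for b in bits:
        out.append(b ^ state.pop(0))
        state.append(b)
    return out
```

Because the descrambler keys only off previously *received* bits, it resynchronises automatically after bit errors, which is why this construction suits long-haul links.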
