**3. Cyber security at the edge**

The edge computing paradigm moves significant amounts of computation from the data centre closer to the source of the data, reducing, but not eliminating, the need for packet communications. It follows that a larger number of smaller servers are deployed at the edge, and the energy efficiency of server operation therefore becomes a significant factor. Edge servers have fewer CPUs, less DRAM and more limited power budgets than rack-mounted servers in data centres, often because the physical form factor of an edge server is significantly constrained compared with rack space in the data centre.

Manufacturers of server components define operational limits for parameters such as voltage, frequency and current. Routine adherence to these limits in the production of commercial servers reflects, in part, the need to account for the expected performance degradation of transistors and potential functionality failures due to the increased transistor variability in nanometre technologies. In general, the values adopted are quite pessimistic. DRAM manufacturers, in an effort to limit potential faults, adopt a high operating supply voltage and refresh rate according to assumed, rare worst-case conditions [7]. As a result, DRAM alone can account for up to 40% of system power usage. Researchers have, however, investigated the operation of these components at voltages and currents beyond the conservative limits [8], reporting system energy savings averaging 8.6% for non-virtualised and 8.4% for virtualised workloads, while ensuring seamless server operation even under extreme temperatures.
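As a back-of-the-envelope illustration of why the refresh rate matters, the sketch below estimates the refresh-energy saving from relaxing the refresh interval. The proportionality of refresh energy to refresh frequency is a simplifying assumption, and the 64 ms baseline is used only as a typical refresh window, not a figure from the works cited:

```python
def refresh_energy_ratio(base_interval_ms: float, relaxed_interval_ms: float) -> float:
    """Fraction of baseline refresh energy still spent after relaxing the
    refresh interval. Assumes refresh energy scales with the number of
    refresh bursts, which is inversely proportional to the interval."""
    return base_interval_ms / relaxed_interval_ms

# Doubling the interval from a typical 64 ms baseline halves refresh energy.
saving = 1.0 - refresh_energy_ratio(64.0, 128.0)  # → 0.5
```

This is only a first-order model; in practice, retention failures limit how far the interval can be stretched, which is precisely the margin/reliability trade-off the cited work explores.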

Relaxation of voltage, timing and refresh-rate limitations may put at risk the correct functionality of the CPUs and DRAMs, owing to the potential failures that may occur at lower voltages and under dynamically changing operating/environmental conditions (e.g. temperature). Such timing and memory failures may disrupt the operation of the server and/or directly impact the expected Quality of Service (QoS), which can be quantified in terms of throughput and quality-of-results (e.g. Bit-Error-Rate). Consequently, such failures will affect the service-level agreements (SLAs), in terms of availability, latency, accuracy and throughput, agreed at the higher level between the service user and the service provider. A further consequence of operating in these extended margins is that new security vulnerabilities may arise, in addition to the cyber threats that already exist.
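The QoS terms above can be made concrete with a small sketch. The `max_ber` and `min_throughput_mbps` thresholds are hypothetical SLA values chosen for illustration, not figures from the UniServer project:

```python
def bit_error_rate(bit_errors: int, bits_transferred: int) -> float:
    """Quality-of-results metric: fraction of bits received in error."""
    return bit_errors / bits_transferred

def meets_sla(ber: float, throughput_mbps: float,
              max_ber: float = 1e-6, min_throughput_mbps: float = 100.0) -> bool:
    """True only if both the quality and throughput terms of the
    (assumed) SLA hold simultaneously."""
    return ber <= max_ber and throughput_mbps >= min_throughput_mbps

# 3 bit errors in 10 million bits (BER = 3e-7) at 250 Mbps passes this SLA.
ok = meets_sla(bit_error_rate(3, 10_000_000), 250.0)  # → True
```

A memory fault induced by aggressive margins would show up here as a rising BER, tripping the SLA check even while the server nominally remains up.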

In contrast to a centralised cloud data centre, edge deployments will consist of many small clusters or individual installations, where elevated levels of physical security are not economically viable. Physical security of the micro-server may consist primarily of a light-weight enclosure and, from a security perspective, it should be assumed that a determined attacker will be able to gain full access to the system. This creates a larger threat surface, which now incorporates physical attacks, posing threats to the micro-server and the wider network it connects to. Deployments at the edge should be made under the assumption that networks are operating over untrustworthy links, with encrypted tunnelling through VPNs, malware detection, firewalls, intrusion detection/prevention systems and DNSSEC all being considerations for an endpoint security policy.
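A default-deny stance of the kind implied above can be sketched as a simple allow-list check. The subnet, port and resolver address below are illustrative placeholders (drawn from documentation/example address ranges), not a recommended rule set:

```python
from ipaddress import ip_address, ip_network

# Hypothetical default-deny policy: permit only the VPN tunnel subnet
# and a single DNS resolver; everything else is dropped.
ALLOWED = [
    (ip_network("10.8.0.0/24"), 1194),  # example VPN subnet and port
    (ip_network("192.0.2.53/32"), 53),  # example resolver (TEST-NET-1)
]

def permit(src: str, dst_port: int) -> bool:
    """Default-deny: permit traffic only if it matches an allow-list entry."""
    return any(ip_address(src) in net and dst_port == port
               for net, port in ALLOWED)
```

Real firewalls and intrusion-prevention systems are of course stateful and far richer than this, but the default-deny shape, where anything not explicitly permitted is refused, is the relevant design principle for an untrusted edge link.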

Threats posed by attackers gaining physical access to a system require consideration from both hardware and software security disciplines. Application developers should employ secure coding practices, particularly when operating on any sensitive information. Care should also be taken to minimise or, if possible, avoid the storage of secret information in physical memory. The use of software-based or, ideally, hardware-based hard disk encryption can offer protection even when the disk is removed from a system.
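The advice on secret storage can be illustrated in Python, where holding a secret in a mutable `bytearray` at least permits best-effort zeroisation; immutable `str`/`bytes` objects cannot be reliably erased. This is a hedged sketch: copies made by the runtime, swap, or garbage collection may still persist, which is why minimising storage of secrets in the first place remains the primary advice:

```python
def wipe(secret: bytearray) -> None:
    """Best-effort in-place zeroisation of a mutable secret buffer."""
    for i in range(len(secret)):
        secret[i] = 0

key = bytearray(b"hypothetical session key")  # placeholder secret
# ... use key while it is needed ...
wipe(key)
assert all(b == 0 for b in key)  # buffer itself no longer holds the secret
```

In lower-level languages the same idea is served by primitives such as `explicit_bzero`, which the compiler is not permitted to optimise away.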

Side-channel attacks can potentially be used to reveal sensitive information. In the UniServer system, sensitive extended-margin information could be targeted to create denial-of-service attacks or cause system instability. The variation of voltage and frequency margins, core features of the UniServer solution, may also influence the relative amount of side-channel leakage. Side-channel countermeasures, employing masking and hiding strategies, should be used to help counteract such threats.
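As one example of a "hiding" countermeasure, a constant-time comparison removes the data-dependent timing that a naive early-exit comparison leaks; `hmac.compare_digest` from the Python standard library is one readily available primitive for this (the token-checking scenario is illustrative, not part of the UniServer design):

```python
import hmac

def check_token(supplied: bytes, expected: bytes) -> bool:
    """Constant-time comparison: unlike `supplied == expected`, which can
    return early at the first mismatching byte, compare_digest takes time
    independent of where (or whether) the inputs differ, hiding that
    information from a timing side channel."""
    return hmac.compare_digest(supplied, expected)
```

Masking, the complementary strategy, instead randomises intermediate values so that observed leakage (power, electromagnetic emissions) is decorrelated from the secret being processed.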

The differing deployment architectures of full stack and bare metal are considered. In the full-stack deployment, representing a micro-server data centre, the UniServer software runs under the host OS, abstracted from other guest applications in separate virtual machines. In the bare-metal deployment, however, the UniServer software runs alongside other system applications, and it is in this deployment architecture that the UniServer system is most exposed to interference from other applications. The UniServer log files are identified as high-value assets that need to be protected from tampering, since tampering could potentially lead to system instability or denial-of-service attacks. It is therefore recommended that the log and policy files are stored in an encrypted format, to prevent reading and manipulation by others. Additionally, consideration should be given as to whether the files should be digitally signed, to provide assurance that they come from a trusted source. These recommendations would naturally carry overheads in terms of real-time operation, so their implementation would need to be considered carefully in terms of system performance. The use of encryption, and possibly digital signing, will likely be candidates to form a security solution.

*Security at the Edge*
*DOI: http://dx.doi.org/10.5772/intechopen.92788*

**3.1 General attack vectors**

In this section we consider the threats posed both to traditional networked server infrastructure and by the class of physical attacks, discussing the threats and the countermeasures used to mitigate against them.

The primary aims of information security are to ensure the confidentiality, integrity and availability of a system [9]. There is generally no single solution to a security problem, since threats and vulnerabilities originate from many sources; rather, the aim is to provide a layered security response, delivering defence in depth. An overall security response should be considered in the wider sense, consisting of measures that span the range of administrative, logical/technical and physical solutions.

*3.1.1 Security of the operating system*

The operating system (OS) is the fundamental software layer upon which the rest of the system software is built. In the common four-ring model, shown in **Figure 3**, the operating system is separated into two distinct regions: kernel space, incorporating kernel memory, components and drivers in rings 0 to 2, and user space in ring 3, where end-user applications may be run.

**Figure 3.**

*Layers of protection in the operating system.*

For most commercial operating systems, control of user access is organised under discretionary access control (DAC), providing privileges at the individual user-account level. However, unlike a system under mandatory access control (MAC), where applications run in isolated memory with strong separation, typical OSs run in a multi-tasking environment where resources are shared and are potentially accessible between applications [10]. Security is, therefore, ultimately left to the system administrator, who must ensure that appropriate measures are in place and that the system is configured appropriately. Some general recommendations for operating system security, which apply to both cloud and edge deployments, are summarised below [11].
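As one possible realisation of the signing recommendation for log and policy files, a keyed hash provides tamper evidence. This is a hedged sketch: HMAC-SHA256 is an assumed choice rather than anything specified by the UniServer project, and a true asymmetric signature would be needed if verifiers must not hold the signing key:

```python
import hashlib
import hmac

def sign_log(log_data: bytes, key: bytes) -> str:
    """Produce a tamper-evidence tag for a log record using HMAC-SHA256."""
    return hmac.new(key, log_data, hashlib.sha256).hexdigest()

def verify_log(log_data: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time; any modification
    to the log data (or the tag) causes verification to fail."""
    return hmac.compare_digest(sign_log(log_data, key), tag)
```

The per-record hashing cost is the kind of run-time overhead the performance caveat above refers to; batching records or signing rotated log files rather than individual entries are possible ways to amortise it.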
