**3.2.1 Virtualization considerations**

scheduling, input/output operations, error actions, and similar functions, and regulates the flow of work in a data processing system. The supervisor is generally called the kernel. The hypervisor presents a virtual hardware platform to the guest OS and manages the execution of the guest OS. Multiple instances of different operating systems may share the virtualized hardware resources. Hypervisors are installed on server hardware whose only task is to run guest operating systems. Non-hypervisor virtualization systems are used for similar tasks on dedicated server hardware, but also commonly on desktop, portable and even handheld computers. There are three popular approaches to server virtualization: Full Virtualization, Para Virtualization and virtualization with hardware support (Hardware Virtual Machine, or HVM).

**Full Virtualization** provides emulation of the underlying platform, on which a guest operating system and application set run without modifications, unaware that the platform is virtualized. This picture is idealistic: in real-world scenarios virtualization comes with costs.

Providing Full Virtualization implies that every platform device is emulated in enough detail to permit the guest OS to manipulate it at its native level (such as register-level interfaces). Moreover, it allows administrators to create guests that use different operating systems. These guests have no knowledge of the host OS, since they are not aware that the hardware they see is emulated rather than real. The guests, however, require real computing resources from the host, so they rely on a hypervisor to coordinate instructions to the CPU. The hypervisor provides each virtual machine with all the hardware needed to start and run its operating system: via a virtual BIOS, it exposes the CPU, RAM and storage devices. The main advantage of this paradigm is the ability to run virtual machines on all popular operating systems without requiring them to be modified, since the emulated hardware is completely transparent.

The **Para Virtualization** approach is based on the host-guest paradigm and uses a virtual machine monitor. In this model the guest operating system's code is modified in order to run in a specific virtual machine environment. Like full virtual machines, Para Virtual Machines are able to run multiple operating systems. The main advantage of this approach is execution speed, typically faster than the HVM and Full Virtualization approaches. The Para Virtualization method uses a modified guest OS and paravirtualized (PV) drivers that invoke the hypervisor directly instead of manipulating emulated devices.

Fig. 3. Virtualization approaches: (a) Full Virtualization, (b) Para Virtualization, (c) HVM Virtualization.
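The performance difference between the two approaches can be sketched with a toy model (plain Python, not a real hypervisor; all class and method names here are invented for illustration): the Full Virtualization path must trap and decode a write to an emulated device register, while the paravirtual path invokes the hypervisor directly and skips the decoding step.

```python
# Toy model (not a real hypervisor): contrasts the trap-and-emulate path
# of Full Virtualization with the direct hypercalls of Para Virtualization.

class Hypervisor:
    def __init__(self):
        self.disk = {}          # emulated storage: sector -> data

    # Full Virtualization path: the guest touches an emulated device
    # register; the hypervisor must trap the access and decode it.
    def trap_and_emulate(self, reg_write):
        op, sector, data = self.decode(reg_write)   # extra decoding work
        if op == "WRITE":
            self.disk[sector] = data

    def decode(self, reg_write):
        # mimic register-level decoding of an emulated device interface
        op, sector, data = reg_write.split(":", 2)
        return op, int(sector), data

    # Para Virtualization path: a modified guest invokes the hypervisor
    # directly, skipping device emulation and decoding entirely.
    def hypercall_write(self, sector, data):
        self.disk[sector] = data

hv = Hypervisor()
hv.trap_and_emulate("WRITE:7:hello")   # unmodified guest, emulated device
hv.hypercall_write(8, "world")         # paravirtualized guest, hypercall
print(hv.disk)                          # {7: 'hello', 8: 'world'}
```

Both paths reach the same state; the paravirtual one simply does less work per operation, which is the intuition behind its speed advantage.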

As anticipated, several advantages ensue from the use of virtualization. The reduction in the number of physical servers is one of the most evident: a virtualization solution, whether implemented in hardware or software, allows multiple virtual machines to run on one physical machine. Additional advantages are reduced power consumption, better fault tolerance, shorter device installation times and fewer cabinets. Virtualization can also be used to abstract hardware resources: since operating systems are closely tied to the underlying hardware, several issues must be taken into account when a server is moved or cloned, e.g., incompatible hardware, different drivers and so on. Virtualization creates an abstraction level such that the operating system does not see the actual physical hardware but a virtualized one, so administrators can move or clone a system to other machines running the same virtualization environment without worrying about physical details (network and graphics cards, chipsets, etc.).

Another important aspect is adaptability: if a company changes its priorities and needs, a service may become more important than another or may require more resources. Virtualization allows resources to be allocated to virtual hardware easily and quickly. Some companies maintain old servers with obsolete operating systems that cannot be moved to new servers, as these OSs would not be supported. In virtualized environments it is possible to run such legacy systems, allowing IT managers to get rid of old hardware that is no longer supported and more prone to failure. In many cases it is also appropriate to use virtualization to create test environments: production systems frequently need changes whose consequences are unknown, i.e., installing an operating system upgrade or a particular service pack is not risk-free. Virtualization allows immediate replication of virtual machines in order to run all the necessary tests.

Like any other technology, virtualization has disadvantages, which depend on the application scenario. The most important are performance overhead and limited hardware support. Every virtualization solution decreases overall performance to some extent, for example in disk or memory access times, and some critical applications may suffer from this overhead. Depending on the products used, some devices may not be usable by virtual machines, such as serial and parallel ports, USB and Bluetooth interfaces, or graphics acceleration hardware.

requirements in all information and communication areas, but they become extremely critical in distributed and heterogeneous environments like grids. The Open Grid Forum released an open standard called the Open Grid Services Architecture OGSA (2007), which is the reference point for researchers worldwide. OGSA specifies a layer called the Grid Security Infrastructure GSI (2010), whose aim is to address these matters; for further details see Section 3.3.4. The GSI is based on an X.509 infrastructure and the Secure Sockets Layer (SSL) protocol, and uses public key cryptography and certificates to create a secure grid and application-level data encryption. X.509 certificates are used to ensure authentication: every user or service owns a certificate which contains the information needed to identify and authenticate its owner.
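The certificate-based authentication idea can be sketched in a few lines. Real GSI uses X.509 certificates signed with the CA's private key over SSL; since the Python standard library has no public-key primitives, the sketch below substitutes an HMAC held by a trusted CA as a stand-in for the CA's signature, and the function names (`make_cert`, `verify_cert`) are invented for illustration.

```python
# Conceptual sketch of certificate-based authentication as in GSI.
# An HMAC keyed by a CA secret STANDS IN for a real X.509 signature.
import hmac, hashlib

CA_KEY = b"certificate-authority-secret"   # stand-in for the CA private key

def make_cert(subject, issuer="/O=Grid/CN=DemoCA"):
    """Issue a toy certificate binding a subject name to the issuer."""
    payload = f"{subject}|{issuer}".encode()
    sig = hmac.new(CA_KEY, payload, hashlib.sha256).hexdigest()
    return {"subject": subject, "issuer": issuer, "signature": sig}

def verify_cert(cert):
    """Check that the certificate was issued by the trusted CA."""
    payload = f"{cert['subject']}|{cert['issuer']}".encode()
    expected = hmac.new(CA_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

user = make_cert("/O=Grid/CN=Alice")
print(verify_cert(user))                   # True
user["subject"] = "/O=Grid/CN=Mallory"     # tampering breaks the check
print(verify_cert(user))                   # False
```

The essential property is the same as in GSI: any alteration of the identity bound in the certificate invalidates the issuer's signature.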

Grid Infrastructure for Domain Decomposition Methods in Computational ElectroMagnetics 255

**Authorization** is a security feature that defines "who" can access "what"; clearly this definition can be extended by adding "when", "how", "for how long", and so on. In the grid environment authorization is very important, as resources are shared among several users and services. Authorization Systems (AS) can be classified into two groups: (1) Virtual Organization (VO) level and (2) resource-level systems. The former is a centralized AS for an entire VO: when a user needs to access a resource of a Resource Provider (RP), he requests credentials from the AS and presents them to the RP, which grants or denies rights to the user. The latter, instead, specifies the authorization for a set of resources: these systems allow (authenticated) users to access the resources based on the credentials they present. Examples of the first kind are the Community Authorization Service CAS (2010) and the Virtual Organization Membership Service VOMS (2003), while examples of the second are the Privilege and Role Management Infrastructure Standards Validation PERMIS (2002), and the GridMap system.
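The VO-level flow described above can be sketched as follows: the user asks a central Authorization Service for a credential and presents it to the Resource Provider, which grants or denies access. All class and method names below are illustrative, not part of CAS or VOMS; an HMAC over a key shared between AS and RP stands in for a real signed credential.

```python
# Sketch of the VO-level authorization flow: AS issues a credential,
# the RP (which trusts the AS) verifies it before granting access.
import hmac, hashlib

class AuthorizationService:
    """Central AS for a Virtual Organization."""
    def __init__(self, key, grants):
        self._key = key
        self._grants = grants                  # user -> set of rights

    def issue_credential(self, user, right):
        if right not in self._grants.get(user, set()):
            return None                        # AS refuses the credential
        msg = f"{user}:{right}".encode()
        return hmac.new(self._key, msg, hashlib.sha256).hexdigest()

class ResourceProvider:
    """RP trusting the AS: verifies the presented credential."""
    def __init__(self, key):
        self._key = key                        # trust anchor shared with AS

    def allow(self, user, right, credential):
        if credential is None:
            return False
        msg = f"{user}:{right}".encode()
        expected = hmac.new(self._key, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, credential)

key = b"vo-trust-anchor"
asvc = AuthorizationService(key, {"alice": {"submit-job"}})
rp = ResourceProvider(key)

cred = asvc.issue_credential("alice", "submit-job")
print(rp.allow("alice", "submit-job", cred))   # True: AS vouched for alice
print(rp.allow("alice", "delete-data",
               asvc.issue_credential("alice", "delete-data")))  # False
```

Note that the RP never consults the AS at access time; the credential itself carries the proof of authorization, which is what makes the scheme scale across a VO.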

**Service Security** entails the implementation of protection mechanisms focused on guaranteeing the availability of services. Example attack classes are QoS violation and Denial-of-Service (DoS). The first is achieved through traffic congestion, packet delay or packet drop, while the second aims to crash software and completely disrupt the service. The countermeasures against these attacks are mainly based on prevention and monitoring, in order to limit damage and raise alarms. Some approaches also try to detect the source, but an effective defense against these attacks is very hard to achieve.
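The monitoring countermeasure mentioned above can be illustrated with a minimal sliding-window rate monitor that raises an alarm when a client's request rate suggests congestion or a DoS attempt. The class name and thresholds are illustrative, not taken from any grid middleware.

```python
# Minimal monitoring sketch: per-client sliding-window request counter
# that flags clients exceeding a rate threshold (possible DoS source).
from collections import deque

class RateMonitor:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.events = {}                        # client -> deque of timestamps

    def record(self, client, now):
        """Record a request; return True if the client trips the alarm."""
        q = self.events.setdefault(client, deque())
        q.append(now)
        while q and q[0] <= now - self.window:  # drop events outside window
            q.popleft()
        return len(q) > self.max_requests

mon = RateMonitor(max_requests=3, window_seconds=1.0)
# six requests within half a second: the 4th and later trip the alarm
alarms = [mon.record("10.0.0.5", t / 10) for t in range(6)]
print(alarms)   # [False, False, False, True, True, True]
```

Real deployments would feed such alarms into an intrusion-detection or traffic-shaping layer rather than acting on a single threshold, but the windowed-counting idea is the same.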

**3.3.2 Infrastructure issues**

Infrastructure threats regard the network and host devices that form the grid infrastructure and are classified into host-level and network-level issues. They impact data protection, job protection, host availability, access control functionalities, secure routing and all the communication aspects.

**Host Security** concerns data protection and job starvation. The former regards the protection of data already stored on the host: the host submitting a job may be untrusted and the job code may be malicious. The latter is a scenario in which resources assigned to a job are denied to the original job and given to a different (malicious) one. The most effective countermeasures against data threats are: (1) application-level sandboxing, (2) virtualization, and (3) sandboxing. The first approach uses proof-carrying code (PCC): the compiler creates proofs of code safety and embeds them in the executable binary. The second solution is based on the creation of virtual machines on top of the physical host; this technique ensures strong isolation between different VMs and the host system. The third method confines system calls and sandboxes (isolates) the applications to prevent disallowed data and memory accesses. The solutions adopted to avoid job starvation are based on
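Approach (3), system-call confinement, can be sketched as an interposition layer that checks each requested call against a per-job allowlist. A real sandbox enforces this in the kernel (e.g., via mechanisms like seccomp); the `Sandbox` class below is a purely illustrative user-space model.

```python
# Toy sandbox: system calls requested by an application are checked
# against a per-job allowlist before being "executed". Illustrative only;
# real confinement happens in the kernel.

class Sandbox:
    def __init__(self, allowed):
        self.allowed = set(allowed)
        self.audit = []                        # record of denied calls

    def syscall(self, name, *args):
        if name not in self.allowed:
            self.audit.append(name)
            raise PermissionError(f"syscall {name!r} denied by sandbox")
        # an allowed call would be forwarded to the real OS here
        return f"{name} executed"

job = Sandbox(allowed={"read", "write"})
print(job.syscall("read", "/scratch/job42/input"))   # read executed
try:
    job.syscall("unlink", "/etc/passwd")             # blocked and audited
except PermissionError as e:
    print(e)
print(job.audit)                                      # ['unlink']
```

The audit trail is the point of contact with the monitoring countermeasures discussed earlier: denied calls from an untrusted job are exactly the events worth alarming on.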



Fig. 4. Taxonomy of grid security issues.
