(2007) and Francavilla et al. (2011) perfectly addresses our requirements (see Section 3.4 for details).

Due to the decomposition of the original problem into a larger number of subproblems, the scheme is well suited to a parallelization approach, since the subproblems (referred to as blocks in the following) are disjoint and the elaboration of the blocks is an intrinsically parallelizable operation. Since the generation of the entire-domain basis functions on each block is independent of the other blocks, high scalability is expected. As an example, Figure 1 shows a fighter subdivided into 4 different blocks, identified by different colors. The four blocks can be processed in parallel by four different processes in order to generate the basis functions describing each block.

Parallelization can be achieved using two different techniques: parallel programming or Grid Computing. Parallel programming is the technique with which the best results in this field have been obtained (half a billion unknowns) Mouriño et al. (2009). There are, however, important barriers to the broader adoption of this methodology: first, the algorithms must be modified using parallel programming APIs such as MPI (2011) and OpenMP (2011) in order to run jobs on the selected cores. The second aspect to consider is the hardware: to obtain results of interest, supercomputers with thousands of cores and thousands of gigabytes of RAM are required. Typically these machines are built by institutions for specific experiments and do not always allow public access.

Grid Computing is a more easily applicable model for those who cannot meet the requirements listed above Foster & Kesselman (2003). Its only requirement is the availability of computers to be used within the grid. The machines may be heterogeneous, both in type (workstations, servers) and in hardware resources (RAM, CPU, HD); they may also be recovered (recycled) machines. With virtualization technology, the number of processes running on each (multicore) machine can be optimized by instantiating multiple virtual nodes on the same physical machine. The real advantage of this model, however, is that researchers do not have to worry about rewriting their code to make the application parallel. One of the drawbacks of this model is the overhead introduced by the grid (e.g., the time needed to transfer the inputs) and by the hypervisor that manages the virtual machines. It has been shown, however, that the loss of performance due to the hypervisor is not particularly high, usually not exceeding 5% Chierici & Veraldi (2010). For the above reasons, in our case it was decided to build a grid infrastructure, composed of heterogeneous recovered machines, both physical and virtual, on which scientific applications can be run without the need to modify the source code of the algorithms used.
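
Because the blocks are disjoint, generating the basis functions block by block is an embarrassingly parallel map over the blocks, which is exactly what makes both parallel programming and Grid execution applicable. The following minimal Python sketch only illustrates this structure on local cores: the routine `generate_basis_functions` and the block data are hypothetical placeholders, and in the architecture presented in this chapter such jobs are dispatched to Grid nodes rather than to local processes.

```python
from concurrent.futures import ProcessPoolExecutor

def generate_basis_functions(block_id, block_mesh):
    """Hypothetical stand-in for the per-block generation of the
    entire-domain basis functions: each block is elaborated
    independently of all the others."""
    # ... solve the local subproblem defined on this block only ...
    return block_id, f"basis functions computed for {block_mesh}"

def process_blocks(blocks):
    # Since the blocks are disjoint, they can simply be mapped onto a
    # pool of workers (local cores here, Grid nodes in the real system).
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(generate_basis_functions, i, mesh)
                   for i, mesh in enumerate(blocks)]
        return [future.result() for future in futures]

if __name__ == "__main__":
    # E.g. the fighter of Figure 1, subdivided into 4 blocks.
    for block_id, result in process_blocks(["block 0", "block 1",
                                            "block 2", "block 3"]):
        print(block_id, result)
```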

In this chapter, a Grid Computing infrastructure is presented. This architecture allows parallel block execution by distributing tasks to the nodes that belong to the Grid. The set of nodes is composed of both physical and virtualized machines; this feature provides great flexibility and increases the available computational power. Furthermore, the presence of virtual nodes allows full and efficient use of the Grid: the presented architecture can be used by different users running different applications. The ultimate goal is the reduction of the execution times of CEM applications.

The chapter is organized as follows. Section 2 briefly explains the authors' contribution, and Section 3 describes the technologies used and summarizes the Domain Decomposition principles. The architecture is shown in Section 4, the advantages deriving from its adoption are illustrated in Section 5, and Section 6 draws conclusions and gives hints for future work.

**2. Motivation**

The rapid technological growth of recent years has fostered the development and rapid expansion of Grid Computing. "The roots of Grid Computing can be traced back from the late 80s when the research about scheduling algorithms for intensive applications in distributed environments accelerated considerably" Kourpas (2006). In the late 1990s a more generalized framework for accessing high performance computing systems began to emerge, and at the turn of the millennium the pace of change was accelerated by the recognition of potential synergies between the grid and the emerging Service Oriented Architectures (SOA), through the creation of the Open Grid Services Architecture (OGSA) Kourpas (2006). "The Open Grid Services Architecture is a set of standards defining the way in which information is shared among several components of large, heterogeneous grid systems. The OGSA is, in effect, an extension and refinement of the Service Oriented Architecture" OGSA (2007). The Open Grid Services Architecture is a standard created by the Open Grid Forum (OGF) OGF (2011). OGF was founded in 2006 from the merger of the Global Grid Forum (GGF) and the Enterprise Grid Alliance (EGA). GGF had a rich background and an established international presence within the academic and research communities, while EGA was focused on developing and promoting enterprise grid solutions. The OGF community now counts thousands of members working in research and industry, representing more than 400 organizations in 50 countries.

**3. Background**