**7.4 Operations**

Another important issue is sustainability and the availability of support for WLCG operations. The middleware used today for WLCG operations is considerably complex, with many services (e.g., databases) unnecessarily replicated in many places, mainly due to original concerns about the network. The new conception is to move gradually toward more standard solutions instead of the often highly specialized middleware packages maintained and developed by WLCG.

**7.5 Clouds and virtualization**

As argued in [70], "Cloud Computing not only overlaps with Grid Computing, it is indeed evolved out of Grid Computing and relies on Grid Computing as its backbone and infrastructure support. The evolution has been a result of a shift in focus from an infrastructure that delivers storage and compute resources (in the case of Grids) to one that is economy based ....". Both the Grid and Cloud communities face the same problems, such as the need to operate large facilities and to develop methods by which users/consumers discover, request and use resources provided by centralized facilities.

A number of projects are investigating and developing Cloud-to-Grid interfaces, with the idea that Grid and Cloud computing serve different use cases and can work together to improve Distributed Computing Infrastructures (see, e.g., [71]). CERN is also involved in this activity, together with other international laboratories in Europe.

With the WLCG resources being used up to their limits, using commercial Clouds to process the LHC data is a strategy that should be assessed. Several of the LHC experiments have run tests of whether they can use commercial Clouds, but today the cost is rather high. There are also open issues, such as whether academic data may be shipped over academic networks to a commercial provider, and how to control what happens to the data there.

Nevertheless, a strategy of deploying Cloud interfaces over the WLCG resources, managed with a high level of virtualization, is under evaluation. Some level of collaboration with industry would provide an understanding of how to deploy this properly and at what cost. Cloud and Grid interfaces can be deployed in parallel or on top of each other. This development might also offer a path toward a more standardized infrastructure and allow transparent use of commercial Clouds.

A testbed of such an architecture is the CERN LXCloud [72] pilot cluster. The implementation at CERN makes it possible to present a Cloud interface or to access other public or commercial Clouds, with no change to any of the existing Grid services. Another interesting example is the development of a comprehensive open-source IaaS (Infrastructure as a Service) Cloud distribution within the StratusLab project [71], see Figure 26. Anyone can take the code, deploy it at their site and have an IaaS Cloud running there. The project focuses on deploying Grid services on top of this Cloud, 1) to serve the existing European Grid infrastructures and enable their users to run on Cloud-operated resources, and 2) because the developers consider the Grid services very complex: making sure they run safely on this Cloud should guarantee that other applications will also run without problems.

Fig. 26. Schema of StratusLab IaaS Cloud interoperability with a Grid

**7.6 Grids and Clouds**

Among the new technologies, Clouds are the buzzword of the moment, and virtualization of resources comes along with them. The virtualization of WLCG sites started prior to the first LHC collisions and has already gone quite far. It helps improve system management and the provision of services on demand, and it can make the use of resources more effective and efficient. Virtualization also enables the use of industrial and commercial solutions.

But no matter what the current technologies advertise, the LHC community will always use a Grid, because the scientists need to collaborate and share resources. No matter what technologies are used underneath the Grid, the collaborative sharing of resources, the network of trust and the whole security infrastructure developed while building the WLCG are of enormous value, not only to the WLCG community but to e-science in general. It allows people to collaborate across the infrastructures.
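The elastic, on-demand provisioning behind testbeds such as LXCloud can be sketched as follows. This is a minimal illustration only, assuming a hypothetical IaaS endpoint: the class and method names (`IaaSProvider`, `start_vm`, `scale`) are illustrative assumptions, not the actual LXCloud or StratusLab API. The idea is that when the Grid batch queue backs up, virtual worker nodes are requested from the Cloud, and released again when demand drops.

```python
# Hedged sketch of "cloud bursting" for a Grid site: scale the number of
# virtual worker nodes with the pending-job count. All names are
# hypothetical; a real deployment would call an IaaS API instead.

from dataclasses import dataclass, field
from typing import List


@dataclass
class IaaSProvider:
    """Stand-in for an IaaS endpoint (e.g. a StratusLab-like service)."""
    capacity: int                          # max VMs the provider grants
    running: List[str] = field(default_factory=list)

    def start_vm(self, image: str) -> str:
        if len(self.running) >= self.capacity:
            raise RuntimeError("provider capacity exhausted")
        vm_id = f"vm-{len(self.running)}"  # pretend a worker-node VM booted
        self.running.append(vm_id)
        return vm_id

    def stop_vm(self, vm_id: str) -> None:
        self.running.remove(vm_id)         # release the virtual worker node


def scale(provider: IaaSProvider, pending_jobs: int, jobs_per_vm: int = 4) -> int:
    """Request or release VMs so that capacity tracks the pending jobs."""
    wanted = min(-(-pending_jobs // jobs_per_vm), provider.capacity)  # ceil
    while len(provider.running) < wanted:
        provider.start_vm("grid-worker-image")
    while len(provider.running) > wanted:
        provider.stop_vm(provider.running[-1])
    return len(provider.running)
```

A scheduler loop at the site would call `scale()` periodically with the current batch-queue length; crucially, as in LXCloud, the Grid services themselves need not change, since the VMs simply appear as additional worker nodes.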

**9. References**

[1] D.H. Perkins: Introduction to High Energy Physics, Cambridge University Press, 4th edition (2000), ISBN-13: 978-0521621960.

[2] STAR Collaboration: Experimental and theoretical challenges in the search for the quark gluon plasma: The STAR Collaboration's critical assessment of the evidence from RHIC collisions, Nucl. Phys. A757 (2005) 102-183.

[3] CERN - the European Organization for Nuclear Research; http://public.web.cern.ch/public/

[4] The Large Hadron Collider at CERN; http://lhc.web.cern.ch/lhc/; http://public.web.cern.ch/public/en/LHC/LHC-en.html

[5] ALICE Collaboration: http://aliceinfo.cern.ch/Public/Welcome.html

[6] ATLAS Collaboration: http://atlas.ch/

[7] CMS Collaboration: http://cms.web.cern.ch/

[8] LHCb Collaboration: http://lhcb-public.web.cern.ch/lhcb-public/

[9] TOTEM Experiment: http://cern.ch/totem-experiment

[10] LHCf Experiment: http://cdsweb.cern.ch/record/887108/files/lhcc-2005-032.pdf

[11] W.N. Cottingham and D.A. Greenwood: An Introduction to the Standard Model of Particle Physics, Cambridge University Press, 2nd edition (2007), ISBN-13: 978-0521852494.

[12] Worldwide LHC Computing Grid: http://public.web.cern.ch/public/en/lhc/Computing-en.html

[13] LHC Computing Grid: Technical Design Report, http://lcg.web.cern.ch/LCG/tdr/

[14] I. Foster and C. Kesselman: The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, 1999; I. Foster et al: The Anatomy of the Grid: Enabling Scalable Virtual Organizations, International Journal of High Performance Computing Applications Vol. 15 (2001), p. 200.

[15] WLCG Memorandum of Understanding, http://lcg.web.cern.ch/lcg/mou.htm

[16] EGI - The European Grid Initiative; http://web.eu-egi.eu/

[17] OSG - The Open Science Grid, http://www.opensciencegrid.org/; https://osg-ress-1.fnal.gov:8443/ReSS/ReSS-prd-History.html

[18] I. Legrand et al: MONARC Simulation Framework, ACAT'04, Tsukuba, Japan, 2004; http://monarc.cacr.caltech.edu:8081/www_monarc/monarc.htm

[19] CERN Advanced Storage Manager: http://castor.web.cern.ch/castor/

[20] I. Bird: LHC Computing: After the first year with data, TERENA Networking Conference (TNC2011), Prague, 2011, https://tnc2011.terena.org/web/media/archive/7A

[21] LHCOPN - The Large Hadron Collider Optical Private Network, https://twiki.cern.ch/twiki/bin/view/LHCOPN/WebHome

[22] The GEANT Project, http://archive.geant.net/

Proceedings of 35th International Conference of High Energy Physics, July 22-28, 2010, Paris, France, Proceedings of Science (PoS) electronic Journal: ICHEP 2010
