With the transition from ERNET to NKN for the network layer, an interoperation problem arose, since the nodes within the GARUDA grid were no longer visible to the external world. Thanks to an effort coordinated by the EU-IndiaGrid2 project together with ERNET, NIC and CDAC, this issue was solved in the context of the EU-IndiaGrid2 Workshop held in Delhi in December, and since the end of 2010 the whole GARUDA infrastructure has been visible to worldwide grids. In addition, the project supported interoperability between the European Grid Initiative, EGI (www.egi.eu), and the GARUDA grid infrastructure, which is now possible using a metascheduler based on GridWay (Huedo 2005).

The TEIN3 link from Europe to Mumbai was commissioned in March 2010. However, a number of issues related to the connectivity between the TEIN2 PoP and the WLCG Tier2 at TIFR needed to be solved. Again, through the coordinated effort of NIC and the EU-IndiaGrid2 partners, it has been possible since fall 2010 to exploit the TEIN3 links for LHC data transfers. Since the Academia Sinica Computing Centre acts as the reference Tier1 for the CMS Tier2 at TIFR, both TEIN3 links (to Europe for CERN and to Singapore for the Academia Sinica Tier1) are crucial for WLCG operation in India. In addition, the commissioning of the 1 Gbps NKN connectivity from Kolkata to Mumbai makes the international connectivity available also for the ALICE experiment.

Finally, the collaboration between the Bhabha Atomic Research Centre (BARC) in Mumbai and the Commissariat à l'Énergie Atomique (CEA) in Grenoble represents an excellent showcase for the use of NKN-TEIN3-GÉANT connectivity for remote control and data collection at the Grenoble beam facility. The BARC and CEA research groups collaborate in experiments dedicated to the crystallography of biological macromolecules, using protein crystallography beamlines. Two facilities have been set up in India that allow remote operation of the FIP beamline at the ESRF in Grenoble. Good X-ray diffraction data have been collected on crystals of the drug-resistant HIV-1 protease enzyme. Both BARC and CEA are EU-IndiaGrid2 partners and this activity is fully supported by the EU-IndiaGrid2 project.

**4. Tools and methodologies within EU-IndiaGrid projects**

In this section we will briefly discuss some tools and methodologies we successfully developed within the lifetime of the two EU-IndiaGrid projects in order to enable full exploitation of the scientific applications we promoted on the GRID. The motivation behind this development effort lies in the requirements of the user communities involved in the projects. User communities issued several requests. In particular, users wanted:

i. Training: additional tools and methods to learn how to use the Grid.

ii. Specific advanced services to better implement their computational scientific packages.

iii. Tools to use all the available grid infrastructures easily and seamlessly.

In the following subsections we will highlight three different actions, one for each category listed above.

**4.1 Parallel support on GRID**

Many scientific applications, for instance climate modelling simulations, require a parallel computing approach, and many tasks are of the tightly coupled type. The question of how to run parallel jobs on the grid is therefore of great importance, and we address it here. Nowadays multicore architectures are widely available, even on the European GRID, but they are only suitable for small and medium size jobs. Distributed-memory, multi-node clusters are still the only viable tool for serious scientific computing, which is generally done through the MPI paradigm.

Our aim was thus to provide a simple, transparent and efficient mechanism to exploit MPI distributed-memory parallelism over capable GRID Computing Elements (CEs).

As of today, the gLite middleware does not yet provide proper MPI support. gLite is now integrating the MPI-Start mechanism, a set of scripts designed to make it easy to detect and use site-specific MPI-related configuration: it can select the proper MPI distribution and batch scheduler, and it can distribute files when no shared disk space is available. The MPI-Start scripts also handle the user's pre/post execution actions. However, from the users' point of view, the JDL attributes that could characterize MPI-enabled CEs in job description files are misleading and describe the wrong level of abstraction. The EGEE MPI working group therefore proposed (more than one year ago) three new attributes, in addition to CPUNumber, to request MPI-type distributed resources explicitly: WholeNodes, SMPGranularity and HostNumber.
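To make the discussion concrete, the sketch below shows how an MPI job is typically described today with the MPI-Start approach. The wrapper script, hook script and application names (mpi-start-wrapper.sh, mpi-hooks.sh, my_mpi_app) are placeholders rather than names taken from the project, and the Requirements expression assumes that sites publish the usual MPI-START and OPENMPI software tags.

```
// Illustrative JDL for an MPI job using the MPI-Start wrapper approach.
JobType       = "Normal";
CPUNumber     = 16;                      // total number of CPUs requested
Executable    = "mpi-start-wrapper.sh";  // wrapper that invokes MPI-Start
Arguments     = "my_mpi_app OPENMPI";    // application binary and MPI flavour
InputSandbox  = {"mpi-start-wrapper.sh", "mpi-hooks.sh", "my_mpi_app"};
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
Requirements  = Member("MPI-START", other.GlueHostApplicationSoftwareRunTimeEnvironment)
                && Member("OPENMPI", other.GlueHostApplicationSoftwareRunTimeEnvironment);
```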


Even if it is still an open question whether WholeNodes should take priority over SMPGranularity, and whether SMPGranularity should take priority over CPUNumber, these attributes would nevertheless be a great improvement for submitting parallel jobs on the gLite infrastructure. Unfortunately, they have yet to be implemented in the gLite middleware.

There are, however, patches available that enable the WMS and the CREAM CE to recognize these new attributes. The EUIndia WMS and some CEs have therefore been patched, and the patches are now available and distributed within our Virtual Organization. The new attributes allow GRID users to submit their MPI parallel jobs transparently to the MPI-capable resources and, furthermore, to fine-tune their requests to match the job requirements.
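For instance, on a patched WMS and CREAM CE a user could ask for two whole worker nodes with eight cores each by adding the proposed attributes to the job description; the values below are purely illustrative, and the rest of the description (executable, sandboxes, and so on) would remain as in the earlier sketch.

```
// Illustrative use of the proposed attributes on a patched WMS/CREAM CE:
// two whole worker nodes, eight cores per node, sixteen MPI processes in total.
WholeNodes     = true;   // reserve complete worker nodes
SMPGranularity = 8;      // cores required on each node
HostNumber     = 2;      // number of distinct nodes
CPUNumber      = 16;     // total number of MPI processes
```

The job is then submitted in the usual way (e.g. with glite-wms-job-submit), and the patched WMS matches it only to MPI-capable resources that can satisfy the request.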
