• High transaction rate with rapid request dispersement (fast open, low latency)
• Write once read many times processing mode
• Resources gentle, high efficiency data server (low CPU/byte overhead, small memory footprint)
• Full POSIX access
• Generic Mass Storage System Interface (HPSS, CASTOR, etc.)
• Fault tolerance (if servers go, the clients do not die)

Additional features:

• Server clustering for scalability, supports a large number of clients from a small number of servers
• Up to 262000 servers per cluster
• Fault tolerance (able to manage distributed replicas in real time)
• High WAN data access efficiency (exploits the throughput of modern WANs for direct data access, and for copying files as well)
• Integrated in ROOT (see the sketch after these lists)

From the site administrator's point of view, the following features are important:

• Simple installation
• Configuration requirements scale linearly with site complexity
• No 3rd party software needed (avoids messy dependencies)
• No database requirements (no backup/recovery issues, high performance)
• Self-organizing servers remove the need for configuration changes in big clusters
• Low administration costs
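
The ROOT integration means that a file held by a remote xrootd server can be opened from a ROOT session in the same way as a local file, which is also how the direct WAN data access is typically exercised. The macro below is only a minimal sketch: the server name and file path are placeholders, not a real ALICE endpoint.

```cpp
// read_remote.C -- minimal ROOT macro; run with: root -l read_remote.C
// The server name and file path are placeholders, not a real ALICE endpoint.
#include "TFile.h"
#include "TError.h"

void read_remote()
{
   // TFile::Open dispatches to the xrootd client plugin for root:// URLs,
   // so the file is read directly over the WAN, without a local copy.
   TFile *f = TFile::Open("root://xrootd.example.org//data/run1234/sample.root");
   if (!f || f->IsZombie()) {
      Error("read_remote", "could not open remote file");
      return;
   }
   f->ls();      // list the objects stored in the file
   f->Close();
   delete f;
}
```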

**3.3.3 VOMS**

VOMS [41] stands for Virtual Organization Membership Service and is one of the most commonly used Grid technologies for providing user access to Grid resources. It works with users that hold valid Grid certificates and represents a set of tools that assist the authorization of users based on their affiliation. It serves as a central repository for user authorization information, providing support for sorting users into a general group hierarchy: users are grouped as members of Virtual Organizations (VOs). It also keeps track of users' roles and provides interfaces for administrators to manage the users. It was originally developed for the EU DataGrid project.
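
The VO, group and role information that VOMS associates with a user is conventionally encoded in so-called FQANs (fully qualified attribute names) of the form /<vo>/<group>/Role=<role>. The snippet below is only an illustrative sketch of that structure; the FQAN shown is a hypothetical example, not an actual ALICE configuration.

```cpp
// fqan_demo.cxx -- illustrative only: splits a VOMS-style FQAN
// ("fully qualified attribute name") into VO, group path and role.
// The example FQAN is hypothetical, not an actual ALICE configuration.
#include <iostream>
#include <string>

int main()
{
    // A VOMS FQAN encodes VO membership, group and role in one string:
    //   /<vo>/<subgroup>.../Role=<role>/Capability=<cap>
    std::string fqan = "/alice/Role=production/Capability=NULL";

    // Group part: everything before the first "/Role=".
    std::string::size_type rolePos = fqan.find("/Role=");
    std::string groups = (rolePos == std::string::npos) ? fqan : fqan.substr(0, rolePos);
    std::string role   = (rolePos == std::string::npos)
                           ? "NULL"
                           : fqan.substr(rolePos + 6, fqan.find('/', rolePos + 1) - rolePos - 6);

    // The first path component is the VO itself.
    std::string vo = groups.substr(1, groups.find('/', 1) - 1);

    std::cout << "VO:     " << vo     << "\n"
              << "groups: " << groups << "\n"
              << "role:   " << role   << "\n";
    return 0;
}
```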

**3.3.4 VOBOX**

The VOBOX [42] is a standard WLCG service developed in 2006 in order to provide the LHC experiments with a place where they can run their own specific agents and services. In addition, it provides file system access to the experiment software area. This area is shared between the VOBOX and the Worker Nodes at the given site. In the case of ALICE, the VOBOX is installed at the WLCG sites on dedicated machines and its installation is mandatory for sites supporting ALICE.

**4. ALICE computing model**

In this section, we will briefly describe the computing model of the ALICE experiment [43].

ALICE (A Large Ion Collider Experiment) [5] is a dedicated heavy-ion (HI) experiment at the CERN LHC which, apart from its HI mission, also has a proton-proton (pp) physics program. Together with the other LHC experiments, ALICE has been successfully taking and processing pp and HI data since the LHC startup in November 2009. During pp running the data taking rate has been up to 500 MB/s, while during HI running data were taken at rates up to 2.5 GB/s. As already mentioned, during 2010 the total volume of data taken by all the LHC experiments reached 15 PB, corresponding to 7 months of pp running and 1 month of HI running (together with 4 months of LHC technical stop for maintenance and upgrades, this makes up one standard data taking year (SDTY)).
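
To get a feeling for the scale of these numbers, the following back-of-envelope sketch simply multiplies the quoted peak rates by the nominal running periods. It assumes uninterrupted data taking at peak rate, which is a deliberate overestimate (real accelerator and detector duty cycles are well below 100%), so the results are crude upper bounds rather than actual ALICE data volumes.

```cpp
// volume_estimate.cxx -- back-of-envelope sketch only. It assumes
// uninterrupted data taking at the peak rates quoted in the text, which
// overestimates the real volume (actual duty cycles are much lower).
#include <cstdio>

int main()
{
    const double month_s = 30.0 * 24 * 3600;        // seconds in one month
    const double pp_rate = 0.5e9;                    // 500 MB/s peak pp rate
    const double hi_rate = 2.5e9;                    // 2.5 GB/s peak HI rate
    const double PB      = 1e15;                     // bytes per petabyte

    double pp_volume = 7 * month_s * pp_rate / PB;   // 7 months of pp running
    double hi_volume = 1 * month_s * hi_rate / PB;   // 1 month of HI running

    std::printf("pp upper bound: %.1f PB\n", pp_volume);   // ~9.1 PB
    std::printf("HI upper bound: %.1f PB\n", hi_volume);   // ~6.5 PB
    return 0;
}
```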

The computing model of ALICE relies on the ALICE Computing Grid, a distributed computing infrastructure based on the hierarchical Tier structure described in section 2. Over the last 10 years, ALICE has developed a distributed computing environment and its implementation, the Grid middleware suite AliEn (ALICE Environment) [30], which is integrated into the WLCG environment. It provides transparent access to computing resources for the ALICE community and will be described in the next section.
