
Grid Computing – Technology and Applications, Widespread Coverage and New Horizons

6(b), and the results show that the key factor influencing PCMGM performance is the capacity of the Grid memory. Hence, mining large numbers of idle computers to construct the Grid memory can improve the concurrent-access response time for the hot segments during peak access.

Fig. 6. The test results (panels (a) and (b): ratios versus number of users for the Sequence, Random and Mix access patterns)

**8. Conclusions**

To support concurrent access to the hot segments of large files during peak access phases on the Internet, this paper proposed a parallel cache model based on Grid memory (PCMGM). During peak access, the concurrent accessing of hot segments depends greatly on disk performance. Through Grid techniques, a large number of idle computers can be mined to construct the Grid memory, which stores duplicates of the hot segments. Because of the heterogeneous resources and unstable characteristics of a dynamic network environment, building an effective cache model there is very difficult. Through the techniques of *CA* and *DSEG*, dynamic learning, and logical computer-cluster partitioning based on fuzzy theory, PCMGM supports parallel caching. These approaches can raise the peak-access performance for hot segments of large files, and the model is well suited to Grid computing and to large-scale VOD on the Internet.
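As a minimal illustration of the core idea — duplicating hot segments onto grid-memory nodes and spreading concurrent reads over the replicas instead of hitting a single disk — the following Python sketch may be helpful. All names (`GridMemoryCache`, `publish`, `read`), the hash-based placement, and the least-loaded replica selection are hypothetical simplifications; the chapter does not specify these details.

```python
import hashlib
from collections import defaultdict

class GridMemoryCache:
    """Hypothetical sketch of a PCMGM-style cache: hot file segments are
    duplicated across several grid-memory nodes, and concurrent reads
    are spread over the replicas instead of a single disk."""

    def __init__(self, nodes, replication=2):
        self.nodes = list(nodes)        # idle machines donating memory
        self.replication = replication  # duplicates kept per hot segment
        self.placement = {}             # segment id -> its replica nodes
        self.load = defaultdict(int)    # reads served per node so far

    def _replicas_for(self, seg_id):
        # Deterministic placement: hash the segment id onto the node list.
        start = int(hashlib.md5(seg_id.encode()).hexdigest(), 16) % len(self.nodes)
        return [self.nodes[(start + i) % len(self.nodes)]
                for i in range(self.replication)]

    def publish(self, seg_id):
        """Duplicate a hot segment onto its replica nodes."""
        self.placement[seg_id] = self._replicas_for(seg_id)

    def read(self, seg_id):
        """Serve a read from the least-loaded replica (parallel cache path)."""
        replicas = self.placement.get(seg_id)
        if replicas is None:
            return None                 # cache miss: fall back to disk
        node = min(replicas, key=lambda n: self.load[n])
        self.load[node] += 1
        return node

cache = GridMemoryCache(["n0", "n1", "n2", "n3"], replication=2)
cache.publish("movie.bin/seg-17")
servers = [cache.read("movie.bin/seg-17") for _ in range(4)]
```

With two replicas, the least-loaded rule alternates the four concurrent reads between the two nodes holding the duplicate, which is the load-spreading effect the test results above attribute to a larger Grid memory.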


**8**

**Hierarchy-Aware Message-Passing in the Upcoming Many-Core Era**

Carsten Clauss, Simon Pickartz, Stefan Lankes and Thomas Bemmerl

*Chair for Operating Systems, RWTH Aachen University, Germany*

**1. Introduction**

The demands of large parallel applications often exceed the computing and memory resources that a local computing site offers. Combining distributed computing resources, as provided by Grid environments, can therefore help to satisfy these resource demands. However, since such an environment is by nature a heterogeneous system, there are some drawbacks that, if not taken into account, limit its applicability. In particular, the inter-site communication often constitutes a bottleneck, exhibiting higher latencies and lower bandwidths than the site-internal case. The reason is that inter-site communication is typically handled via wide-area transport protocols and the respective networks, whereas internal communication is conducted via fast local-area networks or even via dedicated high-performance cluster interconnects. This in turn means that an efficient utilization of such a hierarchical and heterogeneous infrastructure demands a Grid middleware that supports all these different kinds of communication facilities (Clauss et al., 2008). Moreover, with the upcoming Many-core era, a further level of hierarchy is introduced in terms of *Cluster-on-Chip* processor architectures. The Single-chip Cloud Computer (SCC) experimental processor, a *concept vehicle* created by Intel Labs as a platform for Many-core software research (Intel Corporation, 2010), is a very recent example of such a Cluster-on-Chip architecture. In this chapter, we discuss the challenges of hierarchy-aware message-passing in distributed Grid environments in the upcoming Many-core era, taking the SCC as an example. The remainder of this chapter is organized as follows: Section 2 initially reviews the basics of parallel processing and message-passing. Section 3 details the demands for parallel processing and message-passing, especially in Grid computing environments. Section 4 focuses on the Intel SCC Many-core processor and how message-passing can be conducted on this chip. Afterwards, Section 5 discusses how the world of chip-embedded Many-core communication can be integrated into the macrocosmic world of Grid computing. Finally, Section 6 concludes this chapter.

**2. Parallel processing using message-passing**

With a rising number of cores in today's processors, parallel processing is a prevailing field of research. One approach is the *message-passing paradigm*, where parallelization is achieved by having processes that can exchange messages with other processes. Instead

