**4. Parallel algorithm analysis**

Amdahl's law stipulates the upper limit on the speedup obtainable from parallelism, and Gustafson's law quantifies the acceleration effect when the problem size scales [21]. Complementing Amdahl's and Gustafson's laws, scalability measures model the performance of a computation. Assume that the running time of a computing problem is *T*(*N*, *x*), the running time on *N* processor cores for a problem of size *x*. The running efficiency can then be used to demonstrate the effectiveness of a parallel core program in two respects.

#### **4.1 Strong scalability**

Strong scalability measures how the computing time changes as the number of processor cores increases for a fixed problem size; the resulting ratio is known as the speedup:

$$S_{\text{strong}} = \frac{T(1, x)}{T(N, x)} \tag{5}$$
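The speedup of Eq. (5), and the Amdahl bound that limits it, can be sketched as follows. The timings and the serial fraction below are hypothetical values chosen only for illustration:

```python
# Strong scaling: fixed problem size x, varying core count N.

def strong_speedup(t1, tn):
    """S_strong = T(1, x) / T(N, x)."""
    return t1 / tn

def amdahl_bound(serial_fraction, n):
    """Amdahl's upper bound on speedup with N cores when a
    fraction f of the work is inherently serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

# Hypothetical timings: T(1, x) = 100 s, T(8, x) = 16 s.
print(strong_speedup(100.0, 16.0))  # 6.25
print(amdahl_bound(0.05, 8))        # ≈ 5.93 if 5% of the work is serial
```

Note that the measured speedup can never exceed the Amdahl bound for the program's actual serial fraction, which is why strong scalability flattens out as cores are added.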

#### **4.2 Weak scalability**

Here the problem size per processor (or process) is fixed, and the running time is measured as the number of processors changes:

$$S_{\text{weak}} = \frac{T(1, x)}{T(N, x \cdot N)} \tag{6}$$
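Eq. (6) and Gustafson's scaled speedup can be sketched in the same way, again with hypothetical timings:

```python
# Weak scaling: the total problem grows with the core count
# (size x*N on N cores), so ideal weak scaling gives a ratio of 1.

def weak_speedup(t1_x, tn_xn):
    """S_weak = T(1, x) / T(N, x*N); values near 1 indicate good weak scaling."""
    return t1_x / tn_xn

def gustafson_speedup(serial_fraction, n):
    """Gustafson's scaled speedup for N cores: S = N - f*(N - 1)."""
    return n - serial_fraction * (n - 1)

# Hypothetical timings: T(1, x) = 100 s, T(8, 8x) = 110 s.
print(weak_speedup(100.0, 110.0))   # ≈ 0.91
print(gustafson_speedup(0.05, 8))   # ≈ 7.65 if 5% of the work is serial
```

Under Gustafson's view the achievable scaled speedup grows almost linearly with *N*, which is why statistically dominated codes such as MC emphasize the weak-scaling statistic.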

Deterministic reactor-core calculations mainly track *S*<sub>strong</sub> as the number of processor cores varies. In practice, deterministic core computation has the feature that the problem scale is tied to the physical model. For instance, when the characteristic rays are fixed on every processor, adding processors causes the MOC transport to introduce new characteristic rays and a new arrangement, so the data layout and computational content may change with the program pattern. By contrast, in nondeterministic core calculations such as the MC method, the computational features remain unchanged, so the problem places more emphasis on the statistics of weak scalability.

The design steps proposed by Ian Foster, also known as the PCAM method, are the most widely used approach to parallel algorithm design and programming [22].


#### **Table 3.**

*Two different analysis steps.*

1. Partition identifies the computing content that can be executed in parallel.

2. Communication determines the communication data of each parallel task.


At present, cluster computers often run in heterogeneous development environments, and parallel algorithms come in many kinds, such as pipeline parallelism, master-slave parallelism, and so on. To facilitate engineering specifications, the usual analysis steps of a parallel algorithm are divided into the two cases in **Table 3**.
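The two steps in **Table 3** can be sketched on a toy 1-D domain decomposition. The block partitioning scheme and the boundary exchange below are illustrative assumptions, not the actual core solver:

```python
# Step 1 (Partition): split the cell indices into contiguous blocks,
# one block per worker, so each block can be computed in parallel.
def partition(cells, n_workers):
    block = (len(cells) + n_workers - 1) // n_workers  # ceiling division
    return [cells[i * block:(i + 1) * block] for i in range(n_workers)]

# Step 2 (Communication): determine what each task must exchange.
# Here each block would send its boundary cells to its left/right
# neighbour; the messages are only listed, not actually sent.
def communication(blocks):
    msgs = []
    for rank, blk in enumerate(blocks):
        if rank > 0:
            msgs.append((rank, rank - 1, blk[0]))    # left boundary cell
        if rank < len(blocks) - 1:
            msgs.append((rank, rank + 1, blk[-1]))   # right boundary cell
    return msgs

blocks = partition(list(range(10)), 3)
print(blocks)                # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
print(communication(blocks))
```

Listing the communication pairs explicitly, before any code is written, is the point of the analysis step: it exposes the data each parallel task depends on.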

In summary, parallel analysis includes interpretation, design, and performance verification. The parallel analysis of reactor-core calculation summarized here emphasizes meeting the application requirements; the common features of the algorithm are then abstracted and recorded concisely, so that the analysis can be understood clearly and simply.
