Application of Finite Symmetry Groups to Reactor Calculations

**2. Basic group theoretic tools**

Although the basic mathematical definition of a group and much of the abstract algebraic machinery applies to finite, infinite, and continuous groups, our interest for applications in nuclear engineering is limited to finite point groups. Furthermore, it should be kept in mind that most of the properties of the crystallographic point groups needed in applications, such as the group multiplication tables, the class structures, the irreducible representations, and the characters, are tabulated in reference books or can be obtained with modern software such as MAPLE or MATHEMATICA.

The application of group theory in reactor physics is not restricted to solving the diffusion equation. We wish to also point the interested reader to other areas of Nuclear Engineering where group theory has proven useful.

An early application of group theory to Nuclear Engineering has been in the design of control systems for nuclear reactors (Nieva, 1997). Symmetry considerations allow the decoupling of the linear reactor model into decoupled models of lower order. Thereby, control systems can be developed for each submodel independently.

Similarly, group theoretic principles have been shown to allow the decomposition of solution algorithms of boundary value problems in Nuclear Engineering to be specified over decoupled symmetric domains. This decomposition makes the problem amenable to implementation for parallel computation (Orechwa & Makai, 1997).

Group theory is applicable in the investigation of the homogenization problem. D. S. Selengut addressed this problem (Selengut, 1960) in 1960, formulating the following principle: if the response matrix of a heterogeneous material distribution in a volume *V* can be substituted by the response matrix of a homogeneous material distribution in *V*, then there exists a homogeneous material with which one may replace *V* in the core. This principle, whose validity is widely assumed in reactor physics, was investigated applying group theoretic principles (Makai, 1992; Makai, 2010). It was shown that Selengut's principle is not exact; it is only a good approximation under specific circumstances, namely, that the homogenization recipes preserve only specific reaction rates, but do not provide general equivalence.

Group theory has also been fruitfully applied to in-core signal processing (Makai & Orechwa, 2000). Core surveillance and monitoring are implemented in power reactors to detect any deviation from the nominal design state of the core. This state is defined by a field that is the solution of an equation that describes the physical system. Based on measurements of the field at limited positions, the following issues can be addressed:

1. Determine whether the operating state is consistent with the design state.
2. Find out-of-calibration measurements.
3. Give an estimate of the values at non-metered locations.
4. Detect loss-of-margin as early as possible.
5. Obtain information as to the cause of a departure from the design state.

The solution to these problems requires a complex approach that incorporates numerical calculations incorporating group theoretic considerations and statistical analysis.

The benefits of group theory are not restricted to numerical problems. In 1985 Toshikazu Sunada (Sunada, 1985) made the following observation: if the operator of the equation over a volume *V* commutes with a symmetry group *G*, the Green's function for the volume *V* is known, and *V* can be tiled with copies of a tile *t* (subvolumes of *V*), then the Green's function of *t* can be obtained by a summation over the elements of the symmetry group *G*. Thus by means of group theory, one can separate the solution of a boundary value problem into a geometry dependent part and a problem dependent part. The former carries information on the structure of the volume in which the boundary value problem is studied, the latter on the physical processes taking place in the volume. That separation allows for extending the usage of the Green's function technique, as it is possible to derive Green's functions for a number of finite geometrical objects (square, rectangle, and regular triangle) as well as to relate Green's functions of finite objects, such as a disk, a disk sector, a regular hexagon, a trapezoid, etc. Such relations are needed in problems in heat conduction, diffusion, etc. as well.

#### **2.1 Group definition**
An abstract group *G* is a set of elements for which a law of composition or "product" is defined.

For illustrative purposes let us consider a simple set of three elements {*E*, *A*, *B*}. A law of composition for these three elements can be expressed in the form of a multiplication table, see Table 1. In position (*i*, *j*) of Table 1 we find the product of element *i* and element *j*, with the numbering 1 → *E*, 2 → *A*, 3 → *B*. From the table we can read off that *B* = *AA*, because the entry in position (2, 2) is *B*, and the row of *A* contains the products *AE*, *AA*, *AB*. The table is symmetric, therefore *AB* = *BA*. Such a group is formed, for example, by the even permutations of three objects: *E* = (*a*, *b*, *c*), *A* = (*c*, *a*, *b*), *B* = (*b*, *c*, *a*). The multiplication table reflects four necessary

|       | *E* | *A* | *B* |
|-------|-----|-----|-----|
| *E*   | *E* | *A* | *B* |
| *A*   | *A* | *B* | *E* |
| *B*   | *B* | *E* | *A* |

Table 1. Multiplication table for elements {*E*, *A*, *B*}

conditions that a set of elements must satisfy to form a group *G*. These four conditions are:

1. Closure: the product of any two elements of the set is again an element of the set.
2. Associativity: (*AB*)*C* = *A*(*BC*) for any three elements.
3. Identity: the set contains a unit element *E* such that *EA* = *AE* = *A* for every element *A*.
4. Inverse: for every element *A* the set contains an inverse *A*⁻¹ such that *AA*⁻¹ = *A*⁻¹*A* = *E*.
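These conditions can be checked mechanically. The sketch below encodes Table 1 as a lookup table (a hypothetical helper, not from the text) and verifies all four postulates:

```python
# Sketch: verify the four group postulates for {E, A, B} using the
# multiplication table of Table 1, encoded as a lookup table.

TABLE = {  # TABLE[x][y] is the product xy read from Table 1
    "E": {"E": "E", "A": "A", "B": "B"},
    "A": {"E": "A", "A": "B", "B": "E"},
    "B": {"E": "B", "A": "E", "B": "A"},
}
ELEMENTS = list(TABLE)

def mul(x, y):
    return TABLE[x][y]

# 1. Closure: every product is again one of E, A, B.
assert all(mul(x, y) in ELEMENTS for x in ELEMENTS for y in ELEMENTS)

# 2. Associativity: (xy)z == x(yz) for every triple.
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in ELEMENTS for y in ELEMENTS for z in ELEMENTS)

# 3. Identity: E is a unit element.
assert all(mul("E", x) == x == mul(x, "E") for x in ELEMENTS)

# 4. Inverse: each element has an inverse in the set.
inverse = {x: next(y for y in ELEMENTS if mul(x, y) == "E") for x in ELEMENTS}
assert all(mul(inverse[x], x) == "E" for x in ELEMENTS)

# The table is symmetric, so the group is commutative: AB == BA.
assert mul("A", "B") == mul("B", "A")
```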


The application of group theory to physical problems arises from the fact that many characteristics of physical problems, in particular symmetries and invariance, conform to the definition of a group, thereby allowing us to bring the machinery of abstract group theory to bear on the solution of physical problems.

For example, if we consider the symmetries of an equilateral triangle, we observe the following with regard to counterclockwise rotations by 120 degrees. Let us give the operations the following symbols: *E*, no rotation; *C₃*, rotation by 120°; *C₃C₃* = *C₃²*, rotation by 240°. The group operation is the sequential application of these operations; the leftmost operator is applied first. The reader can easily check that the multiplication table 2 applies to the group *G* = {*E*, *C₃*, *C₃²*}:

|        | *E*   | *C₃*  | *C₃²* |
|--------|-------|-------|-------|
| *E*    | *E*   | *C₃*  | *C₃²* |
| *C₃*   | *C₃*  | *C₃²* | *E*   |
| *C₃²*  | *C₃²* | *E*   | *C₃*  |

Table 2. Multiplication table of *G* = {*E*, *C₃*, *C₃²*}

We see immediately that the multiplication table of the rotations of an equilateral triangle is identical to the multiplication table of the previous abstract group {*E*, *A*, *B*}. Thus there is a one-to-one correspondence (called isomorphism) between the abstract group of the previous example and this group of rotations.

#### **2.2 Subgroups and classes**

Groups can have more properties than just a multiplication table. The illustration in the previous subsection is not amenable to illustrating this; the groups are too small. However, if we again consider the equilateral triangle, we note that it has further symmetry operations, namely those associated with reflections through a vertical plane through each vertex. Let us give these reflection operations the symbols *σv*, *σv′* and *σv″*: reflection through the planes through vertex *a*, *b*, *c*, respectively. We may describe each operation by the transformation of the vertices *a*, *b*, *c*; for example *σv* : (*a*, *b*, *c*) → (*a*, *c*, *b*). By adding these three reflection operations to the rotations, we form the larger group of symmetry operations of the equilateral triangle *G₃* = {*E*, *C₃*, *C₃²*, *σv*, *σv′*, *σv″*}. The multiplication table of the new group is given in Table 3. Another group with the same multiplication table can be constructed by considering the six permutations of the three letters *a*, *b*, *c*. Let *E* = (*a*, *b*, *c*), *A* = (*c*, *a*, *b*), *B* = (*b*, *c*, *a*), *a′* = (*a*, *c*, *b*), *b′* = (*c*, *b*, *a*), *c′* = (*b*, *a*, *c*); then {*E*, *A*, *B*} are even permutations and {*a′*, *b′*, *c′*} are odd. This leads to the multiplication table 4, which is isomorphic to Table 3. The two multiplication tables illustrate the concept of a subgroup, which is defined as follows: a set *S* of elements in group *G* is considered a subgroup of *G* if:

1. all elements in *S* are also elements in *G*

2. for any two elements in *S* their product is in *S*

3. all elements in *S* satisfy the four group postulates.

|        | *E*   | *C₃*  | *C₃²* | *σv*  | *σv′* | *σv″* |
|--------|-------|-------|-------|-------|-------|-------|
| *E*    | *E*   | *C₃*  | *C₃²* | *σv*  | *σv′* | *σv″* |
| *C₃*   | *C₃*  | *C₃²* | *E*   | *σv′* | *σv″* | *σv*  |
| *C₃²*  | *C₃²* | *E*   | *C₃*  | *σv″* | *σv*  | *σv′* |
| *σv*   | *σv*  | *σv″* | *σv′* | *E*   | *C₃²* | *C₃*  |
| *σv′*  | *σv′* | *σv*  | *σv″* | *C₃*  | *E*   | *C₃²* |
| *σv″*  | *σv″* | *σv′* | *σv*  | *C₃²* | *C₃*  | *E*   |

Table 3. Multiplication table of *G₃*

|       | *E*  | *A*  | *B*  | *a′* | *b′* | *c′* |
|-------|------|------|------|------|------|------|
| *E*   | *E*  | *A*  | *B*  | *a′* | *b′* | *c′* |
| *A*   | *A*  | *B*  | *E*  | *b′* | *c′* | *a′* |
| *B*   | *B*  | *E*  | *A*  | *c′* | *a′* | *b′* |
| *a′*  | *a′* | *c′* | *b′* | *E*  | *B*  | *A*  |
| *b′*  | *b′* | *a′* | *c′* | *A*  | *E*  | *B*  |
| *c′*  | *c′* | *b′* | *a′* | *B*  | *A*  | *E*  |

Table 4. Multiplication table of a permutation group
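A short sketch of this construction: the six permutations of the vertices, composed so that the products reproduce Table 4 (the composition convention, right factor applied first, is chosen to match that table):

```python
# Sketch: the six permutations of (a, b, c) with the product convention
# chosen so that the results reproduce Table 4 (right factor applied first).
# Element names follow the chapter.

PERMS = {
    "E":  ("a", "b", "c"),
    "A":  ("c", "a", "b"),
    "B":  ("b", "c", "a"),
    "a'": ("a", "c", "b"),
    "b'": ("c", "b", "a"),
    "c'": ("b", "a", "c"),
}
NAMES = {img: name for name, img in PERMS.items()}

def mul(x, y):
    """Product xy as in Table 4."""
    fx = dict(zip("abc", PERMS[x]))
    fy = dict(zip("abc", PERMS[y]))
    return NAMES[tuple(fx[fy[v]] for v in "abc")]

# {E, A, B} (the even permutations) is closed, hence a subgroup:
S = ["E", "A", "B"]
assert all(mul(x, y) in S for x in S for y in S)

# Two odd permutations always compose to an even one:
odd = ["a'", "b'", "c'"]
assert all(mul(x, y) in S for x in odd for y in odd)

# Row A of Table 4, as a check of the whole construction:
assert [mul("A", y) for y in ["E", "A", "B", "a'", "b'", "c'"]] == \
       ["A", "B", "E", "b'", "c'", "a'"]
```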

From the presented multiplication tables we see that {*E*, *<sup>C</sup>*3, *<sup>c</sup>*<sup>2</sup> <sup>3</sup>} and {*E*, *A*, *B*} are subgroups; they are the only subgroups in their respective groups. These subgroups are the rotations in the former example, and the even permutations in the latter. While the remaining elements are associated with reflections and odd permutations, respectively. Furthermore, we see that two reflections are equivalent to a rotation, and two odd permutations are equivalent to an even permutation. Thus the operations {*C*3, *<sup>C</sup>*<sup>2</sup> <sup>3</sup> } and *σv*, *σ*� *<sup>v</sup>*, *σ*"*<sup>v</sup>* and similarly {*A*, *B*} and {*a*� , *b*� , *c*� } belong in some sense to different sets. This property is illustrated by taking the transform *<sup>T</sup>*−1*QT* of each element *<sup>Q</sup>* in *<sup>G</sup>* by all elements *<sup>T</sup>* in *<sup>G</sup>*. For group {*E*, *<sup>C</sup>*3, *<sup>C</sup>*<sup>2</sup> <sup>3</sup>, *σv*, *σ*� *<sup>v</sup>*, *σ*"*v*} we obtain the following table of the transforms *T*−1*QT*:


We note that {*E*}, {*C₃*, *C₃²*}, and {*σv*, *σv′*, *σv″*} transform into themselves and are therefore called classes. Classes play a leading role in the application of group theory to the solution of physical problems. In general, physically significant properties can be associated with each class. In the solution of boundary value problems, different subspaces of the solution function space are assigned to each class.
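The same machinery can compute the classes directly (a sketch; the element names are the chapter's, the code is not): conjugating every *Q* by every *T* and collecting the orbits recovers {*E*}, the two rotations, and the three reflections.

```python
# Sketch: conjugacy classes of G3 = {E, C3, C3^2, sv, sv', sv''} computed
# as orbits {T^-1 Q T : T in G3}, with the elements realized as vertex
# permutations as in Table 4.

PERMS = {
    "E":    ("a", "b", "c"),
    "C3":   ("c", "a", "b"),
    "C3^2": ("b", "c", "a"),
    "sv":   ("a", "c", "b"),
    "sv'":  ("c", "b", "a"),
    "sv''": ("b", "a", "c"),
}
NAMES = {img: n for n, img in PERMS.items()}

def mul(x, y):
    fx = dict(zip("abc", PERMS[x]))
    fy = dict(zip("abc", PERMS[y]))
    return NAMES[tuple(fx[fy[v]] for v in "abc")]

INV = {x: next(y for y in PERMS if mul(x, y) == "E") for x in PERMS}

def conj(q, t):
    """The transform T^-1 Q T."""
    return mul(mul(INV[t], q), t)

classes = {frozenset(conj(q, t) for t in PERMS) for q in PERMS}
assert classes == {frozenset({"E"}),
                   frozenset({"C3", "C3^2"}),
                   frozenset({"sv", "sv'", "sv''"})}
```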

#### **2.3 Group representations**

The application of the information in an abstract group to a physical problem, especially to the calculation of the solution of the boundary value problem that models the physical setting, requires a mathematical "connection" between the two. This connection originates with the transformations of coordinates that define the symmetry operations reflected in the actions of a point group.

As a simple illustration, let us again consider the abstract group *G* = {*E*, *A*, *B*, *a′*, *b′*, *c′*} in the form of its realization as rotations and reflections of an equilateral triangle, namely the point group *C₃ᵥ* = {*E*, *C₃*, *C₃²*, *σv*, *σv′*, *σv″*}. Let this group be consistent with a physical problem in terms of, for example, material distribution and the geometry of the boundary. Furthermore, let us consider a two-dimensional vector space with an orthonormal basis {**e**₁, **e**₂} relative to which the physical model is defined. Each operation by an element *g* of the group *C₃ᵥ* can be represented by its action on an arbitrary vector **r** in the two-dimensional vector space. In the usual symbolic form we have

$$\mathbf{r}' = \mathbf{D}(g)\mathbf{r} \quad \text{for all } g \in C_{3v},$$

where **r**′ = *r*₁′**e**₁ + *r*₂′**e**₂ is the transformed vector and **D**(*g*) is the matrix operator associated with the action of group element *g* ∈ *C₃ᵥ*. It is well known from linear algebra that the matrix representation of the operator **D**(*g*) for each *g* ∈ *C₃ᵥ* is obtained by its action on the basis vectors,


$$\mathbf{e}'_i = \sum_{j=1}^{2} D_{ij}(g)\,\mathbf{e}_j, \quad i = 1, 2,$$

and that the transpose of the matrix *D*ⱼᵢ(*g*) gives the action of the group element *g* on the coordinates of the vector **r** as

$$r'_i = \sum_{j=1}^{2} D_{ij}^{-1}(g)\, r_j, \quad i = 1, 2. \tag{2.1}$$

For the point group *C₃ᵥ* we obtain the following six matrix representations. To save space we replace the matrices by permutations:

$$\mathbf{D}(E) = (1,2,3);\quad \mathbf{D}(C_3) = (3,1,2);\quad \mathbf{D}(C_3^2) = (2,3,1); \tag{2.2}$$

$$\mathbf{D}(\sigma_v) = (1,3,2);\quad \mathbf{D}(\sigma'_v) = (3,2,1);\quad \mathbf{D}(\sigma''_v) = (2,1,3). \tag{2.3}$$

These matrices satisfy the group multiplication table of *C₃ᵥ*, and therefore also the multiplication table of the abstract group *G* that is isomorphic to *C₃ᵥ*. We note that this is not the only matrix representation of *C₃ᵥ*. In particular, there are two one-dimensional representations that also satisfy the multiplication table of *C₃ᵥ* and will be of interest later. These are

$$\mathbf{D}(E) = \mathbf{D}(C_3) = \mathbf{D}(C_3^2) = \mathbf{D}(\sigma_v) = \mathbf{D}(\sigma'_v) = \mathbf{D}(\sigma''_v) = \mathbf{E}_2, \tag{2.4}$$

where **E**₂ is the 2 × 2 identity matrix; and

$$\mathbf{D}(E) = \mathbf{D}(C_3) = \mathbf{D}(C_3^2) = \mathbf{E}_2;\quad \mathbf{D}(\sigma_v) = \mathbf{D}(\sigma'_v) = \mathbf{D}(\sigma''_v) = -\mathbf{E}_2. \tag{2.5}$$

The role played by these representations will become clear in later discussions of irreducible representations of groups, and their actions on function spaces.
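As a sanity check, the six operations of *C₃ᵥ* can be written as 2 × 2 rotation and reflection matrices and multiplied out. The mirror-line angles below (vertex directions at 90°, 210°, 330°) are an illustrative assumption, not taken from the text; the closure check and the determinant remark hold regardless of how the triangle is oriented.

```python
from math import cos, sin, pi

# Sketch: closure check for a 2x2 matrix realization of C3v: rotations by
# 0, 120, 240 degrees plus reflections across three mirror lines (the
# mirror angles are an assumption for illustration).

def rot(t):
    return ((cos(t), -sin(t)), (sin(t), cos(t)))

def refl(t):  # reflection across the line through the origin at angle t
    return ((cos(2 * t), sin(2 * t)), (sin(2 * t), -cos(2 * t)))

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def key(m):  # round so equal matrices compare equal despite float noise
    return tuple(round(x, 9) for row in m for x in row)

third = 2 * pi / 3
mats = [rot(0), rot(third), rot(2 * third),
        refl(pi / 2), refl(pi / 2 + third), refl(pi / 2 + 2 * third)]
index = {key(m): i for i, m in enumerate(mats)}

# Closure: every product of two of the six matrices is again one of the six,
# so the table below is a 6x6 Cayley table (every row a permutation of 0..5).
table = [[index[key(matmul(a, b))] for b in mats] for a in mats]
assert all(sorted(row) == list(range(6)) for row in table)

# The one-dimensional representation (2.5) corresponds to the determinant:
# +1 on the rotations, -1 on the mirrors, and det(AB) = det(A)det(B)
# makes it a homomorphism automatically.
dets = [round(m[0][0] * m[1][1] - m[0][1] * m[1][0]) for m in mats]
assert dets == [1, 1, 1, -1, -1, -1]
```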

#### **2.4 Generation of group representations**

To this point we have constructed the matrix representations of the group elements of point groups such as *C₃ᵥ* in the usual physical space (two dimensional in our case). These representations were based on the transformations of the coordinates of an arbitrary vector in a physical space due to physical operations on the vector. Mathematical solutions to physical problems, however, are represented by functions in function spaces whose dimensions are generally much greater than three. Thus to bring the group matrix representations that act on coordinates to bear on the solution of physical problems in terms of functions, we need one more "connection" between symmetry operators on coordinates and symmetry operators on functions. This connection is defined as follows.

Let *f*(**r**) be a function of a position vector **r** = (*x*, *y*) and let **D**⁻¹(*g*) be the matrix transformation associated with group element *g* ∈ *G*, such that (*x*, *y*) → (*x*′, *y*′) through

$$\mathbf{r}' = \mathbf{D}^{-1}(g)\mathbf{r}.$$

What we need is an algorithm that uses **D**⁻¹(*g*) to obtain a new function *h*(**r**) from *f*(**r**). To this end we define an operator **O***g* as

$$\mathbf{O}_g f(\mathbf{r}) = f(\mathbf{r}') = f(\mathbf{D}^{-1}(g)\mathbf{r}) = h(\mathbf{r}). \tag{2.6}$$

That is, operator **O***g* gives a new function *h*(**r**) from *f*(**r**) at **r**, while *f* is unchanged at **r**′. For example, let *f*(*x*, *y*) = *ax* + *by*

and


$$\mathbf{D}(\mathbf{g}) = \begin{pmatrix} 1/\sqrt{2} & -1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{pmatrix}$$

then

$$\mathbf{O}_g(ax + by) = ax' + by' = \frac{a-b}{\sqrt{2}}\,x + \frac{a+b}{\sqrt{2}}\,y = h(x, y).$$
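This worked example can be checked numerically. The sketch below evaluates *h* = **O***g* *f* directly from definition (2.6) with sample values *a* = 2, *b* = 3 (an assumption for illustration), and also verifies the composition rule (2.7) for two successive 45° rotations.

```python
from math import sqrt, isclose

# Sketch: the operator O_g of (2.6) acting on f(x, y) = a*x + b*y with the
# 45-degree rotation matrix D(g) of the text; a, b are sample values.

a, b = 2.0, 3.0
s = 1 / sqrt(2)
D = ((s, -s), (s, s))          # D(g)
Dinv = ((s, s), (-s, s))       # D^{-1}(g): the transpose, since D is a rotation

def apply(m, x, y):
    return (m[0][0] * x + m[0][1] * y, m[1][0] * x + m[1][1] * y)

def f(x, y):
    return a * x + b * y

def O(m_inv, func):
    """O_g f(r) = f(D^{-1}(g) r)."""
    return lambda x, y: func(*apply(m_inv, x, y))

h = O(Dinv, f)
# Read off the coefficients of h from the basis points:
assert isclose(h(1, 0), (a - b) / sqrt(2))
assert isclose(h(0, 1), (a + b) / sqrt(2))

# Composition property (2.7): applying O_g twice equals O_{g g}, the operator
# of the 90-degree rotation, whose inverse matrix is D^{-1}(g) D^{-1}(g).
Dinv2 = ((0, 1), (-1, 0))      # inverse of the 90-degree rotation
lhs = O(Dinv, O(Dinv, f))
rhs = O(Dinv2, f)
assert isclose(lhs(0.3, -1.2), rhs(0.3, -1.2))
```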

For two group elements *g*₁ and *g*₂ in *G*, we obtain

$$\mathbf{O}_{g_1} f(\mathbf{r}) = f(\mathbf{D}^{-1}(g_1)\mathbf{r}) = h(\mathbf{r})$$

and

$$\mathbf{O}_{g_2}\mathbf{O}_{g_1} f(\mathbf{r}) = \mathbf{O}_{g_2} h(\mathbf{r}) = h(\mathbf{D}^{-1}(g_2)\mathbf{r}) = f\big([\mathbf{D}^{-1}(g_1)\mathbf{D}^{-1}(g_2)]\mathbf{r}\big) \equiv f\big([\mathbf{D}(g_2)\mathbf{D}(g_1)]^{-1}\mathbf{r}\big).$$

Note: operator **O***g* acts on the coordinates of function *f* and not on the argument of *f*.

Therefore

$$\mathbf{O}_{g_2}\mathbf{O}_{g_1} f(\mathbf{r}) = f\big([\mathbf{D}(g_2)\mathbf{D}(g_1)]^{-1}\mathbf{r}\big) = f\big(\mathbf{D}^{-1}(g_2 g_1)\mathbf{r}\big) = \mathbf{O}_{g_2 g_1} f(\mathbf{r}),$$

and thus we get

$$\mathbf{O}_{g_2}\mathbf{O}_{g_1} = \mathbf{O}_{g_2 g_1}; \tag{2.7}$$

in words: the consecutive application of **O***g*₁ and **O***g*₂ is the same as the application of the transformation **O***g*₂*g*₁ belonging to group element *g*₂*g*₁, and the operators **O***g*, *g* ∈ *G*, have the same multiplication table as *G* and any group isomorphic with *G*.

#### **2.5 Invariant subspaces and regular representations**

A common approach to the solution of physical problems is harmonic analysis, where a solution to the problem is sought in terms of functions that span the solution space. If the problem exhibits some symmetry, we would expect this symmetry to be reflected in the solution for this particular problem. Intuitively we would expect therefore the solution to belong to a subspace of the general solution space, and that the subspace be invariant under the symmetry operations exhibited by the problem.

As an illustration of this notion, we assume the problem has the symmetry of the cyclic permutation group *C₃* = {*E*, *C₃*, *C₃²*} that was discussed previously. Let *fE*(**r**) be an arbitrary function that allows the operation of the operators in the group *C₃* as discussed above. The action of each operator on *fE* defines a new function, that is,

$$\mathbf{O}\_E f\_E = f\_E, \qquad \mathbf{O}\_{C\_3} f\_E = f\_{C\_3}, \qquad \mathbf{O}\_{C\_3^2} f\_E = f\_{C\_3^2}.$$

Based on this and the group multiplication table we get relations such as

$$\mathbf{O}\_{C\_3} f\_{C\_3} = \mathbf{O}\_{C\_3} \mathbf{O}\_{C\_3} f\_E = \mathbf{O}\_{C\_3^2} f\_E = f\_{C\_3^2},$$

etc. These observations can be summarized in a table:

| | *f<sub>E</sub>* | *f<sub>C3</sub>* | *f<sub>C3<sup>2</sup></sub>* |
|---|---|---|---|
| **O**<sub>E</sub> | *f<sub>E</sub>* | *f<sub>C3</sub>* | *f<sub>C3<sup>2</sup></sub>* |
| **O**<sub>C3</sub> | *f<sub>C3</sub>* | *f<sub>C3<sup>2</sup></sub>* | *f<sub>E</sub>* |
| **O**<sub>C3<sup>2</sup></sub> | *f<sub>C3<sup>2</sup></sub>* | *f<sub>E</sub>* | *f<sub>C3</sub>* |

From that table we can construct matrix (permutation) representations of the operators **O**<sub>E</sub>, **O**<sub>C3</sub>, **O**<sub>C3<sup>2</sup></sub>, for example

$$\mathbf{D}(\mathbb{C}\_3) = (2,3,1). \tag{2.8}$$

Application of Finite Symmetry Groups to Reactor Calculations 293

This procedure gives the so-called regular representation for the group *C*<sup>3</sup> as

$$\mathbf{O}\_{\rm E} = (1, 2, 3); \; \mathbf{O}\_{\rm C\_3} = (2, 3, 1); \; \mathbf{O}\_{\rm C\_3^2} = (3, 1, 2). \tag{2.9}$$

The matrices, in general, satisfy the group multiplication table; each is characterized by a single entry of one in each column, with all other entries zero, and the dimension of the matrix equals the number of elements in the group. The functions *f<sub>E</sub>*, *f<sub>C3</sub>*, *f<sub>C3<sup>2</sup></sub>* that generate the regular representation span the invariant subspace. They are not necessarily linearly independent basis functions.
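A minimal sketch (ours) builds the permutation matrices of Eq. (2.9), with one possible convention for placing the ones, and checks them against the *C*<sub>3</sub> multiplication table:

```python
import numpy as np

# Permutation matrices of Eq. (2.9): row i carries a single 1 in the column
# given by the i-th entry of the permutation (one possible convention).
def perm_matrix(p):
    M = np.zeros((len(p), len(p)), dtype=int)
    for i, j in enumerate(p):
        M[i, j - 1] = 1
    return M

O_E = perm_matrix((1, 2, 3))
O_C3 = perm_matrix((2, 3, 1))
O_C3sq = perm_matrix((3, 1, 2))

# The matrices reproduce the C3 multiplication table.
assert (O_C3 @ O_C3 == O_C3sq).all()
assert (O_C3 @ O_C3sq == O_E).all()
assert (O_C3 @ O_C3 @ O_C3 == O_E).all()
```

Each matrix indeed has a single one in every row and column, and the dimension equals the group order, as stated above.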

#### **2.6 Complete sets of linearly independent basis functions and irreducible representations**

As was mentioned at the outset, symmetry as exemplified through group theory brings added information to the solution of physical problems, especially in the application of harmonic analysis. The heart of this information is encapsulated in the so called irreducible representations of the group elements. It should be stated at the outset that the irreducible representations used in most applications are readily available in tabulated form. Yet much of mathematical group theory is devoted to the derivation and properties of irreducible representations. We do not minimize in any way the importance of that material; it is necessary for a clear understanding of the applicability of the mathematical machinery and its physical interpretation. Our objective here is only to touch on a few of the central results used in the applications. Perhaps this may motivate the reader to look further into the subject.

The key property for the application of point groups to physical problems is that for a finite group all representations may be "built up" from a finite number of "distinct" irreducible representations. The number of distinct irreducible representations is equal to the number of classes in the group. Furthermore, the regular representation contains each irreducible representation a number of times equal to the dimension of that irreducible representation. Thus, if ℓ<sub>α</sub> is the dimension of the *α*-th irreducible representation, the condition

$$\sum\_{\alpha} \ell\_{\alpha}^{2} = |G| \tag{2.10}$$

must be satisfied, where |*G*| is the order of the group *G*.

Let us illustrate this with the group *C*<sup>3</sup> that was discussed previously. To identify the classes in *C*3, as before, we compute a table of *T*−1*QT*, see Table 5. The elements that transform into


| Q\T | *E* | *C*<sub>3</sub> | *C*<sub>3</sub><sup>2</sup> |
|---|---|---|---|
| *E* | *E* | *E* | *E* |
| *C*<sub>3</sub> | *C*<sub>3</sub> | *C*<sub>3</sub> | *C*<sub>3</sub> |
| *C*<sub>3</sub><sup>2</sup> | *C*<sub>3</sub><sup>2</sup> | *C*<sub>3</sub><sup>2</sup> | *C*<sub>3</sub><sup>2</sup> |

Table 5. Classes of Group *C*<sub>3</sub>

themselves form a class. There are three classes in *C*<sub>3</sub>, denoted as *E*, *C*<sub>3</sub>, and *C*<sub>3</sub><sup>2</sup>, and therefore


there are three irreducible representations in the regular representation. The condition

$$\ell\_1^2 + \ell\_2^2 + \ell\_3^2 = 3$$

can only be satisfied by ℓ<sub>1</sub> = ℓ<sub>2</sub> = ℓ<sub>3</sub> = 1. Therefore, there are three distinct one-dimensional representations. These are the building blocks for decomposing the regular representation to irreducible representations, and can be found in tables:

$$D^{(1)}(E) = 1 \,\, D^{(1)}(\mathbb{C}\_3) = 1 \,\,\, D^{(1)}(\mathbb{C}\_3^2) = 1 \,\,\,\tag{2.11}$$

$$D^{(2)}(E) = 1 \,\, D^{(2)}(\mathbb{C}\_3) = \omega \,\,\, D^{(2)}(\mathbb{C}\_3^2) = \omega^\* \tag{2.12}$$

$$D^{(3)}(E) = 1 \,\, D^{(3)}(\mathbb{C}\_3) = \omega^\* \,\,\, D^{(3)}(\mathbb{C}\_3^2) = \omega,\tag{2.13}$$

where *ω* = exp(2*πi*/3). The elements in each of the three irreducible representations conform to the multiplication table of the point group *C*<sub>3</sub>.
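This conformity can be checked directly. The sketch below (ours) encodes Eqs. (2.11)-(2.13) and verifies, for each irreducible representation, products such as *D*(*C*<sub>3</sub>)*D*(*C*<sub>3</sub>) = *D*(*C*<sub>3</sub><sup>2</sup>):

```python
import cmath

# The three 1-D irreducible representations of C3, Eqs. (2.11)-(2.13).
w = cmath.exp(2j * cmath.pi / 3)                 # omega = exp(2*pi*i/3)
irreps = {
    1: {"E": 1, "C3": 1, "C3^2": 1},
    2: {"E": 1, "C3": w, "C3^2": w.conjugate()},
    3: {"E": 1, "C3": w.conjugate(), "C3^2": w},
}

# C3 multiplication table: C3*C3 = C3^2, C3*C3^2 = E, C3^2*C3^2 = C3.
for D in irreps.values():
    assert abs(D["C3"] * D["C3"] - D["C3^2"]) < 1e-12
    assert abs(D["C3"] * D["C3^2"] - D["E"]) < 1e-12
    assert abs(D["C3^2"] * D["C3^2"] - D["C3"]) < 1e-12
```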

These low-dimensional irreducible representations are used to bring the regular representation of an operator, **O**<sub>C3</sub> for example, to irreducible form, as follows.

The regular representation has the form of a full matrix,

$$
\begin{pmatrix} D\_{11}(C\_3) & D\_{12}(C\_3) & D\_{13}(C\_3) \\ D\_{21}(C\_3) & D\_{22}(C\_3) & D\_{23}(C\_3) \\ D\_{31}(C\_3) & D\_{32}(C\_3) & D\_{33}(C\_3) \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}.$$

The irreducible representation has the form of a diagonal (block diagonal in the general case) matrix,

$$
\begin{pmatrix} D^{(1)}(C\_3) & 0 & 0 \\ 0 & D^{(2)}(C\_3) & 0 \\ 0 & 0 & D^{(3)}(C\_3) \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \omega & 0 \\ 0 & 0 & \omega^{*} \end{pmatrix}.
$$

The mathematical relationship is discussed at length in all texts on the subject and will not be repeated here. We assume the irreducible representations are known. Of interest is the information, associated with the irreducible representations, that aids the solution of physical problems.
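To illustrate the relationship numerically (our sketch; the unitary Fourier-type matrix built from the characters is a standard choice for a cyclic group, not taken from the text), one can conjugate the full regular-representation matrix of *C*<sub>3</sub> into the diagonal form shown above:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                         # omega
# Full (regular-representation) matrix of O_C3 shown above.
Dreg = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)

# Unitary Fourier-type matrix whose rows are built from the 1-D characters.
k = np.arange(3)
F = w ** (-np.outer(k, k)) / np.sqrt(3)
assert np.allclose(F @ F.conj().T, np.eye(3))      # F is unitary

# Conjugation brings the full matrix to the diagonal form diag(1, w, w*).
Dirr = F @ Dreg @ F.conj().T
assert np.allclose(Dirr, np.diag([1, w, w**2]))
```

Here *ω*² = *ω*\*, so the diagonal carries exactly the entries 1, *ω*, *ω*\* of the block diagonal matrix above.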

Recall that starting with an arbitrary function *f*(**r**) belonging to a function space **L** (a Hilbert space for example), we can generate a set of functions *<sup>f</sup>*1,..., *<sup>f</sup>*|*G*<sup>|</sup> that span an invariant subspace **<sup>L</sup>***<sup>s</sup>* <sup>⊂</sup> **<sup>L</sup>**. This process requires the matrices of coordinate transformations *<sup>g</sup>*1,..., *<sup>g</sup>*|*G*<sup>|</sup> that form the symmetry group *G* of interest. The diagonal structure of the irreducible representations of *G* tells us that there exists a set of basis functions { *f*1, *f*2,..., *fn*} that split the subspace **L***<sup>s</sup>* further into subspaces invariant under the symmetry group *G*, and are associated with each irreducible representation *D*(1)(*g*), *D*(2)(*g*),..., *D*(*nc* )(*g*) where *nc* is the number of classes in *G*. That is

$$\mathbb{L}\_s = \mathbb{L}\_1 \cup \mathbb{L}\_2 \cup \dots \cup \mathbb{L}\_{n\_c} \tag{2.14}$$

and thus an arbitrary function *f*(**r**) ∈ **L**<sub>s</sub> is expressible as a sum of functions that act as basis functions in the invariant subspaces associated with each irreducible representation *D*<sup>(α)</sup>(*g*), *α* = 1, . . . , *n<sub>c</sub>*, as

$$f(\mathbf{r}) = \sum\_{\alpha=1}^{n\_c} f^{\alpha}(\mathbf{r}). \tag{2.15}$$


If the decomposition of the regular representation contains irreducible representations of dimension greater than one, we have for each basis function that "belongs to the *α*-th irreducible representation"

$$f^{\alpha}(\mathbf{r}) = \sum\_{i=1}^{\ell\_{\alpha}} f\_{i}^{\alpha}(\mathbf{r}), \tag{2.16}$$

where ℓ<sub>α</sub> is the dimension of the *α*-th irreducible representation.

The question now remains how do we obtain *f <sup>α</sup>*(**r**), the basis function of each irreducible representation?

To this end we can apply a projection operator that resolves a given function *f*(**r**) into basis functions associated with each irreducible representation. This projection operator is defined as

$$\mathbf{P}\_{i}^{\alpha} = \frac{\ell\_{\alpha}}{|G|} \sum\_{g \in G} D\_{ii}^{\alpha}(g) \mathbf{O}\_{g}. \tag{2.17}$$

The information needed to construct this operator (the coordinate transformations and the irreducible representations) is known for the point groups encountered in practice. So, for example, the *i*-th basis function of the *α*-th irreducible representation, which is ℓ<sub>α</sub>-dimensional, is constructed for a symmetry group with |*G*| elements from an arbitrary function *f*(**r**) in the invariant space **L**<sub>s</sub> as

$$f\_i^{\alpha}(\mathbf{r}) = \frac{\ell\_{\alpha}}{|G|} \sum\_{g \in G} D\_{ii}^{\alpha}(g) \mathbf{O}\_{g} f(\mathbf{r}). \tag{2.18}$$

This decomposition creates a complete finite set of orthogonal basis functions.

In practice, a simpler projection operator is generally sufficient. This is due to the fact that the *D*<sup>α</sup><sub>ii</sub>(*g*)'s (the diagonal elements of a multidimensional irreducible representation) are intrinsic properties of the irreducible representation *D*<sup>α</sup>(*g*). That is, they are invariant under a change of coordinates.

Furthermore, the sum of the diagonal elements, or trace, of the irreducible representation D*α*(*g*) is also invariant under a change of coordinates. In group theory this trace is denoted by the symbol *χα*(*g*) and

$$\chi^{\alpha}(g) = \sum\_{i=1}^{\ell\_{\alpha}} D\_{ii}^{\alpha}(g), \tag{2.19}$$

and referred to as the character of element *g* ∈ *G* in the *α*-th irreducible representation. There are tables of characters for all the point groups of physical interest.

The projection operator in terms of characters is given as

$$P^{\alpha} = \frac{\ell\_{\alpha}}{|G|} \sum\_{g \in G} \chi^{\alpha}(g) \mathbf{O}\_{g} \tag{2.20}$$

so that the basis functions are

$$f^{\alpha}(\mathbf{r}) = \frac{\ell\_{\alpha}}{|G|} \sum\_{g \in G} \chi^{\alpha}(g) \mathbf{O}\_{g} f(\mathbf{r}), \tag{2.21}$$

and *f*(**r**) is decomposed into a complete finite set of orthogonal functions, with one for each irreducible representation irrespective of its dimension.
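A small numerical sketch (ours) of the character projector (2.20) for *C*<sub>3</sub>: a "function" is represented by its three samples on an orbit, **O**<sub>g</sub> by the regular-representation permutation matrices, and the projections are verified to be idempotent and to reconstruct *f* as in (2.15):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)                                      # omega
S = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)  # O_C3
ops = {"E": np.eye(3, dtype=complex), "C3": S, "C3^2": S @ S}
chars = {  # characters chi^alpha(g) of the 1-D irreps, Eqs. (2.11)-(2.13)
    1: {"E": 1, "C3": 1, "C3^2": 1},
    2: {"E": 1, "C3": w, "C3^2": np.conj(w)},
    3: {"E": 1, "C3": np.conj(w), "C3^2": w},
}

# P^alpha = (l_alpha/|G|) sum_g chi^alpha(g) O_g, Eq. (2.20), l_alpha = 1.
P = {a: sum(chi[g] * ops[g] for g in ops) / 3 for a, chi in chars.items()}

f = np.array([2.0, -1.0, 5.0], dtype=complex)   # arbitrary sampled function
parts = [P[a] @ f for a in P]

assert np.allclose(sum(parts), f)               # decomposition, Eq. (2.15)
for a in P:
    assert np.allclose(P[a] @ P[a], P[a])       # each P^alpha is a projector
```

The three components are the orthogonal pieces of *f*, one per irreducible representation, exactly as the text states.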

#### **3. Symmetries of a boundary value problem**


Let us consider the following boundary value problem:

$$\mathbf{A}\phi(\mathbf{r}) = 0 \; \mathbf{r} \in V \tag{3.1}$$

$$\mathbf{B}\phi(\mathbf{r}) = f(\mathbf{r}) \; \mathbf{r} \in \partial V, \tag{3.2}$$

where **A** and **B** are linear operators. Group theory is not a panacea for the solution of boundary value problems; its application is limited. The main condition that must be met in nuclear engineering problems is that material distributions have symmetry. This is generally true in reactor cores, core cells and cell nodes.

In the following we give a heuristic outline of how the machinery presented above enters into the solution algorithm of a boundary value problem, and what benefits can be expected.

Symmetry is the key. If we have determined that the physical problem has symmetries, these symmetries must form a group *G*. The symmetry operator **O**<sub>g</sub> must commute, for all *g* ∈ *G*, with the linear operators **A** and **B** for group theory to be applicable. That is,

$$\mathbf{O}\_{g}\mathbf{A} = \mathbf{A}\mathbf{O}\_{g} \quad \text{and} \quad \mathbf{O}\_{g}\mathbf{B} = \mathbf{B}\mathbf{O}\_{g} \tag{3.3}$$

must hold for all *g* ∈ *G*. If this condition is met, the boundary value problem can be written as

$$\mathbf{A}\mathbf{O}\_{g}\psi(\mathbf{r}) = 0 \; \mathbf{r} \in V \tag{3.4}$$

$$\mathbf{B}\mathbf{O}\_{g}\psi(\mathbf{r}) = \mathbf{O}\_{g} f(\mathbf{r}) \; \mathbf{r} \in \partial V. \tag{3.5}$$

We can now use the projection operator (2.20) to form a set of boundary value problems

$$\mathbf{A}P^{\alpha}\psi(\mathbf{r}) = 0 \; \mathbf{r} \in V \tag{3.6}$$

$$\mathbf{B}P^{\alpha}\psi(\mathbf{r}) = P^{\alpha} f(\mathbf{r}) \; \mathbf{r} \in \partial V. \tag{3.7}$$

Since the projection operator creates linearly independent components, we have decomposed the boundary value problem into a number (equal to the number of irreducible components) of independent boundary value problems. These are

$$\mathbf{A}\psi^{\alpha}(\mathbf{r}) = 0 \; \mathbf{r} \in V \tag{3.8}$$

$$\mathbf{B}\psi^{\alpha}(\mathbf{r}) = f^{\alpha}(\mathbf{r}) \; \mathbf{r} \in \partial V, \tag{3.9}$$

whose solution *ψ*<sup>α</sup>(**r**) belongs to the *α*-th irreducible representation. From this complete set of linearly independent orthogonal functions we reconstruct the solution to the original problem as

$$\psi(\mathbf{r}) = \sum\_{\alpha=1}^{n\_c} c\_{\alpha} \psi^{\alpha}(\mathbf{r}), \tag{3.10}$$

where *nc* is the number of classes in *G*.

Why is this better? Recall that we are applying harmonic analysis. The usual approach is to use some series that forms an incomplete set of expansion functions and results in a coupled set of equations: one "large" matrix problem. With group theory, we find a relatively small set

of complete basis functions that form the solution from symmetry considerations. These are found by solving a set of "small" boundary value problems. It is clear that the effectiveness of group theory is problem dependent. However, experience over the past half century has proven group theory's effectiveness in both nuclear engineering and other fields.

We present an especially simple example (Allgover et al., 1992) that demonstrates the advantages of symmetry considerations. The example is the solution of a linear system of equations with six unknowns:

$$
\begin{pmatrix} 1 & 5 & 6 & 2 & 3 & 4 \\ 5 & 1 & 4 & 3 & 2 & 6 \\ 3 & 4 & 1 & 5 & 6 & 2 \\ 2 & 6 & 5 & 1 & 4 & 3 \\ 6 & 2 & 3 & 4 & 1 & 5 \\ 4 & 3 & 2 & 6 & 5 & 1 \end{pmatrix} \begin{pmatrix} x\_1 \\ x\_2 \\ x\_3 \\ x\_4 \\ x\_5 \\ x\_6 \end{pmatrix} = \begin{pmatrix} 9 \\ 14 \\ 21 \\ 15 \\ 14 \\ 11 \end{pmatrix}. \tag{3.11}
$$

The example has been constructed so that the basis of the reduction is the observation that the matrix is invariant under the following permutations: *p*<sub>1</sub> = (1, 6)(2, 5)(3, 4) and *p*<sub>2</sub> = (1, 5, 3)(2, 6, 4). As *p*<sub>1</sub> and *p*<sub>2</sub> generate a group *D*<sub>6</sub> of six elements, the matrix commutes with the representation of group *D*<sub>6</sub> by matrices of order six. This suggests the application of group theory: decompose the matrix and the vector on the right hand side of the equation into irreducible components, and solve the resulting equations in the irreducible subspaces. The *D*<sub>6</sub> group is isomorphic to the symmetry group of the regular triangle discussed in Section 2.2.
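The claimed invariance can be verified directly (our sketch; the permutation-matrix convention is ours):

```python
import numpy as np

# The matrix of Eq. (3.11).
A = np.array([
    [1, 5, 6, 2, 3, 4],
    [5, 1, 4, 3, 2, 6],
    [3, 4, 1, 5, 6, 2],
    [2, 6, 5, 1, 4, 3],
    [6, 2, 3, 4, 1, 5],
    [4, 3, 2, 6, 5, 1],
])

def perm_to_matrix(images):
    """Permutation matrix; images[i] is the (0-based) image of index i."""
    P = np.zeros((len(images), len(images)), dtype=int)
    for i, j in enumerate(images):
        P[j, i] = 1
    return P

# p1 = (1,6)(2,5)(3,4) and p2 = (1,5,3)(2,6,4), written 0-based.
p1 = perm_to_matrix([5, 4, 3, 2, 1, 0])
p2 = perm_to_matrix([4, 5, 0, 1, 2, 3])

# The matrix is invariant under conjugation by both generators.
assert (p1 @ A @ p1.T == A).all()
assert (p2 @ A @ p2.T == A).all()
```

Since every element of the group is a product of the two generators, invariance under *p*<sub>1</sub> and *p*<sub>2</sub> implies invariance under all six group elements.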

The character table of the group *D*<sup>6</sup> can be found in tables (Atkins, 1970; Conway, 2003; Landau & Lifshitz, 1980), or, can be looked up in computer programs, or libraries (GAP, 2008).

Using the character table and projector (2.17), one can carry out the following calculations. The observation that *D*<sub>6</sub> is isomorphic to the symmetry group of the equilateral triangle makes the problem easier. Mackey (1980) observed that there is an analogy between the group characters and the Fourier transform. This allows the construction of irreducible vectors by the following ad hoc method. Form the following *N*-tuples (*N* = |*G*|):

$$\begin{aligned} \mathbf{e}\_{2k-1} &= \big(\cos(2\pi(2k-1)\cdot 1/N), \dots, \cos(2\pi(2k-1)\cdot N/N)\big) \\ \mathbf{e}\_{2k} &= \big(\sin(2\pi \cdot 2k \cdot 1/N), \dots, \sin(2\pi \cdot 2k \cdot N/N)\big), \qquad k = 1, 2, \dots, N. \end{aligned} \tag{3.12}$$

These vectors are mutually orthogonal and can serve as an irreducible basis. After normalization, one gets a set of irreducible vectors in the *N* copies of the fundamental domain. Here one may exploit the isomorphism with the symmetry group of an equilateral triangle with the points positioned as shown in Fig. 1. Applying the above recipe to the points in the triangle, we get the following irreducible basis:

$$\mathbf{e}\_1 = (1, 1, 1, 1, 1, 1) \quad \mathbf{e}\_2 = (2, -1, -1, 2, -1, -1) \quad \mathbf{e}\_3 = (0, 1, -1, 0, 1, -1) \tag{3.13}$$

$$\mathbf{e}\_4 = (2, 1, -1, -2, -1, 1) \quad \mathbf{e}\_5 = (0, 1, 1, 0, -1, -1) \quad \mathbf{e}\_6 = (1, -1, 1, -1, 1, -1). \tag{3.14}$$

We note that the points in the vectors **e**<sub>i</sub> do not follow the order shown in Fig. 1. Thus we need to renumber the points and normalize the vectors. For ease of interpretation we also renumber the vectors given above. It is clear that the vectors formed from cos and sin transform together; thus they form a two-dimensional representation. We bring forward the one-dimensional representations. The projection to the irreducible basis is through a 6 × 6 matrix that contains

Fig. 1. Labeling Positions of Points on an Orbit

the orthonormal **e**′<sub>i</sub> vectors:

$$\mathbf{O}^{+} = \left(\mathbf{e}\_{1}^{\prime+}, \mathbf{e}\_{6}^{\prime+}, \mathbf{e}\_{2}^{\prime+}, \mathbf{e}\_{3}^{\prime+}, \mathbf{e}\_{4}^{\prime+}, \mathbf{e}\_{5}^{\prime+}\right) \tag{3.15}$$

where the prime indicates rearranging in accordance with Fig. 1. Rewriting the system

$$\mathbf{A}x = b, \qquad \mathbf{O}\mathbf{A}\mathbf{O}^{-1}(\mathbf{O}x) = \mathbf{O}b,$$

we find<sup>1</sup>


$$\mathbf{O} \mathbf{A} \mathbf{O}^{-1} = \begin{pmatrix} 21 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & -6 & 2a & 0 & 0 \\ 0 & 0 & -a & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -6 & 2a \\ 0 & 0 & 0 & 0 & -a & -1 \end{pmatrix} \prime$$

where *<sup>a</sup>* <sup>=</sup> <sup>√</sup>3. Compare the structure of the above matrix with that given in Section 3, where the similar form is achieved by geometrical similarity. In the present example there is no geometry, just a matrix invariant under a group of transformations.

In order to solve the resulting equations, we need the transformed right hand side of the equation:

$$\mathbf{O}b = \left(14\sqrt{6}, 2\sqrt{\frac{2}{3}}, 0, -8, 4, -\frac{2}{\sqrt{3}}\right)^{+}.$$

Finally, note that instead of solving one equation with six unknowns, we have four equations, two of them are solved by one division for each, and we have to solve two pairs of equations with two unknowns for each. At the end, we have to transform back from **O***x* to *x*.

The Reader may ask: What is the benefit of the reduction? In a problem which is at the verge of solvability, that kind of reduction may become important.

<sup>1</sup> As matrix **O** is orthogonal, its inverse is just its transpose.
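The reduction can be sketched numerically. The entries of the matrix in (3.11) are not reproduced above, so the sketch below builds a hypothetical stand-in with the same invariance by averaging a random matrix over the group generated by *p*1 and *p*2; it then verifies that the vectors (3.13)–(3.14) are orthogonal and that the two one-dimensional components decouple after the transformation:

```python
import numpy as np

def perm_matrix(m):
    """Permutation matrix of the map i -> m[i]; (P v)[m[i]] = v[i]."""
    P = np.zeros((6, 6))
    P[m, np.arange(6)] = 1.0
    return P

# Generators from the text, written 0-based:
p1 = (5, 4, 3, 2, 1, 0)        # (1,6)(2,5)(3,4)
p2 = (4, 5, 0, 1, 2, 3)        # (1,5,3)(2,6,4)

# Generate the group of order six by closure.
group = {tuple(range(6))}
frontier = list(group)
while frontier:
    g = frontier.pop()
    for h in (p1, p2):
        gh = tuple(h[i] for i in g)    # composition h o g
        if gh not in group:
            group.add(gh)
            frontier.append(gh)
assert len(group) == 6

# Stand-in matrix: group-averaging any matrix yields one that commutes
# with every permutation matrix of the group (hypothetical values, not
# the matrix of Allgover et al.).
mats = [perm_matrix(list(g)) for g in group]
M = np.random.default_rng(0).normal(size=(6, 6))
A = sum(P @ M @ P.T for P in mats) / len(mats)
for P in mats:
    assert np.allclose(A @ P, P @ A)

# Rows of O: the normalized vectors of (3.13)-(3.14), with the two
# one-dimensional components (e1 symmetric, e6 alternating) first.
E = np.array([
    [1, 1, 1, 1, 1, 1],       # e1
    [1, -1, 1, -1, 1, -1],    # e6
    [2, -1, -1, 2, -1, -1],   # e2
    [0, 1, -1, 0, 1, -1],     # e3
    [2, 1, -1, -2, -1, 1],    # e4
    [0, 1, 1, 0, -1, -1],     # e5
], dtype=float)
O = E / np.linalg.norm(E, axis=1, keepdims=True)
assert np.allclose(O @ O.T, np.eye(6))     # O is orthogonal

B = O @ A @ O.T
# The two one-dimensional components each reduce to a one-unknown equation:
assert np.allclose(B[0, 1:], 0) and np.allclose(B[1:, 0], 0)
assert np.allclose(B[1, 2:], 0) and np.allclose(B[2:, 1], 0)
```

With the renumbering of points and vectors described above, the remaining 4 × 4 block splits further into the two 2 × 2 blocks of **OAO**<sup>−1</sup>; the sketch only checks the part that is independent of that renumbering.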

Application of Finite Symmetry Groups to Reactor Calculations 299

A more favorable situation is when there are geometric transformations leaving the equation and the volume under consideration, invariant. But before immersing into the symmetry hunting, we investigate the diffusion equation.

#### **4. The multigroup diffusion equation**

The diffusion equation is one of the most widely used reactor physics models. It describes the neutron balance in a volume *V*; the neutron energy may be continuous or discretized (multigroup model). The multigroup version is:

$$\frac{1}{v\_k} \frac{\partial \Psi\_k(\mathbf{r}, t)}{\partial t} = \nabla (D\_k(\mathbf{r}) \nabla \Psi\_k(\mathbf{r}, t)) + \sum\_{k'=1}^G T\_{kk'} \Psi\_{k'}(\mathbf{r}, t), \tag{4.1}$$

where the processes leading to energy change are collected in *Tkk*� :

$$T\_{kk'} = -\Sigma\_{tk}\delta\_{kk'} + \Sigma\_{k'\to k} + \frac{\chi\_k}{k\_{eff}}\nu\Sigma\_{fk'}, \tag{4.2}$$

where subscripts *k*, *k*′ label the energy groups, *vk* is the speed of neutrons in energy group *k*, Ψ*k*(**r**) is the space dependent neutron flux in group *k*, and *keff* = 1. In general, *Dk*, Σ*tk*, Σ*k*′→*k*, Σ*f k*′ are the space dependent diffusion constant, the total cross-section, the scattering cross-section, and the fission cross-section. *χk* is called the fission spectrum. Equation (4.1) is a set of partial differential equations, to which the initial condition Ψ*k*(**r**, 0), **r** ∈ *V*, and a suitable boundary condition, e.g. Ψ*k*(**r**, *t*), **r** ∈ *∂V*, are given for every energy group *k* and every time *t*. The boundary conditions used in diffusion problems are of the type

$$(\nabla \mathbf{n})\Psi\_k(\mathbf{r}) + b\_k(\mathbf{r})\Psi\_k(\mathbf{r}) = h\_k(\mathbf{r}) \ \ k = 1, \ldots, G. \tag{4.3}$$

for **r** ∈ *∂V*. Here *bk*(**r**) depends on the boundary condition and may contain material properties, for example albedo.

The diffusion equation is a relationship between the cross-sections in *V* and the neutron flux Ψ*k*(**r**, *t*). The equation is linear in Ψ*k*(**r**, *t*). The main variants of equation (4.1) that are of interest in reactor physics are:

1. Static eigenvalue problem: When the flux does not depend on *t*, the left hand side is zero, and (4.1) has a nontrivial solution only if the cross-sections are interrelated. To this end, we free *keff* and the static diffusion equation is put in the form of an eigenvalue problem:

$$\nabla(D\_k(\mathbf{r})\nabla\Psi\_k(\mathbf{r})) + \sum\_{k'=1}^{G} T\_{kk'}(k\_{eff})\Psi\_{k'}(\mathbf{r}) = 0,\tag{4.4}$$

where the eigenvalue *keff* is introduced as a parameter in *Tkk*′, thus allowing for a non-trivial solution Ψ*k*(**r**). That usage is typical in core design calculations.


2. Time dependent solution allowing time dependence in some cross-sections. A typical application is transient analysis.

3. Equation (4.1) is homogeneous but it is possible to add an external source and to seek the response of *V* to the source.

The structure of the diffusion equation is simple. Mathematical operations, like summation and differentiation, and multiplication by material parameters (cross-sections) are applied
to the neutron flux. In such equations the symmetries are mostly determined by the space dependence of the material properties. In the next subsection we investigate the possible symmetries of equation (4.4) and the exploitation of those symmetries.
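Variant 1 can be illustrated with a minimal sketch: a two-group, one-dimensional homogeneous slab discretized by finite differences, with invented cross-section values (not taken from the text), solved by power iteration on the fission source:

```python
import numpy as np

# Hypothetical two-group constants for a homogeneous slab (illustrative only)
D1, D2 = 1.4, 0.4            # diffusion constants [cm]
Sa1, Sa2 = 0.010, 0.080      # absorption cross-sections [1/cm]
S12 = 0.016                  # fast -> thermal scattering [1/cm]
nSf1, nSf2 = 0.006, 0.120    # nu * Sigma_f [1/cm]
L, N = 100.0, 100            # slab width [cm], interior mesh points
h = L / (N + 1)

def leakage(D):
    """-D d^2/dx^2 on a uniform mesh, zero flux at both boundaries."""
    return D / h**2 * (2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))

A1 = leakage(D1) + (Sa1 + S12) * np.eye(N)   # fast-group removal
A2 = leakage(D2) + Sa2 * np.eye(N)           # thermal group

phi1, phi2, k = np.ones(N), np.ones(N), 1.0
for _ in range(1000):                        # power iteration on the fission source
    S = nSf1*phi1 + nSf2*phi2
    phi1 = np.linalg.solve(A1, S / k)        # fission source divided by k: (4.4)
    phi2 = np.linalg.solve(A2, S12 * phi1)
    k_new = k * np.sum(nSf1*phi1 + nSf2*phi2) / np.sum(S)
    if abs(k_new - k) < 1e-12:
        break
    k = k_new

# Two-group slab theory predicts k_eff of about 1.091 for these numbers
assert abs(k - 1.091) < 0.01
# The flux inherits the slab's reflection symmetry about the midplane:
assert np.allclose(phi1, phi1[::-1])
```

Dividing the fission source by *k* in each iteration is exactly the "freeing" of *keff* described for (4.4); the last assertion anticipates the theme of the next subsection, that the solution shares the symmetry of the problem.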

When the solutions Ψ*k*(**r**), *k* = 1, . . . , *G* are known, not only the reaction rates and net and partial currents can be determined, but also matrices can be created to transform these quantities into each other. From diffusion theory it is known that the solution is determined by specifying the entering current along the boundary *∂V*. Thus the boundary flux is also determined. But the given boundary flux also determines the solution everywhere in *V*. The solution is given formally by a Green's function as follows:

$$\Psi\_k(\mathbf{r}) = \int\_{\partial V} \sum\_{k\_0=1}^G \mathcal{G}\_{k\_0,k}(\mathbf{r}\_0 \to \mathbf{r}) f\_{k\_0}(\mathbf{r}\_0) d\mathbf{r}\_0. \tag{4.5}$$

Here G*k*0,*k*(**r**0 → **r**) is the Green's function; it gives the neutron flux created at point **r** in energy group *k* by one neutron entering *V* at **r**0 in energy group *k*0; and *fk*0(**r**0) is the given flux in energy group *k*0 at boundary point **r**0. Similarly, the net current is obtained as

$$J\_{nk}(\mathbf{r}) = -D\_{k} \nabla \int\_{\partial V} \sum\_{k\_0=1}^{G} \mathcal{G}\_{k\_0,k}(\mathbf{r}\_0 \to \mathbf{r}) f\_{k\_0}(\mathbf{r}\_0) d\mathbf{r}\_0 \tag{4.6}$$

where the ∇ operator acts on variable **r**.
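A discrete analogue of (4.5)–(4.6) can be sketched for a one-group, one-dimensional slab (invented constants; a volume source is used instead of a boundary source for simplicity): the inverse of the discretized diffusion operator plays the role of the Green's function, and the response depends linearly on the source:

```python
import numpy as np

# One-group slab, zero-flux boundaries; illustrative constants only
N, h = 50, 0.5               # interior points, mesh size [cm]
D, Sa = 1.2, 0.03            # diffusion constant, absorption
A = D / h**2 * (2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) + Sa*np.eye(N)
G = np.linalg.inv(A)         # discrete Green's function (matrix)

f1, f2 = np.zeros(N), np.zeros(N)
f1[10], f2[35] = 1.0, 2.5    # two localized sources

assert np.allclose(G @ (f1 + f2), G @ f1 + G @ f2)   # superposition
assert np.all(G > 0)         # a source raises the flux everywhere in V
```

The strict positivity of `G` reflects the M-matrix structure of the discrete diffusion operator; each column of `G` is the discrete flux response to a unit source at one mesh point.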

#### **4.1 Symmetries of the diffusion equation**

First, the symmetry properties of the solution do not change in time because (4.1) is linear; this is not true for nonlinear equations. Secondly, the equations in (3.3) need to be satisfied; that is, the operations of equation (4.4) and the boundary conditions must commute with the symmetry group elements. The symmetries of equation (4.4) are determined by the operators, the material parameters (cross-sections) and the geometry of *V*. The first term involves derivatives:

$$
\nabla(D\_k(\mathbf{r})\nabla\Psi\_k(\mathbf{r})) = \nabla D\_k(\mathbf{r})\cdot\nabla\Psi\_k(\mathbf{r}) + D\_k(\mathbf{r})\nabla^2\Psi\_k(\mathbf{r}).
$$

Here the first term contains a dot product, which is invariant under rotations and reflections. The second term involves the Laplace operator, which is also invariant under rotations and reflections. Thus, the major limiting symmetry factors are the material distributions, or the associated cross-sections as functions of space, and the shape of *V*. We assume the material distribution to be completely symmetric, thus for any cross-section Σ(**r**) we assume the transformation property

$$\mathbf{O}\_{g}\Sigma(\mathbf{r}) = \Sigma(\mathbf{D}(g)\mathbf{r}) = \Sigma(\mathbf{r}') = \Sigma(\mathbf{r}).\tag{4.7}$$

Here **O***<sup>g</sup>* is an operator applicable to the possible solutions. D(*g*) is a matrix representation of the symmetry group of the diffusion equation applicable to **r**. The following operators are encountered in diffusion theory. The general form of a reaction rate at point **r** ∈ *V* can be expressed as

$$R(\mathbf{r}) = \sum\_{k1} \Sigma\_{k1}(\mathbf{r}) \Psi\_{k1}(\mathbf{r}).\tag{4.8}$$

Here subscript 1 refers to the symmetric component. Since

$$\mathbf{O}\_{g}R(\mathbf{r}) = \mathbf{O}\_{g} \sum\_{k1} \Sigma\_{k1}(\mathbf{r}) \Psi\_{k1}(\mathbf{r}) = \sum\_{k1} \mathbf{O}\_{g}\left(\Sigma\_{k1}(\mathbf{r}) \Psi\_{k1}(\mathbf{r})\right) = \sum\_{k1} \Sigma\_{k1}(\mathbf{r}) \mathbf{O}\_{g} \Psi\_{k1}(\mathbf{r})$$

because the material distribution is assumed symmetric, hence **O***g*Σ(**r**) = Σ(**r**) for every symmetry *g*, the transformation properties of a reaction rate are completely determined by the transformation properties of the flux Ψ*k*1(**r**). The normal component of the net current at **r** ∈ *∂V* is

$$J\_{nk}(\mathbf{r}) = -D\_k(\mathbf{r})(\mathbf{n}\nabla)\Psi\_k(\mathbf{r}),\tag{4.9}$$

where **n** is the normal vector at **r**. We apply **O***g* to *Jnk*(**r**) to obtain:

$$\mathbf{O}\_{g} J\_{nk}(\mathbf{r}) = -\mathbf{O}\_{g} \left( D\_{k}(\mathbf{r}) (\mathbf{n} \nabla) \Psi\_{k}(\mathbf{r}) \right) = -D\_{k}(\mathbf{r}) (\mathbf{n} \nabla) \mathbf{O}\_{g} \Psi\_{k}(\mathbf{r}). \tag{4.10}$$

Thus, the transformation properties of the normal component of the net current agree with the transformation properties of the flux. In diffusion theory, the partial currents are defined as

$$I\_k(\mathbf{r}) = \frac{1}{4} \left( \Psi\_k(\mathbf{r}) - 2J\_{nk}(\mathbf{r}) \right); \quad J\_k(\mathbf{r}) = \frac{1}{4} \left( \Psi\_k(\mathbf{r}) + 2J\_{nk}(\mathbf{r}) \right). \tag{4.11}$$

From (4.9) it follows that the transformation properties of the partial currents correspond to the transformation properties of the flux.
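The algebra of (4.11) can be checked with made-up boundary values; the partial currents invert back to the flux and the net current, and any symmetry that permutes boundary points acts on *I* and *J* exactly as it acts on the flux:

```python
import numpy as np

# Made-up boundary samples of flux and net current (four boundary points)
rng = np.random.default_rng(2)
psi = rng.uniform(1.0, 2.0, size=4)
j_n = rng.uniform(-0.1, 0.1, size=4)

I = 0.25 * (psi - 2*j_n)        # entering partial current, (4.11)
J = 0.25 * (psi + 2*j_n)        # exiting partial current, (4.11)

assert np.allclose(J - I, j_n)       # net current recovered: J_n = J - I
assert np.allclose(2*(I + J), psi)   # flux recovered: Psi = 2 (I + J)

# A symmetry permuting boundary points acts identically on psi, j_n, I, J:
p = np.array([3, 2, 1, 0])           # a reflection of the four points
assert np.allclose(0.25*(psi[p] - 2*j_n[p]), I[p])
```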

The boundary condition (4.3) commutes with rotations and reflections provided the material properties do. The same is true for the diffusion equation (4.1). Our first conclusion is that the material distribution may set a limit to the symmetry properties. As to the symmetries, the volume *V* under consideration may also be a limiting factor. Let **O***g* be an operator that commutes with the operations of the diffusion equation (4.1) and (4.3), and whose representation D(*g*) maps *V* into itself. The set of such operators forms a group; the group operation is repeated application. That group is called the symmetry group of the diffusion equation.

**Example 4.1** (Symmetries in a homogeneous square)**.** This symmetry group has eight elements: four rotations *E*, *C*4, *C*4<sup>2</sup>, *C*4<sup>3</sup>, and four reflections *σx*, *σy*, called of type *σv*, and *σd*1, *σd*2, called of type *σd*. Characters of a given class have identical values. This group is known as the symmetry group of the square and is denoted *C*4*v*. The first column of a character table gives a mnemonic name to each representation, and a typical expression transforming according to the given representation. The first line is reserved for the most symmetric representation, called the unit representation. From the character table of the group *C*4*v* we learn that there are five irreducible representations, labeled *A*1, *A*2, *B*1, *B*2, *E*, where the *A*s and *B*s are one-dimensional and *E* is two-dimensional; *E* has two linearly independent components transforming as the *x* and *y* coordinates. (Note that distinct groups may have the same character table.)

**Example 4.2** (Symmetries in a homogeneous equilateral triangle)**.** The group has six elements: three rotations *E*, *C*3, *C*3<sup>2</sup>, and three reflections *σa*, *σb*, *σc*, called of type *σv*, through the three symmetry axes. The symmetry group is isomorphic to the *C*3*v* group and its character table is the same as that of the group *D*3. The *C*3*v* group is the symmetry group of the equilateral triangle; it has two one-dimensional and one two-dimensional representations.
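The eight operations of Example 4.1 can be realized on a discretized square and the invariance (4.7) checked directly; the cross-section and flux formulas below are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

# The eight operations of C4v realized on a square grid (Example 4.1)
ops = {
    "E":    lambda f: f,
    "C4":   lambda f: np.rot90(f),
    "C4^2": lambda f: np.rot90(f, 2),
    "C4^3": lambda f: np.rot90(f, 3),
    "s_x":  lambda f: np.flipud(f),
    "s_y":  lambda f: np.fliplr(f),
    "s_d1": lambda f: f.T,              # one diagonal mirror
    "s_d2": lambda f: np.rot90(f, 2).T, # the other diagonal mirror
}

n = 101
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
Sigma = 1.0 + 0.3*(x**2 + y**2) + 0.1*(x*y)**2   # fully symmetric cross-section
Psi = np.exp(-(x**2 + y**2)) * x                 # a flux transforming like x

for name, Og in ops.items():
    assert np.allclose(Og(Sigma), Sigma)         # O_g Sigma = Sigma, cf. (4.7)
    # the reaction rate Sigma*Psi transforms exactly as Psi does, cf. (4.8):
    assert np.allclose(Og(Sigma * Psi), Sigma * Og(Psi))
```

Because the grid is symmetric about the centre, each array operation (`rot90`, flips, transposes) realizes the corresponding coordinate transformation D(*g*) exactly.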

The key observation concerning the applications of symmetry considerations in boundary value problems is as follows. For a homogeneous problem (4.4) where there is no external source, the boundary condition is homogeneous, and every macroscopic cross-section Σ(**r**),**r** ∈ *V* is such that

$$\mathbf{O}\_{g}\Sigma(\mathbf{r}) = \Sigma(\mathbf{r})$$

for all **O***<sup>g</sup>* mapping *V* into itself. When the boundary conditions *hk*(**r**) in the expressions (4.3) transform according to an irreducible subspace *f <sup>α</sup>*(**r**) then the neutron flux Φ(**r**), the partial currents *I*(**r**), *J*(**r**), the reaction rate

$$R(\mathbf{r}) = \sum\_{k=1}^{G} \Sigma\_{k}(\mathbf{r}) \Psi\_{k}(\mathbf{r})$$

all transform under the automorphism group of *V* as do the boundary conditions *hk*(**r**).

The symmetry group of the volume *V* makes it possible to reduce the domain on which we have to determine the solution of the diffusion theory problem. Once we know the transformation rule of the flux, for example, it suffices to calculate the flux in a part of *V* and exploit the transformation rules. That observation is formulated in the following concise way. Let **r** ∈ *V* be a point in *V* and let *g* · **r** be the image of **r** under *g* ∈ *G*. Then the set of points *g* · **r**, *g* ∈ *G*, is called the orbit of **r** under the group *G*. If there is a set *V*0 ⊂ *V* such that the orbits of **r**0 ∈ *V*0 give every point<sup>2</sup> of *V*, we call *V*0 the fundamental domain of *V*. It is thus sufficient to solve the problem on the fundamental domain *V*0, and "continue" the solution to the whole volume *V*.
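Orbits and a fundamental domain can be made concrete on a small square lattice under the *C*4*v* action (a sketch; the 5 × 5 size is arbitrary):

```python
import numpy as np
from itertools import product

n = 5                        # odd, so the centre is a lattice point
pts = [(i, j) for i, j in product(range(n), repeat=2)]
c = (n - 1) / 2              # coordinates are taken about the centre

def orbit(p):
    """All images of lattice point p under the eight C4v operations."""
    x, y = p[0] - c, p[1] - c
    cand = {(x, y), (-y, x), (-x, -y), (y, -x),    # rotations
            (x, -y), (-x, y), (y, x), (-y, -x)}    # reflections
    return frozenset((int(a + c), int(b + c)) for a, b in cand)

orbits = {orbit(p) for p in pts}
fundamental = sorted(min(o) for o in orbits)       # one representative per orbit

assert sum(len(o) for o in orbits) == n * n        # orbits partition the square
assert all(len(o) in (1, 4, 8) for o in orbits)    # orbit sizes divide |C4v| = 8
assert len(fundamental) == 6                       # 6 orbits for the 5x5 lattice
```

Solving a symmetric problem only on the six representative points and "continuing" by the group action is exactly the domain reduction described above.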

When the boundary condition is not homogeneous or there is an external source, we exploit the linearity of the diffusion equation. The general solution is the sum of two terms: one with external source but homogeneous boundary condition and one with no external source but with non-homogeneous boundary condition. In either case, it is the external term that determines the transformation properties of the respective solution component.

#### **4.2 Selection of basis functions**

The purely geometric symmetries of a suitable equation lead to a decomposition (2.16) of an arbitrary function in a function space, and thus the decomposition of the function space itself. The decomposed elements are linearly independent and can be arranged to form an orthonormal system. This can be exploited in the calculations.

In a homogeneous material one can readily construct trial functions that fulfill the diffusion equation at each point of *V*. For example consider

$$
\nabla^2 \underline{\psi}(\mathbf{r}) + \mathbf{A} \underline{\psi}(\mathbf{r}) = 0 \tag{4.12}
$$

where **A** = (**Σ***t* − **Σ***s* + **Σ***f*) **D**<sup>−1</sup>. The general solution to (4.12) takes the form

$$\underline{\boldsymbol{\Psi}}(\mathbf{r}) = \sum\_{k=1}^{G} \underline{\boldsymbol{t}}\_{k} \int\_{|\mathbf{e}| = 1} e^{i\lambda\_{k} \mathbf{r} \cdot \mathbf{e}} \mathcal{W}\_{k}(\mathbf{e}) d\mathbf{e} \tag{4.13}$$

where the weight functions *Wk*(**e**) are arbitrary suitable functions, *i*<sup>2</sup> = −1, and *tk* signify the eigenvectors of matrix **A**:

$$\mathbf{A}\underline{\mathbf{t}}\_{k} = \lambda\_{k}^{2}\underline{\mathbf{t}}\_{k}.\tag{4.14}$$

<sup>2</sup> Images of *V*<sup>0</sup> cover *V*.
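That the trial functions (4.13) satisfy (4.12) pointwise can be verified numerically; the 2 × 2 matrix **A** below is a random positive definite stand-in (not a physical cross-section combination), and a real cosine combination of the plane waves is used:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(2, 2))
A = B @ B.T + np.eye(2)                  # symmetric, safely positive definite
lam2, T = np.linalg.eigh(A)              # A t_k = lambda_k^2 t_k, cf. (4.14)
lam = np.sqrt(lam2)

e = np.array([np.cos(0.3), np.sin(0.3)]) # one fixed unit direction
h = 1e-4                                 # finite-difference step
r0 = np.array([0.7, -0.2])               # an arbitrary test point

def psi(r):
    # real combination of plane waves from (4.13): sum_k t_k cos(lambda_k r.e)
    return sum(np.cos(lam[k] * (r @ e)) * T[:, k] for k in range(2))

# five-point finite-difference Laplacian of the vector-valued psi at r0
lap = (psi(r0 + [h, 0]) + psi(r0 - [h, 0]) +
       psi(r0 + [0, h]) + psi(r0 - [0, h]) - 4 * psi(r0)) / h**2

assert np.allclose(lap + A @ psi(r0), 0, atol=1e-5)   # (4.12) holds pointwise
```

Since the equation only involves λ<sub>*k*</sub><sup>2</sup>, any weighting over directions **e**, as in (4.13), also solves (4.12).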

furthermore, for the reactions rates formed with the help of the cross-sections in (4.1), similar

Application of Finite Symmetry Groups to Reactor Calculations 303


When **e**(*θ*)=(cos *θ*, sin *θ*), using (2.19), we build up a regular representation from (4.13) so that

$$\underline{\psi}\_{0}(\mathbf{r}) = \sum\_{k=1}^{G} \underline{\mathfrak{t}}\_{k} \int\_{0}^{2\pi/|G|} e^{i\lambda\_{k}\mathbf{r}\cdot\mathbf{e}(\theta)} \mathcal{W}\_{k}(\mathbf{e}(\theta)) d\theta \,\tag{4.15}$$

and the action of the operators **O***<sup>g</sup>* on *<sup>ψ</sup>*<sup>0</sup> (**r**) is defined as follows. **O***<sup>g</sup>* acts on variable **r**, see (2.6), but in (4.13), **r** occurs only in the form of the dot product **r**·**e**(*θ*), therefore the action of **O***<sup>g</sup>* can be transferred to an action on *θ*. As a result, each **O***<sup>g</sup>* acts as

$$\mathbf{O}\_{g}\underline{\psi}\_{0}(\mathbf{r}) = \sum\_{k=1}^{G} \underline{\mathfrak{t}}\_{k} \int\_{I\_{g}} e^{i\lambda\_{k}\mathbf{r}\cdot\mathbf{e}(\theta)} \mathcal{W}\_{k}(\mathbf{e}(\theta)) d\theta \tag{4.16}$$

where **O***<sup>g</sup>* maps the interval 0 ≤ *θ* ≤ 2*π*/|*G*| into the interval *Ig*. In this manner we get the irreducible components of the solution as a linear combination of |*G*| exponential functions; only the coefficients in the linear combination determine the irreducible components. The weight function *Wk*(*θ*) makes it possible to match the entering currents at given points of the boundary. Let *θ* = 0 correspond to the middle of a side. Then choosing

$$\mathcal{W}\_{k}(\mathbf{e}(\theta)) = \mathcal{W}\_{k}\delta(\theta),\tag{4.17}$$

we get by (4.15) the solution at face midpoints. The last step is the formation of the irreducible components. Observe that in projection (2.20) the solutions at the different images of **r** are used in a linear combination; the coefficients of the linear combinations are the rows of the character table. But in the images (4.16), only the weight function changes. In each *Ig* interval the image of *Wk*(**e**) is involved, which is a Dirac delta function; only the position of the singularity changes, as the group elements map the place of the singularity. A symmetry of the square maps a face center into another face center, thus there will be four distinct positions, and the space dependent part of the irreducible component of *ψ*<sup>0</sup> will contain four exponentials:

$$
\pm e^{i\lambda\_k x}, \quad \pm e^{-i\lambda\_k x}, \quad \pm e^{i\lambda\_k y}, \quad \pm e^{-i\lambda\_k y}. \tag{4.18}
$$

From these expressions the following irreducible combinations can be formed:

$$A\_1: \cos \lambda\_k x + \cos \lambda\_k y; \quad A\_2: \cos \lambda\_k x - \cos \lambda\_k y; \quad E\_1: \sin \lambda\_k x; \quad E\_2: \sin \lambda\_k y. \tag{4.19}$$

It is not surprising that when we represent a side by its midpoint the odd functions along the side are missing.
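The transformation behaviour of the combinations (4.19) can be spot-checked numerically. The following sketch is only an illustration (the value *λk* = 1 and the sample points are arbitrary choices, not taken from the text); it verifies that *A*<sup>1</sup> is invariant under a quarter-turn of the square, that *A*<sup>2</sup> changes sign, and that the *E* components transform into each other:

```python
import numpy as np

lam = 1.0  # arbitrary sample value for lambda_k

A1 = lambda x, y: np.cos(lam * x) + np.cos(lam * y)
A2 = lambda x, y: np.cos(lam * x) - np.cos(lam * y)
E1 = lambda x, y: np.sin(lam * x)
E2 = lambda x, y: np.sin(lam * y)

# rotation by pi/2 about the node centre: (x, y) -> (-y, x)
pts = np.random.default_rng(0).uniform(-1.0, 1.0, size=(100, 2))
for x, y in pts:
    xr, yr = -y, x
    assert np.isclose(A1(xr, yr), A1(x, y))   # A1 is invariant
    assert np.isclose(A2(xr, yr), -A2(x, y))  # A2 changes sign
    assert np.isclose(E1(xr, yr), -E2(x, y))  # the E pair mixes
```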

The above method may serve as a starting point for developing efficient numerical methods. The only approximation is in the continuity of the partial currents at the boundary of adjacent homogeneous nodes.

If elements of the function space are defined for all **r** ∈ *V*, and if *f*1, *f*<sup>2</sup> ∈ **L***V*, then the following inner product is applicable:

$$(f\_1, f\_2) \equiv \int\_V f\_1(\mathbf{r}) f\_2(\mathbf{r}) d^3 \mathbf{r}.\tag{4.20}$$

Let *f<sup>α</sup><sub>ℓ</sub>*(**r**), ℓ = 1, . . . , *n<sub>α</sub>* be a regular representation of group *G*. Then

$$(f\_{\ell}^{\alpha}(\mathbf{r}), f\_{\ell'}^{\beta}(\mathbf{r})) = \delta\_{\alpha,\beta} \delta\_{\ell,\ell'} \tag{4.21}$$


Furthermore, for the reaction rates formed with the help of the cross-sections in (4.1), a similar orthogonality relation holds. For the volume integrated reaction rates we have

$$(f\_{\ell}^{\alpha}(\mathbf{r}), \text{const}) = \delta\_{\alpha,1} \delta\_{\ell,1}, \tag{4.22}$$

in other words, only the most symmetric, one-dimensional representation contributes to the volume integrated reaction rates. Note that, as a result of the decomposition of the solution (or its approximation) into irreducible components, not only do the irreducible components of a given physical quantity (like flux, reaction rate, net current) fall into linearly independent irreducible subspaces, but a given irreducible component of every physical quantity falls into the same irreducible subspace. As a consequence, the operators (matrices) mapping the flux into the net currents (or vice versa) also fall into the same irreducible subspaces, therefore the mapping matrix automatically becomes diagonal.
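Property (4.22) can be verified by direct quadrature. The sketch below is illustrative only (*λk*, the node half-width, and the grid resolution are arbitrary choices); it confirms that, of the combinations (4.19), only the fully symmetric one has a non-vanishing volume integral:

```python
import numpy as np

lam, h = 1.0, 1.0                 # arbitrary lambda_k and node half-width
x = np.linspace(-h, h, 2001)
X, Y = np.meshgrid(x, x)
w = (x[1] - x[0]) ** 2            # simple Riemann quadrature weight

integral = lambda F: float(np.sum(F) * w)

A1 = np.cos(lam * X) + np.cos(lam * Y)   # fully symmetric component
A2 = np.cos(lam * X) - np.cos(lam * Y)
E1 = np.sin(lam * X)

assert abs(integral(A1) - 8 * np.sin(1.0)) < 2e-2  # non-zero: 8 sin(1) analytically
assert abs(integral(A2)) < 1e-6                    # vanishes by symmetry
assert abs(integral(E1)) < 1e-6                    # vanishes by symmetry
```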

**Example 4.3** (Symmetry components of boundary fluxes)**.** Consider the flux given along the boundary of a square. The flux is given by four functions corresponding to the flux along the four sides of the square. The flux along a given face is the sum of an even and an odd function with respect to the reflection through the midpoint of the face. The decomposition (2.21) gives the eight irreducible components shown in Figure 2. Note that the irreducible subspaces *α<sub>i</sub>*, *i* < 5, are one-dimensional, whereas the subspace *α* = 5 is two-dimensional, and in a two-dimensional representation there are two pairs of basis functions that are identical as to symmetry properties. Thus, we have altogether eight linearly independent basis functions.

Fig. 2. Irreducible components on the boundary of a square

The physical meaning of the irreducible components is that the flux distribution of a node is a combination of the flux distributions established by eight boundary condition types. The


component *A*<sup>1</sup> represents complete symmetry, that is, the same even distribution along each side. Component *A*<sup>2</sup> is also symmetric, but the boundary condition is an odd function on each side. Component *B*<sup>1</sup> represents entering neutrons along one axis and exiting neutrons along the perpendicular axis, a realization of a second derivative, with even functions over a face; *B*<sup>2</sup> is the same but with odd functions along a face. *E*<sup>1</sup> and *E*<sup>4</sup> represent streaming in the *x* and *y* directions with even distributions along a face, whereas *E*<sup>2</sup> and *E*<sup>3</sup> represent streaming with odd distributions along a face.

The symmetry transformations of the square map the functions given along the half faces into each other, but they do not say anything about the function shape along a half face. Therefore, the functions in Fig. 2 serve only as patterns; the function shape is arbitrary along a half face. The corresponding mathematical term is the direct product; each function may be multiplied by a function *f*(*ξ*), −*h*/2 ≤ *ξ* ≤ +*h*/2. It is well known that a function on an interval can be approximated by a suitable polynomial (Weierstrass's theorem). We know from practice that in reactor calculations a second order polynomial suffices on a face for the precision needed in a power plant.

The invariant subspace means that the boundary flux, the net current, and the partial currents must follow one of the patterns shown in Figure 2; the only difference may be in the shape function *f*(*ξ*), −*h*/2 ≤ *ξ* ≤ +*h*/2. This means that a constant flux may create a quadratic position dependent current, but the global structure of the flux and current should belong to the same pattern of Figure 2.

Moreover, if we are interested in the solution inside the square, its pattern must also be the same although there the freedom allows a continuous function along 1/8-th of the square. These features are exploited in the calculation.

#### **4.3 Iteration**

It is known that the diffusion (as well as the transport) equation has a well defined solution in *V* provided the entering current is given along the boundary *∂V*. From the Green's function and from the operators in (4.11) we set up the following iteration scheme. To formalize this, we write the solution as

$$\Psi\_k(\mathbf{r}) = \sum\_{k'=1}^G \int\_{\partial V} \mathcal{G}\_{kk'}(\mathbf{r'} \to \mathbf{r}) I\_{k'}(\mathbf{r'}) d\mathbf{r'}.\tag{4.23}$$

Applying operator **F** that forms the exiting current from the flux, we obtain

$$J\_k(\mathbf{r}) = \sum\_{k'=1}^{G} \int\_{\partial V} \mathbf{F} \mathcal{G}\_{kk'}(\mathbf{r'} \to \mathbf{r}) I\_{k'}(\mathbf{r'}) d\mathbf{r'},\tag{4.24}$$

that can be put into the concise form

$$J\_k = \sum\_{k'=1}^{G} \mathbf{R}\_{kk'} I\_{k'} \tag{4.25}$$

where we have suppressed that the partial currents depend on position along the boundary, and the response matrix **R** includes an integration over variable **r**′.

When volume *V* is large, we subdivide it into subvolumes (nodes) and determine the response matrices for each subvolume. At internal boundaries, the exiting current is the


incident current of the adjacent subvolume. Thus in a composite volume the partial currents are connected by response matrices and adjacency. We collect the response matrices and adjacency into two big response matrices:

$$
\underline{J} = \mathbf{R}\underline{I}; \quad \underline{I} = \mathbf{H}\underline{J}, \tag{4.26}
$$

and because the adjacency is an invertible relationship, we multiply the first expression by **H** and get

$$
\underline{I} = \mathbf{H} \mathbf{R} \underline{I}.\tag{4.27}
$$

Since there is a free parameter *keff* in matrix **R**, the equation is solvable. At external boundaries there is no adjacency, but the boundary condition there provides a rule to determine the entering current from the exiting current. With these supplements, the solution of equation (4.27) proceeds as

$$
\underline{I}^{(m+1)} = \mathbf{H} \mathbf{R}^{(m)} \underline{I}^{(m)}.\tag{4.28}
$$

The iteration starts with *m* = 0 and an initial guess for *keff* and the entering currents *I*. Let us assume that the needed matrices are available; their determination is discussed in the subsequent subsection. The iteration proceeds as follows. We sweep through the subvolumes in a given sequence and carry out the following actions (in node *m*):

• collect the actual incoming currents of subvolume *m*;
• determine the actual response matrix to calculate the new exiting currents and contributions to volume integrals<sup>3</sup>;
• determine the new exiting currents (*J*) from the entering currents and the response matrices using equation (4.26) and the contributions to the volume integrals.
After this, pass on to the next node. When the iteration reaches the last node, the sweep ends and the maximal difference between the entering currents of the last two iterations is determined. At the end of an iteration step, the parameter *keff* is re-evaluated from the condition that the largest eigenvalue of **HR** should equal one. If the difference of the last two estimates is greater than the given tolerance limit, a new iteration cycle is started; otherwise the iteration terminates. If we have a large number of nodes, the improvement after the calculation of a given node is small. This shows that the iteration process is rather slow; acceleration methods are required.
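The sweep described above can be sketched for a toy configuration. The code below is not the production algorithm; it assumes a hypothetical chain of four nodes, a single made-up 2 × 2 response matrix per node (left/right faces), and reflective external boundaries, and it normalizes the currents after each sweep so that the scale factor estimates the dominant eigenvalue of **HR**:

```python
import numpy as np

n_nodes = 4
# hypothetical per-node response: reflection 0.3, transmission 0.6
R = np.array([[0.3, 0.6],
              [0.6, 0.3]])

def sweep(I):
    """One iteration step: exiting currents J = R I in every node, then
    adjacency (and reflective external boundaries) gives the new I."""
    J = I @ R.T
    I_new = np.empty_like(I)
    for m in range(n_nodes):
        I_new[m, 0] = J[m - 1, 1] if m > 0 else J[0, 0]            # from left neighbour / boundary
        I_new[m, 1] = J[m + 1, 0] if m < n_nodes - 1 else J[m, 1]  # from right neighbour / boundary
    return I_new

I = np.ones((n_nodes, 2))       # initial guess for the entering currents
for _ in range(100):
    I_next = sweep(I)
    a = np.linalg.norm(I_next)  # estimate of the dominant eigenvalue of HR
    I = I_next / a
```

Here the row sums of the combined map are 0.9, so the dominant eigenvalue converges to 0.9; in the real scheme *keff* is adjusted until this eigenvalue equals one.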

It has been proven (Mika, 1972) that the outlined iteration is convergent. The goal of the iteration is to determine the partial current vector. The length of vector **I** is *Nnode* × *nF* × *G*. From the point of view of mathematics, the iteration is a transformation of the following type:

$$\mathbf{A}(k\_{eff})\mathbf{x}\_m = a\mathbf{x}\_{m+1} \tag{4.29}$$

where *m* is the number of the iteration; matrix **A**(*keff*) makes the new entering current vector **x***m*+<sup>1</sup> from the old entering current vector **x***m*. In the case of neutron diffusion or transport, operator **A**(*keff*) maps positive vectors into positive vectors. In accordance with the Krein–Rutman theorem, **A**(*keff*) has a dominant eigenvalue and an associated eigenfunction<sup>4</sup>. When *keff* is a given value, the power method is a simple iteration technique to find a good estimate of **<sup>x</sup>** = lim*i*→<sup>∞</sup> **<sup>x</sup>***i*. Solution methods have been worked out for practical problems in nuclear reactor theory: for the solution of the diffusion and transport equations in

<sup>3</sup> We obtain reaction rates also from the Green's function.

<sup>4</sup> Actually a discretized eigenfunction, i.e. **x**.

the core of a power reactor. The original numerical method is described elsewhere, see Refs. (Weiss, 1977), (Hegedus, 1991).
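The interplay between the power method and the re-evaluation of *keff* can be illustrated with a toy model. The sketch below does not reproduce the cited production codes; it assumes a hypothetical splitting **HR**(*keff*) = **S** + **F**/*keff* with made-up positive 2 × 2 matrices, and rescales *keff* by the current dominant eigenvalue until that eigenvalue equals one:

```python
import numpy as np

def dominant_eigenvalue(A, iters=500):
    """Power method: dominant eigenvalue and eigenvector of a positive matrix."""
    x = np.ones(A.shape[0])
    mu = 1.0
    for _ in range(iters):
        y = A @ x
        mu = np.linalg.norm(y)
        x = y / mu
    return mu, x

# hypothetical positive 'scattering-like' and 'fission-like' parts of HR
S = np.array([[0.4, 0.1], [0.2, 0.3]])
F = np.array([[0.5, 0.2], [0.1, 0.4]])

k_eff = 1.0                         # initial guess
for _ in range(200):
    mu, I = dominant_eigenvalue(S + F / k_eff)
    k_eff *= mu                     # re-evaluate k_eff so that mu -> 1
    if abs(mu - 1.0) < 1e-12:
        break
```

The update *keff* ← *keff* · *μ* converges linearly here because the dominant eigenvalue *μ* is a decreasing function of *keff* for positive **F**.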

Note that the iteration (4.29) is just an example of the maps transforming an element of the solution space into another element. Thus in principle one can observe chaotic behavior, divergence, strange attractors<sup>5</sup>, etc. Therefore it is especially important to design the iteration scheme carefully. The iteration includes derived quantities of two types: volume integrated and surface integrated. When you work with an analytical solution, the two are derived from the same analytical solution. But when you are using approximations (such as polynomial approximation), it has to be checked whether the polynomials used inside the node and at the surface of the node are consistent. In an eigenvalue problem, parameter *keff* in equation (4.29) should be determined from the condition that the dominant eigenvalue *a* in (4.29) should equal one. First we deal with the general features of the iteration.

As has been mentioned, one iteration step (4.29) sweeps through all the subvolumes. The number of subvolumes (Gadó et al., 1994) varies between 590 and 7980, and the number of unknowns between 9440 and 111680. At the boundary of two adjacent subvolumes, continuity of Φ and *D∂n*Φ (the normal current) is prescribed.

Consider node *m* in iteration *i*. In the derivation of the analytical solution we have assumed the node to be invariant under the group *<sup>G</sup>*<sup>V</sup>. Actually, it is not the material properties themselves that are stored in a program, because the material properties depend on:

• the initial composition of the fuel (e.g. enrichment);
• the actual composition of the fuel as it may change with burn-up;
• the actual temperature of the node;
• the void content of the moderator;
• the power level.
In the calculations, approximately 50–60% of the time is spent on finding the actual response matrix elements, because those depend on a number of local material parameters (e.g. density, temperature, void content). We mention this datum to underline how important it is to reduce the parametrization work in a production code.

#### **4.4 Exploiting symmetries**

In a given node, the response matrices are determined based on the analytical solution (4.19). We need an efficient recipe for decomposing the entering currents into irreps and reconstructing the exiting currents on the faces. Since the only approximation in the procedure is the requirement of continuity of the partial currents, we need to specify how the partial currents are represented. The simplest is a representation by discrete points along the boundary; the minimal number is four, and the maximal number depends on the computer capacity. An alternative choice is to represent the partial currents by moments over the faces. Usually the average, first and second moments suffice for the accuracy needed in practice. The representation fixes the number of points we need on a side and the number of points (*n*) on the node boundary.

To project the irreps, we may use (Mackey, 1980) the vectors cos((*k* − 1)2*π*/*n*), *k* = 1, . . . , *n*/2 and sin(*k* 2*π*/*n*), *k* = 1, . . . , *n*/2 (after normalization). The following illustration shows

<sup>5</sup> Since *keff* depends on the entering currents, the problem is non-linear.

the case with *n* = 4, i.e. one value per face. In a square node we need the following matrix

$$
\boldsymbol{\Omega}\_4 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \end{pmatrix} . \tag{4.30}
$$

to project the irreducible components from the side-wise values. As (2.20) shows, irreducible components are linear combinations of the decomposable quantity<sup>6</sup>. The coefficients are given as the rows of matrix **Ω**<sub>4</sub>.
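As a small illustration (the face values are made up), projecting four side-wise currents with the rows of **Ω**<sub>4</sub> from (4.30) yields the irreducible components; after row normalization (footnote 6) the matrix is orthogonal, so the face values are recovered exactly by the transpose:

```python
import numpy as np

Omega4 = np.array([[1.,  1.,  1.,  1.],
                   [1., -1.,  1., -1.],
                   [1.,  0., -1.,  0.],
                   [0.,  1.,  0., -1.]])

I_faces = np.array([1.0, 0.8, 0.6, 0.8])  # hypothetical entering currents, one per face
components = Omega4 @ I_faces             # one irreducible component per row

# after normalizing the rows, Omega4 becomes orthogonal (footnote 6)
O = Omega4 / np.linalg.norm(Omega4, axis=1, keepdims=True)
assert np.allclose(O @ O.T, np.eye(4))
assert np.allclose(O.T @ (O @ I_faces), I_faces)  # exact reconstruction
```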

In a regular *n*-gonal node the response matrix has<sup>7</sup> *Ent*[(*n* + 2)/2] free parameters. The response matrix also has to be decomposed into irreps; this is done by a basis change. Let the response matrix give

**J** = **RI**

Multiply this expression by **Ω** from the left:

$$\mathbf{\Omega}\mathbf{J} = \left(\mathbf{\Omega}\mathbf{R}\mathbf{\Omega}^{-1}\right)\mathbf{\Omega}\mathbf{I},\tag{4.31}$$

and we see that for the irreducible representations the response matrix is given by **ΩRΩ**<sup>+</sup>. In a square node:

$$\mathbf{R}\_4 = \begin{pmatrix} r & t\_1 & t\_2 & t\_1 \\ t\_1 & r & t\_1 & t\_2 \\ t\_2 & t\_1 & r & t\_1 \\ t\_1 & t\_2 & t\_1 & r \end{pmatrix} \tag{4.32}$$

and the irreducible representation of *R*<sup>4</sup> is diagonal:

$$
\begin{pmatrix} A & 0 & 0 & 0 \\ 0 & B & 0 & 0 \\ 0 & 0 & C & 0 \\ 0 & 0 & 0 & C \end{pmatrix}, \tag{4.33}
$$

where


$$A = r + 2t\_1 + t\_2, \quad B = r - 2t\_1 + t\_2, \quad C = r - t\_2.$$

We summarize the following advantages of applying group theory:


<sup>6</sup> After normalization of the row vectors, the **Ω** matrices become orthogonal: **Ω**+**Ω** is the unit matrix.

<sup>7</sup> Here *Ent* is the integer division.
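The irreducible combinations *A*, *B*, *C* above can be checked numerically. The sketch below uses invented numbers and assumes *r* is the reflection and *t*1, *t*2 the transmissions to the adjacent and opposite faces; the normalized symmetry-adapted vectors then diagonalize the 4×4 response matrix of the square node.

```python
import numpy as np

# Illustrative sketch for a homogeneous square node: r (reflection),
# t1 (transmission to an adjacent face), t2 (to the opposite face).
# The numerical values are made up.
r, t1, t2 = 0.5, 0.2, 0.1
P = np.array([[r,  t1, t2, t1],
              [t1, r,  t1, t2],
              [t2, t1, r,  t1],
              [t1, t2, t1, r ]])

# Symmetry-adapted basis vectors on the four faces, i.e. the normalized
# cos/sin vectors of the text.
Omega = np.array([[1.0,  1.0,  1.0,  1.0],   # fully symmetric component
                  [1.0, -1.0,  1.0, -1.0],   # alternating component
                  [1.0,  0.0, -1.0,  0.0],   # two-dimensional irrep, 1st partner
                  [0.0,  1.0,  0.0, -1.0]])  # two-dimensional irrep, 2nd partner
Omega = Omega / np.linalg.norm(Omega, axis=1, keepdims=True)

# In this basis the response matrix is diagonal with entries A, B, C, C:
D = Omega @ P @ Omega.T
A, B, C = r + 2*t1 + t2, r - 2*t1 + t2, r - t2
print(np.round(np.diag(D), 6))                # -> the values A, B, C, C
print(np.allclose(D, np.diag([A, B, C, C])))  # -> True
```

The double appearance of *C* reflects the two-dimensional irrep: the two partner vectors belong to the same eigenvalue.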

| *i* / Order | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| *A*1 | 1 | - | *x*<sup>2</sup> + *y*<sup>2</sup> | - | *x*<sup>2</sup>*y*<sup>2</sup>, *x*<sup>4</sup> + *y*<sup>4</sup> |
| *A*2 | - | - | - | - | *x*<sup>3</sup>*y* − *y*<sup>3</sup>*x* |
| *B*1 | - | - | *x*<sup>2</sup> − *y*<sup>2</sup> | - | *x*<sup>4</sup> − *y*<sup>4</sup> |
| *B*2 | - | - | *xy* | - | *x*<sup>3</sup>*y* + *y*<sup>3</sup>*x* |
| *E* | - | *x* | - | *x*<sup>3</sup>, *xy*<sup>2</sup> | - |
| *E* | - | *y* | - | *y*<sup>3</sup>, *x*<sup>2</sup>*y* | - |

Application of Finite Symmetry Groups to Reactor Calculations 309

Table 6. Irreducible components of at most fourth order polynomials under the symmetries of a square *C*4*<sup>v</sup>*

• It is more efficient to break up a problem into parts and solve each subproblem independently. Results have been reported for operational codes (Gadó et al., 1994).

The above considerations dealt with the local symmetries. However, if we decompose the partial currents into irreps, we get a decomposition of the global vector *x* in equation (4.28) as well. We exploit the linear independence of the irreducible components further on the global scale.

For most physical problems we have a priori knowledge about the solution to a given boundary value problem in the form of smoothness and boundedness. This is brought to bear through the choice of solution space. In the following, we introduce via group theoretical principles the additional information of the particular geometric symmetry of the node. This allows the decomposition of the solution space into irreducible subspaces, and leads, for a given geometry, not only to a rule for choosing the optimum combination of polynomial expansions on the surface and in the volume, but also elucidates the subtle effect that the geometry of the physical system can have on the algorithm for the solution of the associated mathematical boundary problem.

Consider the iteration (4.29) and decompose the iterated vector into irreducible components

$$
\underline{x} = \sum\_{\alpha} \underline{x}^{\alpha} \tag{4.34}
$$

where, because of the orthogonality of the irreducible components,

$$
\underline{x}^{\beta+} \underline{x}^{\alpha} = 0
$$

when *α* ≠ *β*. The convergence of the iteration means that

$$\lim\_{N \to \infty} \left( \underline{x}\_{N+k\_1} - \underline{x}\_{N+k\_2} \right) = 0 \tag{4.35}$$

for any *k*1, *k*2. But that entails that, as the iteration proceeds, the difference between two iterated vectors must tend to zero. In other words, the iteration must converge in every irreducible subspace. This requirement may be violated when the iteration process has not been carefully designed.
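A toy illustration of (4.34)–(4.35), under the assumption of a map that commutes with a single reflection: the two invariant subspaces (symmetric and antisymmetric) then converge separately, at different rates.

```python
import numpy as np

# Toy iteration commuting with a reflection S (x -> x reversed), so the
# symmetric and antisymmetric subspaces are invariant, as in (4.34)-(4.35).
n = 8
S = np.eye(n)[::-1]                      # reversal (reflection) operator
P_sym, P_anti = (np.eye(n) + S) / 2, (np.eye(n) - S) / 2
M = 0.9 * P_sym + 0.5 * P_anti           # a contraction on each subspace

x = np.arange(1.0, n + 1.0)
diffs_sym, diffs_anti = [], []
for _ in range(50):
    x_new = M @ x
    d = x_new - x
    # difference of consecutive iterates, projected onto each irrep
    diffs_sym.append(np.linalg.norm(P_sym @ d))
    diffs_anti.append(np.linalg.norm(P_anti @ d))
    x = x_new

# convergence must hold in every irreducible subspace separately
print(diffs_sym[-1] < 1e-2, diffs_anti[-1] < 1e-2)  # -> True True
```

If the map amplified even one irreducible component (an eigenvalue above one in that subspace), the overall iteration would fail no matter how well the remaining components behave.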

Let us assume a method, see (Palmiotti, 1995), in which *N* basis functions are used to expand the solution along the boundary of a node and *M* basis functions to expand the solution inside the node. It is reasonable to use an approximation of the same order along each face; hence, in a square node *N* is a multiple of four. For an *M*th order approximation inside the node, the number of free coefficients is (*M* + 1)(*M* + 2)/2. It has been shown that an algorithm (Palmiotti, 1995) with a linear (*N* = 1) approximation, with 8 free coefficients, along the four faces of the boundary did not result in a convergent algorithm unless an *M* = 4 (quartic) polynomial, with 15 free coefficients, was used inside the node.
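The coefficient counting can be reproduced directly; a trivial sketch following the formulas above:

```python
# Degrees of freedom in the nodal expansion discussed in (Palmiotti, 1995):
# N-th order polynomials on each of the four faces of a square node vs.
# an M-th order two-variable polynomial inside the node.
def surface_coeffs(N, faces=4):
    return faces * (N + 1)          # N+1 coefficients per face

def volume_coeffs(M):
    return (M + 1) * (M + 2) // 2   # coefficients of an M-th order 2D polynomial

print(surface_coeffs(1))   # linear on 4 faces -> 8
print(volume_coeffs(3))    # cubic inside      -> 10
print(volume_coeffs(4))    # quartic inside    -> 15, the first convergent choice
```

Note that the cubic already has more volume coefficients (10) than the surface (8), yet, as discussed below, convergence is first reached at the quartic; the mere count of coefficients is not the decisive quantity.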

In such a code each node is considered to be homogeneous in composition. Central to the accuracy of the method are two approximations. In the first, we assume the solution on the boundary surface of the node to be expanded in a set of basis functions (*fi*(*ξ*); *i* = 1, . . . , *N*). In the second, the solution inside the volume is expanded in another set of basis functions (*Fj*(**r**); *j* = 1, . . . , *M*). Clearly the independent variable *ξ* is a limit of the independent variable **r**.

Any iteration procedure, in principle, connects neighboring nodes through continuity and smoothness conditions. For an efficient numerical algorithm it is therefore desirable to have the same number of degrees of freedom (i.e. coefficients in the expansion) on the surface of the node as within the node. With the help of Table 6, for the case of a square node, we compare the required number of coefficients for different orders of polynomial expansion. A linear approximation along the four faces of the square has at least one component in each irreducible subspace. At the same time, the first polynomial contributing to the second irrep is of fourth order. Convergence requires convergence in each subspace; thus the approximation inside the square must be at least of fourth order. There is no linear polynomial approximation that would use the same number of coefficients on the surface as inside the volume. The appropriate choice of the order of expansion is thus not straightforward, but it is important to the accuracy of the solution, because a mismatch of degrees of freedom inside and on the surface of the node is likely to lead to a loss of information in the computational step that passes from one node to the next. A lack of convergence has been observed, see (Palmiotti, 1995), in calculations with a square node using first order polynomials on the surface; a convergent solution is obtained only with fourth or higher order polynomial interpolation inside the node. Similar relationships apply to nodes of other geometry. For a hexagonal node, there is no polynomial order for which the number of coefficients on the surface matches the number of coefficients inside the node.

In a hexagonal node, (Palmiotti, 1995) found that the first convergent solution with a linear approximation on the surface requires at least a sixth order polynomial expansion within the node. Thus, with a linear approximation on the surface, a third order polynomial within a square node does not lead to a convergent solution, although the number of coefficients inside is greater than that on the surface. In the case of the regular hexagonal node, a convergent solution is obtained only with a sixth order polynomial expansion in the node, although both a fourth and a fifth order polynomial have more coefficients inside the node than on the surface. It appears that some terms of the polynomial expansion carry less information than others, and are thus superfluous in the computational algorithm. If these terms can be "filtered out", a more efficient convergent solution should result. The explanation becomes immediately clear from the decomposition of the trial functions inside the volume and on the boundary. In both the square and hexagonal nodes, a first order approximation on the boundary already furnishes all irreducible subspaces, whereas this becomes true for the interpolating polynomials inside *V* only at surprisingly high polynomial orders.

#### **5. Reactor physics**

In analogy with the application of group theory in particle physics, where group theory leads to insights into the relationships between elementary particles, we present an application of


group theory to the solution of a specific reactor physics problem: is it possible to replace a part of a heterogeneous core by a homogeneous material so that the solution outside the homogenized region remains the same? This old problem is known as homogenization (Selengut, 1960).

In particular, for non-uniform lattices, asymptotic theory has shown that a lattice composed of identical cells has a solution that is composed of a periodic microflux and a slowly varying macroflux. What happens if the cell geometry is the same but the material composition varies?

In reactor calculations, we solve an equation derived from the neutron balance. In that equation, we encounter reaction rates, currents, or partial currents. It is reasonable to derive all the quantities we need from one given basic quantity, say from the neutron flux at given points of the boundary. The archetype of such a relation is the exiting current determined from the entering current by a response matrix. We show that by using irreducible components of the partial currents, the response matrix becomes diagonal.

The Selengut principle is formulated as follows: if the response matrix of a given heterogeneous material in *V* can be substituted by the response matrix of a homogeneous material in *V*, then there exists an equivalent homogeneous material with which one may replace *V*. This principle simplifies calculations considerably and has therefore been widely used in reactor physics. We investigate the Selengut principle more closely (Makai, 1992), (Makai, 2010).

The analysis is based on the analytical solution of the diffusion equation derived in the previous Section. The problem is considered in a few energy groups; the boundary flux *F* is a vector, as is the volume averaged flux Φ¯. Using that solution, we are able to derive the matrices mapping the volume integrated fluxes and the surface integrated partial and net currents into each other. The derivation of the corresponding matrices is as follows. Our basis is the boundary flux, which we derive for each irrep *i* from (4.13). The expression (4.13) has three components. The first one is the vector *tk*, which is independent of the position **r**; the second is the exponential function with *λk***r** in the exponent that multiplies it; the third component is the weight *Wk*, which is independent of **r** but varies with subscript *k*. The product is summed over subscript *k*, which labels the eigenvalues of the cross-section matrix in (4.14). That expression can be put into the following concise form:

$$\underline{F}\_i = \mathbf{T} < f\_i > \underline{c}\_i, \tag{5.1}$$

where *ci* comprises the third component. Here < *fi*(**r**) > is a diagonal matrix. Note that position dependent quantities, like reaction rates, follow that structure. The normal component of the net current, *J*net, is obtained from the flux by taking the derivative, and is given in irrep *i* as

$$\underline{J}\_{\text{net},i} = -\mathbf{D}\mathbf{T} < g\_i > \underline{c}\_i. \tag{5.2}$$

We eliminate *ci* to get

$$\underline{J}\_{\text{net},i} = -\mathbf{D}\mathbf{T} < g\_i/f\_i > \mathbf{T}^{-1}\underline{F}\_i \equiv \mathbf{R}\_i\underline{F}\_i. \tag{5.3}$$

Here **n** is the outward normal to face *Fi*,

$$g\_i = -\mathbf{n} \cdot \nabla f\_i(\mathbf{r}). \tag{5.4}$$

The volume integrated flux Φ¯ is obtained after integration from (4.13) as

$$
\bar{\Phi} = \mathbf{T} < \bar{F}\_{A1} > \underline{c}\_{A1}, \qquad \bar{F}\_{A1} = \int\_V f\_{A1}(\mathbf{r})\, d^3\mathbf{r}, \tag{5.5}
$$

and the integration runs over volume *V* of the node. Note that only irrep *A*1 (i.e. complete symmetry) contributes to the average flux because of the orthogonality of the irreducible flux components. After eliminating *cA*<sup>1</sup> from (5.1), we get the response matrix for determining the volume integrated flux Φ¯ from the face integrated flux *FA*1:

$$
\bar{\Phi} = \mathbf{T} < \bar{F}\_{A1}/f\_{A1} > \mathbf{T}^{-1} < \bar{F} > \equiv \mathbf{W} < \bar{F} >. \tag{5.6}
$$

This assures that *V* is completely described by the matrix **W** and the diagonal matrices < *F*(*r*) >, < *f*(*r*) >, < *g*(*r*) > for each irrep. For example, we are able to reconstruct the cross-section matrix **Σ** from them. Note that since **WT** = **T** < *F*/ *f* >, the eigenvectors of matrix **W** are the eigenvectors of **A**. Now we need only a numerical procedure to find the eigenvalues *λk* from < *F*/ *f* >.

The question is under what conditions the above calculations are feasible. We count the number of response matrices. The matrix elements we need to characterize *V* may all be different, and the number of matrices depends on the shape of *V*, since the number of irreducible components of the involved matrices depends on the geometry. In a square shaped homogeneous *V*, we have four **R***<sup>i</sup>* matrices and one **W**; altogether we have to determine 5 ∗ *G* ∗ *G* elements. In an inhomogeneous hexagonal volume, there are 6 ∗ *G* ∗ *G* matrix elements, whereas a homogeneous material is described by *G* ∗ (*G* + 1) parameters, as in a homogeneous material there are altogether *G* ∗ *G* cross-sections and *G* diffusion constants<sup>8</sup>. Therefore the Selengut principle is not exact; it may only be a good approximation under specific circumstances. Homogenization recipes preserve only specific reaction rates, but they do not provide general equivalence.
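The counting argument can be made explicit; a trivial sketch following the text, with *G* the number of energy groups:

```python
# Parameter counting behind the Selengut principle: the number of response
# matrix elements needed to characterize a node vs. the number of parameters
# available in a homogeneous material (G*G cross-sections + G diffusion
# constants).
def response_params(n_irrep_matrices, G):
    return n_irrep_matrices * G * G

def homogeneous_params(G):
    return G * (G + 1)

for G in (2, 4):
    need_square = response_params(5, G)   # four R_i matrices plus W
    need_hex = response_params(6, G)      # hexagonal volume
    have = homogeneous_params(G)
    print(G, need_square, need_hex, have) # e.g. G=2: need 20 or 24, have only 6
```

The homogeneous material thus has far fewer free parameters than the number of response matrix elements it would have to reproduce, which is why the principle can only hold approximately.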

#### **6. Conclusions**


The basic elements of the theory of finite symmetry groups have been introduced, in particular the use of the machinery associated with the decomposition into irreducible representations, in analogy with harmonic analysis in function space, in the analysis of Nuclear Engineering problems. The physical settings of many Nuclear Engineering problems exhibit symmetry, as for example in the solution of the multi-group neutron diffusion equation. This symmetry can be systematically exploited via group theory to elicit information that leads to more efficient numerical algorithms and to useful insights. This is due to the added information inherent in symmetry, and to the ability of group theory to define the "rules" of the symmetry and allow one to exploit them.

#### **7. References**

Allgower, E. L., et al. (1992). Exploiting symmetry in boundary element methods, *SIAM J. Numer. Anal.*, 29, 534–552.

Atkins, P. W., et al. (1970). *Tables for Group Theory*, Oxford University Press, Oxford.

Brooks, R. (1988). Constructing isospectral manifolds, *Amer. Math. Monthly*, 95, 823–839.

Conway, J. H., et al. (2003). *The ATLAS of Finite Groups*, Oxford University Press, Oxford.

Deniz, V. C. (1986). The theory of neutron leakage in reactor lattices, in *CRC Handbook of Nuclear Reactor Calculations*, vol. II, CRC Press, Boca Raton, FL.

Falicov, L. M. (1996). *Group Theory and Its Physical Applications*, The University of Chicago Press, Chicago, IL.

<sup>8</sup> Remember, here *G* is the number of energy groups.

**14** 

**Neutron Shielding Properties of Some Vermiculite-Loaded New Samples** 

Turgay Korkut1, Fuat Köksal2 and Osman Gencel3

*1Faculty of Science and Art, Department of Physics, Ibrahim Cecen University, Ağrı* 

*2Department of Civil Engineering, Faculty of Engineering and Architecture, Bozok University, Yozgat* 

*3Department of Civil Engineering, Faculty of Engineering, Bartin University, Bartin* 

*Turkey* 

**1. Introduction** 

Nuclear reactor technology has been an active area of study from past to present. It is an application of the nuclear sciences, covering reactions of the atomic nucleus and their products. During the construction of a nuclear reactor, the most important issue is nuclear safety, and safety can largely be attributed to radiation shielding. For nuclear reactors, several different materials are used for radiation shielding. In determining the most appropriate shielding material, the type and energy of the radiation are extremely important.

There are two types of nuclear reactions that release very large energies: the disintegration of atomic nuclei (fission) and the merging of small atomic nuclei (fusion). Nuclear reactors can therefore be divided into two groups according to the type of reaction that occurs: fission reactors and fusion reactors. Currently no nuclear reactor working with fusion reactions is available; today, there are hundreds of nuclear reactors based on fission reactions. For nuclear fission to take place, a large fissile atomic nucleus such as 235U absorbs a neutron. At the end of the nuclear fission event, fission products (two or more light nuclei, kinetic energy, gamma radiation and free neutrons) arise. Fission reactions are controlled by using neutron attenuators such as heavy water, cadmium, graphite, beryllium and several hydrocarbons. While designing a reactor, shield materials against gamma and neutron radiation should be used.

Vermiculite is a monoclinic-prismatic crystal mineral including Al2O3, H2O, MgO, FeO and SiO2. It is used in heat applications, as a soil conditioner, as loose-fill insulation, as absorbent packaging material, as lightweight aggregate for plaster, etc. Its chemical formula is known as (Mg,Fe,Al)3(Al,Si)4O10(OH)2·4H2O and its physical density is about 2.5 g·cm<sup>−3</sup>. The melting point of vermiculite is above 1350 °C. In terms of its mineral properties, this mineral can be used as an additive building material.

Vermiculite is a member of the phyllosilicate or sheet silicate group of minerals. It has a high-level exfoliation property, so if vermiculite is heated, it expands to many times its