#### **2. The Galois Field Fourier Transform**

We are particularly interested in the case of finite fields where *p* is a prime number and $\alpha \in GF(p^m)$ is an element of order *n*. The Galois Field Fourier Transform (GFFT) and its inverse of a vector $v = \{v\_0, v\_1, \dots, v\_{n-1}\}$ of length *n* over *GF*(*p*) can be related via the equations:

$$V\_j = \sum\_{i=0}^{n-1} \alpha^{ij} v\_i \qquad j = 0, \ldots, n-1$$

and

$$v\_i = n^{-1} \sum\_{j=0}^{n-1} \alpha^{-ij} V\_j \qquad i = 0, \ldots, n-1.$$

For any vector *v* over *GF*(*p*) for which the above equations hold true, we define

$$\mathcal{F}(v) \equiv V = \{V\_0, V\_1, \dots, V\_{n-1}\} \tag{1}$$

as the GFFT of *v* and

$$\mathcal{F}^{-1}(V) = v = \{v\_0, v\_1, \dots, v\_{n-1}\} \tag{2}$$

as the inverse GFFT of *V*.

Using this formulation, given two vectors

$$\begin{aligned} v &= \{v\_0, v\_1, \dots, v\_{n-1}\} \\ w &= \{w\_0, w\_1, \dots, w\_{n-1}\} \end{aligned} \tag{3}$$

over *GF*(*p*) and their associated transforms

$$\begin{aligned} \mathcal{F}(v) = V &= \{V\_0, V\_1, \dots, V\_{n-1}\} \\ \mathcal{F}(w) = W &= \{W\_0, W\_1, \dots, W\_{n-1}\} \end{aligned} \tag{4}$$

the familiar convolution theorem can be demonstrated to hold true for the finite field case. Specifically, computing

$$x\_j = \sum\_{k=0}^{n-1} v\_k w\_{(j-k)} \tag{5}$$

is equivalent to computing

$$x\_j = \mathcal{F}^{-1}(V\_j W\_j). \tag{6}$$
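The transform pair and the convolution theorem above can be checked numerically. The following sketch works over *GF*(7) with α = 3, an element of multiplicative order *n* = 6; these particular values are illustrative choices, not ones fixed by the text.

```python
# GFFT over GF(7) with alpha = 3 (order 6) -- an illustrative choice.
p = 7        # prime defining GF(p)
alpha = 3    # element of multiplicative order n
n = 6        # order of alpha

def gfft(v):
    """Forward GFFT: V_j = sum_i alpha^(ij) v_i mod p."""
    return [sum(pow(alpha, i * j, p) * v[i] for i in range(n)) % p
            for j in range(n)]

def igfft(V):
    """Inverse GFFT: v_i = n^(-1) sum_j alpha^(-ij) V_j mod p."""
    n_inv, a_inv = pow(n, -1, p), pow(alpha, -1, p)
    return [n_inv * sum(pow(a_inv, i * j, p) * V[j] for j in range(n)) % p
            for i in range(n)]

def cyclic_conv(v, w):
    """Equation (5): x_j = sum_k v_k w_(j-k mod n) mod p."""
    return [sum(v[k] * w[(j - k) % n] for k in range(n)) % p
            for j in range(n)]

v = [1, 2, 0, 3, 4, 5]
w = [2, 0, 1, 1, 0, 6]
V, W = gfft(v), gfft(w)

lhs = cyclic_conv(v, w)                               # Equation (5)
rhs = igfft([(Vj * Wj) % p for Vj, Wj in zip(V, W)])  # Equation (6)
assert lhs == rhs
assert igfft(gfft(v)) == v   # the transform pair inverts correctly
```

The convolution theorem holds for any vectors over *GF*(*p*) once α has order *n*; the two test vectors here are arbitrary.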

#### **3. Error control coding**

Given a message encoded as a vector *μ* of length *k* over *GF*(*p*), the goal of error control coding (ECC) is to transform the message vector into a code vector *C* of length *n* > *k* in a way that causes *C* to be robust to errors arising over a communication channel (such as a wireless link, fiber optic cable, etc.). Rather than the message vector *μ*, it is the code vector *C* that is transmitted over a channel where the receiver is only able to observe a received vector *Ĉ*. Ideally, in the absence of any noise, it should be the case that *Ĉ* = *C*. On the other hand, if noise is present on the channel, the method used to transform (i.e. 'encode') the message *μ* into the code vector *C* provides a way to recover *μ* from *Ĉ*.


The general idea behind ECC then is to find a *C* that minimizes $||C - \hat{C}||$; however, numerically determining the minimum distance solution is fraught with dimensionality issues that can lead to computational intractability. Hence, classes of codes have been devised that relate the message encoding method to the decoding algorithm. Such algorithms are often iterative (Blahut (2003); Lin & Costello (1983); Wicker & Kim (2003)) and converge upon the optimal solution by exploiting the mathematical structure designed into the code.
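The brute-force search that the structured algorithms avoid can be sketched directly; the `encode` function below is a stand-in 3-fold repetition encoder chosen purely for illustration, not a code from this chapter, and the exhaustive search over all $p^k$ messages is exactly what becomes intractable as *k* grows.

```python
from itertools import product

p, k = 2, 3   # illustrative toy parameters

def encode(msg):
    """Stand-in encoder: 3-fold repetition, so n = 3k and d_min = 3."""
    return [s for s in msg for _ in range(3)]

def decode(received):
    """Exhaustive minimum-distance decoding: try all p^k messages."""
    return min(product(range(p), repeat=k),
               key=lambda m: sum(a != b for a, b in zip(encode(m), received)))

# encode((1, 0, 1)) with a single symbol error in position 0
received = [0, 1, 1, 0, 0, 0, 1, 1, 1]
assert decode(received) == (1, 0, 1)
```

Even for this toy code the search visits all $2^3$ codewords; for realistic *k* the search space $p^k$ is astronomically large, which is the dimensionality issue referred to above.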

Two important quantities in the field of ECC are the Hamming weight and the Hamming distance. Consider two vectors *v* and *w* of length *n* over *GF*(*p*).

**Definition 3.1.** *The Hamming weight wH*(*v*) *of a vector v is defined as the number of non-zero components in v.*

**Definition 3.2.** *The Hamming distance between v and w is defined as the number of components that differ between v and w.*

For example, over *GF*(3), assuming *n* = 5, *v* = {02102} and *w* = {02212}, according to the above definitions we have that *wH*(*v*) = 3, *wH*(*w*) = 4 and *dH*(*v*, *w*) = 2.
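These two definitions translate directly into code; the sketch below reproduces the *GF*(3) example above.

```python
def hamming_weight(v):
    """Definition 3.1: number of non-zero components of v."""
    return sum(1 for x in v if x != 0)

def hamming_distance(v, w):
    """Definition 3.2: number of components that differ between v and w."""
    return sum(1 for a, b in zip(v, w) if a != b)

# The GF(3) example from the text: v = {02102}, w = {02212}
v = [0, 2, 1, 0, 2]
w = [0, 2, 2, 1, 2]
assert hamming_weight(v) == 3
assert hamming_weight(w) == 4
assert hamming_distance(v, w) == 2
```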

An important quantity for defining the noise sphere is referred to as *dmin*, the minimum Hamming distance between all pairs of code vectors defined in the code class. To correct up to *t* errors in any code vector, it turns out that the code must satisfy *dmin* ≥ 2*t* + 1. Furthermore, when the ECC is a linear code, a major simplification arises where *dmin* is simply the minimum Hamming weight computed over all non-zero code vectors in the code class.

#### **3.1 Application of the GFFT to Reed-Solomon codes**

The GFFT and the convolution theorem have been applied in the field of error control coding for the construction of a class of linear codes known as Reed-Solomon codes (Blahut (2003); Wicker (1994)). The algorithm for encoding a message vector *μ* over $GF(p^m)$ of length *k* is quite straightforward. To be able to correct up to *t* errors, create a vector of length *n* by appending *μ* with 2*t* consecutive zeros. The code vector *C* is then derived by computing the inverse GFFT of the appended construction. One approach to proving that this construction is capable of correcting up to *t* errors involves applying the GFFT convolution theorem. Specifically, given a code vector *C*, a *locator* vector Λ must be defined such that $C\_j \Lambda\_j = 0$ for all *j* = 0, ··· , *n* − 1. Letting *c* and *λ* denote the GFFT of *C* and Λ, the convolution theorem implies *c* ∗ *λ* = 0. Based upon the convolution approach, the conclusion can be reached that the inverse GFFT construction leads to Reed-Solomon codes capable of correcting up to *t* errors in the code vector (Blahut (2003); Wicker (1994)).
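A sketch of this construction over *GF*(7) (again with the illustrative choices α = 3 and *n* = 6): appending 2*t* = 2 zeros to every length-4 message and inverse transforming yields a linear code whose minimum weight matches *dmin* = *n* − *k* + 1.

```python
from itertools import product

p, alpha, n = 7, 3, 6   # GF(7) with alpha of order 6 (illustrative values)
k, t = 4, 1             # message length k; designed to correct t = 1 error

def igfft(V):
    """Inverse GFFT over GF(p)."""
    n_inv, a_inv = pow(n, -1, p), pow(alpha, -1, p)
    return [n_inv * sum(pow(a_inv, i * j, p) * V[j] for j in range(n)) % p
            for i in range(n)]

weights = []
for mu in product(range(p), repeat=k):
    # append 2t consecutive zeros, then inverse transform to get C
    C = igfft(list(mu) + [0] * (2 * t))
    if any(C):
        weights.append(sum(1 for c in C if c != 0))

# the code is linear, so d_min equals the minimum non-zero weight
assert min(weights) == n - k + 1    # a [6, 4, 3] Reed-Solomon code
```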

The key feature of the GFFT approach to constructing Reed-Solomon codes described above is that restrictions are placed on the position and the number of zeros appended to the message vector. To summarize:

i. Addition of zeros to the message vector *μ* of length *k* is performed at prescribed locations.
ii. The resulting vector is then inverse transformed in order to compute the code vector *C*.
iii. The error correcting properties of this code can be demonstrated by applying the convolution theorem.


In this work, one of our goals is to demonstrate that, given other linear transformations inducing a convolution theorem, the above steps can be generalized to other classes of codes. As we shall see, the key is to define the transform and the structure of how zeros are introduced into the message vector.

#### **4. Pascal codes**

#### **4.1 The Pascal matrix over finite fields**

Let us now focus our attention on the case of *GF*(*p*) where *p* is prime. Our starting point will be:

**Definition 4.1.** *Let p be a prime number; then the* ij*th entry of a* $p^m \times p^m$ m*th order Pascal matrix* $P\_{p^m}$ *over GF*(*p*) *is defined as*

$$\begin{aligned} p\_{ij} &= (j!)((j-i)!i!)^{-1} \mod p \\ &= \binom{j}{i} \mod p \end{aligned} \tag{7}$$

*for* $i, j = 0, 1, \dots, p^m - 1$ *and, by convention, if* $i > j$*, then* $p\_{ij} = 0$*.*

In other words, $P\_{p^m}$ is an upper triangular matrix whose non-zero entries are the elements of Pascal's triangle taken mod *p*. For the purposes of this work, it is useful to observe that $P\_{p^m}$ also has a Kronecker product description (Sakk & Wicker (2003)):

$$P\_{p^m} = P\_p \otimes P\_{p^{m-1}} \mod p \tag{8}$$

where *Pp* is a 1*st* order Pascal matrix.
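Equation (8) is easy to verify numerically; the sketch below builds $P\_{p^m}$ both directly from Definition 4.1 and as a Kronecker product, for the two cases used in the examples that follow.

```python
from math import comb

def pascal(n, p):
    """Definition 4.1: p_ij = C(j, i) mod p, upper triangular."""
    return [[comb(j, i) % p if j >= i else 0 for j in range(n)]
            for i in range(n)]

def kron_mod(A, B, p):
    """Kronecker product A (x) B with entries reduced mod p."""
    rb, cb = len(B), len(B[0])
    return [[(A[i // rb][j // cb] * B[i % rb][j % cb]) % p
             for j in range(len(A[0]) * cb)]
            for i in range(len(A) * rb)]

# Equation (8): P_{2^3} = P_2 (x) P_2 (x) P_2 mod 2, P_{3^2} = P_3 (x) P_3 mod 3
P2, P3 = pascal(2, 2), pascal(3, 3)
assert kron_mod(P2, kron_mod(P2, P2, 2), 2) == pascal(8, 2)
assert kron_mod(P3, P3, 3) == pascal(9, 3)
```

The agreement is a consequence of Lucas' theorem, which reduces a binomial coefficient mod *p* to a digit-by-digit product of binomial coefficients of the *p*-ary expansions.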


**Example 4.2.** *Consider the binary case where p* = 2 *and m* = 3*. Equation (8) gives*

$$P\_{2^3} = \begin{bmatrix} 1&1&1&1&1&1&1&1 \\ 0&1&0&1&0&1&0&1 \\ 0&0&1&1&0&0&1&1 \\ 0&0&0&1&0&0&0&1 \\ 0&0&0&0&1&1&1&1 \\ 0&0&0&0&0&1&0&1 \\ 0&0&0&0&0&0&1&1 \\ 0&0&0&0&0&0&0&1 \end{bmatrix}.$$

**Example 4.3.** *Consider the ternary case where p* = 3 *and m* = 2*. Equation (8) gives*

$$P\_{3^2} = \begin{bmatrix} 1&1&1&1&1&1&1&1&1 \\ 0&1&2&0&1&2&0&1&2 \\ 0&0&1&0&0&1&0&0&1 \\ 0&0&0&1&1&1&2&2&2 \\ 0&0&0&0&1&2&0&2&1 \\ 0&0&0&0&0&1&0&0&2 \\ 0&0&0&0&0&0&1&1&1 \\ 0&0&0&0&0&0&0&1&2 \\ 0&0&0&0&0&0&0&0&1 \end{bmatrix}.$$

#### **4.2 The inverse of the Pascal matrix**

As Section 5 will require understanding $Q\_{p^m} \equiv P\_{p^m}^{-1}$, we introduce the following.

**Observation 4.4.** *Let p be prime and let* $Q\_p$ *be the p* × *p matrix defined by*

$$q\_{ij} = \begin{cases} (-1)^{j-i} \binom{j}{i} \mod p & \text{if } j \ge i, \\ 0 & \text{otherwise} \end{cases} \qquad \text{for } i, j = 0, 1, \dots, p \text{-1.} \tag{9}$$

*Then* $Q\_p = P\_p^{-1} \bmod p$*.*

This result easily follows from the integer case (Call & Velleman (1993); Heller (1963)). Furthermore, it has been demonstrated that (Sakk (2002)):

**Observation 4.5.** *If p is prime and Pp is a* 1*st order Pascal matrix over GF*(*p*)*, then*

$$P\_p^p \bmod p = I\_p \tag{10}$$

*where* $I\_p$ *is the p* × *p identity matrix.*

Hence, it easily follows that

**Corollary 4.6.** *If p is prime and Pp is a Pascal matrix over GF*(*p*)*, then*

$$Q\_p = P\_p^{-1} \bmod p = P\_p^{p-1} \bmod p. \tag{11}$$


**Example 4.7.** *A Pascal matrix over GF*(5) *and its inverse:*

$$P\_p = \begin{bmatrix} 1 \ 1 \ 1 \ 1 \ 1 \ 1 \\ 0 \ 1 \ 2 \ 3 \ 4 \\ 0 \ 0 \ 1 \ 3 \ 1 \\ 0 \ 0 \ 0 \ 1 \ 4 \\ 0 \ 0 \ 0 \ 0 \ 1 \end{bmatrix}, \quad Q\_p = P\_p^4 = \begin{bmatrix} 1 \ 4 \ 1 \ 4 \ 1 \\ 0 \ 1 \ 3 \ 3 \ 1 \\ 0 \ 0 \ 1 \ 2 \ 1 \\ 0 \ 0 \ 0 \ 1 \ 1 \\ 0 \ 0 \ 0 \ 0 \ 1 \end{bmatrix}.$$
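The *GF*(5) example can be checked directly against Equations (9) and (11); a quick sketch:

```python
from math import comb

p = 5
# P_p from Definition 4.1 and Q_p from Equation (9)
P = [[comb(j, i) % p if j >= i else 0 for j in range(p)] for i in range(p)]
Q = [[((-1) ** (j - i) * comb(j, i)) % p if j >= i else 0 for j in range(p)]
     for i in range(p)]

def matmul_mod(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(p)) % p for j in range(p)]
            for i in range(p)]

I = [[int(i == j) for j in range(p)] for i in range(p)]
P4 = matmul_mod(matmul_mod(P, P), matmul_mod(P, P))   # P^4 = P^(p-1)

assert matmul_mod(P, Q) == I   # Observation 4.4: Q_p = P_p^(-1) mod p
assert P4 == Q                 # Corollary 4.6: Q_p = P_p^(p-1) mod p
assert P[1] == [0, 1, 2, 3, 4] and Q[0] == [1, 4, 1, 4, 1]  # Example 4.7
```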

Based upon Equation (8), it should be clear that

$$Q\_{p^m} = Q\_p \otimes Q\_{p^{m-1}} \bmod p. \tag{12}$$

Finally, based upon Equation (10), it also follows that, for the *mth* order case,

$$P\_{p^m}^p \bmod p = I\_{p^m} \tag{13}$$

where $I\_{p^m}$ is the $p^m \times p^m$ identity matrix. In a manner similar to the *m* = 1 case, this characterization provides a path to computing the *m*th order inverse

$$Q\_{p^m} \equiv P\_{p^m}^{-1} \bmod p = P\_{p^m}^{p-1} \bmod p. \tag{14}$$
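Equations (13) and (14) can likewise be checked for a small *m*th order case; the sketch below uses *p* = 3, *m* = 2.

```python
from math import comb

p, m = 3, 2
n = p ** m
# P_{3^2} from Definition 4.1
P = [[comb(j, i) % p if j >= i else 0 for j in range(n)] for i in range(n)]

def matmul_mod(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

I = [[int(i == j) for j in range(n)] for i in range(n)]
Q = matmul_mod(P, P)           # Equation (14): Q_{p^m} = P^(p-1), here p-1 = 2
assert matmul_mod(P, Q) == I   # equivalently, Equation (13): P^p = I mod p
```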

#### **4.3 Error control codes designed from Pascal matrices**

In a manner similar to the GFFT approach to Reed-Solomon codes summarized in Section 3.1, it has been pointed out that *Ppm* can also be used to transform message vectors with the appropriate coordinates set equal to zero (Sakk & Wicker (2003)). More precisely, we have the following:

**Definition 4.8.** *Consider an mth order Pascal matrix over GF*(*p*) *and let r be an integer such that* 0 ≤ *r* ≤ *m*(*p* − 1)*. Also, consider the p-ary expansion of an index*

$$i = i\_0 p^0 + i\_1 p^1 + \dots + i\_{m-1} p^{m-1}$$

*where* $0 \le i\_j \le p - 1$ *for* $0 \le j \le m - 1$*. A codeword C for an* r*th order Pascal code of length* $p^m$*, denoted by* $P\_p(r, m)$*, is generated by*

$$
\mathcal{C} = \mu P\_{p^m} \tag{15}
$$

*where*

$$
\mu = \left(\mu\_0 \,\,\mu\_1 \,\,\cdots \,\,\mu\_{p^m-1}\right),
$$

*is a message vector of length* $p^m$ *such that* $\mu\_i \in GF(p)$*,*

$$\begin{cases} \mu\_i = 0 & \text{if} \quad w\_p(i) > r \\ \mu\_i \neq 0 & \text{if} \quad w\_p(i) \le r \end{cases} \tag{16}$$

*and*


$$w\_p(i) \equiv \sum\_{j=0}^{m-1} i\_j.$$

Error control codes derived from the *m*th order Pascal matrix over *GF*(2) (i.e. binary data) have been related (Forney (1988); Massey et al. (1973)) to a class of codes known as *r*th order binary Reed-Muller codes *RM*(*r*, *m*) of length $2^m$ (MacWilliams & Sloane (1977); Wicker (1994)). In addition, it has been further demonstrated (Sakk (2002)) that $P\_2(r, m)$ codes over *GF*(2) are equivalent to *RM*(*r*, *m*) codes with minimum distance $d\_{min} = 2^{m-r}$. These observations have been extended where it has been demonstrated that $P\_p(r, m)$ codes over *GF*(*p*) are equivalent to generalized Reed-Muller (GRM) codes (Sakk (2002)).

To place this class of codes in the same context as that outlined in Section 3.1, we must show how to introduce zeros into the message vector, apply the Pascal matrix as the linear transformation and, based upon this transformation, introduce a convolution theorem. From the definition above, a given code is specified by choosing *p*, *m* and a value of $0 \le r \le m(p-1)$. The code vector length then becomes $n = p^m$; and, for this class of codes, a given value of *r* defines the length *k* of the message. The rest of the *n* − *k* components of *μ* must be set to zero in a systematic way that leads to the minimum distance property of the code.

**Example 4.9.** *Consider* $P\_{2^3}$ *in Example 4.2 (hence,* $n = 2^3 = 8$*) and a message vector* $\mu = (\mu\_0, \mu\_1, \dots, \mu\_7)$*, and let s be the number of consecutive zeros in the vector μ for a given value of r:*

$$\begin{array}{llll} r = 0 \ (d\_{min} = 8): & s = 7 & \mu = (\mu\_0, 0, 0, 0, 0, 0, 0, 0) & (k = 1) \\ r = 1 \ (d\_{min} = 4): & s = 3 & \mu = (\mu\_0, \mu\_1, \mu\_2, 0, \mu\_4, 0, 0, 0) & (k = 4) \\ r = 2 \ (d\_{min} = 2): & s = 1 & \mu = (\mu\_0, \mu\_1, \mu\_2, \mu\_3, \mu\_4, \mu\_5, \mu\_6, 0) & (k = 7) \\ r = 3 \ (d\_{min} = 1): & s = 0 & \mu = (\mu\_0, \mu\_1, \mu\_2, \mu\_3, \mu\_4, \mu\_5, \mu\_6, \mu\_7) & (k = 8) \end{array}$$
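The zero patterns in Example 4.9 follow mechanically from the weight condition in Equation (16); a short sketch:

```python
def w_p(i, p, m):
    """Digit sum of the p-ary expansion of i (Definition 4.8)."""
    return sum((i // p ** j) % p for j in range(m))

p, m = 2, 3
# for each r, the indices i with w_p(i) <= r may carry message symbols
free = {r: [i for i in range(p ** m) if w_p(i, p, m) <= r] for r in range(4)}

# k is the number of free positions for each r, as in Example 4.9
assert [len(free[r]) for r in range(4)] == [1, 4, 7, 8]
assert free[1] == [0, 1, 2, 4]   # r = 1 leaves mu_0, mu_1, mu_2, mu_4 free
```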

**Example 4.10.** *Consider* $P\_{3^2}$ *in Example 4.3 (hence,* $n = 3^2 = 9$*) and a message vector* $\mu = (\mu\_0, \mu\_1, \dots, \mu\_8)$*, and let s be the number of consecutive zeros in the vector μ for a given value of r:*

$$\begin{array}{llll} r = 0 \ (d\_{min} = 9): & s = 8 & \mu = (\mu\_0, 0, 0, 0, 0, 0, 0, 0, 0) & (k = 1) \\ r = 1 \ (d\_{min} = 6): & s = 5 & \mu = (\mu\_0, \mu\_1, 0, \mu\_3, 0, 0, 0, 0, 0) & (k = 3) \\ r = 2 \ (d\_{min} = 3): & s = 2 & \mu = (\mu\_0, \mu\_1, \mu\_2, \mu\_3, \mu\_4, 0, \mu\_6, 0, 0) & (k = 6) \\ r = 3 \ (d\_{min} = 2): & s = 1 & \mu = (\mu\_0, \mu\_1, \mu\_2, \mu\_3, \mu\_4, \mu\_5, \mu\_6, \mu\_7, 0) & (k = 8) \\ r = 4 \ (d\_{min} = 1): & s = 0 & \mu = (\mu\_0, \mu\_1, \mu\_2, \mu\_3, \mu\_4, \mu\_5, \mu\_6, \mu\_7, \mu\_8) & (k = 9) \end{array}$$

In the above examples, *dmin* is shown in parentheses for each value of *r*; furthermore, observe that *dmin* = *s* + 1. Recalling for a moment the GFFT approach to Reed-Solomon code design, the minimum distance of a code where the message vector has *n* − *k* consecutive zeros can be shown to be *dmin* = *n* − *k* + 1 (Blahut (2003); Wicker (1994)). It is apparent that, by using a Pascal matrix as the transform, a result similar to that of the GFFT can be ascertained. The major difference is that, for Reed-Solomon codes, the string of zeros must occur at the end of the message vector before applying the GFFT to create *C*. For $P\_p(r, m)$, in addition to the string of consecutive zeros, based upon the structure of $P\_{p^m}$, zeros must also be dispersed in other positions within *μ* to form code vectors $C = \mu P\_{p^m}$.
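The claimed minimum distance $d\_{min} = 2^{m-r}$ can be confirmed by brute force for the small case $P\_2(1, 3)$ (equivalently *RM*(1, 3)), since the code is linear and small enough to enumerate completely:

```python
from math import comb
from itertools import product

p, m, r = 2, 3, 1
n = p ** m
P = [[comb(j, i) % p if j >= i else 0 for j in range(n)] for i in range(n)]
# free positions: indices whose binary digit sum is at most r (Equation (16))
free = [i for i in range(n) if sum((i >> b) & 1 for b in range(m)) <= r]

weights = []
for msg in product(range(p), repeat=len(free)):
    mu = [0] * n
    for pos, val in zip(free, msg):
        mu[pos] = val
    C = [sum(mu[i] * P[i][j] for i in range(n)) % p for j in range(n)]
    if any(C):
        weights.append(sum(C))   # over GF(2), weight = number of ones

assert min(weights) == 2 ** (m - r)   # d_min = 4 for P_2(1, 3)
```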


#### **5. Extensions of the Fourier convolution theorem over finite fields**

The convolution operation involves relating the componentwise product of two vectors in one domain to the convolution of their transforms (Blahut & Burrus (1991)). Many linear transforms have well-defined convolution operations. For instance, the Hadamard transform yields the so-called logical or 'dyadic' convolution operation (Ahmed et al. (1973); Dodd (2003); Robinson (1972)). In this chapter, we develop extensions of the convolution theorem that can be used to reveal useful properties of other classes of codes. As an example, we demonstrate how the GFFT approach can be applied to describe generalized Reed-Muller codes (Blahut (2003)).

To begin the formulation, we consider the componentwise product $\gamma\_j = \mu\_j \lambda\_j$ of two vectors $\mu = (\mu\_0 \dots \mu\_{n-1})$ and $\lambda = (\lambda\_0 \dots \lambda\_{n-1})$. Furthermore, we consider matrix transforms such that $C \equiv \mu P\_{p^m}$ and $\Lambda \equiv \lambda P\_{p^m}$ or, equivalently, $\mu = C Q\_{p^m}$ and $\lambda = \Lambda Q\_{p^m}$ where $(P\_{p^m})^{-1} \equiv Q\_{p^m}$ (here, *μ* denotes the message vector and *C* denotes the code vector). We demonstrate a formulation analogous to the convolution operation that describes $\gamma = \Gamma Q\_{p^m}$:

$$\begin{aligned} \Gamma\_i &= \sum\_{j=0}^{n-1} \gamma\_j p\_{ji} \mod p \qquad i = 0, \ 1, \ \dots, n-1 \\ &= \sum\_{j=0}^{n-1} (\mu\_j \lambda\_j) p\_{ji} \mod p \\ &= \sum\_{j=0}^{n-1} \mu\_j (\sum\_{k=0}^{n-1} \Lambda\_k q\_{kj}) p\_{ji} \mod p \\ &= \sum\_{k=0}^{n-1} \Lambda\_k \sum\_{j=0}^{n-1} \mu\_j q\_{kj} p\_{ji} \mod p \\ &\equiv \sum\_{k=0}^{n-1} \Lambda\_k T\_{i,k} \mod p \qquad i = 0, \ 1, \dots, n-1 \end{aligned} \tag{17}$$

where $n = p^m$.

Notice that if we are dealing with familiar spectral transforms such as the Fourier or the Hadamard transform (where *P* denotes the forward transform and *Q* denotes the inverse transform), $T\_{i,k}$ takes on a simple form. This is because the product $q\_{kj} p\_{ji}$ in $\sum\_{j=0}^{n-1} \mu\_j q\_{kj} p\_{ji}$ reduces to a term that enables us to take the transform of *μ* as $C\_{f(i,k)} = \frac{1}{n} \sum\_{j=0}^{n-1} \mu\_j p\_{j, f(i,k)}$. For the case of the Fourier transform, $f(i,k) = i - k$ and $T\_{i,k} = C\_{(i-k)}$; as expected, one ends up with the convolution theorem (Blahut (2003); Wicker (1994)). In the case of a Hadamard transform, $f(i,k) = i \oplus k$ (where ⊕ denotes bit-by-bit addition of the binary expansions of *i* and *k*) and $T\_{i,k} = C\_{(i \oplus k)}$. Here, the bit-by-bit addition ⊕ of the binary expansions of *i* and *k* over *GF*(2) would result in the dyadic convolution (Ahmed et al. (1973); Robinson (1972)).

For the codes in this presentation, the *qkj pji* term in the above summation leads to a convolution theorem that depends on the matrix *Ppm* . Furthermore, this theorem can also be applied to demonstrate how to decode *C* to recover the message vector *μ*. In Equation (17) *qkj* = (−1)*j*−*k*( *j k* ) mod *p* and *pji* = ( *i j* ) mod *p*; therefore, the product *qkj pji* will not lead to an expression that readily reduces the inner summation to a single term. To see why, let's write out *Ti*,*<sup>k</sup>* as follows:

$$\begin{split} T\_{i} &= \left( T\_{i,0} \, T\_{i,1} \ldots T\_{i,n-1} \right) \\ &= \left( \mu\_{0} \, \mu\_{1} \ldots \mu\_{n-1} \right) \begin{bmatrix} q\_{00} p\_{0i} & q\_{10} p\_{0i} & \cdots & q\_{(n-1)0} p\_{0i} \\ q\_{01} p\_{1i} & q\_{11} p\_{1i} & \cdots & q\_{(n-1)1} p\_{1i} \\ \vdots & \vdots & \cdots & \vdots \\ q\_{0(n-1)} p\_{(n-1)i} \, q\_{1(n-1)} p\_{(n-1)i} & \cdots & q\_{(n-1)(n-1)} p\_{(n-1)i} \end{bmatrix} \\ &= \left( \mu\_{0} \, \mu\_{1} \ldots \mu\_{n-1} \right) \begin{bmatrix} p\_{0i} \\ p\_{1i} \\ \vdots \\ p\_{(n-1)i} \end{bmatrix} \begin{bmatrix} q\_{00} & q\_{10} & \cdots & q\_{(n-1)0} \\ q\_{01} & q\_{11} & \cdots & q\_{(n-1)1} \\ \vdots & \vdots & \cdots & \vdots \\ q\_{0(n-1)} \, q\_{1(n-1)} & \cdots & q\_{(n-1)(n-1)} \end{bmatrix} \\ &\equiv \mu D\_{i} Q\_{p^{n}}^{\mathsf{T}} \end{split} \tag{18}$$

where *T* denotes the matrix transpose.

**Observation 5.1.** *The components of the vector Ti* = (*Ti*,0 *Ti*,1 ... *Ti*,*n*−1) *can be written as a linear combination of the components of C* = (*C*<sup>0</sup> ... *Cn*−1)*.*

#### **Proof:** Let

(17)

*<sup>j</sup>*=<sup>0</sup> *μjqkj pji*

*<sup>j</sup>*=<sup>0</sup> *μ<sup>j</sup> pj*, *<sup>f</sup>*(*i*,*k*)).

*<sup>n</sup>* (∑*n*−<sup>1</sup>

8 Will-be-set-by-IN-TECH

The convolution operation involves relating the componentwise product of two vectors in one domain to the convolution of their transforms (Blahut & Burrus (1991)). Many linear transforms have well-defined convolution operations. For instance, the Hadamard transform yields the so-called logical or 'dyadic' convolution operation (Ahmed et al. (1973); Dodd (2003); Robinson (1972)). In this chapter, we develop extensions of the convolution theorem that can be used to reveal useful properties of other classes of codes. As an example, we demonstrate how the GFFT approach can be applied to describe generalized Reed-Muller

To begin the formulation, we consider the componentwise product *γ<sup>j</sup>* = *μjλ<sup>j</sup>* of two vectors *<sup>μ</sup>* = (*μ*<sup>0</sup> ... *<sup>μ</sup>n*−1) and *<sup>λ</sup>* = (*λ*<sup>0</sup> ... *<sup>λ</sup>n*−1). Furthermore, we consider matrix transforms such that *<sup>C</sup>* <sup>≡</sup> *<sup>μ</sup>Ppm* and <sup>Λ</sup> <sup>≡</sup> *<sup>λ</sup>Ppm* or, equivalently, *<sup>μ</sup>* <sup>=</sup> *CQpm* and *<sup>λ</sup>* <sup>=</sup> <sup>Λ</sup>*Qpm* where (*Ppm* )−<sup>1</sup> <sup>≡</sup> *Qpm* . (here, '*μ*' denotes the message vector and '*C*' denotes the code vector). We demonstrate a

*γ<sup>j</sup> pji* mod *p i* = 0, 1, ..., *n* − 1

Λ*kqkj*)*pji* mod *p*

*μjqkj pji*) mod *p*

Notice that if we are dealing with familiar spectral transforms such as the Fourier or the Hadamard transform (where *P* denotes the forward transform and *Q* denotes the inverse

For the case of the Fourier transform *<sup>f</sup>*(*i*, *<sup>k</sup>*) = *<sup>i</sup>* <sup>−</sup> *<sup>k</sup>* and *Ti*,*<sup>k</sup>* <sup>=</sup> *<sup>C</sup>*(*i*−*k*); as expected, one ends up with the convolution theorem (Blahut (2003); Wicker (1994)). In the case of a Hadamard transform, *f*(*i*, *k*) = *i* ⊕ *k* (where ⊕ denotes bit-by-bit addition of the binary expansions of *i* and *<sup>k</sup>*) and *Ti*,*<sup>k</sup>* <sup>=</sup> *<sup>C</sup>*(*i*⊕*k*). Here, the bit-by-bit addition <sup>⊕</sup> of the binary expansions of *<sup>i</sup>* and *<sup>k</sup>* over GF(2) would result in the dyadic convolution (Ahmed et al. (1973); Robinson (1972)).

For the codes in this presentation, the *qkj pji* term in the above summation leads to a convolution theorem that depends on the matrix *Ppm* . Furthermore, this theorem can also be applied to demonstrate how to decode *C* to recover the message vector *μ*. In Equation (17)

transform), *Ti*,*<sup>k</sup>* takes on a simple form. This is because the product *qkj pji* in <sup>∑</sup>*n*−<sup>1</sup>

reduces to a term that enables us to take the transform of *<sup>μ</sup>* as *Cf*(*i*,*k*) = <sup>1</sup>

Λ*kTi*,*<sup>k</sup>* mod *p i* = 0, 1, ..., *n* − 1

**5. Extensions of the Fourier convolution theorem over finite fields**

formulation analogous to the convolution operation that describes *γ* = Γ*Qpm* :

(*μjλj*)*pji* mod *p*

Γ*<sup>i</sup>* =

= *n*−1 ∑ *j*=0

= *n*−1 ∑ *j*=0 *μj*( *n*−1 ∑ *k*=0

= *n*−1 ∑ *k*=0 Λ*k*( *n*−1 ∑ *j*=0

≡ *n*−1 ∑ *k*=0

*n*−1 ∑ *j*=0

codes (Blahut (2003)).

where *n* = *pm*.

$$M_i \equiv D_i Q_{p^m}^T \tag{19}$$

where $D_i$ is defined in Equation (18) and

$$\begin{aligned} A\_i \equiv Q\_{p^m} M\_i &= Q\_{p^m} D\_i Q\_{p^m}^T \\ \Rightarrow M\_i &= P\_{p^m} A\_i. \end{aligned} \tag{20}$$

Then,

$$T_i = \mu M_i = \mu P_{p^m} A_i = C A_i. \tag{21}$$

Combining this result with Equation (17) we conclude

$$\begin{aligned} \Gamma\_i &= \sum\_{k=0}^{n-1} \Lambda\_k T\_{i,k} \mod p \qquad i = 0, \ 1, \ldots, n-1 \\ &= \sum\_{k=0}^{n-1} \Lambda\_k (CA\_i)\_k \mod p \qquad i = 0, \ 1, \ldots, n-1 \end{aligned} \tag{22}$$

So, instead of $T_{i,k}$ reducing to one single component of the vector $C$ (as one might expect from a typical convolution operation), the Pascal convolution requires a linear combination of the components of $C$. Although this operation is slightly more complicated than the Fourier approach, the identity in Equation (8) does induce a simplification.
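The chain from Equation (17) to Equation (22) can be exercised numerically. The sketch below is our own helper code, not part of the original text; it builds the Pascal transform pair over $GF(p)$ using the conventions above ($p_{ji} = \binom{i}{j} \bmod p$, row-vector transforms $C = \mu P_{p^m}$) and checks that the transform of the componentwise product $\mu_j \lambda_j$ equals $\sum_k \Lambda_k (CA_i)_k$:

```python
import numpy as np
from math import comb

def pascal_pair(p, m):
    """Forward/inverse Pascal transform over GF(p): C = mu @ P, mu = C @ Q."""
    n = p ** m
    # p_{ji} = binom(i, j) mod p  (row j, column i)
    P = np.array([[comb(i, j) % p for i in range(n)] for j in range(n)])
    # q_{ij} = (-1)^(j-i) binom(j, i) mod p  (row i, column j)
    Q = np.array([[(-1) ** (j - i) * comb(j, i) % p for j in range(n)] for i in range(n)])
    assert np.array_equal(P @ Q % p, np.eye(n, dtype=int))  # Q = P^{-1} over GF(p)
    return P, Q

def check_pascal_convolution(p, m, mu, lam):
    n = p ** m
    P, Q = pascal_pair(p, m)
    C, Lam = mu @ P % p, lam @ P % p       # transforms of mu and lambda
    Gamma = (mu * lam) @ P % p             # transform of the componentwise product
    for i in range(n):
        D_i = np.diag(P[:, i])             # D_i = diag(p_{0i}, ..., p_{(n-1)i}), Eq. (18)
        A_i = Q @ D_i @ Q.T % p            # A_i = Q D_i Q^T, Eq. (20)
        T_i = C @ A_i % p                  # T_i = C A_i, Eq. (21)
        assert Gamma[i] == Lam @ T_i % p   # Eq. (22): Gamma_i = sum_k Lam_k T_{i,k}
    return True

print(check_pascal_convolution(2, 2, np.array([1, 0, 1, 1]), np.array([0, 1, 1, 0])))
```

For $p = 2$, $m = 2$ this reproduces the matrices of Example 5.3 below; the same check passes for other small primes.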

**Observation 5.2.** *(Symbolic Computation of the Pascal Convolution) For the 1st-order case where $n = p$ and $i = 0, \dots, p-1$, using Equation (19) let $\hat{M}_i \equiv M_i$, using Equation (18) let $\hat{D}_i \equiv D_i$, and let $\hat{A}_i \equiv Q_p \hat{M}_i$. Then, for any $0 \le j \le p^m - 1$ where $j = j_0 p^0 + j_1 p^1 + \dots + j_{m-1} p^{m-1}$ and $A_j = Q_{p^m} M_j$,*

$$A_j = \hat{A}_{j_{m-1}} \otimes \dots \otimes \hat{A}_{j_1} \otimes \hat{A}_{j_0} \tag{23}$$

*where $M_j \equiv D_j Q_{p^m}^T$.*

**Proof:** The statement is clearly true for the first order case $m = 1$ since $j = j_0$. By induction, let $j = j_0 p^0 + j_1 p^1 + \dots + j_{m-1} p^{m-1}$ and assume that

$$D_j = \hat{D}_{j_{m-1}} \otimes \dots \otimes \hat{D}_{j_1} \otimes \hat{D}_{j_0}$$

where $0 \le j_k \le p - 1$ for all $k = 0, \dots, m-1$. Consider any $j' = j_0 p^0 + \dots + j_{m-1} p^{m-1} + j_m p^m$ and apply Equation (18) along with Lucas' theorem to obtain the following intermediate result:

$$\begin{aligned} \hat{D}_{j_m} \otimes \hat{D}_{j_{m-1}} \otimes \dots \otimes \hat{D}_{j_0} &= \hat{D}_{j_m} \otimes D_j \\ &= \begin{bmatrix} \binom{j_m}{0} & & \\ & \ddots & \\ & & \binom{j_m}{p-1} \end{bmatrix} \otimes \begin{bmatrix} \binom{j}{0} & & \\ & \ddots & \\ & & \binom{j}{p^m-1} \end{bmatrix} \\ &= \begin{bmatrix} \binom{j'}{0} & & \\ & \ddots & \\ & & \binom{j'}{p^{m+1}-1} \end{bmatrix} \\ &= D_{j'} \end{aligned} \tag{24}$$

Therefore, $D_j = \hat{D}_{j_{m-1}} \otimes \dots \otimes \hat{D}_{j_1} \otimes \hat{D}_{j_0}$ is true. Next, successively apply the identity $(AC) \otimes (BD) = (A \otimes B)(C \otimes D)$ to obtain:

$$\begin{aligned} \hat{M}_{j_{m-1}} \otimes \dots \otimes \hat{M}_{j_1} \otimes \hat{M}_{j_0} &= (\hat{D}_{j_{m-1}} Q_p^T) \otimes \dots \otimes (\hat{D}_{j_1} Q_p^T) \otimes (\hat{D}_{j_0} Q_p^T) \\ &= (\hat{D}_{j_{m-1}} \otimes \dots \otimes \hat{D}_{j_1} \otimes \hat{D}_{j_0})(Q_p^T \otimes Q_p^T \otimes \dots \otimes Q_p^T) \\ &= D_j Q_{p^m}^T \\ &= M_j \end{aligned}$$

Finally, we arrive at the desired conclusion

$$\begin{aligned} \hat{A}_{j_{m-1}} \otimes \dots \otimes \hat{A}_{j_1} \otimes \hat{A}_{j_0} &= (Q_p \hat{M}_{j_{m-1}}) \otimes \dots \otimes (Q_p \hat{M}_{j_1}) \otimes (Q_p \hat{M}_{j_0}) \\ &= (Q_p \otimes Q_p \otimes \dots \otimes Q_p)(\hat{M}_{j_{m-1}} \otimes \dots \otimes \hat{M}_{j_1} \otimes \hat{M}_{j_0}) \\ &= Q_{p^m} M_j \\ &= A_j. \end{aligned}$$
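The Lucas-theorem step in the proof can be spot-checked numerically. This is our own helper sketch, assuming the definition $D_j = \mathrm{diag}\big(\binom{j}{0}, \dots, \binom{j}{n-1}\big) \bmod p$ used in Equation (24):

```python
import numpy as np
from math import comb

def lucas_kron_check(p, m):
    """Check D_j = Dhat_{j_{m-1}} x ... x Dhat_{j_0} mod p (Kronecker products, Eq. 24)."""
    n = p ** m
    # First-order seeds: Dhat_i = diag(binom(i,0), ..., binom(i,p-1)) mod p
    Dhat = [np.diag([comb(i, k) % p for k in range(p)]) for i in range(p)]
    for j in range(n):
        digits = [(j // p ** l) % p for l in range(m - 1, -1, -1)]  # j_{m-1}, ..., j_0
        D_kron = Dhat[digits[0]]
        for d in digits[1:]:
            D_kron = np.kron(D_kron, Dhat[d])
        D_direct = np.diag([comb(j, k) % p for k in range(n)])
        # Lucas' theorem: binom(j, k) = prod_l binom(j_l, k_l) mod p
        assert np.array_equal(D_kron % p, D_direct)
    return True

print(lucas_kron_check(2, 2), lucas_kron_check(3, 2))
```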

Observation 5.2 tells us that, in order to calculate $T_j = CA_j$ for arbitrary $n = p^m$, one need only calculate $\hat{A}_i$ for $i = 0, \dots, p-1$ and then take successive Kronecker products. The initial set of $\hat{A}_i$ for $i = 0, \dots, p-1$ can easily be calculated by referring back to Equation (20), where $\hat{A}_i = Q_p \hat{M}_i = Q_p \hat{D}_i Q_p^T$.

An interesting property concerning the *Ai* is that the sum

$$\sum_{i=0}^{p^m-1} A_i = \sum_{i=0}^{p^m-1} Q_{p^m} D_i Q_{p^m}^T$$

(where the sum is taken mod $p$) is a matrix of ones. This follows from two observations. First, from the definition of $D_i$ in Equation (18), $\sum_{i=0}^{p^m-1} D_i$ is a matrix whose $(p^m - 1,\ p^m - 1)$ entry is one and all other entries are zero. Second, it can also be demonstrated that the last column of $Q_{p^m}$ must be a column of ones. Therefore, $Q_{p^m} \big(\sum_{i=0}^{p^m-1} D_i\big) Q_{p^m}^T = \sum_{i=0}^{p^m-1} A_i$ is a matrix of ones.
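This all-ones property is easy to confirm for small parameters; a sketch under the same matrix conventions (the helper below is ours, not the authors' code):

```python
import numpy as np
from math import comb

def sum_of_A_is_all_ones(p, m):
    """Check that sum_i A_i (mod p) is the all-ones matrix, as claimed above."""
    n = p ** m
    # p_{ji} = binom(i, j) mod p and q_{ij} = (-1)^(j-i) binom(j, i) mod p
    P = np.array([[comb(i, j) % p for i in range(n)] for j in range(n)])
    Q = np.array([[(-1) ** (j - i) * comb(j, i) % p for j in range(n)] for i in range(n)])
    S = sum(Q @ np.diag(P[:, i]) @ Q.T for i in range(n)) % p
    return np.array_equal(S, np.ones((n, n), dtype=int))

print(sum_of_A_is_all_ones(2, 2), sum_of_A_is_all_ones(3, 2))
```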

**Example 5.3.** *For p* = 2*, the 1st order case n* = *p gives i* = 0, 1*; hence, over GF*(2)*,*

$$P_p = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \qquad Q_p = P_p = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix};$$

*we calculate*


$$\begin{aligned} \hat{A}_0 &= Q_p \hat{D}_0 Q_p^T = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \\ \hat{A}_1 &= Q_p \hat{D}_1 Q_p^T = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}. \end{aligned}$$

*From Observation 5.2, to obtain the Aj for n* = *p*<sup>2</sup> *and j* = 0, 1, 2, 3*, one need only take successive Kronecker products as:*

$$A_0 = \hat{A}_0 \otimes \hat{A}_0 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad A_1 = \hat{A}_0 \otimes \hat{A}_1 = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},$$

$$A_2 = \hat{A}_1 \otimes \hat{A}_0 = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad A_3 = \hat{A}_1 \otimes \hat{A}_1 = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix}.$$

*As expected, the $A_i$ are symmetric matrices. Also notice, as mentioned above, that $\sum_{i=0}^{p^m-1} A_i$ is a matrix of ones. For the case where $n = p^2$, let us now apply Observation 5.1 to calculate the Pascal convolution of the vectors $C = (C_0, C_1, C_2, C_3)$ and $\Lambda = (\Lambda_0, \Lambda_1, \Lambda_2, \Lambda_3)$. Using Equation (22), we have:*

$$\begin{array}{llll} \Gamma_0 = \Lambda_0 C_0 \ + & \Lambda_1(0) & + \ \Lambda_2(0) & + \ \Lambda_3(0) \\ \Gamma_1 = \Lambda_0 C_1 \ + & \Lambda_1(C_0 + C_1) & + \ \Lambda_2(0) & + \ \Lambda_3(0) \\ \Gamma_2 = \Lambda_0 C_2 \ + & \Lambda_1(0) & + \ \Lambda_2(C_0 + C_2) & + \ \Lambda_3(0) \\ \Gamma_3 = \Lambda_0 C_3 \ + & \Lambda_1(C_2 + C_3) & + \ \Lambda_2(C_1 + C_3) & + \ \Lambda_3(C_0 + C_1 + C_2 + C_3). \end{array} \tag{25}$$
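Example 5.3 can be reproduced programmatically. The sketch below is our own helper code; it rebuilds the seeds $\hat{A}_0, \hat{A}_1$, forms each $A_j$ both directly and via the Kronecker products of Equation (23), and confirms they agree and are symmetric:

```python
import numpy as np
from math import comb

def pascal(p, n):
    """Pascal transform pair over GF(p): C = mu @ P, mu = C @ Q."""
    P = np.array([[comb(i, j) % p for i in range(n)] for j in range(n)])
    Q = np.array([[(-1) ** (j - i) * comb(j, i) % p for j in range(n)] for i in range(n)])
    return P, Q

def example_5_3_check():
    p, n = 2, 4
    P2, Q2 = pascal(2, 2)
    P4, Q4 = pascal(2, 4)
    Ahat = [Q2 @ np.diag(P2[:, i]) @ Q2.T % p for i in range(2)]
    # The first-order seeds match Example 5.3
    assert np.array_equal(Ahat[0], [[1, 0], [0, 0]])
    assert np.array_equal(Ahat[1], [[0, 1], [1, 1]])
    for j in range(n):
        j1, j0 = divmod(j, 2)                        # j = j1*2 + j0
        A_kron = np.kron(Ahat[j1], Ahat[j0])         # Eq. (23)
        A_direct = Q4 @ np.diag(P4[:, j]) @ Q4.T % p
        assert np.array_equal(A_kron, A_direct)
        assert np.array_equal(A_direct, A_direct.T)  # the A_i are symmetric
    return True

print(example_5_3_check())
```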


To close this section, we draw some immediate conclusions from Equation (25):

• Because of the Kronecker product, a good deal of self-similar structure can be observed in the resulting vector $\Gamma$. For instance, the coefficients of the $\Lambda_i$ can be computed by iteration starting with the initial 'seed' generated by $\hat{A}_0$ and $\hat{A}_1$. As an example, the coefficient of $\Lambda_1$ in $\Gamma_1$ can be computed by adding the coefficient of $\Lambda_0$ in $\Gamma_0$ to the coefficient of $\Lambda_0$ in $\Gamma_1$. The coefficients of $\Lambda_2$ and $\Lambda_3$ in $\Gamma_2$ and $\Gamma_3$ can be computed by adding the coefficients of $\Lambda_0$ and $\Lambda_1$ in $\Gamma_0$ and $\Gamma_1$ to the coefficients of $\Lambda_0$ and $\Lambda_1$ in $\Gamma_2$ and $\Gamma_3$, and so on.

• Looking at the result columnwise, the set of coefficients associated with a given $\Lambda_i$ appear to be the checksums for an $R(r, 2)$ binary Reed-Muller code ((MacWilliams & Sloane, 1977, p. 385-388), (Wicker, 1994, p. 155-165)). As pointed out in the next section, although this observation is true for the binary case, an orthogonal set of checksums for $p > 2$ will not come about by this method. It is the dual of the Pascal convolution that will lead to the decoding of GRM codes.


#### **6. Majority logic decoding using Pascal convolution**

GRM codes fall into a larger category of codes known as Euclidean geometry codes (Blahut (2003); Lin & Costello (1983); MacWilliams & Sloane (1977); Wicker (1994)), for which it is well known that a technique called 'majority logic decoding' (MLD) can be used to recover the message vector. Based upon statements made in Section 4, it should be clear that Pascal codes are also majority logic decodable. However, the role played by the Pascal convolution in the decoding strategy is worthy of mention. As pointed out in the conclusions of Example 5.3, the checksums of an MLD scheme for GRM codes can be derived using the dual of the convolution relation derived above. We now demonstrate this observation more clearly.

Because of the similar forms of $P_{p^m}$ and $Q_{p^m}$, the dual convolution relation is easily derived from the inverse transform. Consider the componentwise product $\Gamma_j = C_j \Lambda_j$ of two vectors where $C = \mu P_{p^m}$ and $\Lambda = \lambda P_{p^m}$:

$$\begin{aligned} \gamma_i &= \sum_{j=0}^{n-1} \Gamma_j q_{ji} \mod p \qquad i = 0,\ 1,\ \dots, n-1 \\ &= \sum_{j=0}^{n-1} (C_j \Lambda_j) q_{ji} \mod p \\ &= \sum_{j=0}^{n-1} C_j \Big(\sum_{k=0}^{n-1} \lambda_k p_{kj}\Big) q_{ji} \mod p \\ &= \sum_{k=0}^{n-1} \lambda_k \Big(\sum_{j=0}^{n-1} C_j p_{kj} q_{ji}\Big) \mod p \\ &\equiv \sum_{k=0}^{n-1} \lambda_k s_{i,k} \mod p \qquad i = 0, 1, \dots, n-1 \end{aligned} \tag{26}$$

where $n = p^m$. Similar to Equation (18), one can also show that

$$\begin{aligned} s_i &= (s_{i,0}\ s_{i,1} \dots s_{i,n-1}) \\ &= C \Delta_i P_{p^m}^T \end{aligned} \tag{27}$$

which can also be written as


$$s_i = \mu P_{p^m} \Delta_i P_{p^m}^T \tag{28}$$

where $\Delta_i$ is a diagonal matrix with elements $(q_{0i}\ q_{1i} \dots q_{(n-1)i})$ along its diagonal. Furthermore, if we define

$$B\_i \equiv P\_{p^m} \Delta\_i P\_{p^m}^T \tag{29}$$

then results similar to Observations 5.1 and 5.2 can also be demonstrated. However, in proving the dual of Observation 5.2 there is one difference to be aware of. Since $q_{ji} = (-1)^{i-j}\binom{i}{j}$, the Kronecker product in the dual of Equation (24) will contain extra factors of $(-1)^{i-j}$. To achieve the equality $\Delta_j = \hat{\Delta}_{j_{m-1}} \otimes \dots \otimes \hat{\Delta}_{j_1} \otimes \hat{\Delta}_{j_0}$, where $j = j_0 p^0 + j_1 p^1 + \dots + j_{m-1} p^{m-1}$, the following identity will be required:

$$\begin{aligned} (-1)^k &= (-1)^{k\_0 p^0 + k\_1 p^1 + \dots + k\_{m-1} p^{m-1}} \\ &= (-1)^{k\_0} ((-1)^p)^{k\_1} ((-1)^{p^2})^{k\_2} \dots ((-1)^{p^{m-1}})^{k\_{m-1}} \\ &= (-1)^{\sum\_{l=0}^{m-1} k\_l} \end{aligned}$$

for any $0 \le k \le p^m - 1$, where we have applied $a^p = a$ for any $a \in GF(p)$. Then, following the proof of Observation 5.2, it is straightforward to show that for any $0 \le j \le p^m - 1$ where $j = j_0 p^0 + j_1 p^1 + \dots + j_{m-1} p^{m-1}$,

$$B_j = \hat{B}_{j_{m-1}} \otimes \dots \otimes \hat{B}_{j_1} \otimes \hat{B}_{j_0} \tag{30}$$

where

$$\hat{B}_{j_k} = P_p \hat{\Delta}_{j_k} P_p^T.$$
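Equation (30) can be checked directly for small parameters (for $p = 2$ the extra sign factors are trivial, and for odd $p$ the identity above absorbs them). The sketch below is our own helper code, using the conventions of Equations (28) and (29):

```python
import numpy as np
from math import comb

def pascal(p, n):
    """Pascal transform pair over GF(p): C = mu @ P, mu = C @ Q."""
    P = np.array([[comb(i, j) % p for i in range(n)] for j in range(n)])
    Q = np.array([[(-1) ** (j - i) * comb(j, i) % p for j in range(n)] for i in range(n)])
    return P, Q

def dual_kron_check(p, m):
    """Check B_j = Bhat_{j_{m-1}} x ... x Bhat_{j_0} mod p, with B_i = P Delta_i P^T."""
    n = p ** m
    Pn, Qn = pascal(p, n)
    Pp, Qp = pascal(p, p)
    Bhat = [Pp @ np.diag(Qp[:, i]) @ Pp.T % p for i in range(p)]
    for j in range(n):
        digits = [(j // p ** l) % p for l in range(m - 1, -1, -1)]  # j_{m-1}, ..., j_0
        B_kron = Bhat[digits[0]]
        for d in digits[1:]:
            B_kron = np.kron(B_kron, Bhat[d])
        B_direct = Pn @ np.diag(Qn[:, j]) @ Pn.T % p                # Eq. (29)
        assert np.array_equal(B_kron % p, B_direct)
    return True

print(dual_kron_check(2, 2), dual_kron_check(3, 2))
```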

In Section 4, we explained that, when applying $P_{p^m}$ as the transformation, the message vector $\mu = (\mu_0, \dots, \mu_{p^m-1})$ should have all components $\mu_j = 0$ whenever $w_p(j) > r$ (see Examples 4.9 and 4.10). To see how this formulation can lead to a decoding scheme, let us examine the case where $p = 2$, $m = 2$ and $r = 1$ (i.e., a 1st-order binary Reed-Muller code of length 4). Consider first using Equations (26) and (27) to calculate the Pascal convolution of the vectors $\mu = (\mu_0, \mu_1, \mu_2, \mu_3)$ and $\lambda = (\lambda_0, \lambda_1, \lambda_2, \lambda_3)$:

$$\begin{array}{ll} 00: & \gamma\_0 = \lambda\_0 C\_0 + \lambda\_1(0) + \lambda\_2(0) + \lambda\_3(0) \\ 01: & \gamma\_1 = \lambda\_0(C\_0 + C\_1) + \lambda\_1 C\_1 + \lambda\_2(0) + \lambda\_3(0) \\ 10: & \gamma\_2 = \lambda\_0(C\_0 + C\_2) + \lambda\_1(0) + \lambda\_2 C\_2 + \lambda\_3(0) \\ 11: & \gamma\_3 = \lambda\_0(\sum\_{i=0}^{3} C\_i) + \lambda\_1(C\_1 + C\_3) + \lambda\_2(C\_2 + C\_3) + \lambda\_3 C\_3 \end{array} \tag{31}$$

where the binary expansion of the $\gamma$ index has been explicitly written out at the beginning of each row. Next, consider Equations (26) and (28) to calculate the same convolution:

$$\begin{array}{ll} 00: & \gamma\_0 = \lambda\_0 \mu\_0 + \lambda\_1(0) + \lambda\_2(0) + \lambda\_3(0) \\ 01: & \gamma\_1 = \lambda\_0 \mu\_1 + \lambda\_1(\mu\_0 + \mu\_1) + \lambda\_2(0) + \lambda\_3(0) \\ 10: & \gamma\_2 = \lambda\_0 \mu\_2 + \lambda\_1(0) + \lambda\_2(\mu\_0 + \mu\_2) + \lambda\_3(0) \\ 11: & \gamma\_3 = \lambda\_0 \mu\_3 + \lambda\_1(\mu\_2 + \mu\_3) + \lambda\_2(\mu\_1 + \mu\_3) + \lambda\_3(\sum\_{i=0}^{3} \mu\_i) \end{array}$$

Since, for $P\_2(1, 2)$, $\mu = (\mu\_0, \mu\_1, \mu\_2, 0)$, this set of equations can be simplified as

$$\begin{array}{ll} 00: & \gamma\_0 = \lambda\_0 \mu\_0 + \lambda\_1(0) + \lambda\_2(0) + \lambda\_3(0) \\ 01: & \gamma\_1 = \boldsymbol{\lambda\_0 \mu\_1} + \lambda\_1(\mu\_0 + \mu\_1) + \lambda\_2(0) + \lambda\_3(0) \\ 10: & \gamma\_2 = \lambda\_0 \mu\_2 + \boldsymbol{\lambda\_1(0)} + \lambda\_2(\mu\_0 + \mu\_2) + \lambda\_3(0) \\ 11: & \gamma\_3 = \lambda\_0(0) + \lambda\_1 \mu\_2 + \boldsymbol{\lambda\_2 \mu\_1} + \lambda\_3(\mu\_0 + \mu\_1 + \mu\_2) \end{array} \tag{32}$$

Equations (31) and (32) must hold for *any* vector $\lambda$. Therefore, for a specific $\gamma\_j$, we can equate the coefficients of the $\lambda\_i$ in Equation (31) with those in Equation (32). So, for example, we end with the result that

$$\begin{aligned} \mu\_2 &= C\_0 + C\_2 \\ \mu\_2 &= C\_1 + C\_3 \end{aligned}$$

and

$$\begin{aligned} \mu\_1 &= C\_0 + C\_1 \\ \mu\_1 &= C\_2 + C\_3. \end{aligned}$$

For this first order $r = 1$ code, we can generate a set of checksums using a simple algorithm. Start at an index $i$ of $\gamma$ such that $w\_2(i) = 1$ and equate Equations (31) and (32) along a *diagonal path* in order to 'collect' all checksum equations associated with $\mu\_i$. For example, the bold symbols in Equation (32) generate the checksums for $\mu\_1$. It turns out that these diagonal equations actually generate what are known as the 'incidence vectors' of the MLD strategy (Blahut (2003); MacWilliams & Sloane (1977); Wicker (1994)).

We now provide an algorithm for $GF(p)$ to show how the Pascal convolution approach is equivalent to a typical MLD using finite Euclidean geometry (Wicker, 1994, pp. 155-165). The interesting aspect of this algorithm is that the Pascal convolution generates the correct checksums for *any* $GF(p)$. Consider a $P\_p(r, m)$ code where $C = \mu P\_{p^m}$ such that $\mu\_j = 0$ if $w\_p(j) > r$:

(0) Let $j = r$.

(1) Let $S\_j$ be the set of indices $i$ such that $w\_p(i) = j$.

(2) Apply Equation (27) to calculate $\gamma$.

(3) Apply Equation (28) to calculate $\gamma$ (these equations will simplify based upon which of the $\mu\_i$ are zero).

(4) For each $i \in S\_j$, start at $\lambda\_0$ associated with $\gamma\_i$ and construct checksum equations by equating the result in Step (2) with that of Step (3) along a *diagonal path* (i.e. - starting at $k = 0$, choose the coefficient of $\lambda\_k$ associated with $\gamma\_{i+k}$).

(5) For $i \in S\_j$, create estimates $\bar{\mu}\_i$ by a majority logic decision on the checksums.

(6) $j = j - 1$. If $j < 0$, stop.

(7) Remove the estimated components as:

$$\begin{aligned} \bar{C} &= \bar{\mu} P\_{p^m} \\ \hat{C} &\equiv C - \bar{C} \ (= (\mu - \bar{\mu}) P\_{p^m}). \end{aligned}$$

(8) Adjust $\mu$ to reflect the change in step (7) as follows. Construct a new vector $\tilde{\mu}$ where $\tilde{\mu}\_i = \mu\_i$ if $i \in S\_j$ and $\tilde{\mu}\_i = 0$ otherwise. Then let

$$
\hat{\mu} \equiv \mu - \tilde{\mu}.
$$

(9) Let $C = \hat{C}$ and $\mu = \hat{\mu}$ and go to Step (1).

As with typical MLD schemes, this algorithm starts with the highest order $r$ to obtain estimates of the code vector components and then successively estimates the lower order components.

**Example 6.1.** *Let $p = 3$, $m = 2$ and $r = 2$. Consider decoding a $P\_3(2, 2)$ code. From Example 4.10,*

$$
\mu = (\mu\_0, \mu\_1, \mu\_2, \mu\_3, \mu\_4, 0, \mu\_6, 0, 0).
$$

*Also, we know that $P\_3(2, 2)$ has $d\_{min} = 3$ implying that we can correct a single error. Therefore, we expect that the MLD equations should have at least three checksums.*

*(0) Start with $j = 2$.*

*(1) Let $S\_2 = \{2, 4, 6\}$ (i.e. - $i = i\_0 + i\_1 p$ such that $w\_3(i) = 2$).*

*(2,3,4) Rather than write out the equations for $\gamma\_i$, we summarize by equating the results of step (2) and step (3):*

*$i = 2$ :*

$$\begin{aligned} \mu\_2 &= c\_0 + c\_1 + c\_2 \\ \mu\_2 &= c\_3 + c\_4 + c\_5 \\ \mu\_2 &= c\_6 + c\_7 + c\_8 \end{aligned}$$

*$i = 4$ :*

$$\begin{aligned} \mu\_4 &= c\_0 + 2c\_1 + 2c\_3 + c\_4 \\ 2\mu\_4 &= 2c\_1 + c\_2 + c\_4 + 2c\_5 \\ 2\mu\_4 &= 2c\_3 + c\_4 + c\_6 + 2c\_7 \\ \mu\_4 &= c\_4 + 2c\_5 + 2c\_7 + c\_8 \end{aligned}$$

*$i = 6$ :*

$$\begin{aligned} \mu\_6 &= c\_0 + c\_3 + c\_6 \\ \mu\_6 &= c\_1 + c\_4 + c\_7 \\ \mu\_6 &= c\_2 + c\_5 + c\_8 \end{aligned}$$
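This excerpt does not restate which triangular form of the Pascal matrix $P\_{p^m}$ is used for encoding; assuming the upper-triangular binomial convention $U[i][j] = \binom{j}{i} \bmod p$ with $P\_{p^m}$ the $m$-fold Kronecker power of $U$ (an assumption on our part, chosen because it reproduces the checksums above), the $S\_2$ checksums can be verified numerically:

```python
import numpy as np
from math import comb

p, m = 3, 2
# Assumed convention: upper-triangular Pascal matrix U[i][j] = C(j, i) mod p,
# with P_{p^m} = U kron U and index i = i0 + i1*p (most-significant factor first).
U = np.array([[comb(j, i) % p for j in range(p)] for i in range(p)])
P9 = np.kron(U, U) % p

rng = np.random.default_rng(0)
mu = rng.integers(0, p, size=p**m)
mu[[5, 7, 8]] = 0            # P3(2,2): mu_j = 0 whenever w_3(j) > 2
c = mu @ P9 % p              # code vector C = mu * P_{p^m} over GF(3)

# i = 2 checksums
assert mu[2] == (c[0] + c[1] + c[2]) % p
assert mu[2] == (c[3] + c[4] + c[5]) % p
assert mu[2] == (c[6] + c[7] + c[8]) % p
# i = 4 checksums
assert mu[4] == (c[0] + 2*c[1] + 2*c[3] + c[4]) % p
assert (2*mu[4]) % p == (2*c[1] + c[2] + c[4] + 2*c[5]) % p
assert (2*mu[4]) % p == (2*c[3] + c[4] + c[6] + 2*c[7]) % p
assert mu[4] == (c[4] + 2*c[5] + 2*c[7] + c[8]) % p
# i = 6 checksums
assert mu[6] == (c[0] + c[3] + c[6]) % p
assert mu[6] == (c[1] + c[4] + c[7]) % p
assert mu[6] == (c[2] + c[5] + c[8]) % p
```

The assertions hold for any message with $\mu\_5 = \mu\_7 = \mu\_8 = 0$, matching the algebra of the equated convolution coefficients.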



*After estimating the message components dictated by $S\_2$ (step (5)), remove the code estimates from $C$ (step (7)) and begin work on $S\_1$ where now (step (8)) $\mu\_i = 0$ if $w\_p(i) > 1$. For $S\_1$, we have the checksums:*

*$i = 1$ :*

$$\begin{aligned} \mu\_1 &= 2c\_0 + c\_1 \\ 2\mu\_1 &= c\_1 + 2c\_2 \\ \mu\_1 &= 2c\_3 + c\_4 \\ 2\mu\_1 &= c\_4 + 2c\_5 \\ \mu\_1 &= 2c\_6 + c\_7 \\ 2\mu\_1 &= c\_7 + 2c\_8 \end{aligned}$$

*$i = 3$ :*

$$\begin{aligned} \mu\_3 &= 2c\_0 + c\_3 \\ \mu\_3 &= 2c\_1 + c\_4 \\ \mu\_3 &= 2c\_2 + c\_5 \\ 2\mu\_3 &= c\_3 + 2c\_6 \\ 2\mu\_3 &= c\_4 + 2c\_7 \\ 2\mu\_3 &= c\_5 + 2c\_8 \end{aligned}$$

*After estimating the message components dictated by $S\_1$, once again, remove the code estimates from $C$ and begin work on $S\_0$ where now $\mu\_i = 0$ if $w\_p(i) > 0$. At this stage, with all other components of $\mu$ equal to zero except $\mu\_0$, we are left with $\mu = C$ (i.e. - nine estimates of $\mu\_0$).*

## **7. Conclusions**

When considering the design of error control codes, it is interesting to look for guiding principles that can account for whole classes of codes. In this presentation, we have shown how the GFFT convolution approach to Reed-Solomon codes can be extended to other classes of codes such as generalized Reed-Muller codes.

| Code | Convolution Principle | Decoding Strategy |
|------|-----------------------|-------------------|
| Reed-Solomon | GFFT-based | iterative |
| GRM | generalized | iterative |

Table 1. Comparison of Fourier and generalized convolution techniques.

Instead of applying a Fourier matrix to encode the message, we have applied a Pascal matrix and extended the convolution theorem over finite fields. In doing so, we have observed that this formulation leads to the well-known majority logic decoding algorithm. Additional investigations have also considered codes in the context of the wavelet transform (Sakk & Wicker (2003)). The block codes addressed in this chapter have been shown to lend themselves to graph-based iterative decoding strategies (see Table 1). The results derived above suggest that the generalized convolution approach is useful for understanding the systematic introduction of redundancy for the sake of error control.

## **8. References**

Ahmed, N., Rao, K. & Abdussattar, A. (1973). On cyclic autocorrelation and the Walsh-Hadamard transform, *IEEE Transactions on Electromagnetic Compatibility* 18: 141-146.

Blahut, R. (2003). *Algebraic Codes for Data Transmission*, Cambridge University Press.

Blahut, R. & Burrus, C. (1991). *Algebraic Methods for Signal Processing and Communications Coding*, Springer.

Burrus, C., Gopinath, R. & Guo, H. (1998). *Introduction to Wavelets and Wavelet Transforms*, Prentice-Hall, NJ.

Caire, G., Grossman, R. & Poor, H. (1993). Wavelet transforms and associated finite cyclic groups, *IEEE Transactions on Information Theory* 39: 1157-1166.

Call, G. & Velleman, D. (1993). Pascal's matrices, *American Mathematical Monthly* 100: 372-376.

Dodd, M. (2003). *Applications of the Discrete Fourier Transform in Information Theory and Cryptology*, PhD thesis, Royal Holloway and Bedford New College, University of London.

Forney, G. D. (1988). Coset codes - Part II: Binary lattices and related codes, *IEEE Transactions on Information Theory* 34: 1152-1187.

Heller, S. (1963). Inverse of triangular matrix, *American Mathematical Monthly* 70: 334.

Kou, Y., Lin, S. & Fossorier, M. P. C. (2001). Low-density parity-check codes based on finite geometries: A rediscovery and new results, *IEEE Transactions on Information Theory* 47: 2711-2736.

Li, G., Li, D., Wang, Y. & Sun, W. (2010). Hybrid decoding of finite geometry low-density parity-check codes, *IET Communications* 4(10): 1238-1246.

Lin, S. & Costello, D. (1983). *Error Control Coding: Fundamentals and Applications*, Prentice-Hall, New York.

Liu, Z. & Pados, D. A. (2005). Decoding algorithm for finite-geometry LDPC codes, *IEEE Transactions on Communications* 53: 415-421.

MacWilliams, F. J. & Sloane, N. J. A. (1977). *The Theory of Error-Correcting Codes*, North-Holland, Amsterdam.

Massey, J. L., Costello, D. J. & Justesen, J. (1973). Polynomial weights and code constructions, *IEEE Transactions on Information Theory* 19: 101-110.

Ngatched, T. M. N., Takawira, F. & Bossert, M. (2009). An improved decoding algorithm for finite-geometry LDPC codes, *IEEE Transactions on Communications* 57: 302-306.

Pusane, A. E., Smarandache, R., Vontobel, P. O. & Costello, D. J. (2011). Deriving good LDPC convolutional codes from LDPC block codes, *IEEE Transactions on Information Theory* 57: 835-857.

Robinson, G. (1972). Logical convolution and discrete Walsh and Fourier power spectra, *IEEE Transactions on Audio and Electroacoustics* 20: 271-280.

Sakk, E. (2002). *Wavelet Packet Formulation of Generalized Reed Muller Codes*, PhD thesis, Cornell University, Ithaca, NY.

Vontobel, P. O., Smarandache, R., Kiyavash, N., Teutsch, J. & Vukobratovic, D. (2005). On the minimal pseudocodewords of codes from finite geometries, *Proc. IEEE Int. Symp. Inf. Theory*, Adelaide, Australia.


**10**

**Application of the Weighted Energy Method in the Partial Fourier Space to Linearized Viscous Conservation Laws with Non-Convex Condition**

Yoshihiro Ueda

*Faculty of Maritime Sciences, Kobe University, Japan*

**1. Introduction**

In this present chapter, we are concerned with the half space problem for the viscous conservation laws:

$$u\_t - \Delta u + \nabla \cdot f(u) = 0, \tag{1.1}$$

$$u(0, x', t) = u\_b, \tag{1.2}$$

$$u(x, 0) = u\_0(x). \tag{1.3}$$

Here $x = (x\_1, \cdots, x\_n)$ is the space variable in the half space $\mathbf{R}^n\_+ = \mathbf{R}\_+ \times \mathbf{R}^{n-1}$ with $n \ge 2$; we sometimes write $x = (x\_1, x')$ with $x\_1 \in \mathbf{R}\_+$ and $x' = (x\_2, \cdots, x\_n) \in \mathbf{R}^{n-1}$; $u(x, t)$ is the unknown function, $u\_0(x)$ is the initial data satisfying

$$u\_0(x) \to 0 \quad \text{as} \quad x\_1 \to \infty,$$

and $u\_b$ is the boundary data (assumed to be a constant) with $u\_b < 0$; $f(u) = (f\_1(u), \cdots, f\_n(u))$ is a smooth function of $u \in \mathbf{R}$ with values in $\mathbf{R}^n$ and satisfies

$$f\_1(0) = 0, \quad f\_1(u) > f\_1(0) \ (= 0) \tag{1.4}$$

for $u \in [u\_b, 0)$. Here we note that the condition (1.4) is the necessary condition for the existence of the planar stationary waves (for the detail, see Section 2.2).

As you know, the energy method in the Fourier space is useful in deriving the decay estimates for problems in the whole space $\mathbf{R}^n$. Recently, the author studied half space problems in $\mathbf{R}^n\_+ = \mathbf{R}\_+ \times \mathbf{R}^{n-1}$ and developed the energy method in the partial Fourier space obtained by taking the Fourier transform with respect to the tangential variable $\mathbf{R}^{n-1}$. Then the author applied this energy method to the half space problem for linearized viscous conservation laws with convex condition and proved the asymptotic stability of planar stationary waves by showing a sharp convergence rate for $t \to \infty$ (see, [14]).

In this chapter, we consider the half space problem for linearized viscous conservation laws with non-convex condition, and derive the asymptotic stability of planar stationary waves and the corresponding convergence rate. Our proof is based on the energy method in the partial Fourier space with the anti-derivative method. We emphasize that

