**6. Data throughput quantification**

The data throughput, T, is given as [24]:

$$T = R(1 - P_e)^N \tag{14}$$


*A New Cross-Layer FPGA-Based Security Scheme for Wireless Networks*
*DOI: http://dx.doi.org/10.5772/intechopen.82390*


where Pe is the bit error probability, N is the number of bits in the block length, and R is a fixed transmission rate for the frames. For Pe ≪ 1, the throughput can be approximated as

$$T \cong R(1 - NP_e) \tag{15}$$

From Eq. (15) it can be seen that, for a fixed transmission rate R, the throughput T can be increased by minimizing either N or Pe. In this section, it will be shown how convolutional coding can be used to achieve both conditions, through orthogonal signaling and forward error correction, respectively.
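The effect of Eqs. (14) and (15) can be checked with a short numerical sketch; the rate, block length, and error probability below are arbitrary illustrative values, not figures from this chapter.

```python
# Numerical sketch of Eqs. (14) and (15). R, N and Pe are arbitrary
# illustrative values (1 Mbit/s, 100-bit blocks, Pe = 1e-5), not figures
# taken from the chapter.

def throughput_exact(R, Pe, N):
    """Eq. (14): T = R(1 - Pe)^N."""
    return R * (1 - Pe) ** N

def throughput_approx(R, Pe, N):
    """Eq. (15): T = R(1 - N*Pe), valid for Pe << 1."""
    return R * (1 - N * Pe)

R, N, Pe = 1.0e6, 100, 1.0e-5
exact = throughput_exact(R, Pe, N)    # slightly below R
approx = throughput_approx(R, Pe, N)  # agrees closely with the exact value
```

Halving N (or Pe) moves the throughput correspondingly closer to R, which is the trade-off exploited in the remainder of this section.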

#### **6.1 Coded orthogonal signaling**

It is shown in [16, 25] that, for coded orthogonal signaling, the bit error probability for transmitting k-bit symbols is bounded as follows:

$$P_b \leq \frac{2^{k-1}}{2^k - 1} \sum_{d=3}^{\infty} a_d \left[\frac{4\left(1 + \frac{k}{L}\overline{\gamma}_b\right)}{\left(2 + \frac{k}{L}\overline{\gamma}_b\right)^2}\right]^{L} \tag{16}$$

where ad denotes the number of paths at distance d from the all-zero path that merge with the all-zero path for the first time, and dfree, equal to 3 in this case, is the minimum free distance of the code. dfree is also equal to the diversity, L. *γ*<sub>b</sub> is the average signal-to-noise ratio (SNR) per bit [25]. For each convolutional code, the transfer function is obtained and the sum of the coefficients {ad} is calculated.

For illustrative purposes, the transfer function, T(D), of smaller convolutional codes such as the (2, 2, 2) and (4, 4, 2) codes will be used.

The transfer function T(D) for the (2, 2, 2) code is given as follows [16]:

$$T(D) = D^3 + 2D^4 + \cdots \tag{17}$$

The transfer function for the (4, 4, 2) code is given as follows [16]:

$$T(D) = D^3 + 2D^4 + 3D^5 + 5D^6 + 9D^7 + 16D^8 + 28D^9 + 49D^{10} + 85D^{11} + \cdots \tag{18}$$

For both the (2, 2, 2) code and the (4, 4, 2) code, dfree = L = 3. Using the values of L and {ad}, the probability of a binary digit error, Pb, as a function of the SNR per bit, *γ*<sub>b</sub>, is shown in **Figure 7** for k = 2 and 4 [16].
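Under this reading of Eq. (16), the bound is straightforward to evaluate. The sketch below uses the {ad} coefficients of the (4, 4, 2) code read off Eq. (18), truncated at D<sup>11</sup> where the series itself is truncated; the function name and sample SNR values are illustrative assumptions, not values from the chapter.

```python
# {a_d} coefficients of the (4,4,2) code, read off Eq. (18):
# T(D) = D^3 + 2D^4 + 3D^5 + 5D^6 + 9D^7 + 16D^8 + 28D^9 + 49D^10 + 85D^11 + ...
a_d = {3: 1, 4: 2, 5: 3, 6: 5, 7: 9, 8: 16, 9: 28, 10: 49, 11: 85}

def pb_bound(k, L, gamma_b, coeffs):
    """Evaluate the upper bound of Eq. (16) on the bit error probability."""
    x = (k / L) * gamma_b                 # average SNR per diversity channel
    bracket = 4 * (1 + x) / (2 + x) ** 2  # decreases monotonically with SNR
    return (2 ** (k - 1) / (2 ** k - 1)) * sum(coeffs.values()) * bracket ** L

sum_ad = sum(a_d.values())  # sum of the coefficients {a_d} of the truncated series
```

The lowest power present in the coefficient table recovers dfree = 3, and the bracketed factor falls with increasing SNR per bit, so the bound tightens as *γ*<sub>b</sub> grows.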

The curves illustrate that the error probability increases with an increase in k for the same value of SNR. Hence, better performance for wireless transmission should involve lower-order codes and many independent parallel channels rather than higher-order codes with fewer independent parallel channels. High data throughput can therefore be attained by using a small number of bits in the block length, N.

#### **6.2 Forward error correction (FEC) code**

The Viterbi algorithm [25] is the most extensively used decoding algorithm for convolutional codes and has been widely deployed for forward error correction in wireless communication systems. In this sub-section, the Viterbi algorithm will be applied to the non-linear convolutional code. The constraint length, L, of an (n, k, m) convolutional code is given as L = k(m - 1). The constraint length is essential in convolutional encoding, since the trellis diagram, which gives the best representation of the encoding process, is fully populated after L bits. Hence, to encode blocks of n bits, each block has to be terminated with L zeros before encoding.
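The termination rule can be sketched as follows; the helper names are illustrative, and the (4, 2, 3) parameters are those used in the worked example of Section 6.2.1.

```python
# Sketch of the block-termination rule described above: for an (n, k, m)
# convolutional code, L = k(m - 1) zeros are appended to each block so the
# trellis returns to the all-zero state. The (4, 2, 3) parameters match
# the worked example in Section 6.2.1; the helper names are illustrative.

def constraint_length(k, m):
    """Constraint length L = k(m - 1) of an (n, k, m) convolutional code."""
    return k * (m - 1)

def terminate(block, k, m):
    """Append L zeros to a bit-string block before encoding."""
    return block + "0" * constraint_length(k, m)

L = constraint_length(k=2, m=3)        # 4 for the (4, 2, 3) code
M_prime = terminate("1011", k=2, m=3)  # "10110000", the padded message M'
```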

**Figure 7.** *Performance of coded orthogonal signaling for k = 2 and k = 4.*

*Computer and Network Security*

… implemented at the physical layer. It is worth noting that the security level could be much higher than the values displayed in **Table 7** if the S-boxes were implemented using 4-bit and 8-bit shuffling instead of the aforementioned 2-bit shuffling.

| Operand key length | Total number of steps (N = 2) | Total number of steps (N = 3) | Total number of steps (N = 4) |
|---|---|---|---|
| 16-bit | 2.1 × 10<sup>57</sup> | 9.6 × 10<sup>85</sup> | 4.4 × 10<sup>114</sup> |
| 32-bit | 1.4 × 10<sup>57</sup> | 5.2 × 10<sup>85</sup> | 1.9 × 10<sup>114</sup> |
| 64-bit | 1.5 × 10<sup>62</sup> | 1.8 × 10<sup>93</sup> | 2.3 × 10<sup>124</sup> |
| 128-bit | 6.1 × 10<sup>71</sup> | 4.8 × 10<sup>107</sup> | 3.7 × 10<sup>143</sup> |
| 256-bit | 9.87 × 10<sup>87</sup> | 9.8 × 10<sup>131</sup> | 9.7 × 10<sup>175</sup> |
| 512-bit | 2.68 × 10<sup>112</sup> | 4.4 × 10<sup>168</sup> | 7.2 × 10<sup>224</sup> |

**Table 7.**
*Number of steps required to break the cross-layer security scheme.*

**Figure 8.** *2-stage non-linear (4,2,3) convolutional code.*

For illustrative purposes, a non-linear (4,2,3) convolutional code will be used to demonstrate encoding and Viterbi decoding. A possible non-linear (4,2,3) convolutional code showing mod-2 connections and the product cipher is shown in **Figure 8**.

#### *6.2.1 Example: encode/decode the message M = 1011*

• Encoding process

The constraint length, L = k(m-1) = 2(3–1) = 4.

Hence, 4 zeros will be appended to message M before encoding. The modified message becomes M' = 10110000. The transition tables in the appendix are used to encode the modified message.

a. Using transition tables in appendix, the transmitted sequence from the 1st stage is given as Tin = 10 01 01 11

b. S-box output is given as S = 00 11 11 01

c. P-box output is given as P = 00 11 11 10

d. Transmitted sequence into the 2nd stage is given as P = 00 11 11 10

e. Using the transition tables in the appendix, the final transmitted sequence, which is the output bits from the 2nd stage, is given as Tout = 0000 1111 0101 1001

• Viterbi decoding process

In performing the Viterbi algorithm, a bit in the sequence Tout will be altered. Let the received sequence be TR = 1000 1111 0101 1001 instead of Tout = 0000 1111 0101 1001. The Viterbi algorithm applied to the 2nd stage is summarized in **Table 8**.

**Table 8.** *Viterbi algorithm applied to 2nd stage of (4,2,3) code.*

The bits above the arrows will constitute the retrieved sequence from the 2nd stage. Hence, the retrieved sequence is given as R1 = 00 11 11 10. This sequence is fed to the P-box.

• P-box output is given as P1 = 00 11 11 01. Sequence P1 is fed to the S-box.

• S-box output is given as S1 = 10 01 01 11

Sequence S1 is fed into the 1st stage to retrieve the final correct message. The Viterbi algorithm applied to the 1st stage is summarized in **Table 9**.

**Table 9.** *Viterbi algorithm applied to 1st stage of (4,2,3) code.*

For a good trellis, the final state is the all-zero state, as seen in the winning path in **Table 9**. The final received sequence is identical to the original transmitted message, M' = Rfinal = 10110000, despite the first-bit error. Hence, using the non-linear convolutional code, the error bit was identified and corrected. The forward error correction capability therefore enhances throughput, since the bit error rate, Pe, is reduced.
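The chapter's non-linear (4,2,3) code depends on transition tables given only in its appendix, so it cannot be reproduced here. As a self-contained illustration of the same decode-and-correct principle, the sketch below runs Hamming-metric Viterbi decoding over a standard linear rate-1/2, constraint-length-3 convolutional code (octal generators 7 and 5); all names and parameter choices are illustrative, not the chapter's.

```python
# Illustrative Viterbi decoding with a single-bit error, mirroring the
# walkthrough above but on a standard linear rate-1/2, K=3 code
# (octal generators 7 and 5), not the chapter's non-linear (4,2,3) code.

def outputs(reg, gens):
    """Parity of the tapped register bits for each generator polynomial."""
    return [bin(reg & g).count("1") % 2 for g in gens]

def encode(bits, K=3, gens=(0b111, 0b101)):
    """Shift-register encoder; the input is terminated with K-1 zeros."""
    reg, out = 0, []
    for b in bits + [0] * (K - 1):
        reg = ((reg << 1) | b) & ((1 << K) - 1)
        out.extend(outputs(reg, gens))
    return out

def viterbi(received, K=3, gens=(0b111, 0b101)):
    """Hamming-metric Viterbi decoder; returns the most likely input bits."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)   # the encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), len(gens)):
        r = received[i:i + len(gens)]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = ((s << 1) | b) & ((1 << K) - 1)
                ns = reg & (n_states - 1)
                m = metric[s] + sum(x != y for x, y in zip(outputs(reg, gens), r))
                if m < new_metric[ns]:
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    survivor = paths[0]                     # terminated trellis ends in state 0
    return survivor[:len(survivor) - (K - 1)]

message = [1, 0, 1, 1]
tx = encode(message)
rx = tx.copy()
rx[0] ^= 1                 # flip the first transmitted bit, analogous to TR above
decoded = viterbi(rx)      # the flipped bit is identified and corrected
```

As in the example above, terminating the block forces the survivor path to end in the all-zero state, which is what allows the single flipped bit to be identified and corrected.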
